A little ignorance can be a good thing

Picture a city where most drivers use the same navigation app. At 9 am, the app says one side street is the quickest shortcut from area A to area B. Thousands of commuters follow this suggestion and drive through that street, and soon it is jammed with traffic. As a result, the route the app had flagged as a shortcut only minutes earlier becomes undesirable, even as nearby streets remain underused.

Now imagine a small change. Say some drivers don’t fully trust the label “fast”, or they see slightly different estimates, so they spread out across several streets. As a result, no single street bears the entire traffic load, and in turn the average travel time (across all commuters) falls even though no road has been widened and no signal has been reconfigured.

According to a preprint paper uploaded to arXiv in March, this limited ignorance on the drivers’ part can effectively reduce the congestion created by fully informed yet selfish choices.

This picture of traffic will be familiar to anyone who has navigated a public space in India, where seemingly small selfish choices add up to inconveniences that affect many people. Motorists spill into junctions when the signal is red, with motorcyclists in particular clogging the sidewalk and even blocking the path of oncoming traffic. Commuters rush at bus and train doors rather than queuing up, slowing the boarding process and rendering existing queues useless. Families reserve extra seats or crowd baggage carousels, stalling others. Each act yields a momentary private gain, but together they congest and frustrate.

Curiously, the new study posits that if people who route themselves through a crowded network know a little less, the whole system might work a little better.

Prior work has already shown that when every user pursues the quickest route for themselves, the system settles into a state that isn’t globally optimal. The study quantified this shortfall with the “price of anarchy”: the ratio of the average travel time under selfish routing to that under the best possible coordinated routing. The authors — all from Ohio State University — also defined a parallel idea they called the “price of ignorance”, which tracked how outcomes changed when users were uncertain about the nature of the network’s links.
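To see the price of anarchy in concrete numbers, consider the classic two-road example due to the economist Arthur Pigou (a textbook illustration, not the paper’s own network): one road always takes an hour, while the other takes x hours when a fraction x of all drivers use it. The short Python sketch below works out the ratio:

```python
# Pigou's classic "price of anarchy" example (a textbook illustration,
# not the paper's lattice model): one road always takes 1 hour; the
# other takes x hours when a fraction x of all drivers use it.

def average_time(x: float) -> float:
    """Average travel time when a fraction x takes the variable road."""
    return x * x + (1 - x) * 1.0   # x drivers at cost x, the rest at cost 1

# Selfish equilibrium: the variable road never costs more than 1 hour,
# so every self-interested driver takes it and the average is 1 hour.
selfish = average_time(1.0)

# Social optimum: a planner splits traffic to minimise the average.
optimal = min(average_time(x / 100) for x in range(101))

print(selfish / optimal)   # price of anarchy = 4/3 in this example
```

Here the planner’s best split is half the drivers on each road, for an average of 45 minutes, so selfish routing makes everyone a third worse off.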

Next, the model mixed two kinds of links. “Slow” links had a fixed travel time that didn’t change with traffic. “Fast” links were faster when they were underused but slowed down linearly as more users piled on. The model assumed that users didn’t know for sure which kind of link they were facing; instead, they planned their route using a perceived cost that blended the two possibilities. The authors ‘measured’ the users’ ignorance using a single parameter denoted α. If α = 0, the users had complete knowledge of every link; if α = 1, they were completely ignorant. The more ignorant the users, the closer they came to believing all routes were equally suitable.
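In code, such a perceived cost might look like the sketch below. To be clear, the blending rule and the cost shapes here are assumptions made for illustration; the paper defines its own forms:

```python
# A hypothetical sketch of a perceived link cost under partial ignorance.
# The blending rule and cost shapes are assumptions, not the paper's
# definitions: a "fast" link costs a*x when it carries traffic x, a
# "slow" link costs a fixed c, and a link is "fast" with probability p.

def true_cost(is_fast: bool, x: float, a: float = 1.0, c: float = 5.0) -> float:
    """Actual travel time on a link carrying traffic x."""
    return a * x if is_fast else c

def perceived_cost(is_fast: bool, x: float, alpha: float, p: float,
                   a: float = 1.0, c: float = 5.0) -> float:
    """Cost a partially ignorant user plans with: with weight (1 - alpha)
    they see the link's true cost; with weight alpha they fall back on
    the average over link types. So alpha = 0 is complete knowledge and
    alpha = 1 makes every link (at a given load) look the same."""
    expected = p * (a * x) + (1 - p) * c   # cost if the type were unknown
    return (1 - alpha) * true_cost(is_fast, x, a, c) + alpha * expected
```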

The network took the form of a directed square lattice; ‘directed’ means users could only move through it in a fixed way, from left to right. Each link between two points in the lattice was “fast” with probability p and “slow” with probability 1 – p. Users chose the routes that they believed would minimise their travel time. The authors evaluated the true average travel time, compared it to the α = 0 case, and defined the “price of ignorance” as the ratio of the average travel time with ignorance to that without.
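The sketch below assembles these pieces into a toy simulation. It is only indicative: users are routed one after another on their perceived costs (a greedy stand-in for the true equilibrium the paper analyses), the grid is small, and the cost values are the assumptions carried over from the sketch above:

```python
# A toy estimate of the "price of ignorance" on a directed lattice.
# Illustrative only: not the paper's method or parameters.
import random
import networkx as nx

SLOW_COST = 5.0   # fixed travel time of a "slow" link (assumed value)

def build_lattice(n: int, p: float, rng: random.Random) -> nx.DiGraph:
    """An n-by-n grid whose edges point right and down, so users can
    only move one way through it; each edge is 'fast' w.p. p."""
    g = nx.DiGraph()
    for i in range(n):
        for j in range(n):
            for nbr in [(i + 1, j), (i, j + 1)]:
                if nbr[0] < n and nbr[1] < n:
                    g.add_edge((i, j), nbr, fast=rng.random() < p, load=0)
    return g

def mean_travel_time(n: int, p: float, alpha: float,
                     users: int = 200, seed: int = 0) -> float:
    rng = random.Random(seed)
    g = build_lattice(n, p, rng)

    def perceived(u, v, d):
        true = (1.0 + d["load"]) if d["fast"] else SLOW_COST
        blind = p * (1.0 + d["load"]) + (1 - p) * SLOW_COST
        return (1 - alpha) * true + alpha * blind

    paths = []
    for _ in range(users):   # each user picks the cheapest perceived route
        path = nx.shortest_path(g, (0, 0), (n - 1, n - 1), weight=perceived)
        paths.append(path)
        for u, v in zip(path, path[1:]):
            g[u][v]["load"] += 1
    # Everyone then travels at the final loads; average the true times.
    total = sum(
        (1.0 + g[u][v]["load"]) if g[u][v]["fast"] else SLOW_COST
        for path in paths for u, v in zip(path, path[1:])
    )
    return total / users

base = mean_travel_time(8, 0.5, alpha=0.0)   # fully informed users
for alpha in [1 / 3, 2 / 3, 1.0]:
    pi = mean_travel_time(8, 0.5, alpha=alpha) / base
    print(f"alpha = {alpha:.2f}: estimated price of ignorance {pi:.2f}")
```

A ratio below 1 means ignorance helped on this particular toy network; the paper’s result concerns the true equilibrium, so numbers from this greedy approximation should be read as indicative at best.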

The main finding was stark: a small amount of ignorance always helped. For every composition of the network, increasing ignorance from 0 reduced the average travel time up to a particular threshold. The paper proved analytically that for every probability of a link being “fast” (i.e. for all values of p), any α ≤ 2/3 guaranteed the price of ignorance was at most 1. That is to say, a limited ‘amount’ of ignorance eased the traffic by diverting it away from the most tempting “fast” links. Because ignorance made “fast” links look slower than they really were and “slow” links look faster, the traffic spread more evenly across the network. This reduced congestion on the fast links — which is good.

The team also found a special sweet spot, so to speak. Say the network has a tipping point between “not enough fast links” and “plenty of fast links”. Near this tipping point, if users’ ignorance is around 2/3, their self-chosen routes spread out in just the right way. The result matches the best possible routing that a planner might pick. Put another way: imagine some links are marked “fast” and others are marked “slow”. If everyone fully trusted the labels, most people would chase the same link and eventually clog it. But if people trusted the labels only partly (as implied by α = 2/3), some would choose alternative links nearby. This small hesitation thus spreads traffic across several routes.

Alas, ignorance beyond the helpful range eventually starts to hurt. When people have no idea which links are fast or slow, they spread out almost evenly. That sounds fair but it also ‘wastes’ good options. Even in cities with many quicker routes, some travellers drift to slower side streets or paths that fizzle out, so the average travel time rises. This waste grows further in larger networks. Thus there is a sort of separatrix between “helpful doubt” and “harmful cluelessness”. If fast links are scarce, a planner can tolerate more doubt before the network’s performance drops. If fast routes are plentiful, on the other hand, only near-total cluelessness can cause harm. That is, in very large and well-served networks, things go bad only when people are almost completely in the dark.

To be clear, ignorance in the study didn’t mean carelessness or lack of effort. Users still knew the map, could see congestion, and chose the route that looked best to them. What they didn’t know for sure was which links were actually quicker and which were actually slower. This specific kind of ignorance had two virtues: first, it kept users from overreacting to “fast” labels, so a small subset of links wasn’t overloaded; second, it led different users to make different routing decisions even when the same links looked attractive, keeping them from clogging those links in a coordinated rush.

These virtues might sound familiar if you’ve encountered other parts of physics. In statistical physics, for example, adding noise to a weak signal can help it cross a detection threshold, a phenomenon called stochastic resonance. If the amount of this noise is just right, it can improve the system’s response. A familiar example is in hearing assistance. Some hearing aids add a soft, random ‘hiss’ under speech. On its own, this hiss is too weak to notice — but for listeners with mild hearing loss, it helps small sound cues, like faint consonants in Hindi, cross the ear’s detection threshold more reliably. Thus speech becomes somewhat clearer at lower volumes or in quiet rooms.
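A toy version of stochastic resonance fits in a few lines of Python. All the numbers here are illustrative assumptions: a sine wave whose peak sits below a detector’s threshold is invisible without noise, becomes detectable with a moderate amount, and is swamped when there is too much:

```python
# A minimal stochastic-resonance sketch (illustrative values throughout):
# a sine wave peaking at 0.8 never crosses a detector threshold of 1.0
# on its own. With noise added, crossings appear and cluster where the
# signal is high - up to a point, after which noise drowns the signal.
import math
import random

def signal_aligned_crossings(noise_std: float, threshold: float = 1.0,
                             amplitude: float = 0.8, samples: int = 20_000,
                             seed: int = 0) -> int:
    """Crossings during the signal's positive half-cycle minus crossings
    during its negative half-cycle: a crude 'informativeness' score."""
    rng = random.Random(seed)
    score = 0
    for t in range(samples):
        s = amplitude * math.sin(2 * math.pi * t / 100)
        if s + rng.gauss(0, noise_std) > threshold:
            score += 1 if s > 0 else -1
    return score

for std in [0.0, 0.2, 0.5, 1.0, 5.0]:
    print(f"noise std {std}: score {signal_aligned_crossings(std)}")
```

The score starts at zero without noise, rises as moderate noise lets the signal’s peaks poke above the threshold, and falls again once crossings happen indiscriminately.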

In ecology and evolution, some plants hedge their bets by letting their seeds germinate across multiple seasons, since no single season is guaranteed to be good. Similarly, in algorithms and machine learning, a bit of uncertainty can make models work better. During training, a program can turn off some parts of a neural network at random so the model doesn’t simply memorise patterns in the data. Small, carefully added noise in the training labels can have a similar effect. In reinforcement learning, letting the program try some actions at random can keep it from getting stuck on a strategy that looks good early but isn’t actually the best.
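The reinforcement-learning point is easy to demonstrate with the standard “epsilon-greedy” recipe on a toy slot-machine problem; the arm values and parameters below are made up for illustration:

```python
# Epsilon-greedy exploration on a toy multi-armed bandit: with
# probability epsilon the agent tries a random arm instead of the one
# that currently looks best, which keeps it from locking onto an option
# that merely looked good early on. All values here are illustrative.
import random

def epsilon_greedy(true_means, epsilon, pulls=5_000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))   # explore at random
        else:                                      # exploit the best guess
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)   # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / pulls

arms = [0.2, 0.5, 0.9]   # the third arm is actually the best
for eps in [0.0, 0.1, 0.5]:
    print(f"epsilon {eps}: average reward {epsilon_greedy(arms, eps):.3f}")
```

A purely greedy agent (epsilon 0) can settle on a mediocre arm after a few unlucky pulls, while a little randomness typically finds the best arm; too much randomness, like too much ignorance in the traffic model, wastes pulls on options already known to be poor.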

And in behavioural game theory, people don’t always pick the mathematically ‘best’ move; they pick a pretty good move most of the time. This can alleviate crowding because not everyone chases the same option at once. A similar idea can help in clinics: if a sign or an app always says “counter 3 is fastest”, everyone might rush there and block it. If the app instead assigned people to counters with a little randomness, no single counter would be swamped.

The overall lesson isn’t that ignorance is good in itself but that perfect certainty can produce brittle, crowded choices in systems with congestion or competition. A carefully controlled amount of uncertainty can instead spread the load and pull the system as a whole away from the state produced by selfish dynamics and towards the social optimum.