This is a running list of open-ended questions I've been reflecting on. Some are technical, some philosophical, and all relate to the practice of forecasting. These questions often emerge from my own forecasting experience or while reading papers and blogs. While I don't yet have definitive answers, I believe grappling with them is valuable in itself.
This question struck me recently: What’s the real difference between conventional wisdom and the wisdom of crowds? At first glance, they seem closely related - both are, after all, aggregations of what people believe. But I think there's a crucial distinction, and it becomes clearer when we consider the key ingredients of what makes collective intelligence actually intelligent. In fact, we can break it down along three major lines.
Conventional wisdom is often a product of repetition across generations. Its strength lies in familiarity, not necessarily in truth. The problem? A lack of independent thinking. Over time, people stop questioning the original assumptions and begin to echo what's widely accepted. Crowd forecasting, on the other hand, depends on independence: each forecaster's judgment is made in light of evidence, not in deference to tradition. This independence is what gives the aggregate its predictive power.
NOTE: This holds at least in principle. I've seen plenty of questions where crowds extremize - confidently pushing toward 0% or 100% - without sufficient reasoning or evidence. In those cases, forecasting starts to resemble a form of local conventional wisdom, shaped more by groupthink than genuine insight. So while independence is a cornerstone of good forecasting, it's also a fragile one.
Conventional wisdom tends to lack diversity. It’s shaped by dominant narratives, cultural norms, and social reinforcement. Once a belief becomes “common sense,” dissenting perspectives are either ignored or ridiculed. Crowd forecasting, at least in principle, thrives on diversity. You want people from different backgrounds, with different priors, using different mental models to arrive at their predictions. The beauty lies in the balance: overestimates cancel out underestimates, optimism offsets pessimism. The collective judgment, when properly aggregated, can land closer to the truth than any individual guess.
Conventional wisdom spreads through vague social transmission - conversations, media, traditions. Beliefs accumulate not because they've been tested, but because they've been repeated. Crowd forecasting, by contrast, uses explicit aggregation methods - averaging probabilities, scoring forecasts, applying Bayesian updates. I am not saying this is foolproof, but at least you can convince yourself that the result is grounded in some form of evidence or truth.
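To make that contrast concrete, here is a minimal sketch of two common aggregation rules applied to a handful of made-up individual forecasts. Both the numbers and the choice of rules (a simple linear pool and the geometric mean of odds) are illustrative assumptions on my part, not a recommendation.

```python
import math

# Hypothetical individual probability forecasts for a single binary question.
forecasts = [0.60, 0.72, 0.55, 0.80, 0.65]

# Rule 1: linear pool - a plain average of the probabilities.
linear_pool = sum(forecasts) / len(forecasts)

# Rule 2: geometric mean of odds - convert to odds, average geometrically,
# convert back. This tends to give more weight to confident forecasters.
odds = [p / (1 - p) for p in forecasts]
geo_odds = math.prod(odds) ** (1 / len(odds))
geometric_mean_odds = geo_odds / (1 + geo_odds)

print(f"Linear pool:            {linear_pool:.3f}")
print(f"Geometric mean of odds: {geometric_mean_odds:.3f}")
```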
Basically, the question I am asking is:
What should you do in the absence of new information on a forecasting question? Is there a strategy for continuing to update, or should you refrain from updating altogether?
Why am I asking this? Because I have seen people allocate a certain percentage to each month before updating their forecasts. For example: you believe that China will not invade Taiwan in the next three months. You are convinced that they won't, but you are not willing to go to 100%. So what do you do? You allocate 2-4% to each month. With 4% per month, your initial forecast is 88% no. One month goes by without any news and you update to 92%; one more month and you are at 96%. Your forecast drifts upward uniformly as time goes by. Seems reasonable, right? But what if a sudden event happens? You are completely taken aback. Still, at least there is a mental model here - a sketch of it is below - and it could be a good start. It really boils down to the important questions: what makes a good update, and how can we actually model it?
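Here is what that monthly-allocation scheme looks like in code, using the assumed 4% per month from the example above. This only reproduces the mechanical drift described in the paragraph, nothing more.

```python
# Assumed numbers: 4% allocated to each of the three remaining months.
monthly_allocation = 0.04
months_total = 3

initial_p_no = 1 - monthly_allocation * months_total   # 0.88
print(f"Initial forecast: P(no invasion) = {initial_p_no:.2f}")

for months_elapsed in (1, 2):
    # Each month that passes quietly hands its allocation back to "no".
    p_no = 1 - monthly_allocation * (months_total - months_elapsed)
    print(f"After {months_elapsed} quiet month(s): P(no invasion) = {p_no:.2f}")
# Output: 0.88, then 0.92, then 0.96 - a uniform upward drift, with no room
# built in for a sudden surprise.
```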
What would a Bayesian update look like? Does no new information mean no update? But wouldn't that mean I am taking no action even though the risk is apparently decreasing?
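One possible answer - and I stress it is only one framing, using the same assumed 4%-per-month numbers - is that the passage of time is itself new information: "another month went by without an invasion" is evidence you can condition on. A minimal sketch of that framing:

```python
# Prior over outcomes: invasion in month 1, 2, or 3, or no invasion at all.
monthly_p = 0.04
prior = {1: monthly_p, 2: monthly_p, 3: monthly_p, "no invasion": 1 - 3 * monthly_p}

posterior = dict(prior)
for elapsed_month in (1, 2):
    # Condition on "no invasion happened in this month": drop that outcome
    # and renormalize the probabilities of everything that remains.
    posterior.pop(elapsed_month)
    total = sum(posterior.values())
    posterior = {k: v / total for k, v in posterior.items()}
    print(f"After month {elapsed_month}: P(no invasion) = {posterior['no invasion']:.4f}")
# Roughly 0.9167 and then 0.9565 - close to the 0.92 and 0.96 of the uniform
# drift, but not identical, because here the update comes from conditioning
# rather than from a fixed schedule.
```

Under this framing there is an update even without headlines, because the clock itself is evidence; whether that is the right way to think about it is exactly the open question.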