
How You Can Know Cathie Wood Is Wrong About Bitcoin

Curtis White · 4 min read

Cathie Wood recently called for $1 million Bitcoin by 2030. Nobody knows what Bitcoin will be worth in 2030. Many influencers and financial-media “experts” are known for making wildly inaccurate predictions and are never held to account. Jim Cramer is wrong so often that many have poked fun at him, with the meme “inverse Cramer” trending. On the other hand, some people can call markets somewhat accurately; in fact, I have made (though not published) many accurate market calls. The question is: how can you distinguish the fake experts from the real experts? How can you differentiate nonsense speculation from reasoned speculation? Read on to learn how.

Principle #1 Falsifiable Statement

A prediction must be encoded into a falsifiable statement, or a set of probabilistic statements, before it can be tested. David Aronson, in his book “Evidence-Based Technical Analysis,” faulted much of technical analysis for exactly this reason: it wasn’t falsifiable. Many market calls are not actually predictions because the statements themselves cannot be falsified; vague calls, or calls without timeframes, cannot be tested. In this case, Cathie Wood’s statement can be tested and thus is a prediction.
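To make this concrete, here is a minimal sketch of what encoding a call as a falsifiable statement can look like. The `Prediction` class, its fields, and the `is_testable`/`evaluate` methods are my own illustration, not anything Aronson or Wood proposed; the point is only that a testable call must pin down an asset, a price, and a deadline.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Prediction:
    """A market call encoded so that, at the deadline, it is simply true or false."""
    asset: str
    target_price: float
    deadline: date

    def is_testable(self) -> bool:
        # A call is testable only if it names a concrete price target and a deadline.
        return self.target_price > 0 and self.deadline is not None

    def evaluate(self, price_on_deadline: float) -> bool:
        # Once the deadline passes, the statement is falsified or confirmed.
        return price_on_deadline >= self.target_price


# Wood's call, encoded: "BTC >= $1,000,000 by the end of 2030."
call = Prediction("BTC", 1_000_000.0, date(2030, 12, 31))
```

A vague call like “Bitcoin will do well eventually” cannot even be constructed in this form, which is precisely what disqualifies it as a prediction.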

Principle #2 Feedback

Malcolm Gladwell, of “Outliers” fame, popularized the finding that it takes about 10,000 hours of deliberate practice to achieve expertise, and that this holds true across diverse endeavors. However, for practice to be meaningful, one must get meaningful and responsive feedback.

When “experts” make rare predictions, such as where Bitcoin will be in eight years, we know that one simply cannot gain enough feedback from calls like that to develop meaningful expertise. And, true to form, the accurate predictions I made were only 1–2 days in duration and were made in real time.

Some people like to crow when they call the top or bottom in markets. While they may have done that accurately once or twice, we should be skeptical that they have developed any true expertise in doing so, because there simply isn’t enough feedback to develop it. On the other hand, we may give more weight to calls on shorter time frames, where feedback is possible.

This is the second litmus test: is the prediction of a nature that one can gain sufficient feedback to develop expertise?

Principle #3 Explainable Model

Good predictions should derive from explainable models. The most explainable models are quantitative models with specific rules that can be tested. However, expertise can also draw on qualitative factors, and it is possible to develop expertise without being able to elucidate the specific rules that govern the insight.

I do not recall Cathie Wood offering any specific model for why Bitcoin should be $1 million by 2030. Indeed, referencing a model invites others to question your work more rigorously.

Principle #4 Costs of Being Wrong

Any good market call should elucidate the cost of being wrong. Some influencers have called market tops or bottoms somewhat “accurately” but never specified a stop loss or a condition under which they would be wrong. A market call with no understanding of the costs is not a good market call. We can only assume someone is willing to accept a 100% loss if they provide no stop loss with a call. Think about what that means: if someone is willing to incur a 100% loss, then even if they make 100% on their call, they have only made about as much as they risked. That’s not to say stops are always required, but we need to understand the cost of being wrong to understand the value of a call.
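The arithmetic above can be sketched in a few lines. The function name and the example numbers are my own illustrative assumptions; the sketch only shows that without a stop loss, a doubling trade has a reward-to-risk ratio of 1:1, while a modest stop transforms the same trade.

```python
from typing import Optional


def reward_to_risk(entry: float, target: float, stop: Optional[float]) -> float:
    """Reward-to-risk ratio of a long trade.

    With no stop loss, we must assume the entire stake is at risk (a 100% loss).
    """
    reward = target - entry
    # No stop means the full entry price is the potential loss.
    risk = entry - stop if stop is not None else entry
    return reward / risk


# A call that doubles, with no stop loss: risking 100% to make 100%.
no_stop = reward_to_risk(entry=100.0, target=200.0, stop=None)    # ratio of 1.0
# The same call with a 10% stop loss: risking 10% to make 100%.
with_stop = reward_to_risk(entry=100.0, target=200.0, stop=90.0)  # ratio of 10.0
```

This is why a call without a stated stop tells you almost nothing about its value: the implied ratio is the worst possible one for a given target.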

Likewise, if someone is betting on a long shot event and Bitcoin is just 1 out of 100 other long shot plays, one needs to understand this to understand the value of a speculation.

Principle #5 What if…

Markets are complex. With long-odds plays, in order to demonstrate that one has factored in the complexities, one should be able to say when or where one might be wrong. A market call that someone will stick with no matter what has little value. A great example is Michael Saylor: there is no “what if…” with his position. It will eventually work or it won’t; he is unwilling to accept any alternative.

What if — a superior technology comes along compared to Bitcoin?

What if — governments impose regressive regulations on Bitcoin?

What if — a central bank digital currency is offered?

A forecast, unlike a prediction, offers a range of possibilities based on the complexities of reality.

Will Bitcoin Be A Million Dollars By 2030?

The reality is that smart people get things wrong all the time, and expertise rarely transfers across disciplines. This type of prediction fails most of our tests: it is not a reasoned speculation but a wild one. It might come true or it might not, but the speculation itself is wild. The key questions one must ask are: What model are you using to project that? What “what ifs” have you considered? What risk-to-reward analysis have you done?

