The future is bright.
Yet, negativity bias is a well-documented phenomenon in cognitive science:
- First, negative events and information have a greater effect on our psychological state than neutral or positive ones.
- Second, because we overweight negativity, the media feeds us ever more pessimistic views. We crave negativity and therefore gladly pay for it with our clicks and attention.
- Third, you will be labeled naive if you're crazy enough to share positive expectations. Some research even suggests that pessimists are perceived as more intelligent, competent, and expert than optimists.
As a result, third-party input data flowing into your brain is biased towards negativity.
From LLM training, we know what happens to biases in the input data: they become output biases, ingrained in the neural network's weights.
But any bias is dangerous: “What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.” (attributed to Mark Twain)
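To see that mechanism in miniature, here's a toy sketch. It's nothing like a real LLM training run, just an illustration: a "model" that does nothing but memorize token frequencies will faithfully reproduce whatever skew its training corpus carries.

```python
import random
from collections import Counter

# Toy "training corpus": 70 of the 100 headlines we feed the model are negative.
corpus = ["crisis"] * 40 + ["decline"] * 30 + ["breakthrough"] * 20 + ["progress"] * 10

class FrequencyModel:
    """A stand-in for an LLM that just memorizes token frequencies."""

    def fit(self, tokens):
        counts = Counter(tokens)
        total = sum(counts.values())
        self.probs = {tok: n / total for tok, n in counts.items()}
        return self

    def generate(self, k=10, seed=42):
        rng = random.Random(seed)
        tokens, weights = zip(*self.probs.items())
        return rng.choices(tokens, weights=weights, k=k)

model = FrequencyModel().fit(corpus)
print(model.probs)       # input bias: 70% negative tokens...
print(model.generate())  # ...comes straight back out in the "generated" output
```

Feed it a skewed news diet, and a skewed worldview comes out. Your brain is not so different.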
Now, let’s get to the more practical side of this: how can we exploit negativity bias?
I think a simple exploit of negativity bias is to arbitrage the gap between the perception and the reality of technological disruptions and exponential long-term trends.
We are famously bad at understanding exponential growth and compound interest. Exponential trends can easily lead to extreme positive outcomes that are 100x what we initially expected.
- Ken Olsen (1977): “There is no reason for any individual to have a computer in their home.”
- Sir Alan Sugar (2005): “Next Christmas the iPod will be dead, finished, gone, kaput.”
- Steve Ballmer (2007): “There’s no chance that the iPhone is going to get any significant market share.”
This gross underestimation of exponential disruptions is a huge opportunity for tech enthusiasts like us.
We know, for example, that the cost of AI training drops by roughly 70% per year.
So the same $100 spent on AI training buys roughly 10x as much in two years.
100x in four years.
1000x by 2030.
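Here's a quick sanity check of that math in Python, assuming the 70%-per-year decline simply continues (a big assumption, but it's the one behind the numbers above):

```python
# Back-of-the-envelope check of the numbers above, assuming a constant
# 70% annual cost decline (so 30% of last year's cost remains each year).
ANNUAL_COST_FACTOR = 0.30  # assumption: cost drops 70% per year, every year

def buying_power(years: int) -> float:
    """How many times more AI training the same budget buys after `years`."""
    return 1 / ANNUAL_COST_FACTOR ** years

for years in (2, 4, 6):
    print(f"After {years} years: ~{buying_power(years):,.0f}x")

# After 2 years: ~11x     (quoted above as 10x)
# After 4 years: ~123x    (quoted above as 100x)
# After 6 years: ~1,372x  (quoted above as 1000x, i.e. roughly by 2030)
```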
The AI Scaling Laws teach us that neural nets keep improving in performance as we throw more compute, data, and parameters at the problem.
In other words, the intelligence explosion is unstoppable. Essentially, we have solved the problem of cheaply scaling human intelligence.
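If you want to see what such a scaling law looks like, here is a minimal sketch of a Chinchilla-style loss formula. The constants are placeholders I picked for illustration, not fitted values from any particular paper; the point is only the shape of the curve: predicted loss keeps falling as parameters and data grow.

```python
# A Chinchilla-style scaling law: predicted loss falls as a power law in
# parameter count N and training tokens D. All constants below are
# illustrative placeholders (assumed), not authoritative fitted values.
E, A, B = 1.7, 400.0, 410.0   # irreducible loss + coefficients (assumed)
ALPHA, BETA = 0.34, 0.28      # power-law exponents (assumed)

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Lower is better; more parameters and more data both push loss down."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

for n_params in (1e9, 1e10, 1e11):   # 1B -> 100B parameters
    n_tokens = 20 * n_params         # rough "20 tokens per parameter" heuristic
    print(f"N={n_params:.0e}, D={n_tokens:.0e} -> loss ~ {predicted_loss(n_params, n_tokens):.2f}")
```

Run it and the predicted loss drops at every step up in scale, which is the whole argument in one loop.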
You can count on this trend. So you'd better position yourself to benefit from the change now, before reality catches up and the change is here.
For example, we don’t need to train a single human brain for 30 years, spending $230,000 to put it through medical school. We still can, of course.
But with LLMs, we can now also replicate this brain 8 billion times and put it on 8 billion smartphones, giving every human access to first-class medical care, 24/7, basically for free.
When I talk about billions of humanoid bots, self-driving cars, autonomous AI agents, intelligent factories, stripping inefficient management layers out of companies, and automating policy and monetary systems, I need to do it with you, a fellow tech enthusiast and reader of this Finxter newsletter.
Because most people outside our little tech bubble don’t understand the extent of upcoming disruptions.
They will happen much faster than 99% of people expect, due to the exponential nature of declining cost curves.
Humanity has woken up dead matter and injected life energy and intelligence into it. We are in the midst of a giant and literal explosion (see PS below).
The future is bright in every meaning of the word.
Check out this conversation in which Lex Fridman prompts Tesla’s former head of AI, Andrej Karpathy (transcript starts at 23:59):
Andrej: And so basically, it does seem like synthetic intelligences are kind of like the next stage of development. And I don’t know where it leads to. Like, at some point, I suspect the universe is some kind of a puzzle. And these synthetic AIs will uncover that puzzle and solve it.
Lex: And then what happens after, right? Because if you just fast forward Earth many billions of years, it’s like it’s quiet and then it’s like turmoil, you see like city lights and stuff like that. And then what happens at the end? Like is it like a poof? Or is it like a calming, is it an explosion?
Is it like Earth like a giant, because you said emit roasters, will it start emitting like a giant number of satellites?
Andrej: Yes, it’s some kind of a crazy explosion. And we’re stepping through an explosion and we’re like living day to day and it doesn’t look like it. I saw a very cool animation of Earth and life on Earth and basically nothing happens for a long time and then the last like 2 seconds, like basically cities and everything and the lower orbit just gets cluttered and just the whole thing happens in the last 2 seconds and you’re like, this is exploding, this is a state of explosion.
Lex: So if you play, yeah, yeah, if you play it at normal speed, it’ll just look like an explosion.
Andrej: It’s a firecracker, we’re living in a firecracker.
Lex: Where it’s going to start emitting all kinds of interesting things. And then, so explosion doesn’t, it might actually look like a little explosion with lights and fire and energy emitted, all that kind of stuff, but when you look inside the details of the explosion, there’s actual complexity happening where there’s like, yeah, human life or some kind of life.
Andrej: We hope it’s not a destructive firecracker. It’s kind of like a constructive firecracker.