In this series we examine some of the most popular doomsday scenarios predicted by modern AI experts. Previous articles have included Misaligned Purpose, Artificial Stupidity, Wall-E Syndrome, Humanity Joins the Hivemind and Killer Robot.
We’ve covered a lot in this series (see above), but nothing even comes close to our next topic. The “democratization of expertise” may sound like a good thing – democracy, expertise, what’s not to like? But by the time you finish reading this article, we intend to have convinced you that it’s the biggest AI-related threat to our species.
To understand this properly, we need to revisit a previous post on what is known as “WALL-E syndrome.” It’s a hypothetical condition in which we become so dependent on automation and technology that our bodies grow soft and weak until we’re no longer able to function without the physical help of machines.
When we talk about the “democratization of expertise,” we’re specifically referring to what can most easily be described as “WALL-E syndrome for the mind.”
To be clear, we are not talking about the democratization of information, which is vital to human freedom.
There is a popular board game called “Trivial Pursuit” that challenges players to answer completely unrelated trivia questions from different categories. It long predates the Internet and, thus, is designed to be played using only the knowledge that is already in your head.
You roll some dice and move a game piece around a board until it comes to rest, usually on a colored square. You then draw a card from a large deck of questions and try to answer the one that matches the color you landed on. To determine if you succeeded, you turn the card over and see if your answer matches the printed answer.
A game of Trivial Pursuit is only as “accurate” as its database. This means that if you play a 1999 edition and get asked which MLB player holds the record for most home runs in a season, you’ll have to answer the question incorrectly in order to match the printed answer.
The correct answer is “Barry Bonds, with 73.” But, because Bonds didn’t break the record until 2001, the 1999 edition most likely lists previous record-holder Mark McGwire’s 1998 total of 70.
The problem with databases, even those curated and hand-labeled by experts, is that they only represent a snapshot of the data at a given moment in time.
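To make the snapshot problem concrete, here’s a minimal Python sketch. The card text, answer strings, and helper function are all hypothetical illustrations, not taken from any real edition of the game:

```python
# A curated trivia "database" is frozen at the moment it is printed.
# All of the data below is invented for illustration.
PRINTED_ANSWERS_1999 = {
    "Which MLB player holds the single-season home run record?": "Mark McGwire (70, 1998)",
}

CURRENT_ANSWERS = {
    "Which MLB player holds the single-season home run record?": "Barry Bonds (73, 2001)",
}

def check_answer(question: str, player_answer: str) -> bool:
    """A 1999 card only 'knows' what was true when it was printed."""
    return player_answer == PRINTED_ANSWERS_1999[question]

question = "Which MLB player holds the single-season home run record?"
print(check_answer(question, CURRENT_ANSWERS[question]))   # False: the truth has moved on
print(check_answer(question, "Mark McGwire (70, 1998)"))   # True: matches the stale card
```

The database isn’t “wrong” so much as permanently stuck at the moment it was compiled.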
Now, let’s expand that idea to a database that isn’t curated by experts. Imagine a game of Trivial Pursuit that functions in much the same way as the vanilla version, except the answer to every question was crowd-sourced from random people.
“What is the lightest element in the periodic table?” Answer, according to 100 random people we asked in Times Square: “I dunno, maybe helium?”
In the next edition, however, the answer might change to something like “According to 100 random high school juniors, the answer is hydrogen.”
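A rough sketch of what that crowd-sourced edition might look like under the hood, assuming the “official” answer is simply whatever response the crowd gives most often. The survey numbers here are invented for illustration:

```python
from collections import Counter

# Hypothetical survey responses; the "answer" is just the plurality response.
times_square_crowd = ["I dunno, maybe helium?"] * 62 + ["hydrogen"] * 30 + ["oxygen?"] * 8
high_school_juniors = ["hydrogen"] * 81 + ["helium"] * 15 + ["no idea"] * 4

def crowd_answer(responses: list[str]) -> str:
    """Return the most common answer, whether or not it happens to be correct."""
    return Counter(responses).most_common(1)[0][0]

print(crowd_answer(times_square_crowd))   # "I dunno, maybe helium?"  (wrong)
print(crowd_answer(high_school_juniors))  # "hydrogen"                (right)
```

Same mechanism, same question, different crowd, different “truth.”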
What does this have to do with AI?
Sometimes the wisdom of the crowd comes in handy. Like, when you’re trying to figure out what to look for next. But sometimes it’s really silly, like if the year is 1953 and you ask a crowd of 1,000 scientists whether women can experience orgasm.
Whether this is useful for Large Language Models (LLMs) depends on how they are used.
An LLM is a type of AI system used in a wide variety of applications. Google Translate, the chatbot on your bank’s website, and OpenAI’s infamous GPT-3 are all examples of LLM technology in use.
In the case of translation and business-oriented chatbots, the AI is typically trained on carefully curated datasets because it serves a narrower purpose.
But many LLMs are intentionally trained on huge dumps of uncurated data, so that the people who build them can see what they’re capable of.
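You can get a feel for what “seeing what they’re capable of” looks like with a few lines of Python. This sketch uses Hugging Face’s transformers library and the small, publicly downloadable GPT-2 model as a stand-in; BB3’s full-scale models aren’t something you can casually load on a laptop, so treat this as an illustration of the workflow rather than a recipe for reproducing Meta’s bot:

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 was trained on a large scrape of web text, with no expert curation of facts.
generator = pipeline("text-generation", model="gpt2")

prompt = "The lightest element in the periodic table is"
outputs = generator(prompt, max_new_tokens=20, num_return_sequences=3, do_sample=True)

# The model completes the sentence with whatever is statistically plausible,
# not with whatever is true. Some completions will be right, some confidently wrong.
for out in outputs:
    print(out["generated_text"])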
Big Tech has convinced us that it’s possible to make these machines so big that, eventually, they just become conscious. The promise is that they’ll be able to do everything a human can do, but with the mind of a computer!
And you don’t have to look very far to imagine the possibilities. Take 10 minutes to chat with Meta’s BlenderBot 3 (BB3) and you’ll see what all the fuss is about.
It’s a brittle, easily confused mess that spits out more vague “Let’s be friends!” crap than anything coherent, but it’s a lot of fun when the parlor trick does work.
Not only do you chat with the bot, but the experience is gamified in such a way that you build a profile with it. At one point, the AI decided it was a woman. At another, it decided that I was actually the actor Paul Green. All of this is reflected in its so-called “long-term memory”:
It also gives me tags. If we talk about cars, it might give me the tag “likes cars”. As you can imagine, this could one day be extremely useful for Meta if it can link the profiles you build with its bots to its advertising services.
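Here’s a crude illustration of how that kind of tagging could work in principle. This is purely speculative; nothing here reflects Meta’s actual systems, and the keyword lists and tag names are made up:

```python
# Speculative sketch: keyword-based interest tagging from chat messages.
# The mapping below is invented for illustration.
INTEREST_KEYWORDS = {
    "likes cars": {"car", "cars", "engine", "horsepower"},
    "likes batman": {"batman", "joker", "gotham"},
}

def tag_user(messages: list[str]) -> set[str]:
    """Attach an interest tag whenever a message mentions one of its keywords."""
    tags = set()
    for message in messages:
        words = set(message.lower().split())
        for tag, keywords in INTEREST_KEYWORDS.items():
            if words & keywords:
                tags.add(tag)
    return tags

print(tag_user(["I love the Joker in that Batman movie", "my car needs a new engine"]))
# {'likes batman', 'likes cars'} -- a profile an ad system could, in theory, consume
```

The point isn’t the mechanism; it’s that a chat log quietly becomes an interest profile.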
But it doesn’t apply those tags for its own benefit. It could pretend to remember things without sticking labels in its UI. They’re there for us.
They’re how Meta makes us feel connected to, and even a little responsible for, its chatbots.
This is my BB3 bot, it remembers me, and it knows what I taught it!
It’s a form of gamification. You have to earn those tags (both yours and the AI’s) by talking. My BB3 AI likes the Joker from the Batman movie with Heath Ledger; we had a whole chat about it. There isn’t much difference, at least as far as my dopamine receptors are concerned, between earning that feat and getting a high score in a video game.
The truth of the matter is, we are not training these LLMs to be smart. We are training them to be better at outputting text so that we’ll want them to output even more text.
Is this wrong?
The problem is that BB3 was trained on a dataset so large that we call it “internet-sized.” It contains trillions of files, ranging from Wikipedia entries to Reddit posts.
It would be impossible for humans to sift through all that data, so it’s impossible for us to know exactly what’s in there. But billions of people use the internet every day, and for every person saying something smart, there are eight saying things that make no sense at all. That’s all in the database. If someone said it on Reddit or Twitter, it’s probably been used to train the likes of BB3.
Despite this, Meta designed it to mimic human credibility and, apparently, to keep us engaged.
It’s a small leap from building a chatbot that optimizes its output to seem human to building one that convinces the average person it’s smarter than they are.
At least we can fight killer robots. But if even a fraction of the people using Meta’s Facebook app start trusting chatbots over human experts, it could have a terrifyingly harmful effect on our entire species.
What could be worse than that?
We saw this play out to some extent during the pandemic lockdowns. Millions of people without medical training decided to disregard medical advice based on their political ideology.
When faced with the choice between politicians with no medical training and the overwhelming, peer-reviewed, research-backed consensus of the global medical community, millions decided they “trusted” the politicians more than the scientists.
The democratization of expertise, the idea that anyone can be an expert if they have access to the right data at the right time, is a serious threat to our species. It teaches us to believe any idea as long as the crowd thinks it makes sense.
That’s how we came to believe that Pop Rocks and Coca-Cola are a deadly combination, that bulls hate the color red, that dogs can only see in black and white, and that humans only use 10 percent of their brains. These are all myths, but at some point in our history, each of them was considered “common sense.”
And, while it may be quintessentially human to spread misinformation out of ignorance, the democratization of expertise at the scale Meta can reach (about one-third of the people on Earth use Facebook on a monthly basis) could have a devastating impact on humanity’s ability to differentiate between dirt and Shinola.
In other words: it doesn’t matter how smart the smartest people on earth are if the general public places their trust in a chatbot that was trained on data created by the general public.
As these machines get more powerful and better at imitating human speech, we’re approaching a terrifying inflection point where their ability to convince us that what they’re saying makes sense will far exceed our ability to detect rubbish.
The democratization of expertise occurs when everyone believes they are an expert. Traditionally, the marketplace of ideas sorts things out when someone claims to be an expert but doesn’t know what they’re talking about.
We see this a lot on social media, when someone gets ratioed for explaining something to a person who knows far more about the subject than they do.
What happens when all the armchair experts get an AI companion to egg them on?
If the Facebook app can command so much of our attention that we forget to pick our kids up from school, or end up texting while driving because it overrides our logic centers, what do you think Meta can do with a state-of-the-art chatbot that’s designed to tell every person on the planet whatever they want to hear?