The Scary Impact AI Could Have On Society As We Know It

You can’t scroll through a news website without coming across a story about artificial intelligence at the moment. Advancements in this technology are coming at breakneck speed, and AI is starting to infiltrate our day-to-day lives more and more.

The ChatGPT boom is a prime example, with companies all over the world changing their business models to accommodate AI-generated content. And while AI brings a number of benefits to our lives, there are also scary risks and dangers that come with relying on machines for human development.

In a recent Summit talk, Center for Humane Technology co-founders Tristan Harris and Aza Raskin gave a lecture on those risks. After being contacted by senior people within major Artificial General Intelligence companies, Harris and Raskin took it upon themselves to educate the masses about AI. That is because they were told there is a ‘dangerous’ arms race going on to deploy AI as fast as possible, and that it is ‘not safe’.

One survey found that 50% of AI researchers believe there is a 10% chance or greater that humans go extinct from an inability to control AI. Harris and Raskin gave a scary plane analogy to hammer that point home. If 50% of engineers said a plane had a 10% chance of crashing, you wouldn’t get on the plane, would you? So, why are we continuing to hurtle headfirst into an AI arms race?

Social media is described as humanity's first contact with artificial intelligence, and many would consider it to be one of the biggest innovations in modern history. Think about it: you can’t do much without social media these days. Prominent journalists rely on social media to reach the masses. Companies use social media to engage with their customers. Politicians use social media to broadcast their policies. And advertisers use social media as one of the most effective marketing tools in the world. But does all that mask the issues that come with Twitter, Facebook, Instagram, and the like?

Harris and Raskin point out that, while social media boasts a number of key benefits, it also comes with a worrying number of risks. Issues with addiction. People struggling with their mental health. Deepfakes invading the right to privacy. Fake news spreading dangerous misinformation. The sexualisation of minors. Information overload leading to shortened attention spans. Doomscrolling. A breakdown of democracy. How many of these risks are still a problem today, 20 years after the invention of social media?

As we approach the second contact with AI, are we really ready to mitigate the even bigger risks when we haven’t been able to solve the issues with the first contact?

Harris and Raskin explain that social media has already become entangled with our very society, which is why the likes of Twitter and Facebook are so hard to regulate. Once something is entangled, it becomes very difficult to backpedal and make retrospective changes. This is perhaps why the same risks and concerns raised about social media in its early days are still here today. In fact, many of them have worsened over time as the reach and entanglement of social media has grown.

While the next wave of AI, including ChatGPT, is growing and more people are using it by the day, there is still time to make significant changes and improvements. Fail to do so, and AI will become fully entangled with our society, forever unregulated and able to spiral beyond our control.

The Center for Humane Technology co-founders even created a list of risks that could arise from this AI entanglement. A reality collapse. Fake everything. A collapse in trust. Automated loopholes in law. Automated fake religions. Automated cyberweapons. Exponential blackmail. Automated exploitation of code. Automated lobbying. Biology automation. Exponential scams. A-Z testing of everything. Counterfeit relationships. And much more.

Imagine if everyone had access to cyberweapons that could take down a website at the click of a button. Imagine if everyone had access to deepfake technology so realistic that revenge porn was impossible to verify. Imagine scams becoming so convincing that you couldn’t tell the real thing from a fake. Not only do these things present very real dangers to society, but they also create a huge collapse of trust. Your first instinct would be to doubt the authenticity of everything and everyone, which is a rather sad way to live.

There is a big discussion going on about mental health and the impact social media has on our brains. Whichever way you look at it, that problem is going to increase exponentially with the mass adoption of AI technology, before we have even found a solution to the initial one.

The phrase ‘arms race’ is an interesting one for senior AGI figures to use, because that is exactly what AI could become: a weapon. A weapon used to control, to wage war, and to erase. And the longer global powers look at AI as an ‘arms race’, the more likely all of these risks are to become a reality.

On the other side of the coin, if the world gets ahead of AI and works together to regulate and develop at a safe pace as one collective unit, rather than engaging in a dangerous, fast-paced race, we could enjoy the benefits while limiting the risks. Artificial intelligence has the potential to make our lives better, but that should not come to the detriment of our basic rights and quality of life.

It comes back to that statistic…

Why are we racing to board a plane that 50% of engineers believe has at least a 10% chance of crashing?

Why not work together and slow down development to ensure that 100% of experts are confident AI will benefit us 100% of the time?
