Artificial Intelligence (AI) has long captured the human imagination. We’ve all watched movies where humans interact with machine intelligence that mimics or surpasses human intelligence.
In these movies, there is a split between utopian and dystopian visions of AI. In films like The Matrix and the Terminator series, AI becomes self-aware and seeks to supplant humanity as the dominant species on earth.
In other stories, like Star Trek, AI becomes a boon to humanity, helping to build a society where everyone's material needs are met and people are free to explore the universe.
What was once seen as a future vision for AI is already here. Enter ChatGPT.
In the past year, ChatGPT has taken the world by storm. It's an AI-powered chat application capable of having human-like conversations.
It became the "next killer app," quickly becoming one of the fastest applications ever to reach one million users. At its core, it is a large language model: a program that merely picks the next word to say, according to statistics.
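That "pick the next word according to statistics" idea can be illustrated with a toy bigram model, sketched below in Python. This is a drastic simplification of a real large language model (which uses a neural network trained on vast text, not raw word-pair counts), and the tiny corpus and function names here are invented purely for illustration:

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus to learn word-pair statistics from.
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Return the word that most often follows `prev` in the corpus."""
    options = counts[prev]
    return options.most_common(1)[0][0] if options else None

print(next_word("the"))  # prints "cat", the most frequent follower of "the"
```

A real model does the same kind of thing at enormous scale: given the words so far, it estimates a probability for every possible next word and samples from that distribution, one word at a time.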
But the implications are vast. ChatGPT has shown itself to rival human reasoning. It can solve standardized tests, create novel business plans, tease out the legal jargon of contracts, examine vast swaths of medical literature, and generate creative content in ways that rival fiction authors.
With the release of ChatGPT came an incredible surge of investment in and development of AI technology. Every Big Tech company, from Google to Apple to Tesla, suddenly wanted to create its own, more powerful version of ChatGPT.
For example, Google quickly announced its ChatGPT rival called Google Bard; Quora has a version called Poe; and, true to its style, Microsoft quickly incorporated ChatGPT into its Bing search engine.
You might wonder how a mere chat app, albeit a very intelligent one, could pose a risk to humanity. Still, leading AI experts are concerned enough to call for a six-month moratorium on developing even more powerful AI systems.
The question is not “What does AI do now?” but “What will AI do in the future?” A new technological arms race has been created for AI dominance, not just among tech companies but among nation-states and their militaries. Russian President Vladimir Putin predicted in 2017 that the nation that leads in AI will “become the ruler of the world.”
The stage is set for a chaotic showdown as the world's most powerful militaries incorporate AI into their weapons and defense systems. The US military already has autonomous robot dogs equipped with sniper rifles and has long had the capability to assassinate targets using drones.
AI-driven weapons without morals, deliberately programmed to decide when and how to maximize human casualties, are the stuff of nightmares. Unfortunately, that appears to be the default future we are heading towards.
Potentially, nations could use AI technology on their citizens to augment existing surveillance systems or predict and control human behavior.
What if AI was used for sophisticated propaganda? To bias information and messaging so people could no longer tell the truth from falsehood? What if AI was used for censorship and shadow-banning, so that only government-approved messaging could be seen by the public?
The solution is for every concerned citizen to become familiar with AI technology and its risks. We should consider ethical restrictions to prevent the use of AI technology in military settings to kill humans, and we should support the work of AI critics who seek to place limits on unbridled AI use.
Finally, citizens should realize they do have civic power and can organize and effect change. Democracy works when citizens work together.
The threat of AI teaches us a crucial lesson: We need more humanity, and we need each other.