For most of the past century, even as technology advanced constantly, AI seemed like a far-off possibility. Popular culture imagined it in many forms: robots, humanoid interfaces, and more.
However, in just the last few years, AI has exploded onto the world stage and it is becoming bigger every day. While AI promises efficiency and innovation, its unchecked growth raises alarming questions about what is possible for its future.
Geoffrey Hinton, a Nobel Prize winner often called the 'Godfather of AI', is a computer scientist known for his work on artificial neural networks. He has voiced his concerns about where the technology is headed, saying, "The rapid progress of AI is going to transform society in ways we do not fully understand, and not all of the effects are going to be good."
Hinton, along with many other artificial intelligence experts, has shared his concerns about the future of AI. Yet these societal dangers are still flying under the radar for most people.
In 2022, OpenAI released ChatGPT to the public. ChatGPT, which many are familiar with and which is arguably the leading face of AI right now, is a chatbot that uses natural language processing to produce humanlike conversation and answers to questions. Many other AI programs soon followed.
Fast forward to the present day, and the list of AI tools keeps growing. On the internet, you can easily find artificial intelligence that generates a spectrum of different things: videos, images, music, and more. What shocks many people is artificial intelligence's ability to create things faster, and sometimes subjectively better, than a human can.
Right now, understandably, artificial intelligence seems like a fun and imaginative tool in the eyes of most, with AI-generated content becoming more and more common on social media platforms and across the internet. But soon it may become impossible to ignore that AI carries real risks, from job displacement to cybersecurity threats, in the near future.
According to National University, 77% of companies are using or exploring AI, with 83% prioritizing it in their business plans. According to IBM, "Bad actors can exploit AI to launch cyberattacks. They manipulate AI tools to clone voices, generate fake identities, and create convincing phishing emails—all with the intent to scam, hack, steal a person's identity, or compromise their privacy and security."
Although much of this paints a scary picture of the future, many people are pursuing solutions and regulations to keep AI on a path that benefits humanity.
Daniel O'Leary, a professor at the USC Marshall School of Business, shed some light on the people working toward a positive future with artificial intelligence. He warned that AI is and will remain an issue, but also mentioned that "There is a development called 'AI for Good' that will be a driving positive factor". AI for Good is an organization that identifies trustworthy AI applications, builds skills and standards, and advances AI governance for sustainable development. It, along with other groups, is working toward a better future with AI.
Beyond looking into artificial intelligence's future, its roots go back decades, and its history reveals both promise and danger. AI's journey began in the 1950s, but its impact today far surpasses anything early developers could have imagined. In 1950, Alan Turing published "Computing Machinery and Intelligence," which proposed a test still used to gauge machine intelligence, and the term "artificial intelligence" was coined a few years later, in 1956. Since then, AI has developed at a fast rate, eventually leading to what we are familiar with today.
Throughout its development, artificial intelligence wasn't always at the forefront of research, backed by big corporations and heavy investment. Until roughly the last 30 years, most of AI's progress came from niche academic research: small teams building projects with limited practical use, like simple chatbots or a cart that could navigate a room full of chairs without human interference.
As of August 2024, almost 40% of the U.S. population ages 18–64 had used generative AI to some degree, according to the Real-Time Population Survey. With so many people using AI, the risks have grown along with it.
In an interview, an NYC iSchool student shared this same sentiment, saying if we “put too much trust in AI, and it malfunctions—it can be a danger to society.”
In April 2024, U.S. auto-safety regulators said that their investigation into Tesla's Autopilot had identified at least 13 fatal crashes in which the feature was involved. Tesla is an automotive company whose cars include a self-driving feature; according to its website, "Autopilot is an advanced driver assistance system that assists your vehicle with steering, accelerating and braking". While an AI-powered self-driving feature sounds innovative and useful, its malfunctions have raised red flags, prompting ongoing scrutiny of Autopilot safety by the NHTSA (National Highway Traffic Safety Administration). Overall, the trust that many people place in AI every day could turn out to be ill-advised.
While the examples of AI's history and dangers mentioned so far come from the United States, the growth of AI is not confined to any single country or industry; it is a global phenomenon, constantly evolving for better or worse. A big contributor to that worldwide spread is artificial intelligence's accessibility.
As AI continues to develop, many people are creating and using programs built on or incorporating artificial intelligence, which can be found with a simple Google search. Even without a search engine, many companies are adding AI to their services. Grammarly, for example, is an English-language writing assistant; according to its website, its AI features help you "find the words you need—to write that tricky email, to get your point across, to keep your work moving."
Beyond everyday tools like writing assistants, the scope of AI is evident in major industries. Artificial intelligence is transforming fields like education, healthcare, and manufacturing. A paper co-authored by Prof. Gabriela Fernandes describes how AI and healthcare are intertwining: "The healthcare industry is undergoing dramatic transformations at present. One of the most recent industries to heavily use AI is tele-health, which is used for anything from issuing electronic healthcare cards to providing individual counseling." In hospitals, AI is already helping doctors spot fractures, triage patients, and detect early signs of disease.
While all of this is undeniably good, the downsides of relying so heavily on AI at a global scale often go unnoticed. On an emotional level, artificial intelligence in hospitals has no real connection to the patient: robotic arms and automated systems lack empathy and compassion, especially for people in deep pain, and trust in a real human doctor is crucial to the success of any treatment. On a physical level, concerns arise too: robotic systems can fail due to human error or mechanical failure, and electrical currents can misapply to surrounding tissue, causing accidental burn injuries.
The same lesson from healthcare applies to other industries as AI expands into their markets: depending on it so heavily carries many possible dangers. That is not to say artificial intelligence is all bad, though. With the right precautions, AI can bring many benefits and new possibilities to these industries.
However, one big area that does not benefit is the job market. By some estimates, 14% of all workers have already been displaced from their jobs by AI, and 60% of jobs in advanced economies are at risk of being affected by it. With artificial intelligence often far cheaper and more productive than human workers, many companies are using it to streamline their businesses. AI doesn't stop working, doesn't need a paycheck, and can grind through tedious or lengthy tasks continuously.
Some big names, like Elon Musk, have weighed in, predicting the extreme: "Probably none of us will have a job" in the future because of AI. At a conference, he described a future where jobs could be optional. "If you want to do a job that's kinda like a hobby, you can do a job," Musk said. "But otherwise, AI and the robots will provide any goods and services that you want." That may sound intriguing to some and scary to others, but it is not the only view of the situation.
Workers' worry about their jobs shows up in a recent FlexJobs poll, which found that 34% of workers believe AI will lead to job displacement in their field within the next five years. However, some argue artificial intelligence will also create jobs, keeping a balance; the World Economic Forum has predicted that the 'AI Revolution' is also likely to create 97 million new jobs. Overall, while there are many reputable ideas about the future of AI in the world, for now we have to make sure nothing gets out of hand, put regulations in place, and use AI to our advantage.
"By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it," said Eliezer Yudkowsky, an AI researcher. His warning frames well the many competing certainties circulating about AI. It is not so hard for AI to get out of control, take jobs, malfunction, or cause the other problems mentioned above. Strengthening government regulation and building an overall ethical framework around AI is a good way to keep artificial intelligence working only for our benefit. As of now, with AI developing and branching out so fast, it is hard to regulate and manage, but hopefully this issue will be taken care of soon.
One organization working on ethical frameworks for artificial intelligence is AI for Good, the United Nations' leading platform on AI for sustainable development. Established in 2017, its founding mission is to leverage the transformative potential of artificial intelligence to drive progress toward the UN Sustainable Development Goals, which benefits the world overall. Alongside AI for Good, many other organizations and individuals are steering AI toward positive ends and hopefully can have a big impact. On the world stage, the direction AI takes in the coming years depends on whether societies can navigate its risks responsibly.
In summary, the effects of AI are already rippling through society, touching everything from jobs to education. While many uncertainties surround it, what is certain is that its future will shape the world we live in. Judging by the last few years alone, the pace of artificial intelligence's growth shows no sign of slowing, and may even accelerate. While AI promises efficiency and innovation, its unchecked growth still raises alarming questions about what its future could hold.
For now, artificial intelligence seems like a fun and imaginative tool, with AI generation, use, and implementation becoming more and more common everywhere. There are many possible dangers we should be on the lookout for, but artificial intelligence can also be a great tool. In the end, the future of AI will be determined by our ability to harness its power responsibly, balancing innovation with caution to ensure it serves humanity rather than harms it.