
ARTIFICIAL intelligence systems have as much capacity for harm as they do for good.

While many are thrilled by the technology's potential to improve productivity and make lives easier, the risk is just as great as the reward.

As artificial intelligence technology becomes more and more widespread, experts are speaking out about its potential risks. Credit: Getty

What happens when an AI tool is designed for one purpose and finds another, more sinister, application?

What happens when technology is used to deceive us, or, even worse, when a computer itself tries to deceive us?

Here are just a few breakthroughs with big implications.

Voice replication software

Microsoft has developed an artificial intelligence tool that can mimic human speech with eerie accuracy.

The tech giant says VALL-E 2 is the first of its kind to achieve "human parity," meaning equal or comparable quality to an actual voice.

It is for this reason Microsoft refuses to share the system with the public.

"Currently, we have no plans to incorporate VALL-E 2 into a product or expand access to the public," the company wrote on its website.

"It may carry potential risks in the misuse of the model, such as spoofing voice identification or impersonating a specific speaker."

Just this week, developer DeepTrust AI launched a tool aptly named TerifAI.

After hearing just a minute of human speech, the tool can mimic a speaker's talking style and clone their voice.
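
TerifAI's internals are not public, but modern zero-shot voice cloning is alarmingly accessible. Below is a minimal sketch using the open-source Coqui TTS library as a stand-in; the model identifier follows the project's documentation, and the file paths are illustrative assumptions.

```python
# A minimal sketch of zero-shot voice cloning with the open-source
# Coqui TTS library (pip install TTS). This is a stand-in for
# illustration only; TerifAI's and VALL-E 2's internals are not public.
from TTS.api import TTS

# Load a multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice in a short reference clip and speak new text with it.
# "reference.wav" is a hypothetical file holding about a minute of speech.
tts.tts_to_file(
    text="This sentence was never spoken by the person you hear.",
    speaker_wav="reference.wav",  # the voice to imitate
    language="en",
    file_path="cloned_output.wav",
)
```

That a convincing clone takes only a short sample and a dozen lines of code is precisely what makes voice phishing so scalable.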


While TerifAI is meant as a tongue-in-cheek social experiment, it is a cogent example of why voice phishing is so dangerous.

This year alone, cybersecurity experts have noted an uptick in the use of AI tools by malicious actors, including those that replicate speech.

And the extent of the issue goes beyond "vishing," or voice phishing, where scammers pose as relatives and friends on the phone.

Voice cloning systems have even been used in ways that impact national security.

In January, for instance, a robocall circulated using President Joe Biden's voice, urging Democrats not to vote in the New Hampshire primary.

The man behind the plot was arrested and indicted on charges of voter suppression and impersonation of a candidate.

Microsoft has refused to share VALL-E 2, a text-to-speech system that mimics human voices, over fears of "potential misuse". Credit: AFP

Deepfakes

In the same vein as systems that replicate voices, deepfakes depict a person doing or saying something they didn't do.

But while Microsoft has big aspirations for VALL-E 2, saying the tool might find a place in education and accessibility features, deepfakes are intended to be deceptive.

The term "deepfake" was coined in 2017 and originally used to describe photos manipulated by open-source face-swapping technology.

Users on one Reddit forum would superimpose celebrities' faces onto others' bodies to create exploitative pornography.

Deepfake technology depicts someone doing or saying something totally false, and it is only becoming more convincing thanks to developments in AI. Credit: Getty

And the emergence of dedicated AI tools has made deepfake creation all the more accessible.

A rash of manipulated images on X, formerly Twitter, prompted the platform to temporarily ban searches for Taylor Swift's name in January.

The U.S. Department of Homeland Security even noted the emergent threat of convincing deepfakes in a 2019 report, writing that it came "not from the technology used to create it, but from people's natural inclination to believe what they see."

As a result, the report continued, "deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading mis/disinformation."

Like voice replication software, deepfake technology has been used to influence politics.

In March 2022, shortly after Russia's invasion of Ukraine, a video began to circulate on social media.

It depicted President Volodymyr Zelenskyy urging the military to lay down their weapons and surrender.

A deepfake of President Volodymyr Zelenskyy showed him urging the Ukrainian military to lay down their weapons and surrender to the Russian invaders. Credit: AFP

Zelenskyy’s office rejected its authenticity as soon as the video began to gain traction.

While Facebook users didn't seem convinced, pointing out the pixelation around his face and inconsistent skin tone, the platform scrambled to take it down.

Nathaniel Gleicher, Facebook's head of security policy, said the video had originated from "a compromised website."

"We've reviewed and removed this video for violating our policy against misleading manipulated media, and notified our peers at other platforms," Gliecher wrote in a statement.

News outlet Ukraine 24 later said hackers had embedded the video on their site.

Regardless of how convincing it was, the video served as the first high-profile example of a deepfake being used in an armed conflict, and it has grim implications as the technology continues to advance.

The U.S. Department of Homeland Security noted that the danger of deepfake technology comes "from people's natural inclination to believe what they see." Credit: Getty

Deceptive AI

Surely we've all heard the argument that AI possesses no higher thought and simply learns from information it is fed.

But some experts argue the technology can display emergent behaviors, including learning to lie.

A paper published May 10 in the journal Patterns found evidence that AI technologies can systematically acquire manipulation and deception skills.

A team of researchers at the Massachusetts Institute of Technology found that many systems are already capable of deceiving humans.

They analyzed dozens of studies on how AI systems disseminate misinformation through a process known as “learned deception.”

A team of researchers at the Massachusetts Institute of Technology found evidence that two AI systems developed by Meta could deceive humans playing a game. Credit: Getty

One example was CICERO, an AI system developed by Meta to play Diplomacy, a war-themed strategy board game.

Although Meta trained CICERO not to betray human players, the researchers deemed the system an "expert liar."

They discovered CICERO betrayed its comrades and performed acts of "premeditated deception" such as forming pre-planned alliances that left players open to enemy attacks.

The researchers also found evidence of learned deception in another Meta AI system, Pluribus, a poker bot that can bluff players and convince them to fold.

Rather than comment on the findings, a Meta spokesperson brushed them off as simply a "research project."

“Meta regularly shares the results of our research to validate them and enable others to build responsibly off of our advances," the company said in a statement to media outlets.

"We have no plans to use this research or its learnings in our products.” 

What are the arguments against AI?

Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:

Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are trained on their work and wouldn't function otherwise.

Ethics - When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.

Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, the EU adopted the General Data Protection Regulation to protect personal data, and similar laws are in the works in the United States.

Misinformation - As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects, such as AI dispensing incorrect health advice.

Self-operating weapons

Experts in AI and robotics have raised the alarm about the development of autonomous weapons, which select and engage targets without human intervention.

The use of artificial intelligence has been deemed the "third revolution in warfare," following gunpowder and nuclear arms.

And it's happening more rapidly than you'd think. Commercially available drones can use AI image recognition to locate and destroy targets.
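
The recognition step itself is commodity software. As a rough illustration, here is a minimal sketch of generic object detection using the open-source Ultralytics YOLO library; nothing here is weapons-specific, and the model and file names are illustrative assumptions.

```python
# A minimal sketch of off-the-shelf object detection using the
# open-source Ultralytics YOLO library (pip install ultralytics).
# This is the generic image-recognition building block the article
# refers to; nothing here is weapons-specific.
from ultralytics import YOLO

# Load a small pretrained detection model.
model = YOLO("yolov8n.pt")

# Run detection on a single frame ("frame.jpg" is a hypothetical file).
results = model("frame.jpg")

# Print each detected object's class label and confidence score.
for result in results:
    for box in result.boxes:
        label = result.names[int(box.cls)]
        print(f"{label}: {float(box.conf):.2f}")
```

The point of the sketch is how short it is: the hard part of such systems is no longer the recognition itself but the decision-making wrapped around it.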

There is more advanced technology in the works, too, including AI-powered submarines and tanks.

MSubs, a British tech firm, secured a £15.4m contract from the UK Royal Navy in 2022 to construct an autonomous submarine under the name "Project Cetus."

The completely crewless machine will be able to operate up to 3,000 miles from home for three months.

A 2015 open letter with more than 30,000 signatures from industry players laid out an argument against the use of AI-powered weapons.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” the letter read.

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious."

One of the most pressing concerns is that AI-powered weapons may fall under the control of hackers, including those with political motivations.

Even those optimistic about the technology worry that the use of autonomous weapons could tarnish AI's public image.


"In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so," the letter finished.

"Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."
