AI technologies may be disrupting numerous industries, but in most cases it's easy to see how they'll be more useful than harmful in the long run. However, these new tools are also opening up plenty of opportunities for nefarious types.

Natural Language AI for Supercharged Phishing Attacks

Being able to understand and produce natural human language has been one of the prime goals of AI research since its earliest days. Today, we have synthetic voice production, highly sophisticated chatbots, natural language text generators, and many other related AI-driven technologies.

These applications are perfect for phishing attacks, where hackers pose as legitimate entities or their representatives to coax sensitive information out of people. With these new technologies, AI agents could impersonate people en masse over email, telephone calls, instant messaging, or anywhere else humans talk to each other through a computer system.

Unlike the generic phishing we know, this would amount to turbocharged "spear" phishing, where attempts are targeted at specific individuals and use information about them in particular to make the scam more effective. For example, the AI software could pose as someone's boss and ask for money to be paid into an account, a variant of phishing known as CEO fraud.

Deepfaked Social Engineering

Social engineering is a hacking practice that targets weaknesses in human psychology and behavior to get around tough technological security measures. For example, a hacker might phone an important person's secretary posing as a sanitation worker and ask where the office's trash is thrown out. The criminal then heads to that location to look for discarded documents or other clues that can be pieced together into exploits.

Deep learning systems that can replicate faces and voices (known as deepfakes) have advanced to the point where they can be used in real time. There are already services where you can submit samples of your own voice and get text-to-speech output that sounds just like you. In principle, technology like this could be used to clone anyone's voice. From there, all an attacker has to do is phone or video call someone while impersonating whomever they like, with public figures being the easiest targets of all.

For example, Podcastle Revoice is one such service that promises to "create a digital copy of your own voice" based on voice samples you submit. Podcastle passed along a statement to us about how it addresses these concerns:

The potential for deepfakes and social engineering using voice cloning is a serious one, and that's why it's essential that companies mitigate the possibility for abuse. Podcastle's Revoice technology can be used to create a digital copy of your voice and as such we have clear guidelines on how voices can be created, as well as checks to prevent misuse. In order to generate a Digital Voice on our platform, a user must submit a live voice recording of 70 distinct (i.e. determined by Podcastle) sentences -- meaning a user cannot simply use a pre-recording of someone else's voice. These 70 recordings are then manually checked by our team to ensure accuracy of a single voice, and then the recordings are processed through our AI model.

Smarter Code Cracking and Automated Vulnerability Discovery

It takes human analysts hours upon hours to scour lines of code for vulnerabilities, whether to fix them or exploit them. Now we've seen that machine learning models such as ChatGPT can both write code and recognize vulnerabilities in submitted code, which opens up the possibility that AI could be writing malware sooner rather than later.
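To make that concrete, here's a minimal sketch of how a defender might automate a vulnerability review with a large language model. It assumes the official OpenAI Python client and an API key in the environment; the model name and the deliberately vulnerable snippet are illustrative, not a specific product's workflow.

```python
# Minimal sketch: asking a large language model to review code for
# vulnerabilities. Assumes the official OpenAI Python client is installed
# and OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

SNIPPET = """
def get_user(db, user_id):
    # Classic SQL injection: user input concatenated into the query
    return db.execute("SELECT * FROM users WHERE id = " + user_id)
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List any vulnerabilities "
                    "in the submitted code and suggest fixes."},
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)
```

The same loop that helps a defender triage code faster is what makes automated vulnerability discovery worrying in the wrong hands.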

Malware That Learns and Adapts Through Machine Learning

The key strength of machine learning is how it can take huge amounts of data and extract useful rules and insights from it. It's reasonable to expect that future malware may take advantage of this general concept to adapt to countermeasures rapidly.

This may lead to a situation where both malware and anti-malware systems effectively become warring machine learning systems that rapidly drive each other to higher levels of sophistication.
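As a loose illustration of the defensive half of that arms race, the sketch below shows an anti-malware classifier that folds in newly labeled samples incrementally instead of being retrained from scratch. The feature vectors are random stand-ins for real file or behavior features, and every name here is illustrative.

```python
# Minimal sketch of an anti-malware model that adapts online as
# adversarial malware shifts its behavior. Feature extraction is faked
# with random vectors; a real system would use file/behavior features.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
N_FEATURES = 32

detector = SGDClassifier(loss="log_loss")  # supports incremental training

# Initial batch: feature vectors labeled benign (0) or malicious (1)
X0 = rng.normal(size=(200, N_FEATURES))
y0 = rng.integers(0, 2, size=200)
detector.partial_fit(X0, y0, classes=[0, 1])

# As new labeled samples arrive, keep folding them in without a full retrain
for _ in range(10):
    X_new = rng.normal(size=(20, N_FEATURES))
    y_new = rng.integers(0, 2, size=20)
    detector.partial_fit(X_new, y_new)  # adapt to the latest samples

print(detector.predict(rng.normal(size=(1, N_FEATURES))))
```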

Generative AI to Create Fake Data

AI technologies can now generate images, video, text, and audio seemingly out of thin air, and they've reached the point where experts can't tell the results are fake (at least not at a glance). This means you can expect a flood of fake data on the internet in the future.

As an example, fake social media profiles are currently fairly easy to spot, so avoiding catfishing scams or simple disinformation bot campaigns hasn't been that hard for informed audiences. However, these new AI technologies could generate fake profiles that are indistinguishable from real ones.

"People" with unique faces with generated photos of their entire fake lives, unique coherent profile information, and entire friend and family networks consisting of other fake people. All having conversations with each other just like real people. With networks of fake online agents such as these, malicious actors could pull off various scams and disinformation campaigns.


Is AI Both the Cure and the Disease?

It's inevitable that some people will try to use any new technology for malicious ends. What makes this new generation of AI technology different is how quickly it's going beyond our human capability to detect it.

This means, somewhat ironically, that our best defense against these AI-enhanced avenues of attack will be other AI technologies that fight fire with fire. This would seem to leave you with no choice but to watch them duke it out and hope that the "good guys" come out on top. Nevertheless, there are several things you can do to stay protected online, avoid ransomware, and spot scams on popular platforms like Facebook, Facebook Marketplace, PayPal, and LinkedIn.
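As a toy example of what "fighting fire with fire" looks like in practice, here's a minimal sketch of an ML-based phishing filter built with scikit-learn. The handful of training messages are made up for illustration; real filters learn from millions of labeled examples and far richer features.

```python
# Toy phishing-text classifier: TF-IDF features + Naive Bayes.
# The tiny training set is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, verify your password here immediately",
    "Urgent: CEO needs you to wire funds to this account today",
    "Lunch meeting moved to 1pm, see you in the usual room",
    "Here are the slides from yesterday's project review",
]
labels = ["phishing", "phishing", "legitimate", "legitimate"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Please confirm your password to avoid suspension"]))
```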