Your Business and Cybersecurity: AI Technology Can Create Deceptive Phishing Attacks

Mack Jackson Jr
4 min read · Aug 16, 2021


Photo by Markus Winkler on Unsplash

Artificial intelligence technology is encroaching into previously unexplored areas, and this time the target is phishing email campaigns. Researchers have observed that leveraging the deep learning language model GPT-3 and other AI-as-a-service platforms could significantly lower the barrier to entry for large-scale spearphishing campaigns.

Researchers have long debated whether fraudsters would find it worth the time and effort to train machine learning models capable of writing convincing phishing messages. After all, bulk phishing emails are formulaic by nature and already perform reasonably well. Producing highly targeted and personalized “spearphishing” emails, on the other hand, takes real effort, and this is the form of cyber attack where artificial intelligence technology may prove unexpectedly valuable to hackers.

At the annual Black Hat and Defcon security conferences in Las Vegas, a team from Singapore’s Government Technology Agency discussed a recent experiment. They sent targeted phishing emails to 200 of their colleagues, some written by the team themselves and others generated by an AI-as-a-service platform. Both sets of emails contained links that were not dangerous but instead reported clickthrough rates back to the researchers. The team was surprised to find that considerably more people clicked the links in the AI-generated messages than in those written by humans.
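The write-up does not describe how the researchers measured clicks, but a harmless tracking link can be as simple as a unique URL per recipient that logs a hit when visited. The Python sketch below illustrates the idea with Flask; the route, tokens, and log format are illustrative assumptions, not details from the study.

```python
# Minimal sketch of a benign clickthrough tracker, assuming each
# test email embeds a link like https://study.example/c/<token>,
# where <token> identifies the recipient and the message variant
# (human-written vs. AI-generated). Illustrative only.
import datetime
from flask import Flask

app = Flask(__name__)

# Hypothetical mapping of tokens to (recipient, variant) pairs,
# created when the test emails were sent out.
TOKENS = {
    "a1b2c3": ("colleague-001", "human"),
    "d4e5f6": ("colleague-002", "ai-generated"),
}

@app.route("/c/<token>")
def record_click(token):
    recipient, variant = TOKENS.get(token, ("unknown", "unknown"))
    # Append one line per click so clickthrough rates for each
    # variant can be tallied after the experiment.
    with open("clicks.log", "a") as log:
        log.write(f"{datetime.datetime.utcnow().isoformat()} "
                  f"{recipient} {variant}\n")
    # Show a harmless page instead of any real payload.
    return "Thanks! This link was part of an internal security study."

if __name__ == "__main__":
    app.run(port=8080)
```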

Cybersecurity researchers note that building AI-driven phishing tools from scratch requires sophisticated software and significant development cost. When AI technology is paired with AI-as-a-service platforms, however, creating attack campaigns costs a fraction of that. These new AI attack systems, combined with social engineering, produce emails that look and sound like they were written by a real person.

Photo by Sigmund on Unsplash

In Singapore, cybersecurity researchers combined OpenAI’s GPT-3 platform with other AI-as-a-service technologies focused on personality profiling to create phishing emails tailored to the backgrounds and characteristics of their coworkers. The researchers used machine learning personality analysis to anticipate a person’s biases and mindset based on behavioral inputs. They then built a pipeline that groomed and refined the emails before delivery by running the data through multiple systems. They report that the outputs sounded “abnormally human” and that the platforms offered up surprising details, such as referencing Singaporean legislation when tasked with creating content for people in Singapore.

While the researchers were pleased with the quality of the synthetic messages and the number of clicks they drew from colleagues, they caution that the experiment was only a first step. The sample size was modest, and the target group was relatively homogeneous in occupation and geography. Moreover, both the human-written and the AI-pipeline messages were produced by in-house staff, not by remote attackers trying to strike the right tone.

Nonetheless, the researchers considered the potential role of AI-as-a-service in future phishing and spearphishing operations. OpenAI, for instance, has expressed concern about potential misuse of its service and others like it. According to the researchers, OpenAI and other ethical AI-as-a-service providers adhere to explicit codes of conduct, audit their platforms for potentially harmful activity, and even attempt to verify user identities to some degree.

The researchers believe that frameworks for artificial intelligence governance, such as those created by the Singaporean government and the European Union, might help businesses prevent exploitation. However, they also devoted a portion of their work to systems capable of identifying AI-generated phishing emails, a complicated issue that has gained further attention with the proliferation of deepfakes and AI-generated fake news. The researchers again used deep learning language models such as OpenAI’s GPT-3, this time to build a framework for differentiating AI-generated content from human-written material. The objective is to develop systems that can recognize synthetic media in emails, making it easier to flag possible AI-generated phishing messages.
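The article does not explain how such a detection framework works internally. One common baseline is to score how predictable a text is to a language model, since machine-written prose tends to have unusually low perplexity. The sketch below assumes the Hugging Face transformers library and uses GPT-2 as a stand-in scoring model; the cutoff is an arbitrary placeholder, not a tuned threshold.

```python
# Sketch of a perplexity-based screen for machine-generated text,
# assuming the Hugging Face `transformers` package. GPT-2 stands in
# for whatever scoring model a real detector would use; the cutoff
# below is an arbitrary illustrative value, not a tuned threshold.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the inputs as labels makes the model return the
        # mean cross-entropy loss over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

def looks_machine_generated(text: str, cutoff: float = 30.0) -> bool:
    # Very low perplexity suggests text a language model finds
    # highly predictable; one weak signal, not proof.
    return perplexity(text) < cutoff

email_body = "Please review the attached quarterly compliance report."
print(perplexity(email_body), looks_machine_generated(email_body))
```

A signal like this is weak on its own; a production detector would combine it with other features and calibrated thresholds.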

AI-generated phishing campaigns will become a challenge for security experts worldwide. Phishing attacks are not one-off incidents; hackers run them as continuous campaigns.

The hackers only need one person to click on one email to launch an attack.

About the author

Mack Jackson Jr is a cybersecurity consultant, speaker, TV host, and professor of business management. He has worked in cybersecurity for over 15 years and has over 25 years in the information technology industry. For further information, please email mjackson@mjcc.com.
