Social engineering is a type of cyberattack that manipulates people into divulging sensitive information or performing actions that compromise their security. It can take many forms, from phishing scams to impersonation and baiting. Some current trends include:
- Phishing Scams: Phishing remains one of the most common forms of social engineering attacks. Attackers use emails to trick individuals into disclosing sensitive information, such as login credentials or financial data. Phishing has increased in recent years, as attackers have become more sophisticated in their tactics and use more convincing emails to lure victims.
- CEO Fraud: CEO fraud is a social engineering attack that targets businesses and organizations. Attackers impersonate executives or suppliers and trick employees into disclosing sensitive information or transferring money to a fraudulent account. CEO fraud attacks have become increasingly common and often result in significant financial losses for organizations.
- Social Media Scams: Social media has become an increasingly popular target for social engineering attacks. Attackers use tactics such as phishing scams or impersonating trusted individuals or organizations to trick victims into disclosing sensitive information or installing malware on their devices.
- Vishing and Smishing: Vishing and smishing are social engineering attacks that use voice and text messaging, respectively, to trick victims into disclosing sensitive information. These types of attacks have become more prevalent in recent years, as attackers seek to exploit the convenience and immediacy of voice and text communication.
- 'Hi Mum' Messenger Scam: A variation of smishing in which scammers contact potential victims while impersonating a family member or friend, mostly via messenger services like WhatsApp. The scammer explains the different/new number with a lost device and, after exchanging a couple of messages to build trust, asks for money to urgently pay a bill, e.g. for the replacement of the lost phone, which fits the story of the lost device.
A first tool which comes to mind to assist in phishing attacks is ChatGPT, an artificial-intelligence chatbot developed by OpenAI and launched in November 2022. While ChatGPT has some protection built in, turning a request to write a phishing mail into an ethical lecture (cf. Figure 1), it has no problem fulfilling a more specific request whose output could also be used as a phishing mail [1] (cf. Figure 2). Note that the request does not include any information about security patches or bug fixes and that ChatGPT added this entirely on its own. The mail sounds legitimate; however, an attacker would still need to adjust the steps to refer to the included / attached file. ChatGPT or machine learning tools with similar capabilities could be used to automate the writing of phishing mails. This in turn means that the quality of mass phishing mails might increase and that they can be individualized, similar to spear phishing mails, which target only a small number of individuals and adapt the content more closely to an appropriate context.

Figure 1: Request to ChatGPT to write a phishing mail
Other machine learning tools which can be used for social engineering attacks produce deep fakes: realistic-looking videos (or audio) of an impersonated person. Deep fake tools have the potential to make impersonation attacks more effective by producing computer-generated images, videos, or audio that mimic the appearance and speech patterns of real people. They can be used to create highly convincing impersonations of individuals, organizations, or even news broadcasts. A recent example was a call to several European mayors impersonating Vitali Klitschko, the mayor of Kyiv [2]. While the intent in this example was not clear, a deep fake video could be used to impersonate a trusted executive or authority figure, convincing an unsuspecting employee to disclose sensitive information or transfer money to a fraudulent account – as discussed for the previously described attacks.
Again, the threat is not only that machine learning tools render these attacks more effective; they also allow attackers to automate the process of creating deep fakes, making it easier to carry out such attacks at scale. Machine learning algorithms could also be used to analyse victims' reactions to an attack and predict how individuals are likely to respond, in order to improve the accuracy and effectiveness of these attacks.
In conclusion, the use of machine learning and in particular deep fakes is likely to have a significant impact on social engineering attacks. They have the potential to make these attacks even more convincing and effective. This poses a serious threat to individuals, organizations, and governments around the world. It is important to stay informed about these developments and take steps to protect against them, including educating employees about the dangers of social engineering attacks and implementing robust security measures.

Figure 2: Request to ChatGPT to write a tech support mail
Author
PD Dr. Sebastian Pape
Social Engineering Academy
References:
[1] Bree Fowler: “It’s Scary Easy to Use ChatGPT to Write Phishing Emails”, CNET, Feb 16th 2023, https://www.cnet.com/tech/services-and-software/its-scary-easy-to-use-chatgpt-to-write-phishing-emails/
[2] Philip Oltermann: “European politicians duped into deepfake video calls with mayor of Kyiv”, The Guardian, Jun 25th 2022, https://www.theguardian.com/world/2022/jun/25/european-leaders-deepfake-video-calls-mayor-of-kyiv-vitali-klitschko