What Are Deepfakes?
A deepfake is synthetic audio or video created using deep learning, a type of machine learning loosely modeled on how the human brain processes information. While deepfake technology can be used for entertainment and creative purposes, it also poses significant risks because it can be used to spread misinformation and commit fraud.
Deepfakes can be exploited by cybercriminals in several ways:
- Social Engineering Attacks: Cybercriminals use deepfakes to impersonate trusted individuals, such as employees or family members, and deceive victims into sharing personal information or transferring money.
- Blackmail and Extortion: Exploitative deepfake content can be created to blackmail or extort money from individuals or organizations, threatening to release the fake material unless demands are met.
- Misinformation: Deepfakes can present realistic but false content, spreading misleading information or propaganda to manipulate public opinion.
Real-World Examples of the Threat
Recently, a deepfake campaign impersonated U.S. Secretary of State Marco Rubio, using AI to mimic his voice and writing style in text messages and encrypted Signal communications sent to politicians. The attacker’s goal was to gain access to sensitive information and accounts. This incident highlights how deepfakes are increasingly used to impersonate political and other high-profile figures, often to spread misinformation and distort political discourse. Similar attacks have targeted other government leaders, such as a deepfake video of Ukrainian President Volodymyr Zelensky falsely urging soldiers to surrender.
Deepfakes are also emerging as a tool for job-related scams. Cybercriminals have been caught creating entirely fake job applicants, complete with AI-generated resumes, profile photos, and deepfake interview videos. The goal of these attacks is to infiltrate companies and gain access to data or internal systems. The scams are especially common in remote hiring, where identity verification is more difficult. Experts warn that AI-powered fake candidates are growing rapidly, with some projections estimating that a quarter of job applications could be fraudulent by 2028.
How to Spot Deepfakes
To identify deepfakes, watch for audio synchronization issues: check that the audio matches the speaker's lip movements. Also pay attention to facial features, noting whether the speaker shows unnatural blinking, flickering around the eyes, or inconsistent lighting and shadows. In hiring situations, consider real-time verification steps, such as asking candidates to perform spontaneous actions on video, to test authenticity.
Using Artificial Intelligence (AI) to Combat Deepfakes
AI technologies are being deployed across many platforms to counter deepfakes. Microsoft has developed a tool that assesses photos and videos and reports a confidence score that they have been manipulated, while Intel’s FakeCatcher analyzes subtle signals in video pixels, such as blood-flow patterns, to detect manipulation.
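For readers curious what automated screening of this kind can look like in practice, here is a minimal sketch (not how Microsoft's or Intel's tools actually work) that samples frames from a video with OpenCV and scores each frame with a pretrained image classifier. The model id, the "fake" label name, and the file name are placeholders; a real deployment would need a vetted deepfake-detection model and far more rigorous evaluation.

```python
# Rough sketch: flag suspicious frames in a video by sampling frames with
# OpenCV and scoring each one with a pretrained image-classification model.
# The model id below is a placeholder; label names vary by model.
import cv2
from PIL import Image
from transformers import pipeline

detector = pipeline("image-classification", model="your-org/deepfake-detector")  # placeholder

def score_video(path: str, every_n_frames: int = 30) -> float:
    """Return the fraction of sampled frames the classifier labels as fake."""
    capture = cv2.VideoCapture(path)
    sampled, flagged, frame_index = 0, 0, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % every_n_frames == 0:
            # OpenCV returns BGR arrays; convert to RGB before classifying.
            image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            top = detector(image)[0]  # highest-confidence prediction for this frame
            sampled += 1
            if top["label"].lower() == "fake":  # label name depends on the model
                flagged += 1
        frame_index += 1
    capture.release()
    return flagged / sampled if sampled else 0.0

print(f"Suspicious frames: {score_video('interview_clip.mp4'):.0%}")
```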
For further learning and practice, visit the Kellogg School’s Deepfake Detection exercise. If you encounter deepfake content involving you or others, report it to the hosting platform to help address the issue.
Don’t forget to check out UCSB’s lineup of UCCAM events this month! Students, staff, faculty, family, and friends are all encouraged to join and learn. The UC Cyber Champions group also has a full list of systemwide events occurring throughout October. We appreciate your engagement and hope you stay cyber safe!