
Why Deepfakes with AI Are a Deadly Cybersecurity Threat

by CISOCONNECT Bureau

The impact of AI-generated deepfakes could stretch beyond propaganda as cybercriminals leverage deepfake-as-a-service. Read on to learn more…

The impact of fake video and audio could stretch beyond propaganda as cybercriminals leverage deepfake-as-a-service toolkits to wage disinformation wars on corporates and, worse, to power sophisticated phishing attacks. At the beginning of this year, Forrester predicted that deepfakes could end up costing businesses as much as $250 million.

Advent of Deepfakes
Deepfakes first emerged in 2017, created initially by an anonymous Reddit user who gave Internet users access to the Artificial Intelligence (AI)-powered tools they would need to make their own deepfakes. Since then, there have been a number of high-profile deepfake videos. Examples include movie director Jordan Peele's Barack Obama Public Service Announcement (PSA), intended as a warning about the dangers and convincing nature of deepfake videos, and the fake video of Facebook CEO Mark Zuckerberg telling CBS News "the truth of Facebook and who really owns the future", both of which showcase the technology's power.

The Real Threat
Deepfakes can cause significant problems for commercial organizations. There was an instance in which an employee of a UK-based energy company was tricked into believing he was talking to the CEO of their German parent company, who convinced the employee to transfer $243,000 to a Hungarian supplier. It turned out the employee was not speaking to the real CEO but to a scam artist impersonating the CEO using a voice-altering AI tool.

The German CEO voice scam appears to be the first of its kind to use AI, or at least the first that we have heard about in the public sphere. But AI is now being used for more sophisticated phishing attacks and to fool biometric scanners with things like fake fingerprints. These threats could come from individuals, cybercriminal gangs or state-sponsored hackers who want to create disruption in financial markets.

What is different about the potential impact of deepfakes is that their mere existence can disrupt enterprises: once organizations know deepfakes might be used to deceive them, they can no longer take the audio and video they rely on at face value. According to a report by Deeptrace Labs, there have been no confirmed occurrences of deepfakes used in disinformation campaigns or in ways that could affect enterprises, but that isn't to say there won't be in time.

Mitigation
While combating deepfake technology and protecting data against deepfake attacks is a challenging task, it is possible to keep your data secure. You need to look at two aspects: the human and the technological.

Deepfakes rely on the same principle as most hacking: human error, or more specifically, errors of judgment. The human aspect involves training employees to tell the difference between real and fake media while protecting their identities on the internet.

The technology aspect means equipping yourself with the best cybersecurity solutions. Although automated tools exist to spot deepfake media, the technology unfortunately does not yet scale. You need a solution that provides comprehensive data protection.

To secure your data and ensure business continuity, you need robust 256-bit AES object-level encryption, intrusion detection and compartmentalized access, discouraging deepfake criminals from gaining access to your data.
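As a rough illustration of the object-level encryption mentioned above, the sketch below encrypts each object (a file or record) independently with AES-256 in GCM mode. It uses the widely available Python `cryptography` package; the function names and the way the nonce is stored alongside the ciphertext are illustrative assumptions, not any specific vendor's implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_object(key: bytes, plaintext: bytes, associated_data: bytes = b"") -> bytes:
    """Encrypt a single object with AES-256-GCM (authenticated encryption)."""
    nonce = os.urandom(12)                       # unique nonce per object
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, associated_data)
    return nonce + ciphertext                    # store nonce with the ciphertext

def decrypt_object(key: bytes, blob: bytes, associated_data: bytes = b"") -> bytes:
    """Decrypt a blob produced by encrypt_object; raises if tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, associated_data)

key = AESGCM.generate_key(bit_length=256)        # a 256-bit key
blob = encrypt_object(key, b"quarterly payroll record")
assert decrypt_object(key, blob) == b"quarterly payroll record"
```

Because GCM is an authenticated mode, any tampering with the stored blob makes decryption fail outright, which pairs naturally with the intrusion detection and compartmentalized access the text recommends.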



