Caught off Guard: Steve's Story

Steve was at his desk when he received a frantic video call from his manager, Bela. She looked stressed, her voice hurried. “I need you to send the confidential client report to this new email address right away!” she insisted. Seeing her familiar face and hearing her distinct voice, he didn’t hesitate; he sent the confidential report to the new address.
Hours later, Bela walked into his office and asked about the report. Confused, Steve mentioned the video call. Bela’s expression turned to shock — she hadn’t called him. The person he saw on the video wasn’t Bela. It was a deepfake, created by a cyber-criminal to trick him.
Steve couldn’t believe how real the fake call seemed. The face, the voice, everything matched his boss perfectly. He had fallen victim to a growing cyber threat where criminals use Artificial Intelligence (AI) to create highly convincing fakes.
AI can create images, audio, or videos that look real. These capabilities have many legitimate uses. For instance, marketing companies use this technology to create images for ad campaigns, movie companies use it to de-age certain actors, and teachers use it to create dynamic video lessons for their students.
A deepfake is when AI is used to create fake images, audio, or videos for the purpose of deceiving others. The name “deepfake” comes from a combination of “deep learning” (a type of AI) and “fake.”
Often the most damaging deepfakes are those in which cyber criminals create fake images, audio, or video of people you know, showing them doing things they never actually did. For example, cyber attackers may create fake pictures of famous celebrities or politicians committing a crime and spread them as fake news. Or they may clone someone’s voice and use it in a call to deceive a victim’s family or colleagues. What makes deepfakes especially dangerous is how easily cyber criminals can replicate anyone, make them appear to do anything, and make it look real.
Do not try to detect deepfakes by looking for technical mistakes. Both AI and the cyber attackers who use it have become very sophisticated. Instead, focus on context. Does the image, audio, or video make sense?
Guest Editor Dhruti Mehta is an Information Security Analyst at Physicians Health Plan of Northern Indiana and President of WiCyS Northern Indiana. She is passionate about building a diverse cybersecurity workforce and bridging educational and skill gaps in the field.
Reprinted with permission. The views, information, or opinions expressed in this article are solely those of the author and do not necessarily represent the views of Citizens State Bank and its affiliates, and Citizens State Bank is not responsible for and does not verify the accuracy of any information contained in this article or items hyperlinked within. This article is for informational purposes only and is in no way intended to provide legal advice.