Detecting doctored media has become tricky — and risky — business. Here’s how organizations can better protect themselves from fake video, audio, and other forms of content.

The idea that artificial intelligence (AI) can help create video, audio, and other media that can't be easily separated from "real" media is the stuff of dystopian science fiction and filmmakers' dreams. But that's what deepfakes are all about. Pundits and security analysts have spent hundreds of thousands of words worrying about the dangers deepfakes pose to democracy, but what about the dangers they pose to the enterprise?

“The concern that I would have for the enterprise is that the sophistication of existing deepfake technologies is certainly beyond most humans’ threshold for being tricked by fake imagery,” says Jennifer Fernick, chief researcher for the NCC Group.

Images and words that go beyond the human recognition threshold can be used for purposes as “prosaic” as very effective spear-phishing campaigns, she says. It’s also a growing problem because the deepfake technology is getting better while our ability to detect deepfakes is not.

“The current machine-based defenses don’t solve all of our problems,” she explains.

As an example of how difficult the deepfake problem is to solve, Fernick points to last year’s Kaggle Data Science Competition called the Deepfake Detection Challenge. With more than 2,200 teams participating and, according to Fernick, approximately 35,000 detection models submitted, the best model could detect a deepfake less than two-thirds of the time.

Criminal applications for this difficult-to-detect technology are becoming more varied.

“Now you’re getting a voicemail message that sounds just like your boss. She’s mad and she wants you to wire the money now,” says Tom Pendergast, chief learning officer at MediaPro. “The urgency in her voice — and you’re sure it’s her voice — overwhelms your caution, and you send the money. And now you’ve been duped.”

‘Don’t Always Believe What You See’
So with detection beyond the ability of humans and out of reach for most technologies, what can an organization do to be safe from deepfakes?

“Moving forward, the best way to defend against deepfakes is to hold those platforms who host and make deepfakes available to the public accountable and responsible for them,” says Joseph Carson, chief security scientist at Thycotic. “If a post has no trusted source or context provided, then the labeling should make clear to the viewer whether the content source has been verified, is still being analyzed, or whether the content has been significantly modified.”

Without clear notice of a media source, employees with proper training are critical cogs in the deepfake security machinery. Chris Hauk, consumer privacy champion at Pixel Privacy, says it begins with basic media literacy.

“Don’t always believe what you see. Videos from questionable sources are always to be taken with a grain of your favorite salt-free substitute,” he explains. “If a video or photo is not from an established media source, investigate it by consulting other sources.”

Hank Schless, senior manager, security solutions at Lookout, says employee training should be updated to take both the new realities of work and new deepfake threats into account.

“Audio and social media deepfakes start with social interaction, and you need to train your employees on how to identify these suspicious activities,” he says. “The best first step is to make sure your security training includes identifying modern tactics like deepfakes and mobile phishing – especially while people work remotely. Since we can’t walk down the hall to validate communication from a co-worker, encourage your employees to reach out over different channels.”

As an example, he suggests sending a message through a collaboration system to verify that an unusual phone call was legitimate.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …
