Deepfakes in Business: How Can You Protect Your Rep?
Having a response plan is key
on June 8, 2021
Updated on April 21, 2022
If any of the 1.4 million fans of @deeptomcruise on TikTok don’t read the fine print, they may walk away from a video of Tom Cruise rhapsodizing over a strawberry Blow Pop and think it’s legit. (It’s not.) The popular account exists solely to share deepfakes of the actor—fake videos named for the deep-learning artificial intelligence that helps users create them.
While @deeptomcruise may be silly fun, the FBI predicts that this same kind of AI will increasingly be leveraged to cause harm.
“There are fake videos that do funny things, like the old JibJab ones,” says tech lawyer Jennifer Beckage with Beckage Law Firm in Buffalo. “But we’ve also got people using this kind of artificial intelligence to put people’s faces on pornographic images, or to take legitimate videos but tweak the messaging slightly to cause an uproar or political upheaval, or even to impersonate the voice of a CEO. Obviously, in the wrong hands, this AI can cause significant and serious threats to the public, to individuals, to overall safety, not to mention ignite political tensions.”
And, she notes, a deepfake can make for easy DIY.
“There are a few ways that you can create one,” she says. “Basically, you can take a target video, and essentially swap a photo in. It’s easy, and there are apps to help anyone do that. But higher caliber ones are created with an algorithm that will quickly help identify all of the characteristics of the underlying target, and the characteristics of the new face you want to impose, so that it can line up in such a way that it looks seamless.”
So how can you keep yourself or your business from becoming a victim? Beckage says it starts with education.
“The first step is educating the board and executive teams that these things can be out there, and be used to cause harm or embarrassment,” she says. “It’s not unusual for an executive to have something like this happen to try to smear or tarnish their reputation.”
She also notes there are tools available to help monitor for deepfakes—Microsoft Video Authenticator, for example, which analyzes photos and videos and gives users a confidence score regarding the authenticity of the sample.
And then there’s common sense. “We all should be looking more critically at certain things in certain circumstances,” she says. “For example, if an employee receives a video from the president of their company directing them to wire money, ask, ‘Would the president usually send me a video?’”
Beckage notes that the FBI, in its recent warning about the rise of deepfakes, offered clues to help consumers detect them. “The agency suggests looking between the eyes—does it seem like there’s too much space? Also, does there seem to be an issue with lip and mouth synchronization?” Further guidance includes looking for strange movement of the head and torso.
“But there’s nothing more important than having an incident-response plan,” Beckage says. “If you have a business continuity plan that walks an organization through a fire or a flood, you should have a plan in place that addresses the unique circumstances of a data-security incident. What we often see is that deepfakes are usually part of something else—they tend to arise in the context of a data breach.”
First, identifying risk factors will help shape your plan. Ask, ‘Who might launch a digital smear campaign?’ “Consider your competitors, or whether you have disgruntled employees or unhappy former partners,” she says. “There are also foreign state actors: Does your organization run that risk? Going through a regular risk assessment is critical.”
Next, outline roles. “You’ll want to organize a public-relations, technical and legal response,” Beckage says. “And you’re going to need a point person’s number on speed dial to launch the response plan. Having a trusted advisor on hand, coupled with your incident response plan or business-continuity plan, will help guide you through.”
While Beckage is not blind to the harm posed by this kind of artificial intelligence, she also sees opportunity. “The concept of impersonation is not new; there’s just a new technology to do it,” she says. “But it doesn’t mean the technology is bad across the board. This AI actually has a lot of potential for good, and I think those should be things we’re talking about as well.”
If you have questions or concerns about technology as it relates to your business, an Upstate New York tech lawyer can help. For more information on this area of law, see our overview of business and corporate law.