Deepfake technology allows anyone to generate fake images and videos of a person. Though the technology is still in its infancy, many tech experts warn that future deepfake videos may be impossible to distinguish from real ones. Recognizing the privacy threat posed by deepfakes, U.S. politicians have proposed laws to regulate the technology and punish those who misuse it.
Last month, a bipartisan group of senators introduced the “Deepfake Report Act” sponsored by Senate Artificial Intelligence Caucus founders Martin Heinrich and Rob Portman. “As AI rapidly becomes an intrinsic part of our economy and society, AI-based threats, such as deepfakes, have become an increasing threat to our democracy… Addressing the challenges posed by deepfakes will require policymakers to grapple with important questions related to civil liberties and privacy,” Portman said in a statement (Meri Talk).
The Act directs the Department of Homeland Security to carry out a yearly assessment of deepfakes and related content. The department is also required to study the technologies behind deepfakes and suggest regulations to control their use. The controversy surrounding deepfakes peaked after a recent fake video showed U.S. House of Representatives Speaker Nancy Pelosi appearing drunk. Facebook flagged the video as fake but did not take it down; it went viral, and many people initially believed it was authentic. YouTube pulled the video down after it came to the company's attention. The proposed legislation has garnered support from industry experts.
“The Deepfakes Report Act of 2019 is an important proactive step. It calls on several agencies with relevant expertise to examine the state of technology, including both the benefits and potential threats presented by deepfakes, and assess countermeasures that can be deployed. Although we have concerns about the broad definition of ‘deepfakes’ outlined in the bill, we hope examination of this technology will help policymakers take an evidenced-based, measured approach,” Elizabeth Banker, Vice President of the Internet Association, a U.S.-based internet lobbying group, said in a statement (Internet Association).
In California, lawmaker Marc Berman introduced Assembly Bill 730, which would give courts the right to order people who knowingly distribute deceptive media of a political candidate to pay damages. The Senate Elections and Constitutional Amendments Committee cleared the bill at its first legislative hearing. Candidates would have the right to sue organizations or individuals that circulate deepfake videos of them around Election Day without disclosing that the content is fake.
“As more and more bad actors try to influence our elections with misinformation campaigns that sow confusion and doubt throughout the electorate, I think we can all agree with the premise that voters have a right to know when video, audio, and images that they are being shown have been manipulated,” Berman said in a statement (Courthouse News).
Spotting a deepfake
Though deepfake videos and images look real at first glance, it is possible to identify such content. First, check whether there is blurring around the face but nowhere else in the video. This usually occurs because current AI technology cannot yet mimic faces perfectly. A blurred face in an otherwise sharp video is a clear signal that what you are watching is fake. There might also be a change in skin tone around the edge of the face.
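The "blurry face, sharp background" cue can be quantified. One common measure of local sharpness is the variance of the Laplacian: sharp regions have strong second derivatives, blurred ones do not. The sketch below is a hypothetical pure-Python illustration (real detectors would use a library such as OpenCV and a face detector to locate the patches); the function names and the 0.5 ratio threshold are assumptions, not part of any published detector.

```python
def laplacian_variance(patch):
    """Variance of the 4-neighbour Laplacian over a 2D grayscale patch
    (a list of rows of pixel intensities). Higher means sharper."""
    h, w = len(patch), len(patch[0])
    values = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: sum of 4 neighbours minus 4x the centre.
            lap = (patch[y - 1][x] + patch[y + 1][x] +
                   patch[y][x - 1] + patch[y][x + 1] - 4 * patch[y][x])
            values.append(lap)
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)


def face_looks_blurrier(face_patch, background_patch, ratio=0.5):
    """Flag a frame where the face region is markedly blurrier than the
    background -- the pattern described above. The ratio is a guess."""
    return laplacian_variance(face_patch) < ratio * laplacian_variance(background_patch)
```

For example, a high-contrast checkerboard patch scores a large Laplacian variance, while a flat gray patch scores zero, so `face_looks_blurrier(flat_patch, checkerboard_patch)` returns `True`.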
Another thing to watch out for is blinking. On average, human beings blink once every 2 to 10 seconds, with a single blink taking between one-tenth and four-tenths of a second. Current deepfake technologies have trouble depicting blinking realistically.
“When a deepfake algorithm is trained on face images of a person, it’s dependent on the photos that are available on the Internet that can be used as training data. Even for people who are photographed often, few images are available online showing their eyes closed. Not only are photos like that rare — because people’s eyes are open most of the time — but photographers don’t usually publish images where the main subject’s eyes are shut. Without training images of people blinking, deepfake algorithms are less likely to create faces that blink normally,” according to Fast Company.
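The blink-rate range above lends itself to a simple plausibility check. The sketch below is a hypothetical illustration: given the timestamps at which blinks were detected in a clip, it tests whether the average gap between blinks falls in the typical human range of one blink every 2 to 10 seconds. Detecting the blinks themselves would require a facial-landmark model (e.g. eye-aspect-ratio tracking), which is beyond this sketch; the function name and parameters are assumptions.

```python
def blink_rate_plausible(blink_times, clip_seconds, min_gap=2.0, max_gap=10.0):
    """Return True when the average interval between detected blinks
    matches the typical human range of one blink every 2-10 seconds.

    blink_times: timestamps (seconds) of detected blinks in the clip.
    clip_seconds: total length of the clip in seconds.
    """
    if len(blink_times) < 2:
        # A long clip with almost no blinking is itself suspicious.
        return False
    average_gap = clip_seconds / len(blink_times)
    return min_gap <= average_gap <= max_gap
```

For a 30-second clip, six evenly spaced blinks (one every 5 seconds) pass the check, while a single blink, or a blink every second, would be flagged as implausible.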
Watch out for variation in sharpness or lighting between sections of the video, as it can indicate tampering. Some deepfake videos also have box-like shapes or cropped effects around the eyes, neck, and mouth. If the person in the video moves unnaturally, that is another indication the video might be fake.