The origin of “deepfakes”
Deepfakes ("deep" from deep-learning technology and "fake," implying an untrue nature) are synthetic images and videos generated by artificial intelligence. The most common deepfakes are visuals that superimpose the face of an individual, most often without their consent, onto a different body. The term was popularized on Reddit after a user created a subreddit dedicated to fake pornographic videos of celebrities made with face-swapping technology.
No longer rocket science
With the advancement of artificial intelligence and deep-learning technology, realistic computer-generated videos are no longer a laborious pursuit reserved for big-budget Hollywood productions or cutting-edge researchers. To generate a deepfake, all you need is a computer with a decent consumer-grade graphics card, a large library of images of the target's face (called a faceset), and a few hours. As of 2023, there are more than ten open-source software packages for creating deepfakes.
However, despite the rapid improvement of deepfake technology, there are currently no adequate legal mechanisms in place to safeguard individuals who become targets of such works. Below are three potential claims a victim could file, and why each falls short of protecting against the circulation of deepfakes:
1. Copyright Infringement and Fair Use
17 U.S.C. § 106 grants the owner of a copyright the exclusive rights to "reproduce the copyrighted work in copies," "prepare derivative works based upon the copyrighted work," and "distribute copies or phonorecords of the copyrighted work to the public." Downloading someone's pictures off of Facebook or Instagram is essentially making a copy, and therefore violates the victim's exclusive rights, so victims could potentially file a copyright infringement claim. However, such claims will likely fail because the producer of the deepfake can argue that they are making fair use of the work.
A producer may also label personal deepfakes as parodies, since parodies are protected under the fair use doctrine. A victim's claim is therefore likely to be unsuccessful: the copyright owner's exclusive rights are narrowed when an individual makes "fair use" of the work, permitting them to reproduce the image. The four factors spelled out in Section 107 of the Copyright Act of 1976 that determine fair use are as follows:
1. The purpose and character of the use, including whether it’s of a commercial nature or for nonprofit educational purposes
2. The nature of the copyrighted work
3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole, and
4. The effect of the use upon the potential market for or value of the copyrighted work.
Publishing personal deepfakes makes fair use of the victim's copyrighted works because it is transformative. In Campbell v. Acuff-Rose Music, Inc., the court asked whether the new work merely "[supersedes] the objects" of the original creation or instead adds something new, with a further purpose or different character. Transforming the original work by adding a further purpose, or in this case a different character, furthers the goal of copyright: to promote science and the arts. "The more transformative the new work, the less will be the significance of other factors, like commercialism, that may weigh against a finding of fair use."
The victim might still point to the fourth factor, the effect of the use upon the potential market for or value of the copyrighted work, arguing that the deepfake erodes their exclusive ability to exploit the work commercially. However, since deepfakes are created from a faceset, a large library of images of the victim's face, the victim will be unable to pinpoint which specific photos were used. This further complicates the question of whether the victim can prevail in a copyright infringement lawsuit.
2. Intentional Infliction of Emotional Distress
Given the complications of a copyright infringement claim, the victim may turn to the tort of Intentional Infliction of Emotional Distress (IIED). For a claim to prevail, the plaintiff must show that (1) the defendant acted intentionally or recklessly, (2) the defendant's conduct was extreme and outrageous, and (3) the conduct caused (4) severe emotional distress. According to the Restatement, "emotional distress passes under various names, such as mental suffering, mental anguish, mental or nervous shock, or the like. It includes all highly unpleasant mental reactions, such as fright, horror, grief, shame, humiliation, embarrassment, anger, chagrin, disappointment, worry, and nausea." However, the victim must prove that the distress is severe, because it is only at that magnitude that liability arises. This requirement will weed out victims who cannot show that their embarrassment rises to the level of severe emotional distress, as well as those depicted in less graphic deepfakes, as in cases like Nichols v. Century West.
Additionally, the mens rea requirement of the first element, which requires the victim to prove that the producer intended to cause distress or knew distress was substantially certain to result from their conduct, introduces further complications. Most deepfake producers do not create and share videos with the expectation that the victim will watch the video or learn of its existence; hence, they are likely unaware of the distress the victim will experience. The mens rea requirement therefore weeds out another group of victims: those who accidentally stumble upon a deepfake of themselves. At this point, an IIED claim seems to apply only to victims to whom the producer personally sent the deepfake, or whom the producer personally made aware of its circulation.
3. False Light
The victim may instead file a False Light tort claim, the most applicable of the four forms of invasion of privacy in the case of deepfakes. Under Restatement § 652E, producers of a deepfake will be liable when they "[give] publicity to a matter concerning another that places the other before the public in a false light . . . if (a) the false light in which the other was placed would be highly offensive to a reasonable person, and (b) the actor had knowledge of or acted in reckless disregard as to the falsity of the publicized matter and the false light in which the other would be placed." However, a false light claim requires publicity or public disclosure, meaning that the information must be communicated "to the public at large, or to so many persons that the matter must be regarded as substantially certain to become one of public knowledge." There is currently no standard for how many people must receive the material for it to constitute "public disclosure." Therefore, the success of a false light claim depends heavily on the specific facts of the case.
In conclusion, the rise of deepfakes, propelled by the rapid advancement of artificial intelligence and deep-learning technology, poses a formidable challenge to the legal landscape surrounding individual rights and protection. As society grapples with the implications of this digital threat, there is a pressing need for legislative and judicial adaptations to protect individuals ensnared in the intricate web of pixels and synthetic realities. The journey to safeguarding the rights of deepfake victims is ongoing, demanding a concerted effort from legal authorities, policymakers, and technology experts to navigate the uncharted territory of digital deception.