
Will judges and juries be able to identify deepfake evidence?

Jul 3, 2020 | Criminal Defense

A deepfake video uses artificial intelligence to alter footage so that a person appears to be someone else, to say something they never said, or to do something they never did. Done with care, a deepfake can show almost anything its creator wants, and the illusion can be virtually impossible to detect with the naked eye.

Even a deepfake made relatively poorly with commercially available software can be convincing. Such videos have been used against politicians and celebrities to make them appear to say unpopular things, or have simply been slowed down to make the speaker seem unsophisticated.

According to a recent paper, deepfakes are especially convincing when a narrative of distrust is already in place: the audience is primed to believe what it already suspects or fears.

Courts are beginning to grapple with faked video evidence. According to a review by the ABA Journal, one California appellate court threw out a MySpace image because the prosecution had not brought in an expert or eyewitness to authenticate it. Other courts, however, have expressly declined to require authentication before a video can be admitted. That puts the defense in the position of having to de-authenticate questionable videos.

When that de-authentication process occurs, will juries believe it? Or will they be more inclined to believe what their eyes show them? Conversely, will juries become so wary of deepfakes that they don’t believe video evidence at all?

The courts have proven to be resilient to technology-driven forgeries in the past.

“As long as there’s been evidence in court, there’s always been fakes, and the courts have come up with rules to deal with those as they come up, and are aware that there’s always the possibility that somebody is trying to hoodwink them,” says Riana Pfefferkorn, a professor at Stanford Law School.

Yet there might not be enough experts in debunking deepfakes to go around, especially if the technology becomes even more widespread. The effort to identify and unmask deepfakes could easily drive up the cost of defense.

A new risk on the horizon

Viewed one way, deepfakes are simply another risk. Most types of evidence carry some risk: scientific tests can be performed incorrectly or even faked, and eyewitness testimony is well known to be unreliable. Human memory is imperfect even when the witness feels relatively certain the memory is accurate.

Judges and juries will need to be educated about this new risk so they can take it into account. Defense attorneys, meanwhile, will need to be far more skeptical of video evidence that has not been authenticated.
