Deepfake Artificial Intelligence
Background History:
There have been many breakthroughs in artificial intelligence over the past few years. Although artificial intelligence is meant to benefit humanity, developments like deepfake AI demonstrate its potential for malicious use.
The term deepfake refers to forged, computer-generated video and audio that is difficult to distinguish from authentic, unaltered content. Deepfake AI is essentially the equivalent of Photoshop, but for video. Deepfakes have been used to reproduce unsettling audio of presidential voices and to completely remove objects that were once present in a video.
How It Works:
Deepfakes depend on generative adversarial networks (GANs), which consist of two competing artificial intelligence systems. One system (the generator) produces content, while the other (the discriminator) judges whether that content is real or counterfeit. Because the two adversaries train against each other, each progressively improves at its task, and the generator eventually learns to produce content that looks convincingly lifelike.
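As a rough illustration of this generator-versus-discriminator training loop, the sketch below uses PyTorch with tiny fully connected networks and random stand-in data. The network sizes, learning rates, and data here are illustrative assumptions only, not the components of an actual deepfake system.

# Minimal GAN training-loop sketch (assumed toy setup, not a real deepfake pipeline).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy sizes for illustration

# Generator: turns random noise into fake samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # "Real" data is just random vectors here; a deepfake system would
    # instead use frames or audio clips of the target person.
    real = torch.randn(32, data_dim)
    fake = generator(torch.randn(32, latent_dim))

    # Train the discriminator to tell real samples from fakes.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

As the loop repeats, the discriminator's feedback pushes the generator to produce output that is harder and harder to flag as counterfeit, which is the dynamic that makes deepfake content so convincing.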
Impacts & Future Growth:
Numerous concerns have been raised as deepfake artificial intelligence becomes more effective. Chief among them is that the technology's near-perfect video and audio reproduction could be used to manipulate political discourse and the news. Despite these negative connotations, deepfakes demonstrate the rapid advancement of artificial intelligence and point toward more powerful, yet potentially dangerous, tools in the future.
Advantages:
- Opens people's eyes to the possibility of counterfeit videos.
- Encourages people to be more critical of what they read and view.
Disadvantages:
- Enables the creation of fake, incriminating video evidence to support false accusations.
- Enables the production of fake news and propaganda.
- Can be used for blackmail and other iniquitous purposes.
Garrit Witters