DEEPFAKES

WHY IN NEWS? 

  • Deepfakes have emerged as a new tool to spread computational propaganda and disinformation at scale and with speed. 
  • Deepfakes are digital media (video, audio, and images) manipulated or synthesised using Artificial Intelligence; such AI-generated content is also referred to as synthetic media (a minimal sketch of the typical face-swap architecture follows below). 
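
Under the hood, a common face-swap deepfake pipeline trains one shared encoder together with one decoder per identity; at generation time, a face of person A is encoded and then decoded with person B's decoder. The sketch below is a minimal, illustrative PyTorch version of that idea; the layer sizes, the 64x64 input resolution, and the omission of face alignment, masking, and adversarial losses are simplifying assumptions, not a description of any specific tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder
# commonly used for face-swap deepfakes (illustrative only).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# One shared encoder learns identity-agnostic face structure;
# one decoder per person learns to render that person's face.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (not shown) reconstructs person A with decoder_a and person B with
# decoder_b. At "swap" time, a face of A is encoded and decoded with decoder_b,
# producing B's likeness with A's pose and expression.
face_of_a = torch.rand(1, 3, 64, 64)        # placeholder input frame
swapped = decoder_b(encoder(face_of_a))     # deepfake output, shape (1, 3, 64, 64)
print(swapped.shape)
```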

POSITIVE USE CASES OF DEEPFAKE 

Accessibility 

  • AI-generated synthetic media can help make accessibility tools smarter, more affordable, and more personalisable, helping people augment their agency and gain independence. 
  • Microsoft’s Seeing.ai and Google’s Lookout leverage AI recognition and a synthetic voice to narrate objects, people, and the world around the user. AI-generated synthetic media can similarly power personalised assistive navigation apps for pedestrian travel. 
  • Technology companies are developing AI-generated synthetic media applications for people living with ALS (Lou Gehrig’s disease). 
  • Synthetic voice is essential to enable such patients to remain independent. Deepfake voice can also help people who have had speech impairments since birth. 

Education 

  • AI-Generated synthetic media can bring historical figures back to life for a more engaging and interactive classroom, with greater impact and engagement as a learning tool. For example, JFK’s never-delivered speech resolving to end the Cold War was recreated with a synthetic voice in his own voice and speaking style; such recreations can get students to learn about an issue in a creative way. 
  • Synthetic human anatomy, sophisticated industrial machinery, and complex industrial projects can be modeled and simulated in a mixed reality world to teach students. 

Arts 

  • AI-Generated synthetic media can bring unprecedented opportunities to the entertainment business, which currently uses high-end CGI, VFX, and SFX technologies to create artificial but believable worlds for compelling storytelling. 
  • Samsung’s AI lab in Moscow brought the Mona Lisa to life using deepfake technology. 
  • In the video gaming industry, AI-generated graphics and imagery can accelerate the speed of game creation. Nvidia demoed a hybrid gaming environment created with deepfake techniques and is working on bringing it to market. 

Autonomy & Expression 

  • Synthetic media can help human rights activists and journalists remain anonymous under dictatorial and oppressive regimes; deepfake techniques can anonymise voices and faces to protect their privacy. 
  • Deep Empathy, a UNICEF and MIT project, utilises deep learning to learn the characteristics of Syrian neighbourhoods affected by conflict. It then simulates how cities around the world would look amid a similar conflict. 
  • The Deep Empathy project created synthetic war-torn images of Boston, London, and other key cities around the world to help increase empathy for victims of a disaster region. 

THE OTHER SIDE OF DEEPFAKE 

Such technologies can give people a voice, purpose, and the ability to make an impact at scale and with speed. But as with any innovative new technology, they can be weaponised to inflict harm. 

  • Overriding Consent: Deepfake technologies make it possible to fabricate media (face swaps, lip-syncing, full-body puppeteering), mostly without consent, threatening psychological well-being, security, political stability, and business continuity. 
  • Damaging Reputations: Deepfakes can depict a person indulging in antisocial behaviour and saying vile things. These can have severe implications for their reputation, sabotaging their professional and personal life. Even if the victim can debunk the fake via an alibi or otherwise, the correction may come too late to remedy the initial harm. 
  • Targeting Women: The very first malicious use of deepfakes was seen in pornography, inflicting emotional and reputational harm and, in some cases, inciting violence against the individual. 
  • Exploitation: Malicious actors can take advantage of unwitting individuals and defraud them for financial gain using audio and video deepfakes. Deepfakes can be deployed to extract money, confidential information, or favours from individuals. 
  • Social Harm: Deepfakes can cause short- and long-term social harm and accelerate the already declining trust in news media. Such erosion can contribute to a culture of factual relativism. 
  • Creation of Echo Chambers in Social Media: Falsehood is profitable and spreads further and faster than truth on social platforms. Combined with distrust, existing biases, and political disagreement, this can create echo chambers and filter bubbles, sowing discord in society. 
  • Undermining Democracy: False information about institutions, public policy, and politicians powered by deepfakes can be exploited to spin the narrative and manipulate belief. This can alter democratic discourse and undermine trust in institutions. 
  • Misuse as a Tool of Authoritarianism: Deepfakes can become a very effective tool to sow the seeds of polarisation, amplify division in society, and suppress dissent. 
  • Liar’s Dividend: An undesirable truth can be dismissed as a deepfake or fake news. This also helps public figures hide their immoral acts behind the veil of deepfakes and fake news, dismissing their actual harmful actions as false. 

SOLUTIONS FOR DEALING WITH DEEPFAKES 

To defend the truth and secure freedom of expression, we need a multi-stakeholder and multi-modal approach. 

  • Regulation & Collaboration with Civil Society- Meaningful regulations with a collaborative discussion with the technology industry, civil society, and policymakers can facilitate disincentivising the creation and distribution of malicious deepfakes. 
  • New Technologies-There is also need easy-to-use and accessible technology solutions to detect deepfakes, authenticate media, and amplify authoritative sources. 
  • Media literacy -for consumers and journalists is the most effective tool to combat disinformation and deepfakes. As consumers of media, we must have the ability to decipher, understand, translate, and use the information we encounter 
  • Even a short intervention with media understanding, learning the motivations and context, can lessen the damage. Improving media literacy is a precursor to addressing the challenges presented by deepfakes. 
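
As an illustration of the detection tooling mentioned above, one common research approach is to fine-tune a standard image classifier on labelled real and fake face crops. The Python sketch below is a minimal, assumed setup using PyTorch/torchvision with a hypothetical data/ folder containing real/ and fake/ subfolders; production detectors are considerably more sophisticated (temporal cues, artefact analysis, provenance signals).

```python
# Illustrative sketch: fine-tune a standard image classifier to label face
# crops as "real" or "fake". Dataset path and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Pretrained backbone with a new 2-class head (real vs. fake).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: data/real/*.jpg and data/fake/*.jpg
dataset = ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One training pass over the labelled crops (real detectors train far longer
# and validate against held-out manipulation methods).
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```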

CONCLUSION 

  • Collaborative actions and collective techniques across legislative regulation, platform policies, technology intervention, and media literacy can provide effective and ethical countermeasures against malicious deepfakes. To counter this menace, we must all take responsibility for being critical consumers of media on the Internet, think and pause before we share on social media, and be part of the solution to this infodemic. 
