Deepfake: a dangerous form of misinformation
A deepfake is a video/image that has been edited using an algorithm to replace a person in the original video/image with someone else, in a way that makes the result look authentic.
o Deepfakes use a form of artificial intelligence called deep learning to fabricate images of events that never happened.
o Deep learning is a machine learning subset, using artificial neural networks inspired by the human brain to learn from large data sets.
• Deepfake imagery could be an imitation of a face, body, sound, speech, environment, or any other personal information manipulated to create an impersonation.
Deepfake is a term that combines “deep learning” and “fake”.
How Deepfakes Work
- Deepfakes employ a deep-learning network called a variational auto-encoder, a type of artificial neural network that is normally used for facial recognition.
- Auto-encoders detect facial features, suppressing visual noise and “non-face” elements in the process.
- The shared features learned by the auto-encoder enable a versatile “face swap” model: the same encoding of pose and expression can be decoded into a different person’s face.
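The face-swap routing described above can be sketched with one shared encoder and one decoder per person. The NumPy sketch below is purely illustrative: training is omitted, the weights are random, and all names (`W_enc`, `W_dec_a`, `face_swap`, the 8x8 image size) are assumptions, not taken from any real deepfake tool. It only shows the key idea: encode person A's image into shared features, then decode with person B's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 "face" patch and a small latent code.
IMG_DIM, LATENT_DIM = 64, 16

# One shared encoder plus one person-specific decoder each for A and B.
# A real system would train these on many images; random weights suffice
# to demonstrate the routing.
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, IMG_DIM))
W_dec_a = rng.normal(scale=0.1, size=(IMG_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.1, size=(IMG_DIM, LATENT_DIM))

def encode(image):
    """Map an image to the shared latent features (pose, expression, lighting)."""
    return np.tanh(W_enc @ image)

def decode(latent, W_dec):
    """Render latent features back to an image with a person-specific decoder."""
    return W_dec @ latent

def face_swap(image_of_a):
    """Encode A's image, decode with B's decoder: A's expression on B's face."""
    return decode(encode(image_of_a), W_dec_b)

face_a = rng.normal(size=IMG_DIM)
swapped = face_swap(face_a)
print(swapped.shape)  # (64,)
```

Because the encoder is shared, swapping only requires choosing which decoder to apply to the latent code.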
Deepfakes also use Generative Adversarial Networks (GANs), which consist of generators and discriminators.
♦ Generators take the initial data set to create new images.
♦ Then, the discriminator evaluates the content for realism, and its feedback drives further refinement by the generator.
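The generator/discriminator loop can be illustrated on one-dimensional data. The following is a toy GAN, a sketch of the adversarial idea rather than a practical image model: a linear generator learns to mimic samples from N(4, 1) while a logistic discriminator tries to tell real from fake, with the gradients written out by hand. All parameter names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lr, steps, batch = 0.05, 2000, 64

a, b = 1.0, 0.0   # generator g(z) = a*z + b, noise z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator d(x) = sigmoid(w*x + c)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

for _ in range(steps):
    real = rng.normal(4.0, 1.0, batch)   # "real" data samples
    z = rng.normal(size=batch)
    fake = a * z + b                     # generator's samples

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step (non-saturating loss): push d(fake) -> 1.
    d_fake = sigmoid(w * fake + c)
    grad_out = -(1 - d_fake) * w         # dLoss/d(fake sample)
    a -= lr * np.mean(grad_out * z)
    b -= lr * np.mean(grad_out)

# b drifts toward the real data mean as the two networks compete.
print(round(b, 2))
```

Each iteration alternates the two updates: the discriminator sharpens its real/fake boundary, then the generator shifts its output to fool the updated discriminator, which is exactly the refinement loop described above.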
Issues associated with Deepfake
• Misinformation and Disinformation: Deepfakes can be used to create fake videos of politicians or public figures, leading to misinformation and potentially manipulating public opinion.
• Privacy Concerns: Deepfakes can be used to create damaging content featuring individuals without their consent, leading to privacy violations and potential harm to reputations.
o Deepfakes are, thus, a breach of personal data and a violation of the right to privacy of an individual.
• Lack of Regulation: A major issue is the lack of a clear legal definition of deepfake technology and of the activities that constitute deepfake-related offences in India.
o Thus, it becomes difficult to prosecute individuals or organisations that engage in malicious or fraudulent activities using deepfakes.
• Challenges in Detection: Developing effective tools to detect deepfakes is an ongoing challenge, as the technology used to create them evolves.
Opportunities with Deepfake technology
• Entertainment: Voices and likenesses can be used to achieve desired creative effects.
• E-commerce: Retailers could let customers use their likenesses to virtually try on clothing.
• Communication: Speech synthesis and facial manipulation can make it appear that a person is
authentically speaking another language.
• Research and Simulation: It can aid in training professionals in various fields by providing realistic
scenarios for practice, such as medical training.
Legal provisions in India
- In India, there are no specific legal provisions against deepfake technology.
- However, some existing laws can be applied to deepfakes, viz.,
♦ Section 66E of the IT Act, 2000 punishes the capturing, publishing, or transmitting of a person’s images in mass media in a way that violates their privacy.
♦ Indian Copyright Act of 1957 provides for penalties for the infringement of copyright.
Global measures against Deepfake
- Bletchley Declaration: Twenty-eight countries, including the United States, China, Japan, and the United Kingdom, committed to jointly tackling the potential risks of AI.
- China: prohibits the production of deepfakes without user consent.
- Google has announced tools, e.g., watermarking, to identify synthetically generated content.
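The watermarking idea can be illustrated with the classic least-significant-bit (LSB) scheme. Note the hedge: this is a textbook toy, not Google's actual (far more robust) method. Identifying bits are hidden in the lowest bit of each pixel, changing pixel values by at most 1, and can later be read back to flag the content as synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(pixels, bits):
    """Write watermark bits into the least-significant bit of the first pixels."""
    out = pixels.copy()
    n = len(bits)
    out[:n] = (out[:n] & 0xFE) | bits   # clear the lowest bit, then set it
    return out

def extract(pixels, n):
    """Read the first n least-significant bits back out."""
    return pixels[:n] & 1

image = rng.integers(0, 256, size=100, dtype=np.uint8)      # flattened toy image
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)   # hypothetical ID bits

tagged = embed(image, mark)
print(extract(tagged, len(mark)))  # [1 0 1 1 0 0 1 0]
```

A single re-encode, resize, or crop destroys an LSB mark, which is why production watermarking systems embed the signal far more robustly.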
Way ahead on deepfakes
• Strengthening legal framework: Need to establish and update laws and regulations specifically addressing the
creation, distribution, and malicious use of deepfake and associated content.
• Promote Responsible AI Development: Need to encourage ethical practices in AI development, including the
responsible use of deep learning technologies.
o Asilomar AI Principles can act as a Guide to ensuring safe and beneficial AI development.
• Responsibility and accountability of social media platforms: Uniform standards are needed that all platforms can adhere to and that apply across borders.
o For example, YouTube has recently announced measures requiring creators to disclose whether the content is
created through AI tools.
• International Cooperation: Establish shared standards and protocols for combating use of deepfakes across borders.
• Invest in Research and Development: Allocate resources to support ongoing research into deep fake technologies, detection methods, and countermeasures.