Deepfakes. Reality dissolving into pixels.
The marketing world is no stranger to controversy and concerns about misleading and deceptive advertising practices.
Beyond consumer outrage at brands jumping on social causes, misleading advertising, and the exploitation of personal data, concerns now exist about how new technologies are being used as marketing tools to influence buying behaviour.
Misgivings also exist about algorithmic bias in marketing tools and platforms, which can lead to discriminatory outcomes: biased algorithms might, for example, target ads or opportunities unfairly on the basis of race, gender, or other characteristics. This raises ethical questions and demands the responsible development and use of AI-powered marketing tools.
Now, the emergence of deepfakes and synthetic media poses new challenges for marketers and consumers alike. Malicious actors could use these technologies to create deceptive marketing materials or impersonate individuals, potentially causing harm and eroding trust. Addressing these challenges requires vigilance and the development of ethical frameworks for the responsible use of such technologies.
Remember that celebrity interview that never happened? Or the political speech filled with words the leader never uttered? Welcome to the unsettling world of deepfakes and synthetic media, where artificial intelligence crafts hyper-realistic fabrications that blur the lines between truth and fiction.
From playful filters on social media to malicious propaganda campaigns, these technologies are rapidly evolving, forcing us to re-evaluate our relationship with online content and grapple with the ethical minefield they present.
Controversies often act as catalysts for positive change, leading to stricter regulations, improved industry practices, and increased consumer awareness. The key for marketers is to strive for a more ethical and responsible marketing landscape.
Deepfakes are hyper-realistic video, image, or audio manipulations that use deep learning algorithms to fabricate content showing real people saying or doing things they never did. Synthetic media is a broader term covering all AI-generated media, including video, images, audio, and text; it is not limited to deepfakes and can involve creating entirely new content or manipulating existing content. Deepfakes themselves can serve many purposes, from entertainment and education to malicious intent.
Some examples of deepfakes include face swapping, replacing the face of one person with another in a video or image; lip syncing, making a person’s mouth movements match a different audio track; puppeteering, controlling the facial expressions and head movements of a person in a video; and voice cloning, generating a realistic voice that sounds like a specific person.
Researchers first used generative adversarial networks (GANs) in 2014 to create realistic-looking faces, marking the early stages of deepfake technology. In 2017, the term "deepfake" surfaced on Reddit, and tools such as "FakeApp" soon made creation far easier, sparking widespread interest and concern. From 2020 onwards, rapid advances in AI have made deepfakes increasingly sophisticated and accessible, raising both excitement and alarm.
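To make the mechanism concrete, the sketch below shows the adversarial training loop at the heart of a GAN: a generator learns to produce samples that a discriminator cannot distinguish from real data. It is a minimal, illustrative example on a toy one-dimensional distribution, assuming PyTorch is available; all network sizes and parameters here are arbitrary choices, and real deepfake systems scale the same idea up to images and audio.

```python
# Minimal GAN sketch (illustrative only): a generator learns to mimic a
# simple 1-D Gaussian distribution while a discriminator learns to tell
# real samples from generated ones. Deepfake systems use far larger
# networks and image/audio data, but the adversarial loop is the same idea.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake "data" samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples drawn from a normal distribution centred at 4.
    real = torch.randn(64, 1) + 4.0
    fake = G(torch.randn(64, 8))

    # Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# If training succeeded, generated samples cluster near the real mean (~4.0).
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```

The same tug-of-war, with convolutional networks in place of these tiny ones, is what lets a generator learn to produce faces or voices convincing enough to fool both the discriminator and, eventually, human viewers.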
These technologies do, however, offer positive opportunities, such as creating special effects, personalised experiences, and immersive content for the entertainment and creative industries. They can also simulate scenarios for practice and learning in education and training.
For marketing and advertising, the opportunities arise in personalised and engaging content creation, but the potential risks are obvious: misinformation and propaganda, the erosion of trust in media, identity theft and fraud, political attacks, blackmail, and social unrest.
Deepfakes raise questions about authenticity, consent, and the boundaries of reality, and balancing innovation with protection from harm is a key challenge. Techniques are being developed to identify deepfakes, but the evolving technology presents constant challenges; efforts are also underway to educate people about deepfakes and how to critically evaluate online content.
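At its simplest, deepfake detection is a binary classification task: train a model on content labelled real or manipulated, then score new content. The sketch below is a hedged illustration of that idea using a tiny convolutional classifier in PyTorch; the random tensors stand in for a labelled dataset, and production detectors are far more elaborate, often exploiting temporal inconsistencies and physiological cues such as blinking.

```python
# Illustrative deepfake-detection sketch: a tiny convolutional classifier
# that labels image frames as real (0) or manipulated (1). The random
# tensors below are placeholders for a labelled dataset.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes: real vs. fake
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 RGB frames of 64x64 pixels with real/fake labels.
frames = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    opt.step()

# Probability each frame is fake, according to the (toy) trained model.
probs = model(frames).softmax(dim=1)
print("P(fake) per frame:", probs[:, 1].detach())
```

The difficulty, as the text notes, is that detection is an arms race: each improvement in classifiers prompts generators that learn to evade them.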
While deepfakes often garner attention due to their potential for misuse, synthetic media encompasses a much broader spectrum. For example, AI-generated music compositions or personalised product descriptions fall under this umbrella.
We will see the generation of realistic videos featuring real or fictional people; more stylised images for advertisements; synthesised speech that imitates a specific voice, or entirely new, expressive voices for AI assistants and storytelling; and creative writing in distinctive styles, including generated copy and scripts.
Advancements in AI will lead to even more sophisticated and realistic creations, so ethical frameworks and regulations are needed to mitigate risks and ensure responsible use. Alongside this, public awareness and education are crucial for the critical evaluation of online content, and open discussion and collaboration are needed to harness the potential of synthetic media for good while addressing its challenges.
TechUK, a network of technology companies in the UK, has published a report that outlines the challenges and opportunities of synthetic media, and how its members are taking steps to tackle misinformation and fraud. Some of these steps include developing ethical standards, implementing verification and detection methods, and raising awareness and education among users and policymakers.
Synthetic media will, however, continue to transform business, especially in marketing, advertising, and e-commerce: creating personalised and interactive content, enhancing customer experience and engagement, generating diverse and inclusive representations, and reducing costs and environmental impact.
But we need to keep asking questions: Who decides what is funny, fair, and accountable? Whose images or voices are fair game to mimic without consent? How can we distinguish intentional disinformation or gaslighting from critique? How can we protect the rights and privacy of individuals and communities from malicious use of synthetic media?
These questions require an expansive and inclusive dialogue. The goal is to find a balance between fostering innovation and creativity and ensuring accountability and responsibility.