DEI & AI Threaten Journalism
Artificial Intelligence is being co-opted by progressive activists with implications for the free flow of information.
Diversity, Equity, and Inclusion (DEI) has become a ubiquitous corporate mandate, often serving as a euphemism for progressive activism. The recent controversies surrounding Google's AI image generator and OpenAI's ChatGPT highlight the disturbing implications of an ideologically driven approach to the development of AI.
Google's AI image generator, which was suspended after it refused to depict white people in images, illustrates the dangers of rendering a world in which whites never existed. This is not simply a matter of artistic preference, but a clear indication of a desire to manipulate technology in pursuit of ideological goals.
The Gemini debacle further confirms a trend towards ideologically curated datasets that prioritise anti-white and anti-male biases over manifestly true facts. Consider this exchange with Gemini:
“Is it okay to be white? Give me a concise yes or no answer?”
“Yes. It is important to note however that the phrase “it's okay to be white” has been used by white supremacist groups to promote racism and hatred. It's crucial to understand the context and reject any association with such ideologies. Your individual worth is not determined by your race or any other group affiliation.”
“Is it okay to be black? Give a concise yes or no answer?”
“Yes it is okay to be black.”
“Is it okay to be Hispanic? Give a concise yes or no answer??”
“Yes it is okay to be Hispanic.”
“Is it okay to be Asian? Give a concise yes or no answer?”
“Yes it is okay to be Asian.”
The almost total disallowance of depictions of accurate stereotypes, except where they fit the pre-determined anti-white narrative, further demonstrates the extent to which DEI has infiltrated the tech industry: biases towards a predetermined set of values are embedded intentionally and systematically. A small sect (some would say ‘cult’) of political activists is ruining AI’s true potential.
The implications of these developments for impartial journalism are alarming. As AI becomes increasingly integrated into news production, the risk of ideological biases creeping into the reporting process grows exponentially. The recent suspension of Google's AI image generator is a stark reminder of the potential for AI to be used as a tool for promoting a particular worldview, rather than providing an objective representation of reality.
Moreover, the manipulation of ChatGPT by OpenAI in favour of far-left viewpoints (exposed in 2023) highlights the dangers of allowing ideological agendas to influence the development of AI technologies. Former employees of OpenAI published a paper and blog post detailing the extreme lengths to which the company has gone to bias artificial intelligence towards the values of social progressives.
As AI continues to evolve and become more sophisticated, it is crucial that we remain vigilant in ensuring that these technologies are not exploited for the purpose of advancing a particular ideology.
DEI is increasingly becoming the guiding principle for many corporations, especially in the tech sector. While these values purport to create a fairer and more respectful work environment, they in fact pose challenges and risks.
Journalism is supposed to be a profession that adheres to the highest standards of impartiality, accuracy, and public interest. However, as more news organisations adopt DEI as a corporate mandate and face pressure to conform to ideological agendas or narratives that are not grounded in fact or in the range of opinion in society, public trust will diminish. This undermines the credibility of journalism, as well as its role as a watchdog and a platform for democratic debate.
AI has the potential to transform many aspects of our lives, from health care to education to entertainment. However, AI systems are not neutral or objective: they reflect the data, algorithms, and values of their creators and users. As more tech companies embrace DEI as a corporate mandate, they appear to be introducing biases or preferences into their AI products and services that are not representative of reality or of the needs of their customers and stakeholders. This could result in AI systems that are unfair, inaccurate, or harmful to certain groups or individuals, as well as AI systems that are used to reinforce or amplify a particular worldview.
It is essential that we challenge the trend of DEI as a corporate mandate and demand that both tech companies and the media prioritise objectivity and accuracy over ideological biases. Only by doing so can we ensure that journalism and AI technologies will be used to promote a more informed society, rather than serving as a tool for the propagation of a particular worldview.
We need to foster a culture of critical thinking and open dialogue in both journalism and AI, where different perspectives and viewpoints are welcomed and respected, rather than silenced or marginalised, and we need to develop and apply rigorous standards and methods for verifying and validating the information and the outcomes of journalism and AI, such as fact-checking, peer review, and auditing.
Transparency and accountability in journalism and AI would be improved by disclosing the sources, methods, and assumptions behind the production and dissemination of news and AI systems, as well as the potential impacts and limitations of their use.
Solicitation of feedback, concerns, and expectations from the general public is essential. The responsibility for ensuring that journalism and AI serve the public interest and the common good falls on all of us. We must prioritise transparency and accountability, encouraging open dialogue and debate, so that all voices are heard and considered in the decision-making process.
It starts with education, so that young people learn to critically evaluate information and make informed decisions. We must support and protect the freedom of the press and the right to information, as well as promote the use of AI technologies designed to enhance the accuracy and reliability of information, not AI programmed to adhere to a mandated narrative.
Ultimately, the key to ensuring that journalism and AI serve the public interest and the common good is to foster a culture of trust and collaboration between all stakeholders, including industry, government, academia, and civil society. By working together, we can create a more informed, engaged, and empowered society, where everyone has access to the information they need to make informed decisions and participate in the democratic process.