Rising AI-Driven Hatred: 3 Factors Leaders Must Evaluate Before Embracing Emerging Technology

When the term “artificial intelligence” is mentioned, it’s common to be drawn toward imagining the intelligent machines often depicted in science fiction. These depictions may align with the fears and anxieties surrounding technology, reminiscent of the apocalyptic scenarios that have captivated human imagination ever since the creation of Dr. Frankenstein’s monster.

The concept of AI in popular culture often evokes images of advanced robots, superintelligent computers, or dystopian scenarios where machines dominate humanity. These portrayals have contributed to a perception of AI as something fantastical and potentially ominous.

However, the reality of AI integrated into businesses worldwide is far from fiction. These technologies have tangible effects on real people’s lives.

While AI has been a part of the business landscape for some time, the rise of generative AI products like ChatGPT, ChatSonic, Jasper AI, and others significantly increases accessibility for the average person. Consequently, there is significant public apprehension in America regarding the potential for these technologies to be misused. According to a recent ADL survey, 84% of Americans express concerns that generative AI will contribute to the proliferation of misinformation and hate.

Leaders who contemplate adopting such technology must confront challenging questions about its impact on the future, considering both positive and negative implications as we navigate this uncharted territory. Here are three crucial considerations I urge all leaders to contemplate when integrating generative AI tools into their organizations and workplaces.

Table of Contents

1. Prioritizing Trust and Safety in the Era of Generative AI

Prioritizing trust and safety should be of utmost importance. While social media platforms have been grappling with content moderation for years, the introduction of generative AI into industries like healthcare and finance poses new challenges. These industries may suddenly find themselves confronted with unfamiliar issues as they adopt these technologies. Consider a healthcare company that relies on an AI-powered chatbot to assist patients, but suddenly encounters instances where the chatbot becomes rude or even hateful. How would such a situation be handled?

Despite the immense power and potential of generative AI, it also provides an easy and fast avenue for bad actors to generate harmful content. While social media platforms have evolved a discipline of trust and safety to tackle the complexities of user-generated content, other industries have yet to do the same.

Therefore, companies must proactively engage experts in trust and safety to guide their implementation strategies. It is crucial to develop expertise in understanding the potential misuse of these tools and to devise preventive measures. Investing in dedicated staff members responsible for addressing abuses is essential to avoid being caught off guard when these tools are exploited by malicious actors.

2. Establishing Robust Guardrails and Demanding Transparency

In both work and educational environments, it is of utmost importance to have comprehensive safeguards in place within AI platforms to prevent the generation of hateful or harassing content.

While AI platforms offer immense utility, they are not infallible. A recent test conducted by ADL using the Expedia app, featuring the new ChatGPT functionality, demonstrated that within minutes, it was possible to create an itinerary of infamous anti-Jewish pogroms in Europe and compile a list of nearby art supply stores—implicitly promoting vandalism against these locations.

While some generative AIs have shown improvements in handling questions that may lead to antisemitic or other hateful responses, others have fallen short in their commitment to preventing the propagation of hate, harassment, conspiracy theories, and other harmful content.

Before embracing AI on a broad scale, leaders must pose critical inquiries: What measures are being taken to test and ensure these products are not susceptible to misuse? Which datasets are utilized to construct these models? Are the experiences of communities most targeted by online hate being incorporated into the development of these tools?

Without transparency from AI platforms, there can be no guarantee that these AI models do not facilitate the dissemination of bias or bigotry.

3. Protect Against the Emergence of Weapons

In spite of implementing comprehensive trust and safety measures, it remains crucial for leaders to advocate for the inclusion of protective measures against the malicious exploitation of AI technology.

Regrettably, despite their remarkable capabilities and possibilities, AI tools have made it effortlessly accessible for individuals with ill intentions to generate content for various harmful scenarios. The ability to swiftly produce convincing fake news, create visually striking deep fakes, and propagate hate and harassment is now at the fingertips of malevolent actors. Moreover, generative AI-generated content has the potential to contribute to the dissemination of extremist ideologies or even manipulate vulnerable individuals into radicalization.

To counter these threats effectively, AI platforms must incorporate robust moderation systems capable of withstanding the potential flood of harmful content that perpetrators may generate using these tools.

The potential of generative AI to enhance lives and transform the way we navigate the vast realm of online information is virtually boundless. However, to fully embrace this future, responsible leadership is paramount.