
The hype around generative AI is dying down a bit, but much of the corporate world still looks at the technology through rose-colored glasses – the way you might look at someone you've just started dating.

Everything about this new relationship is incredible. This person is mysterious and exciting, and you start to daydream about what your future might look like. But much like a new romance, a company's relationship with AI can sour without a deeper understanding of what makes it tick.

You know what they say about rose-colored glasses? All the red flags disappear when you wear them. These powerful new tools and applications could prove problematic for businesses that fail to grasp how they work.

Even as the capabilities of generative AI improve at a rapid rate and various forms of AI become more ingrained in our lives, general understanding of how AI works remains relatively low. To be sure, there is a wide array of useful AI applications, such as AI-enabled medical equipment for hospitals, instant transcription tools for meetings, and (arguably) chatbots.

However, generative AI models can also push agendas because of biases in the content that "trains" them. As such, professional communicators must understand how these tools operate to ensure their research, fact-checking, and decision-making aren't skewed by biased AI applications.

How Bias Gets Baked Into AI 

A common misconception people have about generative AI is that the content it generates is completely original. In fact, it analyzes patterns in its training data and uses statistical prediction to mimic human-generated content from the internet and other sources. AI is biased because those sources are inherently biased.

People display biases in a number of ways, embedding their own opinions into comments about specific people, policies, hot-button issues, sports teams, political parties, food, or even "facts." Generative AI is then trained on this data and mirrors that same bias in the content it generates. The danger of this cannot be overstated, especially as more people use AI chatbots such as ChatGPT and Google Gemini to learn about new subjects, summarize news stories, and conduct research.

Take a recent New York Times experiment that examined how chatbots can be manipulated to generate biased content. The authors created three custom bots, each trained on content following a specific ideological line: a progressive view, a conservative view, and a neutral view.

The results of this experiment were stunning. Both ideologically driven bots generated answers infused with commentary about the other ideological viewpoint, calling the opposing mindset “insane,” “unethical,” and “corrupt.”  

Why Identifying Bias in AI Is Tricky

While this is an extreme example, it shows the unwanted effects of an over-reliance on AI. Sadly, ideological bias and mudslinging are quite common in modern media. However, we have “trained” ourselves to identify bias and misinformation based on the source of the information we are reading, watching, or hearing.  

For example, most of us expect the things we see on Fox News to lean one way, the things we see on MSNBC to lean the other way, and the things we read from random sources on the internet to be dubious.  

With generative AI, we don't know where those biases exist or how to identify them. The bias happens inside a black box – users rarely know what data a model was trained on or how that data shapes its answers.

The Problem with Trusting Biased Content 

As mentioned earlier, bias is everywhere – so why is it such a concern when it comes to generative AI?  

Because AI is a relatively new business tool, professionals across industries may place blind trust in it or come to over-rely on it. Much of that boils down to a lack of internal guidance: without clear company policies and guidelines on AI usage, organizations risk relying on biased AI-generated content.

This bias can affect marketing efforts, especially when users rely on AI chatbots for information on industry trends, audience segments, or communications strategies. How can they be sure the information they’re getting is accurate and unbiased? 

There are tools that claim to detect whether content was generated by AI, but many of them are unreliable. The real solution is surprisingly human.

Taking a Critical Eye to AI 

Professional communicators are well positioned to identify bias in artificial intelligence. Communicators, including PR professionals and journalists, consistently review and consume content from across the media landscape. That gives them a unique perspective on how information is written and delivered – and a trained eye for spotting bias in written material.

Critical thinking and media literacy are often undervalued skills, but the AI revolution may change that. Couple these skills with natural skepticism, and professional communicators have some of the best tools available for identifying biased content.

Content generated by AI will only become more prominent and more impressive, which is why it is so important to take a deeper look at whatever it produces. Ultimately, using artificial intelligence responsibly means double-checking information against trusted resources and engaging with expert sources on the topics at hand.

You may need a trusted and media-literate communications partner to help you with those efforts – or better yet, develop written and visual content the old-fashioned way. Professional communicators can help explain the risks that surround using generative AI for your internal and external communications efforts and navigate the best path forward. 
