May your ethics be as high as your heels: Taking AI from cringe to compliance
AI is all the rage, but because we’re big proponents of not chasing after buzzwords (looking at you, ESG), we haven’t said much about it on the Broadcat blog. Until now! Today we’re breaking that silence and sharing our own experiences with AI, along with considerations on the pros and cons of using these tools in your own compliance program.
Instead of focusing on the obvious stuff—just to look like we’re hip and keeping up with the latest craze—we’ll share some actionable examples of how you can start using AI in your program right now (and, importantly, examples of how not to).
My Own AI Horror Stories
Last spring, I shared this on LinkedIn:
Little did I know that, come September, AI would take the spotlight at one of the largest compliance events of the year: It was the topic of the keynote address at this year’s SCCE Compliance & Ethics Institute!
I won’t say that I can see the future, but I won’t stop you from saying it. 😏
So, now that I have the DOJ’s permission to raise some commotion about AI (in a constructive way, promise!), I’m taking a deep dive into (a) why AI and compliance don’t have the chillest relationship, and (b) how we can move past that tension and create something positive!
AI Can Target Your Training
You probably have volumes on volumes of quality training and compliance comms saved on your servers, right? Which means it’s probably a pain to go through all that content and tailor it to specific roles. Enter: AI 🤖
But remember: AI and compliance are kind of like frenemies. Left to its own devices, AI might spit out training aimed at what it thinks “conflicts of interest” are, without producing comms that actually make sense for your org or specific roles. That’s why you’ll have to be very intentional about how you approach this type of project.
AI will need precise guidance to tailor your training so that it’s relevant to the way you do things. Otherwise, your messaging can get chaotic, and that’s exactly what you DON’T want when you’re trying to get your teams to comply with a super-important rule.
Luckily, with a little trial and error, AI can help cut down the amount of time it takes you to tailor all that fantastic content you already have to specific roles or processes, thereby improving engagement and effectiveness. So, yes, it takes some work at the outset, but can be worth it in the long run—especially once you’ve found a system that works.
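To show what “very intentional” looks like in practice, here’s a rough sketch of a role-tailoring prompt (written in Python, but the same idea works fine in a chat window). The company details, role, and file name are made-up placeholders; the point is how much org-specific context you hand the tool before asking it to rewrite anything.

```python
# Hypothetical role-tailoring prompt: the more org-specific context you
# front-load, the less the AI has to guess (and get wrong).
role = "procurement specialist"
source_text = open("coi_training_general.txt").read()  # your existing training content

tailoring_prompt = f"""
You are helping a corporate compliance team adapt existing training.

Company context (example only):
- Mid-size manufacturer; people in this role negotiate directly with suppliers.
- Our conflicts-of-interest policy requires disclosing gifts over $50.

Task: Rewrite the training below so every example is relevant to a {role}.
Keep it under 300 words, in plain language, and do NOT invent new policy rules.

Training to adapt:
{source_text}
"""
# Paste this into (or send it to) whatever company-approved AI tool you use.
```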
Now, let’s zoom out a bit. Not only can AI help target your training, it can also identify where training (and policy!) gaps exist in the first place. There are even compliance-specific tools on the market to do just that! I don't want to name names because I haven’t used them personally, but if you reallyyyyy want to know, drop me a line.
AI Can Draft Content
The keyword here is “draft”! As in, rough draft.
AI can speed up tasks like policy drafting, saving time and resources (e.g., ask it to create a policy outline). Then, once it’s time to promote your new or revised policy, you can have AI draft a newsletter article based on that policy. Imagine that! A world where writer’s block doesn’t exist because ALL YOU HAVE TO DO IS EDIT! ❌✒️
While we’re on the topic of drafting, you can also ask AI to improve the content you already have. For example, we always talk about the importance of simplifying your policies, right? Simply (ha!) ask your AI tool to “make this easier to understand” or “summarize this in a few sentences” and you’ve got yourself a streamlined policy!
Pro tip: This works for other tones of voice, too. You can ask AI to “make this email more persuasive” or “make these presentation talking points more relatable to my audience” (just don’t forget to tell it who your audience is).
Additionally, compliance teams with international offices will appreciate that AI can also help with translations. Want a policy in French? Just ask your AI robot to “translate this to French,” paste the policy, and hit “enter.” Voilà!
Just make sure to get a human French speaker to proofread it. | Source: 20th Century Studios' The Princess Bride via Giphy.com
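If your team wants to script these asks instead of pasting into a chat window, here’s a rough sketch assuming the OpenAI Python SDK; the model name and policy file are placeholders, so swap in whatever company-approved tool you actually use.

```python
# Minimal sketch: ask an LLM to simplify a policy, then translate it.
# Assumes the OpenAI Python SDK and an API key in your environment;
# the file name and model are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
policy_text = open("gifts_and_entertainment_policy.txt").read()

def rewrite(instruction: str, text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You edit corporate compliance documents. Do not add new rules."},
            {"role": "user", "content": f"{instruction}\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

plain_english = rewrite("Rewrite this policy so an eighth-grader could follow it.", policy_text)
french_draft = rewrite("Translate this policy into French.", plain_english)
# Both outputs are drafts: a human editor (and a human French speaker) still reviews them.
```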
Does this sound too good to be true? It is. Always remember that AI isn’t an “easy” button! It doesn’t always care about silly things like facts or grammar or avoiding microaggressions, so you’ll have to spend a good amount of time fact-checking and editing. Because how embarrassing would it be if you actually sent an email with the line, “May your ethics be as high as your heels”??? 😬
AI Can Analyze Data
It seems like we’re all drowning in data these days, doesn’t it? AI, unlike us normal humans, can handle large volumes of data and help us parse the information to arrive at clear conclusions. Think about it: If you send out a survey with short-answer questions to 1,000 people, the responses will be wayyyyy too bulky for you to wade through quickly. BUT! If you throw those responses into AI, it can identify trends, commonalities, risks, and areas for improvement … just like THAT. 🫰
With this kind of analytical power, you could discover common threads in complaints to Employee Relations or reports to your helpline (e.g., manager, topic, location).
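Here’s a rough sketch of what that could look like in practice: batch the free-text responses into one prompt and ask for recurring themes. The file name, model, and output format are all illustrative, and for really big datasets you’d want to chunk the responses into batches.

```python
# Sketch: summarize free-text survey responses into themes.
# Assumes responses are stored one per line in a text file and the OpenAI
# Python SDK; both are illustrative, not a product recommendation.
from openai import OpenAI

client = OpenAI()
responses = open("culture_survey_responses.txt").read().splitlines()
# (For thousands of responses, chunk these into batches that fit the context window.)

prompt = (
    "Below are anonymous short-answer survey responses from employees.\n"
    "List the 5 most common themes, each with a one-sentence summary and a rough\n"
    "count of how many responses mention it. Flag anything that sounds like a\n"
    "potential compliance risk (retaliation, pressure to cut corners, etc.).\n\n"
    + "\n".join(f"- {r}" for r in responses)
)

result = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(result.choices[0].message.content)
# Spot-check: pull a handful of raw responses and confirm the themes actually match.
```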
Remember, however, that AI is a wild card! It won’t really grasp nuances in human feedback, so it’s important to audit the analysis you receive by selecting a handful of datapoints to double-check. Furthermore, algorithms can perpetuate biases present in the data that trains your AI tools, leading to unfair or discriminatory outcomes. (Do I need to mention that “heels” thing again? 🙄)
As for the helpline and other sensitive applications, it’s best to use private AI tools. These allow you to keep your data secure while leveraging the vast amount of data available through large public AI models. And always keep an eye on your privacy policies, actually read those we’ve-updated-our-terms emails, and make sure you’re using company-approved tools!
(Quick aside: If you’re looking into a private AI solution and want to impress your IT friends, “Retrieval-Augmented Generation,” or “RAG,” is the jargon to know. RAG pulls relevant information from your own securely stored documents and feeds just that to the model when it answers, so you keep control of your data while still leveraging powerful external AI models.)
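If you want to see the shape of the idea, here’s a toy sketch of the RAG pattern: retrieve the most relevant snippets from your own locally stored documents, then hand only those snippets to the model as context. Real deployments use embeddings and a vector database; this version uses simple keyword overlap, and the policy snippets and question are purely illustrative.

```python
# Toy RAG sketch: keyword-overlap retrieval over local policy snippets,
# then an augmented prompt for the model. Real systems use embeddings and a
# vector store; the documents and question here are made up for illustration.
policy_snippets = [
    "Gifts over $50 from suppliers must be disclosed to Compliance within 5 days.",
    "Employees may not approve invoices submitted by a family member.",
    "Helpline reports may be made anonymously and are protected from retaliation.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    # Score each document by how many words it shares with the question.
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

question = "Do I have to report a gift basket from a vendor?"
context = retrieve(question, policy_snippets)

augmented_prompt = (
    "Answer using ONLY the policy excerpts below. If they don't cover it, say so.\n\n"
    "Policy excerpts:\n" + "\n".join(f"- {c}" for c in context)
    + f"\n\nQuestion: {question}"
)
# augmented_prompt is what actually gets sent to the model; the model never
# sees your full document set, only the retrieved excerpts.
```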
AI Is Chaotic Good
One more time: AI and compliance are frenemies! AI is unreliable. It’s inaccurate. It’s vulnerable to cyberattacks. 👾
If it were a Dungeons and Dragons character, its alignment would be chaotic good.
And that’s ok! | Source: Netflix’s Stranger Things via Tumblr
By understanding the chaos (that is, where AI works against compliance), we can correct for its chaotic tendencies. Broadcat even has a job aid and video to help your teams do just that!
Ultimately, time saved using AI isn’t a 100% net gain: You still need to put in the work. That said, if you train your AI (and yourself!) properly, supervise its output by fact-checking and editing, and correct its errors, it can be a powerful ally in building and maintaining a robust compliance program.