Apr 11, 2024

Should you publicly admit you use AI in IR?

IROs divided but leaning toward public admission

As a millennial raised on tech, it’s no surprise I’m an unapologetic AI user – it’s part of my weekly, sometimes daily, routine.

As a reporter, I often turn to tools like ChatGPT or Google Gemini for various tasks, whether that’s drafting social media posts, brainstorming SEO-friendly headlines, summarizing a webinar or conference transcript, or crafting interview questions. Does this mean every question I pose in an interview or every headline I write is crafted by a bot? Not at all.

While I may seek input or suggestions from an AI tool, the final decisions are always made by me. After all, why should I refrain from using tools that streamline my workload and enhance productivity? It just makes my life easier. Obviously I wouldn’t reach for an encyclopedia if I wanted to know the names of the British royal dynasties in order – I would Google it.

But while I personally don’t have a problem admitting I use AI, it’s not something everyone wants to be vocal about. And I don’t know about you, but I find that quite weird.  

I’m sure we’ve all encountered the familiar disclaimer: ‘Sorry, don’t judge me, but I used ChatGPT to put this together’. But why should we judge anyone? Would you condemn someone for using the internet to gather information? I don’t think so. Just as the internet has become an indispensable tool in our professional lives, so AI is rapidly evolving into a cornerstone of modern workflows across various industries.

Yes, you need to be careful and mindful of the kind of tools you use and what for, and yes, it’s great if your company has an AI policy in place. But you should not be shy about it – if you are, then maybe you shouldn’t really be using it to start with.

In IR, AI is undeniably the talk of the town in 2024. In our research conducted for the IR Playbook: Artificial intelligence and investor relations, we found that more than 47 percent of IROs are using AI, either regularly or experimentally, for some of their IR-related tasks. Use cases include generating slides, images and graphics, drafting earnings decks, theme spotting, analyst research, summaries, peer monitoring and notetaking, among many others.

Yet in the midst of this AI revolution, which continues to demonstrate how the applications of AI in IR are vast and transformative, one question arises: are IR professionals willing to publicly admit to using AI for their IR-related tasks?

At the latest IR Magazine Forum – AI for IR event, we asked attendees the same question via a poll. At first, the poll results showed a relatively even split, with a slightly higher percentage of negative answers.

Interestingly, however, when the same question was debated by panelists at one of the sessions and some of them said they would have no problems admitting they use AI in their daily tasks, the number of positive answers to the poll started to increase. In the end, nearly 70 percent of attendees in the room said they would publicly admit to the use of AI. It almost felt like some needed a confidence boost.

The debate highlighted different points of view but also some common denominators. Some recognized the role of AI as a valuable input in their workflows but emphasized that it should remain just that – an input. This perspective offered a balanced approach that integrates AI while retaining human oversight and accountability.

Others reiterated that transparency is one of the pillars of IR and invited peers to consider what they are using AI for. Surely, they said, there is no need to disclose whether you are using AI-powered tools such as Grammarly to check your press release punctuation, but it’s paramount to disclose the use of AI in scenarios where data verification may be compromised.

Finally, one attendee said that while he would have no problem admitting to using AI in a verbal exchange with peers, he wouldn’t do so in a written document. That’s because, in his opinion, the IRO is the only person responsible for document oversight and the end product; therefore, it doesn’t matter whether he used AI or the internet at some stage to help with that output.

I am all for forgetting the stigma and embracing the power. Undeniably, responsible AI use is the new professional edge in IR so, as the saying goes these days, it’s a case of adapt or get left behind.