Jul 28, 2023

Anthropic, Google, Microsoft and OpenAI promote safe use of artificial intelligence

Industry giants join forces to form responsible AI body

Tech giants Anthropic, Google and Microsoft have joined forces with OpenAI to create an industry body for artificial intelligence (AI) to help ensure its safe and responsible development.

The Frontier Model Forum was created to help develop AI safely and promote it responsibly while minimizing potential risks to consumers. The body aims to support the implementation of AI as the technology becomes more prominent and will also establish an advisory board to help guide its strategy and priorities.

Kent Walker, president of global affairs at Google and Alphabet, says the company is excited to work together to start sharing technical expertise to promote responsible AI innovation. ‘We're all going to need to work together to make sure AI benefits everyone,’ he adds.

As AI continues to take center stage, more countries are seeing the need to regulate the space, and the US is now taking it more seriously: the White House convened top tech companies to discuss the safe implementation of AI.

Kent Walker, Alphabet

At the meeting, held last Friday, Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI agreed to new safeguards for the technology. The companies pledged to undergo internal and external security testing of their AI systems before release, including allowing independent experts to review the tech.

The global tech players also agreed to share information across the industry and with governments, civil society and academia to minimize AI risks.

Elevate the IR function

Generative AI has already started to become a prominent part of the IR role, with Q4 boss Darrell Heaps calling it ‘amazing’ at the NIRI 2023 Annual Conference. He was referring to its ability to ‘focus on all the things that are not relationships, such as the tactical elements, administrative aspects and high-level analysis.’

Meanwhile, Nasdaq’s global head of product for IR intelligence, James Tickner, said at the conference he would encourage people to think of AI as an enormous opportunity to elevate the IR function.

While AI has its benefits, Tickner said he is aware that anything this transformational comes with some element of risk, and that collaboration is vital to managing it. ‘The book is being written, but collaborating with policymakers and other companies to try to establish the rules here on how to make responsible AI is key.’

Bringing the tech sector together

Companies creating AI technology have a responsibility to ensure that it is ‘safe, secure and remains under human control,’ Brad Smith, vice chair and president at Microsoft, says. ‘This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.’

Regulation of AI has already started to take a more serious form in Europe with the European Union’s AI Act, which looks at the impact the technology can have on businesses as well as the need for disclosure to ensure users are not misled.

‘It is vital that AI companies - especially those working on the most powerful models - align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,’ says Anna Makanju, vice president of global affairs at OpenAI, developer of ChatGPT.

‘This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.’

