On March 29, the world woke up to the fracas created by an open letter signed by Elon Musk, Apple co-founder Steve Wozniak and more than 1,000 technology experts, urging AI labs to stop the training of artificial intelligence (AI) systems.
More specifically, the letter calls for an immediate six-month pause of the training of AI systems more powerful than GPT-4, a language model recently launched by OpenAI.
The reason, according to the letter, is that human-competitive intelligence systems ‘can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.’
The letter reads: ‘Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.’
It calls for the pause to ‘be public and verifiable and include all key actors’ and adds that if it can’t be implemented quickly, ‘governments should step in and institute a moratorium’.
‘Good activism measure’
‘I view the letter as a good activism measure,’ Christoph Greitemann, senior investor relations manager at Deutsche Telekom, tells IR Magazine when approached for comment.
He says raising awareness of the need for political and regulatory oversight is helpful, but stresses the focus of regulation should be on the use of AI, not on R&D.
‘Restrictions on R&D make sense for experimenting with the human genome, but that is a different caliber [of experiment],’ he says. ‘The EU is already considering a new legal framework to regulate the development and use of artificial intelligence under the AI Act. The moratorium itself, as proposed in the letter, is not practicable or helpful.’
Greitemann sees the letter as a ‘panic move’ to limit the widening gap between fast-paced technological developments and the slower ability of organizations to adapt to such changes.
‘Instead of wasting time discussing a moratorium that will never work, we should focus on better ways to cope with the new reality,’ he says.
For Greitemann there are better ways to address and limit the gap. He points to Martec’s Law, coined in 2013 by the US computer programmer and entrepreneur Scott Brinker, which observes that technology changes exponentially while organizations change only logarithmically, and to the four-step approach Brinker proposed in response.
‘The first step is to foster the acceptance of the change, to reduce resistance to change,’ Greitemann explains. ‘The second is to prioritize the technological changes to embrace, in order to avoid disruption; the third is to make your organization more agile so that it becomes faster at reacting to changes; the fourth is to jump up the curve of technological changes by implementing changes within your organization all at once from time to time. It’s time to jump now.’
Greitemann also suggests another way companies could stay ahead of the AI development curve: ‘Creating a platform that acts as a single source of truth for every topic [within an organization] and linking those topics to each other will reduce the time it takes to onboard new participants into a project and facilitate co-creation.’
As for IR, Greitemann says that while the day-to-day work of the profession will evolve, the function itself is not going to disappear.
‘There will always be human strategic thinking and creativity needed to manage the relationships between companies and investors,’ he says. ‘But the daily work in IR is set to change fundamentally over time and to become more strategic, dense and digital. Investors will push for companies to embed AI more and more into their practices and will demand the benefits of increasing margins.’