Hippocrates - utopia vs dystopia - AI Safety

Oct 23, 2023

AI Safety Summit - First Do No Harm (Primum non nocere)

Written By:

Nigel Toon


Be very wary of AI technology leaders who throw up their hands and say, “regulate me, regulate me.” Governments might be tempted to rush in and take them at their word. However, governments currently lack the skills needed to craft appropriate AI regulations, and may end up relying on these same technology leaders to help write them. The result could easily be a lack of transparency, and barriers placed in the way of innovation and new market entrants.

The AI Safety Summit, organised by the UK Government at Bletchley Park, the historic birthplace of electronic computing, provides the ideal opportunity to consider the best path forward. I agree with others who are calling for an expert-led body empowered to objectively inform governments about the current state of AI capabilities, but I would go further and call for a trusted, independent AI institution that can cross national borders and help shape appropriate regulation, ensuring that governments are not driven by local commercial interests.


Just as engines augment our human strength, allowing us to do much more work, to travel at speed, and even to fly, so AI is a tool that can augment our human intelligence. It is perhaps the most powerful tool that humans have ever created.


Unlike conventional computers, which are told what to do step by step in a program, AI systems are given a method, created by humans, that allows the machine to learn from information. Data plus context gives us information; from information we get knowledge; and from knowledge we can build intelligence. It is the breadth and quality of the information, together with the effectiveness of the machine learning method, that determines the capability of the AI system.
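
To make that distinction concrete, here is a minimal sketch in Python – a hypothetical illustration of mine, not anything from the Summit or a specific product. The first function is a conventional program, handed the Celsius-to-Fahrenheit rule step by step; the second is handed only a learning method (gradient descent on a linear model) and a handful of examples, and works the rule out for itself.

```python
# A minimal sketch (illustrative only) of the contrast described above.

def fahrenheit_conventional(celsius):
    # Conventional program: a human wrote the rule into the code.
    return celsius * 9 / 5 + 32

def fit_linear(examples, steps=100_000, lr=1e-4):
    # Learning method: repeatedly nudge w and b to shrink the prediction
    # error on each example; the rule emerges from the data.
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in examples:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

examples = [(0.0, 32.0), (100.0, 212.0), (37.0, 98.6), (-40.0, -40.0)]
w, b = fit_linear(examples)

print(fahrenheit_conventional(20.0))  # 68.0: rule supplied by the programmer
print(w * 20.0 + b)                   # ~68.0: rule learned from the examples
```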


We are still in the foothills with AI and have seen only a glimpse of the heights it may reach. We are also discovering that, from time to time, AI may appear to possess very human attributes, but we must always remember that these are simply the outputs of a machine that has been given a human-developed method for learning from information. We should not become confused and attribute human behaviour to the machine, or blame it.


AI regulation should instead focus on controlling the developers of AI systems, and on controlling the organisations that attempt to use this powerful tool.


Regulation cannot rely on the AI controlling itself, even though there are methods that can help. We must place controls on, and instil ethics in, AI’s human creators and the companies that seek to exploit this powerful technology.


Just as medical professionals must take a Hippocratic Oath, whose guiding principle dates back 2,500 years – Primum non nocere, first do no harm – so we should propose a similar ethical standard for developers of AI systems. This simple principle of ‘first do no harm’ must be mandated in global AI standards and taught in computer science and AI courses. Upholding this professional standard should earn AI practitioners accolades and rewards.

To some degree, we will all need to rely on the AI experts to maintain these standards and to report wrongdoing amongst their peers – just as we do with medical doctors.

Developers also need to take other steps, such as ensuring that the development methodology used is secure by design and that training data is robust, unbiased, and not vulnerable to attack. Building a global AI development culture, based on strong ethical standards, will be key.
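
As a small, hypothetical sketch of what checking training data can look like in practice (the records, group names, and labels below are invented for illustration), a developer could at minimum measure whether outcomes in the labelled data are already skewed across groups before any model is trained on it:

```python
# A hypothetical sketch of one basic training-data check: are labelled
# outcomes skewed across groups? All data here is invented.

from collections import Counter

records = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "deny"),
    ("group_b", "deny"), ("group_b", "deny"), ("group_b", "deny"),
]

def approval_rate_by_group(rows):
    # Count totals and approvals per group, then compute each group's rate.
    totals, approvals = Counter(), Counter()
    for group, label in rows:
        totals[group] += 1
        approvals[group] += (label == "approve")
    return {g: approvals[g] / totals[g] for g in totals}

print(approval_rate_by_group(records))
# {'group_a': 0.666..., 'group_b': 0.0} -> a skew worth investigating
# before any model learns it as a rule
```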

The global community of frontier AI experts, researchers and commercial practitioners is already highly cohesive and interconnected. Establishing a system of ‘Primum non nocere’ AI awards with global credibility is perhaps much easier to achieve in the short term than the ultimate goal, which must be global regulatory alignment, so we should consider strongly promoting AI ethics as a first step.

However, in both the medical profession and the airline industry, there are independent organisations that maintain testing standards and investigate when accidents occur. For AI, we will need to establish similarly trusted, global, independent AI institutions.


Others have pointed to the Intergovernmental Panel on Climate Change (IPCC) as a model that we should consider. I believe that these AI institutions need to go further and should help governments and global organisations set and maintain AI standards. They must be endowed with the highest levels of technical expertise and should be sponsored by both governments and the tech industry.


Just as building trust in economic systems, through property law and independent central banks, is critical to economic success, so the same will be true for AI. It will be in the commercial interest of tech organisations to support these independent AI institutions and to ensure that they remain independent and trusted. As well as establishing trust, such institutions should drive and encourage the development of new tools to test the compliance and robustness of AI systems. The UK is already home to a growing number of companies that focus on AI safety, so it could be an excellent, trusted location for such a global institute.


It will be in everyone’s interests for these trusted AI institutions to cross borders, ensuring that national governments are not driven by local commercial interests, so establishing links to inter-governmental bodies will also be key. These AI institutions must also be informed by the economist Joseph Schumpeter’s model of ‘creative destruction’, to ensure that smaller, innovative commercial organisations are able to operate in fair and open competition.


We need to encourage new players to emerge and challenge the current AI leaders, and we should make sure that this remains possible. Ultimately, we must serve the best interests of consumers and citizens, rather than aligning with the interests of today’s AI tech leaders. Getting the right balance between innovation and regulation will be important.


AI is an extremely powerful tool that will augment our human intelligence. The biggest existential risk that companies and countries may face is falling behind in the race to develop and deploy AI systems: they could miss out on the biggest technological advances, productivity gains, and economic impact that any technology has yet delivered. This may lead countries and companies to push ahead at speed, which in turn could mean that problems only come to light later. Strong, independent AI institutes that can serve the interests of consumers and keep us all protected will be key.


Nigel Toon is attending the AI Safety Summit at Bletchley Park, UK, on 1-2 November 2023.