
Oct 05, 2016

Intelligent systems: man or machine?

Written By:

Nigel Toon


When you start a new company, you effectively bring a new independent being to life. The legal implications are well-defined. But what happens when you create an intelligent machine?

In the eyes of the law, a limited company has a separate identity from its owners or shareholders. In most jurisdictions the shareholders’ liability is limited to the amount they have invested, and it is the board of directors who are responsible for the behaviour of this independent being. The board have a collective responsibility to make sure that:

  • The company acts and ‘behaves’ correctly;
  • It doesn’t take on too much liability;
  • It is able to meet its commitments; and that
  • The company looks after the best interests of all its stakeholders. 

This is a system designed to oversee and control the behaviour of an entity incapable of self-determination. Perhaps a similar concept is needed for intelligent machines, too?

Defining ‘intelligent’ machines

Three key ingredients are needed to make a machine exhibit ‘intelligence’:

  • A (very) large data set of related information that the machine can learn from;
  • A framework for the high-dimensional probability model(s) that represent a digest of the features (and probable connections between features) that it learns from the data set; and
  • A platform able to provide enough computing power to operate on the intelligent model, both for learning and for inferring answers from new data or ‘situations’ that are presented.
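
To make these three ingredients a little more concrete, here is a deliberately minimal sketch in Python: a toy logistic-regression example (nothing like production scale, and not tied to any particular product) in which a small synthetic data set, a simple probability model and a plain gradient-descent loop stand in for the data, the model framework and the compute platform, followed by an inference step on a new ‘situation’.

    import numpy as np

    rng = np.random.default_rng(0)

    # Ingredient 1: a data set to learn from (here, tiny and synthetic).
    n_samples, n_features = 1000, 5
    X = rng.normal(size=(n_samples, n_features))
    true_weights = rng.normal(size=n_features)
    y = (X @ true_weights + 0.1 * rng.normal(size=n_samples) > 0).astype(float)

    # Ingredient 2: a probability model. Logistic regression is about the
    # simplest model that assigns probabilities rather than fixed answers.
    def predict_proba(weights, inputs):
        return 1.0 / (1.0 + np.exp(-inputs @ weights))

    # Ingredient 3: computing power for learning -- here, plain gradient
    # descent on a CPU; real systems need vastly more compute than this.
    weights = np.zeros(n_features)
    learning_rate = 0.1
    for _ in range(500):
        probs = predict_proba(weights, X)
        gradient = X.T @ (probs - y) / n_samples
        weights -= learning_rate * gradient

    # Inference: apply the learnt model to a new, unseen 'situation'.
    new_situation = rng.normal(size=(1, n_features))
    print("P(label = 1) =", predict_proba(weights, new_situation)[0])

Even in this toy form, the separation is visible: the data, the model and the compute can each come from a different source, and each can be a source of error.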


These ‘ingredients’ may be sourced from different places, and the resulting machine intelligence system that is created might be used in different situations and in different ways. This means the potential for failure is high, arising from: 

  • An error in the original data
  • A lack of appropriate ‘training’
  • A fault in the underlying hardware, or from
  • The way that the system is used

Who is responsible for failure?

This raises the question: who is responsible for the failure of an intelligent machine? It may be appropriate for a person or company that buys an intelligent system to own it and to be fully responsible for it. Or perhaps the entities responsible for the training data, learning methods and resulting learnt model should be responsible? Maybe the hardware manufacturers should continue to hold some responsibility for the system and how it behaves?

Failure can have multiple causal factors, and responsibilities are not always clear-cut. A driverless car may crash because the ‘driver’ used the system incorrectly, or because the intelligent system failed to deal appropriately with a set of risk factors.

The bottom line: thinking differently about failure, responsibility and intelligent machines

The bottom line is that for the last 70 years, computers have done exactly what we have told them to do. If a system went wrong, it was most likely because a computer program was incomplete or wrong – we had allowed a ‘bug’ to creep in.

But in a world of machine intelligence, we need to take a different view. We need to think of machines that exhibit intelligence as independent entities, and we should think carefully about what that might actually mean. Why? Because thinking about machine intelligence in this way raises questions about new business models, about liability sharing, about machine intelligence insurance and many other issues. The implications go far beyond technology.