
Responsible Algorithms in Business: Robots, fake news, spyware, self-driving cars and corporate responsibility

January 13, 2017

Roel Nieuwenkamp, Chair of the OECD Working Party on Responsible Business Conduct (@nieuwenkamp_csr)

Why is the topic of robots frequently being raised at recent conferences on responsible business conduct? For example, in October last year the Polish Deputy Prime Minister noted the connection between robotisation and corporate responsibility while opening the conference in Warsaw celebrating the 40th anniversary of the OECD Guidelines for Multinational Enterprises.

The potential negative impacts of robots and automated systems have already proved cause for concern. In May 2010 a ‘flash crash’ briefly wiped around a trillion dollars off US stock markets, an event attributed to algorithmic trading, in other words, robot investors. And let’s not forget the mathematical models that contributed to the financial crisis of 2007-2008. Recent events surrounding fake news, with Pizzagate as the most extreme example, are also contributing to these concerns.

What is the common denominator of these automated systems? Algorithms! These rule-based processes for solving mathematical problems are being applied to more and more areas of our daily lives. We are likely only at the beginning of the era of algorithms, and their widespread application is raising many ethical questions for society, and for businesses in particular.

For example, “killer robots”, weapons systems that select and attack targets without meaningful human control, raise questions about the dehumanisation of killing and about who is responsible. In December the United Nations decided to set up an expert group to look into this issue, following the ‘Stop Killer Robots’ campaign by Human Rights Watch and other NGOs. While self-driving cars will never be at risk of driving while intoxicated, they can make decisions that pose moral dilemmas for humans. Online face recognition technology raises concerns around privacy. These are just a few examples.

The pervasiveness of algorithms may result in many unintended consequences. In her book ‘Weapons of Math Destruction’, Cathy O’Neil describes how algorithms in combination with big data increase inequality and threaten democracy. She provides examples from the financial crisis and the housing market, but also of a college student who does not get a minimum-wage job in a grocery store because of answers provided on a personality test, and of people whose credit card spending limits are lowered because they shopped at certain stores. She also discusses predictive policing models, such as those that predict recidivism, and algorithms that send police to patrol areas on the basis of crime data, which can have racially discriminatory effects because of harmful, self-fulfilling feedback loops.
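
To make this feedback-loop mechanism concrete, here is a minimal sketch in Python. It is purely illustrative and assumes a hypothetical city of two neighbourhoods with identical true crime rates; the patrol budget and the proportional reallocation rule are invented for the example, not taken from O’Neil’s book.

```python
# Illustrative toy simulation (an invented model, not O'Neil's): two areas
# share the same true crime rate, but patrols are allocated in proportion
# to *recorded* crime, and crime is only recorded where patrols go.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.1           # identical underlying rate in both areas
TOTAL_PATROLS = 20
patrols = {"A": 10, "B": 10}    # start from an equal allocation
recorded = {"A": 0, "B": 0}

for day in range(365):
    for area in ("A", "B"):
        # Each patrol observes a crime with the same probability everywhere,
        # so more patrols produce more *records*, not more crime.
        recorded[area] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(patrols[area])
        )
    total = recorded["A"] + recorded["B"]
    if total:  # reallocate patrols in proportion to recorded crime so far
        patrols["A"] = round(TOTAL_PATROLS * recorded["A"] / total)
        patrols["B"] = TOTAL_PATROLS - patrols["A"]

# A random early surplus of records in one area attracts more patrols,
# which generate still more records there: a self-fulfilling prophecy.
print(patrols, recorded)
```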

Scholars and practitioners in this field are beginning to consider the ethical implications of applying algorithms. Julia Bossmann of the Foresight Institute has described her top nine ethical issues in artificial intelligence. Prof. Susan Leigh Anderson of the University of Connecticut stated: “If Hollywood has taught us anything, it’s that robots need ethics.” Cathy O’Neil proposes a ‘Hippocratic oath’ for data scientists, and a group of scholars recently developed Principles for Accountable Algorithms. In the private sector, Elon Musk, CEO of SpaceX, and other business leaders have founded OpenAI, an R&D company created to address ethical issues related to artificial intelligence. Amazon, Facebook, DeepMind, IBM and Microsoft have founded the Partnership on Artificial Intelligence to Benefit People and Society, which seeks to facilitate dialogue on the nature and purpose of artificial intelligence and its impacts on people and society at large. It is encouraging that such industry efforts are being undertaken. Additionally, one thing should be clear for businesses that create and use these technologies: when things go wrong, using algorithms as a scapegoat won’t do the trick.

What guidance on these issues can be found in the most important instrument on business ethics, the OECD Guidelines for Multinational Enterprises, a multilateral agreement of 46 states on corporate responsibility? Cases brought to National Contact Points, the globally active complaints mechanism of the Guidelines, provide a good illustration of what the Guidelines recommend with respect to these issues. For example, in February 2013 a consortium of NGOs led by Privacy International submitted a complaint to the UK National Contact Point (NCP) alleging that Gamma International had supplied a spyware product, FinFisher, to agencies of the Bahraini government, which then used it to target pro-democracy activists.

The NCP concluded that Gamma had not acted consistently with the provisions of the OECD Guidelines requiring enterprises to carry out appropriate due diligence, to make a policy commitment to respect human rights and to remediate human rights impacts. Furthermore, the company’s approach did not meet the OECD Guidelines’ standards on respecting human rights, and its engagement with the NCP process was unsatisfactory, particularly in view of the serious nature of the issues. The NCP recommended that the company engage in human rights due diligence.

What is human rights due diligence, and what does it mean for companies developing algorithms? Under the Guidelines, due diligence is a process that corporations should carry out as part of a broader range of actions to respect human rights. The rights to privacy, freedom of speech and freedom from torture and arbitrary detention are examples of the many human rights that could be impacted. Due diligence is the process of identifying, preventing and mitigating actual and potential adverse human rights impacts, and accounting for how these impacts are addressed. Where there is a risk of severe human rights impacts, a heightened form of due diligence is recommended: for example, significant caution should be taken over the sale and distribution of surveillance technology when the buyer is a government with a poor human rights record. Due diligence should be applied not only to a company’s own activities but across its business relationships. For a company producing algorithms, it is therefore not sufficient to behave responsibly in its own operations; due diligence should also ensure that buyers of the technology are not using it irresponsibly. Where they are, the company that created and sold the technology is expected to use its leverage in the value chain to prevent or mitigate the impact.
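
As a purely illustrative sketch, the ‘know your client’ escalation logic described above could be encoded as a simple pre-sale screening rule. The Buyer type, the rating values and the required_diligence function below are hypothetical names invented for this example; the Guidelines themselves prescribe a management process, not an automated check.

```python
# Hypothetical 'know your client' pre-sale check. All names and categories
# are invented for illustration; the OECD Guidelines describe a management
# process, not code.
from dataclasses import dataclass

@dataclass
class Buyer:
    name: str
    is_government: bool
    human_rights_record: str   # e.g. "good" or "poor", from public indices

def required_diligence(buyer: Buyer, is_surveillance_tech: bool) -> str:
    """Return the level of due diligence to apply before a sale proceeds."""
    # Risk of severe impacts (surveillance technology sold to a government
    # with a poor human rights record) triggers heightened due diligence.
    if (is_surveillance_tech and buyer.is_government
            and buyer.human_rights_record == "poor"):
        return "heightened: independent review, end-use safeguards, or no sale"
    if is_surveillance_tech:
        return "standard: identify, prevent, mitigate and account for impacts"
    return "baseline: routine screening of the business relationship"

print(required_diligence(Buyer("Agency X", True, "poor"), True))
```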

A number of valuable tools for respecting human rights and implementing the ’know your client’ principle have been developed in the context of ICT business operations. For example, the European Commission has developed a useful guide for companies on respecting human rights in the ICT sector. TechUK, an industry association of ICT companies in the UK, has published, in partnership with the UK government, a guide on designing and implementing due diligence processes for assessing cyber security export risks. Additionally, the Electronic Frontier Foundation has developed a guide on How Corporations Can Avoid Assisting Repressive Regimes, and the Global Network Initiative has developed Principles on Freedom of Expression and Privacy.

Beyond the human rights-related recommendations, the OECD Guidelines make other recommendations relevant to companies developing algorithms. For example, the Environment Chapter recommends environmental, health and safety impact assessments.[1] The Consumer Chapter advises companies to provide accurate, verifiable and clear information to consumers.[2] In addition, companies should respect consumer privacy and take reasonable measures to ensure the security of personal data that they collect, store, process or disseminate.[3]

Businesses that create algorithms should carry out due diligence on potential human rights impacts, as well as on labour, environmental and health and safety impacts. They should provide accurate, verifiable and clear information about their algorithms and take measures to protect personal data. Collaborative industry efforts on responsible algorithms are much needed to shape these expectations in concrete terms. Responsible algorithms will not only generate profit but also protect the rights of individuals worldwide while doing so.

Useful links

There’s an algorithm for that. Or there soon will be, by Marina Bradbury on OECD Insights

[1] OECD Guidelines for Multinational Enterprises, Chapter VI.3

[2] OECD Guidelines for Multinational Enterprises, Chapter VIII.2

[3] OECD Guidelines for Multinational Enterprises, Chapter VIII.6
