AI: A serious ethical challenge for consumer businesses

In his Reith Lectures on artificial intelligence, Professor Stuart Russell read an extract from Alan Turing’s 1950 paper on computing machinery. “Once the machine thinking method had started,” he read, “it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.” It is an astonishing and terrifying notion – and one that businesses must reckon with as they learn to live with artificial intelligence and mitigate its serious risks.

Historically, grappling with the ethical and moral impact of AI was very much the domain of philosophers. Today, any business developing or implementing AI – that is to say, most businesses in the consumer sector – must consider, to a varying degree, the relationship between AI, innovation, safety and ethics.

While most businesses understand AI’s transformative potential, few have real clarity on how to ensure – and on who is responsible for ensuring – its ethical use. Without a coherent strategy and a framework of accountability for the corporate governance of data and machine learning, businesses put themselves and others at risk.

But where does final responsibility lie for these ethical questions on AI for businesses in our sector? Our most recent white paper sets out to answer exactly that: who is responsible for the ethical use of data and AI in consumer-facing businesses?

The topic is especially pressing today. The vast majority of the world’s data – more than 90% – has been created in the last two years alone, and the amount we produce is expected to double every two years. Never have organisations in our consumer-facing sectors had more information on those they serve: the boom in ecommerce and the emergence of app-based ordering systems have provided unprecedented insight into customer behaviour. Across the consumer sectors, it feels as though we are entering a new era – one underpinned by machine learning and the collection of vast swathes of consumer data.

Businesses which fail to prioritise the ethical use of data and AI run serious legal, reputational and moral risks. In a recent white paper, Superhuman Resources: Responsible deployment of AI in business, the law firm Slaughter and May and data company Faculty AI identified six categories of risk associated with the emerging technology:

• Failure to perform
• Discrimination
• Vulnerability to misuse
• Malicious repurposing
• Privacy
• Social disruption

At a minimum, incomplete governance around data usage means that any company that collects or processes personal information about EU or UK citizens runs the risk of non-compliance with the GDPR. However, it would be far too simplistic to assume that GDPR compliance alone addresses more than a small proportion of the risk around data and AI.

As retailers diversify their services into quasi-banking and insurance products, they will soon be placed under a similar microscope to the financial services sector in terms of AI and data governance. Likewise, consumer businesses may need to rethink the algorithms used in pricing. In the early days of the pandemic, the ludicrous prices of antibacterial hand gel showed what can happen when competitive pricing algorithms play out without human oversight. In many sectors, algorithms are starting to price goods differently for different communities of customers – with the risk that “variable pricing” comes to be seen as “price discrimination”, particularly when groups of customers with similar ethnic and/or socioeconomic backgrounds are faced with more expensive products.
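
To see how quickly an unsupervised pricing loop can spiral, here is a minimal sketch in which two sellers reprice against each other on a fixed cycle. The starting price and multipliers are invented for illustration; they echo the widely reported case of an out-of-print biology textbook that reached more than $23m on Amazon after two booksellers’ repricing rules fed on each other unchecked.

```python
# A toy model of two competing repricing algorithms with no human guardrail.
# Seller A prices above its rival (betting its listing looks more reputable);
# seller B undercuts seller A by a sliver. Both multipliers are invented.

price_a = price_b = 10.00  # both sellers start at £10 for the same product

for cycle in range(30):
    price_a = round(price_b * 1.27, 2)    # A: reprice to 127% of rival's price
    price_b = round(price_a * 0.998, 2)   # B: reprice to 99.8% of rival's price

print(f"after 30 cycles: A = £{price_a:,.2f}, B = £{price_b:,.2f}")

# Because 1.27 * 0.998 > 1, prices inflate by roughly 27% per cycle and
# climb past £10,000 within 30 cycles. One line of human oversight, a
# simple price ceiling, is enough to break the loop:
PRICE_CEILING = 25.00
price_a = min(price_a, PRICE_CEILING)
```

The point is not the arithmetic but the absence of a checkpoint: neither rule is malicious, yet their interaction produces an outcome no human would sign off.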

There are also plenty of risks associated with hyperpersonalisation. An apt demonstration is the case in which Amazon’s “frequently bought together” feature began recommending the products required to create a homemade bomb. Another infamous example is when the US retailer Target sent coupons for baby products to a pregnant teenager based on her shopping habits, much to her unaware father’s alarm.
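
That failure mode is structural rather than a one-off bug. A “frequently bought together” feature is, at its heart, co-occurrence counting over shopping baskets, and raw co-occurrence has no notion of which combinations are harmful. The sketch below is hypothetical (invented product IDs and a deliberately tiny purchase history) and shows why the safety layer has to be engineered in deliberately, for example as a human-maintained blocklist.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase history: each basket is a set of product IDs.
baskets = [
    {"kettle", "descaler"},
    {"chem_a", "chem_b", "wiring"},   # individually innocuous, risky together
    {"chem_a", "chem_b"},
    {"kettle", "mugs"},
]

# Count how often each pair of products is bought together.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def frequently_bought_with(item, blocked_pairs=frozenset()):
    """Rank co-purchased items, suppressing human-flagged combinations."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if item in (a, b) and frozenset((a, b)) not in blocked_pairs:
            scores[b if item == a else a] += count
    return [product for product, _ in scores.most_common(3)]

# Raw statistics happily surface the risky combination...
print(frequently_bought_with("chem_a"))               # ['chem_b', 'wiring']
# ...until an explicit, human-maintained blocklist intervenes.
print(frequently_bought_with("chem_a",
                             blocked_pairs={frozenset(("chem_a", "chem_b"))}))
```

The algorithm is doing exactly what it was built to do; the ethical judgement about what it should not surface has to come from outside it.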

Perhaps most pressingly, the potential for bias in AI presents a huge risk to businesses across the sector. AI and other cognitive systems are trained on historical data sets collected by real people, and so almost always absorb the unconscious biases inherent to the humans behind them.

In June 2020, for example, following the murder of George Floyd and the subsequent acceleration of the conversation around race and racism, Amazon announced a moratorium on police use of its facial recognition software, Rekognition. The technology had been frequently criticised for having a disproportionately negative impact on communities of colour and fuelling racial profiling: an experiment run by the American Civil Liberties Union in 2018 found that Rekognition incorrectly matched 28 members of Congress to photos of people arrested for a crime, and the false matches disproportionately involved members of Congress who are not white. Failing to mitigate bias in AI is a major ethical oversight and brings significant reputational risk for consumer businesses.
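
Part of what such experiments expose is that the operator, not the vendor, chooses the confidence threshold at which a “match” is declared, and that error rates can differ across groups. The simulation below is generic and entirely invented (it does not use any real system’s scores or API): it shows how a lax threshold inflates false matches, and how a score distribution skewed for one group produces unequal error rates at the same threshold.

```python
import random

random.seed(42)  # reproducible illustration

# Invented similarity scores (0-100) for pairs of faces that do NOT match.
# Suppose the model's scores for group B happen to sit higher on average,
# the kind of calibration skew that real-world audits have documented.
scores_group_a = [random.gauss(60, 10) for _ in range(10_000)]
scores_group_b = [random.gauss(70, 10) for _ in range(10_000)]

def false_match_rate(scores, threshold):
    """Share of non-matching pairs the system would wrongly call a match."""
    return sum(score >= threshold for score in scores) / len(scores)

for threshold in (80, 99):
    rate_a = false_match_rate(scores_group_a, threshold)
    rate_b = false_match_rate(scores_group_b, threshold)
    print(f"threshold {threshold}: group A {rate_a:.2%}, group B {rate_b:.2%}")
```

At the lax threshold, group B is falsely matched several times more often than group A; tightening the threshold shrinks both error rates but does not, on its own, remove the disparity. Governance means deciding, and documenting, who sets that threshold and who checks its impact.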

There are lessons the consumer sector can learn from healthcare organisations. Information on patient health is by its nature more sensitive than most customer data, and organisations are acutely attuned to the risks involved with a potential breach or a case of misused data, with specific frameworks of accountability in place. In healthcare, for example, responsibility for the ethical use of data often lies with the relatively new role of the Chief Clinical Information Officer – a figure who combines a clinical background with an understanding of the systems in place and the risks associated with misuse of information.

In consumer businesses, however, there is no set precedent. Structures of accountability vary from business to business, depending on the organisation and how it uses data and AI.

In many cases, responsibility will lie with the chief data officer (or the chief technology or digital officer, in companies where technology has become synonymous with data). While this function may be best placed to handle enormous data sets, and normally has the most comprehensive understanding of their usage in the business, assigning ultimate accountability for the ethical use of data to the CDO poses serious risks. First, the pace at which data has become integral to companies means that CDOs often arrive in the role without an existing framework of accountability, or without the ‘big picture’ approach to operations that comes from board or executive committee experience. Moreover, an imbalance of data literacy within an organisation can make any checks and balances that are in place difficult to uphold effectively.

Alternatively, with customer safety, security and engagement in mind, businesses may opt to give responsibility for data activity to their chief customer officer, who can apply a customer-focused lens to the business’s use of data, ensuring that the safety and security of the customer remain front of mind. However, chief customer officers may not have sufficient technological and digital understanding to effectively scrutinise the algorithms, code and data being used.

On the other hand, the CFO might be a natural home for accountability because of the number of pre-existing checks and balances to which the role is subject. From internal planners to external auditors, the finance function already adheres to numerous standards and layers of scrutiny. In some ways, this makes the CFO well placed to take responsibility for the ethical use of data – especially when a company’s primary concerns around data and AI fall into the finance and regulatory compliance category. Clearly, however, most finance leads simply won’t have an adequate understanding of algorithms, data and AI to truly hold data teams to account.

In many cases, final executive accountability must lie with the CEO – especially if data is embedded as a core element of the business. Where missteps will damage company reputation, the bottom line and consumer trust, chief executives should be upskilled to the point where they can lead the conversation on the use of data and AI, and be accountable for any failures (or, indeed, successes) in this space.

Where this is the case, businesses should ensure that there is an appropriate framework of accountability at every level of the business – especially within the data teams. While CEOs can take responsibility, most will not have the capacity (or ability) to oversee the building of code, collection of data or other processes around AI and machine learning – and indeed, may be unaware of the moral and ethical issues surrounding AI that they need to grapple with.

Maintaining a business’s ethical values, guaranteeing legal compliance, and ensuring that new technologies are being used optimally also requires leadership from the non-executive Board. Today, many NEDs recognise the importance of data and ethics, but few are equipped with the expertise needed to take final responsibility (indeed, most NEDs will have been executives at a time when data or AI was not as integral to a company as it is now).

Across the sector, Boards need to grasp the power and risks associated with data, and include at least one non-executive director who can participate in – and lead – conversations around ethical use of AI. Most Boards aren’t here yet – and Chairs must think carefully about making appointments which can provide sufficient checks and balances to the executive. Indeed, one of the principles in Moira’s recent Boards of the Future manifesto is the need for deep digital understanding around the top table, from those who can speak the language of technology and translate it for their fellow Board members.

Accenture recommends the creation of a dedicated “ethics” committee in order to identify ethical concerns and apply company principles in a technological setting. The firm recommends that this committee comprise the necessary range of expertise and be resistant to bias and conflicts of interest. The committee should include:

• Technical experts – those who have a strong understanding of the technology, systems and applications being discussed, for example a CDO, CTO or CIO

• Ethical experts – those who have a firm knowledge of the ethical and moral principles at hand, can assist the committee in working through issues analytically, and can provide information and/or examples from other areas of ethics

• Legal experts – those who have a detailed understanding of the legal repercussions of issues to do with data and AI

• Subject matter experts – those who have an understanding of how a change in policy on data and AI will impact current internal operations or affect customer experience

• Citizen participants – those who can represent public concerns and perspectives.

For businesses in the consumer-facing industry, it is particularly crucial that an understanding of the ethical and moral implications of AI underpins data innovation and implementation. Businesses in our space which don’t commit to understanding and mitigating these issues run the risk of losing consumer trust – which, of course, underpins all of the businesses we lead.

We recommend four actions for Boards to effectively govern AI:

1) Separate AI ‘doing’ from governance – The team responsible for AI innovation and technological development should not be responsible for governing its ethical use

2) Make ethical use of data and AI a standing item on the risk committee agenda – and consider setting up a standalone ethics committee

3) Identify one NED to be responsible for AI, data and ethics – Name one non-executive board member, who is appropriately trained and/or qualified, to be responsible for AI, data governance and ethics on the board

4) Measure and audit – Bring in third-party bodies to independently audit algorithms and policies around the collection and implementation of customer data, reporting directly to the Board (a sketch of one simple audit check follows below)
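
As an illustration of the kind of test an independent auditor might run, the sketch below computes a demographic parity gap: the difference between the rates at which an algorithm’s decisions favour different groups of customers. Everything here is hypothetical: the decision log, the group labels and the 10% escalation threshold are all invented, and real audits draw on a much wider set of fairness metrics alongside legal and commercial context.

```python
from collections import defaultdict

# Hypothetical audit log: (customer_group, decision) pairs, where True means
# the algorithm granted the favourable outcome (a discount, credit, etc.).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome  # booleans count as 0/1

rates = {group: favourable[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # 0.50

# A governance policy might set an agreed ceiling on this gap, with any
# breach escalated to the Board or its ethics committee for review.
ESCALATION_THRESHOLD = 0.10  # invented threshold, for illustration only
if gap > ESCALATION_THRESHOLD:
    print("escalate: disparity exceeds the agreed policy threshold")
```

The value of such checks lies less in the specific metric than in the reporting line: the result goes to the Board, not back to the team that built the algorithm.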

Data and AI – and their ethical and proper use within consumer businesses – will dominate the NED, governance and risk agenda of businesses in the years to come. How equipped is your business to grapple with this new horizon – and separate AI ‘doing’ from governance? Do let me know your thoughts – or indeed any best practice you have to share from your own organisation.

Elliott.goldstein@thembsgroup.co.uk | @TheMBSGroup