AI Ethics for Startups – 7 Practical Steps

Radiologists assessing the pain experienced by osteoarthritis patients typically use a scale called the Kellgren-Lawrence Grade (KLG). The KLG infers pain levels from the presence of certain radiographic features, such as missing cartilage or damage. But data from the National Institutes of Health revealed a disparity between the level of pain as calculated by the KLG and Black patients’ self-reported experience of pain.

The MIT Technology Review explains: “Black patients who show the same amount of missing cartilage as white patients self-report higher levels of pain.”

But why?

The article continues:

One hypothesis is that Black patients could be reporting higher levels of pain in order to get doctors to treat them more seriously. But there’s an alternative explanation. The KLG methodology itself could be biased. It was developed several decades ago with white British populations. Some medical experts argue that the list of radiographic markers it tells clinicians to look for may not include all the possible physical sources of pain within a more diverse population. Put another way, there may be radiographic indicators of pain that appear more commonly in Black people that simply aren’t part of the KLG rubric.

MIT Technology Review

To test this hypothesis, researchers trained a deep learning model to predict patients’ self-reported pain from an x-ray. The model predicted pain much more accurately than the KLG for all patients, but especially for Black patients, revealing critical flaws in the KLG methodology.

By deploying this artificial intelligence in an investigative fashion, these researchers were able to reshape the medical community’s understanding of pain, generating new knowledge that will ultimately be used to reduce healthcare inequities. 

This is just one example of the power AI has to do good in the world. 

At the same time, we also read media stories that highlight the damage wrought by AI-driven systems. These (usually) unanticipated consequences can be devastating, and only serve to fuel public mistrust: the UnitedHealth algorithm that was shown to prioritize healthier white patients over sicker Black ones; the time IBM’s Watson gave unsafe recommendations for treating cancer; the problem of AI medical imaging techniques yielding false positives and false negatives, leading to incorrect diagnoses.

Put another way, we’re learning that unchecked and unexamined AI also has the potential to create considerable harm to individuals and society. 

This environment has prompted some of the biggest companies in technology to build teams just to examine the ethics of AI development and deployment. Salesforce, IBM, Microsoft, and Google are just some of the organizations looking at how to mitigate risk and maximize the positive impact of their technologies. 

There is a lot at stake for any company doing AI now, and ongoing ethical evaluation is fast becoming part of good hygiene practices. Get it wrong, or breeze over it, and there could be significant financial losses to bear.

Early-stage businesses have an opportunity to get ahead of emerging concerns about the ethical use of AI and to learn from the mistakes of others. Yet only a few can afford to assemble dedicated teams to lead on this issue. That’s why I have laid out here some considerations that any AI business can (and should) have on its agenda as it launches into this live, pre-regulatory environment.

  1. Be open and transparent 

With the media, the public and governments leaning in more closely to monitor the practices of companies that build and use AI, transparency and responsibility will be key. Businesses should be able to account for where their datasets come from, including how they are collected and how data consent, privacy and security are all managed. 

At the same time, there should be visibility into the specific data needed to power the AI, how it’s being used and to what end. Depending on that use, companies need to make sure they are training with balanced, representative datasets in order to avoid biased outcomes. For example, we know by now that if certain communities are over- or under-represented in consumer data, it can skew AI predictions in pernicious ways.
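A first check of representativeness can be automated. Here is a minimal sketch that counts how often each demographic group appears in a dataset and flags groups falling well below a uniform share; the function name, the dict-of-records format, and the tolerance value are all illustrative, not a standard API:

```python
from collections import Counter

def representation_report(records, group_key, tolerance=0.5):
    """Flag groups that are under-represented relative to a uniform share.
    `records` is a list of dicts; `group_key` names a demographic field.
    A group is flagged if its share falls below tolerance * (1 / n_groups)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    expected_share = 1 / len(counts)
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "flagged": share < tolerance * expected_share,
        }
    return report

# Toy dataset: group C makes up only 5% of records.
sample = [{"group": "A"}] * 80 + [{"group": "B"}] * 15 + [{"group": "C"}] * 5
print(representation_report(sample, "group"))
```

A flag here is a prompt for investigation, not a verdict: the right baseline is the population the product will actually serve, which is rarely uniform.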

This isn’t always obvious: in 2019 Apple and Goldman Sachs launched a credit card that extended higher credit lines to men than women. Though the bank was adamant that lending was based on creditworthiness rather than gender, the fact that women have historically had fewer opportunities to build credit likely led the algorithm to favor men.

As far as possible, AI-driven companies should be able to understand and explain how their systems address algorithmic fairness.

  2. Be proactive in anticipating bad ethical outcomes

When artificial intelligence goes wrong in the ethical sense, the reasons are the same as when it goes wrong in any other sense. Those developing it haven’t anticipated all of the possible bad outcomes, or they have misunderstood how causal interactions lead to harm. Leaders looking to avert risk and launch ethically robust products must spend time regularly workshopping and categorizing potentially dangerous scenarios. Only then can decisions be taken about how they should be mitigated. 

This practice, sometimes known as “ethical risk sweeping”, involves asking questions at different stages of the workflow.

  - What poor recommendations or outcomes might this product yield (think in terms of safety, security, privacy, autonomy, dignity, physical harm)?
  - Who would this impact? Does it affect different sectors of society differently?
  - How could the system change user behavior negatively?
  - When does the system privilege business wants (e.g. profit or data gathering) over customer/user needs?
  - How could the product or service be abused by bad actors?

Here’s a more thorough checklist of questions to use or adapt.
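Workshopped scenarios are easier to monitor when they are written down in a structured form with an owner and mitigations attached. A minimal sketch of one way to keep a lightweight risk register (all field names and the example scenario are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class EthicalRisk:
    scenario: str          # the bad outcome being anticipated
    impacted_groups: list  # who is affected, and how unevenly
    severity: int          # e.g. 1 (low) to 5 (critical)
    owner: str             # who is responsible for monitoring it
    mitigations: list = field(default_factory=list)

register = [
    EthicalRisk(
        scenario="Model recommends unsafe treatment for rare conditions",
        impacted_groups=["patients with under-represented conditions"],
        severity=5,
        owner="clinical-safety-lead",
        mitigations=["require clinician sign-off", "log and audit outputs"],
    ),
]

# Review the highest-severity risks first at each workshop.
for risk in sorted(register, key=lambda r: -r.severity):
    print(risk.severity, risk.scenario, "->", risk.owner)
```

The structure matters more than the tooling: a shared spreadsheet serves the same purpose, as long as every scenario has a named owner and a revisit date.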

By understanding the kinds of ethical risks that pertain to a product or service, companies will get better at monitoring and mitigating. If it’s easier to begin by thinking about these questions through the lens of more familiar risk categories — like legal or reputational — then that can also be a useful starting point. 

In some cases, leaders will have to determine for themselves whether an outcome is ethically permissible based on their own principles. Here it can help to agree on a broader set of company values or commitments — thinking about customers, the industry, broader society — and then cross reference with possible AI outcomes. 

  3. Invite diverse perspectives

Astonishingly, we continue to hear of development teams that get close to launch before realizing their product is flawed because they failed to consider a full range of perspectives. Take the Google team that built an iOS YouTube app without considering left-handed users, because everyone involved was right-handed. Or the Facebook team that hadn’t realized their Portal product didn’t recognize Black faces, because the team testing it was white.

Ethical risks are often missed when businesses don’t understand the viewpoint of particular groups. Having a more diverse team can help technology developers identify blindspots and imagine how users will engage with their product in the real world. 

Diverse teams should also extend to the skills around the table. In recent years, Silicon Valley companies have been hiring philosophers, artists, historians and social scientists (among others) to help technical staff understand things like the socio-economic context within which a certain dataset was collected, or the historical precedent of using particular types of tools. 

Not everyone can afford to do this, but it’s still important to ensure that the team reflects more than one experience of the world.  

  4. Have a structured approach that designates responsibility

Thinking about the ethical impact of AI should be an ongoing process, not a one-off exercise. This means that once a company has a clear idea of the kinds of ethical risks they need to monitor, they must find ways to build ethical vigilance into the operations of the business. For example, can teams be incentivized to anticipate and identify ethical issues? Could the organization develop a set of AI ethics best practices and include them as part of existing employee training programs? Could AI ethics controls be included in the formal control framework? 

Critically, ethics should have a clear owner who can take and assign responsibility. Sometimes this falls to those who already deal with risk or security. This person or governance team should be driven to make sure the company’s position is clearly articulated, and the impact of the AI solution on society at large is taken seriously internally.

  5. Keep a human in the loop

Companies are beginning to talk more freely about the importance of AI oversight. There is now general agreement that all artificial intelligence should be designed in a way that allows for human intervention, especially where systems are being used in contexts that affect health or human lives.

Having the capability for a human to become involved in the decision cycle is the ultimate failsafe, but it shouldn’t be a spanner in the works. On the contrary, human oversight can improve transparency, help AI make more effective human-centric decisions, and enable more powerful systems, all while ensuring user agency is safeguarded.
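In practice, the failsafe is often a confidence threshold: outputs the model is sure about are acted on automatically, while everything else is routed to a person. A minimal sketch, where the function name and threshold value are illustrative assumptions rather than any standard API:

```python
def triage(prediction, confidence, threshold=0.9):
    """Route a model output to automatic action or human review.
    Any prediction below the confidence threshold goes to a person."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(triage("benign", 0.97))     # confident enough to act on
print(triage("malignant", 0.62))  # low confidence: escalate to a human
```

Where the threshold sits is itself an ethical decision: lowering it automates more work, raising it sends more borderline (and often higher-stakes) cases to people.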

European Commission guidelines for trustworthy AI suggest that “the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.”

  6. Be diligent and flexible

Though there are lessons to learn from how other sectors have built a consideration for impact into how they think about business practice — e.g. environmental sustainability and medical ethics — as yet there is no one playbook for how to think about AI ethics. That means businesses need to be diligent in keeping track of new advice, and flexible when it comes to new ways of thinking. 

Moreover, AI firms would be wise to encourage employees to communicate ethical concerns, as well as the values they believe their employer should uphold. Cultivating an environment where team members can ask difficult questions and suggest appropriate changes will help any business discover and neutralize problems more quickly while keeping employees engaged and empowered. 

  7. Tell users that your tools aren’t perfect

Be honest. Let your users know your tools aren’t perfect. Often we talk about building trust in AI, but we should also teach users to be cautious. The best way to do both is to be open about the reality of a system’s limitations. 

Building unrealistic expectations and having users place too much faith in AI output is a surefire way to run into trouble. As is often said, these are tools to augment human intelligence and should be clearly conveyed as such. 


With AI ethics, we are still at the beginning of what promises to be a long road, and even big businesses are “building the plane as they are flying it.” That said, to overlook the issue, or sit back and wait for regulation, is to misunderstand the current environment. Once a product or service hits the market it is immediately at risk of falling foul of new norms and standards — and the world is now watching this space. 

All businesses developing, designing or deploying artificially intelligent systems need to start these conversations today. 

“When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution… Every technology carries its own negativity, which is invented at the same time as technical progress.”

Paul Virilio
