How to Implement AI Ethics in Business, Part 1

AI has become a business necessity. And AI ethics are quickly becoming a key risk requirement. No company can afford reputational damage due to bias in algorithms or discriminatory behavior.

Yet most companies have yet to fully understand what AI ethics require, according to Reid Blackman, a former professor of philosophy and ethics and author of Ethical Machines: Your Concise Guide to Fully Unbiased, Transparent, and Respectful AI.

BLACKMAN: We love AI because it does things very quickly and at scale, but that means the ethical and reputational risks of AI also arrive quickly and at scale. When you talk about discriminatory AI, you’re not talking about discriminating against this or that person; you’re talking about discriminating against a lot of people at once.

Businesses are going to do what they need to do with AI to improve their bottom line, but along the way they shouldn’t put their brand, let alone people, at risk. It’s more than just a hiring manager discriminating against one person.

EDGE: Where are most companies in their thinking on this, in your experience?

BLACKMAN: The dominant business strategy these days, if you can call it that, is finger-crossing. Companies just hope bad things don’t happen. And when a company does do something, it usually focuses on bias, which is just a subset of the total ethical and reputational risk.

There is no doubt that some multinationals are currently under scrutiny, being investigated by regulators, and facing fines. But to be honest, there are also organizations that will get away with it. Different organizations are going to have different risk appetites.

I’m an ethicist, so I think you should really identify and mitigate those risks, because people are getting hurt. But if you ask me the simple, empirical question of whether organizations can take the risk and maybe get away with it, of course they can. I wouldn’t say that makes them responsible stewards of their brands, but they can. It’s a bet you’re making, and it seems like a foolish bet to me.

Don’t leave the problem to the technologists

EDGE: Do you think companies generally underestimate the risks associated with AI because it is a new field?

BLACKMAN: There is an underestimation of the risk, partly because companies don’t understand what the risks are. One of the problems we have, quite frankly, is that talking about artificial intelligence, and more specifically machine learning, is intimidating to a lot of non-technologists.

They think, “Oh, AI, the risk of AI, the bias of AI, that’s for the techs to figure out. That’s not what I do. I’m not a technologist, so I don’t deal with that.” The truth is that it is senior leaders who are ultimately responsible for the ethical behavior and reputation of the organization.

And they underestimate the risks because they don’t believe they can really understand them and because they’re – again, to be perfectly frank – intellectually intimidated by expressions like machine learning and artificial intelligence.

The three big risks

EDGE: You say the three big risks are privacy, the black box issue, and bias.

BLACKMAN: So those are the big three. And then the fourth is just a big bucket that I would call “use-case-specific ethical risks.” The reason that bias, explainability, and privacy keep cropping up in discussions of AI and machine learning ethics is that the likelihood of realizing these risks is greatly increased by the very nature of machine learning.

It is the nature of the machine learning beast that it recognizes very complex patterns in data, and so it can very well recognize discriminatory or biased patterns. It’s the nature of the machine learning beast to recognize phenomenally complex mathematical patterns that are too complex for humans to understand, and so you have the problem of not being able to explain how the AI does what it does. And it’s the nature of the machine learning beast that it requires an enormous amount of data to train, and so data scientists are incentivized to collect as much data as they can.
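To make the bias point concrete, here is a minimal sketch (our illustration, not Blackman’s) of how a discriminatory pattern can be surfaced after the fact: a simple fairness metric comparing a model’s positive-outcome rates across groups. The column names (“approved”, “group”) and the toy data are hypothetical.

```python
# Hypothetical illustration: measuring how unevenly a model's decisions
# fall across groups (demographic parity). Column names are assumptions.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

# Toy decisions from, say, a lending model, labeled by a protected attribute.
decisions = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})
print(demographic_parity_gap(decisions, "approved", "group"))  # 0.75 - 0.25 = 0.5
```

A gap like this does not prove intent, but it is exactly the kind of pattern a model trained at scale can pick up and then reproduce across millions of decisions.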

Then there are the use-case-specific ethical risks. If, for example, you are creating a self-driving car, the main ethical risks will not be bias, explainability, or privacy violations, but killing and maiming pedestrians. If you’re building facial recognition software, it’s less about the training data you collect that could violate people’s privacy and more about the surveillance you engage in.

EDGE: You talk about structure and content – how does a business start building some sort of structure to mitigate those risks?

BLACKMAN: The distinction between content and structure is really important. The content question is: What are the ethical risks that we are trying to mitigate? The structural question is: How do we identify and mitigate those risks?

Many organizations don’t know how to approach either of these questions, and the main problem is that they don’t dig deep enough into the content before tackling the structure side.

They identify the content at an extremely high level, but it is so general that it cannot be put into practice. So one of the things I recommend to clients is to think much more deeply about the ethical risks they’re trying to identify and mitigate that are specific to their industry or organization.

Make sure that whenever you articulate what you consider to be an ethical risk, you tie it to things that are simply off the table for your organization. So if you value X, that means you will never do Y. For example: Because we value privacy, we will never sell anyone’s data to a third party.

At some point, once you’ve gone deeper on the content side, you have to start building that structure: What does your governance look like? What are the policies? What are the KPIs for compliance with those policies? What are the procedures that our data scientists, engineers, and product owners must follow? Do we need an ethics committee? And so on.
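As a sketch of what that structural side can look like in practice, here is a hypothetical pre-deployment gate that turns “if we value X, we never do Y” policies into explicit, checkable rules. The specific policies, thresholds, and field names below are illustrative assumptions, not a framework prescribed in the interview.

```python
# Hypothetical illustration: encoding ethics policies as a deployment gate.
# Every check, threshold, and field name below is an assumption for the sketch.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ModelReview:
    sells_data_to_third_parties: bool
    demographic_parity_gap: float
    has_explainability_report: bool

@dataclass
class PolicyCheck:
    name: str
    passes: Callable[[ModelReview], bool]

POLICIES: List[PolicyCheck] = [
    PolicyCheck("We value privacy: never sell data to third parties",
                lambda r: not r.sells_data_to_third_parties),
    PolicyCheck("Bias KPI: parity gap at or below 0.10",
                lambda r: r.demographic_parity_gap <= 0.10),
    PolicyCheck("Explainability: report attached for reviewers",
                lambda r: r.has_explainability_report),
]

def deployment_gate(review: ModelReview) -> bool:
    """Return True if all policies pass; print failures for escalation."""
    failures = [p.name for p in POLICIES if not p.passes(review)]
    for name in failures:
        print(f"FAILED: {name}")  # e.g., route to the ethics committee
    return not failures

print(deployment_gate(ModelReview(False, 0.05, True)))  # True: all checks pass
```

The point is not these particular checks but the shape: governance, policies, and KPIs become actionable once they are concrete enough to be evaluated per model and per use case.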
