“Ethical Machines” breaks down ethical risk mitigation planning for AI – Compliance Week

Perhaps your organization has a team of developers building industry-leading AI from the ground up. Perhaps your company’s HR department wants to buy a recruiting tool to streamline the hiring process. Whatever the AI’s objective function, the prospect of automating a defined task opens the door to new efficiencies and saves your organization valuable time and money.

Viewed through a utopian lens, it all sounds wonderful.

As Reid Blackman explains in detail in his new book “Ethical Machines,” the ethical risks of using AI are enormous. In some use cases they are unpredictable; in others, they creep in sideways, even when consciously anticipated.

Take Amazon, for example: a company that learned the ethical risks of AI development the hard (i.e., expensive and reputation-damaging) way.

A team of Amazon engineers built a resume-reading AI to help humans with the task of reviewing tens of thousands of resumes a day, Blackman explained in his book. The team trained the AI on a decade of hiring data and instructed the machine to look for the pattern of “interview-worthy” candidates.

“Women are not interview-worthy,” was the pattern it spat out.

The AI was biased. More precisely, it learned to be biased from the troves of labeled data it was fed. Amazon eventually scrapped its recruiting engine, despite subsequent efforts to strip the discriminatory signals from the inputs; there was no guarantee the AI would not uncover other patterns in the data that could lead to discriminatory judgments.
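To make the mechanism concrete, here is a minimal, hypothetical sketch of how “learning by example” absorbs bias. This is not Amazon’s actual system; the data and feature names are invented for illustration. A classifier trained on historical hire/no-hire labels reproduces whatever pattern predicts those labels, including a proxy for gender:

```python
# Hypothetical illustration -- not Amazon's model or data.
# A classifier trained on historically biased hire/no-hire labels
# reproduces that bias, even without an explicit "gender" column.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented features: years of experience, plus a proxy that
# correlates with gender (e.g., a resume keyword such as
# "women's chess club captain"), encoded 0/1.
years_exp = rng.uniform(0, 15, size=n)
gender_proxy = rng.integers(0, 2, size=n)  # 1 = resume signals "woman"

# Historical labels: past recruiters interviewed fewer women at the
# same experience level -- the bias lives in the labels themselves.
logits = 0.4 * years_exp - 3.0 * gender_proxy - 2.0
interviewed = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([years_exp, gender_proxy])
model = LogisticRegression().fit(X, interviewed)

# The learned weight on the proxy is strongly negative: the model
# has "learned to be biased" from the examples it was shown.
print("coefficients [experience, gender_proxy]:", model.coef_[0])
```

The point is that nobody programmed the discrimination; the model inferred it from its examples, which is why scrubbing one proxy offers no guarantee another won’t be found.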

Machine learning (ML), a subset of AI, is “software that learns by example,” Blackman said in an interview with Compliance Week. Importantly, ML is not an ethically neutral tool, he argued. It’s not like, say, a screwdriver.

“The people who commission, design, develop, deploy and approve an AI are very different from the people who make screwdrivers,” wrote Blackman. “When you develop AI, you develop ethical – or unethical – machines.”

Therefore, it is the people – the enterprise as a whole – involved in every phase of an AI’s lifecycle who are accountable for its insights, decisions, and impact, whether intended or not. Blackman’s book advocates that companies adopt a comprehensive AI ethics program that instills thoughtful decision-making into every phase of an AI’s development, from conception to deployment, and engages a cross-functional team of experts (not just engineers) to avoid costly dead ends like Amazon’s – or worse.

Bias, explainability, privacy – where compliance comes into play

With the wry humor that pervades the book, Blackman poked fun at the three most debated issues in AI ethics: bias, explainability, and privacy. He didn’t joke about the problems themselves; they are real. In fact, he devoted entire chapters to demystifying them and outlining steps to address them. But he joked about the clichéd way people talk about them.

Everyone knows these are critical challenges, but the alleged “subjectivity” of ethics clouds the conversation, Blackman observed, to the point where people shrug and abandon the debate, allowing confusion to prevail. He made a convincing argument that ethics is not subjective when it comes to the use of AI in an organization. Business leaders need to be clear about where they stand ethically and which risks they are willing to take on AI’s behalf. Different companies will therefore have different appetites for ethical risk.

“One thing I’m trying to stress is that it’s ethical risk mitigation, not ethical risk elimination. You need to weigh this risk against other types of risk, such as simple bottom-line profit risks,” Blackman told CW.

“I’m not trying to get them to radically reconsider their ethical priorities. I’m trying to get them to understand that there are real risks here that need to be brought into the deliberation process, which should certainly include compliance officers,” he added.

Here’s a quick rundown of the Big Three according to Ethical Machines:

  • Bias: As seen in the Amazon example, bias occurs when an ML produces a series of discriminatory outcomes or automated decisions that range from ethically problematic to outrageous, depending on how those decisions affect people. Along with gender bias, think of racial profiling.
  • Explainability: The degree to which an ML’s algorithm (i.e., what happens between its inputs and outputs) can be deciphered by humans. A “black box” ML is unexplainable; the pattern the machine identifies is too complex for humans to understand. This issue becomes important when people are owed an explanation for an ML’s decisions as a matter of basic human decency. For example, someone denied a mortgage or parole on the basis of an AI’s decision might reasonably ask for an understandable explanation (see the sketch after this list).
  • Privacy: Data is the basis on which an ML is trained, and the more the better. The issue of privacy arises when data is collected about individuals without their knowledge or consent. Furthermore, an ML trained on personal data can learn to make inferences that themselves threaten to invade people’s privacy. Think facial recognition software.
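To ground the explainability point, here is a minimal, hypothetical sketch (the lending data and feature names are invented, not drawn from the book): a shallow decision tree is one kind of model whose input-to-output logic a human can read directly, in contrast to a black-box model whose learned pattern resists inspection.

```python
# Hypothetical illustration of an explainable model: a shallow
# decision tree whose decision logic can be printed and read by a
# human, unlike a black box. Data and feature names are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 1000

# Invented mortgage-style features: income (in $1,000s) and
# debt-to-income ratio.
income = rng.uniform(20, 200, size=n)
debt_ratio = rng.uniform(0.0, 1.0, size=n)
approved = (income > 60) & (debt_ratio < 0.4)

X = np.column_stack([income, debt_ratio])
tree = DecisionTreeClassifier(max_depth=2).fit(X, approved)

# export_text renders the learned rules as human-readable if/else
# logic -- the kind of explanation a denied applicant could
# reasonably be given.
print(export_text(tree, feature_names=["income", "debt_ratio"]))
```

Printed rules like these are the kind of understandable explanation a denied applicant could actually be given; a deep neural network trained on the same data would offer no comparable readout.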

Compliance could and should play an oversight role in these challenging areas, Blackman recommended. He advised any company considering investing in AI to form a cross-functional AI ethics committee and to include someone from the compliance function on it (in addition to data scientists, subject matter experts, and individuals with business and legal expertise).

“It’s not the case that a compliance officer has to be involved in every single project. They need to be involved whenever ethical smoke is discovered,” Blackman explained. Formally including compliance in the AI ethical risk due diligence process also helps ensure that “there is a process by which the right people can take a look [at the potential ethical risks] to see if something is actually burning there,” he said.

“It’s not the case that a compliance officer has to be involved in every single project. They need to be involved whenever ethical smoke is discovered.”

Reid Blackman, author of “Ethical Machines”

Most obviously, compliance officers will be involved in making sure organizations comply with upcoming AI regulatory requirements. In April 2021, the European Commission proposed a draft regulation on AI – a legal framework that would impose a wide range of requirements on both the private and public sectors.

What regulations would the AI Act contain?

“These three big issues I’m addressing — the challenges of bias, explainability, and privacy — are going to be 100 percent in the regulations in the near future,” Blackman said.

He expects the EU AI Act to come into force in three years.

“Three years may sound like a lot, but when you’re talking about the scale of organizational change required to actually be compliant, you don’t start six months before the regulations take effect. We are talking about training tens of thousands of people. We’re talking about updating policies, [key performance indicators], infrastructure, and governance. It’s a big lift.”

“Ethical Machines” is a good place to start.
