AI ethicists can help organizations with regulatory compliance and risk mitigation when deploying AI. But smart companies can also leverage these specialists to drive ethical innovation across the organization.

In response to the many ethical concerns surrounding the rise of generative artificial intelligence (genAI), including privacy, bias, and misinformation, many technology companies have started to work with AI ethicists, either on staff or as consultants. These professionals are brought on to steward how the organization adopts AI into its products, services, and workflows.

Bart Willemsen, a vice president and analyst at Gartner, says organizations would be better served by a dedicated ethicist or team than by tacking the function onto an existing role.

“Having such a dedicated function with a consistent approach that continues to mature over time when it comes to breadth of topics discussed, when it comes to lessons learned of previous conversations and projects, means that the success rate of justifiable and responsible use of AI technology increases,” he said.

While companies that add the role may be well-intentioned, there’s a danger that AI ethicists will be token hires, ones who have no meaningful impact on the organization’s direction and decisions. How, then, should organizations integrate ethicists so they can live up to their mandate of improving ethical decision-making and responsible AI?

We spoke with tech and AI ethicists from around the world for their thoughts on how organizations can achieve this goal. With these best practices, organizations may transform ethics from a matter of compliance to an enduring source of competitive advantage.

The AI ethicist as tech educator

For some, “ethicist” may conjure the image of a person lost in thought, far removed from the day-to-day reality of an organization. In practice, AI ethicist is a highly collaborative role, one that should exert influence horizontally across the organization.

Joe Fennel, AI ethicist at the University of Cambridge in the UK, frequently consults with organizations, training them on ethics along with performance and productivity.

Ethics is like jiu-jitsu, he says: “As you get to the more advanced belts, it really becomes less about the moves and much more about the principles that inform the moves. And it’s principles like balance and leverage and dynamicness.”

He approaches AI in the same way. For example, when teaching prompt engineering with the aim of reducing genAI hallucination rates, he does not require students to memorize specific phrases. Instead, he coaches them on broader principles, such as when to use instructions versus examples to teach the model.
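
A rough sketch of that distinction, in Python: the extraction task, the prompt wording, and the build_prompt helper below are invented for illustration and are not taken from Fennel’s training material. The first prompt steers the model with explicit instructions; the second teaches the same task through worked examples, a technique commonly called few-shot prompting.

    # Two ways to steer a model on the same task: explicit instructions
    # vs. worked examples (few-shot). Which works better depends on the
    # model and the task; the point is choosing between them deliberately.

    INSTRUCTION_PROMPT = """Extract the product name and price from the text.
    Respond with JSON only, using the keys "product" and "price".
    If either value is missing, use null instead of guessing."""

    FEW_SHOT_PROMPT = """Extract the product name and price as JSON.

    Text: "The Acme kettle is on sale for $29.99."
    Answer: {"product": "Acme kettle", "price": 29.99}

    Text: "Free shipping on all orders this week!"
    Answer: {"product": null, "price": null}"""

    def build_prompt(style: str, text: str) -> str:
        """Assemble the final prompt; sending it to a model is left out."""
        header = INSTRUCTION_PROMPT if style == "instructions" else FEW_SHOT_PROMPT
        return f'{header}\n\nText: "{text}"\nAnswer:'

    print(build_prompt("examples", "The Foo travel mug costs $12."))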

Fennel has coalesced these techniques into an overall methodology that builds in safety and ethical considerations, one that gets people interested in ethics, he says.

Darren Menachemson, chief ethicist at Australian design consultancy ThinkPlace, also believes that one of the key responsibilities of ethicists is communication, particularly around governance.

“[Governance] means that organizations need to have enough understanding of the technology that they really can control the risks, mitigate, [and] deal with [them]… It means that artificial intelligence as a concept needs to be well communicated through the organization so people understand what its limits are so it can be used responsibly,” he said.

There are, of course, cultural challenges to this instruction, chief among them the “move fast and break things” ethos that has defined the tech ecosystem, especially in the face of AI’s rise.

“What we’re seeing is a real imperative among many organizations to move quickly, to keep pace with what’s happening more broadly and also to take advantage of really amazing opportunities that are too significant and carry too many benefits to ignore,” Menachemson said.

Menachemson argues that ethicists, particularly those at the senior level, can succeed in spite of these challenges if they possess three qualities. The first is a deep understanding of the nuances of AI technology and the risk level it poses vis-à-vis the organization’s own risk appetite.

The second is a willingness to engage stakeholders to “understand the business context that artificial intelligence is being introduced into and get beyond the general to the specific in terms of the guidance that you’re offering.”

The third attribute is central to executing on the second. “Bewildering the senior cohorts with technical language or highly academic language loses them and loses the opportunity to have actual influence. Senior ethicists need to be expert communicators and need to understand how they can connect ethics risk to the strategic priorities of the C-suite,” he said.

Delivering actionable guidance at two levels

Although ethics may be subjective, the work of an AI or tech ethicist is far from inexact. When addressing a particular issue, such as user consent, the ethicist generally starts from a broad set of best practices and then gives recommendations tailored to the organization.

“We’ll say, ‘Here is what is currently the industry standard (or the cutting edge) in terms of responsible AI, and it’s really up to you to decide in the landscape of possibilities what you want to prioritize,’” said Matthew Sample, who was an AI ethicist for the Institute for Experiential AI and Northeastern University when Computerworld interviewed him. “For example, if [organizations are] not auditing their AI models for safety, for bias, if they’re not monitoring them over time, maybe they want to focus on that.”

Sample does give advice beyond these best practices, which may be as granular as how to operationalize ethics at the company. “If they literally don’t have even one person at the company who thinks about AI ethics, maybe they need to focus on hiring,” he said as an example. 

But Sample avoids hardline recommendations. “In the spirit of ethics, we certainly don’t say, ‘This is the one and only right thing to do at this point,’” he said.
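
Sample’s point about auditing for bias can be made concrete with a small example. The sketch below, a deliberately simplified one, checks a single common measure: whether a model’s positive outcomes are distributed very unevenly across groups (demographic parity). The toy predictions and the 0.2 flag threshold are invented for illustration; real audits cover more metrics and far more data.

    # Minimal bias-audit sketch: compare a model's positive-outcome rate
    # across groups (demographic parity). Toy data; real audits use more
    # metrics (equalized odds, calibration, ...) and real evaluation sets.
    from collections import defaultdict

    predictions = [  # (group, model_said_yes) pairs from an audit sample
        ("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 1), ("B", 0), ("B", 0),
    ]

    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in predictions:
        totals[group] += 1
        positives[group] += label

    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    print(f"Selection rates: {rates}; parity gap: {gap:.2f}")
    if gap > 0.2:  # the threshold is a policy choice, not a standard
        print("Flag for human review: outcomes differ sharply by group.")

Monitoring over time, the other gap Sample mentions, amounts to rerunning checks like this on fresh data and watching the trend.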

Menachemson takes a similar two-pronged approach in his workflows. At the top level, he says, ethicists give general guidance on what the risks are for a particular issue and what the possible mitigations and controls are.

“But there’s also an imperative to go deeper,” he said. This step should be focused on the organization’s unique context and can be done only after the basic advice is understood.

“Once that diligence is done, that’s when recommendations that are meaningful can be put to the chief executive or to the board. Until that diligence is done, you don’t have any assurance that you really are controlling the risk in a meaningful way,” he said.

In terms of what to cover and communicate, Cambridge’s Fennel believes AI ethicists should go broad rather than narrow in scope.

“The more comprehensive you are with your AI ethics agenda and assessment, the more diverse your AI safety implementation will be — and, equivalently, the more robust your risk prevention and mitigation strategy should also be,” he said.

Everyone should be an ethicist

When it comes to implementation, Jesslyn Diamond, director of data ethics at Canada-based Telus Digital, says her group works to anticipate unintended consequences of genAI, such as potential misuse, through the use of a red team, which identifies gaps and even tries to intentionally break systems.

“We also use the concept of blue teaming, which is trying to build the innovative solutions to protect and enhance the outcomes that are possible together through a purple team,” Diamond said.

The purple team is multidisciplinary in nature, spanning professionals from QA, customer service, finance, policy, and more. “There’s something about the nondeterministic nature of generative AI that really makes these diverse perspectives, inputs, and expertise so necessary,” she said.

Diamond says that purple teaming creates the opportunity for different types of professionals to use the technology, which is helpful not only in exploring the risks and unintended consequences that are important considerations for ethics, but also in revealing additional benefits.

Telus also provides specialized training to employees on concepts like data governance, privacy, security, data ethics, and responsible AI. These employees then become data stewards within their spheres of influence. To date, Telus has built a network of over 500 such data stewards.

“Becoming more familiar with how [AI] works really equips both those who are very technical and those who are less technical to be able to fully participate in this important exercise of having that diversity of expertise and background [represented],” Diamond said.

It may seem obvious that ethics should be multidisciplinary, but far too many companies pigeonhole the function in a remote corner of the organization. “It is so important that people understand the technology in order to meaningfully govern it, and that tension between literacy and participation has to happen at the same time,” Diamond said.

Creating a culture of ethical innovation

The goal of advising on ethics is not to create a service desk model, where colleagues or clients always have to come back to the ethicist for additional guidance. Ethicists generally aim for their stakeholders to achieve some level of independence.

“We really want to make our partners self-sufficient. We want to teach them to do this work on their own,” Sample said.

Ethicists can promote ethics as a core company value, no different from teamwork, agility, or innovation. Key to this transformation is an understanding of the organization’s goal in implementing AI.

“If we believe that artificial intelligence is going to transform business models…then it becomes incumbent on an organization to make sure that the senior executives and the board never become disconnected from what AI is doing for or to their organization, workforce, or customers,” Menachemson said.

This alignment may be especially necessary in an environment where companies are diving head-first into AI without any clear strategic direction, simply because the technology is in vogue.

A dedicated ethicist or team could also address one of the most foundational issues surrounding AI, notes Gartner’s Willemsen. The question most frequently asked at the board level, regardless of the project at hand, is whether the company can use AI for it, he said. “And though slightly understandable, the second question is almost always omitted: ‘Should we use AI?’” he added.

Rather than operate with this glaring gap, Willemsen says that organizations should invert the order of questions. “Number one: What am I trying to achieve? Forget AI for a second. Let that be the first focus,” he said, noting that the majority of organizations that take this approach have more demonstrable success.

This simple question should be part of a larger program of organizational reflection and self-assessment. Willemsen believes that companies can improve their AI ethics by broadening the scope of their inquiry, asking difficult questions, remaining interested in the answers, and ultimately doing something with those answers.

Although AI may be transformational, Willemsen emphasized the need to closely scrutinize how it would benefit — or not benefit — people.

“This ought to take into account not only the function of AI technology, the extent to which undesired outcomes are to be prevented and that technology must be under control, but can also go into things like inhumane conditions in mining environments for the hardware to run it, the connection to modern day slavery with ‘tagger farms,’ as well as the incalculable damage from unprecedented electricity consumption and water usage for data center cooling,” he said.

Organizations that are fully aware of these issues and aligned with their AI initiatives will see benefits, according to Willemsen. “The value of AI ethics may not be immediately tangible,” he said. “But knowing what is right from wrong means the value and greater benefit of AI ethics has a longer-term view: a consistent application of technology only where it is really useful and makes sense.”

by Eric Frank

Eric Frank is a freelance journalist and data scientist currently studying at the University of Michigan.
