Increasing adoption of AI-based applications creates added risk when insufficient attention is paid to ethics. AI expert Lance Eliot presents five AI ethics principles.

As a global CIO, I’ve overseen numerous AI-enabled application initiatives, including state-of-the-art systems built in-house, and those licensed for use from trusted third parties. The AI techniques and technologies have incorporated a myriad of the latest capabilities in Natural Language Processing (NLP), Machine Learning (ML), and Deep Learning (DL).

Whenever attempting to embrace so-called bleeding edge functionality, there is the chance things will go awry, more so than might be experienced with the adoption of more conventional, tried-and-true tech.

The pressure on CIOs to embrace AI-based applications is already substantial and will undoubtedly continue to mount. It is tempting for some CIOs to treat AI-powered apps as nothing new, merely more of the usual. But doing so will imperil those fellow CIOs and, regrettably, land them in rather hot water.

AI Ethics Demands Attention

A significant undermining threat involves a lack of attention devoted to AI Ethics. Here’s what that means.

The news has been replete lately with business stories of AI-based systems that have run afoul of various ethical and potentially legal best practices. For example, applications that ascertain whether a consumer is eligible for a home loan or a car loan might at first glance seem exceedingly innocuous, and a good candidate for the use of AI. Doing so is usually expected to reduce the amount of human labor required to make loan approval decisions, speed up the process, and make it more consistent.

That is certainly all desirable, though there is a hidden risk that might be lurking within those AI systems. The AI that is being deployed could be computationally making use of factors such as race, ethnicity and gender, which might not be readily apparent, and yet upon scrutiny could be discovered during a lawsuit or other challenge to the company utilizing the AI application.

The reaction by those caught in such a morass is to claim there was no overt intent to incorporate those factors. What they don’t realize is that the AI might have landed on those factors of its own accord (but, assuredly, do not misconstrue this as some form of sentience, as I’ll explain further in a moment). And though the CIO and the IT team might fervently argue that this was not their intention or desire, the damage would be done nonetheless, and at a heavy price.

In short, CIOs are not going to be able to wiggle out of such situations by asserting that they did not know what the AI was doing. A board of directors and the executive leadership are unlikely to cut the CIO that kind of slack. Rather, they would assert that the CIO should have known better, and was the executive responsible for guiding the company on a more proper course of action.

Yes, when it comes to the adverse consequences of AI applications, the buck stops at the desk of the CIO. Whether this is fair or not is inherently immaterial (sadly, there is rarely glory that comes as a counterbalance, since the standing assumption is that the CIO is just doing their job when successfully devising and fielding strategic systems for the firm, and ergo deserves few accolades accordingly).

Hopefully, all this highlights that being aware and proactive about AI ethics must be high on the priority list for any CIO undertaking AI-related projects.

Choosing Your AI Ethics Principles

What exactly should be taking place?

Let’s take a brief look at a popular set of AI Ethics principles as promulgated by the OECD (the Organisation for Economic Co-operation and Development), and consider ways that a CIO can put them into practice.

The OECD has proffered these five foundational precepts as part of undertaking AI efforts:

  1. AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being.

  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.

  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

  4. AI systems must function in a robust, secure, and safe way throughout their life cycles and potential risks should be continually assessed and managed.

  5. Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.

These somewhat lofty and seemingly altruistic precepts leave one with a heartwarming feeling, but they might also appear vague and loosely defined to CIOs who are used to working on more tangible, rubber-meets-the-road daily issues. For ease of reference, here is my shorthand version of the OECD’s AI Ethics principles:

  1. Inclusive growth, sustainable development, and well-being

  2. Human-centered values and fairness

  3. Transparency and explainability

  4. Robustness, security, and safety

  5. Accountability

Returning to the earlier example of a loan approval AI application, one way the AI could have ended up using the factors of race and gender might have been due to the Machine Learning approach that was selected. Machine Learning is essentially a mathematical approach that uses statistical analysis to find patterns in data. The patterns discovered can then be applied when new data comes along, presumably generating results akin to those in the historical data used during the ML training.

During the ML training, those building or devising the AI application need to ferret out what patterns the ML is landing upon. Imagine that the historical data being used for the training inadvertently contains swaths of loans denied to those of a certain race or gender. The mathematical algorithm will detect such a statistical pattern, but the ML will not have any kind of common-sense reasoning or sentience to also realize that this is a disturbing pattern and ought not to be blindly copied.
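To make this concrete, here is a minimal sketch, in Python, of the kind of disparity check a team might run on the historical training data before the ML ever learns from it. The toy dataset, field names, and tolerance are illustrative assumptions, not a prescribed method.

```python
from collections import defaultdict

def denial_rates_by_group(records, group_key):
    """Compute the loan-denial rate for each value of a (possibly
    protected) attribute in the historical training data."""
    denied = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        total[rec[group_key]] += 1
        if rec["denied"]:
            denied[rec[group_key]] += 1
    return {g: denied[g] / total[g] for g in total}

# Toy historical data with a skewed denial pattern baked in:
# group "A" was denied 60 of 100 loans, group "B" only 20 of 100.
history = (
    [{"group": "A", "denied": True}] * 60
    + [{"group": "A", "denied": False}] * 40
    + [{"group": "B", "denied": True}] * 20
    + [{"group": "B", "denied": False}] * 80
)

rates = denial_rates_by_group(history, "group")
gap = max(rates.values()) - min(rates.values())

# A gap beyond some agreed tolerance signals that the data would
# teach an ML model to reproduce the historical disparity.
print(rates, gap)
```

A check like this only surfaces the pattern; deciding what tolerance is acceptable, and how to remediate the data or model, remains a human judgment tied to the fairness and accountability precepts above.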




The AI developers and data scientists involved in crafting the AI application will need to be astute and able to catch those pitfalls. If you don’t have staff with the knowledge and the available time to look for those issues, the end result will be a ticking time bomb.

This example highlights the importance of the AI ethical precepts stipulating that an AI system has to embody human-centered values and fairness, along with transparency, explainability, and accountability.

The CIO cannot blankly assume that the team creating the AI has those crucial AI ethics principles in mind. Likewise, when licensing or using an AI application from a third party, do not allow yourself to be lulled into the belief that the third party has embraced and adopted such AI ethics precepts.

Advice for CIOs on the Use of AI

Here is the bottom line for CIOs needing to be mindful of how they make use of AI.

First, do an AI ethics assessment or audit to ascertain whether your development of AI and/or the acquisition of AI applications is making substantive use of AI ethics principles. This will give you an indication of where things stand, and whether you are starting at zero or might already be up the curve on the matter.

Second, identify a set of AI ethics principles that are best suited to your company. There is a slew of AI ethics tenets that have been published, and though they usually are similar, you’ll want to select a set that is especially appropriate to your organization.

Third, provide training on how AI ethics principles are to be used in practice. The common failing on this is to merely post the AI ethics standards on the wall or a virtual bulletin board and proclaim that they exist. But hollow words do not lead to substantive acts.


Fourth, proceed to make use of the adopted AI ethics principles, and herald their use to showcase that they are important and valuable, and that they make a difference. Even if you personally cannot gain glory for doing so, you can certainly shower your IT team and the end users with praise.

Fifth, continually monitor how the AI applications are faring and re-adjust how the AI ethics principles are being employed, ensuring that there are no loose ends or efforts that subvert the intentions of the precepts.

There is no time to waste in putting into practice a suitable series of AI ethics activities. Assuming that you are already using AI systems somewhere in your application portfolio, you want to get ahead of the game to avoid potential traps.

All the best in your AI efforts, and perhaps this AI ethics exhortation will help you sleep more soundly at night, without worrying that one of those systems is going badly astray.

The Heller Report

Add a Comment