
Why I’m an AI Optimist — And How You Can Be, Too

By Anil Cheriyan

Jul 23, 2025

AI carries significant risks, but with the right approach CIOs can harness these evolving capabilities to transform organizations, says long-time CIO and founder of Phase IV Ventures Anil Cheriyan.

There’s no denying the massive scope, scale, and impact of AI. Consider gains in drug development research, fraud detection in financial services, intelligent chatbots in customer engagement, and productivity tools (like meeting summarization). At the same time, there are well-publicized risks, such as job displacement, algorithmic bias, hallucinations, cybersecurity threats, privacy violations, intellectual property theft, and deepfakes. The risks are great enough that some AI researchers talk about “the probability of doom,” or “p(doom)” – the likelihood that widely deployed AI systems will destroy our world. Even an AI champion like Anthropic CEO Dario Amodei puts the risk at between 10 and 25 percent.

In spite of these well-founded concerns, I remain an optimist. Why? Because societies have confronted new technologies and information networks (such as print and social media) that first appeared risky and ultimately powered advances. I believe the benefits of AI will likewise prevail – if we perform as leaders should. That means learning from history, calculating when to cede decision-making to an AI, establishing the right level of governance, and architecting a diverse, multi-model AI environment that minimizes aggregate risk.

IT leaders have a critical role to play here. CIOs who strike the right balance between exuberance and risk awareness will be able to accelerate the development of outcome-oriented AI applications while addressing the underlying risks. 

Lessons from the Past

We’ve seen suspicion greet major new technologies before. Electricity, once decried as a public health menace, became integral to our lives as safety standards took hold. Likewise, computers, once dubbed “mechanical monsters” for fear they would replace people, have become indispensable.

More relevant to fears of AI is the abuse of information networks. Despite the networks’ benefits, bad actors have abused them to coerce, control, and misinform; deepfakes, robo-tweets, and acts of political interference illustrate this. Society has adapted, and will continue to adapt, to the analogous threats that accompany the development of AI. The benefits are too important. We’d no sooner shut down information networks than ban cars from the highway.

What has enabled humans to make peace with these new capabilities is the ability to apply them to transform business and enrich human lives while safeguarding against the associated risks.

How the Human-AI Interface Will Evolve: Shifting Decision Rights

The promise of AI — especially generative and agentic AI — is the potential application of intelligence to radically transform the ways in which businesses operate. Whether it’s in the areas of customer service, supply chain, or R&D, true scalable impact will only occur when the human-AI interface — where decisions are made — is rethought.

Initial distrust of AI led to building “human-in-the-loop” AI systems, where the ultimate decision-making is controlled by the machines’ flesh-and-blood counterparts. This is not necessarily an unwise approach, especially since early GenAI systems are still prone to hallucinations and may generate different answers to the same question. In the near term, keeping humans in the loop makes sense until confidence in AI outputs grows.

However, over time, I anticipate that the human-AI interface will evolve in a few ways:

  • Reinforcement learning will improve the answers that GenAI models generate to the point where trust in these systems will grow, and humans will delegate more routine decision-making to AI systems.
  • Interrogating the AI system to provide greater transparency into how it came up with an answer will also boost confidence in these systems. The way we test these heuristic systems will evolve, too. Traditional testing methods that rely on exact output matching won’t work. We’ll need to continually validate GenAI systems using empirical evaluation and testing to assess their accuracy, reliability, and suitability for different business applications (a minimal sketch follows this list).
  • We will need to advance our ability to discern the appropriate use of these systems. Knowing how and where to exploit these capabilities will require people to be more selective in how they ask questions, how they evaluate answers, and how they make decisions.
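To make that empirical-evaluation idea concrete, here is a minimal sketch in Python. The model stub and the token-overlap scorer are assumptions for illustration; a real harness would call an actual model and use embedding-based or LLM-judge scoring. The shape is the same either way: score answers against references with a similarity threshold, and repeat runs to measure consistency rather than demanding exact matches.

```python
# Minimal sketch: evaluating GenAI answers by similarity, not exact match.
# model_answer() and the token-overlap scorer are hypothetical placeholders;
# a production harness would call a real LLM and use embedding- or
# judge-based semantic scoring.

def similarity(a: str, b: str) -> float:
    """Crude token-overlap (Jaccard) score in [0, 1], standing in for
    a proper semantic-similarity measure."""
    ta = set(a.lower().replace(".", "").split())
    tb = set(b.lower().replace(".", "").split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def model_answer(question: str) -> str:
    """Hypothetical stub; replace with a real model call."""
    return "Paris is the capital of France"

def pass_rate(cases: list[tuple[str, str]], runs: int = 5,
              threshold: float = 0.6) -> float:
    """Ask each question several times, since GenAI output can vary, and
    count an answer as passing when it is similar enough to the reference.
    Returns the overall pass rate across all runs."""
    passes, total = 0, 0
    for question, reference in cases:
        for _ in range(runs):
            total += 1
            if similarity(model_answer(question), reference) >= threshold:
                passes += 1
    return passes / total

if __name__ == "__main__":
    cases = [("What is the capital of France?",
              "The capital of France is Paris")]
    print(f"Pass rate: {pass_rate(cases):.0%}")
```

Tracking a pass rate like this over time, rather than a binary match, is what lets teams judge whether a model is ready for more delegated decision-making.
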
The Right Level of Governance

As the risks of GenAI became apparent, governments, technology companies, corporate boards, and consulting firms began establishing risk management frameworks for AI governance. The concept of Responsible AI emerged, with frameworks focused on six core principles:

  • Fairness: Ensuring AI systems do not discriminate or perpetuate bias, particularly in decision-making processes.
  • Transparency: Making AI systems understandable and explainable to users.
  • Accountability: Establishing clear lines of responsibility for AI systems, ensuring someone is accountable for their actions and outcomes.
  • Privacy: Protecting sensitive data and ensuring user privacy in AI systems.
  • Security: Protecting AI systems from cyber threats.
  • Reliability and Safety: Ensuring AI systems perform as intended, minimizing risks and unintended consequences. 

While many of these frameworks are well-meaning, the rapid institutionalization of national and state regulations became a source of concern among those who viewed AI as a core competitive capability. AI is central to a new arms race between the U.S. and China, and some voiced concerns that over-regulation could hinder AI innovation, increase costs, and prevent organizations from keeping pace with rapid advances.

Amid regulatory uncertainty, it becomes incumbent on individual corporations to build their own “right” level of governance. It’s in each company’s interest to demonstrate the effectiveness of its AI governance framework in order to build and maintain trust with customers, employees, the board of directors, and oversight communities.

The Upside of Decentralization

Science fiction, from HAL 9000 in 2001: A Space Odyssey to the machine god in The Matrix, reflects real anxiety over the power of AI. Given the highly decentralized nature of AI today and its rapid democratization, I believe these fears are exaggerated.

GenAI models depend on the data they ingest and are trained upon, with reinforcement learning used to optimize their outputs based on user feedback or other criteria. At last count there were well over 700,000 large language models (LLMs), of which 40 to 50 are in wide use (from providers such as OpenAI, Google, Anthropic, Meta, Alibaba, and DeepSeek).

Each of these models is competing to deliver artificial general intelligence: the ability to understand, learn, and apply intelligence at the level of a human being. There is no central “control” of this intelligence. What’s more, several of these LLMs are open-source models, allowing users (including a number of sovereign nations) to modify them for their own purposes, further propagating decentralization.

Self-correcting mechanisms are crucial to navigating some of the biggest risks associated with AI, such as bias amplification, potential for misuse, and widespread job displacement. The decentralization of GenAI increases the likelihood that these models will be developed and used in ways that align with societal and business values and norms. Why? Because decentralization enables continuous learning, adaptation, and improvement in how AI interacts with and influences the way we work, live, and advance as a society. This is where CIOs come in.

Implications for the CIO: An AI Optimist’s Playbook

CIOs — as leaders of teams that implement AI systems — will need to strike the right balance when it comes to the evaluation and application of AI within their organizations. Neither doomsday pessimism nor optimistic naivete will drive meaningful, scalable change.

There are four actions IT leaders can take to stoke creative thinking around AI applications in the enterprise while also managing attendant risks.

  • Address AI fears head on. IT leaders can acknowledge and enumerate AI risks and build safeguards to minimize them. This may include architecting the appropriate integration protocols, implementing heuristic “test and learn” processes, and insisting on “chain-of-thought” transparency for each AI model. They can also educate company leadership, workforces, and IT teams on the appropriate use of AI, and establish change management processes to increase AI adoption and address underlying reluctance.
  • Invest in evolving the human-AI interface. Before applying any AI-powered automation, IT leaders should rethink existing workflows and consider the user experience. They can start conservatively by allocating most decision rights to people as a means of testing machine-derived results. Then they can adapt their framework based on the readiness of the AI model and its underlying data (to ensure its quality, transparency, maturity, and accuracy) before relying more on machines.
  • Develop the right level of governance. CIOs should tailor their AI framework (public versions are available as starting points) to their companies and regulatory environments, and evaluate the risks against the rewards for the opportunities being pursued. While these may be primarily business-led decisions, the CIO can work closely with business leaders and risk functions to clarify accountability and establish risk reporting processes for investors, regulators, boards of directors, executives, and the broader company.
  • Leverage multiple orchestrated AI models. By using more than one AI model, organizations can take advantage of the relative strengths of each while minimizing the aggregate risk. While this approach may cost more and demand effective integration (between company data and LLMs, between agentic AI and business workflows, and among agents), it is a worthwhile investment in today’s fast-evolving AI environment. A rough sketch of this pattern follows the list.
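To illustrate the last two points together, the sketch below is a hypothetical pattern, not a prescribed implementation: it routes a prompt to two stand-in model clients, automates the decision when they agree, and shifts decision rights back to a human reviewer when they do not. The client stubs and the string-equality agreement check are assumptions for illustration; a real orchestration layer would call distinct providers and compare answers semantically.

```python
# Minimal sketch: multi-model orchestration with a human-in-the-loop
# fallback. ask_model_a/ask_model_b and agree() are hypothetical
# placeholders; real code would call distinct LLM providers and use a
# semantic agreement check rather than string equality.

from dataclasses import dataclass

@dataclass
class Decision:
    answer: str
    decided_by: str  # "ai-consensus" or "human"

def ask_model_a(prompt: str) -> str:
    return "Approve the request"  # stand-in for provider A

def ask_model_b(prompt: str) -> str:
    return "Approve the request"  # stand-in for provider B

def agree(a: str, b: str) -> bool:
    """Placeholder agreement check; production code would compare
    meaning, not exact strings."""
    return a.strip().lower() == b.strip().lower()

def escalate_to_human(prompt: str, candidates: list[str]) -> str:
    """Route disagreements to a person, per the decision-rights framework."""
    print(f"Human review needed for {prompt!r}; candidates: {candidates}")
    return candidates[0]  # in practice, a reviewer chooses or rewrites

def decide(prompt: str) -> Decision:
    a, b = ask_model_a(prompt), ask_model_b(prompt)
    if agree(a, b):
        # Models concur: safe to automate this routine decision.
        return Decision(answer=a, decided_by="ai-consensus")
    # Models disagree: decision rights shift back to a human.
    return Decision(answer=escalate_to_human(prompt, [a, b]),
                    decided_by="human")

if __name__ == "__main__":
    print(decide("Should this routine service request be approved?"))
```

As trust in a given model and its underlying data matures, the same scaffolding lets an organization widen the set of cases decided by AI consensus and narrow the human escalation path.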

I firmly believe we are on the most exciting journey of our technology careers with the opportunity to leverage these amazing AI capabilities — if we take the right actions. Overly risk-averse approaches will result in nothing getting done. Irrational exuberance will result in risky, disjointed, and non-scalable applications. I remain cautiously, but steadfastly, in the AI optimist camp. We IT leaders can’t sit on the sidelines waiting for this all to be worked out. CIOs who lean in with a positive but clear-eyed view of AI will enable their organizations to take full advantage of these transformative capabilities. 


Written by Anil Cheriyan

Anil Cheriyan is the founder of Phase IV Ventures, providing advice to banks, technology firms, and late-stage growth start-ups. He previously served as CTO at Cognizant Technology Solutions, CIO at Truist (formerly SunTrust Banks), and the U.S. Presidential appointee leading Technology Transformation Services, charged with “making the lives of the public better through technology.”