The risks of getting AI wrong are elevated for large, established companies, yet they can’t afford to sit on the sidelines. Executive coach Joe Topinka and Shaw Industries VP of IT Saurabh Shah share their framework for making sure that AI delivers real value without sacrificing governance or trust.
Most established companies are under pressure to “do something with AI” today. Boards want updates. Investors expect returns. Customers demand experiences that only intelligent, data-driven systems can deliver.
But the costs of getting AI wrong — from wasted spend to brand damage — are far higher for mature enterprises than for startups. Unlike new companies that can pivot with minimal downside, established companies must protect decades of hard-won customer trust, meet regulatory obligations, and operate long-lived systems that can’t be replaced overnight. This makes responsible vetting of AI opportunities a necessity.
The two of us — an executive coach and the VP of IT at Shaw Industries, a global leader in flooring and other surface solutions in the midst of a modernization effort — collaborated on a framework that can help other established companies ensure that innovation delivers business value without sacrificing governance or trust.
Common Pitfalls to Recognize—and Avoid
Vetting AI ideas in large organizations is harder than it looks. Some AI pitfalls are challenges that leaders encounter with any major technology initiative, while others are unique to AI or experienced with greater intensity because of the speed, visibility, and risks involved.
These include:
- Approval paralysis. Governance structures, established to mitigate risk, can bury promising ideas under layers of approvals. This slows down any kind of innovation, but with AI it often means losing ground as competitors adopt tools that quickly scale.
- High demand and low capacity. New technologies typically spark widespread interest, creating more demand than IT can handle. With AI, this pressure is magnified because nearly every vendor is marketing AI features and almost every business unit wants to experiment, overwhelming already stretched IT organizations.
- Shadow AI. When IT can’t keep up, business units launch AI pilots on their own. Unlike traditional shadow IT, unvetted AI pilots pose greater risks — exposing sensitive data, introducing bias, or creating compliance problems if they aren’t aligned with enterprise governance.
- Pilot projects that don’t address a business problem. AI proofs of concept often dazzle, but if they don’t solve a real business problem, they waste resources and create disillusionment. The risk of showy, low-value pilots is especially acute with AI, given the hype cycle and executive pressure to “do something with AI.”
- Fear, uncertainty, and doubt that lead to costly pauses. Underlying concerns about disruption, accountability, or loss of control often create hesitation at every level and across departments, slowing meaningful progress on AI.
A Four-Phase Model for Assessing AI
Recognizing these hurdles is the first step. To move beyond them, organizations need a structured way to separate opportunity from hype and to advance ideas safely without stifling innovation. External partners, vendors, or consultants can provide support along the way, but enterprise leaders must set the priorities and retain ownership to ensure alignment with business goals.
That’s where a disciplined, four-phase model for assessing AI opportunities comes in. We’ve found that this approach works across the breadth of AI types, whether you’re exploring generative content, predictive maintenance, or customer-facing chatbots.
- Signal detection. The best AI ideas often originate closest to the business problems: in customer service, on the shop floor, inside marketing teams. Leaders should create open channels so employees can easily share their suggestions. But the idea submission process should also require critical context: every proposal should state the business problem it addresses and how it will improve outcomes or reduce costs.
- Feasibility and alignment checks. Not every idea deserves the green light. A disciplined triage process should review:
- Strategic fit. Does it advance business priorities? Tying AI investments to business outcomes, not just technology milestones, helps prevent scaling a cool tool that solves nothing essential. For example, a demand forecasting model should be tied to improving forecast accuracy by 20% to cut carrying costs and reduce stockouts, while a dynamic pricing model might be linked to improving margin by 3% to 5% or accelerating revenue growth in key markets.
- Data and technology. Is there clean, accessible data? Are our systems AI-friendly? If data is poorly governed, trapped in legacy silos, or inconsistent in quality, AI models will struggle to produce reliable results regardless of how advanced the algorithms may be.
- Risk and compliance. What are the legal, privacy, cybersecurity, and ethical implications? Ethical risks often involve protecting sensitive data, preventing bias, and ensuring transparency in how AI is used. For instance, a bank must ensure customer data is not exposed in generative AI models such as those used for text, images, or other content generation. A healthcare company, on the other hand, must guard against biased diagnostic outcomes, such as incorrect risk scoring or treatment recommendations, that result from skewed training data.
- Cultural readiness. Will people (employees, customers) adopt this, or is it too big a leap? For example, an organization whose employees consistently use data and analytics to inform top-down decision making is far more likely to embrace an AI-powered forecasting tool. On the other hand, widespread resistance to change, lack of trust in data, or an expectation that AI will replace human judgment are clear signs that even modest AI pilots are likely to stall, let alone transformative ones.
- Thoughtful experimentation. At this point, the organization can run small pilots tied to explicit success criteria. A good test doesn’t just prove the technology works; it verifies whether the initiative actually moves a key business metric. Experiments should be fast and contained, with a plan for next steps if results are positive. For example, with the new demand forecasting model, the company should A/B test the model against current forecasting methods on select product lines to determine whether it improves forecast accuracy and reduces inventory costs, using the results to refine and scale the approach (see the sketch after this list).
- Preparation to scale. Finally, decision-makers should make sure that processes are being redesigned and not just automated, that people are trained and their roles are updated, and that governance is in place to monitor for drift, hallucinations, misuse, or evolving compliance requirements. Then, over a reasonable period — six to nine months, depending upon the complexity of the solution — the AI solution can be deployed broadly. For example, Shaw piloted and successfully deployed a generative AI solution to empower customers with faster, more personalized flooring selection experiences.
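To make the experimentation step concrete, here is a minimal sketch of how such an A/B-style backtest might be scored. It assumes held-out actuals plus forecasts from both the current method and the candidate model are already available; the MAPE metric, the should_scale decision rule, and the 20% improvement threshold are illustrative assumptions, not Shaw’s actual implementation.

```python
# Sketch: score a candidate demand forecasting model against the current
# method on pilot product lines. Names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class PilotResult:
    product_line: str
    baseline_mape: float   # error of the current forecasting method
    candidate_mape: float  # error of the candidate AI model

def mape(actuals: list[float], forecasts: list[float]) -> float:
    """Mean absolute percentage error; lower is better."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)

def evaluate_pilot(line: str, actuals: list[float],
                   baseline_fc: list[float], candidate_fc: list[float]) -> PilotResult:
    return PilotResult(line, mape(actuals, baseline_fc), mape(actuals, candidate_fc))

def should_scale(results: list[PilotResult], min_improvement: float = 0.20) -> bool:
    """Advance only if the candidate beats the baseline by the agreed margin
    (here, a 20% relative reduction in forecast error) on every pilot line."""
    return all(r.candidate_mape <= r.baseline_mape * (1 - min_improvement)
               for r in results)

# Example: two pilot product lines, each with held-out actuals and both forecasts.
results = [
    evaluate_pilot("carpet_tile", [100, 120, 90], [110, 100, 80], [102, 118, 92]),
    evaluate_pilot("hardwood", [200, 180, 210], [170, 200, 230], [195, 184, 205]),
]
print("Scale the model?", should_scale(results))
```

The point of the explicit should_scale rule is that the success criterion is agreed before the pilot runs, so the go/no-go decision is mechanical rather than negotiable after the fact.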
What “Good” AI Looks Like
An AI idea can be assessed on two complementary dimensions. IT and business leaders can use a quick screening checklist to determine whether the proposed solution is worth exploring, and then conduct a deeper foundational review to see whether their organization is ready to deploy the idea successfully at scale.
Quick Screen
This is an executive-friendly initial filter. It helps leaders determine whether an idea deserves to move forward. If a proposal doesn’t pass this initial screen, it should be paused or redirected.
Deeper Review
If an idea passes the initial screen, leaders should also make sure the following five foundational ingredients are available.
- Access to rich, trusted data. The best AI outcomes start with high-quality data, both structured (ERP transactions) and unstructured (emails, documents, chats). Without that, AI works blind. Organizations need reliable processes to move and prepare data at scale so information flows smoothly, in a clean and usable form, from where it is created into the AI system, even as volumes grow. For a customer-facing chatbot initiative, for example, this means ensuring access to clean, governed customer information, a 360-degree view of customer interactions, prior service transcripts, and a curated knowledge base so responses are accurate and contextually relevant.
- Human judgment from domain experts. AI is a force multiplier, not a replacement for seasoned experience. Effective teams embed business experts into the process who can validate outputs and keep AI tied to what matters most. In the case of the chatbot, customer service leaders should regularly review AI-generated responses for accuracy, alignment with company policy, and brand tone.
- Solution architecture with humans at the helm. Good AI systems don’t turn into black boxes. They’re designed so people stay in control, with oversight, escalation paths, and “stop buttons” that allow for course correction. The AI in the chatbot, for example, should know when to pull a human in, seamlessly escalating an interaction to an agent when it detects customer frustration, complexity, or sensitive issues (see the sketch after this list).
- Scalable multi-agent orchestration. As enterprises evolve from individual AI pilots to ecosystems of collaborating agents, they need thoughtful design, clear agent roles, and robust governance to ensure security, learning, and interoperability. The chatbot, for example, could be integrated with other AI agents such as those for order management, logistics tracking, and billing systems so it can provide end-to-end answers rather than narrow responses.
- Cybersecurity and risk governance. AI introduces new risks: data leakage, model misuse, hallucinations, compliance violations. Zero trust security, privacy controls, model audits, and clear accountability aren’t overhead; they form the architecture of trust. The chatbot initiative above should incorporate safeguards to prevent the exposure of sensitive customer data, ongoing monitoring for biased or inappropriate responses, and routine audits to ensure compliance and trustworthiness.
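As a concrete illustration of the “humans at the helm” principle above, here is a minimal sketch of chatbot escalation logic. The signals (confidence, detected topic, frustration score), the thresholds, and the helper names are all hypothetical assumptions for illustration; a real system would draw these from its own intent and sentiment models and tune the thresholds against observed outcomes.

```python
# Sketch: routing logic that decides when a support chatbot answers and when
# a human takes over. All signals and thresholds are illustrative assumptions.
from dataclasses import dataclass

SENSITIVE_TOPICS = {"billing dispute", "legal", "injury", "data privacy"}

@dataclass
class BotTurn:
    reply: str | None          # drafted answer, if the bot produced one
    confidence: float          # model's self-reported confidence, 0..1
    detected_topic: str        # coarse intent classification
    frustration_score: float   # e.g., from a sentiment model, 0..1

def route(turn: BotTurn, failed_attempts: int) -> str:
    """Decide whether the bot answers or escalates to a human agent.

    Escalation on sensitive topics, visible frustration, low confidence,
    or repeated failures is the "stop button" that keeps people in control.
    """
    if turn.detected_topic in SENSITIVE_TOPICS:
        return "escalate: sensitive topic, hand off with full transcript"
    if turn.frustration_score > 0.7:
        return "escalate: customer frustration detected"
    if turn.reply is None or turn.confidence < 0.6:
        return "escalate: low confidence, do not guess"
    if failed_attempts >= 2:
        return "escalate: repeated failures on this issue"
    return f"answer: {turn.reply}"

# Example: a confident answer goes out; a billing dispute goes to a person.
print(route(BotTurn("Your order ships Friday.", 0.92, "order status", 0.1), 0))
print(route(BotTurn(None, 0.95, "billing dispute", 0.2), 0))
```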
A Real-World Case Study
At Shaw Industries, demand for AI solutions had been growing across the company, from marketing personalization to operational forecasting. Shaw was in the midst of a substantial digital transformation, modernizing its core platforms with a cloud ERP to drive greater efficiency and agility across the business. This foundational modernization was also a deliberate choice so that future AI initiatives could rely upon clean data, well-understood processes, and a scalable architecture.
This ongoing modernization effort, however, has not paused the business’s need to innovate. As business units propose AI ideas, those ideas are vetted against strategic goals, technical readiness, and the organization’s broader change capacity. Meanwhile, by strengthening its core systems, Shaw is creating the foundation to unlock AI’s full business value as the transformation advances.
Among the AI initiatives Shaw has successfully deployed thus far are AI and generative AI solutions that improve customer engagement and predictive models that optimize marketing costs. Shaw’s experience illustrates how mature companies can balance immediate needs with long-term thinking, making sure AI doesn’t just impress but delivers measurable benefits such as increased customer engagement, reduced downtime, and faster decision-making.
The Leader’s Role in AI Vetting
None of this happens by accident. Leaders must be intentional about creating the right environment for eliciting and assessing AI proposals. They can:
- Invite ideas, making it safe for teams to propose opportunities tied to real problems.
- Insist on discipline, using frameworks like these to avoid skipping critical steps.
- Create transparency so that teams know why an idea advances or stalls.
- Celebrate learning even when pilots fail, because the lessons sharpen the next opportunity.
AI will change every business, but not always in predictable or positive ways. In established companies — where customers, regulators, and investors are all watching closely — the costs of getting AI wrong are high.
A structured approach to vetting protects the business and unlocks true innovation. It elevates AI from a technical experiment to a disciplined path to growth, trust, and long-term competitive advantage.
Joe Topinka is an award-winning CIO, executive coach, and author with more than four decades of technology leadership experience. He founded CIO Mentor to advise IT and business leaders across industries. Topinka has served as a strategic advisor to Fortune 500 firms, startups, and public agencies. A former Board chair and current board member emeritus of the BRM Institute, he is the author of IT Business Partnerships: A Field Guide and the forthcoming Beyond the Algorithm: Lead What Machines Can’t, a playbook for accountable, business-minded leadership in the AI era.
Saurabh Shah is a vice president of IT at Shaw Industries, a Berkshire Hathaway Firm, and a former Fortune 500 CIO. With global experience leading digital transformation, he has driven ERP modernization, enterprise data initiatives, and AI adoption to deliver business value. Known for balancing innovation with governance, Shah builds high-performing teams and scalable architectures that align technology with business priorities.
