Niel Nickolaisen, an IT advisor and Field CTO at Valcom Technologies, makes the case for ditching functionality-weighted scorecards in favor of a simpler rubric built around vendor differentiation, innovation focus, and how fast you can get to value.
I know what you're thinking: if we're not selecting technology based on functionality, what are we selecting on? Hear me out.
Functionality is important. Critical, even. But it may not be the most important factor in selecting one system over another.
I use a selection process that treats functionality as table stakes. Every system I consider should deliver comprehensive functionality in an operationally excellent, best-practice way. If a system has functionality gaps, I drop it from consideration. Market leaders, by definition, have comprehensive functionality—that's part of what makes them market leaders.
If I play on the bleeding edge and consider nascent products that aren't yet established, I'm choosing to trade off innovation against mature functionality. That's a different conversation.
For the rest of us, the real question isn't whether a system can do what we need—it's whether we're evaluating the right things once functionality clears the bar. In my experience, most selection processes get this wrong.
The 120-Row Spreadsheet
Early in a new CTO role, I walked into a human capital management (HCM) selection already in progress. The project manager invited me to the meeting where the cross-functional team planned to finalize their selection criteria. He shared his masterpiece—a spreadsheet with eight columns and 120 rows. Columns for functionality, weighted importance, vendor scores, and totals. Rows for every element of desired functionality: Does it integrate with payroll? Does it include standard reports?
The idea was to evaluate each of the three vendors, enter their scores, total the weighted results, and make a selection.
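For a sense of the mechanics, here's a minimal sketch of what a spreadsheet like that computes: a weighted sum per vendor. The criteria, weights, and scores below are hypothetical.

```python
# Illustrative weighted scorecard. Criteria, weights (importance, 1-5),
# and vendor scores (1-5) are all made up for this sketch.
criteria_weights = {
    "Integrates with payroll": 5,
    "Standard reports": 3,
    "Self-service portal": 4,
    # ...the real spreadsheet had 120 rows like these
}

vendor_scores = {
    "Vendor A": {"Integrates with payroll": 5, "Standard reports": 4, "Self-service portal": 4},
    "Vendor B": {"Integrates with payroll": 4, "Standard reports": 5, "Self-service portal": 5},
    "Vendor C": {"Integrates with payroll": 5, "Standard reports": 4, "Self-service portal": 5},
}

# Total the weighted scores per vendor, exactly as the spreadsheet would.
for vendor, scores in vendor_scores.items():
    total = sum(weight * scores[name] for name, weight in criteria_weights.items())
    print(f"{vendor}: {total}")
# Output: 53, 55, 57. When every vendor is a solid platform, the totals
# cluster, and the "winner" is decided by noise in the weights.
```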
As a newcomer, I didn't want to derail the process. But as the meeting got rolling, I made an observation. The three HCM vendors were all solid platforms—great news. Given that, could we safely assume they'd all score reasonably well on the 120 functionality criteria? And if so, what should actually drive the decision?
I asked how long we'd been using the current HCM system: twelve years. And why were we replacing it? Because it hadn't kept up with technology and employee experience expectations.
That was the real issue. The new system would be significantly different from the old one, so ease of use and ease of adoption should be major criteria. We should evaluate the elegance of the user experience and how easily managers and employees could learn the system. And given that the current system had become obsolete, we needed to understand each vendor's product roadmap—and their track record of actually delivering on past roadmaps. On top of that, we'd need to consider cost to acquire, implement, and own. And since the company wanted a return on this investment, time-to-value was also key.
We restructured the spreadsheet on the spot. The 120 rows became eight criteria:
- Ease of use
- Ease of adoption
- Product roadmap
- Roadmap delivery track record
- Cost to acquire
- Cost to implement
- Cost to own
- Time to value
With the revised criteria, the selection process looked very different. We brought in process owners to evaluate ease of use and adoption. We asked them to pay attention to functionality, but from the perspective of “how easy will it be for me and my team to learn this?” Meanwhile, the selection team focused on the roadmap and cost criteria. If two vendors looked equally strong, time to value broke the tie; the one that got us to benefits faster won. The evaluation that was supposed to take three to four months took four weeks. Ultimately, we selected the HCM with the best user experience and the most certified integrations with other applications, reducing implementation and ownership costs and accelerating time to value.
That experience taught me something I've applied ever since. Those eight criteria were right for that decision—but over time, I've distilled them into six criteria that work across any technology selection today.
The Six Factors
You've probably seen some version of that functionality-focused selection spreadsheet—or one even bigger. When I encounter a monster spreadsheet, I kindly ask that we close the file and never look at it again.
Instead, we build a simpler rubric. We treat lack of functionality as a disqualifier—we don't assess how well one system creates an invoice versus another. What matters is that they both can create an invoice.
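As a quick sketch (with made-up must-have features and vendors), the functionality check is a pass/fail filter rather than a scoring exercise:

```python
# Functionality as a disqualifier: any vendor missing a must-have is
# dropped before scoring starts. Features and vendors are hypothetical.
must_haves = {"create_invoice", "payroll_integration", "standard_reports"}

vendor_features = {
    "Vendor A": {"create_invoice", "payroll_integration", "standard_reports", "ai_assist"},
    "Vendor B": {"create_invoice", "standard_reports"},  # gap: no payroll integration
    "Vendor C": {"create_invoice", "payroll_integration", "standard_reports"},
}

# Keep only vendors whose feature set covers every must-have.
qualified = [name for name, features in vendor_features.items()
             if must_haves <= features]

print(qualified)  # ['Vendor A', 'Vendor C']
# Only the qualified vendors move on to the six-factor evaluation.
```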
So, what should drive the selection? Here's what I evaluate:
- Differentiation. What does this system do better than anyone else? And does that matter for my use case?
- Innovation focus. Does the vendor concentrate innovation on what differentiates them, or spread it across everything, including table-stakes features?
- Product and innovation roadmap. I'm choosing a system I hope to use for years, which means I'm inheriting the vendor's planned innovation. There's enough uncertainty in my life—I'm unwilling to accept roadmap uncertainty from my vendors. How well has the vendor delivered on past roadmaps? There can be a big difference between what was planned and what actually shipped.
- Time to value. This one is critical. I'm choosing a system to generate value, and the sooner I realize that value, the better. I might pay a premium to shorten time to value.
Here's a simple example: say the new system will generate $1.2 million per year. System A takes six months to deploy. System B takes three months. System B gets me to value $300K sooner. How much is that worth? (The arithmetic is worked through in the sketch after this list.)
What shortens time to value?
● Ease of implementation: Less effort means faster value.
● Ease of use: If users struggle, value shrinks, and time to value stretches.
● Ease of integration: What does it take to connect this system to everything else?
- Artificial intelligence. If a system has AI capabilities—and most do—I want to know how well-designed and accurate they are, how they create value, and whether the AI has a credible roadmap. AI that's impressive today may be table stakes tomorrow.
- Cost to acquire and own. This goes beyond the sticker price. I look at licensing model and costs, implementation costs, ongoing support and maintenance, and the availability of skills required to implement and support the system. A lower licensing fee doesn't mean much if implementation takes twice as long or the talent market for that platform is thin.
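To make the time-to-value math concrete, here's the System A versus System B example as a quick calculation, using the hypothetical figures from above:

```python
# Time-to-value arithmetic for the hypothetical example above.
annual_value = 1_200_000            # $ per year the new system generates
monthly_value = annual_value / 12   # $100K per month

deploy_months = {"System A": 6, "System B": 3}

# Value each system has delivered by the end of month 6,
# when the slower system is only just going live.
for system, months in deploy_months.items():
    realized = (6 - months) * monthly_value
    print(f"{system}: ${realized:,.0f} realized by month 6")
# System A: $0. System B: $300,000. That $300K is an upper bound on the
# premium worth paying for the faster deployment.
```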
Making the Shift
Using this approach is different, and different can be difficult. To make it work, I rely on my human change management motto: "People prefer the familiar to the comfortable, and the comfortable to the better."
To get people to embrace the better, I need to make it both familiar and comfortable.
There's a related trap in system selection. When I ask users to list the functionality they need in a new system, they accurately describe the functionality of the system we decided to replace. My mantra is that we replace, not replicate, legacy systems. If replication is the goal, just keep the legacy system—it's a perfect replica. It's not the users' fault. The only system they know is the one they've been using. But it means the selection process has to actively push past what's familiar.
The best way to make the shift is to test this approach on a single technology decision first. Once it works—and people see that it works—it's easier to use it the next time. Before long, the test becomes the standard process.
I’ve applied this model to dozens of selections—ERP, CRM, cybersecurity, Governance, Risk, and Compliance (GRC), infrastructure, AI. The results: lower costs, higher adoption, much less wasted time and effort—and, most importantly, systems that actually delivered the value we expected. That's the payoff for ditching the spreadsheet.
Written by Niel Nickolaisen
Niel Nickolaisen is an IT advisor and Field CTO at Valcom Technologies. The co-author of The Agile Culture: Leading Through Trust and Ownership and Stand Back and Deliver, he advises several technology start-ups and sits on the board of a start-up accelerator. Previously, Niel held technology and operational executive positions at Utah State University and other organizations. Nickolaisen has an MBA from Utah State University, an M.S. in engineering from MIT, and a B.S. in physics from Utah State University.