Harini Shankar, director of technology at the Financial Industry Regulatory Authority, argues that modern quality assurance functions are evolving to integrate observability and compliance into testing today’s real-world systems. That means QA leaders are poised to become strategic business partners.
Software quality assurance (QA) and trust have always gone hand in hand. But what that relationship looks like is changing dramatically.
Think about all the software that is fundamental to your company's performance: the systems that deliver a customer experience, empower your sales teams to close deals, enable you to execute financial transactions, and ensure that your supply chain is well managed. Traditionally, QA was seen as more of a gatekeeper for these systems. It was focused on identifying bugs and ensuring software met basic functional requirements before release. But that is no longer sufficient.
Today's systems are more dynamic, distributed, and deeply integrated into major business flows. With this rise in complexity due to cloud native architectures, real-time data pipelines, and AI-powered features, the role of QA has expanded exponentially. Indeed, QA teams are becoming the backbone of modern software delivery. They ensure that systems are resilient and audit-ready. And they help DevOps teams detect and prevent failures without compromising regulatory demands. This evolution – from bug-detector to compliance enabler – is a strategic opportunity.
Two mandates are now front and center: observability and compliance.
Observability is the ability to monitor, understand, and debug systems in real time. The QA function of 2025 plays a vital role in observability by ensuring that systems are behaving as expected across services. The QA function must validate that failures are observable and explainable because this has now become a baseline requirement for compliance and risk management.
In industries such as finance, healthcare, and insurance, regulators expect systems to maintain audit trails and provide proper evidence that a transaction occurred. If failures happen and they cannot be explained — whether due to missing logs or gaps in test coverage — it can lead to regulatory penalties and loss of customer trust. As AI is integrated into core platforms, that responsibility extends further: companies must be able to explain what occurred as a result of AI activity. This makes auditability a shared responsibility between engineering and QA teams. It's crucial for QA to step up not just to validate features, but to help build trustworthy systems.
Compliance means ensuring that systems are built to meet internal policies, external regulations, and ethical standards.
While compliance was once a concern only for infosec teams after a system went live, it’s increasingly embedded into the software development lifecycle. Waiting until production to identify compliance issues is too expensive and risky. Thus, it must be integrated into QA testing, too.
All transactions must be traceable and auditable to meet regulatory requirements, which mandate that organizations maintain accurate, tamper-proof records of user actions. This level of traceability is required to demonstrate adherence to policies during audits, investigations, and post-incident analyses. As systems become more complex and subject to increasing regulatory demands, companies need QA leaders who are able to bridge engineering and compliance. Without this alignment, production outages can increase. This in turn can lead to penalties and loss of customer trust.
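One way to make "tamper-proof records of user actions" concrete is a hash-chained audit trail, in which each record's hash covers the record before it, so any later edit to history breaks verification. The sketch below is illustrative and not tied to any particular product; the record shape and field names are assumptions.

```python
# Minimal sketch of a hash-chained audit trail. Each appended record
# includes the previous record's hash, so altering any past entry
# invalidates every hash that follows it.
import hashlib
import json


def append_record(chain: list[dict], action: dict) -> None:
    """Append an audit record whose hash covers the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "action": action,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })


def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash from the start; any mismatch means tampering."""
    prev_hash = "0" * 64
    for rec in chain:
        payload = json.dumps({"action": rec["action"], "prev": prev_hash},
                             sort_keys=True)
        if rec["prev"] != prev_hash:
            return False
        if rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True


chain: list[dict] = []
append_record(chain, {"user": "u1", "event": "login"})
append_record(chain, {"user": "u1", "event": "transfer", "amount": 100})
print(verify_chain(chain))               # True
chain[0]["action"]["event"] = "logout"   # simulated tampering
print(verify_chain(chain))               # False
```

A QA suite can call a verifier like this against audit logs produced in test environments, turning "records are tamper-evident" from a policy statement into an automated check.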
Observability: Why QA Can’t Ignore It Anymore
What went wrong? Where? And why?
You can only answer these questions about system failures if the software is properly instrumented. This is where QA teams can play a pivotal role. In the modern technology landscape, QA organizations have to move beyond checking for functional correctness; they have to validate system visibility (the ability to observe how a system behaves in real time). This requires logs, metrics, and alerts that are not only available, but meaningful. Without such visibility, teams will struggle to diagnose issues, prove compliance, or understand user journeys.
Given the complexity of today’s systems, QA must evolve to validate that the system is not only functional, but resilient and traceable.
This includes verifying that error logs are saved (with appropriate severity levels noted), that relevant metrics are logged for performance-critical operations, and that traces are present for calls that take place across system components.
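A simple way to automate the first of these checks is to validate each emitted log record against the fields the team has agreed every error log must carry. The sketch below assumes JSON-formatted log lines; the required field names ("timestamp", "severity", "trace_id") are illustrative, not a standard.

```python
# Hypothetical log-record validator a QA suite might run against captured
# log output: each line must be JSON with a timestamp, a known severity
# level, a message, and a trace ID for cross-service correlation.
import json
from datetime import datetime

REQUIRED_FIELDS = {"timestamp", "severity", "message", "trace_id"}
VALID_SEVERITIES = {"DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"}


def validate_log_record(raw: str) -> list[str]:
    """Return a list of problems found in one JSON-formatted log line."""
    problems = []
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return ["record is not valid JSON"]
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("severity") not in VALID_SEVERITIES:
        problems.append(f"unknown severity: {record.get('severity')!r}")
    ts = record.get("timestamp", "")
    try:
        datetime.fromisoformat(ts)
    except (TypeError, ValueError):
        problems.append(f"timestamp not ISO 8601: {ts!r}")
    return problems


good = ('{"timestamp": "2025-01-15T10:32:07+00:00", "severity": "ERROR", '
        '"message": "checkout failed", "trace_id": "abc123"}')
bad = '{"severity": "LOUD", "message": "oops"}'
print(validate_log_record(good))   # []
print(validate_log_record(bad))    # three problems reported
```

Running a validator like this inside automated test runs catches "silent" instrumentation regressions — a renamed field or dropped trace ID — before they reach production, where they would only be discovered during an incident.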
Let's take the example of a retail ecommerce platform during a high-traffic sale event. In a traditional QA setup, testing would confirm that checkout and search work under normal conditions. However, if users begin to notice failures during high-traffic times, this approach offers little help in diagnosing the root cause of those issues.
In a modern QA process with observability in place, QA would validate that the error logs capture any potential failures with timestamps and severity. QA can also validate traces that follow user journeys, making failures easy to reproduce. This means when an issue arises, teams are able to pinpoint the exact cause of the problem — be it a database bottleneck or a degraded service.
Testing for Observability: New QA Practices
If observability is a priority, QA processes must evolve beyond validating software functionality. QA teams will have to verify that the system can tell us what it's doing.
Given the complexity of IT architectures, QA processes must expand to include substantiating that observability platforms are correctly integrated into systems and are producing relevant logs. Some new practices the QA function must adopt include:
- Log validation during automated testing: ensuring that workflows produce logs with the right structure and severity-level information.
- Chaos testing: recreating failures in non-production environments to verify that logs, traces, and metrics accurately reflect any issues.
- Synthetic monitoring during pre-production: simulating user traffic to verify that the key workflows (such as logins, transactions, or data submissions) are fully observable and logs and traces are being captured as expected.
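The first two practices can be combined in a single automated test: inject a fault into a workflow, then assert that the failure actually surfaced in the logs with the right severity. The sketch below uses Python's standard logging module; the checkout() function is a stand-in for a real workflow, not any particular system's API.

```python
# Chaos-style sketch: run a workflow with a fault injected, capture its
# log output in memory, and assert the failure was observable.
import logging

log = logging.getLogger("shop")


def checkout(cart: list[float], payment_ok: bool) -> bool:
    """Stand-in workflow: log an ERROR when the payment step fails."""
    if not payment_ok:
        log.error("payment declined", extra={"order_total": sum(cart)})
        return False
    log.info("order placed")
    return True


class Capture(logging.Handler):
    """Test handler that keeps every emitted record for later assertions."""
    def __init__(self) -> None:
        super().__init__()
        self.records: list[logging.LogRecord] = []

    def emit(self, record: logging.LogRecord) -> None:
        self.records.append(record)


capture = Capture()
log.addHandler(capture)
log.setLevel(logging.INFO)

checkout([19.99], payment_ok=False)   # injected fault

errors = [r for r in capture.records if r.levelno == logging.ERROR]
assert errors and errors[0].getMessage() == "payment declined"
print("injected failure was observable in logs")
```

The same pattern extends to pre-production synthetic monitoring: drive the key workflows with simulated traffic and assert, per run, that the expected logs and traces were emitted.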
Taking a Lead Role in Compliance
Particularly in highly regulated industries like finance and healthcare, QA teams have become compliance enablers as well. As these industries have adopted Continuous Integration/Continuous Delivery (CI/CD) practices to automate software development and deployment, QA functions must ensure that new systems are not only being rolled out faster and more reliably, but that they don’t introduce compliance risks.
Specific compliance checks must be integrated into CI/CD pipelines to flag any violations. Some new QA practices in this area include:
- Testing for data masking in non-production environments: Ensuring that sensitive information such as personal identifiers is properly obfuscated during testing. This is important to protect user privacy and prevent accidental data leaks during QA activities.
- Access control testing: Validating that specific role-based flows operate as intended.
- Validating that critical user actions are being logged: Confirming that activities such as login, payment, or checkout are captured with sufficient detail for audit and compliance. Without proper logging, it can become challenging to trace issues and demonstrate adherence to policies.
- Generating evidence for audits: Proactively collecting test summaries, version-controlled artifacts, and automation logs. This increases trust in new systems and puts the organization in a better position in case of audits.
Strategic QA in Action
Let's consider a loan origination platform used by an institution to process thousands of applications daily. In a traditional QA setup, test results would be scattered, logs might be stored locally, and QA wouldn't be looped in for production incidents.
Now let's imagine approaching the same platform with unified test reporting across components such as credit scoring and identity verification, all visible in a centralized dashboard integrated with Splunk. If there is an outage during peak hours, those performing root cause analysis could look to QA logs, pinpointing that the issue resulted from a recent configuration change that broke one of the APIs. Because the test logs are traceable, this can happen within minutes.
Meanwhile, since this team implemented policy-as-code rules within the product development pipeline, they were able to prevent a deployment that violated masking rules. When QA integrates such checks — in conjunction with DevOps, security, and site reliability engineering — it not only prevents compliance violations but enables the faster restoration of services. This helps save time, maintains trust, and minimizes regulatory risk.
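A policy-as-code gate of this kind can be as simple as a set of named rules evaluated against a release's metadata before deployment is allowed. The sketch below is a hedged illustration: the rule names and metadata keys are invented, and real pipelines typically express such rules in a dedicated policy engine rather than inline code.

```python
# Hypothetical policy-as-code gate: each rule is a (name, predicate) pair
# evaluated against release metadata; any violation blocks deployment.
RULES = [
    ("masking-enabled", lambda m: m.get("pii_masking") is True),
    ("audit-logging",   lambda m: m.get("audit_log_target") not in (None, "")),
    ("tests-passed",    lambda m: m.get("failed_tests", 1) == 0),
]


def evaluate(metadata: dict) -> list[str]:
    """Return names of violated rules; an empty list means deploy may proceed."""
    return [name for name, check in RULES if not check(metadata)]


release = {
    "pii_masking": False,            # this release forgot to enable masking
    "audit_log_target": "splunk://qa",
    "failed_tests": 0,
}

violations = evaluate(release)
if violations:
    print("deployment blocked:", violations)   # ['masking-enabled']
```

Because the rules live in code, they are version-controlled and reviewable like any other artifact, which is exactly the property that lets QA, security, and DevOps share ownership of them.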
To most effectively support observability and compliance, however, QA systems and processes themselves must be trustworthy, traceable, transparent, and integrated. There are a number of new investments QA leaders can make to ensure that the function inspires confidence.
Adopting unified test reporting across microservices and test types is a big step in this direction. Integrating QA test dashboards into tools like Splunk or TestRail can also be beneficial. Ensuring that all test logs are tamper-proof and protected against data loss is also critical.
At an organizational level, there are changes that need to happen as well. QA teams can no longer operate in isolation. They must collaborate across different functions, including DevOps (to ensure QA is a part of CI/CD), security teams (to automate policy checks), and site reliability engineering (to validate that reliability metrics are observable). Such cross-functional synergy ensures that the systems can be released with more confidence and clarity.
Signs of deeper QA integration include incident post-mortems that feature QA insights, policy-as-code validations built into automated test pipelines, and observability criteria that must be met before a system is considered "done."
The Future of QA: From Back-Room Tester to Strategic Partner
It's clear to me that in the future, QA will no longer be only about validating quality but about proving system integrity. To get there, QA will evolve in a number of other ways. Some emerging best practices on the horizon include:
- The development of continuous governance models, in which QA is blended with DevSecOps
- The automatic generation of audit evidence from test runs
- The use of LLM-powered test insights to identify anomalies
- The creation of observability-first development frameworks
As organizations' demands for reliable systems and real-time insight grow, the QA function is ideally positioned to take the lead in meeting them. With the right investments in new processes, QA will empower development teams to move faster — with greater confidence and without sacrificing quality. QA will no longer just be a team of testers but a strategic enabler of trust and transparency.

Written by Harini Shankar
Harini Shankar is director of technology at the Financial Industry Regulatory Authority (FINRA). Shankar is a seasoned technology leader with a passion for driving innovation and building resilient software systems, including various large-scale initiatives encompassing numerous micro-applications. She has led and mentored teams of engineers, enabling them to deliver seamless integration and validation of complex systems.