How to Reduce Repeat Contacts: A Support Leader's Guide
Repeat contacts signal deeper organizational failures — in product, process, and knowledge systems. This is what support leaders need to understand about contact elimination, cross-functional alignment, and building a deflection strategy that actually works.
The most expensive ticket your support team handles is not the complicated one. It is the second one on the same issue.
When a customer contacts support, receives a resolution, and returns days later with the same problem — that moment is not just an operational failure. It is a signal. Something upstream — in your product, your processes, your knowledge infrastructure, or your organizational design — is systematically not working. And in most organizations, that signal is going unread.
Not because support leaders are not paying attention. But because the systems most teams have built to measure performance were designed to track speed and volume, not cause. They answer "how fast are we resolving tickets?" with precision. They are nearly silent on "why do these tickets exist in the first place?"
This is the organizational blind spot at the center of most deflection strategies — and it is why so many of them fail quietly, while headcount needs grow and CSAT remains stubbornly flat.
This post is for Heads of Support, Support Ops Leads, and CS Managers who are serious about reducing contact volume and want to understand why current investments are not moving the number as expected — and what structural and strategic changes actually do.
TL;DR: Repeat contacts are not a content problem or a chatbot problem. They are an organizational diagnosis problem. The support operations that most effectively reduce volume — at companies like Shopify, Intercom, and Amazon — share one thing: they have built systematic, cross-functional feedback loops between their ticket data and the teams responsible for fixing root causes. Getting there requires more than a knowledge base audit. It requires a rethink of how support data flows, who acts on it, and what decisions it is allowed to influence.
Two Terms Worth Defining Before You Go Further
One of the most consequential — and most commonly blurred — distinctions in support operations:
Contact Deflection vs. Contact Elimination
Deflection means a contact happened but was handled by self-service or automation instead of a live agent. The customer reached out. The channel absorbed it.
Elimination means the contact never happened at all — because the upstream cause was fixed. A product change, a better onboarding flow, a KB article that answered the question before the customer felt the need to ask it.
Mature support organizations track both and treat elimination as the more important metric. Most organizations measure only deflection — which means they are optimizing for how volume is handled, not whether it should exist.
This distinction shapes everything that follows. Deflection is an efficiency play. Elimination is a structural one. And repeat contacts — the focus of this post — are almost always elimination opportunities, not deflection ones.
The Hidden Cost That Most Support Budgets Are Not Tracking
Ask a support leader to justify headcount and they will open a ticket volume report. Ask them to justify deflection investment and they will reference deflection rate or self-serve adoption. But ask them what percentage of their current ticket volume is preventable — meaning contacts that should never have reached an agent — and most will pause.
That number is the one that matters. And in most organizations, nobody owns it.
The financial case for owning it is not subtle. Forrester Research has documented that the average cost of a live agent support interaction runs between $6 and $12 depending on channel and organization — compared to less than $0.25 for a self-service resolution. That is a cost differential of roughly 40 to 1.
Put it in concrete terms: a team handling 5,000 contacts per month at an average fully-loaded cost of $8 per contact is spending $480,000 per year on contact handling. If 20% of those contacts are repeat contacts on categories that a product fix or a KB article could have eliminated — a conservative figure for most support organizations — that represents approximately $96,000 in annual spend on contacts that should not have happened. Not because of poor agent performance. Because of an unaddressed upstream cause.
That number does not appear on any dashboard most support teams run. It is baked into headcount cost and treated as fixed. It is not fixed.
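For leaders who want to run this arithmetic against their own numbers, here is a minimal sketch in Python. The volume, cost, and preventable share are the illustrative figures from the example above, not benchmarks:

```python
# A minimal model of the arithmetic above. All inputs are illustrative
# assumptions; substitute your own volume, cost, and preventable share.

def avoidable_contact_spend(monthly_contacts: int,
                            cost_per_contact: float,
                            preventable_share: float) -> dict:
    """Estimate annual handling spend and the preventable portion of it."""
    annual_spend = monthly_contacts * 12 * cost_per_contact
    return {
        "annual_spend": annual_spend,
        "preventable_spend": annual_spend * preventable_share,
    }

# The example from the text: 5,000 contacts/month, $8 each, 20% preventable.
print(avoidable_contact_spend(5000, 8.0, 0.20))
# {'annual_spend': 480000.0, 'preventable_spend': 96000.0}
```

The preventable share is the one input most teams cannot fill in today, which is the point: the rest of this post is about how to find it.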
The loyalty cost is equally significant. The Corporate Executive Board — now part of Gartner — conducted one of the most comprehensive studies of customer service behavior ever published, formalized in the landmark Effortless Experience research program. Their central finding overturned the conventional wisdom that customer delight drives loyalty: it does not. Effort drives disloyalty. Customers who have to contact support more than once about the same issue are significantly more likely to churn and share negative experiences — regardless of how warmly each individual interaction was handled.
According to Gartner's Effortless Experience research, 96% of customers who experience a high-effort service interaction report becoming disloyal — compared to just 9% of customers in low-effort interactions. Repeat contacts are among the primary drivers of high-effort experiences.
The implication for support leadership is uncomfortable: you can have a team of skilled agents, strong CSAT scores, and respectable FRT numbers — and still be losing customers to problems your ticket data is pointing to. The performance metrics look fine. The structural problem is invisible in them.
The Organizational Blind Spot: Why Support Data Rarely Drives Decisions
There is a pattern that plays out across support organizations of all sizes. The support team sits closest to the customer. It hears, in real time, what is frustrating people, what is confusing, what is broken. It accumulates — in the form of ticket data — one of the richest bodies of customer intelligence in the entire business.
And then, in most organizations, that intelligence goes largely nowhere.
It does not reach product teams in a form that moves roadmap priorities. It does not reach content teams in a form that drives KB investment. It does not reach operations in a form that improves processes. It produces dashboards that support leadership uses internally, the occasional anecdote that surfaces in a cross-functional meeting, and not much else.
Three structural reasons explain why.
Support data is formatted for operational management, not organizational decision-making. Ticket volumes, response times, resolution rates — these metrics matter for running the team day to day. They were never designed to be inputs for a product prioritization conversation or a content strategy decision. When support leaders bring data to cross-functional tables, they are often carrying the wrong kind of evidence for the conversation they need to have.
The people responsible for fixing root causes do not own the problem. When a ticket category is driven by product friction, the fix requires an engineering sprint. But engineering's priorities are set by product teams whose roadmap inputs come from sales, marketing, customer success, and user research — not typically from support ticket analysis. Support leaders end up holding clear evidence of a problem while having limited structural influence over the team that can solve it.
Support teams are rewarded for throughput, not prevention. Most support organizations are still measured primarily on speed and efficiency: first response time, resolution time, tickets per agent per day. These metrics create a powerful implicit incentive to handle volume well rather than reduce it. A team that closes 500 tickets per day is performing well by every standard dashboard metric, even if 150 of those tickets were preventable. The incentive does not create pressure to investigate why they existed.
This is not a new problem. What is changing is the cost of leaving it unaddressed — as AI raises the ROI ceiling on deflection investment, and as economic pressure on support headcount reduces the tolerance for avoidable volume.
What Repeat Contacts Are Actually Telling You
Every recurring ticket category is a symptom. The question most organizations do not have a disciplined way to ask is: a symptom of what?
The answer almost always traces to one of three root causes. And the organizational response to each is completely different — which is why conflating them produces misdirected investment.
Knowledge Gaps: The Most Solvable Problem
A knowledge gap exists when a customer had a question with a clear, stable answer — and could not find it. The article did not exist, was not surfaced by search, or was written in a way that addressed a different version of the question than the one the customer was actually asking.
Knowledge gaps are the most politically straightforward root cause to address. They do not require engineering resources or roadmap space. They require content investment, improved taxonomy, and someone with authority over knowledge architecture. The fix is tractable and the outcome is measurable: publish the article, track KB views in that category, monitor whether ticket volume declines.
The standard practice in mature support organizations is to treat knowledge gap identification as a continuous process, not a project. High-volume ticket categories with no corresponding KB content — or with KB content that generates few views — are flagged automatically and routed to content queues. The knowledge base is treated as infrastructure with a roadmap and a quality standard, not a library that grows when someone has time to write.
The trend accelerating this is AI-assisted KB management: tools that can cross-reference ticket text against existing KB content, identify coverage gaps, suggest article structures, and draft first versions of missing content from resolved ticket threads. Organizations that have adopted this approach systematically report meaningful reductions in knowledge-gap-driven contacts within two to three quarters.
Product Friction: The Highest-Leverage Problem
Product friction exists when the contact is not a question — it is a response to an experience that failed. A confusing onboarding step. An error message that does not explain what the customer should do next. A feature that behaves unexpectedly. A permission model that silently fails without a user-facing explanation.
No KB article resolves this. No chatbot deflects it cleanly. The customer contacts support because the product sent them there — and they will keep coming back until the product changes.
This is the root cause support leaders are most hesitant to raise in cross-functional conversations, because raising it without quantified evidence tends to land as complaint rather than diagnosis. "A lot of customers are confused by the billing page" is an anecdote. "This ticket category has generated 620 contacts in the last 90 days, 38% of which were repeat contacts, and it has been trending upward for two months" is a business case.
Amazon's support operations are perhaps the most documented example of a company that has built this into standard operating procedure. Their principle of eliminating "avoidable contacts" — treating contacts that should not have happened as upstream product requirements — is not merely a values statement. It has operational expression: categories of support contacts are systematically routed back to product teams as structured evidence, with volume, repeat rates, and estimated cost attached. The result is a feedback loop between customer experience and product development that most organizations aspire to but rarely build.
The organizational change required here is structural, not technical. It requires a defined pathway from ticket category analysis to product prioritization — a regular forum, an agreed format for presenting ticket evidence, and a shared understanding between support and product leadership of what constitutes a contact driver that warrants engineering attention.
Process Gaps: The Hardest Problem to See
Process gaps are the hardest root cause to surface in aggregate metrics, and the most common source of high repeat-rate categories that have relatively low total volume.
A process gap exists when the ticket was resolved — the interaction closed, CSAT captured — but the resolution did not stick. The customer returns within days on the same issue because the root cause was not actually addressed: the fix was temporary, the troubleshooting guidance was incomplete, the escalation path failed silently, or the resolution was inconsistent across agents handling the same category.
Unlike knowledge gaps, process gaps do not show up in ticket volume trends. Unlike product friction, they do not trigger escalations or correlate with product release timing. They hide inside the gap between a closed ticket and a reopened one — which is why tracking repeat contact rate by category is the only reliable way to surface them. Most teams track overall repeat contact rate, if they track it at all. Per-category repeat rates are where the signal lives.
The standard diagnostic practice is systematic QA review of repeat-contact tickets: sample tickets where the customer returned within seven days on the same issue, and review what the prior resolution looked like. The pattern typically becomes clear within a small sample — and an updated playbook, a revised troubleshooting guide, or a targeted coaching session can eliminate the pattern without requiring any content or product investment.
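This diagnostic can be approximated in a few lines against a raw ticket export. The sketch below assumes each ticket carries a customer_id, a category, and a created_at timestamp; those field names are placeholders for whatever your helpdesk actually exports:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Sketch: repeat contact rate by category from a helpdesk export.
# Field names ("customer_id", "category", "created_at") are assumptions;
# map them to your own export schema.

def repeat_rate_by_category(tickets, window_days=7):
    """Count a ticket as a repeat when the same customer opened a ticket
    in the same category within window_days of their previous one."""
    history = defaultdict(list)
    for t in tickets:
        history[(t["customer_id"], t["category"])].append(t["created_at"])

    totals, repeats = defaultdict(int), defaultdict(int)
    window = timedelta(days=window_days)
    for (_, category), dates in history.items():
        dates.sort()
        for i, opened in enumerate(dates):
            totals[category] += 1
            if i > 0 and opened - dates[i - 1] <= window:
                repeats[category] += 1

    return {c: repeats[c] / totals[c] for c in totals}

# Tiny worked example on fabricated data:
base = datetime(2024, 1, 1)
sample = [
    {"customer_id": "a", "category": "billing", "created_at": base},
    {"customer_id": "a", "category": "billing", "created_at": base + timedelta(days=3)},
    {"customer_id": "b", "category": "billing", "created_at": base},
    {"customer_id": "c", "category": "login", "created_at": base},
]
rates = repeat_rate_by_category(sample)  # billing: 1 repeat out of 3 tickets
```

The categories this surfaces with high repeat rates but modest total volume are exactly the ones the QA sampling practice above is designed to investigate.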
Why the root cause matters before you invest
Investing in KB content for a product-friction category produces almost no ticket reduction — the customer is not contacting you because they cannot find an answer. Investing in a product fix for a knowledge-gap category is engineering effort solving a problem a well-placed article would resolve. Getting root cause wrong before investing is how deflection budgets get spent and volume stays the same.
The Cross-Functional Problem Most Support Leaders Underestimate
If knowledge gaps can be fixed by support alone, product friction requires engineering, and process gaps require internal training — then the most strategically important skill for a support leader is not content production or data analysis. It is organizational influence.
The support teams that most effectively reduce contact volume are the ones that have learned to translate ticket data into the language of the teams that can fix root causes. This is an underappreciated leadership competency, and it manifests differently depending on where the root cause lives.
For product friction, it means building a business case that connects ticket volume to revenue impact. Not "we got a lot of tickets about the billing page" but: "This category generated 620 contacts last quarter at an estimated cost of $4,960. The 38% repeat rate suggests approximately 235 of those were customers returning after an unresolved first contact. A product fix that reduced volume by 50% would save roughly $10,000 annually — and these tickets are currently trending upward." That is a financial conversation. Product leaders have those.
For knowledge gaps, it means establishing a recurring workflow — not a one-time project — for KB investment to be driven by ticket data rather than editorial instinct. This requires organizational alignment between support and whoever owns the knowledge base: in some organizations that is support itself, in others it is marketing, product, or a dedicated self-service team. The accountability for gap identification needs to sit somewhere with teeth.
For process gaps, it means a quality program with the authority to change resolution standards, not just flag individual agent behavior. This is a management question as much as a process question — QA that reports to the same function it audits rarely produces durable change.
McKinsey research on contact center transformation consistently shows that the organizations with the lowest cost-per-contact ratios are not those with the most automation — they are the ones with the most disciplined cross-functional feedback loops between contact data and the teams responsible for upstream fixes. Automation amplifies cost reduction, but the human and process changes drive the majority of the outcome.
None of these are purely analytical problems. They are change management problems with an analytical foundation.
The AI Deflection Trap — And How Mature Teams Are Avoiding It
The arrival of generative AI in customer support has created a new and specific version of an old mistake: investing in deflection technology before diagnosing what needs to be deflected.
The technology is real. AI-powered support agents, generative chatbots, and automated resolution flows are producing genuine results in the right deployments. But the organizations seeing the strongest outcomes are not the ones that deployed AI fastest — they are the ones that deployed it most deliberately.
The distinction matters because of how AI fails when deployed without diagnosis. An AI agent trained on an undifferentiated corpus of ticket data and KB content will, by default, attempt to deflect everything equally. It has no mechanism to distinguish between a category where customers genuinely want self-service and one where the underlying issue is product friction that no content can resolve. It will attempt to deflect both, succeed inconsistently, and generate escalation rates that are difficult to attribute and even harder to improve.
The approach that produces better outcomes — documented in how mature support operations at companies including Zendesk, Salesforce, and Intercom have structured AI rollouts — starts with a clear categorization of contact types by deflection potential before automation is deployed:
- High deflection potential: The issue has a clear, stable, self-serviceable answer. AI works here. Self-service works here.
- Medium deflection potential: The issue requires some context or personalization before a self-serve path applies. Guided flows work here; AI can assist but not fully replace agent judgment.
- Low deflection potential: The issue requires agent judgment, real-time account access, or a product fix. AI deployed here primarily generates customer frustration and escalation volume.
Organizations that categorize first and deploy second report dramatically better deflection outcomes than those that deploy broadly and tune reactively.
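As a sketch, the three-tier triage above can be reduced to a rule of thumb. The attribute names and the 15% repeat-rate threshold below are illustrative assumptions, not an industry standard; tune them against your own escalation data:

```python
# Rule-of-thumb triage for a ticket category before deploying AI.
# The category attributes (stable_answer, needs_account_access,
# kb_resolves, repeat_rate) are placeholder fields, and the 0.15
# threshold is an illustrative assumption.

def deflection_potential(category: dict) -> str:
    if category["needs_account_access"] or not category["stable_answer"]:
        return "low"     # agent judgment or a product fix is required
    if category["kb_resolves"] and category["repeat_rate"] < 0.15:
        return "high"    # clear, stable, self-serviceable answer
    return "medium"      # guided flows; AI assists, agents still decide
```

The value of even a crude classifier like this is sequencing: it forces the deployment conversation to start with "which categories?" rather than "which vendor?"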
According to Salesforce's State of Service 2024 report, 84% of service professionals say AI helps them address customer cases faster — but outcomes vary significantly by deployment strategy. Teams that targeted AI at well-understood, high-volume, clearly self-serviceable categories saw the strongest deflection results. Teams that deployed broadly without prior categorization saw limited gains and elevated escalation rates.
The strategic implication is simple but frequently ignored: AI is an amplifier, not a replacement for organizational understanding. What it amplifies depends entirely on how well you understand your volume before you deploy it.
What a Mature Deflection Operation Actually Looks Like
High-performing support organizations — those that reduce contact rates while growing their customer base — share a set of structural characteristics that are worth understanding as a benchmark. These apply at different scales, but the underlying principles hold.
They measure contact elimination, not just contact deflection. Deflection rate tells you how volume is handled. Contact elimination rate tells you whether volume should have existed. Mature organizations track both and hold teams accountable for the second one. In practice, this means someone is responsible for a prioritized list of categories that should have been prevented — not just routed efficiently.
They have a named owner for the avoidable contacts list. In support operations under 50 people, this is often the Head of Support themselves. In organizations of 50–200 support staff, it is typically a Support Ops Lead or a Voice of Customer analyst. At enterprise scale, it often becomes a dedicated function that sits at the intersection of support operations and product analytics. The specific structure varies with size. What does not vary is whether the function exists at all.
They run a structured cross-functional review at a defined cadence. Not a one-way reporting meeting where support presents data and product listens politely. A joint working session where ticket evidence, root cause hypotheses, and fix ownership are discussed and assigned. The cadence that works at most organizations is monthly for trend monitoring and quarterly for a full contact driver review — where the question is not just "what's driving volume" but "what did we do about last quarter's findings, and did it work?"
Their knowledge base has a roadmap. Content is published in response to quantified demand signals from ticket data, not when someone has bandwidth to write. In the most mature implementations — common at enterprise SaaS companies and in financial services — KB investment is governed by the same prioritization logic as product development: volume, trend, estimated deflection potential, and measurable outcome.
They have established a live cost-per-contact framework. Not as an abstract benchmark but as a number that appears in budget and roadmap conversations. When a support leader can say "this category costs approximately $8 per contact, generates 800 contacts per quarter, and our estimate is that a product fix would reduce that by 60%" — the discussion with engineering becomes a financial conversation, not a support request. This reframing — from operational complaint to quantified investment case — is the single most consistent enabler of cross-functional support influence.
Industry benchmark to know
According to TSIA (Technology & Services Industry Association) benchmarking research, repeat contacts represent 20–30% of total contact volume in most enterprise SaaS support organizations. For a team handling 3,000 tickets per month, that is 600–900 contacts per month that are — at least in part — candidates for elimination. At $8 per contact, that is $57,600–$86,400 of annual spend, and the natural first target for structured root cause analysis.
The Organizational Shift That Changes Everything
The pattern across the organizations that have made the most progress on contact elimination is not primarily technological. It is cultural.
The shift is from treating support as a cost center that should handle volume efficiently, to treating it as an intelligence function that generates the organization's most direct signal about what is failing for customers — and that has both the data and the organizational standing to close the loop.
This shift has specific structural expressions. Support leadership sits in product and roadmap conversations, not just operational reviews. Ticket data flows into product analytics and content strategy, not just support dashboards. Contact elimination is a shared metric owned across support, product, and content — not a support-only KPI that gets reviewed in a support team meeting and goes nowhere.
The foundational research here is the HBR work by Matthew Dixon, Nick Toman, and Rick DeLisi — published as The Effortless Experience — which demonstrated that organizations treating service as a strategic feedback mechanism consistently outperform those treating it as a cost to minimize. The connection between the two outcomes is structural: when support data drives upstream fixes, customer experience improves and contact volume declines simultaneously. These are not competing outcomes. They are the same outcome approached from different angles.
In practice, this shift rarely happens because of technology. It happens because a support leader builds the case — with data, with financial framing, with a clear articulation of the organizational changes required — and finds an executive sponsor willing to change how support intelligence flows through the organization.
That case starts with understanding, in precise terms, what your ticket data is showing: which categories are driving volume, which are trending upward, which have the highest repeat rates, and which have no self-serve coverage despite high demand. The organizational change is built on that foundation.
The New Trends Reshaping the Deflection Conversation
Several forces are converging to make this organizational shift more urgent — and more achievable — than in previous years.
AI is raising customer expectations on both sides of the interaction. Customers who experience effective AI-powered self-service in one product raise their bar for every other support interaction. Simultaneously, AI gives support teams the analytical capacity to process and categorize ticket data at a scale that previously required dedicated data engineering. The combination creates both pressure to improve and tools to move faster.
Generational preference for self-service has settled into a baseline expectation. Microsoft's Global State of Customer Service research consistently finds that the vast majority of consumers expect brands to offer self-service options — and that preference is no longer concentrated among younger demographics. Self-service is now a default expectation across age groups, not a differentiator. The customers who prefer live agent interaction for simple questions represent a shrinking, and generally self-selecting, minority. The strategic implication: the investment case for self-service has become baseline, not aspirational.
Economic pressure has eliminated headcount as the default scaling lever. The support industry went through a significant reckoning in 2022–2024 as companies that had scaled headcount rapidly during growth periods faced pressure to reduce operational costs without degrading customer experience. The broad response — AI deployment, deflection targets, self-service investment — was often implemented without the diagnostic groundwork that makes any of it effective. As those results have come in, the conversation has shifted from "how do we deploy AI?" to "how do we know what to deploy it to?" That is precisely the organizational question at the center of this post.
Voice of the Customer is converging with support analytics — and the organizations that move first will have a structural advantage. Historically, VoC programs and support analytics have operated in parallel: VoC capturing sentiment and satisfaction, support analytics tracking operational performance. These two streams have rarely been integrated because they lived in different tools, reported to different teams, and served different audiences.
The emerging practice — increasingly formalized at enterprise organizations in financial services, SaaS, and retail — is to integrate them into a single contact intelligence function. When a customer's CSAT verbatim can be cross-referenced with their ticket category history and their most recent contact's root cause, the analysis becomes significantly richer. A customer who gives a 6 NPS score and has contacted support three times in 90 days on the same product friction category is a very specific and actionable signal — not just a dissatisfied customer, but a dissatisfied customer whose dissatisfaction is traceable to a specific, fixable upstream cause.
Support leaders who position their teams at this intersection are building something more strategically valuable than a service function. They are building the organization's most direct, real-time window into what customers actually experience — and translating it into decisions that product, content, and operations teams can act on.
Your First Move
The strategic argument in this post is only as useful as what it produces in practice. For support leaders who have not yet built the organizational infrastructure described here, the most practical starting point is a single focused question: what are the ten ticket categories generating the most volume in the last 90 days, and do we know the root cause of each?
Not approximately. Not based on agent intuition. With data — category volume, whether volume is increasing or stable, what percentage of those contacts are repeat contacts from the same customer on the same issue, and whether there is self-serve content available that customers are actually finding.
That analysis does not require a data engineering team. It requires a ticket export and the organizational will to do something with what it shows. For categories where the root cause is clear — a missing article, an identified product friction point, a known process inconsistency — the next step is assigning ownership and a timeline, not commissioning a study.
For categories where the root cause is not clear, the investigation itself becomes the first action: pulling a sample of tickets, reading them, and forming a hypothesis about why customers keep returning on this category despite receiving resolutions.
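For teams starting from a raw export, a first pass at that top-ten question might look like the sketch below. The field names and the plus-or-minus 10% trend band are assumptions to adapt, not a prescription:

```python
from collections import Counter

# First-pass sketch of the top-categories analysis. Assumes each ticket
# is a dict with "category" and "week" (weeks 0-12 into the 90-day
# window); both field names and the +/-10% trend band are assumptions.

def top_categories(tickets, n=10):
    volume = Counter(t["category"] for t in tickets)
    early = Counter(t["category"] for t in tickets if t["week"] < 6)
    late = Counter(t["category"] for t in tickets if t["week"] >= 6)
    report = []
    for category, count in volume.most_common(n):
        a, b = early[category], late[category]
        if b > a * 1.1:
            trend = "increasing"
        elif b < a * 0.9:
            trend = "decreasing"
        else:
            trend = "stable"
        report.append({"category": category, "volume": count, "trend": trend})
    return report
```

Joining this output with per-customer repeat counts and KB view data turns a ticket export into exactly the category-level evidence the cross-functional review format described earlier calls for.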
This is how the organizational shift begins. Not with a platform decision or a chatbot deployment — but with a clear-eyed look at what your ticket data is already telling you, in a format you can take into a room with product, engineering, or content leadership and use to move something.
The Question Every Support Leader Should Be Able to Answer
Here is a test worth applying to your current operation: if your CFO asked you tomorrow what percentage of your ticket volume is preventable — contacts that should not have reached an agent because a product fix, a KB article, or a process change could have eliminated them — could you answer?
Not approximately. Not based on agent anecdote. With data, by category, with a root cause hypothesis and a named owner for each fix.
Most support leaders cannot. Not because the data does not exist — it does, in every helpdesk export you have ever pulled. But because the organizational infrastructure to turn that data into a prioritized prevention agenda, and the cross-functional relationships to act on it, have not been built.
The organizations that build it consistently outperform the ones that do not. Not because they find some operational secret unavailable to others. But because they stop treating contact volume as a given and start treating it as a problem to solve — systematically, organizationally, and with the same rigor that good product teams apply to the products generating those contacts in the first place.
The data is already in your helpdesk. What you do with it is an organizational choice.
Trying to understand what's actually driving your repeat contacts? Querri connects directly to your ticket export and surfaces category volume, trend direction, and repeat contact patterns — so you can walk into cross-functional conversations with numbers instead of instincts. See how it works →
Frequently Asked Questions
What causes repeat contacts in customer support?
Repeat contacts — where a customer contacts support more than once about the same issue — trace to one of three root causes: a knowledge gap (the customer cannot find an existing answer), product friction (the product experience itself is generating the contact and a content response won't resolve it), or a process gap (the agent resolved the ticket but the resolution did not address the underlying cause). Each root cause has a different fix. Identifying which one applies to a specific ticket category is the prerequisite for making any deflection investment that actually reduces volume.
What is the difference between contact deflection and contact elimination?
Contact deflection means a customer reached out but was handled by self-service or automation instead of a live agent — the contact happened, it was redirected. Contact elimination means the contact never happened at all, because the upstream cause was fixed: a product change removed the friction, a KB article answered the question before the customer felt the need to ask, or a process change ensured the issue was resolved on first contact. Most support organizations measure deflection rate. Mature organizations measure both, and treat elimination as the higher-value metric.
How do you build a business case for reducing repeat contacts?
The most effective business case combines three numbers: the cost per live agent contact for your organization (typically $6–$12 for most teams), the volume of contacts in the target category over a 90-day period, and the estimated percentage of those contacts that are repeat contacts — meaning the same customer returning on the same issue. Multiply these together to arrive at an estimated annual cost of avoidable contacts in that category. A category generating 800 contacts per quarter with a 30% repeat rate and an $8 cost per contact represents approximately $7,680 in annual avoidable spend — on a single category. That is the number you bring to product or content leadership.
What metrics should support leaders track beyond deflection rate?
Deflection rate alone is insufficient because it measures channel, not cause. The metrics that surface the organizational picture are: contact elimination rate (contacts prevented entirely vs. prior period), repeat contact rate by category (percentage of tickets where the customer returned on the same issue within 7 days), cost per contact by category (which categories are most expensive to handle and why), and KB coverage rate (for high-volume categories, whether self-serve content exists and whether it is being found). Together, these tell you what to deflect, what to eliminate, and where to invest.
How should support ticket data flow to product and engineering teams?
The most effective model is a structured cross-functional review — not an ad hoc escalation — held at a defined cadence (monthly for trend monitoring, quarterly for full contact driver review). The format that moves product conversations is a category-level summary that includes: ticket volume over 90 days, trend direction (increasing, stable, or decreasing), repeat contact rate, estimated cost, and a root cause hypothesis. This is a financial conversation, not a support complaint. Product and engineering leaders respond to cost and risk framing. Ticket counts alone rarely move priorities.
What does a mature support deflection operation look like, organizationally?
Mature deflection operations share five characteristics: they measure contact elimination alongside deflection rate; they have a named owner for the avoidable contacts list (a role, a team, or a recurring forum); they run a structured cross-functional review with product and content teams on a defined cadence; their knowledge base is governed by a roadmap driven by ticket data rather than editorial availability; and they use a live cost-per-contact framework as a standard input in investment conversations. The specific org structure varies by team size — from the Head of Support owning the function directly at smaller scale, to a dedicated Support Ops or Voice of Customer function at enterprise scale — but the presence of these five practices does not.