Sam Klaidman – September 2021
In this article for Field Service News, Sam Klaidman, Founder and Principal Adviser at Middlesex Consulting, explores the journey service leaders take to achieve their desired outcomes. By clarifying the desired outcome statement, service leaders can better implement goal-setting theory to define the right types of goals for both the organization and program participants. This process often includes linking short-term goals to long-term strategic objectives, ensuring personal and professional goals align with the bigger picture.
Here is a notable exchange from Lewis Carroll’s Alice’s Adventures in Wonderland:
‘Would you tell me, please, which way I ought to go from here?’ asked Alice.
‘That depends a good deal on where you want to get to,’ said the Cheshire Cat.
‘I don’t much care where—’ said Alice.
‘Then it doesn’t matter which way you go,’ said the Cat.
‘—so long as I get somewhere,’ Alice added.
‘Oh, you’re sure to do that,’ said the Cat, ‘if you only walk long enough.’
Service leaders, however, know where they want to go. They aim to achieve the business objectives defined in their strategic plans or individual goals, which often determine annual bonuses. Yet many miss opportunities to excel because they do not use all available tools. Incorporating a logic model to define desired outcomes helps set goals that are measurable, achievable, relevant and time-bound, ensuring each step aligns with both short-term and long-term objectives.
Drip
Service businesses are inundated with data—from products in the field, call centers, service managers, logistics, and peers in finance, marketing, sales, customer success and beyond. What’s often missing is actionable insight. This condition is known as DRIP:
Data Rich Insight Poor
For example:
Most field service organizations survey customers and measure key performance indicators (KPIs) such as:
- Net Promoter Score (NPS)
- Customer Satisfaction (CSAT)
- Customer Effort Score (CES)
Data is collected and aggregated into a single KPI. Unfortunately, these KPIs do not guide the actions needed to achieve business outcomes like revenue growth, higher employee satisfaction or improved productivity. To reach these outcomes, you must link individual data to actual customer actions—discover what customers really did, how they responded to surveys, and pinpoint what must be corrected for better results. Creating a measurable, achievable plan with relevant time frames can significantly influence goal-setting success over time.
Solving the DRIP problem requires a more detailed approach:

Few people focus on this level of detail—what some call working in the weeds. Let’s examine how NPS is typically used and why this process may not be effective.
Net Promoter Score
Net Promoter Score (NPS) was introduced by Fred Reichheld in the 2003 Harvard Business Review article “The One Number You Need to Grow.” Today it’s used across businesses of all sizes and industries, including field service organizations. While NPS has many critics, there are also real-world examples supporting its validity.
Consider an example where NPS and high-level analysis yield data that makes analysts and companies feel productive, but does not actually improve desired outcomes.
A Quick Review of NPS
Organizations ask customers:
“Based on XXX, how likely are you to recommend us to a friend or associate?”
Responses are given on an 11-point scale from 0 to 10—10 is extremely likely, 5 is neutral, and 0 is not at all likely. Results are grouped as follows:

The two highest scores (9 and 10, shown in green) are promoters, the next two (7 and 8, in yellow) are passives, and the remaining seven (0 through 6, in red) are detractors. NPS is calculated as the percentage of promoters minus the percentage of detractors, yielding a score from −100% to +100%.
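The calculation is simple enough to sketch in a few lines of Python (the sample responses below are invented for illustration):

```python
def nps(scores):
    """Compute Net Promoter Score from raw 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count toward
    the total but neither add nor subtract. NPS = %promoters - %detractors.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Invented sample: 5 promoters, 2 passives, 3 detractors out of 10
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]
print(nps(responses))  # → 20
```

Note that the aggregate number hides exactly the detail the rest of this article argues for: two very different distributions of responses can produce the same score.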
Some Data
Bain & Company, the originator of NPS, produced the following chart:

In this example, surveyors focus not on the NPS score but on how customer sentiment correlates with buying intentions. Promoters are about 90-95% likely to consider their current manufacturer, passives 75-80% likely, and detractors only 40-45% likely.
Knowing each individual’s score, surveyors can create targeted programs for customer segments or sub-segments, identify reasons behind their feelings, and correct issues or offer compensation if problems are beyond their control. Simultaneously, internal procedures and policies must be reviewed to prevent alienating other customers. Developing these corrective steps and aligning them with a logic model ensures each strategy is built on a clear plan, making overall goals effective, measurable and within relevant time constraints.
But intent alone is not enough. In “Five Frogs on a Log” by Mark L. Feldman and Michael F. Spratt, a riddle illustrates this concept:
Five frogs are sitting on a log.
Four decide to jump off.
How many are left?
Answer: Five.
Why?
Because deciding and doing are not the same.
What matters is action, not intention. A customer’s promise to send a purchase order is meaningless until it is received and booked. In business coaching, setting time-bound goals and tracking real-world outcomes is essential for achieving results.
Regarding the Bain & Company data, a more useful question would be: “Based on XXX, how likely are you to lease or purchase your next vehicle from our brand (or dealership)?” The business objective is to sell or lease vehicles, not just get referrals. Surveyors could then track respondents at each level—0, 1, 2, etc.—who actually leased or purchased a car. It might take one or two years to measure the impact of increasing promoter percentages, but this approach enables progress based on actual data. This demonstrates evidence-based methods to refine specific, measurable goals and achieve desired outcomes over a defined period.
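One way to operationalize this is to join each respondent’s survey score to a record of whether they actually leased or purchased, then measure conversion per segment. A minimal sketch, assuming respondents are tracked as (score, purchased) pairs—the function names and sample data are hypothetical:

```python
def segment(score):
    """Map a 0-10 survey score to its standard NPS segment."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def conversion_by_segment(respondents):
    """Percent of respondents in each segment who actually purchased.

    respondents: list of (score, purchased) pairs, where purchased is
    the observed real-world action, not the survey answer.
    """
    totals, buys = {}, {}
    for score, purchased in respondents:
        seg = segment(score)
        totals[seg] = totals.get(seg, 0) + 1
        buys[seg] = buys.get(seg, 0) + (1 if purchased else 0)
    return {seg: round(100 * buys[seg] / totals[seg]) for seg in totals}

# Invented sample data: score given, then whether a purchase followed
sample = [(10, True), (9, True), (9, False), (8, False),
          (7, True), (4, False), (2, False)]
print(conversion_by_segment(sample))
```

Run over a year or two of real transactions, this kind of join turns the survey from a gratification metric into evidence of which score bands actually convert.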
Another Example but About Service Parts Usage, not NPS
Data – Your business is the Field Service arm of a hardware product OEM, and you consume a large number of parts each month. To diagnose issues, your parts manager prepares a report of total usage by part number and another report breaking out the data by transaction type—installation, warranty, billable and service contract. You notice one expensive part is most used during warranty.
Insight – If your only concern is minimizing customer downtime, you would increase stock levels. But if your desired outcome is higher company profit and CSAT scores, you ensure each defective part is returned for failure analysis.
Action – Failure analysts share results and total field repair costs with Engineering and Manufacturing. This often leads to part redesign or modification and changes in manufacturing processes.
Outcome – Once implemented, it may be cost-effective to swap out old designs when field engineers are on-site. Old parts are removed from stock and replaced with new designs, resulting in overall cost savings—your desired outcome.
This process aligns measurable, achievable steps with relevant, time-bound actions, structuring desired outcomes effectively. Viewing these steps through a logic model enables more efficient goal-setting for both short-term and long-term performance, leading to genuine improvements in parts usage and business results.
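The parts manager’s two reports above amount to a pair of simple aggregations: total usage by part number, and the same transactions broken out by type. A sketch with invented part numbers and transaction data:

```python
from collections import Counter

# Hypothetical monthly transaction log: (part number, transaction type)
transactions = [
    ("P-1042", "warranty"), ("P-1042", "warranty"), ("P-1042", "billable"),
    ("P-2210", "installation"), ("P-2210", "service contract"),
    ("P-1042", "warranty"),
]

# Report 1: total usage by part number
usage_by_part = Counter(part for part, _ in transactions)

# Report 2: usage broken out by (part, transaction type)
usage_by_type = Counter(transactions)

print(usage_by_part.most_common(1))            # the most-consumed part
print(usage_by_type[("P-1042", "warranty")])   # its warranty usage
```

In this invented data, the heaviest-used part is consumed mostly under warranty—exactly the signal that should route it to failure analysis rather than simply trigger higher stock levels.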
Conclusion
Without linking data to desired outcomes, you’re essentially tracking a gratification metric—it feels good but doesn’t move you closer to your objectives. Whether setting personal goals or writing goals for your team, remember that a smart outcomes approach—where goals are specific, measurable, achievable and relevant—ensures that decisions lead to action. A desired outcomes tool or logic model provides the clarity needed to bridge that gap.