Robot labeled "MCP" presents to three people at a table. Flip chart shows upward graph, text: Business-Aligned, Secure, Cost-Efficient.

In Part 1, we looked at what MCP is, why it’s gaining traction, and where it fits in the modern GenAI architecture. In Part 2, we’re going to get more practical. We’ll explore what makes a good MCP design and how to evaluate its return on investment.


Because for CIOs, the question isn’t just “Can we do this?” It’s “Should we?”



What Makes a Good MCP Design


Good MCP design is what separates a messy lab experiment from a production-ready capability. It’s the connective tissue between user intent and enterprise execution.


Here’s what to look for:


1. Business-Aligned


Each tool should map to a real-world task or action that supports business value, not just an API call. For example, instead of defining a generic tool like runSQLQuery, define a more intentional one like getAverageOccupancyRate that handles context resolution, filters, and formatting internally.


This reduces hallucination risk, improves business trust, and clarifies intent.
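
To make that concrete, here’s a minimal sketch using the official MCP Python SDK’s FastMCP helper. The server name, the metric numbers, and the data-access stub are all hypothetical:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("portfolio-analytics")  # hypothetical server name

def fetch_occupancy(geography: str, time_period: str) -> float:
    # Hypothetical data-access stub; a real version would run a
    # parameterized query against your warehouse.
    return 87.5

@mcp.tool()
def getAverageOccupancyRate(geography: str, time_period: str) -> float:
    """Average occupancy rate (0-100) for one geography and quarter,
    e.g. geography='Ontario', time_period='2024-Q3'."""
    # Context resolution, filtering, and formatting live behind this
    # business-facing contract instead of inside the prompt, so the
    # model never authors raw SQL the way a generic runSQLQuery tool
    # would require.
    return fetch_occupancy(geography, time_period)
```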


But that doesn’t mean you need to define a tool for every single business metric. That can quickly become an engineering burden.


Many teams opt for a middle ground. For example:


  • Define a general-purpose tool like getMetric that takes in parameters like metricName, geography, timePeriod, and portfolioId

  • Add logic to resolve common terms like “my portfolio” or “last quarter” using user context

  • Expose a curated list of supported metrics and time ranges via enum definitions to avoid hallucinations


This way, you retain flexibility without sacrificing control.
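
Here’s roughly what that middle ground could look like in code, again sketched with FastMCP. The metric and period names are stand-ins for whatever curated list your business maintains; FastMCP derives the tool’s JSON schema from the type hints, so the Literal types should surface as enums:

```python
from typing import Literal

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("portfolio-analytics")

# Curated vocabularies (hypothetical); exposing them as enums means the
# model picks from a list instead of inventing metric names.
Metric = Literal["occupancy_rate", "net_operating_income", "vacancy_days"]
Period = Literal["last_month", "last_quarter", "year_to_date"]

@mcp.tool()
def getMetric(
    metricName: Metric,
    timePeriod: Period,
    geography: str | None = None,
    portfolioId: str | None = None,
) -> dict:
    """Return one business metric, optionally scoped to a geography or
    portfolio. Ambiguous phrases like "my portfolio" are resolved from
    user context before this tool is invoked."""
    # Stub: a real implementation would run a parameterized query;
    # here we just echo the resolved request.
    return {"metric": metricName, "period": timePeriod,
            "geography": geography, "portfolio": portfolioId,
            "value": 87.5}
```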


2. Structured


MCP works best when your tool definitions follow a clear schema:


  • Name

  • Description

  • Parameters (with types, validations, and enums)

  • Examples (real-world examples of how the tool would be used)

  • Output format (e.g., JSON, text, chart config, etc.)
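
For illustration, here’s one way a catalog entry following this schema might look. The "name", "description", and "inputSchema" fields mirror MCP’s tool-definition format; "examples", "outputFormat", and the region list are our own illustrative conventions:

```python
occupancy_tool = {
    "name": "getAverageOccupancyRate",
    "description": "Average occupancy rate for a geography and quarter.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "geography": {"type": "string",
                          "enum": ["Ontario", "Quebec", "Alberta"]},
            "timePeriod": {"type": "string",
                           "pattern": "^[0-9]{4}-Q[1-4]$"},
        },
        "required": ["geography", "timePeriod"],
    },
    "examples": [
        {"question": "What was occupancy in Ontario last quarter?",
         "arguments": {"geography": "Ontario", "timePeriod": "2024-Q3"}},
    ],
    "outputFormat": "JSON",
}
```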


Consistent structure helps:


  • LLMs generalize across tools

  • Developers debug and maintain implementations

  • Product teams reason about what’s possible


3. Governed


Tools should only be callable under specific conditions. For each tool, ask:


  • Who is allowed to invoke this tool?

  • What contexts should it be available in?

  • Are there budgets or rate limits applied?


Think of this as enterprise-grade role-based access control (RBAC) for your GenAI solution.
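
A minimal sketch of what that gate could look like in code; the roles, budgets, and policy table are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical policy table: which roles may call each tool, and how
# many calls per user per day you are willing to pay for.
TOOL_POLICY = {
    "getMetric": {"roles": {"analyst", "executive"}, "daily_budget": 500},
    "exportPortfolioData": {"roles": {"analyst"}, "daily_budget": 20},
}

@dataclass
class User:
    id: str
    roles: set

def authorize(user: User, tool_name: str, calls_today: int) -> bool:
    """Default-deny gate run before any tool dispatch."""
    policy = TOOL_POLICY.get(tool_name)
    if policy is None:
        return False  # unknown tools are never callable
    if not (user.roles & policy["roles"]):
        return False  # caller lacks an approved role
    return calls_today < policy["daily_budget"]  # enforce usage budget
```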


4. Observable


You can’t govern what you can’t see. Your MCP implementation should:


  • Log tool calls

  • Capture input/output

  • Track which tools are used by which users and why


This creates the auditability and feedback loop necessary to improve performance and safety over time.
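
One way to get there is a thin audit wrapper around tool execution. A sketch follows; the record fields and handler signature are illustrative, not a standard MCP API:

```python
import json
import logging
import time
import uuid

log = logging.getLogger("mcp.audit")

def call_with_audit(user_id: str, tool_name: str, args: dict, handler):
    """Run a tool handler and emit a structured audit record capturing
    who called which tool, with what inputs, and what came back."""
    call_id = str(uuid.uuid4())
    start = time.monotonic()
    status, result = "error", None
    try:
        result = handler(**args)
        status = "ok"
        return result
    finally:
        log.info(json.dumps({
            "call_id": call_id,
            "user": user_id,
            "tool": tool_name,
            "args": args,
            "output": repr(result),
            "status": status,
            "latency_ms": round((time.monotonic() - start) * 1000),
        }, default=str))
```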



Where Does the ROI Come From?


This is the part that matters most to CIOs: What’s the business case?


1. Lower Total Cost of Ownership


Instead of hardcoding logic in a dozen Python scripts across various LLM projects, MCP creates a central library of reusable actions. This means:


  • Faster time to market for new use cases (read: higher business-user satisfaction)

  • Lower engineering effort per project (read: lower cost per use case)

  • Less duplication of effort across teams (read: less code to maintain over time and faster iteration; see the first point)


2. Faster Time to Value


Well-defined tools allow LLM agents to act with less guesswork and more confidence. That improves:


  • First-response accuracy

  • Task completion rate

  • User trust


When you spend less time fiddling with prompt engineering, you spend more time shipping value.


3. Reduced Risk


MCP lets you bake in compliance and security at the interface level:


  • Limit what LLMs can do

  • Validate inputs before execution

  • Mask or redact sensitive data


This prevents a lot of downstream cleanup from prompt injection, privacy violations, or compliance missteps.


4. Product Thinking


MCP shifts your mindset from project to product. You stop building one-off experiments and start building:


  • Shared definitions

  • Shared infrastructure

  • Shared context

  • Shared value


This amplifies the value of each new use case. Every tool you add to the MCP library becomes a building block for future capabilities.



So Is MCP Worth It?


That depends.


If you’re building small, throwaway use cases, probably not.


But if you’re building enterprise-grade GenAI capabilities that need to be safe, reusable, and scalable?


Then MCP isn’t just worth it — it may be the only path forward.



TL;DR


MCP is more than just a developer pattern. It’s a strategic enabler for scaling GenAI responsibly.


By combining structured design with business alignment and enterprise governance, MCP can lower costs, increase speed, and reduce risk.


If you’re investing in GenAI as a long-term capability, MCP should be on your radar.



At Fuse, we believe a great data strategy only matters if it leads to action.


If you’re ready to move from planning to execution — and build solutions your team will actually use — let’s talk.


Over the past few months, MCP has found its way into nearly every conversation I’ve had with CIOs and data leaders.


Vendors are promoting it. Engineers are experimenting with it. And CIOs are asking:

“Is this just another GenAI thing or is there real business value here?”

This two-part series is designed to answer that question. We’ll break down what MCP is (and isn’t), where it fits into the modern enterprise stack, and what value it can deliver, not just for developers, but for the business as a whole.



MCP Plainly Explained


MCP stands for Model Context Protocol. At a high level, it’s a structured way to define which tools an LLM can access (in the tool-calling sense), what inputs those tools expect, what context is helpful, and how to route user requests accordingly.


You can think of it as a standardized interface between natural language systems and business systems, much like REST APIs are an interface between multiple business systems.


If tool-calling is how LLMs get things done, MCP is how they understand what can be done and how to do it safely and repeatably. Think of it somewhat like an application server or business rules engine that handles the inputs and outputs necessary to make your agentic solution come to life.
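
To ground this, here’s roughly the smallest possible MCP server using the official Python SDK. The tool and its data are toy stand-ins:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hello-mcp")  # hypothetical server name

@mcp.tool()
def get_office_address(city: str) -> str:
    """Street address of our office in a given city."""
    offices = {"Toronto": "100 King St W", "Ottawa": "150 Elgin St"}  # toy data
    return offices.get(city, "No office in that city.")

if __name__ == "__main__":
    # Serves the tool definition to any MCP client (e.g., an LLM agent)
    # and handles the resulting tool calls.
    mcp.run()
```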



Why It’s Gaining Attention


As teams move from GenAI pilots to production use cases, one thing becomes clear: free-form prompting, managing tool budgets in code, and injecting context manually don’t scale. Business users want consistency. Governance teams want control. Architects want composability.


MCP offers a framework for all three.


By defining a catalog of tools and actions, along with parameters, types, validations, descriptions, and examples, MCP lets you:


  • Control what LLMs can do (and can’t do)

  • Encourage reuse and modularity across use cases

  • Improve accuracy by grounding prompts in formal context

  • Set the stage for agentic workflows



Why MCP Matters from a Business Perspective


Too many GenAI projects stall out after the pilot phase. Why?


Because early experiments are often built with brittle code, limited reuse, and little regard for scalability or governance.


MCP changes that.


By formalizing how LLMs interact with enterprise systems, MCP:


  • Reduces risk by limiting what LLMs are allowed to do

  • Lowers cost by making actions modular and reusable

  • Improves reliability by grounding prompts in structured context

  • Accelerates delivery by separating design from implementation


In other words, MCP isn’t just a developer convenience. It’s a strategic enabler for CIOs trying to scale GenAI safely, cost-effectively, and in a way that aligns with enterprise standards.


Without MCP, organizations face a different reality:


  • Increased risk from unmanaged tool use and ambiguous prompts

  • Slower development cycles due to one-off implementations and fragile pipelines

  • Higher costs from redundant or overly complex prompt engineering

  • Limited scalability as new use cases require bespoke integrations and governance patches


Put simply, skipping MCP might work for isolated pilots, but it makes enterprise-scale deployment expensive, inconsistent, and hard to govern.



What It Looks Like in Practice


Let’s say you want to let users ask questions like:

“What was our average occupancy rate last quarter for my portfolio?”

Rather than leaving the LLM to figure everything out from a hardcoded list of tools in a Python script, MCP gives it:


  • A list of tools it can use (e.g., `runSQLQuery`)

  • A structured definition of the inputs that tool expects and the outputs it returns (e.g., a validated SQL string in, JSON-formatted results out)

  • Examples of how similar requests have been handled in the past

  • User context to resolve ambiguity (e.g., “my portfolio” → Ontario)


This is where design matters. A good MCP implementation acts like a bridge:


  • On one side: natural language questions

  • On the other: business logic, systems, APIs, and data assets


MCP connects the two, with guardrails.
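
A small sketch of that bridge in code; the user-to-portfolio mapping and the hardcoded quarter are placeholders:

```python
# Hypothetical user-context mapping: resolves "my portfolio" per user.
USER_PORTFOLIOS = {"jsmith": "Ontario"}

def resolve_context(user_id: str, args: dict) -> dict:
    """Replace ambiguous values like "my portfolio" or "last quarter"
    with concrete ones before the tool executes."""
    resolved = dict(args)
    if resolved.get("geography") == "my portfolio":
        resolved["geography"] = USER_PORTFOLIOS[user_id]
    if resolved.get("timePeriod") == "last quarter":
        resolved["timePeriod"] = "2024-Q3"  # placeholder; derive from today's date
    return resolved

# resolve_context("jsmith", {"geography": "my portfolio",
#                            "timePeriod": "last quarter"})
# -> {"geography": "Ontario", "timePeriod": "2024-Q3"}
```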



Where It’s Being Used


MCP is a core concept in many modern cloud and data platforms. For example, Snowflake provides a managed MCP Server as part of its AI/ML product offering. It’s also appearing in custom enterprise architectures built on platforms like LangChain.


But it’s not a product. It’s a pattern.


And like all patterns, its success depends on:


  • The quality of your tool definitions

  • The consistency of your structure

  • The governance model you apply



Is It Right for Your Company?


That depends. If you’re experimenting with GenAI in isolated use cases, MCP will be overkill. But if you’re:


  • Trying to build scalable GenAI products

  • Supporting multiple user types and workflows

  • Embedding LLMs into production systems


Then MCP may offer the structure you need.


The key is to approach it not just as a technical artifact, but as a design and governance layer.


That’s where we’ll go in Part 2.


We’ll explore:


  • What makes a good tool definition

  • How to balance reusability and specificity

  • The ROI of investing in MCP-style design


And we’ll give you a checklist to help decide whether now’s the time to make that investment.



TL;DR


If GenAI is part of your enterprise roadmap, MCP might be the missing layer that makes it safe, reusable, and scalable.


It’s not just about what LLMs can do.


It’s about making sure they do the right things, the right way, every time.


Follow along for Part 2: Making the Business Case for MCP.





Part 2: From Vision to Roadmap: Anchoring in Alignment


Robot labeled "DATA" presenting a roadmap on a flip chart to three attentive people at a table in an office setting.

You’ve run your Vision Workshops across departments. You’ve surfaced how success is measured, where people influence outcomes, where data is helping (or not), and where gaps or friction exist.


Now comes the harder but more important step: converting that insight into a prioritized roadmap the business can rally behind, before writing a single user story.


Here’s how I do it.



1. Synthesize and Tell the Story


Start by distilling what you heard across all departments into a concise narrative.


  • Find common themes: recurring pain points, overlapping priorities, shared data needs.

  • Highlight value zones: those use cases where a better data experience could unlock real business impact.

  • Define success: Map how each group defines success and their sphere of influence in achieving it.

  • Surface tensions or trade-offs: highlight where multiple teams are asking for the same data source, or where one group’s “urgent” is another’s “nice to have”.


The narrative becomes your baseline for alignment. It moves the conversation from “my ask” to “our priorities”.



2. Establish a Business-Led Alignment Committee


To avoid political derailment or scope creep, your roadmap decisions should live in a shared, visible process.


I often create a lightweight data steering group of 4–6 business leaders who are stakeholders in data-driven goals. This committee doesn’t micromanage; instead, its members:

  • Review proposed initiatives together

  • Surface trade-offs and conflicting priorities

  • Ensure no one function dominates the roadmap

  • Guard against “priority drift”


The goal is not full consensus. It’s a shared process people trust.



3. Prioritize with Objectivity, Not Emotion


With your committee, use a simple, transparent framework to score initiatives:

  • Reach: How many decisions or users benefit?

  • Impact: How big is the improvement on efficiency, insight, or revenue?

  • Confidence: How well understood are the need, the data, and the risks?

  • Effort (or cost): How much work, integration, or technical cleanup is required?

This is essentially the RICE method used by many product teams. It’s not perfect, but it keeps the conversation about trade-offs, not politics.
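
For reference, RICE is typically computed as (reach × impact × confidence) ÷ effort. A quick sketch with made-up scores:

```python
def rice_score(reach: float, impact: float, confidence: float,
               effort: float) -> float:
    """Standard RICE score: (reach * impact * confidence) / effort."""
    return (reach * impact * confidence) / effort

# Made-up scores for two competing initiatives:
print(rice_score(reach=400, impact=2.0, confidence=0.8, effort=3))  # ~213.3
print(rice_score(reach=150, impact=3.0, confidence=0.5, effort=1))  # 225.0
```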


While doing this, be realistic about readiness. A high score doesn’t mean an initiative is ready to start immediately. Ask:


Who on the business side has the bandwidth to partner? What dependencies or foundational work is needed first?

When business partners see that you respect their time and constraints, alignment comes faster.



4. Draft (and Share) a Transparent Roadmap


Think of your first roadmap not as a rigid commitment, but as a draft to spark conversation and ownership.


When you present it:

  • Walk people through why certain items were prioritized, and why others were deferred

  • If a theme was deferred, provide options for bringing it forward (e.g., additional budget to fund another scrum team)

  • Surface the trade-offs you made

  • Show timing, dependencies, and resource needs

  • Invite feedback and iteration


This helps shift it from the data team’s roadmap to the company’s data roadmap.



5. Operate in Delivery Cycles, Not Epics for Eternity


Even the best roadmap should be agile. I recommend:

  • Quarterly delivery cycles

  • Focusing on one initiative per cycle

  • Building, validating, and then iterating based on real usage

  • Reporting back to your committee on what was delivered, what was learned, and what comes next


This cycle of visibility, adaptability, and shared ownership becomes your alignment mechanism, not a static roadmap pinned to a wall.



Why This Matters


Without this level of alignment:

  • The data roadmap becomes a wish list

  • Functions compete instead of collaborate

  • Priorities shift with the loudest voice

  • Momentum stalls mid-delivery


When you connect business vision with alignment and planning, you get a roadmap that’s not just strategic, it’s in motion.




