
In the early days of GenAI adoption, most of the effort went into building prompts, chaining tools, and selecting the right model. But as LLMs enter the enterprise mainstream, the biggest performance gains are coming from somewhere else: context engineering.


If prompt engineering is about how you ask, then context engineering is about what the model knows when you ask it.


And in many cases, context matters more.


What Is Context Engineering?


At its core, context engineering is the practice of deliberately shaping the information that large language models (LLMs) use to make decisions.


Every LLM request has a context window: a limited amount of information the model sees at inference time. This includes the current prompt, system instructions, tool definitions, user preferences, and any retrieved knowledge from previous conversations or documents.


Context engineering is the art and science of curating that window. It’s about deciding what information goes in, how it’s formatted, and how it adapts over time based on user behavior, role, and task. It bridges the gap between general-purpose models and specific, contextualized enterprise use cases.
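

To make that concrete, here’s a minimal sketch of what curating the window can look like in code: assembling a chat-style message list from modular pieces. The class and field names are illustrative assumptions, not any particular framework’s API.


```python
from dataclasses import dataclass, field


@dataclass
class ContextBundle:
    """Illustrative container for the pieces that make up one LLM request."""
    system_instructions: str                                  # short, layered task instructions
    user_profile: str                                         # role, preferences, entitlements
    retrieved_docs: list[str] = field(default_factory=list)  # knowledge pulled in for this task
    memory: list[str] = field(default_factory=list)          # notes from prior conversations


def assemble_messages(ctx: ContextBundle, user_prompt: str) -> list[dict]:
    """Turn a context bundle into a chat-style message list, most stable content first."""
    knowledge = "\n\n".join(ctx.retrieved_docs) or "No documents retrieved."
    history = "\n".join(ctx.memory) or "No prior decisions recorded."
    return [
        {"role": "system", "content": ctx.system_instructions},
        {"role": "system", "content": f"User profile:\n{ctx.user_profile}"},
        {"role": "system", "content": f"Relevant knowledge:\n{knowledge}"},
        {"role": "system", "content": f"Session memory:\n{history}"},
        {"role": "user", "content": user_prompt},
    ]
```


Every decision in that function (what to include, in what order, under what labels) is context engineering.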


Without context engineering, even the best models will hallucinate, produce irrelevant outputs, or miss the mark entirely. With it, LLMs can feel far more intelligent, personalized, and reliable.


Why It Matters


Most enterprises aren’t struggling because they picked the wrong model. They’re struggling because their model doesn’t have the right context.


Common issues like hallucinations, irrelevant responses, or inconsistent behavior often stem from:


  • Missing task instructions (the model doesn’t know what it’s trying to do)

  • Lack of memory (the model forgets past conversations or decisions)

  • Insufficient knowledge (the model doesn’t know about your systems or business)

  • Poor tool definitions (the model doesn’t know how or when to use the tools it’s given)


Context engineering addresses all of these.


In other words, it’s not just a performance layer; it’s a trust layer that determines whether your AI feels like a prototype or a real assistant.
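

Take the last failure mode above, poor tool definitions, as a concrete example. Below is a hedged sketch of a tool definition in an OpenAI-style function-calling schema; the tool name and fields are hypothetical, but the idea is general: the descriptions are context that tells the model both how and when to use the tool.


```python
# Hypothetical tool definition in an OpenAI-style function-calling schema.
# The descriptions carry the context the model needs to decide when to call it.
invoice_status_tool = {
    "type": "function",
    "function": {
        "name": "get_invoice_status",
        "description": (
            "Look up the payment status of a single customer invoice in the "
            "billing system. Use this when the user asks whether a specific "
            "invoice has been paid; do not use it for aggregate revenue questions."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "invoice_id": {
                    "type": "string",
                    "description": "Invoice identifier, e.g. 'INV-2024-0042'.",
                },
            },
            "required": ["invoice_id"],
        },
    },
}
```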


Why You Shouldn’t Roll Your Own


It might be tempting to build a custom context solution yourself. But the reality is that most hand-rolled solutions break down at scale:


  • They’re brittle and hard to maintain

  • They struggle to support multiple agents, tools, and personas

  • They often lack a governance model for what goes in or out of the context window


This is where Model Context Protocol (MCP) comes in.


MCP provides a structured, reusable, and observable approach to context engineering across your entire agent stack.


It standardizes how context is:


  • Defined (e.g. task instructions, user profile, tools, memory)

  • Assembled dynamically at runtime

  • Governed and validated

  • Updated over time


With MCP, you don’t need to reinvent context logic for each new use case. You define a pattern once and scale it with confidence.
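

As a rough sketch of what that can look like, here’s a minimal MCP server built with the official Python SDK’s FastMCP helper. The resource and tool shown are hypothetical examples and the bodies are stubs; the point is the shape: context components are defined once, in one place, and any MCP-aware agent can discover and use them.


```python
# Minimal MCP server sketch using the Python SDK's FastMCP helper (the `mcp` package).
# The resource and tool names below are illustrative, not part of any real system.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("enterprise-context")


@mcp.resource("profile://{user_id}")
def user_profile(user_id: str) -> str:
    """Role and preferences the model should know about this user."""
    # In practice this would query an identity or CRM system.
    return f"User {user_id}: financial analyst, prefers concise summaries."


@mcp.tool()
def quarterly_trends(metric: str, quarter: str) -> str:
    """Summarize how a metric moved in a given quarter (e.g. 'revenue', 'Q2')."""
    # In practice this would call your analytics layer.
    return f"{metric} for {quarter}: placeholder summary."


if __name__ == "__main__":
    mcp.run()
```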


Best Practices for Enterprise Context Engineering


Here’s what good context engineering looks like at the enterprise level:


  • Define roles and tasks: Context should adapt based on the persona and task (e.g. a financial analyst asking about Q2 trends).

  • Keep system instructions clean: Long, unfocused system prompts are hard to debug, and instructions get lost in the context. Use short, layered instructions.

  • Version your tools and templates: Treat context like software. Use version control and structured testing.

  • Modularize your inputs: Use MCP or a similar structure to keep prompts, tools, memory, and retrieval decoupled.

  • Establish governance: Decide who can define or update context components — and how changes are reviewed.

  • Use telemetry: Track token usage, tool calls, and user satisfaction to tune your context dynamically.
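

To make the telemetry point concrete, here’s a minimal sketch of per-request context telemetry. The record fields, version string, and token counts are all illustrative assumptions; in practice you’d pull usage from your model provider’s response and ship the records to your observability stack.


```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class ContextTelemetry:
    """One record per LLM request, capturing how the context window was used."""
    request_id: str
    persona: str              # which role/persona the context was built for
    template_version: str     # version of the prompt/tool bundle that was used
    prompt_tokens: int
    completion_tokens: int
    tool_calls: list[str]
    timestamp: float


def log_request(record: ContextTelemetry) -> None:
    # Emit one JSON line per request; point this at your logging pipeline.
    print(json.dumps(asdict(record)))


# Illustrative usage after a model call:
log_request(ContextTelemetry(
    request_id="req-001",
    persona="financial_analyst",
    template_version="q2-trends@1.3.0",
    prompt_tokens=1820,
    completion_tokens=240,
    tool_calls=["quarterly_trends"],
    timestamp=time.time(),
))
```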


Final Thoughts


Context engineering isn’t hype. It’s the natural evolution of building with LLMs.


As AI becomes embedded in real workflows, the difference between a helpful assistant and a frustrating one comes down to whether it understands the who, what, and why of each task.


You won’t get that from a bigger model. You’ll get it from better context.


And if you want to do that reliably, at scale, and with traceability, you need to engineer it.



At Fuse, we believe a great data strategy only matters if it leads to action.





In Part 1, we talked about why the traditional data project model is failing and how a shift toward data products offers a better path forward.


But that shift doesn’t happen overnight.


Most data teams today are caught in a cycle of intake, delivery, and support.


Even if they want to work differently, the gravitational pull of “just get it done” is strong. So how do you change that?


Here are five practical steps to help your team move from project delivery to product thinking.


1. Start with Problems and Possibilities


If you're drowning in intake requests, your first step isn’t to stop everything and start building products. It's to understand the business landscape.


Use a framework like RICE (Reach, Impact, Confidence, Effort) to score the highest-priority problems you’re handling today. Then have proactive conversations with your business counterparts about where data could create leverage, and use what you learn to refine your prioritization assumptions.
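

If it helps to see the scoring as arithmetic: a RICE score is typically computed as (Reach × Impact × Confidence) ÷ Effort, so higher reach, impact, and confidence push a problem up the list while effort pushes it down. A quick sketch with made-up intake requests:


```python
from dataclasses import dataclass


@dataclass
class IntakeRequest:
    name: str
    reach: float        # e.g. people or decisions affected per quarter
    impact: float       # e.g. 0.25 (minimal) up to 3 (massive)
    confidence: float   # 0.0 to 1.0
    effort: float       # e.g. person-months

    @property
    def rice(self) -> float:
        # Standard RICE formula: (Reach * Impact * Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort


# Made-up examples, scored and ranked
backlog = [
    IntakeRequest("Churn dashboard refresh", reach=40, impact=2, confidence=0.8, effort=1),
    IntakeRequest("Finance close automation", reach=12, impact=3, confidence=0.5, effort=3),
    IntakeRequest("Ad hoc exec report", reach=5, impact=1, confidence=0.9, effort=0.5),
]

for r in sorted(backlog, key=lambda r: r.rice, reverse=True):
    print(f"{r.name}: RICE = {r.rice:.1f}")
```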


Don’t just wait for intake. Run Vision Workshops to explore the “art of the possible” and uncover opportunities that aren't yet on the roadmap.


This helps shift the narrative from “data team as service desk” to “data team as strategic enabler.”


2. Shift Allocation Toward Products


As you identify product opportunities, start shifting team members to support them.


This doesn't mean killing all projects. But it does mean setting aside explicit capacity to build and grow data products.


One approach is to set an initial ratio, say 75% project-based service delivery and 25% product development, and adjust it over time. As more durable products take hold, your need for reactive service work should shrink.


Be transparent with the business. Let them know that the goal is better outcomes, not less support.


3. Create New Operating Rhythms


You can't deliver data products using the same rhythms as project work.


Move from rigid project timelines to product backlogs. From milestone-driven Gantt charts to agile sprints. From ad hoc status updates to regular demo sessions with users.


Create space for iteration and feedback. Treat adoption and value creation as success metrics, not just delivery.


4. Rewire Incentives and Mindsets


If you want your team to act like a product team, you need to reward them like a product team.


That means giving them time to go deep on a problem space. Encouraging experimentation. Recognizing progress based on user outcomes, not task completion.


It also means helping your stakeholders adapt. Product thinking requires shared ownership. That can feel uncomfortable at first, but it’s what unlocks trust and traction.


5. Define and Launch Your First Product


Don’t wait for perfect structure. Start small.


Pick a real user need. Define a thin slice of a data product that could meet it. Assemble a cross-functional team. Build it. Test it. Launch it. Improve it.


Make the benefits visible. Show the before and after. Tell the story in a way that resonates with the business.


Then do it again.



At Fuse, we believe a great data strategy only matters if it leads to action.




Most data work is still run like a project.


You identify a need. You scope the effort. You assign people to deliver something. And then, when it’s done, you move on to the next thing.


But increasingly, this model is breaking down, and the distance between the business and the data team is growing.


Because data is not a one-and-done initiative. It’s not something you “build and forget.” It’s an evolving asset and capability that needs to be maintained, improved, and adapted as business needs change.


In this two-part series, we’ll look at why the traditional data project model no longer works and how to start shifting toward a product mindset that’s better suited to today’s data-driven organizations.



The Trouble with Traditional Data Projects


Data projects tend to follow a waterfall mindset:


  • Define requirements

  • Build the pipeline or report

  • Deliver the output

  • Declare success


In theory, this sounds clean. But in practice, it often fails.


That’s because most data work:


  • Involves ambiguous and evolving requirements

  • Depends on upstream data sources that change over time

  • Requires feedback loops with users to be effective

  • Needs ongoing support and iteration to add value as business needs evolve


What starts as a “project” quickly becomes a permanent fixture. But since it wasn’t built or funded with longevity in mind, it begins to break down. The result is tech debt, poor user experience, and rising maintenance costs.



Introducing Data Products


Data products are built and managed the same way as software products:


  • They serve a defined set of personas

  • They have a clear value proposition

  • They’re versioned, tested, and iterated

  • They’re owned by a team, not a one-time project group


A data product might consist of a series of pipelines, dashboards, or models, but it’s designed and packaged to be used, maintained, and improved over time, delivering ongoing benefit to a specific user group.


This shift mirrors what software teams have already embraced: durable, user-centered solutions require durable, user-centered teams.



Why Product Thinking Works Better


Moving to a product model offers several advantages:


  • Better alignment: Products are built to satisfy specific user needs (needs, not requirements), which means more relevant, useful outcomes.

  • Higher quality: Iteration leads to better design, fewer bugs, and more trusted results.

  • Sustainability: Ownership ensures that someone is responsible for keeping things working.

  • Strategic focus: Products are tied to business value and adoption, not just delivery deadlines.


This doesn’t mean we never run projects. But it means we stop pretending that all data work can be scoped, staffed, and delivered like a finite initiative. Instead, we invest in capabilities that grow over time.


In Part 2, we’ll look at what it takes to actually make this shift — from changing how teams are structured to how work is funded and prioritized.



At Fuse, we believe a great data strategy only matters if it leads to action.

