10 Prompts to Upgrade Your AI Agent Insights (London Matcha Edition)

11 min read · May 12, 2026
Head of Marketing
AI & NLP

A lot of tools these days come with AI agents. What's still missing is a clear way of working with them. Most teams use agents for quick summaries, maybe a couple of follow-up questions, and then the analysis is considered "done," resulting in very average output. What's missing is not access to technology, but knowledge of how to use it. That's where the distance between surface-level insights and seriously strong analysis starts to show.
 
And that knowledge gap matters, because the way insights are delivered is already changing. Gartner predicts that by 2028, up to 60% of dashboards will be replaced by AI-generated narratives, and KPMG is already envisioning a 2030 workday where most data work is handled by AI agents.
 
To get you on the right track, we wanted to show you what one of these new workflows could look like. So, we took a real dataset with thousands of London matcha café reviews and used it as an AI agent test case. Why matcha, you ask? Because it’s everywhere right now, and we’ve gone pretty darn deep down the matcha rabbit hole here at the Caplena office.

And it's not just us. There are tons of new matcha connoisseurs out there. It's trending hard (as Google Trends is telling us below). Interest has been climbing steadily for the past few years, meaning more matcha cafés, more competition, and more expectations. And that's exactly the kind of messy, opinionated feedback dataset that's perfect for testing what an AI agent can actually do.

You probably didn't plan to care this much about matcha either, but here we are. And that's exactly how you (in this story, you're a very finger-on-the-pulse kind of market researcher) end up in a meeting where someone says: “We should open a matcha café in London.”

There's a pause. A few nods. Someone says "Shoreditch." Someone else whispers "minimalist interior." At least one person is holding a room-temp kombucha. Then someone turns to you: "Can we validate this?"

You're not being asked if matcha is popular. You're being asked what actually makes a matcha café work. What drives ratings, what creates loyalty, and where the real opportunities are in the market. The data exists, sure, but it's not really analysis-ready: there are thousands of reviews, all subjective, and hard to compare. This is where the way you work starts to matter.

🥶 How most teams use AI agents today

You drop in your data and ask a few questions, and maybe one or two follow-ups. You get a very average qualitative feel, and then you move on. The analysis happens elsewhere.

🔥 What changes with an agent workflow

You don't stop at one question. You build a sequence. Each prompt adds context and pushes the analysis further. Now you're getting explanations, not just observations.

And you're smart. So you're choosing the workflow way, clearly. Instead of spending your time clicking through dashboards, you spend it exploring, interpreting, and telling the story. The agent handles the heavy lifting. How far you get in this process comes down to one thing: the prompts you use to guide it.

🚩 The problem: Most agentic AI analyses don't go far enough

A lot of teams would start by asking an AI agent to "summarize" their data. It makes sense, but it's also where things stall. Agents are designed for multi-step work, so when they're used like a one-trick pony, that capability never gets exercised.

The issue here is simply the lack of direction. Prompts are the control layer for how an AI agent behaves. Too vague, and the output becomes generic. Too rigid, and the system loses the ability to reason. In feedback analysis, this matters even more because you're trying to connect it to outcomes like ratings, sentiment, and behavior.

Once you move from "asking questions" to building an agentic workflow, things start to change. The idea is to have each prompt build on the previous one, so that each answer adds context to the next. That's what turns an AI agent into something closer to a partner.

🧠 The setup: From "nice idea" to actual customer understanding

Alright, so you've been tasked to get a lay of the matcha land. You start by doing what any reasonable researcher would do: You gather reviews from the most popular matcha cafés in town. Different styles, different positioning, and different volumes of feedback.

An hour on Google and you've identified your champions: JENKI Matcha, Matchado, Ohho Cafe, How Matcha, and Tsujiri. You pull all of their reviews from Google and run the data through an agent that helps you with the setup, structuring the feedback and connecting it to ratings and sentiment. Now, you're not just looking at raw text anymore. You're working with something you can actually analyze.

From there, you let the AI agent loose on the dataset. The ten prompts below are how the analysis unfolds. They aren't picked at random: each one follows a real analysis milestone, moving step by step from description to drivers to monitoring.


10 prompts that actually generate insight

01. Understand what's top of mind for customers: "What defines a great experience based on customer reviews across all locations?"

02. Reveal what scoring metrics don't tell: "How do scores and net sentiment differ for each location?"

03. Identify strengths: "Which topics are mentioned the most positively?"

04. Spot weaknesses: "What are the most common complaints?"

05. Understand what drives satisfaction: "Which aspects of the experience have the strongest influence on ratings across these businesses?"

06. Dig deeper into performance differences: "Why do some receive higher ratings than others, based on customer feedback?"

07. Segment customer groups: "Do different types of customers describe their expectations differently across these businesses?"

08. Detect gaps and where to improve: "What are customers asking for or expecting that none of these businesses are consistently delivering?"

09. Track trends over time: "How have customer expectations and feedback topics changed over time?"

10. Turn analysis into proactive monitoring: "Alert me every month about new topics or shifts in sentiment across these businesses"


01. Understand what's top of mind for customers

"What defines a great experience based on customer reviews across all locations?"

This is not a warm-up question. You're asking the AI agent to synthesize the entire dataset into a model of what "good" looks like across the category, not just for one café.

What you find in the matcha data
As you run this through the agent, the experience breaks into clear layers: matcha quality as the entry ticket, staff friendliness as the emotional differentiator, and desserts and atmosphere rounding it out.

More interestingly, when you're looking at impact, matcha quality has the strongest positive effect on ratings (+1.12 stars per positive mention). But staff attitude has the strongest negative effect (−0.87 per negative mention). That combination tells you something important. Quality gets rewarded, but bad service gets punished even faster.

Takeaway
Every category follows a similar structure: baseline expectations, emotional differentiators, and friction points. If you don’t separate these early, everything else becomes harder to interpret.

What this unlocks in an AI agent workflow
This prompt creates a general understanding of the dataset. It gives the AI agent a reference point for what "good" looks like, so every next step is grounded in the same definition instead of drifting into generic summaries.

02. Reveal what scoring metrics don't tell

"How do scores and net sentiment differ for each location?"

This is your landscape view. You're mapping who's leading, who's lagging, and how far apart they actually are. But, importantly, you're not stopping at averages. You're combining ratings and sentiment to understand what scores alone don’t tell you.

What you find in the matcha data
One café (Ohho) leads across every metric, with a 4.88 rating and +92 sentiment. Others cluster lower, with noticeable gaps. What stands out is not just the ranking, but the spread. That might not sound like a big deal, but a ~0.5 star difference translates into a 45-point gap in net sentiment. That's where the real differences start to show.
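Combining the two metrics is mechanically simple. Here's a minimal sketch of a per-location scorecard, assuming a list of review records with hypothetical `location`, `rating`, and `sentiment` fields; net sentiment is the share of positive reviews minus the share of negative ones, on a −100 to +100 scale:

```python
# Minimal sketch: combine average star ratings with net sentiment per location.
# Field names ("location", "rating", "sentiment") are illustrative assumptions,
# not the actual export schema.
from collections import defaultdict

def location_scorecard(reviews):
    """Return {location: (avg_rating, net_sentiment)} where net sentiment is
    the % of positive reviews minus the % of negative ones (-100..+100)."""
    buckets = defaultdict(list)
    for r in reviews:
        buckets[r["location"]].append(r)
    scorecard = {}
    for loc, revs in buckets.items():
        avg = sum(r["rating"] for r in revs) / len(revs)
        pos = sum(r["sentiment"] == "positive" for r in revs)
        neg = sum(r["sentiment"] == "negative" for r in revs)
        net = 100 * (pos - neg) / len(revs)
        scorecard[loc] = (round(avg, 2), round(net))
    return scorecard

reviews = [
    {"location": "Ohho", "rating": 5, "sentiment": "positive"},
    {"location": "Ohho", "rating": 5, "sentiment": "positive"},
    {"location": "Ohho", "rating": 4, "sentiment": "neutral"},
    {"location": "How Matcha", "rating": 4, "sentiment": "positive"},
    {"location": "How Matcha", "rating": 2, "sentiment": "negative"},
]
print(location_scorecard(reviews))
```

The point of pairing the two numbers is exactly the spread effect above: two locations can sit close on stars while being far apart on net sentiment.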

Takeaway
Small differences in ratings can hide much bigger differences underneath. Looking at sentiment alongside scores reveals gaps that averages alone won't show.

What this unlocks in an AI agent workflow
This anchors the analysis in measurable differences. It pushes the agent to compare and quantify gaps, turning feedback into something you can prioritize instead of just describe.


03. Identify strengths

"Which topics are mentioned the most positively?"

This is where perception becomes visible. Not what companies say they are, but what customers consistently experience.

What you find in the matcha data
Each café has carved out a distinct niche: they're either pistachio-driven, personal service focused, dessert lovers, matcha-purists, or new flavor innovators. This doesn't happen by chance. It reflects deliberate or emergent positioning in the market.

Takeaway
Customers associate brands with specific strengths. Surfacing these patterns makes differentiation visible instead of assumed.

What this unlocks in an AI agent workflow
This is where the agent starts structuring perception. Instead of listing feedback, it organizes it into consistent topics so you can compare how each player is positioned in the market.

04. Spot weaknesses

"What are the most common complaints?"

You're now identifying where and how the experience breaks differently across players.

What you find in the matcha data
Each café has its own failure mode: we're seeing complaints about seating constraints, quality inconsistencies, pricing (too expensive), staff attitude (100% negative mentions), or a mix of them all. The staff attitude figure is particularly telling. It shows how a single issue, if consistent enough, can dominate perception.

Takeaway
Weaknesses are not random. They point to underlying operational or experience gaps, and understanding them is what makes improvement actionable.

What this unlocks in an AI agent workflow
This introduces friction into the analysis. It helps the agent isolate where experiences break down and make sure that negative signals are not diluted by overall averages.

05. Understand what drives satisfaction

"Which aspects of the experience have the strongest influence on ratings across these businesses?"

This is where you move from "what is mentioned" to "what actually matters." You’re quantifying impact.

What you find in the matcha data
Matcha quality has the strongest positive impact on ratings. But it’s relatively consistent across cafés, which limits its ability to differentiate.  
 
Staff attitude behaves differently. Positive mentions barely move ratings. Negative mentions have a massive downward impact. This makes it a classic hygiene factor. When it works, it goes unnoticed. When it fails, it dominates the experience. But we also saw café-specific nuances that highlighted their unique drivers (for good and bad).

Takeaway
Not all aspects of the experience carry the same weight. Driver analysis shows where changes will actually move outcomes, helping you prioritize what matters most.

What this unlocks in an AI agent workflow
This is the shift from frequency to impact. Now you can move to prioritization instead of just surface-level observation.

06. Dig deeper into performance differences

"Why do some receive higher ratings than others, based on customer feedback?"

At this point, you've mapped the market. This prompt connects those layers and explains why differences exist.

What you find in the matcha data
At first glance, it looks like top cafés simply have better matcha. But the data says otherwise. Matcha quality mentions are almost identical between higher- and lower-rated cafés (35% vs 31%). The real differences show up elsewhere. The cafés that perform the best aren't dramatically better at the product. They're significantly better at the experience around it.

Takeaway
Once baseline expectations are met, performance differences come from how the experience is delivered. That’s where differentiation actually happens.

What this unlocks in an AI agent workflow
This is where the agent moves from measuring differences to explaining them. By connecting the earlier findings, it builds a narrative for why performance diverges instead of just showing that it does.

07. Segment customer groups

"Do different types of customers describe their expectations differently across these businesses?"

You're now moving from a single "average customer" view to understanding variation in expectations. This can be done by analyzing feedback through existing segments, or by letting patterns emerge directly from the data.

What you find in the matcha data
Three customer profiles emerge: experience-driven, price-sensitive, and a smaller but vocal group shaped by negative staff interactions. Positive experiences often highlight staff by name, while negative ones are largely driven by poor service. Delight splits between atmosphere and service for some cafés, and product and desserts for others.
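To make the mechanics concrete, here is a deliberately toy sketch of emergent segmentation: bucketing reviews into profiles based on which signal words dominate. A real agent would cluster on embeddings or coded topics; the profile names and keyword lists below are illustrative assumptions.

```python
# Toy segmentation sketch: assign each review to the profile whose signal
# words it matches most. Profiles and keywords are illustrative assumptions.
PROFILES = {
    "experience-driven": {"atmosphere", "cozy", "vibe", "interior", "experience"},
    "price-sensitive": {"price", "expensive", "overpriced", "value"},
    "service-scarred": {"rude", "ignored", "slow", "unfriendly"},
}

def segment(review_text):
    words = set(review_text.lower().split())
    # Pick the profile with the most keyword hits; fall back if nothing matches.
    best, hits = "unsegmented", 0
    for profile, keywords in PROFILES.items():
        n = len(words & keywords)
        if n > hits:
            best, hits = profile, n
    return best

print(segment("lovely cozy interior and great vibe"))  # experience-driven
print(segment("nice matcha but honestly overpriced"))
```

Even this crude version shows why segmentation changes the picture: the same café can score well with one profile and poorly with another.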

Takeaway
There's no single "five-star experience." Expectations vary across customer groups, and understanding those differences is key to designing experiences that resonate.

What this unlocks in an AI agent workflow
This adds another perspective on segmentation beyond predefined groups. The agent helps surface patterns that are difficult to uncover manually, showing how expectations vary in real usage.

08. Detect gaps and where to improve

"What are customers asking for or expecting that none of these businesses are consistently delivering?"

This is where analysis becomes forward-looking, and where teams go from analysis to actual CX decisions. Instead of comparing what exists, you're identifying what's missing.

What you find in the matcha data
Across cafés, a clear set of unmet expectations emerges, from limited seating and inconsistent quality to weak customization, missing amenities, and uneven service. These patterns show up across multiple locations, pointing to structural gaps in the market.

Takeaway
Opportunities often sit where expectations are not consistently met. Solving shared pain points is what creates real differentiation.

What this unlocks in an AI agent workflow
This moves the agent from analysis to opportunity mapping. Instead of comparing what exists, it highlights what is missing, turning feedback into a source of innovation.

09. Track trends over time

"How have customer expectations and feedback topics changed over time?"

You're now adding a time dimension. Without it, you're analyzing a static picture. With it, you start to see direction.

What you find in the matcha data
The market has shifted. Staff friendliness is mentioned far more often, desserts matter less, and trends like boba have come and gone. Seating frustration is rising, service is improving, and new flavors are emerging. The category has moved from novelty to expectation, and customers are evaluating it more critically.
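Trend statements like "staff friendliness is mentioned far more often" boil down to tracking topic-mention share per period. A minimal sketch, assuming each review carries a hypothetical `date` string (YYYY-MM-DD) and a `topics` list:

```python
# Trend tracking sketch: % of reviews mentioning a topic, per month.
# Field names ("date", "topics") are illustrative assumptions.
from collections import defaultdict

def monthly_topic_share(reviews, topic):
    totals, hits = defaultdict(int), defaultdict(int)
    for r in reviews:
        month = r["date"][:7]  # "2025-03-14" -> "2025-03"
        totals[month] += 1
        hits[month] += topic in r["topics"]
    return {m: round(100 * hits[m] / totals[m]) for m in sorted(totals)}

reviews = [
    {"date": "2025-01-10", "topics": ["desserts"]},
    {"date": "2025-01-22", "topics": ["staff", "desserts"]},
    {"date": "2025-06-05", "topics": ["staff"]},
    {"date": "2025-06-18", "topics": ["staff", "seating"]},
]
print(monthly_topic_share(reviews, "staff"))  # {'2025-01': 50, '2025-06': 100}
```

Plot a few of these series side by side and rising topics (seating frustration) and fading ones (boba) become immediately visible.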

Takeaway
Customer expectations evolve quickly. What worked before doesn't necessarily work now, and keeping up with those shifts is key to staying relevant.

What this unlocks in an AI agent workflow
This adds a time dimension to the analysis. The agent stops describing a static dataset and starts identifying trends, making insights forward-looking instead of retrospective.

10. Turn analysis into proactive monitoring

"Alert me every month about new topics or shifts in sentiment across these businesses"

This is where you move from analysis to system. Instead of running a one-off project, you create continuous visibility.

What this looks like in practice
In the matcha case, this becomes a monitoring setup that tracks key market shifts: changes in ratings and sentiment over time, emerging topics gaining traction, spikes in negative feedback, and cross-café patterns that signal broader trends.
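The alerting logic underneath such a setup can be as simple as a threshold rule comparing two periods. A minimal sketch, with an illustrative threshold and data shape; a platform would run something like this on a schedule:

```python
# Monitoring sketch: flag a café when net sentiment drops by more than a
# threshold between two periods. Threshold and data shapes are illustrative.
def sentiment_alerts(prev, curr, drop_threshold=10):
    """prev/curr: {cafe: net_sentiment}. Return list of (cafe, delta) alerts."""
    alerts = []
    for cafe, now in curr.items():
        delta = now - prev.get(cafe, now)  # new cafés produce no alert
        if delta <= -drop_threshold:
            alerts.append((cafe, delta))
    return alerts

prev = {"Ohho": 92, "Tsujiri": 55}
curr = {"Ohho": 90, "Tsujiri": 38}
print(sentiment_alerts(prev, curr))  # [('Tsujiri', -17)]
```

The same pattern extends to new-topic detection: compare this month's topic list against last month's and alert on anything unseen before.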

Takeaway
Insights become significantly more valuable when they're continuous. The real advantage comes from seeing changes as they happen, not weeks later.

What this unlocks in an AI agent workflow
The agent moves from answering questions to actively tracking changes, helping your team to stay aligned with the data without starting from scratch each and every time.

😎 What you now know (and probably couldn't see before)

By the time you've worked through these prompts, you’re no longer dealing with a collection of reviews. You’ve built a structured understanding of the market. You now know a few things.

  • What defines a great experience

  • How competitors position themselves

  • What actually drives ratings

  • Why some perform better than others

  • Where the gaps are

  • How expectations are evolving

At this point, you can go back to that original meeting and answer the question properly. Yes, you can in fact open a matcha café in London. But success won't come from inventing a new drink. It comes from consistently delivering an experience that removes friction, meets rising expectations, and gives people a reason to come back.

⚙️ From standalone prompts to a repeatable system

This is where most teams hit a ceiling. Running this kind of analysis once is manageable. Running it continuously across datasets, while keeping everything consistent, is where it gets complex.

This is where the difference between a standalone AI agent and a structured system becomes clear. 

  • Insights are generated on demand, not days later

  • Changes in the data are detected automatically

  • Different stakeholders can explore insights independently

  • Findings remain consistent across analyses

This is what turns isolated prompts into a real agentic workflow. And it's where platforms like Caplena come in. Not to replace thinking, but to support it: connecting text with quantitative signals, maintaining consistency, and enabling teams to move from exploration to monitoring without rebuilding the process each time.

"

Dans le futur, les tableaux de bords seront moins utilisés qu'aujourd'hui. Peut-être qu'ils resteront à disposition des équipes pour les consulter, mais je pense que les agents seront une force de transformation.

Matthias Strodtkötter

Head of Product

Matthias Strodtkötter's company

☝️ Where this could break (and how to fix it)

This analysis wasn't perfectly smooth. A few things tripped us up, and fixing them made a big difference in the quality of the output. Luckily, there is great guidance available on getting the most out of your agent.

1. When topics aren't clear from the start: Without a structured way to code open-ended feedback, things get messy fast.
Fix: Define clear, distinct topics that cover most responses.

2. When the agent lacks context: Some answers feel generic. Not wrong, just not specific enough.
Fix: Be explicit about your dataset. What columns mean, how topics are defined, and what you're trying to understand. Tools like Caplena carry this context forward so outputs stay grounded.

3. When you stop at the first answer: The best insights rarely come from the first response.
Fix: Treat prompts as a conversation. Always follow up.

4. When numbers aren't validated: Combining text and quantitative signals requires a quick sense-check.
Fix: Sanity-check numbers and relationships. Platforms like Caplena rely on a statistical backbone, so calculations remain precise and consistent.

5. When prompts aren't structured: Running prompts in isolation leads to fragmented results.
Fix: Build a simple sequence where each step builds on the previous one.


👉 See how this works on your own data

If you're wondering how to apply this to your own data, not just matcha cafés (unless that’s your thing, you do you), that's exactly what Caplena's Agentic Insights are built for. Book a quick walkthrough and see how this works on your own data. Not just in theory, but in practice.

