You’re designing a new feature. Your team has debated for weeks about which interaction pattern will work better. Mockups look clean in Figma. Stakeholders have signed off. But you have no idea if real people can actually use it. This is the problem guerrilla usability testing solves—cheaply, quickly, and without the overhead of a formal lab study.
A coffee shop is one of the best environments for this kind of rapid user research. The ambient noise creates realistic distraction. The seating arrangement lets you observe without hovering. And the foot traffic gives you access to people who aren’t your coworkers or users who’ve already been through your onboarding funnel. If you haven’t run one yet, you’re leaving some of the cheapest user data available uncollected.
This guide walks you through the entire process from preparation to presenting findings. I’ve run dozens of these tests in coffee shops across three countries, and I’ll tell you exactly what works, what doesn’t, and where most people waste their time.
Guerrilla usability testing is a rapid, informal method of observing real users trying to complete tasks with your product, prototype, or design. Unlike formal lab studies, you don’t recruit through panels, don’t pay participants market rates, and don’t control every variable in the environment.
The term comes from the guerrilla marketing concept—unconventional, on-the-ground tactics that achieve results with minimal resources. In UX practice, it typically means intercepting people in public spaces, asking them to complete a few tasks, and watching what happens.
You’ll hear people argue about whether this counts as “real” research. It does—with the appropriate constraints. You’re not testing for statistical validity. You’re looking for severe usability issues, obvious friction points, and patterns in how people approach unfamiliar interfaces. Five users in a coffee shop will find roughly 85% of the problems you’d find with more rigorous testing, according to the classic Nielsen Norman Group research. That’s good enough for most iterative design work.
The method isn’t new. Jakob Nielsen was writing about it in the early 2000s. What has changed is how accessible it’s become—any designer with a prototype and a coffee budget can do this today.
Coffee shops offer something no usability lab can replicate: authentic context. People are alert but relaxed. They’re in public but not performing. The background noise (conversations, espresso machine hissing, indie music playing) mirrors the distracted, multitasking environment where most digital products actually get used.
The other practical advantages are harder to ignore:
Speed. You can be testing within 30 minutes of deciding to run a session. No recruiter emails, no scheduling software, no participant compensation logistics.
Cost. A $5 coffee buys you a 20-minute conversation with someone who might reveal that your navigation labels make no sense. Compare that to $100+ per participant for panel studies.
Honesty. Strangers in coffee shops have no reason to spare your feelings. They won’t nod along because they think that’s what you want to hear. If your design confuses them, they’ll tell you—or better, you’ll watch them struggle without saying a word.
Diversity. Unlike testing with colleagues or existing users, coffee shop recruitment naturally gives you a range of ages, tech comfort levels, and mental models. You’re not just testing with people like you.
The main tradeoff is control. You can’t guarantee participant demographics, you can’t replicate exact conditions across sessions, and you have to work around the natural rhythms of the space. These are acceptable tradeoffs for iterative design feedback. They’re not acceptable if you’re testing for accessibility compliance or medical device interfaces.
Before you leave your desk, you need to know what you’re actually trying to learn. Vague goals like “get feedback on the new checkout flow” will give you vague results.
Write two to three specific questions you want answered. Examples:

- Can first-time users figure out where to add an expense without any help?
- Do people understand what the navigation labels mean, or do they tap the wrong ones?
- Where do users look first when asked to find a specific feature?
Each test session should focus on one or two tasks maximum. If you pile on five tasks, participants fatigue and your later data becomes useless.
Your script serves two purposes: it keeps you consistent across participants, and it prevents you from leading people toward answers you want to hear.
Keep it short. A guerrilla test script should fit on one side of paper:

- A one-line introduction and ask (who you are, what you want, how long it takes)
- The task instructions, written out word for word
- One or two retrospective follow-up questions per task
- A thank-you and wrap-up line
The biggest mistake I see is designers who can’t resist giving hints during the observation phase. If someone is struggling, that’s data. Write it down. Then, after the session, you can decide whether to give them a hint for the remaining tasks.
Here’s a real script I used last year when testing a budgeting app prototype:
“Hi, I’m a designer working on a new app. Could I get 10 minutes of your time to try something and give me feedback? I’m not selling anything—this is just research for a project I’m working on.”
[If yes:]
“Great, thanks. So this is a concept for a budgeting app that doesn’t exist yet—just a prototype. I’d like you to try one thing: can you add a $45 expense for groceries? Just tap around and do what feels natural. I’ll watch and take notes, but I won’t help unless you get completely stuck.”
[After they attempt it:]
“That was helpful. When you were looking for where to add an expense, what were you thinking? Was it clear where to tap?”
Notice there’s no “think aloud” pressure. Asking people to narrate their thoughts while performing tasks is awkward in casual environments. Instead, ask retrospective questions after they finish each task.
Bring exactly what you need:

- The device your actual users have, with the prototype loaded and working
- Your script, printed on a single sheet
- A notebook and pen for observations
- Cash or small gift cards for thank-yous
What you don’t need: recording equipment, screensharing setup, or a formal observation room. The messiness is part of the method.
This is where most people hesitate. Walking up to strangers and asking for their time feels uncomfortable. It gets easier with practice, and there are specific approaches that work.
Timing matters. Weekday mornings (8-10am) and Saturday mid-morning (10am-12pm) are prime. People are awake, they’re already committed to being there, and they haven’t been on their laptops for hours. Avoid happy hour—people are either rushing to leave or already drinking.
Choose your spot. Sit at a table near high-traffic areas but not in the walkway. Don’t hover near the register. You’re looking for someone alone, on a laptop or reading—someone who looks like they have 15 minutes to spare and isn’t deeply focused. Avoid people in meetings or on phone calls.
The approach. Make eye contact, smile, and lead with the ask:
“Excuse me—I hope I’m not interrupting. I’m a designer working on a new app and I’m looking for someone to try it for about 10 minutes. I’d really appreciate the feedback. Would you be willing to help?”
If they seem hesitant, add: “I can buy you a coffee as a thank you”—but lead with the request. Most people say yes. If they say no, thank them and move to the next person. Don’t take it personally.
Target five participants. For guerrilla testing, five users is the sweet spot. You’ll start seeing patterns by user three, and user five usually confirms what you’ve observed. More than five in a single session yields diminishing returns.
A practical tip: recruit all five before you start testing. Approach five people, get five yeses, then go back to the first person. This prevents the awkwardness of trying to recruit someone while another participant is waiting.
Once you’ve recruited your participants, the actual testing follows a consistent rhythm:
1. Set the context (1 minute). Remind them what they’re doing, that they’re helping with research, that there are no wrong answers. Show them the device and let them hold it if appropriate.
2. Give the task (30 seconds). Read your script’s task description verbatim. Don’t add color commentary. “Can you find where you’d search for a flight to Denver?” is clear. “Can you try to find a flight—I’m really curious if the search works the way I hope it does” introduces bias.
3. Observe without intervening (5-10 minutes). This is the part that requires discipline. Watch their fingers, their facial expressions, their pauses. Note exactly where they tap first. Count how many times they back out. If they mutter something, write it down. If they sigh, write that down too.
Only step in if they’ve been completely stuck for more than 60 seconds. Even then, ask a question first: “What are you looking for right now?” Often they’ll answer and then figure it out themselves.
4. Ask follow-up questions (2-3 minutes). After each task, ask one or two questions. Don’t ask “Did you like it?”—that’s not useful. Ask “What were you thinking when you tapped that?” or “What would you expect to happen here?” These questions reveal mental models.
5. Thank them and wrap up (30 seconds). “That’s exactly what I needed. Thanks so much.” Hand them the coffee or gift card. Walk them through what happens next if appropriate (“I’ll summarize what I learned and we may implement changes based on this”).
Repeat with each participant. Take 2-3 minutes between sessions to jot down everything you remember before it fades.
Your notes from the session are raw material. They need processing before they become useful to your team.
Right after your last session, spend 15 minutes writing up what you observed while it’s still legible to you.
I use a simple three-column format:
| User | What happened | Quote/observation |
|---|---|---|
| Participant 1 | Tapped logo instead of search icon | “I thought the logo would go back to home” |
| Participant 2 | Scrolled past search bar twice | “I didn’t see it until I scrolled down” |
This is not formal research documentation. It’s a working note for you and your team.
After documenting, the analysis is straightforward: group issues by severity.
Critical issues: Users couldn’t complete the task. This needs fixing immediately.
Major issues: Users completed the task but with significant friction, confusion, or obvious workarounds. Plan to address these in the next sprint.
Minor issues: Users noticed something off but it didn’t stop them. Consider fixing if it’s easy; otherwise, deprioritize.
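If you keep your session notes in a structured form, the severity triage above takes a few lines to automate. This is a minimal sketch: the observation records, field names, and issues below are hypothetical examples, not data from any real session.

```python
from collections import defaultdict

# Hypothetical notes from a five-person session. Each record mirrors the
# three-column format: who, what happened, and a severity judgment.
observations = [
    {"user": 1, "issue": "Tapped logo instead of search icon", "severity": "major"},
    {"user": 2, "issue": "Scrolled past search bar twice", "severity": "major"},
    {"user": 3, "issue": "Could not find the add-expense button", "severity": "critical"},
    {"user": 4, "issue": "Hesitated at the category picker", "severity": "minor"},
    {"user": 5, "issue": "Could not find the add-expense button", "severity": "critical"},
]

# Group observations by severity so critical issues surface first.
by_severity = defaultdict(list)
for obs in observations:
    by_severity[obs["severity"]].append(obs)

for level in ("critical", "major", "minor"):
    print(f"{level.upper()} ({len(by_severity[level])} observations)")
    for obs in by_severity[level]:
        print(f"  Participant {obs['user']}: {obs['issue']}")
```

Sorting the printout critical-first matters: if the same issue stopped two of five participants cold, it leads your write-up regardless of how many minor annoyances you also logged.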
When you present to your team, skip the formal research presentation. Guerrilla testing lacks the rigor for that. Instead, frame it as “quick feedback we gathered” and focus on actionable findings: what broke, how many of the five participants hit each issue, and what you recommend changing.
Be honest about limitations: “We talked to five people on a Saturday morning—this represents a specific slice of users, not everyone.” This honesty builds trust and prevents stakeholders from over-interpreting the data.
Having run dozens of these, here’s what I’ve learned the hard way:
Testing too many tasks. If you give participants five tasks, data from tasks four and five is contaminated by fatigue. Stick to one or two.
Helping too soon. The instinct to rescue someone from confusion is strong. Resist it. Their struggle is the data. Ask “What are you looking for?” before you point anywhere.
Testing on the wrong device. Don’t hand someone your $1,200 laptop to test a mobile app. Test on the device your actual users have.
Recruiting only people who look like you. Coffee shops in young, urban neighborhoods skew toward certain demographics. Vary your locations and times to get different participant pools.
Not compensating fairly. A $5 gift card for 15 minutes is reasonable. Asking for 30 minutes without offering anything is rude and will get you more nos than yeses.
Collecting without acting. The worst guerrilla testing is the kind where designers gather interesting anecdotes and then don’t change anything. If you run the test, commit to doing something with what you learn.
This method has real limits. Don’t reach for guerrilla testing when:

- You need statistical validity or quantitative benchmarks
- You’re testing for accessibility compliance
- You’re working on safety-critical interfaces, like medical devices
- You need feedback from a narrow, specific user population you won’t find in a café
For those situations, you need formal lab studies, moderated remote testing, or unmoderated panel research. Guerrilla testing is a complement to those methods, not a replacement.
The honest truth: I’ve seen teams use guerrilla testing as an excuse to avoid proper research entirely. That’s a mistake. Use it for what it’s good at—rapid, early-stage feedback that keeps your design direction from going off the rails. Then graduate to more rigorous methods for high-stakes decisions.
Running a guerrilla usability test in a coffee shop is one of the highest-return activities you can do as a designer. For the cost of a few lattes, you get direct observation of how real humans interact with your work. No panel. No scheduling. No overhead.
The method isn’t perfect. It won’t tell you everything. But it’ll tell you if your navigation is broken, if your labels make sense, and if people can actually do the thing you designed them to do. That’s information worth having before you ship.
Pick a prototype. Walk to a coffee shop. Ask five strangers for 10 minutes. Your next iteration will be better for it.