What I Look For in the First 20 Minutes of a Workflow Review
The 20-minute workflow review isn't a sales conversation. It's a diagnostic. By the end of it, I should know two things: whether I can actually help, and roughly where to start. Most consultants use early calls to qualify the client. I use them to qualify the problem — and those are different exercises with different outcomes.
A client who's enthusiastic about working together but has a process that isn't ready for automation isn't someone I should take on right now — not because they're not worth helping, but because the fix they need isn't one I can deliver cleanly. I'd rather tell them that in 20 minutes and point them toward what to do first than take the engagement and deliver something half-useful three weeks later.
Here's what I'm actually listening for.
How many source reports are in the process
The first question I always ask is the simplest one: walk me through what happens every morning, from the first report you open to when the brief goes out. Just describe it out loud, in order.
The answer tells me a great deal. A team pulling from two sources has a manageable manual process — the automation case is real but the urgency is lower. A team pulling from five or six sources across multiple properties, recombining them manually each morning, has a much stronger case and a much clearer ROI.
The number of sources also tells me about complexity downstream. More sources usually means more format inconsistency, more cleaning, more normalization work before the data is usable. That's not a red flag — it's the scope of the work. But I want to understand it before proposing anything.
Who touches the process and how
Some operations have one person who owns the morning report entirely — they pull everything, assemble the view, and send it out. Others have it distributed across two or three people, with informal handoffs and timing dependencies between them.
Either structure can be automated, but they suggest different approaches. A single-owner process is usually faster to map and faster to replace. A distributed one often has coordination happening implicitly between people — expectations about format, timing assumptions, informal checks — that need to be surfaced before the automation can replicate them reliably.
The most telling question in this area isn't who does the work today. It's what happens when that person is out. When the answer is "it usually doesn't go out" or "someone else tries but it takes twice as long and half the sections are missing," that tells me the workflow is more fragile than the team realizes — and that the case for automation is stronger than the daily time cost alone suggests.
How long it actually takes
Teams almost always underestimate this number, and in a predictable direction. They'll say 20 minutes when they mean the time they're actively working — not the time waiting for an export to generate, not the 10 minutes of context-switching when someone messages them mid-assembly, not the fact that Mondays take significantly longer because they're catching up on the weekend.
I've found it's more useful to ask when the morning brief typically arrives in inboxes, and then work backward from when the first export usually drops. That gap — from first export to delivered output — is the real number. For most groups it's between 45 minutes and two hours, even when the active work time feels shorter.
For a 10-property group, a 40-minute process per property is 400 minutes of daily manual work — roughly 1,733 hours per year across 260 working days — absorbed silently as "what the job requires." Most teams have never calculated it. Once they do, the conversation shifts.
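The arithmetic above is worth making explicit. A minimal back-of-envelope sketch, using the figures from this example (the 260-working-day count is an assumption, since some operations run the brief seven days a week):

```python
# Back-of-envelope annual cost of a manual morning report,
# using the figures from the example above.
MINUTES_PER_PROPERTY = 40
PROPERTIES = 10
WORKING_DAYS = 260  # assumption: weekday-only delivery

daily_minutes = MINUTES_PER_PROPERTY * PROPERTIES    # 400 minutes per day
annual_hours = daily_minutes * WORKING_DAYS / 60     # ~1,733 hours per year

print(f"{daily_minutes} min/day -> {annual_hours:,.0f} hours/year")
# prints "400 min/day -> 1,733 hours/year"
```

Swap in your own per-property time and calendar: a seven-day operation at the same pace lands closer to 2,400 hours a year.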
What the output actually looks like
This is where I learn the most about how automation-ready the operation is. I'll ask to see the brief — the actual email, spreadsheet, or document that goes out each morning. Not a description of it. The real thing.
A well-defined, consistent output format is the strongest positive signal I can get. It means the team already knows what they want. The automation has a clear target, and the work is primarily about connecting the inputs — not figuring out what the output should be.
An output that varies from day to day, or that's never been written down as a standard, or that implicitly reflects whoever assembled it that morning — that's harder. It's not a dealbreaker, but it means there's definition work required before any automation can be built reliably. That's a different kind of project, and a longer one.
What tells me automation is practical right now
- Same source reports every morning
- Consistent output format
- Someone can describe the steps clearly
- Process has run the same way for 6+ months
- Clear owner who can confirm details
And what tells me it's not ready yet
- Inputs arrive on irregular schedules
- Output format changes by person or day
- Process lives in one person's head
- Significant one-off judgment in assembly
- No consistent definition of "done"
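The positive signals above lend themselves to a quick tally. A hypothetical sketch of that tally — the signal names and the 4-of-5 threshold are illustrative assumptions, not a tool I actually run:

```python
# Hypothetical readiness tally over the positive signals listed above.
# The 4-of-5 threshold is an illustrative assumption.
READY_SIGNALS = [
    "same source reports every morning",
    "consistent output format",
    "steps can be described clearly",
    "process unchanged for 6+ months",
    "clear owner who can confirm details",
]

def readiness_score(answers: dict) -> tuple:
    """Count confirmed signals; treat 4 of 5 or better as practical now."""
    score = sum(bool(answers.get(signal)) for signal in READY_SIGNALS)
    return score, score >= 4

# Example: only the first three signals confirmed.
score, ready = readiness_score({s: True for s in READY_SIGNALS[:3]})
print(score, ready)  # prints "3 False"
```

The point isn't the number itself — it's that each missing signal names a specific piece of definition work to do before building anything.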
Why I always recommend starting narrower than expected
Almost every time, I propose a narrower first pilot than the client expects. Not because the larger scope isn't worth doing — it usually is — but because a working automated output for one property or one report type is worth considerably more at week four than a complex multi-property system that's still being tested and refined.
What I've learned from working across different portfolio types and sizes is that the first successful automation creates the trust and the operational familiarity that makes everything after it easier. The team has seen the output. They know what it looks like. They've made adjustments based on how it landed in practice. The second and third phases build on something real instead of something theoretical.
Starting narrow isn't a constraint on ambition. It's the most reliable path to the larger outcome the client actually wants.
Want to run your own version of this diagnostic?
The morning report workflow audit walks you through the same questions I ask in a review — source sprawl, manual assembly time, fragility, decision delay, and output consistency. Takes about 10 minutes and scores each area.