What a Software Background Teaches You About Hotel Operations
I didn't come into hotel consulting from hospitality. I came from software — building systems, working with data pipelines, thinking about how information flows through organizations and where it gets stuck. When I started working with hotel teams, the operations looked different to me than they do to people who grew up inside the industry. Not better — just through a different lens. That lens turns out to be occasionally useful.
The first thing I noticed: the morning report is a data pipeline
In software, a data pipeline is a process that moves data from one or more source systems, transforms it into a usable form, and delivers it to a destination where it can be acted on. This is one of the most common patterns in all of enterprise technology — so common that there's an entire category of software dedicated to managing it.
When I saw a hotel team's morning reporting process for the first time — pulling files from the PMS, copying figures into a spreadsheet, formatting it, checking the formulas, distributing it — I recognized the pattern immediately. It was a data pipeline. A manual one, assembled by hand every morning, but structurally identical to what I'd seen in software operations.
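The manual steps above map cleanly onto the classic extract-transform-load shape. Here's a minimal sketch of that structure, assuming a CSV export from the PMS; the file names, column names, and derived figures are all illustrative placeholders, not any real system's schema:

```python
import csv
from pathlib import Path

def extract(export_path: Path) -> list[dict]:
    """The 'pulling files from the PMS' step: read the raw export rows."""
    with export_path.open(newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    """The 'copying figures and checking formulas' step: compute the
    derived numbers a human would otherwise type in by hand."""
    report = []
    for row in rows:
        rooms_sold = int(row["rooms_sold"])
        revenue = float(row["room_revenue"])
        report.append({
            "date": row["date"],
            "rooms_sold": rooms_sold,
            # ADR (average daily rate) = room revenue / rooms sold
            "adr": round(revenue / rooms_sold, 2) if rooms_sold else 0.0,
        })
    return report

def load(report: list[dict], out_path: Path) -> None:
    """The 'distributing' step: write the finished report where the
    team expects to find it."""
    with out_path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["date", "rooms_sold", "adr"])
        writer.writeheader()
        writer.writerows(report)
```

Nothing in this sketch is sophisticated, and that's the point: each step the team performs by hand is a few lines of deterministic code once the pattern is named.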
The hotel team didn't call it a pipeline. They called it "doing the reports." But the framing matters, because the moment you see it as a pipeline, the obvious question becomes: why is a human doing the steps that software should be doing?
I've noticed that hotel teams sometimes push back on this framing initially. The implication feels like you're saying the work is simple or that the people doing it are replaceable. That's not it at all. The work of interpreting the output — deciding what the numbers mean, figuring out which dates need attention, making the rate call — is genuinely hard and requires real expertise. The assembly step that precedes that work doesn't.
The second thing: reliability is a design property, not a personality trait
In software, you build for reliability explicitly. You write tests. You add monitoring. You build retry logic. You set up alerts for when something fails. The assumption is that systems will break, so you design around that reality from the start.
Manual workflows don't have any of this. They rely on the person doing them showing up, following the same steps in the same order, catching their own errors, and not being sick or on vacation. When the process works, it's invisible. When it doesn't — when the figures are wrong, when the report arrives at 9am instead of 7:30am, when a column is missing — the failure is attributed to human error and the fix is "be more careful next time."
The interesting observation from a software perspective is that "be more careful" isn't actually a fix. It's an acknowledgment that the process doesn't have the reliability properties it needs. A well-designed automated process doesn't need someone to be careful. It either works or it alerts someone that it didn't work. The reliability is in the system, not in the effort.
The software response is structural: build monitoring, define what "working" looks like, alert when it doesn't, and make the failure mode obvious so it can be fixed quickly.
The automation layer either delivers the report by 7:45am or it sends an alert that the source export didn't arrive. No silent failures. No relying on someone to notice.
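That "no silent failures" property can be sketched in a few lines. The deadline, file paths, and alert channel below are illustrative assumptions, not a description of any particular system:

```python
from datetime import datetime, time
from pathlib import Path

DEADLINE = time(7, 45)  # the report must be in place by 7:45am

def send_alert(message: str) -> None:
    # Placeholder: in practice this would be email, Slack, SMS, etc.
    print(f"ALERT: {message}")

def check_report(report_path: Path, now: datetime) -> bool:
    """Return True if the finished report exists; past the deadline,
    alert and return False instead of failing silently."""
    if report_path.exists():
        return True
    if now.time() >= DEADLINE:
        send_alert(f"{report_path} missing at {now:%H:%M} -- "
                   "source export may not have arrived")
    return False
```

The check itself is trivial. What matters is that someone wrote it down as code, so "noticing the report is late" no longer depends on a person happening to look.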
The third thing: the bottleneck is usually not where people think it is
In software performance work, there's a well-known principle: optimize the bottleneck, not the most visible part of the system. A change that speeds up a fast step while the slow step stays slow produces almost no improvement. You have to find and fix the actual constraint.
Hotel reporting workflows have a version of this problem. When teams think about improving their reporting, they often focus on the output — the format of the brief, the dashboard it's displayed in, the visualization. That's the visible part. But the bottleneck is almost always the assembly step: the time between when the source data exists and when the finished report is ready. Improving the format of the brief while leaving the manual assembly intact is exactly the mistake of optimizing a fast step while the slow one stays slow.
The useful question isn't "how should this report look?" It's "what takes the most time between the data existing and the brief being ready, and can that step be removed?" In most cases I've seen, the assembly step is the answer to both parts of that question.
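The arithmetic behind this is worth making concrete. The minutes below are invented for illustration, not measured from any real property, but the shape of the result holds whenever one step dominates the total:

```python
# Hypothetical timings for a manual morning-report process, in minutes.
steps = {
    "pull_exports": 15,
    "assemble_spreadsheet": 30,
    "format_brief": 5,
    "distribute": 2,
}

total = sum(steps.values())  # 52 minutes end to end

# Optimize the visible step: make formatting twice as fast.
faster_format = total - steps["format_brief"] / 2

# Remove the bottleneck: automate the pull and assembly entirely.
automated = total - steps["pull_exports"] - steps["assemble_spreadsheet"]
```

Halving the formatting step saves two and a half minutes out of fifty-two; removing the assembly steps saves forty-five. Same effort of attention, wildly different payoff.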
The fourth thing: documentation is a forcing function
In software, you document systems because other people need to work on them — now or later. The documentation isn't the goal. It's a side effect of building something other people can understand and maintain.
I've found that asking hotel teams to document their reporting workflow has a similar forcing-function effect. When you actually write down — step by step, with who does what and in which order — how the morning report gets assembled, two things happen. First, you find steps that no one could explain clearly, which usually means no one owns them or they're done inconsistently. Second, you realize how many steps there are. The process that felt like "just doing the reports" turns out to have 18 distinct steps with 4 dependencies and 2 points where the whole thing waits on a single person.
That documentation exercise is where automation planning starts. You can't automate a process you haven't mapped. And you can't make a good case for changing a process until you've made the current one visible.
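One way to see why the mapping exercise pays off: once the workflow is written down as data rather than habit, questions like "where does this wait on a single person?" become mechanical. The step names and owners below are invented for illustration:

```python
# A documented workflow: each step with its owner and dependencies.
workflow = {
    "export_pms_files": {"owner": "night_auditor", "depends_on": []},
    "copy_figures":     {"owner": "revenue_manager", "depends_on": ["export_pms_files"]},
    "check_formulas":   {"owner": "revenue_manager", "depends_on": ["copy_figures"]},
    "format_brief":     {"owner": "revenue_manager", "depends_on": ["check_formulas"]},
    "distribute":       {"owner": "front_office", "depends_on": ["format_brief"]},
}

def single_person_chokepoints(wf: dict) -> list[str]:
    """Owners responsible for more than one step -- the people the
    whole process silently waits on when they're out."""
    counts: dict[str, int] = {}
    for step in wf.values():
        counts[step["owner"]] = counts.get(step["owner"], 0) + 1
    return [owner for owner, n in counts.items() if n > 1]
```

Running this over the toy workflow flags the revenue manager as the person three consecutive steps depend on, which is exactly the kind of finding the documentation exercise surfaces.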
This is what the workflow audit on this site is actually doing. It's not grading anyone. It's making the current process visible in a structured way so you can see where the friction is — and so any automation built on top of it starts from an accurate understanding of what actually needs to change.
Where the parallel breaks down
I want to be honest about where the software lens has limits, because applying it carelessly does cause real problems.
Software systems are deterministic in a way that hotel operations aren't. Code does the same thing every time you run it. The morning report process has to handle the reality that source data doesn't always arrive on schedule, that some properties will have anomalies that need human judgment before they can be included in the brief, and that the team members who use the output have varying levels of technical comfort with automated systems.
The automation layer I build for hotel reporting isn't trying to make the process fully autonomous — it's trying to remove the mechanical assembly work so the human judgment that's genuinely required can be applied more effectively. There's a meaningful difference between automating assembly and trying to automate the decision-making that depends on it.
Revenue management is, at its core, a judgment-under-uncertainty discipline. The data informs the decision. The person makes it. I'm trying to make sure the data is there, reliably, when the decision needs to be made — not replace the person making it.
Want to map what your reporting process actually looks like?
The workflow audit walks through your current morning reporting steps, surfaces where the friction is, and gives you a baseline for figuring out what's worth automating first.