Email Is the API: How Hotel Reporting Automation Works Before the Perfect Integration Exists
When most people hear "hotel reporting automation," they assume the first step is getting API access to the PMS. Sometimes it is. But in most hotel management companies, the real integration layer is far less glamorous: scheduled email attachments, shared Drive folders, Excel files with names that almost follow a pattern, and reports that arrive at 6:12am — except when they arrive at 6:47am, except when one property sends yesterday's file again. That sounds messy because it is. It's also exactly where the work happens.
Why the API conversation is often the wrong starting point
The hotel tech stack is API-poor by design. PMS vendors gate programmatic access behind partner programs. RMS platforms protect their data models. BI tools export to Excel because that's what the customer requested in 2019 and nobody has changed it. The result is a landscape where data moves primarily through scheduled reports, email attachments, SFTP drops, and manually exported spreadsheets.
This isn't a temporary problem waiting to be solved. It's the architecture. For a hotel management company running mixed brands across properties with different PMS platforms, different versions, and different configurations, there is no single clean API that covers everything. There's a collection of export mechanisms, some more reliable than others, that together constitute the data layer.
Waiting for perfect API access before automating anything is the wrong frame. The practical question is: what can you reliably receive right now, and how do you make that reliable enough to build on?
The export layer is already there — it's just not organized
Here's what most hotel management companies actually have running today, whether they've thought about it as a data layer or not:
- A PMS that can email a daily close report on a schedule
- A rate intelligence tool that generates a daily export by property
- An RMS that produces a pickup and forecast view in some format
- A BI or financial system that can schedule a budget variance report
- A GM or revenue manager who already receives most of this in their inbox
The reports exist. The delivery mechanism — email, usually — is already functioning. What doesn't exist is the layer that receives all of it reliably, normalizes it into a consistent structure, detects when something is wrong, and routes the finished view to the people who need it.
That gap is where the automation work lives. Not in building new integrations, but in organizing what's already flowing.
The real engineering problem is reliability, not parsing
Getting a script to read an email attachment and put numbers in a spreadsheet is not the hard part. The hard part is everything that happens when the world isn't clean.
The PMS export didn't arrive this morning. Maybe the scheduled job failed. Maybe the PMS was slow. Maybe someone changed the report configuration. Whatever the reason, the automation needs to know what to do: halt and flag, proceed with last known data, or blank the affected cells and mark them stale.
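That decision shouldn't live in anyone's head on the morning it matters. A minimal sketch of a per-source policy table — the source names and policy assignments here are made up for illustration; the real ones come out of the source inventory:

```python
from enum import Enum


class MissingPolicy(Enum):
    HALT = "halt"                # stop the run and alert a human
    LAST_KNOWN = "last_known"    # proceed with previous figures, marked as such
    BLANK_STALE = "blank_stale"  # blank the affected cells and flag them stale


# Hypothetical per-source policies for illustration only.
POLICIES = {
    "pms_daily_close": MissingPolicy.HALT,         # required: no brief without it
    "rate_shop_export": MissingPolicy.LAST_KNOWN,  # rates drift slowly enough
    "rms_pickup": MissingPolicy.BLANK_STALE,       # stale pickup actively misleads
}


def on_missing(source: str) -> MissingPolicy:
    """Look up what to do when a source file never arrives."""
    # Default to the safest behavior for sources nobody thought to register.
    return POLICIES.get(source, MissingPolicy.HALT)
```

The point is that each source gets an explicit, reviewable answer, instead of whatever the script happens to do when a download returns nothing.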
The file arrived at 7:45am instead of 5:30am. The morning brief has already been reviewed. Does the system rerun automatically? Flag for manual review? Log the arrival time so there's a record of what data was available when?
The same report arrived twice — once from the scheduled job and once because someone forwarded it manually. Processing it twice would double-count pickup figures. The automation needs to detect and deduplicate, not just process everything it receives.
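One way to sketch that deduplication, keyed on property, report type, and business date — the key structure is an assumption, not any particular PMS's convention:

```python
import hashlib


class Deduplicator:
    """Skip files already processed for the same property, report, and business date."""

    def __init__(self):
        self._seen = {}  # (property, report_type, business_date) -> content digest

    def should_process(self, property_code: str, report_type: str,
                       business_date: str, content: bytes) -> bool:
        key = (property_code, report_type, business_date)
        digest = hashlib.sha256(content).hexdigest()
        if key not in self._seen:
            self._seen[key] = digest
            return True
        # Already seen: either a forwarded copy (same digest) or a genuine
        # correction (different digest) — either way, flag it for review
        # rather than silently double-counting.
        return False
```

A forwarded duplicate gets dropped; a re-sent file with different contents is worth a human look, not automatic reprocessing.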
The report arrived on Tuesday but contains Monday's data. Sounds obvious, but the business date often lives in a cell inside the spreadsheet, or embedded in the filename, or derivable only from context. Getting the date wrong silently — showing Monday's occupancy labeled as Tuesday — is a trust-killer that can go unnoticed for days.
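Extracting the business date from a filename is the easy case, and even it needs care. A hedged sketch assuming two common filename patterns — real exports need their own patterns, plus a fallback to the date cell inside the file:

```python
import re
from datetime import date

# Illustrative patterns; every system names its exports differently.
FILENAME_DATE_PATTERNS = [
    re.compile(r"(\d{4})-(\d{2})-(\d{2})"),  # e.g. DailyClose_2024-05-01.xlsx
    re.compile(r"(\d{4})(\d{2})(\d{2})"),    # e.g. rateshop_20240501.csv
]


def business_date_from_filename(filename: str):
    """Extract the date the data is *for* — never the date the file arrived."""
    for pattern in FILENAME_DATE_PATTERNS:
        match = pattern.search(filename)
        if match:
            year, month, day = (int(g) for g in match.groups())
            try:
                return date(year, month, day)
            except ValueError:
                continue  # digits matched but aren't a real date, e.g. 20241599
    return None  # force an explicit fallback (cell lookup, manual review)
```

Returning `None` instead of guessing is deliberate: a wrong business date is the silent-mislabeling failure described above, and silence is the thing to avoid.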
The PMS got an update and the export now says "Rooms Sold" instead of "RoomsSold" and "OCC%" instead of "Occ Pct". The script that was working yesterday breaks silently — or worse, maps values to the wrong columns without breaking.
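One way to survive header drift is a canonical mapping that flags unknown headers instead of guessing. A sketch — the variant spellings are illustrative:

```python
# Canonical names the downstream formulas expect, with known spellings.
CANONICAL = {
    "rooms_sold": {"roomssold", "rooms sold", "rms sold"},
    "occupancy_pct": {"occ pct", "occ%", "occupancy %"},
    "adr": {"adr", "avg daily rate"},
}


def normalize(header: str) -> str:
    """Lowercase and collapse whitespace so minor formatting changes don't matter."""
    return " ".join(header.strip().lower().split())


def map_columns(headers):
    """Map source headers to canonical names; return unknowns separately
    so drift gets flagged instead of silently mis-mapping columns."""
    mapping, unknown = {}, []
    for header in headers:
        key = normalize(header)
        for canonical, variants in CANONICAL.items():
            if key in variants:
                mapping[header] = canonical
                break
        else:
            unknown.append(header)
    return mapping, unknown
```

When a PMS update renames a column, it lands in `unknown` and the run gets flagged — the opposite of mapping values to the wrong columns without breaking.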
The destination sheet silently shows stale data as fresh. A required source file is missing, but the formula or import in the destination sheet still shows last night's figures. The brief goes out. Everyone reads it. Nobody knows the data is 24 hours old because there's no indication it hasn't refreshed. This is the worst failure mode: a stale number is worse than a missing number — at least a missing number is obviously missing.
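One defense is to attach a business date to every figure and refuse to render stale data as fresh. A minimal sketch:

```python
from datetime import date


def label_figure(value, value_business_date: date, expected_business_date: date) -> str:
    """Render a figure for the brief, refusing to show stale data as fresh."""
    if value is None:
        return "MISSING"
    if value_business_date != expected_business_date:
        age = (expected_business_date - value_business_date).days
        return f"{value} (STALE: {age}d old)"
    return str(value)
```

The exact rendering matters less than the rule: a figure without today's business date attached never appears as a plain number.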
The list above is not a list of edge cases. It's a list of things that happen in normal operations at any hotel management company running manual or semi-manual reporting workflows. Edge cases are the product. Handling them well is what separates a reliable automation from a fragile demo.
What the reliability layer actually looks like
Every automation project I work on starts with the same foundation before writing a single formula or building a single view:
The minimum reliability layer
- Source inventory: document every expected input — report name, source system, delivery mechanism, expected arrival window, required vs. optional, normal format
- Business date parser: extract the date the data is for, not the date the file arrived — these are often different
- Duplicate detection: flag or skip files that have already been processed for the same business date and property
- Canonical column mapping: normalize source field names to consistent internal names, so header drift doesn't silently break downstream formulas
- Missing file behavior: define what happens for each source if the file doesn't arrive — blank the cells, show last known, halt, or flag
- Stale output prevention: if a required input is missing, the output cells that depend on it should show clearly that the data is missing or stale — not yesterday's values
- Run log: every execution writes a record — business date processed, files received, files missing, any anomalies, timestamp — so there's an audit trail the team can check when something looks off
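The run log in particular needs no infrastructure — one JSON line per execution is enough to answer "what did the system see that morning?" A sketch, with field names chosen for illustration:

```python
import json
from datetime import datetime, timezone


def run_log_entry(business_date: str, received: list, missing: list, anomalies: list) -> str:
    """Produce one JSON line per execution — an audit trail the team
    can search when a number looks off."""
    return json.dumps({
        "run_at": datetime.now(timezone.utc).isoformat(),
        "business_date": business_date,
        "files_received": received,
        "files_missing": missing,
        "anomalies": anomalies,
        "status": "ok" if not missing and not anomalies else "attention",
    })
```

Appending each line to a hidden log tab (or a plain text file) is enough; the value is in having the record at all, not in where it lives.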
This isn't glamorous work. None of it produces a chart or a dashboard or an AI summary. But it's the foundation that makes everything else trustworthy. A morning brief the team has learned to doubt — because it's been wrong before without saying so — is worse than no morning brief at all.
Why this approach works without a big IT project
The export layer is something hotel management companies already have. The PMS already sends reports. The rate shop already generates daily exports. The financial system already produces variance summaries. Nobody has to negotiate new API access or wait for a vendor to build a new integration endpoint.
What changes is that those exports get received in a monitored, structured way instead of landing in someone's inbox for manual forwarding into a spreadsheet. A Gmail filter routes them to a labeled inbox. A Google Apps Script reads the attachments, runs the reliability checks, normalizes the format, and writes the figures to the right cells. The destination sheet — the one the team already opens every morning — gets the same view it's always shown, just populated automatically instead of manually.
The team doesn't change their behavior. The brief looks the same. The only observable difference is that it's ready before anyone sits down, and when something goes wrong, the brief says so instead of quietly pretending everything is fine.
When to replace the export layer with something cleaner
The export-first approach is the practical first move, not the final architecture. As the automation matures and the portfolio grows, it makes sense to revisit each source and ask whether a cleaner integration is now available — a scheduled SFTP delivery from the PMS, a certified middleware connection to the RMS, a structured API endpoint that the BI vendor has opened up.
But the right trigger for those upgrades is reliability problems with the existing approach, not a philosophical preference for APIs over email. If the scheduled PMS export has been arriving consistently for six months, there's no reason to rebuild that piece. The exports that break, arrive inconsistently, or require manual intervention are the ones worth replacing with a more stable delivery mechanism.
The goal was never a perfect integration architecture. The goal was a morning view the team can trust. The export layer, built carefully with a proper reliability layer underneath it, gets most hotel management companies 90% of the way there — without a vendor partnership, without an IT project, and without waiting for the PMS to expose a better API.
Not sure what your export layer looks like today?
The source report inventory maps every scheduled report your morning workflow depends on — what it is, how it arrives, what happens when it doesn't — which is the starting point for building the reliability layer on top of it.