The Actual Stack I Am Using to Build Hotel Report Automation
I want to document the actual stack behind this business because I think it is more interesting than the normal "AI wrote my landing page" story. Hotel Report Automation is the company. But the way the company is being built is part of the point.
The short version: I am using AI agents, a handful of practical tools, cheap infrastructure, and a lot of human review to build the kind of operating system I would want to create for a client.
Not magic. Not fully autonomous. Not a prompt thread that looks good on Twitter and breaks when real work shows up.
More like this: human judgment defines the workflow, agents do structured work, specialized tools handle specific jobs, artifacts make the work reviewable, and QA gates catch the risky parts before they hurt anything.
Recent receipts
This is the kind of work the stack has produced so far.
The stack, in plain English
The important part is not any one tool. The important part is what each tool is allowed to do.
The most important tool is the shared context file
This is less sexy than the AI model names, but it matters more.
Codex and Claude Code can both do useful work. They can also both create chaos if every session starts from zero. So the project has a shared context file that acts like company memory. It lists what is built, what is not done, which decisions should not be relitigated, what messaging rules matter, and what each agent did in the last session.
That one file changes the whole experience. The agents stop feeling like very fast interns with amnesia and start feeling more like contributors inside the same operating system.
The lesson is client-relevant too. Automation gets much safer when the process has memory: what the inputs are, what the output should be, what counts as a failure, what needs human review, and what should never be touched.
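To make the "process with memory" idea concrete, here is a minimal sketch of how a session could load a shared context file split into sections. The filename, section names, and example content are all illustrative assumptions, not the author's actual file; the point is that every agent session starts from the same parsed company memory.

```python
# Minimal sketch: parse a markdown-style context file into {section: body}
# pairs so each agent session starts with the same company memory.
# Section names and contents below are invented for illustration.

def load_context(text: str) -> dict:
    """Split a context file on '## Section' headings into a dict."""
    sections = {}
    current = "preamble"
    lines = []
    for line in text.splitlines():
        if line.startswith("## "):
            sections[current] = "\n".join(lines).strip()
            current = line[3:].strip()
            lines = []
        else:
            lines.append(line)
    sections[current] = "\n".join(lines).strip()
    return sections

example = """## Built
Landing page, first-wave account list.

## Do not relitigate
Pricing page stays off until launch.

## Last session
Codex: cleaned verifier upload. Claude Code: drafted account pages.
"""

ctx = load_context(example)
print(sorted(k for k in ctx if k != "preamble"))
```

A session prompt can then be assembled from exactly the sections an agent needs, instead of pasting the whole history every time.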
How I am using Deepline without burning credits blindly
The cleanest example so far was a contact-gap problem. We had 8 first-wave accounts that looked useful for outreach but did not have enough usable contacts.
The sloppy version would be: enrich everything, trust the output, dump rows into Smartlead, and hope nothing weird happens.
The better version was narrower:
- Identify the exact blocked accounts.
- Run a small lookup first instead of a giant enrichment pass.
- Use Deepline for the heavy lifting.
- Use public web checks where the result looked risky.
- Separate verifier additions from hold/reject rows.
- Create a combined verifier upload, but do not import into Smartlead until verification is done.
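The steps above can be sketched as a small gated pipeline. Every function name here is a hypothetical stub, the real Deepline and Smartlead calls are not shown, and the row shapes are invented; what the sketch shows is the order of the gates, including the final one where the upload is built as an artifact but never imported.

```python
import csv
import io

def small_lookup(acct):
    # Step 2: run a small cheap lookup first - return what is already findable.
    return acct.get("known_rows", [])

def deepline_enrich(acct):
    # Step 3: stand-in for the paid Deepline enrichment call.
    return [{"account": acct["name"], "email": "info@" + acct["domain"], "risky": True}]

def web_check(row):
    # Step 4: stand-in for a public web check on a risky-looking result.
    return row.get("confirmed", False)

def run_contact_gap(accounts, min_contacts=2):
    # Step 1: identify the exact blocked accounts.
    blocked = [a for a in accounts if len(a.get("known_rows", [])) < min_contacts]
    recovered, hold_reject = [], []
    for acct in blocked:
        rows = small_lookup(acct) or deepline_enrich(acct)
        for row in rows:
            if row.get("risky") and not web_check(row):
                hold_reject.append(row)   # step 5: kept out of the upload
            else:
                recovered.append(row)
    # Step 6: build the combined verifier upload as an artifact only.
    # No Smartlead import happens here; that waits until verification is done.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["account", "email"], extrasaction="ignore")
    writer.writeheader()
    writer.writerows(recovered)
    return recovered, hold_reject, buf.getvalue()

demo = [
    {"name": "A", "domain": "a.com", "known_rows": [{"account": "A", "email": "x@a.com"}]},
    {"name": "B", "domain": "b.com", "known_rows": []},
]
recovered, hold_reject, upload = run_contact_gap(demo)
print(len(recovered), len(hold_reject))  # 1 1
```

The design choice worth copying is that the expensive call only fires after the cheap one fails, and the output is a reviewable file rather than a live import.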
That run produced 13 recovered verification rows, 2 excluded hold/reject rows, and a 65-row combined verifier upload. The Deepline cost was about $1.55.
That is the kind of AI/tool spend I like: specific blocker, small run, useful artifact, review gate.
Cost discipline is part of the operating system
I am trying to be careful here because AI-native work can get expensive fast if you treat every tool like a magic button.
- Use cheap infrastructure until it is not enough. Static sites, simple HTML/JS, GitHub Pages, and Cloudflare are plenty for the first public surface.
- Run pilots before scaling tool spend. Use paid enrichment against a defined gap, inspect the result, then decide whether it deserves more volume.
- Start from existing systems. For client work, exports, spreadsheets, folders, and scheduled reports are often enough to create the first useful automation.
- Spend on leverage, not novelty. The question is not "can this be automated?" It is "will automating this produce a reliable output people actually use?"
What I do not let AI decide
This is the part that gets lost in a lot of AI content. The agents are useful, but they are not accountable. I am.
So the human review layer stays in the system for the things that can create real damage: public claims, prospect-specific copy, email drafts, recovered contact rows, anything involving client data, anything that affects deliverability, and anything that would make the business sound more mature than it is.
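A review gate like this can be almost embarrassingly simple. The sketch below is an assumption about how such routing might look, not the author's implementation: the category names mirror the list above, and anything in a "can cause real damage" category is routed to a human queue instead of shipping automatically.

```python
# Hypothetical routing sketch: high-risk artifact categories always go to a
# human queue. Category names follow the list in the text; the routing logic
# itself is invented for illustration.

HUMAN_REVIEW = {
    "public_claim",
    "prospect_copy",
    "email_draft",
    "recovered_contact",
    "client_data",
    "deliverability",
}

def route(artifact: dict) -> str:
    """Return where an agent-produced artifact goes next."""
    return "human_review" if artifact["category"] in HUMAN_REVIEW else "auto_ok"

print(route({"category": "email_draft"}))    # human_review
print(route({"category": "internal_note"}))  # auto_ok
```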
There have already been useful catches. Early email drafts leaked internal personalization instructions. Some owner/president drafts led too hard with AI when the better wedge was manual reporting pain. Some account pages were too generic before public-property context was added.
The point is not that AI made mistakes. Of course it did. The point is that the operating system caught them before they went live.
The goal is not to remove human judgment. The goal is to stop wasting human judgment on copy-paste work so it can be used where it actually matters.
Why this matters for hotel reporting
The way I am building this business is not separate from the offer. It is the same pattern applied to my own company.
The company has messy inputs: domains, sites, accounts, contacts, email drafts, tools, agent sessions, screenshots, and launch constraints. The work is to organize the mess, create shared context, automate repeated steps, add QA gates, and produce an operating view I can act on.
A hotel reporting workflow has messy inputs too: PMS exports, rate shop files, forecast sheets, pickup reports, emails, folders, and daily routines. The work is the same shape. Organize the mess, automate the recurring assembly, add checks, and create a morning view the team can actually use.
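As a rough illustration of that shape, here is a toy morning-view assembly. The file contents, column names, and flag format are assumptions, not a real PMS schema: several exports go in, one summary comes out, and a missing value triggers a human-check flag instead of silently shipping.

```python
import csv
import io

# Toy inputs standing in for real exports; columns and values are invented.
pickup_csv = "date,rooms_sold\n2025-06-01,42\n2025-06-02,55\n"
rates_csv = "date,adr\n2025-06-01,189.00\n2025-06-02,\n"   # ADR missing on day 2

def rows(text):
    return list(csv.DictReader(io.StringIO(text)))

def morning_view(pickup_text, rates_text):
    """Join pickup and rate exports into one view, flagging gaps for review."""
    rates = {r["date"]: r["adr"] for r in rows(rates_text)}
    view, flags = [], []
    for r in rows(pickup_text):
        adr = rates.get(r["date"], "")
        if not adr:
            flags.append(r["date"] + ": ADR missing - needs human check")  # QA gate
        view.append({"date": r["date"], "rooms_sold": r["rooms_sold"], "adr": adr or "?"})
    return view, flags

view, flags = morning_view(pickup_csv, rates_csv)
print(flags)
```

The structure is the same as the business-building loop above: organize inputs, assemble automatically, and let the checks decide what a human still has to look at.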
That is why I care about documenting the actual stack. It is not just behind-the-scenes content. It is proof of method.
Want to see where your reporting workflow is still manual?
The report stack mapper walks through your source reports, manual steps, timing risk, and output format so you can see what is worth automating first.