Manager-in-the-loop: why Ezra suggests but never acts alone
Ezra never approves an application, rejects a conversion, or sends a message without your explicit approval. Here's why that's a feature, not a limitation.
There's a tempting pitch in AI tooling right now: "fully autonomous." Set it and forget it. The AI handles everything. You just check in occasionally to make sure things are going well.
We considered that path for Ezra. We chose the opposite. Not because full autonomy is impossible, but because it's irresponsible at this stage -- for us, for you, and for the partners whose livelihoods depend on the decisions being made correctly.
The trust problem
Affiliate programs involve real money, real partners, and real relationships. Every decision has consequences that are hard to reverse. And unlike content generation or data analysis, the mistakes are visible to real people who will remember them.
Auto-approve applications? You might onboard a fraudulent affiliate who damages your brand and generates chargebacks before you notice.
Auto-reject conversions? You might void a legitimate sale from your best partner, who then moves to a competitor's program.
Auto-send partner messages? You might say something that doesn't match your tone, your relationship with that partner, or the context of the conversation.
These aren't theoretical risks. They happen. And when they happen because of an AI acting autonomously, the cost is double: you lose money or a partner, and you lose trust in the tool itself. Once you stop trusting an AI tool, you stop using it. And once you stop using it, all the time it was supposed to save is gone.
The fundamental challenge is that affiliate management looks simple from the outside but is full of context that's hard to encode. A partner with low traffic might be worth approving because they have a niche audience that converts at 3x the average. A conversion might look suspicious by the numbers but make perfect sense if you know the partner just ran a sale. Context matters. Humans have it. AI is still learning it.
Suggest, approve, execute
Every action Ezra takes follows a three-step flow.
First, Ezra suggests. It reviews an application and recommends approval or decline, with reasoning. It flags a conversion and explains what looks suspicious. It drafts a reply to a partner message and shows you the text. Every suggestion comes with context -- the data Ezra analyzed, the factors it weighed, and why it landed where it did.
Second, you approve. In Slack, this is a row of buttons: Approve, Decline, or Edit. One tap. No switching to a dashboard, no navigating to the right page, no finding the right dropdown. Just buttons in the message thread.
Third, Ezra executes. It approves the application in your tracking platform. It flags the conversion for review. It sends the message. The action only happens after your explicit "yes."
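For the technically inclined, the flow reduces to a tiny state machine: a suggestion can only be executed after an explicit approval, and an edit replaces the draft. This is an illustrative Python sketch, not Ezra's actual implementation; every name in it is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class Status(Enum):
    SUGGESTED = "suggested"   # step 1: Ezra has made a recommendation
    APPROVED = "approved"     # step 2: the manager said yes (possibly with edits)
    DECLINED = "declined"     # step 2: the manager said no
    EXECUTED = "executed"     # step 3: the action ran

@dataclass
class Suggestion:
    action: str   # e.g. "approve_application" (hypothetical label)
    draft: str    # Ezra's proposed decision or message text
    status: Status = Status.SUGGESTED

def decide(s: Suggestion, approved: bool, edit: Optional[str] = None) -> None:
    """Step 2: record the manager's explicit call. An edit becomes the final version."""
    if s.status is not Status.SUGGESTED:
        raise ValueError("decision already recorded")
    if approved and edit is not None:
        s.draft = edit
    s.status = Status.APPROVED if approved else Status.DECLINED

def execute(s: Suggestion, run: Callable[[str], None]) -> None:
    """Step 3: the gate. Nothing runs without an explicit approval."""
    if s.status is not Status.APPROVED:
        raise ValueError("cannot execute without explicit approval")
    run(s.draft)
    s.status = Status.EXECUTED
```

The point of the sketch is the gate in `execute`: there is no code path from "suggested" to "executed" that skips the manager's decision.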
This isn't a three-click process masquerading as automation. Ezra does the analysis, the research, the drafting, and the formatting. You just make the final call. The time savings come from Ezra doing everything up to the decision point. The safety comes from you holding the decision itself.
Think of it like a junior analyst who prepares a brief for every decision. They do the legwork. They make a recommendation. But you sign off. The difference is that Ezra does this work in seconds instead of hours, and it never takes a sick day.
How approval cards work
In Slack, Ezra uses Block Kit -- Slack's interactive message format -- to present suggestions as structured cards. A typical application review card includes the applicant's name and URL, their traffic and audience summary, any red flags Ezra identified, a recommendation with confidence level, and two buttons: Approve and Decline.
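To make that concrete, here is roughly what building such a card looks like with Block Kit. The block structure ("section", "actions", "button") is standard Slack Block Kit; the field names in the `applicant` dict and the `action_id` values are illustrative, not Ezra's actual schema.

```python
def application_card(applicant: dict) -> list:
    """Build a Block Kit block list for an application review card.
    `applicant` keys are hypothetical; the block shapes are standard Slack Block Kit."""
    flags = applicant.get("red_flags") or ["none"]
    summary = (
        f"*{applicant['name']}*  <{applicant['url']}>\n"
        f"Traffic: {applicant['traffic']}\n"
        f"Red flags: {', '.join(flags)}\n"
        f"Recommendation: *{applicant['recommendation']}* "
        f"({applicant['confidence']}% confidence)"
    )
    return [
        {"type": "section",
         "text": {"type": "mrkdwn", "text": summary}},
        {"type": "actions",
         "elements": [
             {"type": "button", "style": "primary", "action_id": "approve_app",
              "text": {"type": "plain_text", "text": "Approve"},
              "value": applicant["id"]},
             {"type": "button", "style": "danger", "action_id": "decline_app",
              "text": {"type": "plain_text", "text": "Decline"},
              "value": applicant["id"]},
         ]},
    ]
```

When you tap a button, Slack sends an interaction payload carrying the `action_id` and `value`, which is how the approval gets routed back to the right application.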
You read the card. You tap a button. Ezra confirms the action in the same thread. The entire interaction takes about ten seconds and never leaves Slack.
Conversion review cards work the same way, with the addition of specific data points: the conversion amount, the referring URL, the partner's typical conversion pattern, and what specifically triggered the flag. Enough information to make a judgment call without opening the tracking platform.
If you want to modify the suggestion -- approve with different terms, decline with a custom message, edit a draft before sending -- you can. Ezra treats your edit as the final version and executes that instead.
The design principle is that the approval step should take less time than making the decision from scratch, but more than zero time. If you're tapping "approve" without reading the card, the system isn't working right. If reading the card takes longer than just doing the task manually, the system isn't working right either. The sweet spot is ten to fifteen seconds per decision: enough to verify the reasoning, not enough to feel like overhead.
The audit trail
Every suggestion Ezra makes is logged. Every approval you give is logged. Every action Ezra executes is logged with a timestamp, the suggestion it was based on, and who approved it.
This matters for compliance. It matters for team accountability. And it matters for debugging. If something goes wrong, you can trace exactly what happened: what Ezra suggested, what you approved, and what was executed. No ambiguity.
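A single entry in such a trail can be sketched as an immutable record that links the three steps together. The field names here are illustrative, not Ezra's actual schema; the useful property is that an override (manager disagreed with the suggestion) is detectable straight from the record.

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AuditRecord:
    """One immutable entry per executed action. Field names are hypothetical."""
    timestamp: str        # when the action was executed
    suggestion_id: str    # links back to what Ezra suggested
    suggested: str        # Ezra's recommendation
    decided: str          # what was actually approved (may differ from suggested)
    approved_by: str      # which team member tapped the button

    @property
    def override(self) -> bool:
        # An override is the feedback signal: the manager disagreed with Ezra.
        return self.suggested != self.decided

def log_line(record: AuditRecord) -> str:
    """Serialize a record as one JSON line, suitable for append-only storage."""
    return json.dumps(asdict(record), sort_keys=True)
```

Freezing the dataclass mirrors the append-only nature of an audit log: entries are written once and never mutated.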
For teams, the audit trail also shows who approved what. If your junior manager approved an application that turned out to be fraudulent, that's visible. If Ezra's suggestions in a particular category are consistently wrong, you can see the pattern and adjust.
The audit trail is also how Ezra improves. When you override a suggestion -- approve something Ezra recommended declining, or vice versa -- that signal feeds back into how Ezra evaluates similar situations in the future. Your corrections make the system smarter over time, and the audit trail is the proof that it's working.
Earned autonomy
Manager-in-the-loop is how Ezra starts. It's not necessarily where it stays.
After you've used Ezra for a while and seen that its application scoring is accurate, you might want to let it auto-approve applications above a certain confidence threshold.
After you've verified that its conversion flagging catches real fraud without marking legitimate sales as false positives, you might want to let it auto-flag without waiting for your review.
That path exists. We call it earned autonomy. Ezra can gain more independence over time, but only when you're ready, only for specific action types, and only with clear boundaries you set yourself.
The default is full human control. The progression is your choice. And you can always dial autonomy back down if something changes -- a new partner type you're less sure about, a new team member who needs to see every decision, a compliance requirement that demands human sign-off.
Earned autonomy is per-action, not all-or-nothing. You might let Ezra auto-approve applications above 90% confidence while still requiring manual review for conversion flags. You might let it send routine partner replies but require approval for anything involving commission changes. The granularity is yours to define.
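A per-action policy like that can be expressed as a small threshold map. This is an illustrative sketch with made-up names and numbers, not Ezra's shipped configuration; the key behavior is the default: any action type without an explicit threshold always requires manual review.

```python
from dataclasses import dataclass, field

@dataclass
class AutonomyPolicy:
    """Per-action autonomy thresholds. Names and numbers are illustrative."""
    # Maps an action type to the minimum confidence (0-100) required for
    # auto-execution. An action type absent from the map is never automated.
    auto_thresholds: dict = field(default_factory=dict)

    def requires_review(self, action: str, confidence: float) -> bool:
        threshold = self.auto_thresholds.get(action)
        if threshold is None:      # default: full human control
            return True
        return confidence < threshold

policy = AutonomyPolicy()                            # day one: review everything
policy.auto_thresholds["approve_application"] = 90   # earned later, for one action type
# Dialing autonomy back down is just removing the entry:
# del policy.auto_thresholds["approve_application"]
```

Because the map starts empty, adding autonomy is always an explicit opt-in, one action type at a time, and removing an entry restores full manual review.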
Why this matters
The AI tools that survive in operations -- not content generation, not brainstorming, but actual operational work with real consequences -- will be the ones that get the trust model right.
Too autonomous, and one mistake destroys confidence in the tool. Too passive, and the tool doesn't save enough time to justify itself.
Manager-in-the-loop is the balance. Ezra does the work. You make the calls. And over time, if the calls consistently match the suggestions, you can let go a little more.
We'd rather build a tool you trust completely for three things than a tool you half-trust for thirty things. Trust compounds. Autonomy follows.
That's the bet we're making with Ezra. Start cautious. Prove the value. Earn the right to do more. It's slower than the "fully autonomous" pitch, but it's the path that leads to tools people actually keep using six months later.
AI that works with you, not instead of you.
Try Ezra free