How a Slack agent changed my thinking on AI.
Don't Give Your AI "Access." Give It a "Job."
I have a friend, the CEO of a major home services company, who fires questions at a colleague named Fred all day.
He asks Fred which blog posts performed best last quarter. He asks Fred to analyze the gap between his top call center rep and his worst one. He asks Fred to update every blog that uses an old template and redeploy it with the new one. Fred always replies dutifully in Slack, usually within seconds, and just does it.
I’m sure you’ve already guessed by now that Fred is not a person.
And as the leader of a Martech company, I find that fascinating.
My CEO friend didn’t set out to “implement a transformative AI solution” or commission a consultant. He just asked AI how to set up an OpenClaw agent, set one up, and started talking to it to learn what it could do. And after seeing what it could do, he asked himself: What would I need to give a new employee so they could actually be useful? And then he gave Fred those things: a Slack channel, an email address, a Google Drive folder, and carefully selected context about his business. He copies Fred on things he wants Fred to remember. He does not copy Fred on things he doesn’t.
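That onboarding is concrete enough to sketch in a few lines of code. This is a toy illustration of the idea, not OpenClaw's actual API; every name below (the class, the channel, the folder) is invented for the example:

```python
# A minimal sketch of "onboard the agent like an employee":
# give it places to live and a growing body of curated context.
# All names here are hypothetical, not any real agent framework's API.

from dataclasses import dataclass, field

@dataclass
class AgentOnboarding:
    """What Fred was given: channels, credentials, and curated context."""
    slack_channel: str
    email_address: str
    drive_folder: str
    context_notes: list[str] = field(default_factory=list)

    def remember(self, note: str) -> None:
        # "Copying Fred on something" == adding it to his context.
        self.context_notes.append(note)

fred = AgentOnboarding(
    slack_channel="#ask-fred",
    email_address="fred@example.com",
    drive_folder="Fred Onboarding",
    context_notes=["Quarterly blog analytics live in the shared Drive folder"],
)
fred.remember("Call center rep scorecards: see the QA spreadsheet")
```

The point of the sketch is what it omits: there is no workflow engine and no use-case matrix, just access and an append-only pile of context that the owner curates deliberately.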
It is, when you think about it, more or less exactly how you’d onboard anyone.
The Distinction Nobody Is Making
The dominant discourse around AI in organizations runs roughly as follows:
Identify use cases, evaluate tools, build workflows, measure ROI, and scale. It is the language of implementation and procurement, and, if we are being frank, it suggests a frame of mind that may be actively unhelpful.
Because the question “what tool should we deploy?” produces a very different quality of thinking than the question “what would I need to tell someone to get this done?”
The first question leads you toward software.
The second leads you toward judgment.
When you hire a person (even a junior one), you do not hand them a manual and walk away. Over time, you explain what matters. You tell them who the difficult clients are. You describe the standard you hold yourself to. You give them context, not just credentials. And then, crucially, you trust that they will apply that context to situations you haven’t anticipated. That is, in fact, most of what usefulness looks like.
The revelation in my friend’s approach (obvious in retrospect, invisible in prospect, the hallmark of every genuinely useful idea) is that the same logic applies here. Fred is not useful because of the underlying model. Fred is useful because my CEO friend took the time to think like a manager.
His AI implementation is successful, and I suspect it would be for almost any company, because he didn’t ask “how will my business adapt to AI?” but rather “how can AI adapt to my business?” There is inertia (at least for me) in thinking of AI as a tool. He broke that for me by reframing it: AI is a team member. Just use the tools you already have, and then ask it to do things. In his use case, and in ours, that simply means Slack.
What You Give An Employee
It seems to me that there is a precise and useful distinction to be made between what a tool needs and what an employee needs.
A tool needs: instructions, inputs, outputs.
An employee needs context, access, and a sense of what matters.
The reason most organizations struggle to extract value from AI is not that the technology is insufficient. It is that they are giving it instructions when they should be giving it context. They are treating it as a vending machine: insert prompt, receive output. The more generative relationship is the one you have with a capable junior who has been properly inducted into how you think.
I know this distinction sounds soft. But it isn’t.
The practical difference is enormous.
When we built our own version of this, we named it Anton, and Anton now lives in our Slack. And the first conversation I had was not about capabilities. It was about access. What channels should Anton be in? What folder should contain the materials Anton needs to learn from? What decisions does Anton need to understand to be helpful rather than merely responsive? See the difference?
These are not technology questions. They are organizational design questions. They are, in the end, management questions.
The problem is that these are also the kind of questions most organizations, including many sophisticated ones, have never asked.
The Harder Question Underneath
There is, I should note, a complication worth sitting with.
When you frame AI as an employee, you implicitly take on the responsibilities of an employer. You have to think about what this employee actually knows. About what they should not know. About the standard you hold them to, and what you do when that standard isn’t met. My self-described management style of “trust but verify” applies here with particular force. After all, activity is not progress. An employee who is exceptionally busy doing the wrong things is worse than no employee at all.
The risk of the employee frame, in other words, is that people will use it to abdicate rather than delegate. They will assume that because Fred or Anton responded, Fred or Anton was right. And in so doing, they will confuse fluency with accuracy, speed with insight, and output with value.
The check on this is the same thing that keeps any managerial relationship honest: a clear sense of what you are actually trying to accomplish, and the critical faculty to evaluate whether you’re getting there.
Which brings us back, somewhat inexorably, to the human.
The Irreducible Thing
Marshall McLuhan, in what may be the most prophetic chapter of Understanding Media, argued that as automation advances, the role that remains, the one that cannot be delegated, is education. Not in the narrow sense of classrooms and curricula, but in the broader sense of the transmission of judgment, of knowing what questions to ask, and of understanding why one outcome is better than another.
My CEO friend knows his business well enough to know when Fred’s insight is useful and when it is merely plausible. That knowing, that capacity to evaluate, is not something Fred and Anton can replicate, because it emerges from lived experience within the organization: from every client call that went badly, to the promotion that drove traffic but not revenue, to the thousand small encounters that constitute institutional wisdom.
The employee frame, done well, does not diminish that wisdom. In fact, it liberates it by clearing the way, so that the meaningful can breathe.
So perhaps the most empowering thing about treating AI as a team member is not what it allows the AI to do. It’s what it frees you to do instead.
And that, if you’re willing to take it seriously, is quite a lot.
See you in Slack, Anton.
Until next week,
Ashley Heron
Managing Director, Comma Eight