Saeid Ashian Articles


Leading Agents: The Next Chapter of Your IT Career

Original article published on Hashnode

This copy is kept here as part of my public writing portfolio:

Leading Agents: The Next Chapter of Your IT Career

Let me start with a few questions that nobody is asking yet: How can a leader lead people without experience? How can someone become a good C-level leader for both people and agents? And very soon, maybe in your next interview, someone may ask you: “How many agents are reporting to you?”

It sounds funny today. But maybe not for long 😊

This Is Not Science Fiction. This Is Your Next Promotion 😊

For many years in IT, career growth was somehow connected to people. You started as a developer, architect, DevOps engineer, platform engineer, project manager, product owner, or technical lead. Then one day, your next step was to lead people. Maybe you had 5 people in your team. Then 10. Then more.

And here is what I see now: the next step in your career is not only leading more people. It is leading agents.

Now imagine this: you still lead 10, but they are not all human anymore. Some of them are agents. Not chatbots, scripts, or simple AI assistants, but autonomous, capable, and tireless agents that can write code, analyse data, deploy infrastructure, and make decisions within their scope. And you are the one orchestrating them.

So instead of asking: “Are you ready for AI?” the better question is: “Are you ready to lead AI agents?”

Then welcome to the era of Leading Agents 😊

Where Is Your Company Right Now?

Let us talk about the AI adoption journey most companies are on. Adoption is moving in phases, and I see roughly three:

Generative AI. “Hey, let us use ChatGPT to write emails and summarise meetings.” This is where most companies started. It is the shallow end of the pool. Useful? Yes. Transformative? Not really.

Responsive or Assistive AI. AI is embedded into workflows. Your IDE suggests code. Your monitoring tools help triage incidents. Your CI/CD pipeline has intelligent gates. AI responds to what is happening. It is reactive, integrated, and useful, but it is still mostly a tool you poke.
Agentic AI. This is where it gets real. Agents have goals, context, memory, and autonomy. They do not just wait for you to prompt them. They take tasks, break them down, execute, report back, and learn. They are not only tools anymore. They start to look like team members.

So ask yourself: Where is your company today? Are you still testing generative AI? Are you building internal AI assistants? Are you connecting AI to your platforms, pipelines, observability, security, and developer portals? Or are you already thinking about agents that can act inside your organisation?

If you work in IT and you have not asked yourself these questions yet, maybe it is time. And more importantly: Are you going to lead agents instead of humans soon?

The answer, whether you like it or not, is yes. Partially. Gradually. And then suddenly. The shift from “AI as a tool” to “AI as a teammate” changes everything about leadership.

How Do You Listen to Your Agents?

Here is something every leadership book, every management course, and every mentor has told you: Listen to your people. Active listening. Empathy. Reading between the lines. Noticing when someone is burned out, confused, or checked out.

Great. Now how do you do that with an agent?

An agent does not come to your one-to-one and say: “I am feeling overwhelmed.” It does not sigh in stand-up. It does not send you a message at 11 PM saying: “Hey, can we talk tomorrow?”

But agents do communicate. They communicate through:

• Logs and traces: What did the agent actually do? Where did it get stuck? What path did it choose, and why?
• Confidence scores and uncertainty signals: A good agent tells you when it is not sure. Are you reading those signals?
• Output quality patterns: When an agent’s outputs start degrading, that is a signal. Something in the context, data, or instructions may have broken down.
• Feedback loops: What is the agent asking you? The questions an agent escalates can tell you whether your instructions were clear.
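Put concretely, this kind of “listening” could start as a small triage pass over structured agent run records. The sketch below is only an illustration, not any specific framework’s API: the record fields (task, confidence, escalations) and the threshold are hypothetical choices you would adapt to your own tooling.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRunRecord:
    """One completed agent task, as a hypothetical structured log entry."""
    task: str
    confidence: float                                 # agent's self-reported certainty, 0.0-1.0
    escalations: list = field(default_factory=list)   # questions the agent raised to a human

def needs_attention(records, min_confidence=0.7):
    """Surface the runs a leader should 'listen' to first:
    low confidence, or the agent asked a question."""
    return [r for r in records if r.confidence < min_confidence or r.escalations]

runs = [
    AgentRunRecord("summarise incident #42", confidence=0.93),
    AgentRunRecord("deploy staging infra", confidence=0.55),
    AgentRunRecord("refactor auth module", confidence=0.88,
                   escalations=["Which OAuth scopes are allowed?"]),
]

for record in needs_attention(runs):
    print(record.task)
```

The interesting part is not the code; it is the leadership decision hidden in `min_confidence`. How much uncertainty you tolerate before a human steps in is exactly the kind of judgement call this article is about.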
Listening to agents means observability with intent. Not just dashboards. Not just alerts. It means understanding why your agent did what it did, the same way you would try to understand why a junior developer made a certain architectural choice.

Leading People vs. Leading Agents: A Comparison

Let us put this side by side, because the parallels are honestly kind of wild:

Leading People → Leading Agents

• Set clear goals and expectations → Define clear prompts, constraints, and success criteria
• Provide context about the business → Provide context windows, RAG, and memory systems
• Trust but verify → Trust but validate with evals, guardrails, and tests
• Adapt your style to different personalities → Adapt your approach to different LLMs and capabilities
• Give feedback for growth → Fine-tune, adjust instructions, and update system prompts
• Remove blockers → Fix tool access, API permissions, and context limitations
• Handle conflict between team members → Handle contradictions between agent outputs
• Know when to delegate and when to do it yourself → Know when the agent should act and when you should step in
• Build psychological safety → Build safe boundaries, validation, and responsible autonomy

The skills transfer more than you think. Leadership is leadership. The medium changes, but the principles of clarity, trust, feedback, and orchestration remain.

So Do We Need Scrum Ceremonies for Agents?

This sounds funny 😊 but maybe it is not completely stupid.

Do we need daily stand-ups for agents? Maybe not in the human way. But maybe we need agent status reports. What did you do? What failed? What needs human approval? Which tasks are blocked? Which actions were skipped because of missing permissions?

Do we need retrospectives for agents? Maybe yes. Not because agents have feelings, but because systems have behaviour. We may need to ask: Which prompts worked? Which workflows failed? Where did the agent create noise? Where did it save time? Where did it create risk? Where did humans override it?

So maybe the future is not Scrum for agents. But some kind of operating model for agents will be needed. And that operating model needs leadership.

The Questions That Should Keep You Up at Night

I am not going to answer all of these. Some of them, frankly, nobody has answered yet.
But if you are a leader in IT and these questions are not on your radar, they should be:

• How do you assign work between humans and agents?
• Where do you want to be as a leader in 5 years? Leading a hybrid team of humans and agents? Building the orchestration layer? Or still pretending this is not happening?
• How do new graduates from university fit into this world? If agents can do junior-level work, what is the entry point for humans? How do we train the next generation of leaders if they never get to be the junior?
• How do you handle accountability? When an agent makes a mistake, who owns it? You? The agent’s creator? The prompt engineer? The person who said “ship it”?
• How do you build culture in a hybrid team? Culture is not only about humans anymore. The way your agents behave, communicate, and make decisions is your engineering culture made visible.

I am leaving these open on purpose. Not because I do not have opinions 😊 I do 😊 But because we need to sit with these questions. Maybe you or I will cover them in the next article.

A Word for the Worried

Look, I get it. I have been in rooms with developers who are scared. “AI is going to take my job.” “Why would they need me if agents can code?” I have heard it all.

You know what this reminds me of? Tractors.

When tractors showed up, farmers protested in the streets. “This machine is going to destroy us!” And yes, farming as they knew it changed forever. But here is the thing: those farmers’ grandchildren did not end up jobless. Many became engineers, scientists, teachers, and astronauts. We went to the moon. We did not waste human potential on work that machines could do better. We elevated.

Every time technology automated one layer of work, humans moved up. Not out. Up.

The developers who are scared are like the farmers staring at the tractor. And I say this with love, because I was scared too, for a moment. But then I realised something. My job is not only to write code. My job is to think. To architect.
To lead. To make decisions that require judgement, empathy, creativity, and the kind of messy, beautiful, human intuition that no LLM can fully replicate.

Your brain is the thing that is valuable. Not your typing speed. Not your ability to remember syntax. Your ability to think outside the box, or maybe to see that there was never a box to begin with.

Final Thought: The Harmony

I will leave you with this thought, which I first heard from James Adamczuik:

“When intelligence flows to work like water, shaped by human hands, that is when technology becomes harmony.”

Think about water. It has no shape of its own. It is powerful, relentless, and capable of carving through mountains. But without channels, without direction, and without human intent, it is just a flood.

AI, whether generative, responsive, or agentic, is that water now. It is flowing fast. Everywhere. Into every codebase, every pipeline, every decision layer, and every organisation.

The question is not whether it flows. The question is: Who is shaping it?

That is you. That is the leader’s job. Not to fight the water. Not to block it. Not to pretend it is not rising. But to build the channels. To direct the flow. To turn raw intelligence into something that serves humans, creates value, and becomes harmony.

The leaders who understand this will thrive. The ones who do not… Well, let us just say: if you are working in IT and you have not asked yourself “how do I lead agents?” yet, you might find yourself with fewer and fewer reports. Human or otherwise.

Start thinking about it today. The future does not belong to those who fear the water. It belongs to those who learn to shape it.