What Are Async Agents?
The Present: Synchronous Agents
In our current era, AI assistants operate in a synchronous fashion. They function like a real-time conversation—waiting for our input before taking action. We must stay actively engaged, guiding the AI tools and making decisions. Humans still direct 99% of the work, choosing both which AI tools to use and how to use them.
The agents with the most adoption are those used mostly with human oversight (i.e. copilots):
- Coding agents such as Cursor, GitHub Copilot and Lovable
- Knowledge agents such as Perplexity and ChatGPT
Meanwhile, agents that are fully autonomous (autopilots) haven’t been as successful yet:
- Take Cognition AI’s Devin, for example. Although Devin has a far more sophisticated architecture and infrastructure, its adoption has lagged behind that of AI-enabled editors.
Kevin Scott (CTO of Microsoft) on the future of agents
Agents will definitely be less transactional, less session-oriented going forward. I hope we get more asynchronous things happening over the next 12 months.
Here Kevin is talking about what agents are like today. They wait for your reply and need feedback at every step.
Right now it's very interactive — you go to your agent and you send a prompt in, and it goes and does something immediately, and gives you the response back. And it's like “yep I've done it!”. I think there's going to be more over the next year dispatching your agent to go do something—it goes and works while you are not paying attention to it.
This is already happening! Take Google’s Gemini, OpenAI’s Deep Research, or Manus’s general agent, for example. We’re starting to see agents where the user enters a task, walks away, and can expect a reasonable result after five minutes or an hour.
The Future: Async (or autonomous) Agents
The future state of AI points toward fully async agents. Like background processes running on a computer, these autonomous agents will work independently while we focus elsewhere. These agents will handle tasks autonomously, only requesting human input for critical decisions or specialized expertise. The key challenge is developing AI systems that can make reliable, consistent decisions—including the ability to recognize their own limits and know when to seek human guidance.
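This pattern can be made concrete with a small sketch. The example below is illustrative, not a real product API: a hypothetical agent runs as a background task, completes the steps it is confident about on its own, and surfaces to the human only when it hits a decision it cannot make alone. All names (`run_agent`, the "ambiguous step" heuristic) are assumptions for the sake of the sketch.

```python
import asyncio

# Hypothetical sketch: an agent works in the background and escalates
# only the steps it cannot decide on its own. The "step 1 is ambiguous"
# rule below is a stand-in for a real confidence check.

async def run_agent(task: str, escalations: asyncio.Queue) -> str:
    """Work through a task in the background, escalating hard steps."""
    steps = [f"{task}: step {i}" for i in range(3)]
    results = []
    for step in steps:
        confident = "step 1" not in step  # pretend step 1 is ambiguous
        if confident:
            results.append(f"done: {step}")
        else:
            # Pause only for this step; the human answers asynchronously.
            reply = asyncio.get_running_loop().create_future()
            await escalations.put((step, reply))
            results.append(f"human said: {await reply}")
    return "; ".join(results)

async def main() -> str:
    escalations: asyncio.Queue = asyncio.Queue()
    agent = asyncio.create_task(run_agent("draft report", escalations))
    # The human is "away" and only responds when an escalation arrives.
    step, reply = await escalations.get()
    reply.set_result(f"approve ({step})")
    return await agent

result = asyncio.run(main())
print(result)
```

The point of the sketch is the shape of the interaction: the human is not in the loop for every step, only for the one the agent flags.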
We should just never lose the plot on where we're going: The first generation of agents are good at 5-second tasks, and then the generation after that, we're good at five-minute tasks. What we're going towards are things that you can delegate increasingly complicated tasks—and increasingly beefy work—over time the same way that you would to a co-worker.
This shift to autonomous operation marks a fundamental change in human-AI interaction. These agents will break down complex goals into manageable tasks, make decisions within set boundaries, and keep detailed records of their work. Yet they'll need robust frameworks to ensure they stay within appropriate limits and recognize when human oversight is needed.
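The three capabilities above (decomposing goals, deciding within boundaries, and keeping records) can be sketched together. This is a minimal toy, assuming a single numeric boundary (a spending limit) and an in-memory log; real frameworks would need richer policies and durable audit trails.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: a goal is broken into sub-tasks, each action is
# checked against a boundary the agent may not cross alone, and every
# decision is recorded with a timestamp.

@dataclass
class AuditedAgent:
    budget_limit: float            # boundary for autonomous decisions
    log: list = field(default_factory=list)

    def record(self, event: str) -> None:
        self.log.append((datetime.now(timezone.utc).isoformat(), event))

    def execute(self, goal: str, subtasks: list[tuple[str, float]]) -> list[str]:
        self.record(f"goal accepted: {goal}")
        outcomes = []
        for name, cost in subtasks:
            if cost <= self.budget_limit:
                outcomes.append(f"completed {name}")
                self.record(f"completed {name} (cost {cost})")
            else:
                # Outside the boundary: stop and ask for human oversight.
                outcomes.append(f"escalated {name}")
                self.record(f"escalated {name}: cost {cost} exceeds limit")
        return outcomes

agent = AuditedAgent(budget_limit=100.0)
outcomes = agent.execute(
    "book travel",
    [("reserve hotel", 80.0), ("charter flight", 5000.0)],
)
```

Here the audit log doubles as the record a human supervisor would review later, which is what makes after-the-fact oversight of autonomous work possible.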
What’s next: the inversion of labor
That's what everybody's going to want and that's where the capabilities are headed. How do you think about how to build product around where the future is almost certainly going to be? What do you need to go augment these systems with to allow them to do more of this thing? That is what ultimately I think we want.
We're in the early stages of AI agent development and reliability. Today the workflow heavily favors human input: people do roughly 99% of the work, with AI tools handling only the remaining 1%. These AI tools are primarily assistive, helping with specific tasks under direct human supervision.
However, we're witnessing the beginning of a dramatic shift. In the coming years, this ratio will completely reverse—what we call an inversion of labor. AI agents will take on the bulk of the workload, handling roughly 99% of tasks independently, while only requiring human intervention for the remaining 1% of cases.
This transformation will fundamentally change how humans work. Instead of being primary task executors, people will transition into more strategic oversight roles. They'll spend less time on direct, hands-on work and more time managing multiple AI processes. Think of it like evolving from being a craftsperson to becoming a workshop supervisor—less focused on individual tasks and more on coordinating and quality control.
However, this new paradigm brings significant challenges. The key questions we need to address include:
- How do we establish clear protocols for AI-to-human handoffs?
- What are the best practices for determining when AI should escalate issues to human attention?
- How do we ensure smooth collaboration between AI agents and human supervisors?
- What systems need to be in place to maintain accountability and quality control?
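One hypothetical answer to the escalation question is a simple routing policy: decide, per action, whether the agent proceeds alone, a human reviews afterward, or the agent blocks until a human decides. The thresholds and tier names below are illustrative assumptions, not an established standard.

```python
# Hypothetical escalation policy: route each agent action based on the
# agent's self-reported confidence and whether the action is reversible.
# The 0.9 threshold and the tier names are illustrative assumptions.

def route(action: str, confidence: float, reversible: bool) -> str:
    if confidence >= 0.9 and reversible:
        return "auto-approve"      # agent proceeds on its own
    if confidence >= 0.9:
        return "async-review"      # human checks after the fact
    return "blocking-review"       # agent waits for a human decision

print(route("send newsletter", 0.95, reversible=True))
print(route("wire funds", 0.95, reversible=False))
print(route("delete database", 0.50, reversible=False))
```

Even a policy this crude makes the handoff contract explicit, which is the first step toward the accountability and quality-control systems the questions above ask for.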
We’re tackling all of these challenges and more at Abundant. If you’re interested in working with us, please reach out!
