
Akka - Enterprise Agentic AI

Akka is one of the companies that help make our By the Bay conferences run, year after year, from the early Typesafe days through Lightbend and all the way up to this season. Their presence shows up on stage, on our program committee, and on the sponsor wall, threading a single story through many editions of By the Bay.


Agentic systems are difficult to run at scale. Unlike simple chains of stateless API calls, they are stateful, long-lived, concurrent processes that maintain context, and they need coordination, scheduling, persistence, and fault tolerance, much like any other large-scale distributed system. The distributed-systems expertise Akka has built over the last 15 years has given it a front-row seat to the development and use of agentic AI.
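
To make "stateful, long-lived, concurrent" concrete, here is a minimal sketch of an agent modelled as an Akka Typed actor that carries its conversational context between messages. This is our illustration, not Akka's agent SDK: the names are invented, and a real agent would call an LLM and tools where the comment indicates.

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

// Illustrative only: an "agent" as a long-lived actor that owns its own context.
object SessionAgent {
  sealed trait Command
  final case class UserMessage(text: String) extends Command

  def apply(history: List[String] = Nil): Behavior[Command] =
    Behaviors.receiveMessage { case UserMessage(text) =>
      val updated = text :: history   // context survives from one call to the next
      // a real agent would hand `updated` to an LLM / tools here
      println(s"agent now holds ${updated.size} turns of context")
      apply(updated)                  // become the next state
    }
}

object Demo extends App {
  val system = ActorSystem(SessionAgent(), "agents")
  system ! SessionAgent.UserMessage("book me a flight to Oakland")
  system ! SessionAgent.UserMessage("actually, make it Tuesday")
}
```

Persisting that state, for example with event sourcing, is what turns the sketch into something that survives restarts and redeployments.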


> If you don’t know what Agentic AI is, check out this explainer from the Akka team.


We are delighted to interview Tyler Jewell and discuss Enterprise Agentic AI.



Tyler Jewell is CEO & President of Akka and a four-time DevEx CEO with three prior exits.

With 30 years in development platforms, he’s led product teams at BEA, Oracle, Red Hat, and Quest. A lifelong DevOps advocate and investor (InfoQ, Sourcegraph, Cloudant, TheLoops.ai, SauceLabs, and more), he also curates the Developer-Led Landscape, a public database of 1,700 DevOps companies.


Outside work, he’s a private pilot and volunteers with Angel Flights in Aurora, Oregon.


  1. Why “agentic AI is basically distributed systems”?


Alexy: A lot of people still think of agentic AI as “LLM + tools + some Python glue,” but you take a different view: it’s really a distributed systems problem. Why?


Tyler: Because once you move past toy or desktop agents, you’re suddenly coordinating lots of agents over a network. Those agents share a distributed state — and they need to keep working even when the network, hardware, or the LLM itself misbehaves. Now you have a distributed system at the heart, backed by a stochastic LLM. That’s exactly the class of problems Akka was built for 15 years ago.
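
A hedged sketch of the "lots of agents sharing distributed state" part, using plain Akka Cluster Sharding rather than the newer Agentic Platform APIs: one entity per conversation, placed and rebalanced by the cluster, addressed by a stable id from any node. It assumes akka-cluster-sharding-typed on the classpath and a configured cluster; all names are ours.

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityTypeKey}

// Sketch only: needs `akka.actor.provider = cluster` and seed nodes in application.conf.
object ShardedAgents {
  final case class Turn(text: String)

  // one lightweight, stateful agent per conversation
  def agent(conversationId: String): Behavior[Turn] =
    Behaviors.receiveMessage { turn =>
      println(s"[$conversationId] handling: ${turn.text}")
      Behaviors.same
    }

  val TypeKey: EntityTypeKey[Turn] = EntityTypeKey[Turn]("ConversationAgent")

  def main(args: Array[String]): Unit = {
    val system   = ActorSystem(Behaviors.empty[Nothing], "agent-cluster")
    val sharding = ClusterSharding(system)

    // the cluster decides where each agent lives and moves it when nodes fail
    sharding.init(Entity(TypeKey)(entityCtx => agent(entityCtx.entityId)))

    // location-transparent handle: the same line works from any node
    sharding.entityRefFor(TypeKey, "user-42") ! Turn("make it Tuesday")
  }
}
```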



  2. What’s the real state of agentic AI in enterprises today?


Alexy: Headlines make it sound like everyone is doing agents. Inside big companies, where are most teams actually?


Tyler: Most are still learning and prototyping on desktops. They use local agents, Python frameworks, and MLOps-led efforts because those teams already own the models. That’s fine, but it’s not the same as a 24/7, compliant, low-latency agentic system running in production.



  3. If people prototype in Python, why are production systems moving to Akka and the JVM? What makes an agentic platform enterprise-grade?


Alexy: You’ve won accounts full of Python developers. What was the case you made?


Tyler: We sell outcomes, not Java vs. Python:


- Get to production faster.

- Stay in production safely.

- Scale more cost-effectively.


Our SDK is simple and tightly constrained, so AI can generate complete systems without hallucinations. And one Akka core often replaces five to seven Python cores in LangChain-style stacks.



  4. What are real customers actually building with agents / AI-on-Akka — and where are the roadblocks?


Alexy: You cited Swiggy, Tubi, a major UK payments processor, and even Wi-Fi mesh devices. What are they doing, and where do things get stuck?


Tyler: They’re building personalization, traffic routing, and real-time recommendations under strict latency and availability constraints. That’s where Python prototypes break, and the actor model helps. We have dozens of agentic systems in production: Swiggy runs sub-100 ms personalization and routing on a large Akka cluster; Tubi runs real-time personalization across more than half a dozen models.
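
The latency constraint Tyler describes maps naturally onto Akka's ask pattern: give each request a hard time budget and an explicit fallback, so a slow model call degrades the answer rather than the SLA. The sketch below uses invented message types and is not Swiggy's or Tubi's actual code.

```scala
import akka.actor.typed.{ActorRef, ActorSystem, Behavior, Scheduler}
import akka.actor.typed.scaladsl.AskPattern._
import akka.actor.typed.scaladsl.Behaviors
import akka.util.Timeout
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration._

object Recommender {
  final case class Request(userId: String, replyTo: ActorRef[Reply])
  final case class Reply(items: List[String])

  def apply(): Behavior[Request] = Behaviors.receiveMessage { req =>
    // a real agent would consult models and feature stores here
    req.replyTo ! Reply(List("show-1", "show-2"))
    Behaviors.same
  }
}

object LatencyBudget extends App {
  val system: ActorSystem[Recommender.Request] = ActorSystem(Recommender(), "recs")
  implicit val timeout: Timeout     = Timeout(100.millis) // hard latency budget
  implicit val scheduler: Scheduler = system.scheduler
  implicit val ec: ExecutionContext = system.executionContext

  val reply: Future[Recommender.Reply] =
    system.ask[Recommender.Reply](replyTo => Recommender.Request("user-42", replyTo))

  // degrade gracefully instead of blowing the budget
  val safe = reply.recover { case _: java.util.concurrent.TimeoutException =>
    Recommender.Reply(List("popular-fallback"))
  }
  safe.foreach { r => println(r.items); system.terminate() }
}
```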



  5. How do banks and fintechs shape the stack for agentic AI?


Alexy: You operate in 52 banks, with nearly 30 using Akka for payments. Banks are conservative — JVM-friendly and strict on dependencies. Does that push agentic AI toward the JVM long-term?


Tyler: Yes. Their priority is preventing data loss and corruption. The JVM ecosystem is curated and inspectable — they can decompose every JAR, approve it, and run it. That’s why a large share of enterprise agentic systems will land on Java/JVM.



  6. What’s next: enterprise orchestration, an agentic mesh, and a global control plane?


Alexy: You described “islands of intelligence” and an “agentic control plane.” What does that look like?


Tyler: Enterprises will have thousands — eventually millions — of agents across multiple frameworks. To unlock value, you need discovery and composition across teams and domains: an enterprise agentic orchestration layer — a global control plane that finds these islands, interacts with them, and enables cooperative goals.
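
Within a single Akka cluster, the discovery half of that control plane already has a small-scale analogue in the Receptionist: agents register themselves under a ServiceKey, and consumers subscribe to whatever is currently available. The sketch below uses invented protocol names; an enterprise-wide agentic mesh would need the same idea to work across frameworks, teams, and trust boundaries.

```scala
import akka.actor.typed.{ActorRef, ActorSystem, Behavior}
import akka.actor.typed.receptionist.{Receptionist, ServiceKey}
import akka.actor.typed.scaladsl.Behaviors

// Hypothetical protocol for a "payments" agent that other teams can discover.
object PaymentsAgent {
  final case class Ask(question: String, replyTo: ActorRef[String])
  val Key: ServiceKey[Ask] = ServiceKey[Ask]("payments-agent")

  def apply(): Behavior[Ask] = Behaviors.setup { ctx =>
    // advertise ourselves so other agents and services can find us
    ctx.system.receptionist ! Receptionist.Register(Key, ctx.self)
    Behaviors.receiveMessage { msg =>
      msg.replyTo ! s"payments agent answering: ${msg.question}"
      Behaviors.same
    }
  }
}

// A consumer that tracks whichever payments agents are registered right now.
object Discovery {
  def apply(): Behavior[Receptionist.Listing] = Behaviors.setup { ctx =>
    ctx.system.receptionist ! Receptionist.Subscribe(PaymentsAgent.Key, ctx.self)
    Behaviors.receiveMessage { listing =>
      val agents = listing.serviceInstances(PaymentsAgent.Key)
      ctx.log.info(s"found ${agents.size} payments agent(s)")
      Behaviors.same
    }
  }
}

object Mesh extends App {
  val root = Behaviors.setup[Nothing] { ctx =>
    ctx.spawn(PaymentsAgent(), "payments")
    ctx.spawn(Discovery(), "discovery")
    Behaviors.empty
  }
  ActorSystem[Nothing](root, "mesh")
}
```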



  7. What should practitioners do right now?


Alexy: If I’m coming to AI by the Bay to learn reliable AI, what should I do in the next 3–6 months?


Tyler: Experiment across frameworks. Learn the two reliability philosophies. Build something with a unified SDK. Plan for agent discovery and reuse. Track costs early. If you want a system that “never fails,” make the application responsible for its own SLA so it keeps adapting and can hold responsiveness indefinitely.
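
Tyler's last point, making the application responsible for its own SLA, is essentially self-supervision. In plain Akka Typed that can look like wrapping the fragile part (the model call) in a supervisor that restarts it with exponential backoff, so the agent keeps serving through failures. A minimal, hedged sketch, with the failing call simulated:

```scala
import akka.actor.typed.{ActorSystem, Behavior, SupervisorStrategy}
import akka.actor.typed.scaladsl.Behaviors
import scala.concurrent.duration._

object ResilientAgent {
  final case class Prompt(text: String)

  // The fragile part: a stand-in for an LLM or tool call that sometimes fails.
  private def fragile(): Behavior[Prompt] = Behaviors.receiveMessage { prompt =>
    if (scala.util.Random.nextInt(3) == 0)
      throw new RuntimeException("model call failed") // simulated misbehaviour
    println(s"answered: ${prompt.text}")
    Behaviors.same
  }

  // The agent owns its recovery policy: restart with exponential backoff
  // instead of pushing every failure up to callers or an external orchestrator.
  def apply(): Behavior[Prompt] =
    Behaviors
      .supervise(fragile())
      .onFailure[RuntimeException](
        SupervisorStrategy.restartWithBackoff(
          minBackoff = 200.millis, maxBackoff = 5.seconds, randomFactor = 0.2))
}

object KeepServing extends App {
  val system = ActorSystem(ResilientAgent(), "resilient")
  (1 to 5).foreach(i => system ! ResilientAgent.Prompt(s"question $i"))
}
```

The point is that recovery is a property of the component itself, not of an outside babysitter.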


Where to next?


After working with major design partners and operating systems at a scale of a billion tokens per second, the Akka team introduced the Akka Agentic Platform. We at By the Bay dug into it hands-on and invite you to try it too!



What is Agentic AI? - an explainer by the Akka team.

Agentic AI: Why Experience Matters More Than Hype - Tyler Jewell on the challenges LangChain is facing and how Akka solves them.



