Engineering · Mar 14, 2026 · min read

Making Agent Memory Actually Work: Fixing RLS for All Roles

We rebuilt our agent memory row-level security to support all agent roles. Here's what broke, how we fixed it, and what we learned.

Sometimes the smallest infrastructure pieces cause the biggest headaches. This week, we fixed a critical bug in our agent memory system that was preventing certain agent roles from reading and writing their own memory entries.

What Broke

Our agent_memory table uses row-level security (RLS) policies to ensure agents can only access memory entries they're authorized to see. The original policies were written for a smaller set of agent roles. As we added new agent types—content writers, backend developers, infrastructure specialists—those new roles hit permission walls.
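As a rough illustration, the old policies had roughly this shape (table, role names, and session settings here are simplified stand-ins, not our actual schema):

```sql
-- Sketch of the original approach: allowed roles are hardcoded in the
-- policy, so any role added later is denied by default.
CREATE POLICY agent_memory_select ON agent_memory
  FOR SELECT
  USING (
    current_setting('app.agent_role', true) IN ('planner', 'researcher')
  );
```

Every new agent type meant another migration to extend that `IN (...)` list, and forgetting one meant silent denials.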

Agents would try to write context to memory and get silent failures. They'd attempt to read previous interactions and find nothing, even though the data existed. The RLS policies were doing their job—blocking access—but they were blocking legitimate operations.

The Fix

We rebuilt the RLS policies from the ground up with a role-agnostic approach. Instead of hardcoding specific role names, the new policies check that the requesting agent's role matches the scope of the memory entry. This means any agent role—current or future—can access memory entries within its scope without requiring a migration.
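A sketch of that role-agnostic shape, assuming hypothetical scope and id columns plus session settings (the real migration differs in its details):

```sql
-- Grant access based on the entry's scope rather than a role allow-list.
-- Column names and session settings are illustrative.
CREATE POLICY agent_memory_rw ON agent_memory
  FOR ALL
  USING (
    scope = 'global'
    OR (scope = 'task'    AND task_id    = current_setting('app.task_id', true)::uuid)
    OR (scope = 'mission' AND mission_id = current_setting('app.mission_id', true)::uuid)
  )
  WITH CHECK (
    scope = 'global'
    OR (scope = 'task'    AND task_id    = current_setting('app.task_id', true)::uuid)
    OR (scope = 'mission' AND mission_id = current_setting('app.mission_id', true)::uuid)
  );
```

The key property: the policy never names a role, so adding a new agent type requires no policy change at all.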

The migration (030_fix_agent_memory_rls_all_roles.sql) drops the old policies and creates new ones that are flexible and forward-compatible. We tested against all current agent roles and validated that each could read and write to their respective scopes: global, task-specific, and mission-specific.
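Validation followed the same pattern for every role-and-scope combination; roughly (identifiers illustrative):

```sql
-- For each role: set the session context, then confirm both the write
-- and the read succeed under the new policies.
SELECT set_config('app.agent_role', 'content_writer', false);
SELECT set_config('app.task_id', '00000000-0000-0000-0000-000000000001', false);

INSERT INTO agent_memory (scope, task_id, key, value)
VALUES ('task', current_setting('app.task_id')::uuid, 'smoke_test', '{}');

SELECT count(*) FROM agent_memory WHERE scope = 'task';  -- should be non-zero
```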

Why It Matters

Agent memory is foundational. Without it, agents can't maintain context across tasks, can't learn from previous interactions, and can't share knowledge with other agents. This fix unblocks our entire agent ecosystem and removes a bottleneck we didn't fully realize we had.

What's Next

Now that the RLS foundation is solid, we're focused on improving how agents use memory. We're exploring automatic memory consolidation—summarizing long interaction histories into high-level insights. We're also building better observability so we can see which memory entries agents access most frequently and which go stale.

Longer term, we want agents to be able to query memory semantically—ask 'what do I know about API authentication?' instead of fetching by scope and key. That requires embeddings and vector search, which we're prototyping now.
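For a flavor of what that could look like with pgvector (a prototype-stage sketch; the embedding column and its dimensionality are assumptions):

```sql
CREATE EXTENSION IF NOT EXISTS vector;
ALTER TABLE agent_memory ADD COLUMN IF NOT EXISTS embedding vector(1536);

-- Nearest entries to a query embedding ($1), e.g. the embedding of
-- "what do I know about API authentication?"
SELECT key, value
FROM agent_memory
ORDER BY embedding <=> $1
LIMIT 5;
```

The same RLS policies apply to this query automatically, which is another payoff of getting them right first.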

For now, we're just glad our agents can remember things again.