How AI Teammates Build Memory: Turning Work Into Reusable Knowledge

Team Asana
April 2nd, 2026
5 min read

Most AI products treat memory as a personal feature, remembering facts about one user or one conversation. But AI that collaborates across teams needs a fundamentally different kind of memory. AI systems become more useful when they can build on what they have learned before, yet in enterprise software, memory is not just a question of storing more context. The harder problem is making that memory useful across shared work while keeping it inspectable, governable, and permission-aware.

That is the challenge we set out to solve with AI Teammates.

AI Teammates operate in a different environment. They collaborate on shared tasks, projects, and documents. They receive feedback from multiple people. They work across systems. And they need to improve over time without becoming a black box full of hidden context that nobody can audit.

That creates a different design problem. An AI Teammate needs a way to learn from execution, retrieve relevant knowledge later, and explain how that knowledge influenced its actions. At the same time, it cannot treat all prior information as globally reusable. What it remembers and what it can use have to respect the same boundaries that govern the underlying work.

In practice, that means memory becomes part of our enterprise-grade collaboration architecture. It connects learning, retrieval, access control, and transparency into one system.

From one-off context to reusable knowledge

The simplest version of an AI assistant starts fresh every time. You give it a prompt, maybe attach a few files, and it tries to help inside that one interaction. That can work for isolated requests, but it breaks down quickly in long-running team workflows.

A human collaborator does not just respond to the current message. They remember the team’s preferred process, the examples that define good output, the documents that matter, the feedback they received last week, and the project context that shapes how work should be done. If AI Teammates are going to feel like real collaborators, they need a way to accumulate the same kind of working knowledge.

For us, that meant building a unified memory layer that spans the full lifecycle of knowledge:

  • How memories are created

  • How they are retrieved during future executions

  • How they are linked to the Work Graph, Asana’s structured data model that connects tasks, projects, people, and goals across an organization

  • How users inspect and govern what the Teammate has learned

That last point matters more than it first appears. A powerful memory system that nobody can inspect or correct does not create trust. It creates a new failure mode.

How AI Teammates create memory

One of the first design choices we made was to identify the right moments for a teammate to generate memories.

One class of memory is inferred during or after execution. As a Teammate works through a task, it encounters instructions, reads resources, takes actions, and receives feedback. Some of that information is transient; some is durable and worth reusing. For example, a user might give feedback like “always copy the data model reviewer on these tasks.” As another example, when a Teammate reads through a project, it could learn things about the purpose of that project (e.g., Project X contains resources about the organization’s marketing campaign best practices). Those are strong candidates for durable memory.

Another class of memory is explicit. Users can provide guidance directly rather than waiting for the system to infer it. This is especially important when the goal is not to capture a lesson from past work, but to teach the Teammate how to behave in a broader work context or how to use a particular resource.

That explicit path becomes especially powerful for resource-linked memory. A user might give a Teammate access to a document, and then explain what role that resource should play. Is it a process document that explains how the team communicates? A reference document containing domain knowledge? A template that should shape future work? The distinction matters because the same source material can be used very differently depending on the user’s intent.
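To make the distinction between resource roles concrete, here is a minimal sketch of how a resource-linked memory could be typed. All names here (`ResourceRole`, `ResourceMemory`, the example IDs) are hypothetical illustrations, not Asana's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class ResourceRole(Enum):
    PROCESS = "process"      # explains how the team works or communicates
    REFERENCE = "reference"  # domain knowledge to consult when relevant
    TEMPLATE = "template"    # structure that future output should follow

@dataclass
class ResourceMemory:
    resource_id: str    # e.g. a linked Google Doc
    role: ResourceRole  # how the user intends the resource to be used
    guidance: str       # the user's explanation of the resource's purpose

# The same document could be a TEMPLATE for one Teammate and a
# REFERENCE for another, depending on the user's stated intent.
mem = ResourceMemory(
    resource_id="doc-123",
    role=ResourceRole.TEMPLATE,
    guidance="Use this doc's section structure for all status reports.",
)
```

Capturing the role explicitly, rather than inferring it from the document's contents, is what lets the same source material drive very different behavior.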

Whenever a memory is created, we also create “Memory Associations”: references to the Work Graph objects that the memory is about or relevant to. For example, this could be the project that the memory describes, or a Google Doc resource that a user uploaded. As we’ll describe later in this article, these associations are key to how we retrieve memories and enforce proper access control.
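The creation step described above can be sketched as a small function that persists only durable learnings and attaches their Work Graph references. The function name, dict shape, and IDs are illustrative assumptions, not the production implementation.

```python
def create_memory(content, about, durable):
    """Sketch: record a learning together with its Memory Associations.

    `about` is a list of (object_type, object_id) pairs referencing the
    Work Graph objects the memory is about or relevant to. Transient
    context (durable=False) is simply not persisted.
    """
    if not durable:
        return None
    return {
        "content": content,
        "associations": [{"type": t, "id": i} for t, i in about],
    }

# Inferred from user feedback during execution:
mem = create_memory(
    "Always copy the data model reviewer on these tasks.",
    about=[("project", "proj-x")],
    durable=True,
)
```

The key design point is that a memory never exists without its associations; they are created together so retrieval and access control can rely on them later.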

Asana AI Teammate Memory Creation Lifecycle Diagram

How AI Teammates retrieve memories

A memory system is only as good as its retrieval model. Storing useful learnings is not enough if the right knowledge does not show up at the right time.

For AI Teammates, retrieval works as a two-lane system.

The first lane is execution-start retrieval. When a Teammate begins working on a task, it receives a set of relevant memories before it starts making decisions. These might include pinned instructions that should always apply, prior learnings that match the current work, or higher-level knowledge that appears relevant based on search or semantic similarity.

The second lane is contextual retrieval during execution. When a Teammate reads a specific task, project, or other object, it also receives memories associated with that object. This matters because some knowledge is not generally relevant in the abstract. It becomes relevant because the Teammate is now looking at a particular part of the Work Graph.

That combination gives the system a useful balance. The Teammate can start with a broad working set of likely relevant knowledge, then pick up more precise context as it reads deeper into the work.
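The two lanes above can be sketched as two retrieval functions: one run at execution start, one run whenever the Teammate reads a specific object. The function names, the `is_relevant` stand-in, and the dict-based memories are hypothetical simplifications of whatever search and similarity machinery actually backs lane one.

```python
memories = [
    {"id": "m1", "pinned": True,  "associations": []},
    {"id": "m2", "pinned": False, "associations": ["proj-x"]},
    {"id": "m3", "pinned": False, "associations": ["proj-y"]},
]

def retrieve_for_execution(memories, task, is_relevant):
    """Lane 1: the working set gathered before the Teammate starts.
    `is_relevant` stands in for search or semantic-similarity matching;
    pinned instructions are always included."""
    return [m for m in memories if m["pinned"] or is_relevant(m, task)]

def retrieve_for_object(memories, object_id):
    """Lane 2: memories associated with a specific Work Graph object,
    surfaced when the Teammate reads that object mid-execution."""
    return [m for m in memories if object_id in m["associations"]]

# Lane 1: pinned + relevant memories before any decisions are made
start_set = retrieve_for_execution(
    memories, task="weekly report",
    is_relevant=lambda m, t: m["id"] == "m2",
)
# Lane 2: extra context picked up while reading project proj-y
contextual = retrieve_for_object(memories, "proj-y")
```

Note that `m3` is invisible in lane one but appears in lane two: it only becomes relevant once the Teammate is looking at that part of the Work Graph.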

Memory Retrieval Lifecycle Diagram

Making memory operational

One of the most important design choices was to represent memory as something operational rather than mystical. In our data model, memory is a concrete object with content, metadata, and associations.

A Memory Association captures a Work Graph object that the memory is about or relevant to. This lets us explicitly contextualize a memory in the broader Work Graph, rather than leaving memories untethered from the context they capture.
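A minimal sketch of that data model, with memory as a concrete object carrying content, metadata, and associations. The class names and fields are assumptions for illustration, not Asana's actual schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class MemoryAssociation:
    # A reference to a Work Graph object the memory is about or relevant to
    object_type: str  # e.g. "project", "task", "resource"
    object_id: str

@dataclass
class Memory:
    content: str                                      # the learning itself
    metadata: dict = field(default_factory=dict)      # e.g. source, timestamps
    associations: list = field(default_factory=list)  # Work Graph references

mem = Memory(
    content="Project X holds marketing campaign best practices.",
    metadata={"source": "inferred"},
    associations=[MemoryAssociation("project", "proj-x")],
)
```

Because memory is just a plain object like this, it can be listed, inspected, edited, and deleted with ordinary tooling, which is what "operational rather than mystical" means in practice.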

Memory Data Model Diagram

Access control shapes the entire design

Enterprise memory systems get much harder the moment multiple people, projects, and permissions are involved.

A personal assistant AI can often treat memory as a simple extension of one user’s history. However, a Teammate that operates on shared work and can be triggered by multiple people cannot do that safely. Any memory the system creates is derived from real underlying work: tasks, comments, documents, projects, and prior executions. If memory were allowed to float free of those sources, it could become a channel for leaking information across permission boundaries.

That is why memory in AI Teammates has to inherit the same access-control logic as the work it came from.

Every memory retrieval is scoped to whoever triggered the execution. The Teammate can only access a memory if that person has visibility into the work that produced it: the tasks, comments, documents, or projects involved. The same rule applies to associations: if a memory references an Asana object like a task or project, it only surfaces when the triggering user can also see that object. Put differently, an AI Teammate never sees anything the person who triggered it could not already see.
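The scoping rule can be sketched as a filter applied at retrieval time, assuming a `can_see(user, object_id)` predicate that wraps the same access checks applied to the underlying work. All names and the set-based ACL here are hypothetical simplifications.

```python
def retrieve_scoped(memories, user, can_see):
    """Sketch of permission-aware retrieval: a memory is visible only if
    the triggering user can see every Work Graph object it is
    associated with."""
    return [
        m for m in memories
        if all(can_see(user, obj) for obj in m["associations"])
    ]

# Toy ACL: pairs of (user, object_id) the user is allowed to see
acl = {("alice", "proj-x"), ("alice", "task-1")}
can_see = lambda user, obj: (user, obj) in acl

memories = [
    {"id": "m1", "associations": ["proj-x"]},
    {"id": "m2", "associations": ["proj-secret"]},  # outside alice's scope
]
visible = retrieve_scoped(memories, "alice", can_see)
```

Filtering on associations rather than on memory content is what makes the guarantee enforceable: access to a memory reduces to access to the objects it references.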

Transparency closes the loop

The final piece is visibility. If a Teammate uses memory to guide an action, users need a way to understand that influence.

That starts with inspectable memory itself. Users should be able to view the memories a Teammate holds, and delete memories that may be inaccurate or stale. In addition, when a Teammate executes, the system can show which memories were passed into the execution context and which were created over the course of the execution.

Instead of asking “why did the AI do that?” in the abstract, a user can trace the behavior back to a specific learned instruction, resource memory, or associated context object. The fix becomes concrete: edit the memory, remove it, refresh the source, or add better guidance.

For example, if a Teammate formats a report differently than expected, a user can check its memories to see that it learned a formatting preference from another team member's feedback last week and update or remove that memory to change the behavior.
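That traceability could be recorded with a small per-execution log of which memories entered the context and which were created during the run. This `ExecutionTrace` class is a hypothetical sketch of the idea, not the production mechanism.

```python
class ExecutionTrace:
    """Sketch of per-execution memory transparency: track which memories
    were passed into the execution context and which were created over
    the course of the execution."""

    def __init__(self):
        self.memories_used = []
        self.memories_created = []

    def use(self, memory_id):
        # Called when a memory is injected into the execution context
        self.memories_used.append(memory_id)

    def create(self, memory_id):
        # Called when the Teammate persists a new learning mid-execution
        self.memories_created.append(memory_id)

    def report(self):
        # What a user would see when auditing this execution
        return {"used": self.memories_used, "created": self.memories_created}

trace = ExecutionTrace()
trace.use("m-formatting-pref")   # e.g. the learned formatting preference
trace.create("m-new-learning")
```

With a trace like this, "why did the AI do that?" becomes a lookup: find the memory in the `used` list, then edit or delete it.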

This is why memory and transparency need to be designed together. A teammate that learns over time is more powerful. A teammate that learns invisibly is harder to trust.

Why this matters

In our previous post, we explored how AI Teammates operate transparently in shared team spaces. Memory is the layer that makes that collaboration compound: the system learns from shared work, retrieves that knowledge when it matters, and does so within the same trust boundaries that govern the work itself. 

The deeper lesson is that team-scoped AI needs more than a bigger context window. It needs a model for turning collaboration into durable, reusable, governable knowledge.

In the next post, we'll look at how we evaluate and select the language models that power this reasoning.


This article was written by Anant Tibrewal, an engineer on the AI Teammates team, where he works to build and scale Asana's collaborative agentic AI product.

Related articles

Engineering

AI Agents Built for Teams: Shared Context and Transparency in Enterprise AI