Architecting the future of AI agents: 5 flexible conversation frameworks you need

Most folks familiar with architecture can look at a building designed by Frank Lloyd Wright and recognize it immediately. His design philosophy became what we now call organic architecture—weaving the man-made with the natural world through the design and construction of harmonious buildings. In many ways, conversational AI is going through a similar architectural revolution. 

Traditional turn-by-turn conversation design is quickly becoming outdated. As language models become more advanced, we need a new approach—one that empowers designers and developers to build agents that handle complex, dynamic interactions with flexibility and context awareness. Organic conversation.  

I invite you to think of your agent as the house you’re designing with an imaginative architect at the center of the process—you. To build that house, you need five key frameworks that govern areas like context management, integration capabilities, interaction models, and data handling. 

This new evolution of agents is a future we can build towards now. But in order to reach it, conversation designers and developers must work together closely to build the parameters of how we work with LLMs, agents, and data to build flexible and delightful customer experiences. 

My goal in this article is to explain the five frameworks you’ll need to continue to see your AI agents evolve—the overarching rules every agent needs to be effective. By approaching the construction of agents as an architect might, with these frameworks to guide structural integrity, we can create agents that do much more, and as a result, save valuable money, effort, and time.

Let’s explore how powerful agents are designed based on these frameworks.

Global framework—the architect’s blueprints

Like a blueprint, an agent’s global framework contains the overarching rules that an agent will work within. A global framework sets the architect’s vision—how the agent will maintain context across different states and sessions, handle errors, adhere to privacy requirements, and route users correctly. Further, the states we expect an agent to navigate often change, and the purpose of the agent at any given time isn’t fixed. A carpenter cuts the wooden beams for the roof and the walls differently—even in one home, there are numerous needs to address. In the same way, you must construct your AI agent to navigate through multiple states in a single session. While it won’t be limited to the following, your global framework will contain areas like:

  • User routing logic: How are you navigating the user through different states based on their queries and context? 
  • State management: How do you manage the logic and flow within a specific state? What about transitions to other states?
  • Error handling and recovery: When the user provides unexpected input or the system encounters an error, what are your strategies for the agent to course-correct? 
  • Global context management: How are you maintaining user context across different states and sessions? How are you personalizing those interactions? 
  • Security and compliance: How are you ensuring all interactions adhere to privacy laws and data protection standards? 

An agent that has a thoughtfully designed global framework might manage an interaction this way: 

User: I’m planning a trip to Tokyo

Agent: Exciting! When would you like to go?

User: That's a good question, what's the weather like in August?

Agent: In August, Tokyo experiences warm and humid weather. The average temperature ranges from around 25°C to 31°C (77°F to 88°F). Would you like to book then, or would you prefer something cooler?

User: August sounds great!

Agent: Perfect. I've found a window seat on VF Air departing on the first of August. How does that sound?

Thanks to a smart designer, the routing logic guides the agent to recognize that the user is asking about booking a trip and places them in that conversation state. The user is thinking about going in August, but instead of saying that explicitly, they ask, “What’s the weather like in August?” The routing logic enables the agent to pull in this information, while the error logic steers the user back on track. Then context management kicks into gear, pulling details from prior trips to offer their preferred seat type (window) along with their preferred airline (VF Air).

Your global framework is your agent's orchestration layer. You set the parameters for your agent to understand when to engage in a specific conversation state, when to call for a specific back-end integration, and so on. Because of the complexity of planning and building agents like this, your global framework requires robust back-end integrations, data encryption for security, and the ability to understand and maintain context over time and across sessions. The result is setting a foundation that has the potential to be an architectural marvel. 
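As a rough sketch, an orchestration layer like this can be expressed in a few lines of Python. Every name here is illustrative; it doesn't come from any particular platform, and the intent classifier would typically be an LLM call in practice.

```python
# A minimal sketch of a global framework's orchestration layer: routing,
# state management, shared context, and error recovery. All names are
# invented for illustration.

class GlobalFramework:
    def __init__(self, states, classify_intent):
        self.states = states                    # state name -> handler(context, user_input)
        self.classify_intent = classify_intent  # routing logic, e.g. an LLM classifier
        self.context = {}                       # global context shared across states
        self.current = None                     # the active conversation state

    def handle(self, user_input):
        try:
            # User routing: decide which state this turn belongs to.
            intent = self.classify_intent(user_input, self.context)
            if intent in self.states:
                self.current = intent           # state transition
            # State management: delegate to the active state's handler.
            return self.states[self.current](self.context, user_input)
        except Exception:
            # Error handling and recovery: degrade gracefully, never crash.
            return "Sorry, something went wrong there. Could you rephrase that?"
```

Notice that the context dictionary survives across turns and state transitions, which is what lets the agent remember the window seat and preferred airline from earlier sessions.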

Local framework—the plan of each room    

The local framework of an agent provides relevant, context-aware responses and interactions within defined conversation states or skills. Without localized strategies, agents would struggle to adapt to the requirements and flow of different tasks like booking travel, providing tech support instructions, or processing transactions.

This framework must manage how the agent interacts in different states and what information the agent needs within each state. Only then can the agent work through complex tasks like troubleshooting or action requests like checking someone's balance. 

For instance, a bedroom might only require windows, power points, and cupboards, while a bathroom needs plumbing, waterproofing, and tiling to function. The local framework is the nitty-gritty of each space—what each room needs to come together. When designing an agent’s local framework, here are a few questions to explore: 

  • Execution: What information do we need to execute the action? How should we execute an action? What should we tell a user once the action has been executed?
  • Content retrieval and processing: What are your specific methods for fetching and processing information relevant to the current state? 
  • User interaction models: How do you define how the agent interacts with the user within each state? How do their question-asking strategies or response formulations change in different states? 
  • Contextual adaptation: How can you tailor the interaction based on the specific context or user data?

To build an agent that handles question and answer pairs, let’s explore an example of an agent supporting a user with the APN setting on their iPhone. 

Agent: Hi, welcome to VF Telcom. How can I help?

User: How do I update my APN settings?

[LLM added to address query: ‘How do I update the APN settings on my device?’]

Starting with the utterances above, the agent writes a question that’s optimized for retrieval. The LLM is then brought into the conversation to make the question specific enough to address the query. 

[LLM: Clarification Needed On: 'Ask the user to specify their device make and model to better address their query.']

Next, we instruct the LLM to look at both the information it has retrieved, along with the question that was presented to a knowledge base, and ensure they get clarifying information from the user. 

Agent: Certainly, to assist with that, could you please let me know the device you’re using? That way, I can give you the most accurate information.

User: an iPhone 15.

[LLM question generated (using entire conversation history): ‘How do I update the APN settings on an iPhone 15?’

LLM clarification check: Decision: '1', Clarification Needed On: 'Ask the user to specify the carrier they are using or if they need general instructions for updating APN settings on an iPhone 15 to better address their query.']

Agent: And could you confirm if you are already on the VF network?

User: Yes, I’m already on the network.

[LLM question generated:  ‘How can I update the APN settings on my iPhone 15 for VF network?’]

Here you can see that the LLM has determined that the user needs to specify their device and confirm their carrier before it can give the most helpful answer to their query. The user responds with “iPhone 15” and is asked for further information so that the agent can generate the final question for the knowledge base.

[LLM check: Decision ‘1’, the output has no inconsistency or inaccuracies.] 

Now that we have all the information, the response is created. But before it’s presented, the LLM checks that there are no inconsistencies or hallucinations, by doing a cross-check of the response and the information that was retrieved.

Agent: To update the APN settings on your iPhone 15 for VF Network, you can follow these steps:

1. Go to Settings on your iPhone.

2. Tap on Cellular or Mobile Data.

3. Select Cellular Data Network.

4. Enter the following APN settings:

  • APN:
  • Username: JohnSmith
  • Password: 1234

5. Scroll down and enter the same APN settings under the MMS section.

6. Press the Home key to save the APN settings and exit to the main screen.

7. Restart your device.

For more detailed instructions, you can visit the BYOD Mobile data & APN settings page.

Agent: Would you like more information on how to reset your network settings if updating the APN does not resolve your connectivity issues?

Finally, the answer is displayed, and another prompt is used to display a follow-up question to the user. These local frameworks give the LLM the guidelines to create questions that have been optimized for retrieval, self-check its own work, and ask follow-up questions. 

By building an intuitive local framework that handles question-answer pairs, you can go from managing hundreds of FAQs to managing the knowledge source that the overall conversation architecture draws from. Local components need to be flexible to adapt to user needs while being responsive to input—just remember that this approach requires detailed design and testing.
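The loop in the walkthrough above can be sketched in a few lines of Python. The `llm()`, `retrieve()`, and `ask_user()` callables are stand-ins for whatever model, knowledge base, and messaging channel you actually use; none of them are real APIs.

```python
# Sketch of the local Q&A loop: rewrite the query for retrieval, ask for
# clarification until the question is specific enough, then cross-check
# the drafted answer against the retrieved passages before replying.

def answer_query(user_query, history, llm, retrieve, ask_user, max_rounds=2):
    # 1. Rewrite the raw utterance into a retrieval-optimized question.
    question = llm("rewrite", user_query, history)
    # 2. Clarification loop: keep asking until the model judges the
    #    question specific enough, or we hit the round limit.
    for _ in range(max_rounds):
        clarification = llm("needs_clarification", question, history)
        if not clarification:
            break
        history.append(ask_user(clarification))
        question = llm("rewrite", user_query, history)
    # 3. Retrieve passages and draft an answer grounded in them.
    passages = retrieve(question)
    draft = llm("answer", question, passages)
    # 4. Cross-check the draft against the passages to catch
    #    inconsistencies or hallucinations before anything is shown.
    if llm("inconsistent", draft, passages):
        draft = llm("revise", draft, passages)
    return draft
```

The round limit matters: without it, an over-cautious clarification check can trap the user in an endless round of questions instead of attempting an answer.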

Integration framework—the utilities behind the walls 

Like the wiring and plumbing in a house, what sits behind the drywall can be the most important part. Similarly, the integrations we build between our agents and our systems can make or break the user experience. 

Your integration framework is about designing which external services your agent has access to, what each one is used for, and under which circumstances the agent should call it. By connecting your agent with integrations, it can complete tasks automatically and flexibly. These components can drastically improve the overall user experience your agent delivers if they’re implemented non-deterministically.

When defining how integrations should work in your overall conversation architecture, here are a few things to ponder: 

  • API management: How does the agent interact with APIs? How do you manage those integrations? 
  • External data access: How should your agent access and store data from external sources? How should your agent format and share that data with users? 
  • Service orchestration: What is your approach for coordinating complex interactions across multiple back-end services? 

Integrations aren't new in conversational AI. Where this approach differs is that you’re designing integration rules without a deterministic flow to execute them. It's about giving the global (or local) framework all the information it needs to determine which integration would help it act on or answer the user's question.

For example, when I ask a banking agent, “I want to check my balance,” I usually get pushed down a flow that collects information until it calls an API that gives me my total balance (and it’s never what I want it to be).

In a future where we design and construct agents with thoughtful frameworks to guide them, we let the agent decide when it needs to use specific integrations. Although this approach to integrations requires secure, efficient, and scalable mechanisms—often involving middleware or service buses—it means there is no singular happy path that the agent forces a user down. As a result, the opportunities an agent has to serve multiple needs and reliably help more users go up exponentially. 
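One way to picture non-deterministic integration use is a tool catalog the model chooses from. The banking tools and the `choose_tool()` model call below are invented for illustration; in production the chooser would be an LLM given the tool descriptions.

```python
# Sketch of non-deterministic integration selection: instead of a fixed
# flow, the agent holds a catalog of tools and lets the model decide
# which one (if any) serves the user's request this turn.

def check_balance(params):       # stand-in for a real balance API
    return {"balance": 1234.56}

def list_transactions(params):   # stand-in for another back-end service
    return {"transactions": []}

TOOLS = {
    "check_balance": {"fn": check_balance,
                      "description": "Return the user's current balance"},
    "list_transactions": {"fn": list_transactions,
                          "description": "Return the user's recent transactions"},
}

def handle_turn(user_msg, choose_tool):
    # choose_tool is the model deciding which integration fits; this is
    # the non-deterministic part, with no hard-coded happy path.
    descriptions = {name: tool["description"] for name, tool in TOOLS.items()}
    name, params = choose_tool(user_msg, descriptions)
    if name is None:
        return None                       # no integration needed this turn
    return TOOLS[name]["fn"](params)      # orchestration: call the service
```

Because the catalog, not the flow, defines what the agent can do, adding a new capability is a matter of registering another tool rather than redrawing the whole conversation tree.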

Analytics and data framework—the maintenance and improvements

After the home is completely constructed, it’s time for the final inspection. Did we comply with all construction codes? Where should we make more improvements or correct our mistakes? In the same way, a robust analytics and data framework allows you to understand your agent’s performance and manage data effectively. It will define how we pass information to LLMs and derive insights from our interactions. 

When collecting and managing data and analyzing the success of your agent, here are some questions you should answer: 

  • Data collection and storage: What are your methods for securely collecting and storing user data?
  • Feedback loops: How are you incorporating user feedback into your iterative improvements of the agents? How do you analyze interaction outcomes for refinement?
  • Implementation considerations: How are you adhering to data protection regulations? How are you improving your agent in such a way as to enable future scalability and flexibility? 

How does this framework inform a conversation? Well, during an ecommerce conversation, the AI agent collects data like the user's purchasing history, stated preferences ("I like eco-friendly products"), sentiment signals (frustrated tone), cart abandonment point, and final sale/no sale. Analytics frameworks would process this data, combining it with thousands of other interaction logs, which may reveal that eco-conscious buyers frequently abandon their cart due to a lack of green certifications on product pages. 

We can use this to update interaction components dynamically—changing the agent's persona, proactively suggesting sustainable options, linking green certifications, and using a more consultative interaction model for this user segment.
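A toy version of that loop, closing analytics back into interaction design, might look like the following. The log fields, the "eco_conscious" segment, and the config shape are all placeholders for whatever your analytics stack actually records.

```python
# Toy sketch of the analytics feedback loop: mine interaction logs for
# the dominant abandonment reason in a segment, then update that
# segment's interaction components accordingly.

from collections import Counter

def abandonment_reason(logs, segment):
    # logs: dicts like {"segment": ..., "outcome": ..., "abandon_point": ...}
    reasons = Counter(
        log["abandon_point"]
        for log in logs
        if log["segment"] == segment and log["outcome"] == "no_sale"
    )
    return reasons.most_common(1)[0][0] if reasons else None

def update_interaction_config(config, segment, reason):
    # e.g. eco-conscious buyers abandoning over missing certifications:
    # switch to a consultative persona and surface green certifications.
    if reason == "missing_certifications":
        config[segment] = {"persona": "consultative",
                           "show": ["green_certifications"]}
    return config
```

The point is less the specific rule than the shape of the pipeline: collect, aggregate, find the pattern, then feed the change back into the interaction framework.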

There are endlessly creative ways to use real-time analytics to update how an agent is responding to users. If you’re not securely collecting data gathered during interactions and analyzing it effectively, you’re not likely to be improving your agents based on what your users actually need. And the gorgeous home you designed, constructed, and inspected will eventually fall to ruin from lack of upkeep. 

Interaction framework—the interior design

When you enter a room, you can tell a lot about the person living there—are they a minimalist or a trinket-lover? Do they exclusively use warm-hued lamps in lieu of the dreaded big light? A single glance into a well-decorated room can tell a story without words. 

In the case of your digital agent, their interaction framework tells users a story about the vibe of your company and the experience they’re about to receive. Ideally, a great agent is able to capture the essence of your brand in communication style, tone, and techniques. And all that is informed by how you instruct the model to interact with users. 

So, when designing how your agent interacts with users, here are a few things to consider: 

  • Style and grammar: What are your rules on style, tone, and formatting? How do you format hyperlinks or lists? How does it keep the conversation flowing naturally? 
  • Language and tone: What are your guidelines for politeness, formality, and brand voice?
  • Conversational strategies: How is your agent programmed to ask for clarification? What do conversational turns look and sound like? How does it give proactive suggestions? 

I suggest creating and maintaining a style guide and tone-of-voice document to keep your agent’s interaction on brand. This framework requires deep linguistic modeling and an understanding of conversational dynamics, but it also incorporates user feedback and sentiment analysis as you learn more about your agent and your company’s unique needs.
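One lightweight way to make that style guide operational is to encode it as structured rules and compile them into the model's system prompt. The rules below are placeholders; yours would come from your own brand documents.

```python
# Sketch of a machine-readable style guide compiled into a system
# prompt. The specific tone and rules are invented examples.

STYLE_GUIDE = {
    "tone": "warm, concise, lightly informal",
    "formatting": [
        "use numbered lists for step-by-step instructions",
        "hyperlink page names instead of pasting raw URLs",
    ],
    "clarification": "ask at most one clarifying question per turn",
}

def system_prompt(style):
    # Turn the structured guide into instructions the model receives
    # at the start of every conversation.
    lines = [f"Tone: {style['tone']}.",
             f"Clarification strategy: {style['clarification']}."]
    lines += [f"Formatting rule: {rule}." for rule in style["formatting"]]
    return "\n".join(lines)
```

Keeping the guide as data rather than prose means the same source of truth can drive the prompt, your QA checks, and your documentation.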

(Stay tuned to Pathways for our take on style guides for your agents.) 

All the makings of an architectural marvel

Architect and planner David Pearson once proposed a list of rules for the design of organic architecture—a framework, you might say—and one of his cardinal rules reads, “Let the design follow the flows and be flexible and adaptable.” Like architecture, conversational AI is both a science and an art. A collection of rules, guidelines, and frameworks and the creative mission of many designers, developers, and thinkers. 

By designing each component of our conversation frameworks thoughtfully—global, local, integrations, interactions, and analytics—you’ll be able to build agents that complete tasks, problem-solve, and delight users. You’ll be able to use these five frameworks as the building blocks to serve a larger conversational AI architecture. 

Your strategic design choices can make your agents strong, functional, and flexible. Or fragile, ineffective, and rigid. But that’s all up to the architect. 

