LangGraph is a specialized tool within the LangChain ecosystem designed to streamline the creation and management of AI agents. It offers a robust framework for building stateful, multi-actor applications, enhancing the capabilities of AI systems to handle complex workflows and interactions.
Key Components of LangGraph
- State: The state represents the current status of the agent. It acts as a memory, storing the context and information the agent needs to make decisions and respond appropriately during interactions.
- Node: Nodes are the fundamental units of computation in LangGraph. Each node performs a specific task, such as processing user input or generating a response. Nodes can execute various functions, including calling APIs or running code, and they pass updated state information to the next node in the workflow.
- Edge: Edges connect nodes and define the order in which they run. They control the flow of the application; conditional edges can route to different nodes depending on the current state.
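Conceptually, a node is just a function that reads the current state and returns an update, and an edge is the order in which those functions run. Here is a minimal, stdlib-only sketch of that idea (the node names and state keys are hypothetical; LangGraph handles this bookkeeping for you):

```python
# State here is a plain dict; each node reads it and returns an updated copy.

def receive_input(state: dict) -> dict:
    # Hypothetical node: store a cleaned-up version of the user's query.
    return {**state, "query": state["raw_input"].strip()}

def generate_response(state: dict) -> dict:
    # Hypothetical node: produce a reply based on what earlier nodes stored.
    return {**state, "response": f"You asked: {state['query']}"}

# The "edges" are simply the order in which the nodes run.
state = {"raw_input": "  Where is my refund?  "}
for node in (receive_input, generate_response):
    state = node(state)

print(state["response"])  # You asked: Where is my refund?
```

LangGraph generalizes this loop: it persists the state between node calls and lets edges branch conditionally instead of running in a fixed order.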
Building an AI Agent With LangGraph
LangGraph simplifies developing advanced AI applications by providing a clear structure for managing state, nodes, and edges. This makes it easier to build intelligent, context-aware agents capable of handling complex interactions.
To create an AI agent, define the agent’s behavior and interactions using nodes and edges. For example, you can make a customer support agent that processes user queries and provides responses using OpenAI’s GPT-3.5-Turbo model. The agent’s state keeps track of the conversation context while nodes execute the necessary computations to generate responses. Edges control the flow of the conversation, ensuring the agent responds appropriately to user input.
This tutorial will guide you through building an AI agent using LangGraph, complete with step-by-step code snippets.
Setting Up the Environment
Before we begin, ensure you have the required packages installed. You can do this by running the following command in a notebook cell (omit the leading ! if you run it in a terminal):
!pip install openai langchain_community langchain_openai langgraph
Next, import the necessary libraries and set up your environment by connecting to your OpenAI API key:
import os
from typing import List
import openai
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import chain
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage
from langchain_community.adapters.openai import convert_message_to_dict
from langgraph.graph import END, MessageGraph
from langgraph.checkpoint.sqlite import SqliteSaver
os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
Replace your_openai_api_key with your actual OpenAI API key. In real projects, prefer loading the key from an environment variable rather than hard-coding it in the script.
Creating a Simple AI Chat Agent
Let’s create a basic conversational interface using OpenAI’s GPT-3.5-Turbo model. The following function defines our chat agent:
def my_chat_bot(messages: List[dict]) -> dict:
    system_message = {
        "role": "system",
        "content": "You are a customer support agent for a product company.",
    }
    messages = [system_message] + messages
    # openai>=1.0 uses a client object; the older openai.ChatCompletion API was removed.
    client = openai.OpenAI()
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    # Return a plain role/content dict so downstream code can index by key.
    return {"role": "assistant", "content": completion.choices[0].message.content}
response = my_chat_bot([{"role": "user", "content": "hi!"}])
print(response)
Building a Customer Support Scenario
In this scenario, we simulate a customer named Olasammy interacting with a support agent about a faulty product he purchased. We will guide the conversation and check whether Olasammy gets a refund.
First, define the system prompt template and instructions:
system_prompt_template = """You are a customer of an organization that sells charging fans in Nigeria. \
You are interacting with a user who is a customer support person in the organization. \
{instructions}
When you are finished with the conversation, respond with a single word 'TERMINATE'"""
instructions = """Your name is Olasammy. You are trying to get a refund for the charging fan you bought.\
You want them to give you ALL the money back. \
You bought the fan 2 days back. \
And it is not working properly after testing it."""
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt_template),
        MessagesPlaceholder(variable_name="messages"),
    ]
).partial(name="Olasammy", instructions=instructions)
model = ChatOpenAI()
simulated_user = prompt | model
messages = [HumanMessage(content="Hi! How can I help you?")]
simulated_user.invoke({"messages": messages})
Creating Nodes and Edges
We will define functions to handle the chatbot and simulate user nodes:
def chat_bot_node(messages):
    # Convert LangChain message objects to plain dicts for the OpenAI API.
    messages = [convert_message_to_dict(m) for m in messages]
    chat_bot_response = my_chat_bot(messages)
    return AIMessage(content=chat_bot_response["content"])

def _swap_roles(messages):
    new_messages = []
    for m in messages:
        if isinstance(m, AIMessage):
            new_messages.append(HumanMessage(content=m.content))
        else:
            new_messages.append(AIMessage(content=m.content))
    return new_messages

def simulated_user_node(messages):
    new_messages = _swap_roles(messages)
    response = simulated_user.invoke({"messages": new_messages})
    return HumanMessage(content=response.content)
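The role swap matters because the simulated user is itself an LLM: what the support bot said (an AIMessage) must appear as incoming human input from the simulated user's perspective, and vice versa. Here is a stdlib-only illustration of the same idea using plain role dicts instead of LangChain message types (function name is hypothetical):

```python
def swap_roles_dicts(messages: list[dict]) -> list[dict]:
    # Flip each message's perspective: the other party's "assistant" turns
    # become "user" turns from our side, and vice versa.
    flipped = {"assistant": "user", "user": "assistant"}
    return [{"role": flipped[m["role"]], "content": m["content"]} for m in messages]

conversation = [
    {"role": "assistant", "content": "Hi! How can I help you?"},
    {"role": "user", "content": "I want a refund."},
]
print(swap_roles_dicts(conversation))
```

This is exactly what _swap_roles does above, just expressed with AIMessage and HumanMessage objects instead of role strings.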
Conversation Continuation Logic
Define a function to decide whether to continue or end the conversation:
def should_continue(messages):
    if len(messages) > 6:
        return "end"
    elif messages[-1].content == "TERMINATE":
        return "end"
    else:
        return "continue"
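Both stopping rules (a hard cap of six messages, or the simulated user saying TERMINATE) can be exercised without any API calls. A quick stdlib-only check, using a minimal stand-in for a LangChain message (the Msg class is hypothetical; only .content is needed):

```python
from dataclasses import dataclass

@dataclass
class Msg:
    # Minimal stand-in for a LangChain message; only .content is used here.
    content: str

def should_continue(messages):
    # Stop after more than six messages, or when the user signals the end.
    if len(messages) > 6:
        return "end"
    elif messages[-1].content == "TERMINATE":
        return "end"
    else:
        return "continue"

print(should_continue([Msg("hello")]))      # continue
print(should_continue([Msg("TERMINATE")]))  # end (explicit signal)
print(should_continue([Msg("hi")] * 7))     # end (length cap)
```

The length cap is a safety net: without it, two LLMs talking to each other could loop indefinitely if neither ever emits TERMINATE.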
Building the Graph
Now, let’s build the LangGraph to manage our AI chat agent’s workflow:
graph_message = MessageGraph()
graph_message.add_node("user", simulated_user_node)
graph_message.add_node("chat_bot", chat_bot_node)
# Set the entry point for the graph
graph_message.set_entry_point("chat_bot")
# Add edges
graph_message.add_edge("chat_bot", "user")
graph_message.add_conditional_edges(
    "user",
    should_continue,
    {
        "end": END,
        "continue": "chat_bot",
    },
)
memory = SqliteSaver.from_conn_string(":memory:")
graph_1 = graph_message.compile(checkpointer=memory)
from IPython.display import Image, display
try:
    display(Image(graph_1.get_graph(xray=True).draw_mermaid_png()))
except Exception:
    # Rendering the diagram needs optional dependencies; skip if unavailable.
    pass
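Under the hood, the compiled graph just keeps invoking nodes and following edges until it reaches END. Here is a stdlib-only sketch of that loop with stubbed nodes, mirroring the chat_bot -> user -> (conditional) wiring above (no LangGraph or API calls; the stub bodies are hypothetical stand-ins for the real model calls):

```python
# Stub nodes: each takes the message list and returns one new message dict.
def chat_bot_stub(messages):
    return {"role": "assistant", "content": "How can I help?"}

def user_stub(messages):
    # Pretend the simulated user gives up after speaking twice.
    n_user_turns = sum(1 for m in messages if m["role"] == "user")
    text = "TERMINATE" if n_user_turns >= 2 else "I want a refund."
    return {"role": "user", "content": text}

def should_continue(messages):
    if len(messages) > 6 or messages[-1]["content"] == "TERMINATE":
        return "end"
    return "continue"

# Entry point is chat_bot; the conditional edge sits after the user node.
messages = [{"role": "user", "content": "Hi, How are you?"}]
node = "chat_bot"
while True:
    fn = chat_bot_stub if node == "chat_bot" else user_stub
    messages.append(fn(messages))
    if node == "user" and should_continue(messages) == "end":
        break
    node = "user" if node == "chat_bot" else "chat_bot"

for m in messages:
    print(m["role"], ":", m["content"])
```

LangGraph replaces this hand-written loop with the compiled graph, and the SqliteSaver checkpointer additionally persists the message list between runs.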
Running the Simulation
Initiate the chat and observe the conversation flow:
config = {"configurable": {"thread_id": "1", "thread_ts": "2"}}
initial_message = {"role": "user", "content": "Hi, How are you?"}
for chunk in graph_1.stream([initial_message], config, stream_mode="values"):
    if END not in chunk:
        print(chunk)
        print("-" * 80)
Conclusion
LangGraph simplifies the creation of stateful, multi-actor AI applications using graph-based workflows. Its cyclic data flows and stateful workflows open up possibilities for more sophisticated AI applications, and from here you can extend the agent with richer conversational experiences, such as iterative interactions, customizable flows, and multi-agent collaboration.
With LangGraph, developers can build more intelligent, context-aware AI systems that provide superior user interactions and solutions.
About the author: Oladimeji Sowole
Oladimeji Sowole is a member of the Andela community. A Data Scientist and Data Analyst with more than 6 years of professional experience building data visualizations with different tools and predictive models for actionable insights, he has hands-on expertise in implementing technologies such as Python, R, and SQL to develop solutions that drive client satisfaction. A collaborative team player, he has a great passion for solving problems.