{"id":681664,"date":"2026-02-19T11:40:15","date_gmt":"2026-02-19T10:40:15","guid":{"rendered":"https:\/\/blog.jetbrains.com\/?post_type=pycharm&#038;p=681664"},"modified":"2026-03-13T11:26:06","modified_gmt":"2026-03-13T10:26:06","slug":"langchain-tutorial-2026","status":"publish","type":"pycharm","link":"https:\/\/blog.jetbrains.com\/zh-hans\/pycharm\/2026\/02\/langchain-tutorial-2026","title":{"rendered":"LangChain Python Tutorial: A Complete Guide for 2026"},"content":{"rendered":"\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"1280\" height=\"720\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/PC-social-BlogFeatured-1280x720-1.png\" alt=\"LangChain Python Tutorial\" class=\"wp-image-682317\"\/><\/figure>\n\n\n\n<p>If you\u2019ve read the blog post <a href=\"https:\/\/blog.jetbrains.com\/pycharm\/2024\/08\/how-to-build-chatbots-with-langchain\/\"><em>How to Build Chatbots With LangChain<\/em><\/a>, you may want to know more about LangChain. This blog post will dive deeper into what LangChain offers and guide you through a few more real-world use cases. And even if you haven\u2019t read the first post, you might still find the info in this one helpful for building your next AI agent.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">LangChain fundamentals<\/h2>\n\n\n\n<p>Let\u2019s have a look at what LangChain is. LangChain provides a standard framework for building AI agents powered by LLMs, like the ones offered by OpenAI, Anthropic, Google, etc., and is therefore the easiest way to get started. LangChain supports most of the commonly used LLMs on the market today.<\/p>\n\n\n\n<p>LangChain is a high-level tool built on LangGraph, which provides a low-level framework for orchestrating the agent and runtime and is suitable for more advanced users. 
Beginners and those who only need a simple agent build are better off with LangChain.<\/p>\n\n\n\n<p>We\u2019ll start by taking a look at several key components of a LangChain agent.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Agents<\/h3>\n\n\n\n<p>Agents are what we are building. They combine LLMs with tools to create systems that can reason about tasks, decide which tools to use for which steps, analyze intermediate results, and work towards solutions iteratively.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"1600\" height=\"1460\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/image-27.png\" alt=\"\" class=\"wp-image-681665\"\/><\/figure>\n\n\n\n<p>Creating an agent is as simple as using the `create_agent` function with a few parameters:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from langchain.agents import create_agent\n\nagent = create_agent(\n\n\u00a0\u00a0\u00a0\"gpt-5\",\n\n\u00a0\u00a0\u00a0tools=tools\u00a0\u00a0# a list of tool functions defined with the @tool decorator\n\n)<\/pre>\n\n\n\n<p>In this example, the LLM used is GPT-5 by OpenAI. In most cases, the provider of the LLM can be inferred. To see a list of all supported providers, head over <a href=\"https:\/\/reference.langchain.com\/python\/langchain\/models\/#langchain.chat_models.init_chat_model(model)\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">LangChain Models: Static and Dynamic<\/h3>\n\n\n\n<p>There are two types of agent models that you can build: static and dynamic. Static models, as the name suggests, are straightforward and more common. 
The agent is configured in advance during creation and remains unchanged during execution.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import os\n\nfrom langchain.chat_models import init_chat_model\n\nos.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\n\nmodel = init_chat_model(\"gpt-5\")\n\nprint(model.invoke(\"What is PyCharm?\"))<\/pre>\n\n\n\n<p>Dynamic models allow you to build an agent that can switch models at runtime based on customized logic. Different models can then be picked based on the current state and context. For example, we can use ModelFallbackMiddleware (described in the <em>Middleware<\/em> section below) to have a backup model in case the default one fails.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from langchain.agents import create_agent\n\nfrom langchain.agents.middleware import ModelFallbackMiddleware\n\nagent = create_agent(\n\n\u00a0\u00a0\u00a0model=\"gpt-4o\",\n\n\u00a0\u00a0\u00a0tools=[],\n\n\u00a0\u00a0\u00a0middleware=[\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Fallbacks are tried in order if the primary model fails\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ModelFallbackMiddleware(\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"gpt-4o-mini\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"claude-3-5-sonnet-20241022\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0),\n\n\u00a0\u00a0\u00a0],\n\n)<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Tools<\/h3>\n\n\n\n<p>Tools are a core part of AI agents. They make agents effective at carrying out tasks that involve more than just producing text, which is a fundamental difference between an agent and a bare LLM. 
Tools allow agents to interact with external systems, such as APIs, databases, or file systems. Without tools, agents would only be able to provide text output, with no way of performing actions or iteratively working their way toward a result.<\/p>\n\n\n\n<p>LangChain provides decorators for systematically creating tools for your agent, making the whole process more organized and easier to maintain. Here are a couple of examples:<\/p>\n\n\n\n<p>Basic tool<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from langchain.tools import tool\n\n@tool\n\ndef search_db(query: str, limit: int = 10) -> str:\n\n\u00a0\u00a0\u00a0\"\"\"Search the customer database for records matching the query.\n\n\u00a0\u00a0\u00a0\"\"\"\n\n\u00a0\u00a0\u00a0...\n\n\u00a0\u00a0\u00a0return f\"Found {limit} results for '{query}'\"<\/pre>\n\n\n\n<p>Tool with a custom name<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">@tool(\"pycharm_docs_search\", return_direct=False)\n\ndef pycharm_docs_search(q: str) -> str:\n\n\u00a0\u00a0\u00a0\"\"\"Search the local FAISS index of JetBrains PyCharm documentation and return relevant passages.\"\"\"\n\n\u00a0\u00a0\u00a0...\n\n\u00a0\u00a0\u00a0docs = retriever.invoke(q)\n\n\u00a0\u00a0\u00a0return format_docs(docs)<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Middleware<\/h3>\n\n\n\n<p>Middleware provides ways to define the logic of your agent and customize its behavior. For example, there is middleware that can monitor the agent during runtime, assist with prompting and selecting tools, or even implement advanced behaviors such as guardrails.<\/p>\n\n\n\n<p>Here are a few examples of built-in middleware. 
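<\/p>\n\n\n\n<p>Middleware is attached through the `middleware` parameter of `create_agent`, just like in the fallback example earlier. As a minimal sketch, this wires the summarization middleware into an agent; the constructor arguments shown here follow the current LangChain documentation but may differ between versions, so check the docs for your installed release:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from langchain.agents import create_agent\n\nfrom langchain.agents.middleware import SummarizationMiddleware\n\nagent = create_agent(\n\n\u00a0\u00a0\u00a0model=\"gpt-4o\",\n\n\u00a0\u00a0\u00a0tools=[],\n\n\u00a0\u00a0\u00a0middleware=[\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Summarize older turns once the history grows past the token budget\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0SummarizationMiddleware(model=\"gpt-4o-mini\", max_tokens_before_summary=4000),\n\n\u00a0\u00a0\u00a0],\n\n)<\/pre>\n\n\n\n<p>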
For the full list, please refer to the <a href=\"https:\/\/docs.langchain.com\/oss\/python\/langchain\/middleware\/built-in#provider-agnostic-middleware\" target=\"_blank\" rel=\"noopener\">LangChain middleware documentation<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Middleware<\/strong><\/td><td><strong>Description<\/strong><\/td><\/tr><tr><td>Summarization<\/td><td>Automatically summarize the conversation history when approaching token limits.<\/td><\/tr><tr><td>Human-in-the-loop<\/td><td>Pause execution for human approval of tool calls.<\/td><\/tr><tr><td>Context editing<\/td><td>Manage conversation context by trimming or clearing tool uses.<\/td><\/tr><tr><td>PII detection<\/td><td>Detect and handle personally identifiable information (PII).<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Real-world LangChain use cases<\/h2>\n\n\n\n<p>LangChain use cases cover a varied range of fields, with common instances including:&nbsp;<\/p>\n\n\n\n<ol>\n<li><a href=\"https:\/\/blog.jetbrains.com\/pycharm\/2026\/02\/langchain-tutorial-2026\/#ai-powered-chatbots\" data-type=\"link\" data-id=\"https:\/\/blog.jetbrains.com\/pycharm\/2026\/02\/langchain-tutorial-2026\/#ai-powered-chatbots\">AI-powered chatbots<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/blog.jetbrains.com\/pycharm\/2026\/02\/langchain-tutorial-2026\/#document-question-answering-systems\" data-type=\"link\" data-id=\"https:\/\/blog.jetbrains.com\/pycharm\/2026\/02\/langchain-tutorial-2026\/#document-question-answering-systems\">Document question answering systems<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/blog.jetbrains.com\/pycharm\/2026\/02\/langchain-tutorial-2026\/#content-generation-tools\" data-type=\"link\" data-id=\"https:\/\/blog.jetbrains.com\/pycharm\/2026\/02\/langchain-tutorial-2026\/#content-generation-tools\">Content generation tools<\/a><\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ai-powered-chatbots\">AI-powered 
chatbots<\/h3>\n\n\n\n<p>When we think of AI agents, we often think of chatbots first. If you\u2019ve read the <a href=\"https:\/\/blog.jetbrains.com\/pycharm\/2024\/08\/how-to-build-chatbots-with-langchain\/\"><em>How to Build Chatbots With LangChain<\/em><\/a> blog post, then you\u2019re already up to speed about this use case. If not, I highly recommend checking it out.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"document-question-answering-systems\">Document question answering systems<\/h3>\n\n\n\n<p>Another real-world use case for LangChain is a document question answering system. For example, companies often have internal documents and manuals that are rather long and unwieldy. A document question answering system provides a quick way for employees to find the info they need within the documents, without having to manually read through each one.<\/p>\n\n\n\n<p>To demonstrate, we\u2019ll create a <a href=\"https:\/\/github.com\/Cheukting\/langchain-example1\/blob\/main\/src\/langchainexample\/ingest_pycharm_docs.py\" data-type=\"link\" data-id=\"https:\/\/github.com\/Cheukting\/langchain-example1\/blob\/main\/src\/langchainexample\/ingest_pycharm_docs.py\" target=\"_blank\" rel=\"noopener\">script<\/a> to index the <a href=\"https:\/\/www.jetbrains.com\/help\/pycharm\/\" target=\"_blank\" rel=\"noopener\">PyCharm documentation<\/a>. Then we\u2019ll create an AI agent that can answer questions based on the documents we indexed. 
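<\/p>\n\n\n\n<p>The ingestion script is not reproduced in full here, but conceptually it does four things: load the pages, split them into chunks, embed the chunks, and save a FAISS index to disk. A minimal sketch of that flow is below \u2013 the loader, chunk sizes, embedding model, and index path are illustrative rather than the exact values used in the linked script:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from langchain_community.document_loaders import WebBaseLoader\n\nfrom langchain_community.vectorstores import FAISS\n\nfrom langchain_openai import OpenAIEmbeddings\n\nfrom langchain_text_splitters import RecursiveCharacterTextSplitter\n\n# Load and chunk the documentation pages\n\ndocs = WebBaseLoader(\"https:\/\/www.jetbrains.com\/help\/pycharm\/\").load()\n\nsplitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)\n\nchunks = splitter.split_documents(docs)\n\n# Embed the chunks and persist the index for the agent to load later\n\nvector_store = FAISS.from_documents(chunks, OpenAIEmbeddings(model=\"text-embedding-3-small\"))\n\nvector_store.save_local(\"pycharm_docs_index\")<\/pre>\n\n\n\n<p>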
First, let\u2019s take a look at our tool:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">@tool(\"pycharm_docs_search\")\n\ndef pycharm_docs_search(q: str) -> str:\n\n\u00a0\u00a0\u00a0\"\"\"Search the local FAISS index of JetBrains PyCharm documentation and return relevant passages.\"\"\"\n\n\u00a0\u00a0\u00a0# Load vector store and create retriever\n\n\u00a0\u00a0\u00a0embeddings = OpenAIEmbeddings(\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0model=settings.openai_embedding_model, api_key=settings.openai_api_key\n\n\u00a0\u00a0\u00a0)\n\n\u00a0\u00a0\u00a0vector_store = FAISS.load_local(\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0settings.index_dir, embeddings, allow_dangerous_deserialization=True\n\n\u00a0\u00a0\u00a0)\n\n\u00a0\u00a0\u00a0k = 4\n\n\u00a0\u00a0\u00a0retriever = vector_store.as_retriever(\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0search_type=\"mmr\", search_kwargs={\"k\": k, \"fetch_k\": max(k * 3, 12)}\n\n\u00a0\u00a0\u00a0)\n\n\u00a0\u00a0\u00a0docs = retriever.invoke(q)\n\n\u00a0\u00a0\u00a0return format_docs(docs)<\/pre>\n\n\n\n<p>We are using a <a href=\"https:\/\/docs.langchain.com\/oss\/python\/integrations\/vectorstores\" target=\"_blank\" rel=\"noopener\">vector store<\/a> to perform a similarity search with embeddings provided by OpenAI. 
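<\/p>\n\n\n\n<p>The tool snippets call a small `format_docs` helper that is defined in the example repository. A minimal sketch of what such a helper might look like is below \u2013 the exact formatting is an assumption, but it should emit the `Source:` lines that the system prompt later asks the agent to cite:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">def format_docs(docs) -> str:\n\n\u00a0\u00a0\u00a0\"\"\"Join retrieved documents into one string, keeping their source URLs.\"\"\"\n\n\u00a0\u00a0\u00a0parts = []\n\n\u00a0\u00a0\u00a0for doc in docs:\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0source = doc.metadata.get(\"source\", \"unknown\")\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0parts.append(f\"Source: {source}\\n{doc.page_content}\")\n\n\u00a0\u00a0\u00a0return \"\\n\\n---\\n\\n\".join(parts)<\/pre>\n\n\n\n<p>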
Documents are embedded so the doc search tool can perform similarity searches to fetch the relevant documents when called.&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">def main():\n\n\u00a0\u00a0\u00a0parser = argparse.ArgumentParser(\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0description=\"Ask PyCharm docs via an Agent (FAISS + GPT-5)\"\n\n\u00a0\u00a0\u00a0)\n\n\u00a0\u00a0\u00a0parser.add_argument(\"question\", type=str, nargs=\"+\", help=\"Your question\")\n\n\u00a0\u00a0\u00a0parser.add_argument(\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"--k\", type=int, default=6, help=\"Number of documents to retrieve\"\n\n\u00a0\u00a0\u00a0)\n\n\u00a0\u00a0\u00a0args = parser.parse_args()\n\n   question = \" \".join(args.question)\n\n\u00a0\u00a0\u00a0system_prompt = \"\"\"You are a helpful assistant that answers questions about JetBrains PyCharm using the provided tools.\n\n\u00a0\u00a0\u00a0Always consult the 'pycharm_docs_search' tool to find relevant documentation before answering.\n\n\u00a0\u00a0\u00a0Cite sources by including the 'Source:' lines from the tool output when useful. 
If information isn't found, say you don't know.\"\"\"\n\n\u00a0\u00a0\u00a0agent = create_agent(\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0model=settings.openai_chat_model,\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0tools=[pycharm_docs_search],\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0system_prompt=system_prompt,\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0response_format=ToolStrategy(ResponseFormat),\n\n\u00a0\u00a0\u00a0)\n\n\u00a0\u00a0\u00a0result = agent.invoke({\"messages\": [{\"role\": \"user\", \"content\": question}]})\n\n\u00a0\u00a0\u00a0print(result[\"structured_response\"].content)<\/pre>\n\n\n\n<p>System prompts are provided to the LLM together with the user\u2019s input prompt. We are using OpenAI as the LLM provider in this example, and we\u2019ll need an API key from them. Head to <a href=\"https:\/\/docs.langchain.com\/oss\/python\/integrations\/chat\/openai\" target=\"_blank\" rel=\"noopener\">this page<\/a> to check out OpenAI\u2019s integration documentation. When creating the agent, we configure `model`, `tools`, and `system_prompt`. Passing `response_format=ToolStrategy(ResponseFormat)` additionally instructs the agent to return a structured response, which the script then reads from `result[\"structured_response\"]`.<\/p>\n\n\n\n<p>For the full scripts and project, see <a href=\"https:\/\/github.com\/Cheukting\/langchain-example1\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"content-generation-tools\">Content generation tools<\/h3>\n\n\n\n<p>Another example is an agent that generates text based on content fetched from other sources. For instance, we might use this when we want to generate marketing content with info taken from documentation. 
In this example, we\u2019ll pretend we\u2019re doing marketing for Python and creating a newsletter for the latest Python release.<\/p>\n\n\n\n<p>In <a href=\"https:\/\/github.com\/Cheukting\/langchain-example2\/blob\/main\/app\/tools.py\" target=\"_blank\" rel=\"noopener\">tools.py<\/a>, a tool is set up to fetch the relevant page, parse it into a structured format, and extract the necessary information.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">@tool(\"fetch_python_whatsnew\", return_direct=False)\n\ndef fetch_python_whatsnew() -> str:\n\n\u00a0\u00a0\u00a0\"\"\"\n\n\u00a0\u00a0\u00a0Fetch the latest \"What's New in Python\" article and return a concise, cleaned\n\n\u00a0\u00a0\u00a0text payload including the URL and extracted section highlights.\n\n\u00a0\u00a0\u00a0\"\"\"\n\n\u00a0\u00a0\u00a0index_html = _fetch(BASE_URL)\n\n\u00a0\u00a0\u00a0latest = _find_latest_entry(index_html)\n\n\u00a0\u00a0\u00a0if not latest:\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return \"Could not determine latest What's New entry from the index page.\"\n\n\u00a0\u00a0\u00a0article_html = _fetch(latest.url)\n\n\u00a0\u00a0\u00a0highlights = _extract_highlights(article_html)\n\n\u00a0\u00a0\u00a0return f\"URL: {latest.url}\\nVERSION: {latest.version}\\n\\n{highlights}\"<\/pre>\n\n\n\n<p>The agent itself is defined in <a href=\"https:\/\/github.com\/Cheukting\/langchain-example2\/blob\/main\/app\/agent.py\" target=\"_blank\" rel=\"noopener\">agent.py<\/a>:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">SYSTEM_PROMPT = 
(\n\n\u00a0\u00a0\u00a0\"You are a senior Product Marketing Manager at the Python Software Foundation. \"\n\n\u00a0\u00a0\u00a0\"Task: Draft a clear, engaging release marketing newsletter for end users and developers, \"\n\n\u00a0\u00a0\u00a0\"highlighting the most compelling new features, performance improvements, and quality-of-life \"\n\n\u00a0\u00a0\u00a0\"changes in the latest Python release.\\n\\n\"\n\n\u00a0\u00a0\u00a0\"Process: Use the tool to fetch the latest 'What's New in Python' page. Read the highlights and craft \"\n\n\u00a0\u00a0\u00a0\"a concise newsletter with: (1) an attention-grabbing subject line, (2) a short intro paragraph, \"\n\n\u00a0\u00a0\u00a0\"(3) 4\u20138 bullet points of key features with user benefits, (4) short code snippets only if they add clarity, \"\n\n\u00a0\u00a0\u00a0\"(5) a 'How to upgrade' section, and (6) links to official docs\/changelog. Keep it accurate and avoid speculation.\"\n\n)\n\n...\n\ndef run_newsletter() -> str:\n\n\u00a0\u00a0\u00a0load_dotenv()\n\n\u00a0\u00a0\u00a0agent = create_agent(\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0model=os.getenv(\"OPENAI_MODEL\", \"gpt-4o\"),\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0tools=[fetch_python_whatsnew],\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0system_prompt=SYSTEM_PROMPT,\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# response_format=ToolStrategy(ResponseFormat),\n\n\u00a0\u00a0\u00a0)\n\n...<\/pre>\n\n\n\n<p>As before, we provide a system prompt and the API key for OpenAI to the agent.<\/p>\n\n\n\n<p>For the full scripts and project, see <a href=\"https:\/\/github.com\/Cheukting\/langchain-example2\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Advanced LangChain concepts<\/h2>\n\n\n\n<p>LangChain\u2019s more advanced features can be extremely useful when you\u2019re building a more sophisticated AI agent. Not all AI agents require these extra elements, but they are commonly used in production. 
Let\u2019s look at some of them.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">MCP adapter<\/h3>\n\n\n\n<p>MCP (the Model Context Protocol) is an increasingly popular, standardized way to give an AI agent extra tools and capabilities.&nbsp;<\/p>\n\n\n\n<p>The `langchain-mcp-adapters` package provides a <a href=\"https:\/\/reference.langchain.com\/python\/langchain_mcp_adapters\/\" target=\"_blank\" rel=\"noopener\">MultiServerMCPClient<\/a> class that lets an agent connect to one or more MCP servers and use the tools they expose. For example:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from langchain_mcp_adapters.client import MultiServerMCPClient\n\nclient = MultiServerMCPClient(\n\n\u00a0\u00a0\u00a0{\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"postman-server\": {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"http\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"url\": \"https:\/\/mcp.eu.postman.com\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"headers\": {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"Authorization\": \"Bearer ${input:postman-api-key}\"\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0}\n\n)\n\nall_tools = await client.get_tools()\u00a0\u00a0# run inside an async function<\/pre>\n\n\n\n<p>The above connects to the <a href=\"https:\/\/www.postman.com\/postman\/postman-public-workspace\/collection\/681dc649440b35935978b8b7\" target=\"_blank\" rel=\"noopener\">Postman MCP server in the EU<\/a> with an API key.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Guardrails<\/h3>\n\n\n\n<p>As with many AI technologies, since the logic is not pre-determined, the 
behavior of an AI agent is non-deterministic. Guardrails are necessary for managing AI behavior and ensuring that it is policy-compliant.<\/p>\n\n\n\n<p>LangChain middleware can be used to set up specific guardrails. For example, you can use PII detection middleware to protect personal information or human-in-the-loop middleware for human verification. You can even create custom middleware for more specific guardrail policies.&nbsp;<\/p>\n\n\n\n<p>For instance, you can use the `<a href=\"https:\/\/docs.langchain.com\/oss\/python\/langchain\/guardrails#before-agent-guardrails\" target=\"_blank\" rel=\"noopener\">@before_agent<\/a>` or `<a href=\"https:\/\/docs.langchain.com\/oss\/python\/langchain\/guardrails#after-agent-guardrails\" target=\"_blank\" rel=\"noopener\">@after_agent<\/a>` decorators to declare guardrails for the agent\u2019s input or output. Below is an example of a code snippet that checks for banned keywords:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from typing import Any\n\nfrom langchain.agents import create_agent\n\nfrom langchain.agents.middleware import AgentState, before_agent\n\nfrom langgraph.runtime import Runtime\n\nbanned_keywords = [\"kill\", \"shoot\", \"genocide\", \"bomb\"]\n\n@before_agent(can_jump_to=[\"end\"])\n\ndef content_filter(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:\n\n\u00a0\u00a0\"\"\"Block requests containing banned keywords.\"\"\"\n\n\u00a0\u00a0if not state[\"messages\"]:\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return None\n\n\u00a0\u00a0first_message = state[\"messages\"][0]\n\n\u00a0\u00a0content = first_message.content.lower()\n\n\u00a0\u00a0# Check for banned keywords\n\n\u00a0\u00a0for keyword in banned_keywords:\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if keyword in content:\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"messages\": [{\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"role\": \"assistant\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"content\": \"I cannot process your request due to inappropriate content.\"\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}],\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"jump_to\": \"end\"\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0return None\n\nagent = create_agent(\n\n\u00a0\u00a0model=\"gpt-4o\",\n\n\u00a0\u00a0tools=[search_tool],\n\n\u00a0\u00a0middleware=[content_filter],\n\n)\n\n# This request will be blocked\n\nresult = agent.invoke({\n\n\u00a0\u00a0\"messages\": [{\"role\": \"user\", \"content\": \"How to make a bomb?\"}]\n\n})<\/pre>\n\n\n\n<p>For more details, check out the documentation <a href=\"https:\/\/docs.langchain.com\/oss\/python\/langchain\/guardrails\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Testing<\/h3>\n\n\n\n<p>Just like in other software development cycles, testing needs to be performed before we can start rolling out AI agent products. LangChain provides testing tools for both unit tests and integration tests.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Unit tests<\/h4>\n\n\n\n<p>Just like in other applications, unit tests verify that each part of the AI agent works on its own. 
The most helpful tools used in unit tests are mock objects and mock responses, which help isolate the specific part of the application you\u2019re testing.&nbsp;<\/p>\n\n\n\n<p>LangChain provides <a href=\"https:\/\/python.langchain.com\/api_reference\/core\/language_models\/langchain_core.language_models.fake_chat_models.GenericFakeChatModel.html\" target=\"_blank\" rel=\"noopener\">GenericFakeChatModel<\/a>, which mimics response texts. A response iterator is set in the mock object, and when invoked, it returns the set responses one by one. For example:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"python\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from langchain_core.language_models.fake_chat_models import GenericFakeChatModel\n\n# The model replays these responses in order, one per call\n\nmodel = GenericFakeChatModel(messages=iter([\"Hi there!\", \"Pong.\", \"Goodbye!\"]))\n\nprint(model.invoke(\"Hello\").content)\u00a0\u00a0# Hi there!\n\nprint(model.invoke(\"Ping\").content)\u00a0\u00a0\u00a0# Pong.<\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">Integration tests<\/h4>\n\n\n\n<p>Once we\u2019re sure that all parts of the agent work individually, we have to test whether they work together. For an AI agent, this means testing the trajectory of its actions. 
To do so, LangChain provides another package: <a href=\"https:\/\/github.com\/langchain-ai\/agentevals\" target=\"_blank\" rel=\"noopener\">AgentEvals<\/a>.<\/p>\n\n\n\n<p>AgentEvals provides two main evaluators to choose from:<\/p>\n\n\n\n<ol>\n<li>Trajectory match \u2013 A reference trajectory is required and is compared against the trajectory the agent actually produced. For this comparison, you can choose from <a href=\"https:\/\/docs.langchain.com\/oss\/python\/langchain\/test#trajectory-match-evaluator\" target=\"_blank\" rel=\"noopener\">four different match modes<\/a>.<\/li>\n\n\n\n<li>LLM judge \u2013 An <a href=\"https:\/\/docs.langchain.com\/oss\/python\/langchain\/test#llm-as-judge-evaluator\" target=\"_blank\" rel=\"noopener\">LLM judge<\/a> can be used with or without a reference trajectory and evaluates whether the resulting trajectory is on the right path.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">LangChain support in PyCharm<\/h2>\n\n\n\n<p>With LangChain, you can develop an AI agent that suits your needs in no time. However, to use LangChain effectively in your application, you need an effective debugger. In PyCharm, the <a href=\"https:\/\/plugins.jetbrains.com\/plugin\/26921-ai-agents-debugger\" target=\"_blank\" rel=\"noopener\">AI Agents Debugger plugin<\/a> lets you inspect your agent\u2019s prompts, responses, and execution graph right in the IDE.<\/p>\n\n\n\n<p>If you don\u2019t yet have PyCharm, <a href=\"https:\/\/www.jetbrains.com\/pycharm\/download\/\" target=\"_blank\" rel=\"noopener\">you can download it here<\/a>.<\/p>\n\n\n\n<p>Using the AI Agents Debugger is very straightforward. 
Once you install the plug-in, it will appear as an icon on the right-hand side of the IDE.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"1600\" height=\"1460\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/image-27.png\" alt=\"\" class=\"wp-image-681666\"\/><\/figure>\n\n\n\n<p>When you click on this icon, a side window will open with text saying that no extra code is needed \u2013 just run your agent and traces will be shown automatically.<\/p>\n\n\n\n<p>As an example, we will run the <a href=\"https:\/\/github.com\/Cheukting\/langchain-example2\" target=\"_blank\" rel=\"noopener\">content generation agent<\/a> that we built above. If you need a custom run configuration, you will have to set it up now by following this guide on <a href=\"https:\/\/www.jetbrains.com\/help\/pycharm\/run-debug-configuration.html\" target=\"_blank\" rel=\"noopener\">custom run configurations in PyCharm<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"1600\" height=\"1460\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/image-27.png\" alt=\"\" class=\"wp-image-681676\"\/><\/figure>\n\n\n\n<p>Once it is done, you can review all the input prompts and output responses at a glance. To inspect the LangGraph, click on the <em>Graph<\/em> button in the top-right corner.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"1600\" height=\"1460\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/image-27.png\" alt=\"\" class=\"wp-image-681673\"\/><\/figure>\n\n\n\n<p>The <em>LangGraph <\/em>view is especially useful if you have an agent that has complicated steps or a customized workflow.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Summing up<\/h2>\n\n\n\n<p>LangChain is a powerful tool for building AI agents that work for many use cases and scenarios. 
It\u2019s built on <a href=\"https:\/\/docs.langchain.com\/oss\/python\/langgraph\/overview\" target=\"_blank\" rel=\"noopener\">LangGraph<\/a>, which provides low-level orchestration and runtime customization, as well as compatibility with a vast variety of LLMs on the market. Together, LangChain and LangGraph set a new industry standard for developing AI agents.<\/p>\n","protected":false},"author":1528,"featured_media":682317,"comment_status":"closed","ping_status":"closed","template":"","categories":[952,2347],"tags":[6847,8724,6230,8556],"cross-post-tag":[],"acf":[],"_links":{"self":[{"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/pycharm\/681664"}],"collection":[{"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/pycharm"}],"about":[{"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/types\/pycharm"}],"author":[{"embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/users\/1528"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/comments?post=681664"}],"version-history":[{"count":9,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/pycharm\/681664\/revisions"}],"predecessor-version":[{"id":687831,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/pycharm\/681664\/revisions\/687831"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/media\/682317"}],"wp:attachment":[{"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/media?parent=681664"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/categories?post=681664"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/tags?post=681664"},{"taxonomy":"cross-post-tag","embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/cross-post-tag?post=681664"}],"curies":[{"name":"w
p","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}