LLM Template

Advanced

This is an automation workflow in the Engineering / AI RAG domain, consisting of 25 nodes. It primarily uses the Set, Agent, ChatTrigger, LmChatOpenAi, and RerankerCohere nodes to provide persistent chat memory backed by GPT-4o-mini and a Qdrant vector database.

Prerequisites
  • OpenAI API Key
  • Qdrant server connection details
  • Cohere API Key (optional, used for reranking)
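The workflow reads from and writes to a Qdrant collection named `ltm`, whose vector size must match the 1024-dimensional embeddings configured below. A minimal sketch of the request body for creating that collection over Qdrant's REST API (`PUT /collections/ltm`); the host URL and the Cosine distance metric are assumptions, not taken from the workflow itself:

```python
import json

# Request body for creating the 'ltm' collection. The vector size must
# match the workflow's embedding dimensions (1024); Cosine distance is
# an illustrative choice.
payload = {"vectors": {"size": 1024, "distance": "Cosine"}}

print(json.dumps(payload))
# Send it with, for example:
#   curl -X PUT "http://localhost:6333/collections/ltm" \
#        -H "Content-Type: application/json" \
#        -d '{"vectors": {"size": 1024, "distance": "Cosine"}}'
```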
Export Workflow
Import the JSON configuration below into n8n to use this workflow.
{
  "id": "EDZcm0r7Lp2uIkTn",
  "meta": {
    "instanceId": "48f9e8e7598a73c86aec19069eefaf1e83b51b8858cbb8999ee59d6fa3d9a3f2",
    "templateCredsSetupCompleted": true
  },
  "name": "LLM_TEMPLATE",
  "tags": [],
  "nodes": [
    {
      "id": "265bbb29-3ae9-49dd-9d77-4a8230af5f3e",
      "name": "Embeddings OpenAI",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "position": [
        816,
        672
      ],
      "parameters": {
        "options": {
          "dimensions": 1024
        }
      },
      "credentials": {
        "openAiApi": {
          "id": "uHrKvsqlQYyImnjO",
          "name": "openai - einarcesar@gmail.com"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "8e8d619d-8356-485e-9ba5-26489e7ef46c",
      "name": "付箋",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        560,
        800
      ],
      "parameters": {
        "width": 324,
        "height": 416,
        "content": "## 🔤 TEXT VECTORIZATION\n\nConverts conversation text into 1024-dimensional vectors for semantic storage.\n\n### ⚙️ Configuration:\n- **Model**: text-embedding-3-small\n- **Dimensions**: 1024 (must match vector DB)\n\n### 💡 Pro tip: \nThis model offers the best balance between performance and cost for most applications.\n\n### 💰 Costs:\n- ~$0.02 per 1M tokens"
      },
      "typeVersion": 1
    },
    {
      "id": "ae0a96a7-6cd5-4868-aadb-2b91e3e8f448",
      "name": "デフォルトデータローダー",
      "type": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
      "position": [
        944,
        672
      ],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1
    },
    {
      "id": "f8685c76-dde4-400a-a359-e52348d9f0ae",
      "name": "付箋 2",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        944,
        1024
      ],
      "parameters": {
        "width": 324,
        "height": 371,
        "content": "## 📄 DOCUMENT PROCESSOR\n\nPrepares conversation data for vector storage by converting it into a format suitable for chunking.\n\n### 🎯 Purpose:\n- Standardizes data format\n- Prepares for text splitting\n- Maintains metadata integrity\n\n### ⚡ Performance:\n- Processing time: ~10ms per conversation"
      },
      "typeVersion": 1
    },
    {
      "id": "fd60c06d-0c22-40fc-ab62-7f87b4c6f29a",
      "name": "再帰的文字テキスト分割器",
      "type": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
      "position": [
        1040,
        880
      ],
      "parameters": {
        "options": {},
        "chunkSize": 200,
        "chunkOverlap": 40
      },
      "typeVersion": 1
    },
    {
      "id": "487e7425-4d1e-48be-9d92-5398e6328279",
      "name": "付箋 3",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1296,
        848
      ],
      "parameters": {
        "width": 340,
        "height": 392,
        "content": "## ✂️ TEXT CHUNKING STRATEGY\n\n### 🔧 Settings:\n- **Chunk Size**: 200 chars\n- **Overlap**: 40 chars\n\n### 📊 Why these values?\n- Small chunks = Better context precision\n- 20% overlap = Maintains context continuity\n- Optimized for conversation snippets\n\n### ⚡ Performance: \nIdeal for real-time chat applications"
      },
      "typeVersion": 1
    },
    {
      "id": "922bfcdb-14ce-40cc-a3df-e89be2d59635",
      "name": "チャットメッセージ受信時",
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "position": [
        -112,
        448
      ],
      "webhookId": "ef238f10-3af1-409d-b7e8-3bf61cd357e4",
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.1
    },
    {
      "id": "a8f24cf0-077f-43ea-a5b6-885ef7069948",
      "name": "付箋 4",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -464,
        320
      ],
      "parameters": {
        "width": 340,
        "height": 428,
        "content": "## 💬 CHAT INTERFACE\n\n### 🚀 Entry point for user interactions\n\n### 📝 Features:\n- Real-time message processing\n- Session management\n- Context preservation\n\n### 🔗 Integration: \nCan be embedded in websites, apps, or used via n8n's chat widget\n\n### 🌐 Webhook URL:\nAvailable after workflow activation"
      },
      "typeVersion": 1
    },
    {
      "id": "58538d83-7b62-47ea-a099-143517886719",
      "name": "検索用Embeddings",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "position": [
        208,
        896
      ],
      "parameters": {
        "options": {
          "dimensions": 1024
        }
      },
      "credentials": {
        "openAiApi": {
          "id": "uHrKvsqlQYyImnjO",
          "name": "openai - einarcesar@gmail.com"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "64ee54af-4f43-4b8a-a73b-f0fb02a69fca",
      "name": "付箋 5",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        208,
        1024
      ],
      "parameters": {
        "color": 6,
        "width": 340,
        "height": 260,
        "content": "## 🔍 RETRIEVAL EMBEDDINGS\n\nGenerates vectors for semantic search in the memory database.\n\n### ⚠️ Important: \nMust use the SAME model and dimensions as storage embeddings!\n\n### 🎯 Used for:\n- Query vectorization\n- Similarity search\n- Context retrieval"
      },
      "typeVersion": 1
    },
    {
      "id": "b86d86d8-8595-4c27-bc9f-45e706d08623",
      "name": "Reranker Cohere",
      "type": "@n8n/n8n-nodes-langchain.rerankerCohere",
      "position": [
        464,
        1328
      ],
      "parameters": {},
      "credentials": {
        "cohereApi": {
          "id": "7GqfOJcuJFHWeOpS",
          "name": "CohereApi account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "43ef1754-fe7a-4435-be5f-5cc912ef7590",
      "name": "付箋 6",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        576,
        1248
      ],
      "parameters": {
        "color": 5,
        "width": 340,
        "height": 396,
        "content": "## 🎯 RELEVANCE OPTIMIZER\n\nRe-ranks retrieved memories by relevance to current context.\n\n### ✨ Benefits:\n- Improves retrieval accuracy by 30-40%\n- Reduces hallucinations\n- Ensures most relevant context is used\n\n### 💰 Cost: \n~$1 per 1000 re-rankings\n\n### 🔧 Optional:\nCan be disabled for cost savings"
      },
      "typeVersion": 1
    },
    {
      "id": "9df9f1e4-b067-4ec8-8ee1-1d64f256081a",
      "name": "RAG_MEMORY",
      "type": "@n8n/n8n-nodes-langchain.vectorStoreQdrant",
      "onError": "continueRegularOutput",
      "position": [
        160,
        688
      ],
      "parameters": {
        "mode": "retrieve-as-tool",
        "topK": 20,
        "options": {},
        "toolName": "RAG_MEMORY",
        "useReranker": true,
        "toolDescription": "Long-term memory storage for maintaining context across conversations. Use this to recall previous interactions, user preferences, and historical context.",
        "qdrantCollection": {
          "__rl": true,
          "mode": "list",
          "value": "ltm",
          "cachedResultName": "ltm"
        }
      },
      "credentials": {
        "qdrantApi": {
          "id": "IMqj7iGvb0Ko0nCj",
          "name": "Qdrant - einar.qzz.io"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "a2f24b1e-df0f-4c10-b525-4eeea31edf7e",
      "name": "付箋 7",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -160,
        784
      ],
      "parameters": {
        "color": 3,
        "width": 340,
        "height": 428,
        "content": "## 🧠 MEMORY RETRIEVAL SYSTEM\n\n### 📊 Configuration:\n- **Collection**: 'ltm' (long-term memory)\n- **Top K**: 20 results\n- **Reranker**: Enabled\n\n### 🔍 How it works:\n1. Searches for similar past conversations\n2. Retrieves top 20 matches\n3. Re-ranks by relevance\n4. Provides context to AI\n\n### ⚡ Performance: \n~50ms average retrieval time\n\n### 💾 Storage:\nQdrant cloud or self-hosted"
      },
      "typeVersion": 1
    },
    {
      "id": "33b1f48b-9700-4edb-a73b-5889316e7cdf",
      "name": "OpenAI チャットモデル",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [
        432,
        816
      ],
      "parameters": {
        "model": {
          "__rl": true,
          "mode": "list",
          "value": "gpt-4o-mini"
        },
        "options": {
          "maxTokens": 2000,
          "temperature": 0.7
        }
      },
      "credentials": {
        "openAiApi": {
          "id": "uHrKvsqlQYyImnjO",
          "name": "openai - einarcesar@gmail.com"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "866bf74b-dc5b-4bf9-b01d-8fbc2da442c1",
      "name": "構造化出力パーサー",
      "type": "@n8n/n8n-nodes-langchain.outputParserStructured",
      "position": [
        544,
        112
      ],
      "parameters": {
        "autoFix": true,
        "jsonSchemaExample": "{\n    \"sessionId\": \"unique-session-identifier\",\n    \"chatInput\": \"User's message\",\n    \"output\": \"AI's response\",\n    \"timestamp\": \"2024-01-01T12:00:00Z\",\n    \"relevanceScore\": 0.95\n}"
      },
      "typeVersion": 1.3
    },
    {
      "id": "130381f8-4d66-4a5c-b233-5079e3630f71",
      "name": "付箋 8",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        848,
        48
      ],
      "parameters": {
        "color": 4,
        "width": 340,
        "height": 344,
        "content": "## 📐 OUTPUT FORMATTER\n\nEnsures AI responses follow a consistent structure for storage.\n\n### 🎯 Schema includes:\n- Session ID (conversation tracking)\n- User input & AI output\n- Timestamp (temporal retrieval)\n- Relevance score (optimization)\n\n### ✅ Auto-fix: \nEnabled to handle schema violations"
      },
      "typeVersion": 1
    },
    {
      "id": "6d7eb364-5c8f-4d6a-9ef6-51e3a1fc45bd",
      "name": "レスポンスフォーマット",
      "type": "n8n-nodes-base.set",
      "position": [
        1504,
        -32
      ],
      "parameters": {
        "options": {},
        "assignments": {
          "assignments": [
            {
              "id": "fdd39640-54c5-4ed7-9f37-c8cd4302a212",
              "name": "output",
              "type": "string",
              "value": "={{ $('AI Agent').first().json.output.output }}"
            }
          ]
        }
      },
      "executeOnce": true,
      "typeVersion": 3.4
    },
    {
      "id": "cdc23122-cf49-4e54-922e-4990f5a2a5ee",
      "name": "付箋 9",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1696,
        -64
      ],
      "parameters": {
        "width": 340,
        "height": 304,
        "content": "## 🎨 RESPONSE FORMATTER\n\nExtracts and formats the AI response for the chat interface.\n\n### 📤 Output: \nClean response without metadata\n\n### 💡 Purpose:\nEnsures users only see the actual message, not the underlying structure"
      },
      "typeVersion": 1
    },
    {
      "id": "98430332-8de1-48f9-b883-017e7ee35983",
      "name": "AIエージェント",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [
        144,
        432
      ],
      "parameters": {
        "options": {
          "systemMessage": "# AI Assistant with Long-Term Memory\n\nYou are an AI assistant equipped with a sophisticated long-term memory system. Your RAG_MEMORY tool allows you to recall past conversations, user preferences, and contextual information across sessions.\n\n## Core Capabilities:\n1. **Context Retention**: Remember and reference previous conversations\n2. **User Personalization**: Adapt responses based on learned preferences\n3. **Knowledge Accumulation**: Build upon past interactions\n4. **Intelligent Retrieval**: Access relevant historical context\n\n## Memory Usage Protocol:\n\n### Before Each Response:\n1. Query RAG_MEMORY for relevant past interactions\n2. Analyze retrieved context for applicable information\n3. Integrate historical knowledge into your response\n4. Maintain consistency with previous conversations\n\n### Memory Query Strategies:\n- Use specific keywords from the current conversation\n- Search for user preferences and patterns\n- Look for related topics discussed previously\n- Check for unresolved questions or follow-ups\n\n## Response Guidelines:\n1. **Acknowledge Continuity**: Reference previous conversations when relevant\n2. **Build on History**: Use past context to provide more informed responses\n3. **Maintain Consistency**: Ensure responses align with established facts\n4. **Update Understanding**: Evolve your knowledge based on new information\n\n## Privacy & Ethics:\n- Only reference information from this user's history\n- Respect conversation boundaries\n- Maintain appropriate context separation\n\n## Example Interaction Flow:\n```\nUser: \"What was that book you recommended last week?\"\n1. Query RAG_MEMORY for \"book recommendation\"\n2. Retrieve relevant conversation\n3. Provide specific book title and context\n4. Offer additional related suggestions\n```\n\nRemember: Your memory makes you more than just an AI - you're a continuous conversation partner who learns and grows with each interaction."
        },
        "hasOutputParser": true
      },
      "typeVersion": 2
    },
    {
      "id": "ef4a29b4-68f4-491e-b44b-3345455907a6",
      "name": "付箋 10",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        48,
        -112
      ],
      "parameters": {
        "width": 360,
        "height": 516,
        "content": "## 🧠 INTELLIGENT AI AGENT\n\n### 🎯 Core Features:\n- Long-term memory integration\n- Context-aware responses\n- Tool usage (RAG_MEMORY)\n- Structured output generation\n\n### 📋 System Prompt:\n- Defines memory usage protocol\n- Sets behavioral guidelines\n- Ensures privacy compliance\n\n### ⚡ Performance:\n- Avg response time: 2-3 seconds\n- Memory queries: 1-3 per response\n- Context window: Effectively unlimited\n\n### 💰 Cost:\n- ~$0.15/$0.60 per 1M tokens (in/out)"
      },
      "typeVersion": 1
    },
    {
      "id": "db4e8e6b-0aee-4a93-8a01-bf38b7de9d98",
      "name": "会話の保存",
      "type": "@n8n/n8n-nodes-langchain.vectorStoreQdrant",
      "position": [
        832,
        448
      ],
      "parameters": {
        "mode": "insert",
        "options": {},
        "qdrantCollection": {
          "__rl": true,
          "mode": "list",
          "value": "ltm",
          "cachedResultName": "ltm"
        }
      },
      "credentials": {
        "qdrantApi": {
          "id": "IMqj7iGvb0Ko0nCj",
          "name": "Qdrant - einar.qzz.io"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "13c42bd9-273d-4c9e-9e69-a324963f3f4f",
      "name": "付箋 11",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1248,
        304
      ],
      "parameters": {
        "color": 2,
        "width": 340,
        "height": 496,
        "content": "## 💾 MEMORY STORAGE\n\n### 📥 What gets stored:\n- User input\n- AI response\n- Conversation metadata\n- Session information\n\n### 🔧 Configuration:\n- **Collection**: 'ltm'\n- **Batch size**: 100 (for efficiency)\n\n### 📈 Storage metrics:\n- Avg storage time: 100ms\n- Vector dimensions: 1024\n- Retention: Unlimited*\n\n### ⚠️ Production tip:\nImplement cleanup policies for scalability"
      },
      "typeVersion": 1
    },
    {
      "id": "4237b604-513f-463c-b891-cb5bd4d588a6",
      "name": "GPT-4o-mini (メイン)",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [
        32,
        576
      ],
      "parameters": {
        "model": {
          "__rl": true,
          "mode": "list",
          "value": "gpt-4o-mini",
          "cachedResultName": "gpt-4o-mini"
        },
        "options": {
          "topP": 0.7,
          "temperature": 0.2,
          "presencePenalty": 0.3,
          "frequencyPenalty": 0.6
        }
      },
      "credentials": {
        "openAiApi": {
          "id": "uHrKvsqlQYyImnjO",
          "name": "openai - einarcesar@gmail.com"
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "1a882e59-d391-4923-9c18-68dfe99d6b47",
      "name": "ワークフロー概要",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -944,
        160
      ],
      "parameters": {
        "color": 7,
        "width": 460,
        "height": 972,
        "content": "## 🚀 WORKFLOW OVERVIEW\n\n### Long-Term Memory for AI Assistants\n\nThis workflow implements a sophisticated memory system that allows AI assistants to remember conversations across sessions.\n\n### 🔑 Key Benefits:\n1. **Persistent Context**: No more repeating yourself\n2. **Personalization**: AI learns user preferences\n3. **Cost Efficiency**: Reduces token usage over time\n4. **Scalability**: Handles unlimited conversations\n\n### 📊 Architecture:\n- **Vector Database**: Qdrant for semantic search\n- **LLM**: OpenAI GPT-4o-mini\n- **Embeddings**: text-embedding-3-small\n- **Reranking**: Cohere for accuracy\n\n### 🛠️ Setup Requirements:\n1. OpenAI API key\n2. Qdrant instance (cloud or self-hosted)\n3. Cohere API key (optional)\n4. n8n instance\n\n### 💡 Use Cases:\n- Customer support bots\n- Personal AI assistants\n- Knowledge management systems\n- Educational tutors\n\n### 📈 Performance Metrics:\n- Response time: 2-3 seconds\n- Memory recall: 95%+ accuracy\n- Cost: ~$0.01 per conversation\n\n### 🔗 Resources:\n- [Documentation](https://docs.n8n.io)\n- [Qdrant Setup](https://qdrant.tech)\n- [OpenAI Pricing](https://openai.com/pricing)"
      },
      "typeVersion": 1
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "6b90a41f-8415-4e59-9082-48bf175e4804",
  "connections": {
    "98430332-8de1-48f9-b883-017e7ee35983": {
      "main": [
        [
          {
            "node": "db4e8e6b-0aee-4a93-8a01-bf38b7de9d98",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "9df9f1e4-b067-4ec8-8ee1-1d64f256081a": {
      "ai_tool": [
        [
          {
            "node": "98430332-8de1-48f9-b883-017e7ee35983",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "b86d86d8-8595-4c27-bc9f-45e706d08623": {
      "ai_reranker": [
        [
          {
            "node": "9df9f1e4-b067-4ec8-8ee1-1d64f256081a",
            "type": "ai_reranker",
            "index": 0
          }
        ]
      ]
    },
    "265bbb29-3ae9-49dd-9d77-4a8230af5f3e": {
      "ai_embedding": [
        [
          {
            "node": "db4e8e6b-0aee-4a93-8a01-bf38b7de9d98",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "33b1f48b-9700-4edb-a73b-5889316e7cdf": {
      "ai_languageModel": [
        [
          {
            "node": "866bf74b-dc5b-4bf9-b01d-8fbc2da442c1",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "4237b604-513f-463c-b891-cb5bd4d588a6": {
      "ai_languageModel": [
        [
          {
            "node": "98430332-8de1-48f9-b883-017e7ee35983",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "db4e8e6b-0aee-4a93-8a01-bf38b7de9d98": {
      "main": [
        [
          {
            "node": "6d7eb364-5c8f-4d6a-9ef6-51e3a1fc45bd",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "ae0a96a7-6cd5-4868-aadb-2b91e3e8f448": {
      "ai_document": [
        [
          {
            "node": "db4e8e6b-0aee-4a93-8a01-bf38b7de9d98",
            "type": "ai_document",
            "index": 0
          }
        ]
      ]
    },
    "58538d83-7b62-47ea-a099-143517886719": {
      "ai_embedding": [
        [
          {
            "node": "9df9f1e4-b067-4ec8-8ee1-1d64f256081a",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "866bf74b-dc5b-4bf9-b01d-8fbc2da442c1": {
      "ai_outputParser": [
        [
          {
            "node": "98430332-8de1-48f9-b883-017e7ee35983",
            "type": "ai_outputParser",
            "index": 0
          }
        ]
      ]
    },
    "922bfcdb-14ce-40cc-a3df-e89be2d59635": {
      "main": [
        [
          {
            "node": "98430332-8de1-48f9-b883-017e7ee35983",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "fd60c06d-0c22-40fc-ab62-7f87b4c6f29a": {
      "ai_textSplitter": [
        [
          {
            "node": "ae0a96a7-6cd5-4868-aadb-2b91e3e8f448",
            "type": "ai_textSplitter",
            "index": 0
          }
        ]
      ]
    }
  }
}
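The recursive character text splitter in the JSON above chunks conversation text into 200-character pieces with a 40-character overlap before embedding. The effect of those settings can be sketched with a simple sliding-window splitter (a simplification: the real recursive splitter also prefers to break on separators such as newlines and spaces):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size chunks whose tails overlap, so context
    at a chunk boundary also appears at the head of the next chunk."""
    step = chunk_size - overlap  # advance 160 characters per chunk
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("x" * 500)
print(len(chunks))                         # 3 chunks for 500 characters
print(chunks[1][:40] == chunks[0][-40:])   # True: the 40-char overlap is shared
```

The 20% overlap keeps a sentence that straddles a chunk boundary retrievable from either side, which is why the sticky note calls it "context continuity".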
Frequently Asked Questions

How do I use this workflow?

Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and update the credentials as needed.
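Besides the editor's "Import from JSON" menu, a workflow can also be created programmatically through n8n's public REST API (`POST /api/v1/workflows`, authenticated with an `X-N8N-API-KEY` header). A minimal stdlib-only sketch; the host, API key, and the trimmed-down workflow body are placeholders to be replaced with the full exported JSON:

```python
import json
import urllib.request

N8N_URL = "http://localhost:5678"   # placeholder n8n host
API_KEY = "<your-n8n-api-key>"      # placeholder API key

# Paste the full exported JSON here; only the top-level fields the API
# expects are shown.
workflow = {
    "name": "LLM_TEMPLATE",
    "nodes": [],
    "connections": {},
    "settings": {"executionOrder": "v1"},
}

req = urllib.request.Request(
    f"{N8N_URL}/api/v1/workflows",
    data=json.dumps(workflow).encode("utf-8"),
    headers={"X-N8N-API-KEY": API_KEY, "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```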

What scenarios is this workflow suited for?

Advanced - Engineering, AI RAG (retrieval-augmented generation)

Is this workflow paid?

The workflow itself is completely free. However, third-party services it uses (such as the OpenAI API) may incur separate charges.

Workflow Info
Difficulty: Advanced
Nodes: 25
Categories: 2
Node types: 11
Difficulty description

For advanced users: a complex workflow with 16 or more nodes

External Links
View on n8n.io
