Advanced Multi-Query RAG System Based on Supabase and GPT-5
Expert
This is an automation workflow in the AI RAG / multimodal AI domain with 22 nodes. It mainly uses If, Set, Filter, SplitOut, Aggregate, and other nodes. Build an advanced multi-query RAG system based on Supabase and GPT-5.
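The pattern this workflow implements is: decompose the user question into sub-queries, retrieve chunks per sub-query from the vector store, and keep only chunks whose relevance score exceeds 0.4 before the agent synthesizes an answer. A minimal Python sketch of that loop, where `retrieve` is a hypothetical stand-in for the Supabase vector-store lookup and only the 0.4 threshold mirrors the workflow's Filter node:

```python
# Sketch of the multi-query RAG pattern used by this workflow.
# `retrieve` is a hypothetical stand-in for the Supabase vector store;
# the 0.4 relevance threshold mirrors the Filter node in the sub-workflow.

def filter_chunks(chunks, threshold=0.4):
    """Keep only chunks whose retrieval relevance score exceeds the threshold."""
    return [c for c in chunks if c["score"] > threshold]

def answer_context(sub_queries, retrieve):
    """Retrieve per sub-query, filter by score, and collect context per question."""
    context = []
    for q in sub_queries:
        kept = filter_chunks(retrieve(q))
        if kept:
            context.append({"query": q, "chunks": kept})
        else:
            # Mirrors the workflow's "no chunk match" branch.
            context.append({"query": q, "chunks": [],
                            "note": "No chunks reached the relevance threshold."})
    return context
```

In the actual workflow the agent's tool call supplies the sub-queries and the aggregated context is handed back to the agent for the Think/synthesis step.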
Prerequisites
- OpenAI API key
- Supabase URL and API key
Nodes used (22)
Export workflow
Copy the following JSON configuration and import it into n8n
{
"nodes": [
{
"id": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"name": "KI-Agent",
"type": "@n8n/n8n-nodes-langchain.agent",
"position": [
208,
-704
],
"parameters": {
"options": {
"systemMessage": "=You are a helpful assistant that answers based on a biology course.\n\nFor that, you always start by calling the tool \"Query knowledge base\" to send an array of 1 to 5 questions that are relevant to ask to the RAG knowledge base that contains all the content of the course and get as an output all chunks that seem to help to craft the final answer. The more the user query is complex, the more you will break it down into sub-queries (up to 5).\n\nFrom there, use the Think tool to critically analyse the initial user query and the content you've retrieved from the knowledge retrieval tool and reason to prepare the best answer possible, challenge the content to be sure that you actually have the right information to be able to respond.\n\nOnly answer based on the course content that you get from using the tool, if you receive any question outside that scope, redirect the conversation, if you don't have the right information to answer, be transparent and say so - don't try to reply anyway with general knowledge.",
"enableStreaming": false
}
},
"typeVersion": 2.2
},
{
"id": "4df46be3-c8b7-4f88-9af2-a644ca1bab2d",
"name": "Bei Empfang einer Chat-Nachricht",
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"position": [
-256,
-704
],
"webhookId": "19fb162f-87ff-454f-96b2-cce0aaa6e22b",
"parameters": {
"public": true,
"options": {
"responseMode": "lastNode"
}
},
"typeVersion": 1.3
},
{
"id": "5f07d924-7727-478a-abf6-eaf11543e19b",
"name": "OpenAI Chat-Modell",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"position": [
48,
-480
],
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-5-mini",
"cachedResultName": "gpt-5-mini"
},
"options": {}
},
"credentials": {
"openAiApi": {
"id": "dMiSy27YCK6c6rra",
"name": "Duv's OpenAI"
}
},
"typeVersion": 1.2
},
{
"id": "dfc7c805-79cc-4326-8edb-f53a88af285d",
"name": "Einfacher Speicher",
"type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"position": [
240,
-480
],
"parameters": {
"contextWindowLength": 8
},
"typeVersion": 1.3
},
{
"id": "7ade6fc1-84cc-48b2-bb20-672f0c5b4c27",
"name": "Aufteilung",
"type": "n8n-nodes-base.splitOut",
"position": [
-160,
96
],
"parameters": {
"options": {},
"fieldToSplitOut": "queries"
},
"typeVersion": 1
},
{
"id": "f4c92e45-e037-4477-ac50-1d6096fd902e",
"name": "Chunks aggregieren",
"type": "n8n-nodes-base.aggregate",
"position": [
1312,
0
],
"parameters": {
"options": {},
"aggregate": "aggregateAllItemData",
"destinationFieldName": "All chunks for this question"
},
"typeVersion": 1
},
{
"id": "cb5d42fe-9e27-4117-8a1c-9a78da8e770f",
"name": "Elemente aggregieren",
"type": "n8n-nodes-base.aggregate",
"position": [
352,
-208
],
"parameters": {
"options": {},
"aggregate": "aggregateAllItemData",
"destinationFieldName": "Knowledge base retrieval"
},
"typeVersion": 1
},
{
"id": "4e7f3e28-c316-4e21-b505-a211c1b23841",
"name": "Chunks vorhanden?",
"type": "n8n-nodes-base.if",
"position": [
1088,
96
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "66402fe0-918e-4268-8928-f4e83cbb3c4f",
"operator": {
"type": "string",
"operation": "exists",
"singleValue": true
},
"leftValue": "={{ $json['Chunk content'] }}",
"rightValue": ""
}
]
}
},
"typeVersion": 2.2
},
{
"id": "26d04029-da7f-4292-802a-4c233caef219",
"name": "RAG-Ausgabe bereinigen",
"type": "n8n-nodes-base.set",
"position": [
640,
96
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "1eddb72f-9c99-465b-8f94-0ff0f686b542",
"name": "Chunk content",
"type": "string",
"value": "={{ $json.document.pageContent }}"
},
{
"id": "09fe6c91-2cce-40ff-9f8c-86a6857f0772",
"name": "Chunk metadata",
"type": "object",
"value": "={\n \"Resource chapter name\": \"{{ $json.document.metadata['Chapter name'] }}\",\n \"Retrieval relevance score\": {{ $json.score.round(2) }}\n}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "545514d9-107e-4af9-b407-7cdfc3770e3f",
"name": "Über Elemente iterieren1",
"type": "n8n-nodes-base.splitInBatches",
"position": [
64,
96
],
"parameters": {
"options": {}
},
"typeVersion": 3
},
{
"id": "ebdbaea5-405f-4a58-b0b4-198154344329",
"name": "RAG-Sub-Workflow",
"type": "n8n-nodes-base.executeWorkflowTrigger",
"position": [
-384,
96
],
"parameters": {
"workflowInputs": {
"values": [
{
"name": "queries",
"type": "array"
}
]
}
},
"typeVersion": 1.1
},
{
"id": "d2362d6f-a6a0-4651-9f2b-827b8f7eb1c1",
"name": "Wissensdatenbank abfragen",
"type": "@n8n/n8n-nodes-langchain.toolWorkflow",
"position": [
432,
-480
],
"parameters": {
"workflowId": {
"__rl": true,
"mode": "list",
"value": "c9FlK6mLuWAwqLsP",
"cachedResultName": "TEMPLATE RAG with Supabase and GPT5"
},
"description": "Call this tool to get content about the biology course before crafting your final user answer. Send an array of queries to the knowledge base.",
"workflowInputs": {
"value": {
"queries": "={{ $fromAI('queries', `The array of queries (between 1 and 5) that you've planned to ask to the RAG knowledge base of the course. \nUse an Array format even if there's only one question - this is necessary to not break the workflow format!\n\nExample array output: \n\n[\n {\n \"query\": \"What is Lorem ipsum sir amet?\"\n },\n {\n \"query\": \"How to lorem ipsum dolor sir lorem when lorem ipsum?'?\"\n },\n {\n \"query\": \"Lorem ipsum lorem ipsum dolor sir lorem when lorem ipsum??\"\n }\n]\n`, 'json') }}"
},
"schema": [
{
"id": "queries",
"type": "array",
"display": true,
"removed": false,
"required": false,
"displayName": "queries",
"defaultMatch": false,
"canBeUsedToMatch": true
}
],
"mappingMode": "defineBelow",
"matchingColumns": [
"queries"
],
"attemptToConvertTypes": false,
"convertFieldsToString": false
}
},
"typeVersion": 2.2
},
{
"id": "db958756-f1a2-4162-afcf-2b6a0f936200",
"name": "Supabase Vektorspeicher1",
"type": "@n8n/n8n-nodes-langchain.vectorStoreSupabase",
"position": [
288,
96
],
"parameters": {
"mode": "load",
"prompt": "={{ $json.query }}",
"options": {
"queryName": "match_documents"
},
"tableName": {
"__rl": true,
"mode": "list",
"value": "documents",
"cachedResultName": "documents"
}
},
"credentials": {
"supabaseApi": {
"id": "WuxmgfzPKmocqt0M",
"name": "Supabase account 2"
}
},
"typeVersion": 1.3
},
{
"id": "478c2c07-ec28-427e-b33a-85a0f72c576f",
"name": "Embeddings OpenAI1",
"type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
"position": [
368,
320
],
"parameters": {
"options": {}
},
"credentials": {
"openAiApi": {
"id": "G6pwE0s12sGlHRe3",
"name": "1 - Plan A's OpenAI"
}
},
"typeVersion": 1.2
},
{
"id": "da138097-8c28-4662-b916-8de388894330",
"name": "Notizzettel1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-480,
-832
],
"parameters": {
"color": 5,
"width": 1472,
"height": 528,
"content": "# AI agent"
},
"typeVersion": 1
},
{
"id": "93a8e212-2a8f-4e9f-8956-b1cca02da212",
"name": "Notizzettel2",
"type": "n8n-nodes-base.stickyNote",
"position": [
-480,
-272
],
"parameters": {
"color": 4,
"width": 2320,
"height": 768,
"content": "# Sub-workflow, tool for agent\n"
},
"typeVersion": 1
},
{
"id": "21ade708-3f0e-4419-9edb-bc57fb543963",
"name": "Notizzettel3",
"type": "n8n-nodes-base.stickyNote",
"position": [
816,
-80
],
"parameters": {
"color": 7,
"width": 688,
"height": 432,
"content": "## Filtering system\nOnly keeping chunks that have a score >0.4"
},
"typeVersion": 1
},
{
"id": "ce4ce8ce-0f12-4dc6-ab24-585a81d71ca5",
"name": "Analysieren",
"type": "@n8n/n8n-nodes-langchain.toolThink",
"position": [
608,
-480
],
"parameters": {
"description": "Use this tool after you got the output of the knowledge retrieval tool to critically analyse the initial user query and the content you've retrieved from the knowledge retrieval tool and reason to prepare the best answer possible, challenge the content to be sure that you actually have the right information to be able to respond.\n\nBe very token efficient when using this tool, write 50 words max which is enough to reason."
},
"typeVersion": 1.1
},
{
"id": "f1d619f3-42fb-4f48-83b3-3c0d1c43d574",
"name": "Notizzettel",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1024,
-832
],
"parameters": {
"width": 512,
"height": 784,
"content": "# Advanced Multi-Query RAG Agent\n\nThis template demonstrates a sophisticated RAG (Retrieval-Augmented Generation) pattern for building high-quality AI agents. It's designed to overcome the limitations of a basic RAG setup.\n\n## How it works\n\nInstead of a simple query, this agent uses a more intelligent, four-step process:\n1. **Decompose:** It breaks complex questions into multiple, simpler sub-queries.\n2. **Retrieve:** It sends these queries to a smart sub-workflow that fetches data from your vector store.\n3. **Filter:** The sub-workflow filters out any retrieved information that doesn't meet a minimum relevance score, ensuring high-quality context.\n4. **Synthesize:** The agent uses a \"Think\" tool to reason over the filtered information before crafting a final, comprehensive answer.\n\n## How to use\n\n1. **Connect your accounts:** You need to connect **Supabase** and **OpenAI** in both this main workflow and in the \"RAG sub-workflow\".\n2. **Customize the agent:** Edit the **AI Agent's system prompt** to match your specific knowledge base (e.g., \"You are a helpful assistant that answers based on our company's internal documents.\").\n3. **Adjust the relevance filter:** In the sub-workflow, you can change the similarity score in the **Filter** node (default is >0.4) to control the quality of the retrieved information."
},
"typeVersion": 1
},
{
"id": "b26b291d-9f95-4012-b830-cd07a9b8015f",
"name": "Score über 0.4 behalten",
"type": "n8n-nodes-base.filter",
"position": [
864,
96
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "9a3f844e-7d19-4631-9876-140118e61b6b",
"operator": {
"type": "number",
"operation": "gt"
},
"leftValue": "={{ $json['Chunk metadata']['Retrieval relevance score'] }}",
"rightValue": 0.4
}
]
}
},
"typeVersion": 2.2,
"alwaysOutputData": true
},
{
"id": "14d3efaf-dc35-491f-91df-f085829812ee",
"name": "Keine Chunk-Übereinstimmung melden",
"type": "n8n-nodes-base.set",
"position": [
1312,
192
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "245fe8f8-b217-4626-bc4d-84f53e47fbbf",
"name": "Retrieval output",
"type": "string",
"value": "=No chunks reached the relevance threshold, the knowledge base was unable to provide information that would be helpful to answer this question."
}
]
}
},
"typeVersion": 3.4
},
{
"id": "e9eb2328-e9e2-4138-9d9e-468359a5e49d",
"name": "Schleifenausgabe vorbereiten",
"type": "n8n-nodes-base.set",
"position": [
1568,
192
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "838f21a4-f7bc-414e-83da-99fbaca4fcca",
"name": "Query to the knowledge base",
"type": "string",
"value": "={{ $('Über Elemente iterieren1').first().json.query }}"
},
{
"id": "10a89085-1937-459f-9721-8715cd51ad39",
"name": "Chunks returned",
"type": "string",
"value": "={{ JSON.stringify($json, null, 2) }}"
}
]
}
},
"typeVersion": 3.4
}
],
"connections": {
"ce4ce8ce-0f12-4dc6-ab24-585a81d71ca5": {
"ai_tool": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "ai_tool",
"index": 0
}
]
]
},
"14e54443-1722-476a-9f7a-44be7bd2b2bf": {
"main": [
[]
]
},
"7ade6fc1-84cc-48b2-bb20-672f0c5b4c27": {
"main": [
[
{
"node": "545514d9-107e-4af9-b407-7cdfc3770e3f",
"type": "main",
"index": 0
}
]
]
},
"4e7f3e28-c316-4e21-b505-a211c1b23841": {
"main": [
[
{
"node": "f4c92e45-e037-4477-ac50-1d6096fd902e",
"type": "main",
"index": 0
}
],
[
{
"node": "14d3efaf-dc35-491f-91df-f085829812ee",
"type": "main",
"index": 0
}
]
]
},
"dfc7c805-79cc-4326-8edb-f53a88af285d": {
"ai_memory": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "ai_memory",
"index": 0
}
]
]
},
"f4c92e45-e037-4477-ac50-1d6096fd902e": {
"main": [
[
{
"node": "e9eb2328-e9e2-4138-9d9e-468359a5e49d",
"type": "main",
"index": 0
}
]
]
},
"26d04029-da7f-4292-802a-4c233caef219": {
"main": [
[
{
"node": "b26b291d-9f95-4012-b830-cd07a9b8015f",
"type": "main",
"index": 0
}
]
]
},
"545514d9-107e-4af9-b407-7cdfc3770e3f": {
"main": [
[
{
"node": "cb5d42fe-9e27-4117-8a1c-9a78da8e770f",
"type": "main",
"index": 0
}
],
[
{
"node": "db958756-f1a2-4162-afcf-2b6a0f936200",
"type": "main",
"index": 0
}
]
]
},
"ebdbaea5-405f-4a58-b0b4-198154344329": {
"main": [
[
{
"node": "7ade6fc1-84cc-48b2-bb20-672f0c5b4c27",
"type": "main",
"index": 0
}
]
]
},
"5f07d924-7727-478a-abf6-eaf11543e19b": {
"ai_languageModel": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"478c2c07-ec28-427e-b33a-85a0f72c576f": {
"ai_embedding": [
[
{
"node": "db958756-f1a2-4162-afcf-2b6a0f936200",
"type": "ai_embedding",
"index": 0
}
]
]
},
"14d3efaf-dc35-491f-91df-f085829812ee": {
"main": [
[
{
"node": "e9eb2328-e9e2-4138-9d9e-468359a5e49d",
"type": "main",
"index": 0
}
]
]
},
"b26b291d-9f95-4012-b830-cd07a9b8015f": {
"main": [
[
{
"node": "4e7f3e28-c316-4e21-b505-a211c1b23841",
"type": "main",
"index": 0
}
]
]
},
"e9eb2328-e9e2-4138-9d9e-468359a5e49d": {
"main": [
[
{
"node": "545514d9-107e-4af9-b407-7cdfc3770e3f",
"type": "main",
"index": 0
}
]
]
},
"d2362d6f-a6a0-4651-9f2b-827b8f7eb1c1": {
"ai_tool": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "ai_tool",
"index": 0
}
]
]
},
"db958756-f1a2-4162-afcf-2b6a0f936200": {
"main": [
[
{
"node": "26d04029-da7f-4292-802a-4c233caef219",
"type": "main",
"index": 0
}
]
]
},
"4df46be3-c8b7-4f88-9af2-a644ca1bab2d": {
"main": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "main",
"index": 0
}
]
]
}
}
}
Frequently asked questions
How do I use this workflow?
Copy the JSON above, create a new workflow in your n8n instance, and choose "Import from JSON". Paste the configuration and adjust the credentials as needed.
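If you script the import instead of pasting it, a quick sanity check of the copied JSON can catch truncated copies before they reach n8n. A minimal sketch (validation only; the actual import still happens in the n8n UI or via n8n's REST API). Note that this particular export keys its `connections` object by node id, while many n8n exports key it by node name, so adjust the lookup if yours differs:

```python
import json

def validate_workflow(wf: dict) -> list:
    """Return a list of problems found in an n8n workflow export dict."""
    problems = []
    nodes = wf.get("nodes", [])
    ids = {n.get("id") for n in nodes}
    if not nodes:
        problems.append("no nodes found")
    # Every connection source and target must refer to an existing node.
    # This export keys connections by node id; adjust if yours uses names.
    for source, kinds in wf.get("connections", {}).items():
        if source not in ids:
            problems.append(f"connection source {source!r} is not a node")
        for outputs in kinds.values():
            for branch in outputs:
                for link in branch:
                    if link.get("node") not in ids:
                        problems.append(f"target {link.get('node')!r} is not a node")
    return problems

# Usage: load the copied JSON and check it before importing.
# wf = json.loads(open("workflow.json").read())
# assert validate_workflow(wf) == []
```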
Which scenarios is this workflow suited for?
Expert - AI RAG, multimodal AI
Does it cost anything?
The workflow itself is free. Note, however, that third-party services used in the workflow (such as the OpenAI API) may be billed.
Workflow information
Difficulty: Expert
Number of nodes: 22
Categories: 2
Node types: 16