Build an advanced multi-query RAG system with Supabase and GPT-5
Advanced
This is an AI RAG / Multimodal AI automation workflow containing 22 nodes. It mainly uses nodes such as If, Set, Filter, SplitOut, and Aggregate to build an advanced multi-query RAG system based on Supabase and GPT-5.
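At its core, the agent breaks the user question into 1 to 5 sub-queries, retrieves chunks from a Supabase vector store for each sub-query, keeps only chunks whose relevance score is above 0.4, and aggregates everything before answering (these values come from the workflow's system prompt and Filter node). The Python sketch below mirrors that loop; retrieve_chunks is a hypothetical stand-in for the Supabase similarity search performed by the sub-workflow and returns canned data so the sketch runs on its own.

from typing import TypedDict

class Chunk(TypedDict):
    content: str
    chapter: str
    score: float

def retrieve_chunks(query: str) -> list[Chunk]:
    # Stand-in for the "Supabase Vector Store1" similarity search; canned data
    # here so the sketch runs without any external service.
    return [
        {"content": f"Course excerpt relevant to: {query}", "chapter": "Chapter 2", "score": 0.71},
        {"content": "Loosely related excerpt", "chapter": "Chapter 5", "score": 0.18},
    ]

def multi_query_retrieval(sub_queries: list[str], min_score: float = 0.4) -> dict:
    results: dict[str, object] = {}
    for query in sub_queries:                                  # loop over the agent's sub-queries
        chunks = retrieve_chunks(query)                        # vector search per sub-query
        kept = [c for c in chunks if c["score"] > min_score]   # "Keep score > 0.4" filter
        results[query] = kept or (                             # "flag no chunk match" branch
            "No chunks reached the relevance threshold."
        )
    return results                                             # aggregated context handed back to the agent

print(multi_query_retrieval([
    "What is photosynthesis?",
    "Where in the cell does photosynthesis take place?",
]))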
Prerequisites
- OpenAI API key
- Supabase URL and API key (a quick way to check both is sketched below)
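Before importing, you may want to confirm that both credentials work and that your Supabase project already exposes the documents table and match_documents function this workflow's vector store node queries. The following Python sketch assumes the common LangChain-style Supabase vector setup; the embedding model, the match_documents parameter names, and the returned column names (content, similarity) may differ in your own project, and the embedding model must match the one used to index documents.

import os

from openai import OpenAI            # pip install openai
from supabase import create_client   # pip install supabase

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_API_KEY"])

# Embed a test query (assumed model; use whatever model indexed your documents table).
embedding = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input="What is photosynthesis?",
).data[0].embedding

# Call the similarity-search function the "Supabase Vector Store1" node relies on.
matches = supabase.rpc(
    "match_documents",
    {"query_embedding": embedding, "match_count": 4, "filter": {}},
).execute()

for row in matches.data:
    print(round(row["similarity"], 2), row["content"][:80])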
Nodes used (22)
Workflow overview
Export the workflow
Copy the following JSON configuration into n8n to import and use this workflow.
{
"nodes": [
{
"id": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"name": "Agent IA",
"type": "@n8n/n8n-nodes-langchain.agent",
"position": [
208,
-704
],
"parameters": {
"options": {
"systemMessage": "=You are a helpful assistant that answers based on a biology course.\n\nFor that, you always start by calling the tool \"Query knowledge base\" to send an array of 1 to 5 questions that are relevant to ask to the RAG knowledge base that contains all the content of the course and get as an output all chunks that seem to help to craft the final answer. The more the user query is complex, the more you will break it down into sub-queries (up to 5).\n\nFrom there, use the Think tool to critically analyse the initial user query and the content you've retrieved from the knowledge retrieval tool and reason to prepare the best answer possible, challenge the content to be sure that you actually have the right information to be able to respond.\n\nOnly answer based on the course content that you get from using the tool, if you receive any question outside that scope, redirect the conversation, if you don't have the right information to answer, be transparent and say so - don't try to reply anyway with general knowledge.",
"enableStreaming": false
}
},
"typeVersion": 2.2
},
{
"id": "4df46be3-c8b7-4f88-9af2-a644ca1bab2d",
"name": "À la réception d'un message de chat",
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"position": [
-256,
-704
],
"webhookId": "19fb162f-87ff-454f-96b2-cce0aaa6e22b",
"parameters": {
"public": true,
"options": {
"responseMode": "lastNode"
}
},
"typeVersion": 1.3
},
{
"id": "5f07d924-7727-478a-abf6-eaf11543e19b",
"name": "OpenAI Modèle de Chat",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"position": [
48,
-480
],
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-5-mini",
"cachedResultName": "gpt-5-mini"
},
"options": {}
},
"credentials": {
"openAiApi": {
"id": "dMiSy27YCK6c6rra",
"name": "Duv's OpenAI"
}
},
"typeVersion": 1.2
},
{
"id": "dfc7c805-79cc-4326-8edb-f53a88af285d",
"name": "Mémoire simple",
"type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"position": [
240,
-480
],
"parameters": {
"contextWindowLength": 8
},
"typeVersion": 1.3
},
{
"id": "7ade6fc1-84cc-48b2-bb20-672f0c5b4c27",
"name": "Diviser",
"type": "n8n-nodes-base.splitOut",
"position": [
-160,
96
],
"parameters": {
"options": {},
"fieldToSplitOut": "queries"
},
"typeVersion": 1
},
{
"id": "f4c92e45-e037-4477-ac50-1d6096fd902e",
"name": "Agréger les chunks",
"type": "n8n-nodes-base.aggregate",
"position": [
1312,
0
],
"parameters": {
"options": {},
"aggregate": "aggregateAllItemData",
"destinationFieldName": "All chunks for this question"
},
"typeVersion": 1
},
{
"id": "cb5d42fe-9e27-4117-8a1c-9a78da8e770f",
"name": "Agréger les éléments",
"type": "n8n-nodes-base.aggregate",
"position": [
352,
-208
],
"parameters": {
"options": {},
"aggregate": "aggregateAllItemData",
"destinationFieldName": "Knowledge base retrieval"
},
"typeVersion": 1
},
{
"id": "4e7f3e28-c316-4e21-b505-a211c1b23841",
"name": "Des chunks disponibles ?",
"type": "n8n-nodes-base.if",
"position": [
1088,
96
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "66402fe0-918e-4268-8928-f4e83cbb3c4f",
"operator": {
"type": "string",
"operation": "exists",
"singleValue": true
},
"leftValue": "={{ $json['Chunk content'] }}",
"rightValue": ""
}
]
}
},
"typeVersion": 2.2
},
{
"id": "26d04029-da7f-4292-802a-4c233caef219",
"name": "Nettoyer la sortie RAG",
"type": "n8n-nodes-base.set",
"position": [
640,
96
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "1eddb72f-9c99-465b-8f94-0ff0f686b542",
"name": "Chunk content",
"type": "string",
"value": "={{ $json.document.pageContent }}"
},
{
"id": "09fe6c91-2cce-40ff-9f8c-86a6857f0772",
"name": "=Chunk metadata",
"type": "object",
"value": "={\n \"Resource chapter name\": \"{{ $json.document.metadata['Chapter name'] }}\",\n \"Retrieval relevance score\": {{ $json.score.round(2) }}\n}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "545514d9-107e-4af9-b407-7cdfc3770e3f",
"name": "Boucler sur les éléments 1",
"type": "n8n-nodes-base.splitInBatches",
"position": [
64,
96
],
"parameters": {
"options": {}
},
"typeVersion": 3
},
{
"id": "ebdbaea5-405f-4a58-b0b4-198154344329",
"name": "Sous-workflow RAG",
"type": "n8n-nodes-base.executeWorkflowTrigger",
"position": [
-384,
96
],
"parameters": {
"workflowInputs": {
"values": [
{
"name": "queries",
"type": "array"
}
]
}
},
"typeVersion": 1.1
},
{
"id": "d2362d6f-a6a0-4651-9f2b-827b8f7eb1c1",
"name": "Interroger la base de connaissances",
"type": "@n8n/n8n-nodes-langchain.toolWorkflow",
"position": [
432,
-480
],
"parameters": {
"workflowId": {
"__rl": true,
"mode": "list",
"value": "c9FlK6mLuWAwqLsP",
"cachedResultName": "TEMPLATE RAG with Supabase and GPT5"
},
"description": "Call this tool to get content about the biology course before crafting your final user answer. Send an array of queries to the knowledge base.",
"workflowInputs": {
"value": {
"queries": "={{ $fromAI('queries', `The array of queries (between 1 and 5) that you've planned to ask to the RAG knowledge base of the course. \nUse an Array format even if there's only one question - this is necessary to not break the workflow format!\n\nExample array output: \n\n[\n {\n \"query\": \"What is Lorem ipsum sir amet?\"\n },\n {\n \"query\": \"How to lorem ipsum dolor sir lorem when lorem ipsum?'?\"\n },\n {\n \"query\": \"Lorem ipsum lorem ipsum dolor sir lorem when lorem ipsum??\"\n }\n]\n`, 'json') }}"
},
"schema": [
{
"id": "queries",
"type": "array",
"display": true,
"removed": false,
"required": false,
"displayName": "queries",
"defaultMatch": false,
"canBeUsedToMatch": true
}
],
"mappingMode": "defineBelow",
"matchingColumns": [
"queries"
],
"attemptToConvertTypes": false,
"convertFieldsToString": false
}
},
"typeVersion": 2.2
},
{
"id": "db958756-f1a2-4162-afcf-2b6a0f936200",
"name": "Supabase Vector Store1",
"type": "@n8n/n8n-nodes-langchain.vectorStoreSupabase",
"position": [
288,
96
],
"parameters": {
"mode": "load",
"prompt": "={{ $json.query }}",
"options": {
"queryName": "match_documents"
},
"tableName": {
"__rl": true,
"mode": "list",
"value": "documents",
"cachedResultName": "documents"
}
},
"credentials": {
"supabaseApi": {
"id": "WuxmgfzPKmocqt0M",
"name": "Supabase account 2"
}
},
"typeVersion": 1.3
},
{
"id": "478c2c07-ec28-427e-b33a-85a0f72c576f",
"name": "Embeddings OpenAI1",
"type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
"position": [
368,
320
],
"parameters": {
"options": {}
},
"credentials": {
"openAiApi": {
"id": "G6pwE0s12sGlHRe3",
"name": "1 - Plan A's OpenAI"
}
},
"typeVersion": 1.2
},
{
"id": "da138097-8c28-4662-b916-8de388894330",
"name": "Note adhésive1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-480,
-832
],
"parameters": {
"color": 5,
"width": 1472,
"height": 528,
"content": "# AI agent"
},
"typeVersion": 1
},
{
"id": "93a8e212-2a8f-4e9f-8956-b1cca02da212",
"name": "Note adhésive2",
"type": "n8n-nodes-base.stickyNote",
"position": [
-480,
-272
],
"parameters": {
"color": 4,
"width": 2320,
"height": 768,
"content": "# Sub-workflow, tool for agent\n"
},
"typeVersion": 1
},
{
"id": "21ade708-3f0e-4419-9edb-bc57fb543963",
"name": "Note adhésive3",
"type": "n8n-nodes-base.stickyNote",
"position": [
816,
-80
],
"parameters": {
"color": 7,
"width": 688,
"height": 432,
"content": "## Filtering system\nOnly keeping chunks that have a score >0.4"
},
"typeVersion": 1
},
{
"id": "ce4ce8ce-0f12-4dc6-ab24-585a81d71ca5",
"name": "Réfléchir",
"type": "@n8n/n8n-nodes-langchain.toolThink",
"position": [
608,
-480
],
"parameters": {
"description": "Use this tool after you got the output of the knowledge retrieval tool to critically analyse the initial user query and the content you've retrieved from the knowledge retrieval tool and reason to prepare the best answer possible, challenge the content to be sure that you actually have the right information to be able to respond.\n\nBe very token efficient when using this tool, write 50 words max which is enough to reason."
},
"typeVersion": 1.1
},
{
"id": "f1d619f3-42fb-4f48-83b3-3c0d1c43d574",
"name": "Note adhésive",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1024,
-832
],
"parameters": {
"width": 512,
"height": 784,
"content": "# Advanced Multi-Query RAG Agent\n\nThis template demonstrates a sophisticated RAG (Retrieval-Augmented Generation) pattern for building high-quality AI agents. It's designed to overcome the limitations of a basic RAG setup.\n\n## How it works\n\nInstead of a simple query, this agent uses a more intelligent, four-step process:\n1. **Decompose:** It breaks complex questions into multiple, simpler sub-queries.\n2. **Retrieve:** It sends these queries to a smart sub-workflow that fetches data from your vector store.\n3. **Filter:** The sub-workflow filters out any retrieved information that doesn't meet a minimum relevance score, ensuring high-quality context.\n4. **Synthesize:** The agent uses a \"Think\" tool to reason over the filtered information before crafting a final, comprehensive answer.\n\n## How to use\n\n1. **Connect your accounts:** You need to connect **Supabase** and **OpenAI** in both this main workflow and in the \"RAG sub-workflow\".\n2. **Customize the agent:** Edit the **AI Agent's system prompt** to match your specific knowledge base (e.g., \"You are a helpful assistant that answers based on our company's internal documents.\").\n3. **Adjust the relevance filter:** In the sub-workflow, you can change the similarity score in the **Filter** node (default is >0.4) to control the quality of the retrieved information."
},
"typeVersion": 1
},
{
"id": "b26b291d-9f95-4012-b830-cd07a9b8015f",
"name": "Conserver score > 0.4",
"type": "n8n-nodes-base.filter",
"position": [
864,
96
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "9a3f844e-7d19-4631-9876-140118e61b6b",
"operator": {
"type": "number",
"operation": "gt"
},
"leftValue": "={{ $json['Chunk metadata']['Retrieval relevance score'] }}",
"rightValue": 0.4
}
]
}
},
"typeVersion": 2.2,
"alwaysOutputData": true
},
{
"id": "14d3efaf-dc35-491f-91df-f085829812ee",
"name": "Indiquer aucune correspondance de chunk",
"type": "n8n-nodes-base.set",
"position": [
1312,
192
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "245fe8f8-b217-4626-bc4d-84f53e47fbbf",
"name": "Retrieval output",
"type": "string",
"value": "=No chunks reached the relevance threshold, the knowledge base was unable to provide information that would be helpful to answer this question."
}
]
}
},
"typeVersion": 3.4
},
{
"id": "e9eb2328-e9e2-4138-9d9e-468359a5e49d",
"name": "Préparer la sortie de boucle",
"type": "n8n-nodes-base.set",
"position": [
1568,
192
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "838f21a4-f7bc-414e-83da-99fbaca4fcca",
"name": "Query to the knowledge base",
"type": "string",
"value": "={{ $('Loop Over Items1').first().json.query }}"
},
{
"id": "10a89085-1937-459f-9721-8715cd51ad39",
"name": "Chunks returned",
"type": "string",
"value": "={{ JSON.stringify($json, null, 2) }}"
}
]
}
},
"typeVersion": 3.4
}
],
"connections": {
"ce4ce8ce-0f12-4dc6-ab24-585a81d71ca5": {
"ai_tool": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "ai_tool",
"index": 0
}
]
]
},
"14e54443-1722-476a-9f7a-44be7bd2b2bf": {
"main": [
[]
]
},
"7ade6fc1-84cc-48b2-bb20-672f0c5b4c27": {
"main": [
[
{
"node": "545514d9-107e-4af9-b407-7cdfc3770e3f",
"type": "main",
"index": 0
}
]
]
},
"4e7f3e28-c316-4e21-b505-a211c1b23841": {
"main": [
[
{
"node": "f4c92e45-e037-4477-ac50-1d6096fd902e",
"type": "main",
"index": 0
}
],
[
{
"node": "14d3efaf-dc35-491f-91df-f085829812ee",
"type": "main",
"index": 0
}
]
]
},
"dfc7c805-79cc-4326-8edb-f53a88af285d": {
"ai_memory": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "ai_memory",
"index": 0
}
]
]
},
"f4c92e45-e037-4477-ac50-1d6096fd902e": {
"main": [
[
{
"node": "e9eb2328-e9e2-4138-9d9e-468359a5e49d",
"type": "main",
"index": 0
}
]
]
},
"26d04029-da7f-4292-802a-4c233caef219": {
"main": [
[
{
"node": "b26b291d-9f95-4012-b830-cd07a9b8015f",
"type": "main",
"index": 0
}
]
]
},
"545514d9-107e-4af9-b407-7cdfc3770e3f": {
"main": [
[
{
"node": "cb5d42fe-9e27-4117-8a1c-9a78da8e770f",
"type": "main",
"index": 0
}
],
[
{
"node": "db958756-f1a2-4162-afcf-2b6a0f936200",
"type": "main",
"index": 0
}
]
]
},
"ebdbaea5-405f-4a58-b0b4-198154344329": {
"main": [
[
{
"node": "7ade6fc1-84cc-48b2-bb20-672f0c5b4c27",
"type": "main",
"index": 0
}
]
]
},
"5f07d924-7727-478a-abf6-eaf11543e19b": {
"ai_languageModel": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"478c2c07-ec28-427e-b33a-85a0f72c576f": {
"ai_embedding": [
[
{
"node": "db958756-f1a2-4162-afcf-2b6a0f936200",
"type": "ai_embedding",
"index": 0
}
]
]
},
"14d3efaf-dc35-491f-91df-f085829812ee": {
"main": [
[
{
"node": "e9eb2328-e9e2-4138-9d9e-468359a5e49d",
"type": "main",
"index": 0
}
]
]
},
"b26b291d-9f95-4012-b830-cd07a9b8015f": {
"main": [
[
{
"node": "4e7f3e28-c316-4e21-b505-a211c1b23841",
"type": "main",
"index": 0
}
]
]
},
"e9eb2328-e9e2-4138-9d9e-468359a5e49d": {
"main": [
[
{
"node": "545514d9-107e-4af9-b407-7cdfc3770e3f",
"type": "main",
"index": 0
}
]
]
},
"d2362d6f-a6a0-4651-9f2b-827b8f7eb1c1": {
"ai_tool": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "ai_tool",
"index": 0
}
]
]
},
"db958756-f1a2-4162-afcf-2b6a0f936200": {
"main": [
[
{
"node": "26d04029-da7f-4292-802a-4c233caef219",
"type": "main",
"index": 0
}
]
]
},
"4df46be3-c8b7-4f88-9af2-a644ca1bab2d": {
"main": [
[
{
"node": "14e54443-1722-476a-9f7a-44be7bd2b2bf",
"type": "main",
"index": 0
}
]
]
}
}
}
Frequently asked questions
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and adjust the credential settings to your needs.
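If you prefer to script the import instead of pasting into the editor, n8n also exposes a public REST API for creating workflows. A minimal sketch, assuming the JSON above has been saved locally as multi_query_rag.json and an API key has been created under Settings > n8n API; exact required fields can vary between n8n versions, and credentials still need to be re-selected in the editor afterwards.

import json

import requests  # pip install requests

N8N_URL = "https://your-n8n-instance.example.com"  # hypothetical instance URL
API_KEY = "YOUR_N8N_API_KEY"                        # personal API key from Settings > n8n API

# The JSON export shown above, saved locally (hypothetical filename).
with open("multi_query_rag.json", encoding="utf-8") as f:
    exported = json.load(f)

payload = {
    "name": "Advanced Multi-Query RAG (Supabase + GPT-5)",
    "nodes": exported["nodes"],
    "connections": exported["connections"],
    "settings": {},
}

response = requests.post(
    f"{N8N_URL}/api/v1/workflows",
    headers={"X-N8N-API-KEY": API_KEY},
    json=payload,
)
response.raise_for_status()
print("Created workflow with id:", response.json()["id"])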
Which scenarios is this workflow suited for?
Advanced - AI RAG, Multimodal AI
Is it paid?
This workflow is completely free and can be used directly. Note that third-party services used in the workflow (such as the OpenAI API) may require payment on your side.
Recommended workflows
- AI text writing with hybrid contextual RAG / Google Drive to Supabase sync for a contextual vector database for RAG applications (If, Set, Code) - 76 nodes, by Michael Taleb, category: AI RAG
- Explore n8n nodes in the visual reference library (If, Ftp, Set) - 113 nodes, by I versus AI, category: Other
- Content generator v3: AI-driven blog automation using GPT-4 to generate and publish SEO articles to WordPress and Twitter (If, Set, Code) - 144 nodes, by Jay Emp0, category: Content creation
- AI-powered enterprise chatbot for BambooHR employee policies and benefits (Set, Filter, Bamboo Hr) - 50 nodes, by Ludwig, category: Human Resources
- 🤖 Build a document-expert chatbot with a Gemini RAG pipeline / Build an expert chatbot for the n8n documentation using an OpenAI RAG pipeline (Set, Html, Filter) - 46 nodes, by Ayham, category: Internal wiki
- Build a custom MCP server for n8n workflows (If, N8n, Set) - 46 nodes, by Jimleuk, category: Other
Workflow information
Difficulty level: Advanced
Number of nodes: 22
Categories: 2
Node types: 16
External links
View on n8n.io →