IPL Cricket Rules Q&A Chatbot with RAG and the Google Gemini API
Expert
This is an Engineering / Multimodal AI automation workflow with 24 nodes. It mainly uses HttpRequest, ManualTrigger, Agent, ChatTrigger, LmChatGoogleGemini, and other nodes to build an IPL cricket rules chatbot based on RAG and the Google Gemini API.
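The RAG pattern this workflow wires up in n8n can be illustrated in miniature: split the source text into overlapping chunks, embed each chunk into a vector store, then embed the user's query and hand the top-k most similar chunks to the agent as context. The sketch below is conceptual, not n8n code; the hash-based "embedding" is a placeholder with no semantic meaning (the workflow itself uses Gemini embeddings), and the chunk sizes are illustrative assumptions.

```python
# Conceptual sketch of the two-step RAG flow: Step 1 indexes, Step 2 retrieves.
# toy_embed() is a deterministic stand-in for the Gemini embedding model and
# carries no semantic similarity -- it only demonstrates the data flow.
import hashlib
import math

def split_with_overlap(text, chunk_size=50, overlap=10):
    """Sliding-window chunking: consecutive chunks share `overlap` characters."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

def toy_embed(chunk, dims=16):
    """Placeholder embedding: hash bytes scaled into [0, 1]."""
    digest = hashlib.sha256(chunk.encode()).digest()
    return [b / 255 for b in digest[:dims]]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Step 1: build the in-memory vector store from the reference text.
rules_text = "A T20 innings lasts 20 overs. Each over has six legal deliveries."
store = [(chunk, toy_embed(chunk)) for chunk in split_with_overlap(rules_text)]

# Step 2: embed the query and retrieve the top-k chunks as agent context.
query_vec = toy_embed("How many overs in a T20 innings?")
top_k = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)[:3]
context = "\n".join(chunk for chunk, _ in top_k)
```

In the workflow itself, the splitter node uses a chunk overlap of 200 characters and the retriever returns the top 10 chunks; both values are editable in the respective nodes.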
Prerequisites
- Target API credentials may be required
- Google Gemini API key
Nodes Used (24)
Export Workflow
Copy the JSON configuration below and import it into n8n.
{
"id": "CkgF5zRqCL4BS6I5",
"meta": {
"instanceId": "5c50f3d58b333c0490a31213f0ec76116e02346dcdd9088649ba9dd6fbe45ca1",
"templateCredsSetupCompleted": true
},
"name": "IPL Cricket Rules Q&A Chat Bot using RAG and Google Gemini API",
"tags": [],
"nodes": [
{
"id": "4c32f558-efff-4eff-b714-202c7419a96c",
"name": "Bei Empfang einer Chat-Nachricht",
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"position": [
-1216,
192
],
"webhookId": "4df707a8-70c8-4fab-a970-a97ce7d7594f",
"parameters": {
"options": {}
},
"typeVersion": 1.1
},
{
"id": "352186bb-07d1-4d7d-9f0f-b57e0880fc11",
"name": "KI-Agent",
"type": "@n8n/n8n-nodes-langchain.agent",
"position": [
-1008,
64
],
"parameters": {
"options": {
"systemMessage": "You are a cricket expert. \n\nYou are tasked with answering questions on ipl cricket queries. Information should only be referred to and provided if it is provided explicitly in the data base to you. Your goal is to provide accurate information based on this information.\n\nIf information is not provided to you explicitly or if you can not answer the question using the provided information, say \"Sorry I donot know\""
}
},
"typeVersion": 2.1
},
{
"id": "15f7fbdc-ab77-4007-9a8e-8ddbe881d984",
"name": "Einfacher Speicher",
"type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"position": [
-784,
336
],
"parameters": {
"contextWindowLength": 20
},
"typeVersion": 1.3
},
{
"id": "dc61d50a-fdd8-4a21-974f-33aa8aab5c0a",
"name": "Einfacher Vektorspeicher",
"type": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
"position": [
-720,
176
],
"parameters": {
"mode": "retrieve-as-tool",
"topK": 10,
"memoryKey": {
"__rl": true,
"mode": "list",
"value": "vector_store_key"
},
"toolDescription": "This is a repository of ipl cricket rules and international cricket rules"
},
"typeVersion": 1.3
},
{
"id": "69f8782c-c5d2-4693-bc00-a2ab58c61e08",
"name": "Google Gemini Chat-Modell",
"type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"position": [
-944,
336
],
"parameters": {
"options": {
"topP": 0.3
}
},
"credentials": {
"googlePalmApi": {
"id": "3f4CCF4BMZnEfG6y",
"name": "Google Gemini(PaLM) Api account"
}
},
"typeVersion": 1
},
{
"id": "33d9a2a4-6f13-4cbe-a3b3-19f3d0b7d6a1",
"name": "Embeddings Google Gemini",
"type": "@n8n/n8n-nodes-langchain.embeddingsGoogleGemini",
"position": [
-608,
320
],
"parameters": {},
"credentials": {
"googlePalmApi": {
"id": "3f4CCF4BMZnEfG6y",
"name": "Google Gemini(PaLM) Api account"
}
},
"typeVersion": 1
},
{
"id": "05bbad6c-877c-4d6d-90e1-6c82d6560ae2",
"name": "Einfacher Vektorspeicher1",
"type": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
"position": [
-896,
-544
],
"parameters": {
"mode": "insert",
"memoryKey": {
"__rl": true,
"mode": "list",
"value": "vector_store_key"
}
},
"typeVersion": 1.3
},
{
"id": "34948452-2e69-40cc-9b86-b78500873aab",
"name": "Embeddings Google Gemini1",
"type": "@n8n/n8n-nodes-langchain.embeddingsGoogleGemini",
"position": [
-896,
-320
],
"parameters": {},
"credentials": {
"googlePalmApi": {
"id": "3f4CCF4BMZnEfG6y",
"name": "Google Gemini(PaLM) Api account"
}
},
"typeVersion": 1
},
{
"id": "d6b2871c-78c6-4785-8913-262eb2364f7d",
"name": "Standard-Datenlader",
"type": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
"position": [
-720,
-400
],
"parameters": {
"options": {},
"dataType": "binary",
"textSplittingMode": "custom"
},
"typeVersion": 1.1
},
{
"id": "6818e50a-ecc1-40e5-aac9-9d38fc85d3ec",
"name": "Rekursiver Zeichentext-Splitter",
"type": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
"position": [
-704,
-256
],
"parameters": {
"options": {},
"chunkOverlap": 200
},
"typeVersion": 1
},
{
"id": "48da425a-c41f-4301-b4a7-df00f604ba5b",
"name": "HTTP Anfrage",
"type": "n8n-nodes-base.httpRequest",
"position": [
-1040,
-448
],
"parameters": {
"url": "https://documents.iplt20.com/bcci/documents/1742707993986_Match_Playing_Conditions.pdf",
"options": {}
},
"typeVersion": 4.2
},
{
"id": "3fc9062b-fdef-421d-a7a3-d348c83cb51c",
"name": "Bei Klick auf 'Workflow ausführen'",
"type": "n8n-nodes-base.manualTrigger",
"position": [
-1232,
-448
],
"parameters": {},
"typeVersion": 1
},
{
"id": "60491e32-d0c1-4e4a-922f-8ce976b481d1",
"name": "Notizzettel",
"type": "n8n-nodes-base.stickyNote",
"position": [
-2576,
-48
],
"parameters": {
"color": 6,
"width": 2144,
"height": 624,
"content": "## Step 2\n## 2.1 Chat Trigger to initiate n8n native chat interface\n## 2.2 Simple Memory keeps the last 20 chat turns for context. This value can be edited within the node\n## 2.3 Simple Vector Store (retrieve-as-tool mode) receives the user’s query embedding, \n## finds the top-10 most relevant chunks stored in step 1, and supplies them as tool output. This will drive RAG\n**The name of vector store should match from Step 1, the embedding rule should match step 1\n## 2.4 Google Gemini Chat Model is the language model that is used as the llm model\n## 2.5 AI Agent orchestrates everything:\n** Uses the system prompt (“You are a cricket expert… If info is missing, say ‘Sorry I don’t know’”). to prompt the model\n** Has access to the memory (2.2) and the RAG tool (2.3).\n** Generates the final response with Google Gemini, strictly limited to the retrieved IPL cricket rules data.\n\n\n\n\n\n\n## Note: Google gemini API key credential needed\n##Using simple memory store nodes provided by n8n is the best way to get started to test out the workflow before you switch to more enterprise grade vector store nodes"
},
"typeVersion": 1
},
{
"id": "1909411f-90b0-4cd5-823a-39f4f918cc5e",
"name": "Notizzettel1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-2576,
-624
],
"parameters": {
"width": 2160,
"height": 544,
"content": "## Step 1\n## Load the reference material (run once via the Manual Trigger)\n## 1.1 Manual Trigger → HTTP Request downloads the IPL “Match Playing Conditions” PDF. \n## 1.2 Default Data Loader extracts text from the PDF.\n **Type of data is binary\n## 1.3 Recursive Character Text Splitter breaks the text into overlapping chunks.\n **This step ensures that the data chunks that are created in vector store have some overlap and hence less chance of hallucination\n **Chunk size and chunk overlap are 2 variables to manage this \n## 1.4 Embeddings Google Gemini (1) converts each chunk to a vector.\n **Connect the model with google gemini model. You will need your own api key for this\n **Make note of the embedding model also since the same embedding model has to be selected in Step 2\n## 1.5 Simple Vector Store 1 inserts those vectors into an in-memory store under key\n **Make note of the vector store name since it is same vector store you will have to use in Step 2\n\n\n## Note: Google gemini API key credential needed\n##Using Vector store nodes provided by n8n is the best way to get started to test out the workflow before you switch to more enterprise grade vector store nodes"
},
"typeVersion": 1
},
{
"id": "63e38b73-3e30-47d7-86bb-afa2ad92dc2b",
"name": "Notizzettel7",
"type": "n8n-nodes-base.stickyNote",
"position": [
-2576,
-768
],
"parameters": {
"color": 5,
"width": 2160,
"height": 128,
"content": "## This workflow has 2 Broad Steps\n## Step 1 - Vector store creation with set of ipl rules using Google Gemini Embedding. This will we used to drive RAG for model grouding \n## Step 2 - Connecting the vector store with google gemini API model and enabling a chat interface to drive the chat bot\n"
},
"typeVersion": 1
},
{
"id": "f45e2852-88a8-4f70-a124-01f2b06d9a19",
"name": "Notizzettel2",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1232,
-544
],
"parameters": {
"color": 3,
"width": 278,
"height": 80,
"content": "## Step 1.1"
},
"typeVersion": 1
},
{
"id": "0b72e856-23c6-42c2-860e-8f761f861d95",
"name": "Notizzettel3",
"type": "n8n-nodes-base.stickyNote",
"position": [
-608,
-304
],
"parameters": {
"color": 3,
"width": 166,
"height": 128,
"content": "## Step 1.2\n## Step 1.3"
},
"typeVersion": 1
},
{
"id": "96c343b7-3961-49c1-97e0-35b4eee90d78",
"name": "Notizzettel4",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1088,
-240
],
"parameters": {
"color": 3,
"width": 150,
"height": 80,
"content": "## Step 1.4"
},
"typeVersion": 1
},
{
"id": "f78516ba-4b17-4e48-9450-ba5d7cb123f1",
"name": "Notizzettel5",
"type": "n8n-nodes-base.stickyNote",
"position": [
-592,
-544
],
"parameters": {
"color": 3,
"width": 150,
"height": 80,
"content": "## Step 1.5"
},
"typeVersion": 1
},
{
"id": "b97281a4-6b1f-41a1-9a1e-c48be5a6854c",
"name": "Notizzettel6",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1248,
96
],
"parameters": {
"color": 4,
"width": 160,
"height": 80,
"content": "## Step 2.1"
},
"typeVersion": 1
},
{
"id": "a8de0dce-eaa0-441d-b050-5374741f3b5f",
"name": "Notizzettel8",
"type": "n8n-nodes-base.stickyNote",
"position": [
-976,
464
],
"parameters": {
"color": 4,
"width": 160,
"height": 80,
"content": "## Step 2.4"
},
"typeVersion": 1
},
{
"id": "1f405862-c83e-4687-b919-3e128bcd2073",
"name": "Notizzettel9",
"type": "n8n-nodes-base.stickyNote",
"position": [
-608,
64
],
"parameters": {
"color": 4,
"width": 160,
"height": 80,
"content": "## Step 2.3"
},
"typeVersion": 1
},
{
"id": "dfb4cbe2-f6b0-45c4-bda7-d5f33a3b8e5f",
"name": "Notizzettel10",
"type": "n8n-nodes-base.stickyNote",
"position": [
-800,
464
],
"parameters": {
"color": 4,
"width": 160,
"height": 80,
"content": "## Step 2.2"
},
"typeVersion": 1
},
{
"id": "c5cfbb0b-2d09-40b8-ba18-5c4028d8a556",
"name": "Notizzettel11",
"type": "n8n-nodes-base.stickyNote",
"position": [
-928,
-32
],
"parameters": {
"color": 4,
"width": 160,
"height": 80,
"content": "## Step 2.5"
},
"typeVersion": 1
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "98c130a5-eef0-4246-8a95-88a29c4e8ce6",
"connections": {
"48da425a-c41f-4301-b4a7-df00f604ba5b": {
"main": [
[
{
"node": "05bbad6c-877c-4d6d-90e1-6c82d6560ae2",
"type": "main",
"index": 0
}
]
]
},
"15f7fbdc-ab77-4007-9a8e-8ddbe881d984": {
"ai_memory": [
[
{
"node": "352186bb-07d1-4d7d-9f0f-b57e0880fc11",
"type": "ai_memory",
"index": 0
}
]
]
},
"d6b2871c-78c6-4785-8913-262eb2364f7d": {
"ai_document": [
[
{
"node": "05bbad6c-877c-4d6d-90e1-6c82d6560ae2",
"type": "ai_document",
"index": 0
}
]
]
},
"dc61d50a-fdd8-4a21-974f-33aa8aab5c0a": {
"ai_tool": [
[
{
"node": "352186bb-07d1-4d7d-9f0f-b57e0880fc11",
"type": "ai_tool",
"index": 0
}
]
]
},
"33d9a2a4-6f13-4cbe-a3b3-19f3d0b7d6a1": {
"ai_embedding": [
[
{
"node": "dc61d50a-fdd8-4a21-974f-33aa8aab5c0a",
"type": "ai_embedding",
"index": 0
}
]
]
},
"69f8782c-c5d2-4693-bc00-a2ab58c61e08": {
"ai_languageModel": [
[
{
"node": "352186bb-07d1-4d7d-9f0f-b57e0880fc11",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"34948452-2e69-40cc-9b86-b78500873aab": {
"ai_embedding": [
[
{
"node": "05bbad6c-877c-4d6d-90e1-6c82d6560ae2",
"type": "ai_embedding",
"index": 0
}
]
]
},
"4c32f558-efff-4eff-b714-202c7419a96c": {
"main": [
[
{
"node": "352186bb-07d1-4d7d-9f0f-b57e0880fc11",
"type": "main",
"index": 0
}
]
]
},
"6818e50a-ecc1-40e5-aac9-9d38fc85d3ec": {
"ai_textSplitter": [
[
{
"node": "d6b2871c-78c6-4785-8913-262eb2364f7d",
"type": "ai_textSplitter",
"index": 0
}
]
]
},
"3fc9062b-fdef-421d-a7a3-d348c83cb51c": {
"main": [
[
{
"node": "48da425a-c41f-4301-b4a7-df00f604ba5b",
"type": "main",
"index": 0
}
]
]
}
}
}
Frequently Asked Questions
How do I use this workflow?
Copy the JSON above, create a new workflow in your n8n instance, and choose "Import from JSON". Paste the configuration and adjust the credentials as needed.
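Before importing, you can sanity-check the copied export with a few lines of Python. This is a minimal sketch: it checks only the top-level keys visible in the export above (`name`, `nodes`, `connections`), and the trimmed sample string stands in for the full JSON you would load from a file.

```python
# Minimal sanity check for an n8n workflow export before importing it.
# `sample_export` is a trimmed stand-in; in practice, read the full JSON
# you copied above from a file instead.
import json

sample_export = """
{
  "name": "IPL Cricket Rules Q&A Chat Bot using RAG and Google Gemini API",
  "nodes": [
    {"id": "a", "name": "HTTP Request", "type": "n8n-nodes-base.httpRequest"}
  ],
  "connections": {}
}
"""

def check_workflow(raw):
    wf = json.loads(raw)  # fails loudly on malformed or truncated JSON
    for key in ("name", "nodes", "connections"):
        assert key in wf, f"missing top-level key: {key}"
    for node in wf["nodes"]:
        for key in ("id", "name", "type"):
            assert key in node, f"node missing key: {key}"
    return len(wf["nodes"])

print(check_workflow(sample_export))  # → 1 for this trimmed sample
```

Run against the full export above, the check should report 24 nodes.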
Which scenarios is this workflow suited for?
Expert level - Engineering, Multimodal AI
Is it paid?
This workflow itself is completely free. Note, however, that third-party services used in the workflow (such as the Google Gemini API) may incur costs.
Related Workflows
- Build a Document Expert Chatbot with a Gemini RAG Pipeline (48 nodes, Lucas Peyrin)
- Explore n8n Nodes in the Visual Reference Library (113 nodes, I versus AI)
- 🤖 Build a Document Expert Bot with RAG, Gemini and Supabase (54 nodes, Lucas Peyrin)
- AI Smart Assistant: Conversation with Files in Supabase Storage and Google Drive (62 nodes, Mark Shcherbakov)
- Chat with GitHub OpenAPI Specifications via RAG (Pinecone and OpenAI) (17 nodes, Mihai Farcas)
- WooCommerce Conversational Sales Agent with GPT-4, Stripe and CRM Integration (27 nodes, Cong Nguyen)
Workflow Information
Difficulty: Expert
Nodes: 24
Categories: 2
Node types: 11
Author: Sidd (@p10siddarthap)