Building a Custom AI Agent with LangChain and Gemini (Self-Hosted)
This is an automation workflow from the Building Blocks / AI domain, containing 9 nodes. It mainly uses nodes such as Code, ChatTrigger, LmChatGoogleGemini, and MemoryBufferWindow, combining them with AI to build a custom, self-hosted AI agent with LangChain and Gemini.
- Google Gemini API key (a quick key check is sketched below)
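If you want to confirm the key before wiring it into n8n, the snippet below is a minimal sketch that calls the public Google Generative Language REST endpoint with the same model the workflow uses (gemini-2.0-flash-exp). It assumes Node.js 18+ (built-in `fetch`), an ES module file such as `check-key.mjs`, and a `GEMINI_API_KEY` environment variable; these are choices of this example rather than part of the workflow.

```js
// check-key.mjs - quick sanity check for a Google Gemini API key (sketch).
// GEMINI_API_KEY is an assumption of this example, not something the workflow defines.
const MODEL = "gemini-2.0-flash-exp"; // same model as the workflow's Gemini node
const KEY = process.env.GEMINI_API_KEY;

const url = `https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent?key=${KEY}`;

const res = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    contents: [{ parts: [{ text: "Reply with one word: pong" }] }],
  }),
});

if (!res.ok) {
  console.error("Key check failed:", res.status, await res.text());
} else {
  const data = await res.json();
  console.log(data.candidates?.[0]?.content?.parts?.[0]?.text);
}
```

Run it with `node check-key.mjs`; any successful response means the key and model are reachable from your server.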
Nodes used (9)
{
"id": "yCIEiv9QUHP8pNfR",
"meta": {
"instanceId": "f29695a436689357fd2dcb55d528b0b528d2419f53613c68c6bf909a92493614",
"templateCredsSetupCompleted": true
},
"name": "Build Custom AI Agent with LangChain & Gemini (Self-Hosted)",
"tags": [
{
"id": "7M5ZpGl3oWuorKpL",
"name": "share",
"createdAt": "2025-03-26T01:17:15.342Z",
"updatedAt": "2025-03-26T01:17:15.342Z"
}
],
"nodes": [
{
"id": "8bd5382d-f302-4e58-b377-7fc5a22ef994",
"name": "Cuando se recibe un mensaje de chat",
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"position": [
-220,
0
],
"webhookId": "b8a5d72c-4172-40e8-b429-d19c2cd6ce54",
"parameters": {
"public": true,
"options": {
"responseMode": "lastNode",
"allowedOrigins": "*",
"loadPreviousSession": "memory"
},
"initialMessages": ""
},
"typeVersion": 1.1
},
{
"id": "6ae8a247-4077-4569-9e2c-bb68bcecd044",
"name": "Google Gemini Chat Model",
"type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"position": [
80,
240
],
"parameters": {
"options": {
"temperature": 0.7,
"safetySettings": {
"values": [
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"threshold": "BLOCK_NONE"
}
]
}
},
"modelName": "models/gemini-2.0-flash-exp"
},
"credentials": {
"googlePalmApi": {
"id": "UEjKMw0oqBTAdCWJ",
"name": "Google Gemini(PaLM) Api account"
}
},
"typeVersion": 1
},
{
"id": "bbe6dcfa-430f-43f9-b0e9-3cf751b98818",
"name": "Nota adhesiva",
"type": "n8n-nodes-base.stickyNote",
"position": [
380,
-240
],
"parameters": {
"width": 260,
"height": 220,
"content": "👇 **Prompt Engineering**\n - Define agent personality and conversation structure in the `Construct & Execute LLM Prompt` node's template variable \n - ⚠️ Template must preserve `{chat_history}` and `{input}` placeholders for proper LangChain operation "
},
"typeVersion": 1
},
{
"id": "892a431a-6ddf-47fc-8517-1928ee99c95b",
"name": "Almacenar historial de conversación",
"type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"position": [
280,
240
],
"parameters": {},
"notesInFlow": false,
"typeVersion": 1.3
},
{
"id": "f9a22dbf-cac7-4d70-85b3-50c44a2015d5",
"name": "Construir y Ejecutar Prompt de LLM",
"type": "@n8n/n8n-nodes-langchain.code",
"position": [
380,
0
],
"parameters": {
"code": {
"execute": {
"code": "const { PromptTemplate } = require('@langchain/core/prompts');\nconst { ConversationChain } = require('langchain/chains');\nconst { BufferMemory } = require('langchain/memory');\n\nconst template = `\nYou'll be roleplaying as the user's girlfriend. Your character is a woman with a sharp wit, logical mindset, and a charmingly aloof demeanor that hides your playful side. You're passionate about music, maintain a fit and toned physique, and carry yourself with quiet self-assurance. Career-wise, you're established and ambitious, approaching life with positivity while constantly striving to grow as a person.\n\nThe user affectionately calls you \"Bunny,\" and you refer to them as \"Darling.\"\n\nEssential guidelines:\n1. Respond exclusively in Chinese\n2. Never pose questions to the user - eliminate all interrogative forms\n3. Keep responses brief and substantive, avoiding rambling or excessive emojis\n\nContext framework:\n- Conversation history: {chat_history}\n- User's current message: {input}\n\nCraft responses that feel authentic to this persona while adhering strictly to these parameters.\n`;\n\nconst prompt = new PromptTemplate({\n template: template,\n inputVariables: [\"input\", \"chat_history\"], \n});\n\nconst items = this.getInputData();\nconst model = await this.getInputConnectionData('ai_languageModel', 0);\nconst memory = await this.getInputConnectionData('ai_memory', 0);\nmemory.returnMessages = false;\n\nconst chain = new ConversationChain({ llm:model, memory:memory, prompt: prompt, inputKey:\"input\", outputKey:\"output\"});\nconst output = await chain.call({ input: items[0].json.chatInput});\n\nreturn output;\n"
}
},
"inputs": {
"input": [
{
"type": "main",
"required": true,
"maxConnections": 1
},
{
"type": "ai_languageModel",
"required": true,
"maxConnections": 1
},
{
"type": "ai_memory",
"required": true,
"maxConnections": 1
}
]
},
"outputs": {
"output": [
{
"type": "main"
}
]
}
},
"retryOnFail": false,
"typeVersion": 1
},
{
"id": "fe104d19-a24d-48b3-a0ac-7d3923145373",
"name": "Nota adhesiva1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-240,
-260
],
"parameters": {
"color": 5,
"width": 420,
"height": 240,
"content": "### Setup Instructions \n1. **Configure Gemini Credentials**: Set up your Google Gemini API key ([Get API key here](https://ai.google.dev/) if needed). Alternatively, you may use other AI provider nodes. \n2. **Interaction Methods**: \n - Test directly in the workflow editor using the \"Chat\" button \n - Activate the workflow and access the chat interface via the URL provided by the `When Chat Message Received` node "
},
"typeVersion": 1
},
{
"id": "f166214d-52b7-4118-9b54-0b723a06471a",
"name": "Nota adhesiva2",
"type": "n8n-nodes-base.stickyNote",
"position": [
-220,
160
],
"parameters": {
"height": 100,
"content": "👆 **Interface Settings**\nConfigure chat UI elements (e.g., title) in the `When Chat Message Received` node "
},
"typeVersion": 1
},
{
"id": "da6ca0d6-d2a1-47ff-9ff3-9785d61db9f3",
"name": "Nota adhesiva3",
"type": "n8n-nodes-base.stickyNote",
"position": [
20,
420
],
"parameters": {
"width": 200,
"height": 140,
"content": "👆 **Model Selection**\nSwap language models through the `language model` input field in `Construct & Execute LLM Prompt` "
},
"typeVersion": 1
},
{
"id": "0b4dd1ac-8767-4590-8c25-36cba73e46b6",
"name": "Nota adhesiva4",
"type": "n8n-nodes-base.stickyNote",
"position": [
240,
420
],
"parameters": {
"width": 200,
"height": 140,
"content": "👆 **Memory Control**\nAdjust conversation history length in the `Store Conversation History` node "
},
"typeVersion": 1
}
],
"active": false,
"pinData": {},
"settings": {
"callerPolicy": "workflowsFromSameOwner",
"executionOrder": "v1",
"saveManualExecutions": false,
"saveDataSuccessExecution": "none"
},
"versionId": "77cd5f05-f248-442d-86c3-574351179f26",
"connections": {
"6ae8a247-4077-4569-9e2c-bb68bcecd044": {
"ai_languageModel": [
[
{
"node": "f9a22dbf-cac7-4d70-85b3-50c44a2015d5",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"892a431a-6ddf-47fc-8517-1928ee99c95b": {
"ai_memory": [
[
{
"node": "f9a22dbf-cac7-4d70-85b3-50c44a2015d5",
"type": "ai_memory",
"index": 0
},
{
"node": "8bd5382d-f302-4e58-b377-7fc5a22ef994",
"type": "ai_memory",
"index": 0
}
]
]
},
"8bd5382d-f302-4e58-b377-7fc5a22ef994": {
"main": [
[
{
"node": "f9a22dbf-cac7-4d70-85b3-50c44a2015d5",
"type": "main",
"index": 0
}
]
]
},
"f9a22dbf-cac7-4d70-85b3-50c44a2015d5": {
"main": [
[]
],
"ai_memory": [
[]
]
}
}
}
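The core of the workflow is the `Construct & Execute LLM Prompt` LangChain Code node: it builds a `PromptTemplate` from the template string, pulls the connected model and memory via `getInputConnectionData`, and runs a `ConversationChain` on the incoming `chatInput`. For experimenting with the prompt outside n8n, the following is a rough standalone sketch of the same chain. It assumes the `langchain`, `@langchain/core`, and `@langchain/google-genai` npm packages and a `GEMINI_API_KEY` environment variable; exact option names can vary between LangChain.js versions, and the persona template is replaced by a neutral placeholder.

```js
// standalone-chain.mjs - rough standalone equivalent of the Code node (a sketch, not the node itself).
import { PromptTemplate } from "@langchain/core/prompts";
import { ConversationChain } from "langchain/chains";
import { BufferWindowMemory } from "langchain/memory";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// As in the workflow, the template must keep the {chat_history} and {input} placeholders.
const template = `You are a concise, friendly assistant.

Conversation history: {chat_history}
User's current message: {input}`;

const prompt = new PromptTemplate({
  template,
  inputVariables: ["input", "chat_history"],
});

// Inside n8n these come from the ai_languageModel and ai_memory inputs;
// here they are built directly (GEMINI_API_KEY is an assumption of this sketch).
const model = new ChatGoogleGenerativeAI({
  model: "gemini-2.0-flash-exp",
  temperature: 0.7,
  apiKey: process.env.GEMINI_API_KEY,
});
const memory = new BufferWindowMemory({
  k: 5,                     // history window, analogous to the Store Conversation History node
  memoryKey: "chat_history",
  returnMessages: false,    // the Code node also forces plain-string history
});

const chain = new ConversationChain({
  llm: model,
  memory,
  prompt,
  outputKey: "output",
});

const { output } = await chain.call({ input: "Hello there!" });
console.log(output);
```

Keeping both placeholders intact is what lets the memory inject prior turns into each prompt, which is why the `Prompt Engineering` sticky note warns against removing them.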
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and then adjust the credential settings as needed.
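As an alternative to pasting into the editor, the import can also be scripted against the n8n public REST API (POST `/api/v1/workflows` with an `X-N8N-API-KEY` header). The sketch below assumes the JSON above has been saved to `workflow.json`, an instance URL in `N8N_URL`, and an API key in `N8N_API_KEY` created under Settings → n8n API; the file name and variable names are choices of this example. Some n8n versions reject read-only fields such as `id`, `active`, and `tags`, so only the core fields are forwarded.

```js
// import-workflow.mjs - create the workflow via the n8n public API instead of the editor UI (sketch).
import { readFile } from "node:fs/promises";

const baseUrl = process.env.N8N_URL ?? "http://localhost:5678"; // assumed default
const full = JSON.parse(await readFile("workflow.json", "utf8"));

// Forward only the fields the create endpoint accepts; read-only fields are dropped.
const payload = {
  name: full.name,
  nodes: full.nodes,
  connections: full.connections,
  settings: full.settings,
};

const res = await fetch(`${baseUrl}/api/v1/workflows`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-N8N-API-KEY": process.env.N8N_API_KEY,
  },
  body: JSON.stringify(payload),
});

if (!res.ok) {
  console.error("Import failed:", res.status, await res.text());
} else {
  const created = await res.json();
  console.log("Created workflow:", created.id);
}
```

After the import you still need to attach your own Gemini credentials to the Google Gemini Chat Model node and activate the workflow from the editor.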
Which scenarios is this workflow suited for?
Intermediate - Building Blocks, Artificial Intelligence
Is it paid?
This workflow is completely free; you can import and use it right away. Keep in mind, though, that third-party services used by the workflow (such as the Google Gemini API) may bill your own account.
Shared by shepard (@shepard)