Slack Project Update RAG Agent

Advanced

This is an automation workflow in the AI RAG and Multimodal AI categories, built from 11 nodes. It mainly uses the Slack, SlackTrigger, Agent, LmChatOpenAi, and EmbeddingsOpenAi nodes, and automatically answers Slack messages with GPT using retrieval context from a Pinecone vector store.

Prerequisites
  • Slack bot token or webhook URL
  • OpenAI API key
  • Pinecone API key
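
The exported JSON below does not ship working credentials; after import, each node must be re-linked to a credential of the matching type in your n8n instance. As a minimal sketch (assuming the standard n8n credential type names slackApi, openAiApi, and pineconeApi, with purely illustrative credential names), a node fragment looks roughly like this once a credential is attached:

{
  "name": "Slack Trigger",
  "type": "n8n-nodes-base.slackTrigger",
  "credentials": {
    "slackApi": { "name": "Slack account (example)" }
  }
}

The same pattern applies to "OpenAI Chat Model" and "Embeddings OpenAI" (openAiApi) and to "Pinecone Vector Store" (pineconeApi).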
Export Workflow
Copy the following JSON configuration and import it into n8n
{
  "id": "mB32fQ5OyrLgbIIZ",
  "meta": {
    "instanceId": "1c7b08fed4406d546caf4a44e8b942ca317e7e207bb9a5701955a1a6e1ce1843"
  },
  "name": "Slack Project Update RAG Agent",
  "tags": [],
  "nodes": [
    {
      "id": "44bc7fc6-9736-48e9-90dc-3098047abdc7",
      "name": "Slack-Trigger",
      "type": "n8n-nodes-base.slackTrigger",
      "position": [
        880,
        160
      ],
      "parameters": {
        "options": {
          "userIds": "==[\"User_ID\"]"
        },
        "trigger": [
          "any_event",
          "app_mention"
        ],
        "watchWorkspace": true
      },
      "typeVersion": 1
    },
    {
      "id": "aabbb277-80f5-4316-8845-f34bce33261b",
      "name": "OpenAI-Chat-Modell",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [
        1100,
        380
      ],
      "parameters": {
        "model": {
          "__rl": true,
          "mode": "list",
          "value": "gpt-5",
          "cachedResultName": "gpt-5"
        },
        "options": {}
      },
      "typeVersion": 1.2
    },
    {
      "id": "14cb0538-fe7e-4739-9de9-129723400e44",
      "name": "Simple Speicher",
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "position": [
        1280,
        380
      ],
      "parameters": {
        "sessionKey": "={{ $json.channel }}",
        "sessionIdType": "customKey"
      },
      "typeVersion": 1.3
    },
    {
      "id": "92db15e1-3228-476f-a3da-1736e8f34d53",
      "name": "Nachricht senden",
      "type": "n8n-nodes-base.slack",
      "position": [
        1840,
        160
      ],
      "parameters": {
        "text": "={{ $json.output }}",
        "select": "channel",
        "channelId": {
          "__rl": true,
          "mode": "id",
          "value": "={{ $('Slack Trigger').item.json.channel }}"
        },
        "otherOptions": {
          "sendAsUser": "Jacob",
          "includeLinkToWorkflow": false
        }
      },
      "typeVersion": 2.3
    },
    {
      "id": "24714547-eecf-4b11-a58f-c394dc7bc9e4",
      "name": "Haftnotiz",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1760,
        0
      ],
      "parameters": {
        "color": 3,
        "width": 304,
        "height": 624,
        "content": "Slack Respond as a User"
      },
      "typeVersion": 1
    },
    {
      "id": "387b6478-c255-42ba-b456-8b90d889e261",
      "name": "Haftnotiz1",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1040,
        0
      ],
      "parameters": {
        "color": 4,
        "width": 704,
        "height": 624,
        "content": "GPT-5 Agent"
      },
      "typeVersion": 1
    },
    {
      "id": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
      "name": "GPT 5 Slack Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [
        1200,
        160
      ],
      "parameters": {
        "text": "={{ $json.text }}",
        "options": {
          "systemMessage": "You are Jacob, an Engineer at Purple Unicorn IT Solutions. Respond to your members' message on Jacob's behalf on Slack. Sound friendly and natural in a typical tech working environment. \n\n##Tool\nUse the Pinecone Vector Store Tool when asked about Project Updates"
        },
        "promptType": "define"
      },
      "typeVersion": 2
    },
    {
      "id": "7070bd4b-bc9e-426b-a6d9-074d386d86dd",
      "name": "Haftnotiz2",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        780,
        0
      ],
      "parameters": {
        "color": 5,
        "height": 624,
        "content": "Slack Trigger"
      },
      "typeVersion": 1
    },
    {
      "id": "d8e65fda-3927-4404-accf-300c30ebef8e",
      "name": "Pinecone Vektorspeicher",
      "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone",
      "position": [
        1440,
        340
      ],
      "parameters": {
        "mode": "retrieve-as-tool",
        "options": {},
        "pineconeIndex": {
          "__rl": true,
          "mode": "list",
          "value": "test",
          "cachedResultName": "test"
        },
        "toolDescription": "Refer to Database for Work Related Information"
      },
      "typeVersion": 1.3
    },
    {
      "id": "fe5ef41c-9496-461a-b44a-5bb34aca4967",
      "name": "Einbettungen OpenAI",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "position": [
        1580,
        500
      ],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.2
    },
    {
      "id": "c11871c8-557c-42f6-ab82-f287b1178798",
      "name": "Haftnotiz3",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        0,
        0
      ],
      "parameters": {
        "color": 2,
        "width": 752,
        "height": 1008,
        "content": "🛠 GPT-5 + Pinecone-Powered Slack Auto-Responder — Real-Time, Context-Aware Replies for IT & Engineering Teams\n\nDescription\nCut down on context-switching and keep your Slack threads moving with an AI agent that responds on your behalf, pulling real-time knowledge from a Pinecone vector database. Built for IT, DevOps, and engineering environments, this n8n workflow ensures every reply is accurate, context-aware, and instantly available—without you lifting a finger.\n\nCheck out step-by-step video build of workflows like these here:\nhttps://www.youtube.com/@automatewithmarc\n\nHow It Works\n\nSlack Listener: Triggers when you’re mentioned or messaged in relevant channels.\n\nPinecone RAG Retrieval: Pulls the most relevant technical details from your indexed documents, architecture notes, or runbooks.\n\nGPT-5 Processing: Formats the retrieved data into a clear, concise, and technically accurate reply.\n\nThread-Aware Memory: Maintains the conversation state to avoid repeating answers.\n\nSlack Send-as-User: Posts the message under your identity for seamless integration into team workflows.\n\nWhy IT Teams Will Love It\n\n📚 Always up-to-date — If your Pinecone index is refreshed with system docs, runbooks, or KB articles, the bot will always deliver the latest info.\n\n🏗 Technical context retention — Perfect for answering ongoing infrastructure or incident threads.\n\n⏱ Reduced interruption time — No more breaking focus to answer “quick questions.”\n\n🔐 Controlled outputs — Tune GPT-5 to deliver fact-based, low-fluff responses for critical environments.\n\nCommon Use Cases\n\nDevOps: Automated responses to common CI/CD, deployment, or incident queries.\n\nSupport Engineering: Pulling troubleshooting steps directly from KB entries.\n\nProject Coordination: Instant status updates pulled from sprint or release notes.\n\nPro Tips for Deployment\n\nKeep your Pinecone vector DB updated with the latest architecture diagrams, release notes, and SOPs.\n\nUse embeddings tuned for technical documentation to improve retrieval accuracy.\n\nAdd channel-specific prompts if different teams require different response styles (e.g., #devops vs #product)."
      },
      "typeVersion": 1
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "5a498f5f-a962-44c6-ada3-7426d2cb62c3",
  "connections": {
    "Simple Memory": {
      "ai_memory": [
        [
          {
            "node": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "Slack Trigger": {
      "main": [
        [
          {
            "node": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Embeddings OpenAI": {
      "ai_embedding": [
        [
          {
            "node": "Pinecone Vector Store",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e": {
      "main": [
        [
          {
            "node": "92db15e1-3228-476f-a3da-1736e8f34d53",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Pinecone Vector Store": {
      "ai_tool": [
        [
          {
            "node": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    }
  }
}
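
The "Simple Memory" node keys conversation history on the Slack channel (sessionKey: ={{ $json.channel }}), so every message in a channel shares one memory window. For the thread-aware behaviour described in the sticky note, a minimal sketch of the change is shown below; it assumes the trigger payload exposes thread_ts for threaded messages and falls back to the message ts otherwise, which you should verify against your actual Slack event data:

{
  "name": "Simple Memory",
  "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
  "parameters": {
    "sessionIdType": "customKey",
    "sessionKey": "={{ $json.channel }}-{{ $json.thread_ts || $json.ts }}"
  }
}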
Frequently Asked Questions

How do I use this workflow?

Copy the JSON above, create a new workflow in your n8n instance, and choose "Import from JSON". Paste the configuration and adjust the credentials as needed.
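
Besides credentials, a few values in the export are placeholders from the original build and will usually need adjusting after import. The summary below is not an n8n node fragment, just a map of node name to the parameter worth reviewing, with the values exactly as shipped in the JSON above:

{
  "Slack Trigger":         { "options.userIds": "==[\"User_ID\"]" },
  "Send Message":          { "otherOptions.sendAsUser": "Jacob" },
  "Pinecone Vector Store": { "pineconeIndex.value": "test" }
}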

Which scenarios is this workflow suited to?

Advanced - AI RAG, Multimodal AI

Is it paid?

The workflow itself is completely free. Note, however, that third-party services used in the workflow (such as the OpenAI API) may incur charges.

Workflow Information
Difficulty
Advanced
Number of nodes: 11
Categories: 2
Node types: 8
Difficulty description

For experienced users; medium-complexity workflows with 6-15 nodes

Author
Automate With Marc

@marconi

Automating start-up and business processes. Helping non-techies understand and leverage agentic AI with easy-to-understand, step-by-step tutorials. Check out my educational content: https://www.youtube.com/@Automatewithmarc

External Links
View on n8n.io

