Slack Project Update RAG Agent

Intermediate

This is an automation workflow in the AI RAG and multimodal AI categories, built from 11 nodes. It mainly uses the Slack, SlackTrigger, Agent, LmChatOpenAi, and EmbeddingsOpenAi node types, and automatically replies to Slack messages with GPT using retrieval-augmented context from a Pinecone vector store.

Prerequisites
  • Slack Bot Token or Webhook URL
  • OpenAI API Key
  • Pinecone API Key
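
The sketch below shows one way to verify these three credentials outside n8n before importing, assuming the official openai, pinecone, and slack_sdk Python packages and placeholder environment variable names; inside n8n itself the values are stored as credentials on the corresponding nodes instead.

# Minimal sketch (assumption): verifying the three credentials with the
# official Python SDKs; the environment variable names are placeholders.
import os

from openai import OpenAI          # needs OPENAI_API_KEY
from pinecone import Pinecone      # needs PINECONE_API_KEY
from slack_sdk import WebClient    # needs SLACK_BOT_TOKEN

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
pinecone_client = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
slack_client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

# Each call fails fast if the corresponding key or token is invalid.
print([m.id for m in openai_client.models.list().data][:3])
print(pinecone_client.list_indexes().names())
print(slack_client.auth_test()["user"])
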
Export Workflow
Import the JSON configuration below into n8n to use this workflow.
{
  "id": "mB32fQ5OyrLgbIIZ",
  "meta": {
    "instanceId": "1c7b08fed4406d546caf4a44e8b942ca317e7e207bb9a5701955a1a6e1ce1843"
  },
  "name": "Slack Project Update RAG Agent",
  "tags": [],
  "nodes": [
    {
      "id": "44bc7fc6-9736-48e9-90dc-3098047abdc7",
      "name": "Slackトリガー",
      "type": "n8n-nodes-base.slackTrigger",
      "position": [
        880,
        160
      ],
      "parameters": {
        "options": {
          "userIds": "==[\"User_ID\"]"
        },
        "trigger": [
          "any_event",
          "app_mention"
        ],
        "watchWorkspace": true
      },
      "typeVersion": 1
    },
    {
      "id": "aabbb277-80f5-4316-8845-f34bce33261b",
      "name": "OpenAIチャットモデル",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [
        1100,
        380
      ],
      "parameters": {
        "model": {
          "__rl": true,
          "mode": "list",
          "value": "gpt-5",
          "cachedResultName": "gpt-5"
        },
        "options": {}
      },
      "typeVersion": 1.2
    },
    {
      "id": "14cb0538-fe7e-4739-9de9-129723400e44",
      "name": "シンプルメモリ",
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "position": [
        1280,
        380
      ],
      "parameters": {
        "sessionKey": "={{ $json.channel }}",
        "sessionIdType": "customKey"
      },
      "typeVersion": 1.3
    },
    {
      "id": "92db15e1-3228-476f-a3da-1736e8f34d53",
      "name": "メッセージ送信",
      "type": "n8n-nodes-base.slack",
      "position": [
        1840,
        160
      ],
      "parameters": {
        "text": "={{ $json.output }}",
        "select": "channel",
        "channelId": {
          "__rl": true,
          "mode": "id",
          "value": "={{ $('Slack Trigger').item.json.channel }}"
        },
        "otherOptions": {
          "sendAsUser": "Jacob",
          "includeLinkToWorkflow": false
        }
      },
      "typeVersion": 2.3
    },
    {
      "id": "24714547-eecf-4b11-a58f-c394dc7bc9e4",
      "name": "付箋",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1760,
        0
      ],
      "parameters": {
        "color": 3,
        "width": 304,
        "height": 624,
        "content": "Slack Respond as a User"
      },
      "typeVersion": 1
    },
    {
      "id": "387b6478-c255-42ba-b456-8b90d889e261",
      "name": "付箋1",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1040,
        0
      ],
      "parameters": {
        "color": 4,
        "width": 704,
        "height": 624,
        "content": "GPT-5 Agent"
      },
      "typeVersion": 1
    },
    {
      "id": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
      "name": "GPT5 Slackエージェント",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [
        1200,
        160
      ],
      "parameters": {
        "text": "={{ $json.text }}",
        "options": {
          "systemMessage": "You are Jacob, an Engineer at Purple Unicorn IT Solutions. Respond to your members' message on Jacob's behalf on Slack. Sound friendly and natural in a typical tech working environment. \n\n##Tool\nUse the Pinecone Vector Store Tool when asked about Project Updates"
        },
        "promptType": "define"
      },
      "typeVersion": 2
    },
    {
      "id": "7070bd4b-bc9e-426b-a6d9-074d386d86dd",
      "name": "付箋2",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        780,
        0
      ],
      "parameters": {
        "color": 5,
        "height": 624,
        "content": "Slack Trigger"
      },
      "typeVersion": 1
    },
    {
      "id": "d8e65fda-3927-4404-accf-300c30ebef8e",
      "name": "Pineconeベクトルストア",
      "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone",
      "position": [
        1440,
        340
      ],
      "parameters": {
        "mode": "retrieve-as-tool",
        "options": {},
        "pineconeIndex": {
          "__rl": true,
          "mode": "list",
          "value": "test",
          "cachedResultName": "test"
        },
        "toolDescription": "Refer to Database for Work Related Information"
      },
      "typeVersion": 1.3
    },
    {
      "id": "fe5ef41c-9496-461a-b44a-5bb34aca4967",
      "name": "Embeddings OpenAI",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "position": [
        1580,
        500
      ],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.2
    },
    {
      "id": "c11871c8-557c-42f6-ab82-f287b1178798",
      "name": "付箋3",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        0,
        0
      ],
      "parameters": {
        "color": 2,
        "width": 752,
        "height": 1008,
        "content": "🛠 GPT-5 + Pinecone-Powered Slack Auto-Responder — Real-Time, Context-Aware Replies for IT & Engineering Teams\n\nDescription\nCut down on context-switching and keep your Slack threads moving with an AI agent that responds on your behalf, pulling real-time knowledge from a Pinecone vector database. Built for IT, DevOps, and engineering environments, this n8n workflow ensures every reply is accurate, context-aware, and instantly available—without you lifting a finger.\n\nCheck out step-by-step video build of workflows like these here:\nhttps://www.youtube.com/@automatewithmarc\n\nHow It Works\n\nSlack Listener: Triggers when you’re mentioned or messaged in relevant channels.\n\nPinecone RAG Retrieval: Pulls the most relevant technical details from your indexed documents, architecture notes, or runbooks.\n\nGPT-5 Processing: Formats the retrieved data into a clear, concise, and technically accurate reply.\n\nThread-Aware Memory: Maintains the conversation state to avoid repeating answers.\n\nSlack Send-as-User: Posts the message under your identity for seamless integration into team workflows.\n\nWhy IT Teams Will Love It\n\n📚 Always up-to-date — If your Pinecone index is refreshed with system docs, runbooks, or KB articles, the bot will always deliver the latest info.\n\n🏗 Technical context retention — Perfect for answering ongoing infrastructure or incident threads.\n\n⏱ Reduced interruption time — No more breaking focus to answer “quick questions.”\n\n🔐 Controlled outputs — Tune GPT-5 to deliver fact-based, low-fluff responses for critical environments.\n\nCommon Use Cases\n\nDevOps: Automated responses to common CI/CD, deployment, or incident queries.\n\nSupport Engineering: Pulling troubleshooting steps directly from KB entries.\n\nProject Coordination: Instant status updates pulled from sprint or release notes.\n\nPro Tips for Deployment\n\nKeep your Pinecone vector DB updated with the latest architecture diagrams, release notes, and SOPs.\n\nUse embeddings tuned for technical documentation to improve retrieval accuracy.\n\nAdd channel-specific prompts if different teams require different response styles (e.g., #devops vs #product)."
      },
      "typeVersion": 1
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "5a498f5f-a962-44c6-ada3-7426d2cb62c3",
  "connections": {
    "14cb0538-fe7e-4739-9de9-129723400e44": {
      "ai_memory": [
        [
          {
            "node": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "44bc7fc6-9736-48e9-90dc-3098047abdc7": {
      "main": [
        [
          {
            "node": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "fe5ef41c-9496-461a-b44a-5bb34aca4967": {
      "ai_embedding": [
        [
          {
            "node": "d8e65fda-3927-4404-accf-300c30ebef8e",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e": {
      "main": [
        [
          {
            "node": "92db15e1-3228-476f-a3da-1736e8f34d53",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "aabbb277-80f5-4316-8845-f34bce33261b": {
      "ai_languageModel": [
        [
          {
            "node": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "d8e65fda-3927-4404-accf-300c30ebef8e": {
      "ai_tool": [
        [
          {
            "node": "d4e0c080-fdcc-45b9-89ac-da6ff9d1de4e",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    }
  }
}
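
To make the data flow concrete, here is a minimal Python sketch of what the agent does for each incoming message (Slack Trigger -> Pinecone retrieval -> GPT reply -> Send Message). It assumes the openai, pinecone, and slack_sdk packages, the "test" index named in the workflow, a "text" metadata field on the stored vectors, and a default embedding model; in the real workflow the AI Agent node decides on its own when to call the Pinecone tool.

# Minimal sketch (assumption): one retrieval-augmented reply, mirroring the
# Slack Trigger -> Pinecone Vector Store -> GPT -> Send Message chain above.
import os

from openai import OpenAI
from pinecone import Pinecone
from slack_sdk import WebClient

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("test")  # index name from the workflow
slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

SYSTEM_MESSAGE = (
    "You are Jacob, an Engineer at Purple Unicorn IT Solutions. "
    "Respond to your members' message on Jacob's behalf on Slack. "
    "Sound friendly and natural in a typical tech working environment."
)

def answer_slack_message(channel: str, text: str) -> None:
    # 1. Embed the incoming message (the embedding model name is an assumption;
    #    the Embeddings OpenAI node in the workflow uses its default model).
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
    ).data[0].embedding

    # 2. Retrieve project-update context from Pinecone ("text" metadata field
    #    is a placeholder for however the documents were indexed).
    result = index.query(vector=embedding, top_k=3, include_metadata=True)
    context = "\n".join(str((m.metadata or {}).get("text", "")) for m in result.matches)

    # 3. Ask the chat model for a grounded reply. The workflow is configured
    #    for "gpt-5"; substitute a model your account can access.
    reply = openai_client.chat.completions.create(
        model="gpt-5",
        messages=[
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": f"Context:\n{context}\n\nMessage:\n{text}"},
        ],
    ).choices[0].message.content

    # 4. Post the answer back to the channel the message came from.
    slack.chat_postMessage(channel=channel, text=reply)

The n8n version additionally scopes conversation memory per channel via the Simple Memory node's sessionKey ({{ $json.channel }}), so each Slack channel keeps its own rolling context window.
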
FAQ

How do I use this workflow?

Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and update the credentials as needed.
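
If you prefer to script the import rather than paste it into the editor, the sketch below uses the n8n public REST API. It assumes the API is enabled on your instance, that N8N_URL and N8N_API_KEY are placeholders for your own values, and that the export above has been saved as workflow.json; read-only fields such as id, active, and tags may be rejected by the create endpoint, so only name, nodes, connections, and settings are sent.

# Minimal sketch (assumption): importing the exported JSON through the
# n8n public REST API instead of the editor's "Import from JSON" dialog.
import json
import os

import requests

N8N_URL = os.environ.get("N8N_URL", "http://localhost:5678")   # placeholder
N8N_API_KEY = os.environ["N8N_API_KEY"]                        # placeholder

with open("workflow.json", encoding="utf-8") as f:
    exported = json.load(f)

# Keep only the fields the create endpoint is expected to accept; read-only
# fields such as "id", "active", and "tags" may otherwise be rejected.
payload = {
    "name": exported["name"],
    "nodes": exported["nodes"],
    "connections": exported["connections"],
    "settings": exported.get("settings", {}),
}

response = requests.post(
    f"{N8N_URL}/api/v1/workflows",
    headers={"X-N8N-API-KEY": N8N_API_KEY},
    json=payload,
)
response.raise_for_status()
print("Imported workflow id:", response.json()["id"])

After importing, open the workflow and attach your own Slack, OpenAI, and Pinecone credentials to the relevant nodes before activating it.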

What scenarios is this workflow suited for?

Intermediate - AI RAG (retrieval-augmented generation), multimodal AI

Does it cost anything?

The workflow itself is completely free. However, third-party services it relies on (such as the OpenAI API) may incur separate charges.

Workflow Info
Difficulty
Intermediate
Nodes: 11
Categories: 2
Node types: 8
Difficulty description

For experienced users; a moderately complex workflow with 6-15 nodes

Creator

Automate With Marc

@marconi

Automating Start-Up and Business processes. Helping non-techies understand and leverage Agentic AI with easy-to-understand, step-by-step tutorials. Check out my educational content: https://www.youtube.com/@Automatewithmarc

External Links
View on n8n.io
