Rank MCP Servers using Contextual AI Reranker
Advanced
This is an automation workflow in the Miscellaneous, AI RAG, and Multimodal AI categories, comprising 16 nodes. It mainly uses If, Code, Merge, HttpRequest, and Chat nodes, and implements dynamic MCP server selection with OpenAI GPT-4.1 and Contextual AI's reranker.
Prerequisites
- Credentials for the target APIs may be required
- OpenAI API key
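As a quick preflight check before importing, you can query the PulseMCP directory endpoint that the workflow's "PulseMCP Fetch MCP Servers" node calls. A minimal sketch in Node.js 18+ (run as an .mjs script so top-level await works); the small count_per_page value is just for the test, and the fields read below are the ones the workflow itself uses:

// Preflight check for the PulseMCP directory API (no auth header is
// sent by the workflow's HTTP node, so none is used here either).
const res = await fetch(
  "https://api.pulsemcp.com/v0beta/servers?count_per_page=5&offset=0"
);
if (!res.ok) throw new Error(`PulseMCP request failed: ${res.status}`);
const { servers } = await res.json();

// Each entry carries the fields the workflow reads: name,
// short_description, github_stars, package_download_count.
console.log(servers.map((s) => s.name));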
Export Workflow
You can use this workflow by importing the JSON configuration below into n8n.
{
"id": "d1iK84AVOBn7nPRx",
"meta": {
"instanceId": "11121a0a0c6d26991d417aaff350a8e1836bf48496a817dba8b2be23aec9b053",
"templateCredsSetupCompleted": true
},
"name": "Rank MCP Servers using Contextual AI Reranker",
"tags": [],
"nodes": [
{
"id": "59b497fe-1934-4183-8a17-f3b30ca0f5c4",
"name": "OpenAI Chat Model",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"position": [
216,
-56
],
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-4.1-mini"
},
"options": {
"responseFormat": "json_object"
}
},
"credentials": {
"openAiApi": {
"id": "1qWYthUxPflxQXam",
"name": "OpenAi account"
}
},
"typeVersion": 1.2
},
{
"id": "a1c8a119-9b23-44ad-a1c0-2acef910beaf",
"name": "If",
"type": "n8n-nodes-base.if",
"position": [
496,
-280
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "47fd1d36-7a24-4086-9b68-ba5b42d9a714",
"operator": {
"type": "boolean",
"operation": "true",
"singleValue": true
},
"leftValue": "={{ $json.output.parseJson().use_mcp }}",
"rightValue": ""
}
]
}
},
"typeVersion": 2.2
},
{
"id": "3cfcff90-fdee-430a-951a-d30f8f487a6e",
"name": "Merge",
"type": "n8n-nodes-base.merge",
"position": [
944,
-352
],
"parameters": {},
"typeVersion": 3.2
},
{
"id": "33cdc727-eaee-4898-b583-ec57c79362af",
"name": "Merge1",
"type": "n8n-nodes-base.merge",
"position": [
1616,
-352
],
"parameters": {},
"typeVersion": 3.2
},
{
"id": "07450849-96b2-40a7-a9d1-5e1925d76f6c",
"name": "Sticky Note",
"type": "n8n-nodes-base.stickyNote",
"position": [
-624,
-528
],
"parameters": {
"width": 480,
"height": 1152,
"content": "# Dynamic MCP Selection\n## PROBLEM\nThousands of MCP Servers exist and many are updated daily, making server selection difficult for LLMs.\n- Current approaches require manually downloading and configuring servers, limiting flexibility.\n- When multiple servers are pre-configured, LLMs get overwhelmed and confused about which server to use for specific tasks.\n\n### This template enables dynamic server selection from a live PulseMCP directory of 5000+ servers.\n\n## How it works\n- A user query goes to an LLM that decides whether to use MCP servers to fulfill a given query and provides reasoning for its decision.\n- Next, we fetch MCP Servers from Pulse MCP API and format them as documents for reranking\n- Now, we use Contextual AI's Reranker to score and rank all MCP Servers based on our query and instructions\n\n## How to set up\n- Sign up for a free trial of Contextual AI [here](https://app.contextual.ai/) to find CONTEXTUALAI_API_KEY.\n- Click on variables option in left panel and add a new environment variable CONTEXTUALAI_API_KEY.\n- For the baseline model, we have used GPT 4.1 mini, you can find your OpenAI API key[ here](https://platform.openai.com/api-keys)\n\n## How to customize the workflow\n- We use chat trigger to initate the workflow. Feel free to replace it with a webhook or other trigger as required.\n- We use OpenAI's GPT 4.1 mini as the baseline model and reranker prompt generator. You can swap out this section to use the LLM of your choice.\n- We fetch 5000 MCP Servers from the PulseMCP directory as a baseline number, feel free to adjust this parameter as required.\n- We are using Contextual AI's ctxl-rerank-v2-instruct-multilingual reranker model, which can be swapped with any one of the following rerankers: \n 1) ctxl-rerank-v2-instruct-multilingual\n 2) ctxl-rerank-v2-instruct-multilingual-mini\n 3) ctxl-rerank-v1-instruct\n- You can checkout this [blog](https://contextual.ai/blog/introducing-instruction-following-reranker/) for more information about rerankers to make informed choice.\n- If you have feedback or need support, please email reranker-feedback@contextual.ai"
},
"typeVersion": 1
},
{
"id": "4fc2caf6-ba03-4507-82f9-3b88d0460e57",
"name": "Sticky Note1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-96,
-520
],
"parameters": {
"color": 7,
"width": 704,
"height": 608,
"content": "## 1. Determine whether MCP servers are needed\nBased on user's request, LLM determines the need for an MCP Server, provides a reason, and if needed, provides reranking instruction text which will be passed to reranker"
},
"typeVersion": 1
},
{
"id": "37386e9a-6051-4ef9-9e46-cbd4c60c7f80",
"name": "Sticky Note2",
"type": "n8n-nodes-base.stickyNote",
"position": [
672,
-520
],
"parameters": {
"color": 7,
"width": 640,
"height": 400,
"content": "## 2. Fetch MCP Server list and format them\nWe fetch 5000 MCP Servers from PulseMCP directory and parse them as documents to pass it onto the Contextual AI Reranker"
},
"typeVersion": 1
},
{
"id": "eef73a4d-eb47-4d2d-a7a9-44650e5ffc6b",
"name": "Sticky Note3",
"type": "n8n-nodes-base.stickyNote",
"position": [
1368,
-520
],
"parameters": {
"color": 7,
"width": 816,
"height": 400,
"content": "## 3. Rerank the servers and display top five results\nWe use Contextual AI's reranker to re-rank the servers and identify the top 5 servers based ont eh user query and re-ranker instruction, which is then formatted to be displayed in user friendly format.\n- You can checkout this [blog](https://contextual.ai/blog/introducing-instruction-following-reranker/) to learn more about rerankers"
},
"typeVersion": 1
},
{
"id": "b82d5e55-3ff9-4fd9-a37c-fc75c155353e",
"name": "User-Query",
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"position": [
-80,
-280
],
"webhookId": "018048be-810b-4a22-82c4-9e7ed7f05e1a",
"parameters": {
"public": true,
"options": {
"responseMode": "responseNodes",
"allowFileUploads": true
},
"initialMessages": "Try MCP Reranker using Contextual AI's Reranker v2"
},
"typeVersion": 1.3
},
{
"id": "04a2eb05-a82b-4a86-a18d-ed01094ba638",
"name": "意思決定のためのLLMエージェント",
"type": "@n8n/n8n-nodes-langchain.agent",
"position": [
144,
-280
],
"parameters": {
"options": {
"systemMessage": "=Analyze this user query and decide if it requires external tools/APIs (Model Context Protocol (MCP) servers) or can be answered directly.\n Query: \"{{ $json.chatInput }}\"\n\n Consider:\n - Does it need real-time data, web search, or external APIs?\n - Does it require specialized tools (file management, databases, etc.)?\n - Is it a complex task that would benefit from external services?\n - Can it be answered with general knowledge alone?\n\n If MCP is needed, also generate a concise reranking instruction for selecting the best external tools/APIs (MCPs) for this query.\n\n The instruction should:\n - Specify the exact capabilities/features/details that an MCP server requires for this query\n - Look for domain/field specificity and functionality needs\n - Any specific requirements that the user asks for\n - Highlight the user's prioritized criteria for server selection\n\n Base the instruction only on what is explicitly stated or clearly implied in the user's query.\n Do not assume additional requirements or preferences that are not present in the query.\n\n Respond with JSON: {\"use_mcp\": true/false, \"reason\": \"brief explanation\", \"instruction\": \"reranking instruction text or null if not needed\"}"
}
},
"typeVersion": 2.2
},
{
"id": "1cfbc30b-68ef-402f-a8ad-2aad77789d08",
"name": "PulseMCP Fetch MCP Servers",
"type": "n8n-nodes-base.httpRequest",
"position": [
720,
-280
],
"parameters": {
"url": "=https://api.pulsemcp.com/v0beta/servers",
"options": {},
"sendQuery": true,
"queryParameters": {
"parameters": [
{
"name": "count_per_page",
"value": "5000"
},
{
"name": "offset",
"value": "0"
}
]
}
},
"typeVersion": 4.2
},
{
"id": "955343c1-540a-460b-a27f-84d2da2da40a",
"name": "Final Response1",
"type": "@n8n/n8n-nodes-langchain.chat",
"position": [
720,
-88
],
"parameters": {
"message": "= {{ $json.output.parseJson().reason }} Therefore, no MCP Servers are required to fulfill this request.",
"options": {},
"waitUserReply": false
},
"typeVersion": 1
},
{
"id": "a788876e-4bc7-4f6e-82aa-8617ba99cdc9",
"name": "メタデータ付きドキュメントへのMCPサーバーリストの解析",
"type": "n8n-nodes-base.code",
"position": [
1168,
-352
],
"parameters": {
"jsCode": "const servers = $input.first().json.servers || [];\nconst documents = [];\nconst metadata = [];\n\nfor (const server of servers) {\n documents.push(`MCP Server: ${server.name}\\nDescription: ${server.short_description}`);\n metadata.push(`Name: ${server.name}, Stars: ${server.github_stars}, Downloads: ${server.package_download_count}`);\n}\n\nconst aiOutputRaw = $('LLM Agent for Decision-Making').first().json.output;\nconst aiOutput = JSON.parse(aiOutputRaw);\n\nreturn [{\n json: {\n query: $('User-Query').first().json.chatInput,\n instruction: aiOutput.instruction, \n documents,\n metadata,\n servers\n }\n}];\n"
},
"typeVersion": 2
},
{
"id": "0b49e518-d9b6-4865-9cd4-658bb7317927",
"name": "ContextualAI Reranker",
"type": "n8n-nodes-base.httpRequest",
"position": [
1392,
-280
],
"parameters": {
"url": "https://api.contextual.ai/v1/rerank",
"method": "POST",
"options": {},
"sendBody": true,
"sendHeaders": true,
"bodyParameters": {
"parameters": [
{
"name": "query",
"value": "={{ $json.query }}"
},
{
"name": "instruction",
"value": "={{ $json.instruction }}"
},
{
"name": "documents",
"value": "={{ $json.documents }}"
},
{
"name": "metadata",
"value": "={{ $json.metadata }}"
},
{
"name": "model",
"value": "ctxl-rerank-v2-instruct-multilingual"
}
]
},
"headerParameters": {
"parameters": [
{
"name": "Authorization",
"value": "=Bearer {{$vars.CONTEXTUALAI_API_KEY}}"
},
{
"name": "Content-type",
"value": "application/json"
}
]
}
},
"typeVersion": 4.2
},
{
"id": "30cf71cc-d8cb-44af-aaab-4fd9ae0bceb5",
"name": "上位5件の結果のフォーマット",
"type": "n8n-nodes-base.code",
"position": [
1840,
-352
],
"parameters": {
"jsCode": "const results = $input.first().json.results || [];\nconst servers = $('Parse MCP Server list into documents w metadata').first().json.servers || [];\n\nconst top = results.slice(0, 5).map((r, i) => {\n const server = servers[r.index] || {};\n return {\n name: server.name || \"Unknown\",\n description: server.short_description || \"N/A\",\n stars: server.github_stars || 0,\n downloads: server.package_download_count || 0,\n score: r.relevance_score\n };\n});\n\nlet message = \"Top MCP Servers \\n\\n\";\ntop.forEach((s, i) => {\n message += `${i + 1}. ${s.name} (⭐ ${s.stars}, ⬇️ ${s.downloads}, 🔎 ${s.score.toFixed(2)})\\n ${s.description}\\n\\n`;\n});\n\nreturn [{ json: { message } }];\n"
},
"typeVersion": 2
},
{
"id": "395b94c6-bba5-4585-bbf8-e3272699c2ac",
"name": "Final Response2",
"type": "@n8n/n8n-nodes-langchain.chat",
"position": [
2064,
-352
],
"parameters": {
"message": "={{ $json.message }}",
"options": {},
"waitUserReply": false
},
"typeVersion": 1
}
],
"active": true,
"pinData": {},
"settings": {
"callerPolicy": "workflowsFromSameOwner",
"executionOrder": "v1"
},
"versionId": "4fd9aecc-d9c0-4efd-87c7-3385c810fc75",
"connections": {
"a1c8a119-9b23-44ad-a1c0-2acef910beaf": {
"main": [
[
{
"node": "1cfbc30b-68ef-402f-a8ad-2aad77789d08",
"type": "main",
"index": 0
},
{
"node": "3cfcff90-fdee-430a-951a-d30f8f487a6e",
"type": "main",
"index": 1
}
],
[
{
"node": "955343c1-540a-460b-a27f-84d2da2da40a",
"type": "main",
"index": 0
}
]
]
},
"3cfcff90-fdee-430a-951a-d30f8f487a6e": {
"main": [
[
{
"node": "a788876e-4bc7-4f6e-82aa-8617ba99cdc9",
"type": "main",
"index": 0
}
]
]
},
"33cdc727-eaee-4898-b583-ec57c79362af": {
"main": [
[
{
"node": "30cf71cc-d8cb-44af-aaab-4fd9ae0bceb5",
"type": "main",
"index": 0
}
]
]
},
"b82d5e55-3ff9-4fd9-a37c-fc75c155353e": {
"main": [
[
{
"node": "04a2eb05-a82b-4a86-a18d-ed01094ba638",
"type": "main",
"index": 0
}
]
]
},
"59b497fe-1934-4183-8a17-f3b30ca0f5c4": {
"ai_languageModel": [
[
{
"node": "04a2eb05-a82b-4a86-a18d-ed01094ba638",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"0b49e518-d9b6-4865-9cd4-658bb7317927": {
"main": [
[
{
"node": "33cdc727-eaee-4898-b583-ec57c79362af",
"type": "main",
"index": 0
}
]
]
},
"30cf71cc-d8cb-44af-aaab-4fd9ae0bceb5": {
"main": [
[
{
"node": "395b94c6-bba5-4585-bbf8-e3272699c2ac",
"type": "main",
"index": 0
}
]
]
},
"1cfbc30b-68ef-402f-a8ad-2aad77789d08": {
"main": [
[
{
"node": "3cfcff90-fdee-430a-951a-d30f8f487a6e",
"type": "main",
"index": 0
}
]
]
},
"04a2eb05-a82b-4a86-a18d-ed01094ba638": {
"main": [
[
{
"node": "a1c8a119-9b23-44ad-a1c0-2acef910beaf",
"type": "main",
"index": 0
}
]
]
},
"a788876e-4bc7-4f6e-82aa-8617ba99cdc9": {
"main": [
[
{
"node": "0b49e518-d9b6-4865-9cd4-658bb7317927",
"type": "main",
"index": 0
},
{
"node": "33cdc727-eaee-4898-b583-ec57c79362af",
"type": "main",
"index": 1
}
]
]
}
}
}
Frequently Asked Questions
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, select "Import from JSON", paste the configuration, and update the credentials as needed.
What scenarios is this workflow suited for?
Advanced - Miscellaneous, AI RAG, Multimodal AI
Is this workflow paid?
The workflow itself is completely free. However, third-party services used by the workflow (such as the OpenAI API) may incur separate charges.
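If you want to verify your CONTEXTUALAI_API_KEY and the rerank call outside n8n, the sketch below mirrors the "ContextualAI Reranker" HTTP Request node from the JSON above. It assumes Node.js 18+ with the key in your environment; the query, instruction, documents, and metadata are made-up placeholders standing in for what the workflow builds at runtime:

// Mirrors the "ContextualAI Reranker" HTTP Request node.
// Assumes CONTEXTUALAI_API_KEY is set in the environment.
const response = await fetch("https://api.contextual.ai/v1/rerank", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.CONTEXTUALAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    // Placeholder inputs; the workflow derives these from the user query,
    // the agent's reranking instruction, and the PulseMCP server list.
    query: "Find an MCP server for real-time web search",
    instruction: "Prefer servers that expose web search capabilities",
    documents: [
      "MCP Server: example-search\nDescription: Live web search",
      "MCP Server: example-files\nDescription: Local file management",
    ],
    metadata: [
      "Name: example-search, Stars: 10, Downloads: 100",
      "Name: example-files, Stars: 5, Downloads: 50",
    ],
    model: "ctxl-rerank-v2-instruct-multilingual",
  }),
});
if (!response.ok) throw new Error(`Rerank failed: ${response.status}`);
const { results } = await response.json();

// Each result carries { index, relevance_score }; the workflow's
// "Format Top 5 Results" Code node maps index back to the server list.
console.log(results);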
Related Workflows
Real Estate Search Crawler Assistant
Answer real estate questions with AI using PropertyFinder.ae, OpenRouter, and SerpAPI
If, Set, Code · 18 nodes · George Zargaryan · Miscellaneous
PDF to Order
Automatically convert PDF purchase orders into Adobe Commerce sales orders using AI
If, Set, Code · 96 nodes · JKingma · Document Extraction
Automate Instagram Carousel Ads with AI Chat
Create and distribute social media carousels across five platforms using AI and Blotato
If, Wait, Merge · 29 nodes · Sabrina Ramonov 🍄 · Miscellaneous
Contextual Hybrid RAG AI Copy
Sync Google Drive to a Supabase contextual vector database for RAG applications
If, Set, Code · 76 nodes · Michael Taleb · AI RAG
✨🩷 Automated Social Media Content Publishing Factory + System Prompt Composition
AI-powered multi-platform social media content factory based on dynamic system prompts and GPT-4o
If, Set, Code · 100 nodes · Amit Mehta · Content Creation
Use OpenAI Embeddings for BigQuery RAG
Answer document-related questions using BigQuery RAG and OpenAI
Set, Http Request, Agent · 24 nodes · Dataki · Miscellaneous