Compare AI Models with Nvidia API: Qwen, DeepSeek, Seed-OSS & Nemotron
Intermediate
This automation workflow contains 11 nodes and mainly uses node types such as Set, Merge, Switch, Webhook, and HTTP Request.
Prerequisites
- HTTP webhook endpoint (auto-generated by n8n); see the trigger sketch after this list
- Credentials for the target API, where required
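In the exported workflow, the Switch node routes on a webhook body field named `AI Model` (string values "1" to "5") and the HTTP Request nodes read the prompt from `Insert your Query`. The sketch below is a minimal TypeScript (Node 18+) caller; the base URL is a placeholder for your own n8n instance, and the field names are only inferred from the workflow's expressions, so adjust them if you rename anything.

// Minimal sketch of triggering the imported workflow's webhook.
// Assumptions: the host is hypothetical, the webhook path is taken from the
// export below, and the body fields mirror the expressions used by the
// Switch and HTTP Request nodes ($json['AI Model'], $json['Insert your Query']).
const N8N_BASE = "https://your-n8n-instance.example.com";             // placeholder host
const WEBHOOK_PATH = "/webhook/6737b4b1-3c2f-47b9-89ff-a012c1fa4f29"; // path from the export

async function askModels(query: string, model: string): Promise<unknown> {
  const res = await fetch(`${N8N_BASE}${WEBHOOK_PATH}`, {
    method: "POST",                                  // the Webhook Trigger listens for POST
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      "AI Model": model,           // "1" to "5" selects the branch(es) in the AI Model Router
      "Insert your Query": query,  // forwarded to the Nvidia chat/completions requests
    }),
  });
  if (!res.ok) throw new Error(`Webhook returned HTTP ${res.status}`);
  return res.json();               // aggregated output from the Respond to Webhook node
}

// Example call: ask one question and print whatever the workflow returns.
askModels("Explain the trade-offs of querying several LLMs in parallel.", "1")
  .then((out) => console.log(JSON.stringify(out, null, 2)))
  .catch(console.error);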
Category
-
Export Workflow
You can use this workflow by importing the JSON configuration below into n8n.
{
"id": "vwBMikFazJ8dTN7C",
"meta": {
"instanceId": "b91e510ebae4127f953fd2f5f8d40d58ca1e71c746d4500c12ae86aad04c1502",
"templateCredsSetupCompleted": true
},
"name": "Compare AI Models with Nvidia API: Qwen, DeepSeek, Seed-OSS & Nemotron",
"tags": [],
"nodes": [
{
"id": "2fd77eab-0817-4d39-a206-4506b5373765",
"name": "Webhook Trigger",
"type": "n8n-nodes-base.webhook",
"position": [
-144,
-528
],
"webhookId": "6737b4b1-3c2f-47b9-89ff-a012c1fa4f29",
"parameters": {
"path": "6737b4b1-3c2f-47b9-89ff-a012c1fa4f29",
"options": {},
"httpMethod": "POST",
"responseMode": "responseNode"
},
"typeVersion": 2.1
},
{
"id": "1f78059c-f7a8-493c-886e-05047d83a7b4",
"name": "付箋",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1072,
-848
],
"parameters": {
"width": 864,
"height": 944,
"content": "# Compare AI Models with Nvidia API: Qwen, DeepSeek, Seed-OSS & Nemotron\n\n## Overview\n- Queries four AI models simultaneously via Nvidia's API in 2-3 seconds—4x faster than sequential processing. Perfect for ensemble intelligence, model comparison, or redundancy.\n\n\n## How It Works\n- Webhook Trigger receives queries\n- AI Router distributes to four parallel branches: Qwen2, SyncGenInstruct, DeepSeek-v3.1, and Nvidia Nemotron\n- Merge Node aggregates responses (continues with partial results on timeout)\n- Format Response structures output\n- Webhook Response returns JSON with all model outputs\n\n## Prerequisites\n\n- Nvidia API key from [build.nvidia.com](https://build.nvidia.com) (free tier available)\n- n8n v1.0.0+ with HTTP access\n- Model access in Nvidia dashboard\n\n## Setup\n\n1. Import workflow JSON\n2. Configure HTTP nodes: Authentication → Header Auth → `Authorization: Bearer YOUR_TOKEN_HERE`\n3. Activate workflow and test\n\n## Customization\n\nAdjust temperature/max_tokens in HTTP nodes, add/remove models by duplicating nodes, change primary response selection in Format node, or add Redis caching for frequent queries.\n\n## Use Cases\n\nMulti-model chatbots, A/B testing, code review, research assistance, and production systems with AI fallback.\n"
},
"typeVersion": 1
},
{
"id": "e7f74b77-470b-49e4-a191-577afda45296",
"name": "付箋4",
"type": "n8n-nodes-base.stickyNote",
"position": [
-192,
-848
],
"parameters": {
"color": 3,
"width": 1312,
"height": 784,
"content": ""
},
"typeVersion": 1
},
{
"id": "8a0ca7d2-f4c0-4a95-9a7a-63c9d40ef77e",
"name": "応答のフォーマット",
"type": "n8n-nodes-base.set",
"position": [
720,
-544
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "bbfd9a05-0e6c-44cf-80e2-2a79ecb3f67a",
"name": "choices[0].message.content",
"type": "string",
"value": "={{ $json.choices[0].message.content }}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "20e9c15e-cd3d-4624-8620-5e100081bab1",
"name": "集約AIモデル応答の送信",
"type": "n8n-nodes-base.respondToWebhook",
"position": [
944,
-544
],
"parameters": {
"options": {}
},
"typeVersion": 1.4
},
{
"id": "0b86c542-74ce-4456-b025-07025e6f57a7",
"name": "AIモデルの統合",
"type": "n8n-nodes-base.merge",
"position": [
528,
-576
],
"parameters": {
"numberInputs": 4
},
"typeVersion": 3.2
},
{
"id": "556f837e-5958-4121-9142-f3a05b560190",
"name": "AIモデルルーター",
"type": "n8n-nodes-base.switch",
"position": [
80,
-576
],
"parameters": {
"rules": {
"values": [
{
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "8c79834b-efde-4096-8a97-687dbaac1eaa",
"operator": {
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $json['AI Model'] }}",
"rightValue": "1"
}
]
}
},
{
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "6f423cc4-08e3-41aa-8c5a-40a2d37a248d",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $json['AI Model'] }}",
"rightValue": "2"
}
]
}
},
{
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "b8ba2c94-78d3-4325-8dda-e139d2dad24d",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $json['AI Model'] }}",
"rightValue": "3"
}
]
}
},
{
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "0d1a15d3-047f-4489-896e-af2c079de4ae",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $json['AI Model'] }}",
"rightValue": "4"
}
]
}
},
{
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "634191cd-73c9-4335-987b-93e07ba7ab0f",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $json['AI Model'] }}",
"rightValue": "5"
}
]
}
}
]
},
"options": {}
},
"typeVersion": 3.2
},
{
"id": "38a42944-835b-422c-b872-b20c8f899210",
"name": "Query Qwen3-next-80b-a3b-thinking (Alibaba)",
"type": "n8n-nodes-base.httpRequest",
"position": [
304,
-832
],
"parameters": {
"url": "https://integrate.api.nvidia.com/v1/chat/completions",
"method": "POST",
"options": {},
"jsonBody": "={\n \"model\": \"qwen/qwen3-next-80b-a3b-thinking\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"{{ $('On form submission').item.json['Insert your Query'] }}\"\n }\n ],\n \"temperature\": 0.7,\n \"max_tokens\": 1024\n} ",
"sendBody": true,
"sendHeaders": true,
"specifyBody": "json",
"authentication": "genericCredentialType",
"headerParameters": {
"parameters": [
{
"name": "accept",
"value": "application/json"
}
]
}
},
"credentials": {
"httpBearerAuth": {
"id": "AM38cMMgmt5pCa3J",
"name": "Bearer YOUR_TOKEN_HERE"
}
},
"typeVersion": 4.2
},
{
"id": "0d948f27-f325-4776-88f5-17993c22f382",
"name": "Query Bytedance/seed-oss-36b-instruct (Bytedance)",
"type": "n8n-nodes-base.httpRequest",
"position": [
304,
-640
],
"parameters": {
"url": "https://integrate.api.nvidia.com/v1/chat/completions",
"method": "POST",
"options": {},
"jsonBody": "={\n \"model\": \"bytedance/seed-oss-36b-instruct\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"{{ $json['Insert your Query'] }}\"\n }\n ],\n \"temperature\": 1.1,\n \"top_p\": 0.95,\n \"max_tokens\": 4096,\n \"thinking_budget\": -1,\n \"frequency_penalty\": 0,\n \"presence_penalty\": 0,\n \"stream\": false\n}",
"sendBody": true,
"specifyBody": "json",
"authentication": "genericCredentialType"
},
"credentials": {
"httpBearerAuth": {
"id": "81rXxn13x9fyoYSK",
"name": "Bearer YOUR_TOKEN_HERE Nvidia_bytedance/seed-oss-36b-instruct"
}
},
"typeVersion": 4.2
},
{
"id": "8fb1c1df-6544-4275-af67-c7f85b9fed92",
"name": "Query Nvidia-nemotron-nano-9b-v2 (Nvidia)",
"type": "n8n-nodes-base.httpRequest",
"position": [
304,
-256
],
"parameters": {
"url": "https://integrate.api.nvidia.com/v1/chat/completions",
"method": "POST",
"options": {},
"jsonBody": "{\n \"model\": \"nvidia/nvidia-nemotron-nano-9b-v2\",\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"/think\"\n }\n ],\n \"temperature\": 0.6,\n \"top_p\": 0.95,\n \"max_tokens\": 2048,\n \"min_thinking_tokens\": 1024,\n \"max_thinking_tokens\": 2048,\n \"frequency_penalty\": 0,\n \"presence_penalty\": 0,\n \"stream\": true\n}",
"sendBody": true,
"sendHeaders": true,
"specifyBody": "json",
"authentication": "genericCredentialType",
"headerParameters": {
"parameters": [
{}
]
}
},
"credentials": {
"httpBearerAuth": {
"id": "De0YbIT8HKmoZ2QW",
"name": "Bearer YOUR_TOKEN_HERE"
}
},
"typeVersion": 4.2
},
{
"id": "d0e9668b-1c75-4e41-90ec-684abeae0d49",
"name": "Query DeepSeekv3_1",
"type": "n8n-nodes-base.httpRequest",
"position": [
304,
-432
],
"parameters": {
"url": "https://integrate.api.nvidia.com/v1/chat/completions",
"method": "POST",
"options": {},
"jsonBody": "={\n \"model\": \"deepseek-ai/deepseek-r1\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"{{ $('On form submission').item.json['Insert your Query'] }}\"\n }\n ],\n \"temperature\": 0.6,\n \"top_p\": 0.7,\n \"frequency_penalty\": 0,\n \"presence_penalty\": 0,\n \"max_tokens\": 4096,\n \"stream\": true\n} ",
"sendBody": true,
"sendHeaders": true,
"specifyBody": "json",
"authentication": "genericCredentialType",
"headerParameters": {
"parameters": [
{
"name": "Accept",
"value": "application/json"
}
]
}
},
"credentials": {
"httpBearerAuth": {
"id": "C39RW210A9LPDPUu",
"name": "Bearer YOUR_TOKEN_HERE Nvidia_Deepseekv31"
}
},
"typeVersion": 4.2
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "34faee65-7df2-4012-93bf-50660415c2d2",
"connections": {
"0b86c542-74ce-4456-b025-07025e6f57a7": {
"main": [
[
{
"node": "8a0ca7d2-f4c0-4a95-9a7a-63c9d40ef77e",
"type": "main",
"index": 0
}
]
]
},
"556f837e-5958-4121-9142-f3a05b560190": {
"main": [
[
{
"node": "38a42944-835b-422c-b872-b20c8f899210",
"type": "main",
"index": 0
}
],
[
{
"node": "0d948f27-f325-4776-88f5-17993c22f382",
"type": "main",
"index": 0
}
],
[
{
"node": "d0e9668b-1c75-4e41-90ec-684abeae0d49",
"type": "main",
"index": 0
}
],
[
{
"node": "8fb1c1df-6544-4275-af67-c7f85b9fed92",
"type": "main",
"index": 0
}
],
[
{
"node": "38a42944-835b-422c-b872-b20c8f899210",
"type": "main",
"index": 0
},
{
"node": "0d948f27-f325-4776-88f5-17993c22f382",
"type": "main",
"index": 0
},
{
"node": "8fb1c1df-6544-4275-af67-c7f85b9fed92",
"type": "main",
"index": 0
}
]
]
},
"8a0ca7d2-f4c0-4a95-9a7a-63c9d40ef77e": {
"main": [
[
{
"node": "20e9c15e-cd3d-4624-8620-5e100081bab1",
"type": "main",
"index": 0
}
]
]
},
"2fd77eab-0817-4d39-a206-4506b5373765": {
"main": [
[
{
"node": "556f837e-5958-4121-9142-f3a05b560190",
"type": "main",
"index": 0
}
]
]
},
"d0e9668b-1c75-4e41-90ec-684abeae0d49": {
"main": [
[
{
"node": "0b86c542-74ce-4456-b025-07025e6f57a7",
"type": "main",
"index": 2
}
]
]
},
"8fb1c1df-6544-4275-af67-c7f85b9fed92": {
"main": [
[
{
"node": "0b86c542-74ce-4456-b025-07025e6f57a7",
"type": "main",
"index": 3
}
]
]
},
"38a42944-835b-422c-b872-b20c8f899210": {
"main": [
[
{
"node": "0b86c542-74ce-4456-b025-07025e6f57a7",
"type": "main",
"index": 0
}
]
]
},
"0d948f27-f325-4776-88f5-17993c22f382": {
"main": [
[
{
"node": "0b86c542-74ce-4456-b025-07025e6f57a7",
"type": "main",
"index": 1
}
]
]
}
}
}
FAQ
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, select "Import from JSON", paste the configuration, and update the credentials as needed. Once the workflow is active, you can call its webhook and unpack the aggregated response as sketched below.
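The exact response shape depends on how the Merge and Respond to Webhook nodes are configured after import; based on the single field kept by the Format Response node (`choices[0].message.content`), a client can expect each model's reply to arrive in roughly the form below. Treat this TypeScript sketch as an assumption to verify against a real execution, not a guaranteed contract.

// Hedged sketch of the payload a client might receive from the webhook response.
// It mirrors the field kept by the Format Response node (choices[0].message.content);
// whether the client sees one object or an array of them depends on the
// Merge / Respond to Webhook settings.
interface ModelReply {
  choices: Array<{
    message: {
      content: string; // the model's text answer
    };
  }>;
}

// Pull the answer text out of either form (single item or merged array).
function extractAnswers(payload: ModelReply | ModelReply[]): string[] {
  const items = Array.isArray(payload) ? payload : [payload];
  return items.map((item) => item.choices?.[0]?.message?.content ?? "");
}

// Example, reusing the trigger sketch shown earlier:
//   extractAnswers(await askModels("your question", "1") as ModelReply);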
What scenarios is this workflow suited to?
Multi-model chatbots, A/B testing, code review, research assistance, and production systems that need an AI fallback.
Is it paid?
The workflow itself is completely free. However, third-party services it calls (such as the Nvidia API used here) may charge separately.
Related Workflows
Qwen3-VL-8B-Thinking Travel Planner
AI-optimized travel itinerary generation using Skyscanner, Booking.com, and Gmail
Set, Code, Gmail, +
18 nodes · Cheng Siong Chin
Personal Productivity
AI-Driven Peer Review System with Automatic Rubric Generation
Automated peer-review assignments using GPT-4-nano, Slack, and email notifications
Set, Code, Slack, +
22 nodes · Cheng Siong Chin
Document Extraction
AI-Driven Grok-3 Early Warning System with Family Notifications
Health monitoring based on Grok-3 AI analysis, with email alerts to family members and physicians
If, Set, Merge, +
17 nodes · Cheng Siong Chin
Personal Productivity
Automate Microsoft Teams Meeting Analysis with GPT-4.1, Outlook, and Mem.ai
Automates analysis of Microsoft Teams meetings using GPT-4.1, Outlook, and Mem.ai
If, Set, Code, +
61 nodes · Wayne Simpson
Human Resources
Bitrix24 Open Channel RAG Chatbot Application Workflow Sample
AI-powered open channel RAG chatbot for Bitrix24
If, Set, Merge, +
34 nodes · Ferenc Erb
Other
n8n Placeholdarr for Plex
Automated media library with on-demand downloads for Radarr/Sonarr and Plex
If, Set, Ssh, +
87 nodes · Arjan ter Heegde
File Management
Workflow Information
Difficulty: Intermediate
Nodes: 11
Category: -
Node types: 7
Author
Cheng Siong Chin
@cschin
Prof. Cheng Siong Chin serves as Chair Professor in Intelligent Systems Modelling and Simulation at Newcastle University, Singapore. His academic credentials include an M.Sc. in Advanced Control and Systems Engineering from The University of Manchester and a Ph.D. in Robotics from Nanyang Technological University.
External Links
View on n8n.io →