🦙👁️👁️ Find the Best Local Ollama Vision Models by Comparison
Advanced
This is an AI automation workflow with 19 nodes. It mainly uses the Set, SplitOut, GoogleDocs, GoogleDrive, and HttpRequest nodes to deliver AI-powered automation: it compares image analysis across local Ollama vision models and saves the results to Google Docs.
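Outside n8n, the comparison this workflow performs boils down to posting the same base64-encoded image and prompt to each local vision model via Ollama's /api/chat endpoint. Below is a minimal Python sketch of that idea, assuming Ollama is running on the default local port, the models in the workflow's list have already been pulled, and a placeholder image file name; the prompt is abbreviated here, while the workflow's own prompt is far more detailed.

# Minimal sketch of what the workflow automates: send the same image and
# prompt to several locally pulled Ollama vision models and collect each
# model's answer. The image path is illustrative; replace it with your own.
import base64
import json
import urllib.request

OLLAMA_CHAT_URL = "http://127.0.0.1:11434/api/chat"   # default local endpoint
MODELS = ["granite3.2-vision", "llama3.2-vision", "gemma3:27b"]
PROMPT = "Analyze this image in exhaustive detail."

with open("sample.jpg", "rb") as f:                    # any local test image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

for model in MODELS:
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": PROMPT, "images": [image_b64]}
        ],
        "stream": False,
    }
    req = urllib.request.Request(
        OLLAMA_CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["message"]["content"]
    # The workflow wraps each answer in <model> tags before appending it to
    # a Google Doc; print the same structure here for easy comparison.
    print(f"<{model}>\n{answer}\n</{model}>\n")

In the workflow itself, the equivalent request body is assembled by a Set node and sent by the HTTP Request node, while the Split Out and Loop Over Items nodes handle the per-model iteration; each answer is then appended to a Google Doc.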
Prerequisites
- Google Drive API credentials
- Credentials for the target API may be required
Nodes used (19)
Export Workflow
You can use this workflow by importing the JSON configuration below into n8n.
{
"id": "keFEBUqHOrsib60G",
"meta": {
"instanceId": "31e69f7f4a77bf465b805824e303232f0227212ae922d12133a0f96ffeab4fef",
"templateCredsSetupCompleted": true
},
"name": "🦙👁️👁️ Find the Best Local Ollama Vision Models by Comparison",
"tags": [],
"nodes": [
{
"id": "dd2f1201-a78a-4ea9-b5ff-7543673e8445",
"name": "付箋1",
"type": "n8n-nodes-base.stickyNote",
"position": [
1080,
1160
],
"parameters": {
"color": 4,
"width": 340,
"height": 340,
"content": "## 👁️ Analyze Image with Local Ollama LLM\n"
},
"typeVersion": 1
},
{
"id": "81975be4-1e40-41e9-b938-612270f80a92",
"name": "付箋2",
"type": "n8n-nodes-base.stickyNote",
"position": [
280,
640
],
"parameters": {
"color": 4,
"width": 300,
"height": 300,
"content": "## 👍Try Me!"
},
"typeVersion": 1
},
{
"id": "3a56f75b-4836-4c37-a246-83ef6507c581",
"name": "'Test workflow'クリック時",
"type": "n8n-nodes-base.manualTrigger",
"position": [
380,
740
],
"parameters": {},
"typeVersion": 1
},
{
"id": "bb4570c7-269c-4d28-85d4-183ca2fabb89",
"name": "Ollama LLMリクエスト",
"type": "n8n-nodes-base.httpRequest",
"position": [
1200,
1280
],
"parameters": {
"url": "http://127.0.0.1:11434/api/chat",
"method": "POST",
"options": {},
"jsonBody": "={{ $json.body }}",
"sendBody": true,
"sendHeaders": true,
"specifyBody": "json",
"headerParameters": {
"parameters": [
{
"name": "Content-Type",
"value": " application/json"
}
]
}
},
"typeVersion": 4.2
},
{
"id": "0a6e064d-9a67-4cd2-b3ed-247a1684c1fb",
"name": "リクエストボディ作成",
"type": "n8n-nodes-base.set",
"position": [
840,
1280
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "be9a8e21-9bb6-4588-a77a-61bc2def0648",
"name": "body",
"type": "string",
"value": "={\n \"model\": \"{{ $json.models }}\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"{{ $json.user_prompt }}\",\n \"images\": [\"{{ $('List of Vision Models').item.json.data }}\"]\n }\n ],\n \"stream\": false\n}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "f3119aff-62bc-4ffc-abe9-835dea105d76",
"name": "Ollama モデルループ処理",
"type": "n8n-nodes-base.splitInBatches",
"position": [
360,
1180
],
"parameters": {
"options": {}
},
"typeVersion": 3
},
{
"id": "1e26f493-881e-40a3-922d-5c8d6cb86374",
"name": "結果オブジェクト作成",
"type": "n8n-nodes-base.set",
"position": [
620,
1080
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "780086e5-2733-435a-90b5-fd10946ddd7a",
"name": "result",
"type": "object",
"value": "={{ $json }}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "ac0d3ada-8890-4945-aedb-fd6be4ffc020",
"name": "一般画像プロンプト",
"type": "n8n-nodes-base.set",
"position": [
620,
1280
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "be9a8e21-9bb6-4588-a77a-61bc2def0648",
"name": "user_prompt",
"type": "string",
"value": "=Analyze this image in exhaustive detail using this structure:\\n\\n1. **Comprehensive Inventory**\\n- List all visible objects with descriptors (size, color, position)\\n- Group related items hierarchically (primary subject → secondary elements → background)\\n- Note object conditions (intact/broken, new/aged)\\n\\n2. **Contextual Analysis**\\n- Identify probable setting/location with supporting evidence\\n- Determine time period/season through visual cues\\n- Analyze lighting conditions and shadow orientation\\n\\n3. **Spatial Relationships**\\n- Map object positions using grid coordinates (front/center/back, left/right)\\n- Describe size comparisons between elements\\n- Note overlapping/occluded objects\\n\\n4. **Textual Elements**\\n- Extract ALL text with font characteristics\\n- Identify logos/brands with confidence estimates\\n- Translate non-native text with cultural context\\n\\nFormat response in markdown with clear section headers and bullet points."
}
]
},
"includeOtherFields": true
},
"typeVersion": 3.4
},
{
"id": "b7faafca-a179-43b2-8318-29b4659d424f",
"name": "不動産スプレッドシートプロンプト",
"type": "n8n-nodes-base.set",
"disabled": true,
"position": [
620,
1480
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "be9a8e21-9bb6-4588-a77a-61bc2def0648",
"name": "user_prompt",
"type": "string",
"value": "=Analyze this spreadsheet image in exhaustive detail using this structure:\\n\\n1. **Table Structure**\\n- Identify all column headers (months) in order\\n- List all row labels exactly as shown\\n- Note any table titles, footnotes, or metadata\\n\\n2. **Data Extraction**\\n- Extract all numeric values with precise formatting (decimals, currency symbols)\\n- Maintain exact numbers for Listings, Sales, Months of Inventory\\n- Preserve currency formatting for Avg. Price values\\n- Include DOM values from separate section\\n\\n3. **Markdown Representation**\\n- Convert the entire spreadsheet into a perfectly formatted markdown table\\n- Maintain alignment of all columns and rows\\n- Preserve all relationships between data points\\n\\n4. **Data Analysis**\\n- Identify trends across months for each metric\\n- Note highest and lowest values in each category\\n- Calculate percentage changes between months where relevant\\n\\nFormat response with a complete markdown table first, followed by brief analysis of the real estate market data shown."
}
]
},
"includeOtherFields": true
},
"typeVersion": 3.4
},
{
"id": "5c28053b-a44e-494b-ad03-27d5b217f6b3",
"name": "視覚モデル一覧",
"type": "n8n-nodes-base.set",
"position": [
1440,
740
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "86add667-cd96-4e1c-877a-c437f6b1e040",
"name": "models",
"type": "array",
"value": "=[\"granite3.2-vision\",\"llama3.2-vision\",\"gemma3:27b\"]"
}
]
},
"includeOtherFields": true
},
"typeVersion": 3.4
},
{
"id": "58c2fcd2-0ac4-4684-a30f-37650cc8dac1",
"name": "Base64文字列取得",
"type": "n8n-nodes-base.extractFromFile",
"position": [
1140,
740
],
"parameters": {
"options": {},
"operation": "binaryToPropery"
},
"typeVersion": 1
},
{
"id": "b60c0589-397c-445b-a084-a791bef95b15",
"name": "Google Driveから画像ファイルをダウンロード",
"type": "n8n-nodes-base.googleDrive",
"position": [
920,
740
],
"parameters": {
"fileId": {
"__rl": true,
"mode": "id",
"value": "={{ $json.id }}"
},
"options": {},
"operation": "download"
},
"credentials": {
"googleDriveOAuth2Api": {
"id": "UhdXGYLTAJbsa0xX",
"name": "Google Drive account"
}
},
"typeVersion": 3
},
{
"id": "55a8f511-fdb5-4830-837a-104cbf6c6167",
"name": "ループ処理用視覚モデルリスト分割",
"type": "n8n-nodes-base.splitOut",
"position": [
1640,
740
],
"parameters": {
"options": {},
"fieldToSplitOut": "models"
},
"typeVersion": 1
},
{
"id": "8e48c8bd-15c9-4389-8698-77dc5ae698bc",
"name": "付箋",
"type": "n8n-nodes-base.stickyNote",
"position": [
620,
640
],
"parameters": {
"color": 7,
"width": 700,
"height": 300,
"content": "## ⬇️Download Image from Google Drive"
},
"typeVersion": 1
},
{
"id": "6ed8925d-b031-4052-9009-91e2e7d8f360",
"name": "付箋3",
"type": "n8n-nodes-base.stickyNote",
"position": [
1360,
640
],
"parameters": {
"color": 7,
"width": 460,
"height": 300,
"content": "## 📜Create List of Local Ollama Vision Models"
},
"typeVersion": 1
},
{
"id": "ae383e4f-21e6-479f-97e0-029f43dacc56",
"name": "付箋4",
"type": "n8n-nodes-base.stickyNote",
"position": [
280,
980
],
"parameters": {
"color": 7,
"width": 1200,
"height": 720,
"content": "## 🦙👁️👁️ Process Image with Ollama Vision Models and Save Results to Google Drive"
},
"typeVersion": 1
},
{
"id": "a27bcb6e-c6e8-4777-9887-428363256b4a",
"name": "Google Doc画像ID",
"type": "n8n-nodes-base.set",
"position": [
700,
740
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "7d5a0385-4d8b-4f70-b3b0-4182bda29e5c",
"name": "id",
"type": "string",
"value": "=[your-google-id]"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "8e6114f8-c724-40fd-9be3-253e3cb882fa",
"name": "画像説明をGoogle Docsに保存",
"type": "n8n-nodes-base.googleDocs",
"position": [
840,
1080
],
"parameters": {
"actionsUi": {
"actionFields": [
{
"text": "=<{{ $json.result.model }}>\n{{ $json.result.message.content }}\n</{{ $json.result.model }}>\n\n",
"action": "insert"
}
]
},
"operation": "update",
"documentURL": "[your-google-doc-id]"
},
"credentials": {
"googleDocsOAuth2Api": {
"id": "YWEHuG28zOt532MQ",
"name": "Google Docs account"
}
},
"typeVersion": 2
},
{
"id": "abed9af8-0d50-413a-9e6d-c6100ddaf015",
"name": "付箋5",
"type": "n8n-nodes-base.stickyNote",
"position": [
-240,
640
],
"parameters": {
"width": 480,
"height": 1340,
"content": "## 🦙👁️👁️ Find the Best Local Ollama Vision Models for Your Use Case\n\nProcess images using locally hosted Ollama Vision Models to extract detailed descriptions, contextual insights, and structured data. Save results directly to Google Docs for efficient collaboration.\n\n### Who is this for?\nThis workflow is ideal for developers, data analysts, and AI enthusiasts who need to process and analyze images using locally hosted Ollama Vision Language Models. It’s particularly useful for tasks requiring detailed image descriptions, contextual analysis, and structured data extraction.\n\n### What problem is this workflow solving? / Use Case\nThe workflow solves the challenge of extracting meaningful insights from images in exhaustive detail, such as identifying objects, analyzing spatial relationships, extracting textual elements, and providing contextual information. This is especially helpful for applications in real estate, marketing, engineering, and research.\n\n### What this workflow does\nThis workflow:\n1. Downloads an image file from Google Drive.\n2. Processes the image using multiple Ollama Vision Models (e.g., Granite3.2-Vision, Llama3.2-Vision).\n3. Generates detailed markdown-based descriptions of the image.\n4. Saves the output to a Google Docs file for easy sharing and further analysis.\n\n### Setup\n1. Ensure you have access to a local instance of Ollama. https://ollama.com/\n2. Pull the Ollama vision models.\n3. Configure your Google Drive and Google Docs credentials in n8n.\n4. Provide the image file ID from Google Drive in the designated node.\n5. Update the list of Ollama vision models\n6. Test the workflow by clicking ‘Test Workflow’ to trigger the process.\n\n### How to customize this workflow to your needs\n- Replace the image source with another provider if needed (e.g., AWS S3 or Dropbox).\n- Modify the prompts in the \"General Image Prompt\" node to suit specific analysis requirements.\n- Add additional nodes for post-processing or integrating results into other platforms like Slack or HubSpot.\n\n## Key Features:\n- **Detailed Image Analysis**: Extracts comprehensive details about objects, spatial relationships, text elements, and contextual settings.\n- **Multi-Model Support**: Utilizes multiple vision models dynamically for optimal performance.\n- **Markdown Output**: Formats results in markdown for easy readability and documentation.\n- **Google Drive Integration**: Seamlessly downloads images and saves results to Google Docs.\n\n\n"
},
"typeVersion": 1
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "a337e019-1c9a-4736-8dcd-4f12a9d989f4",
"connections": {
"58c2fcd2-0ac4-4684-a30f-37650cc8dac1": {
"main": [
[
{
"node": "5c28053b-a44e-494b-ad03-27d5b217f6b3",
"type": "main",
"index": 0
}
]
]
},
"bb4570c7-269c-4d28-85d4-183ca2fabb89": {
"main": [
[
{
"node": "f3119aff-62bc-4ffc-abe9-835dea105d76",
"type": "main",
"index": 0
}
]
]
},
"0a6e064d-9a67-4cd2-b3ed-247a1684c1fb": {
"main": [
[
{
"node": "bb4570c7-269c-4d28-85d4-183ca2fabb89",
"type": "main",
"index": 0
}
]
]
},
"a27bcb6e-c6e8-4777-9887-428363256b4a": {
"main": [
[
{
"node": "b60c0589-397c-445b-a084-a791bef95b15",
"type": "main",
"index": 0
}
]
]
},
"ac0d3ada-8890-4945-aedb-fd6be4ffc020": {
"main": [
[
{
"node": "0a6e064d-9a67-4cd2-b3ed-247a1684c1fb",
"type": "main",
"index": 0
}
]
]
},
"1e26f493-881e-40a3-922d-5c8d6cb86374": {
"main": [
[
{
"node": "8e6114f8-c724-40fd-9be3-253e3cb882fa",
"type": "main",
"index": 0
}
]
]
},
"5c28053b-a44e-494b-ad03-27d5b217f6b3": {
"main": [
[
{
"node": "55a8f511-fdb5-4830-837a-104cbf6c6167",
"type": "main",
"index": 0
}
]
]
},
"f3119aff-62bc-4ffc-abe9-835dea105d76": {
"main": [
[
{
"node": "1e26f493-881e-40a3-922d-5c8d6cb86374",
"type": "main",
"index": 0
}
],
[
{
"node": "ac0d3ada-8890-4945-aedb-fd6be4ffc020",
"type": "main",
"index": 0
}
]
]
},
"b7faafca-a179-43b2-8318-29b4659d424f": {
"main": [
[]
]
},
"3a56f75b-4836-4c37-a246-83ef6507c581": {
"main": [
[
{
"node": "a27bcb6e-c6e8-4777-9887-428363256b4a",
"type": "main",
"index": 0
}
]
]
},
"b60c0589-397c-445b-a084-a791bef95b15": {
"main": [
[
{
"node": "58c2fcd2-0ac4-4684-a30f-37650cc8dac1",
"type": "main",
"index": 0
}
]
]
},
"55a8f511-fdb5-4830-837a-104cbf6c6167": {
"main": [
[
{
"node": "f3119aff-62bc-4ffc-abe9-835dea105d76",
"type": "main",
"index": 0
}
]
]
}
}
}
Frequently Asked Questions
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, select "Import from JSON", paste the configuration, and update the credentials as needed.
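Note that the HTTP request will fail if a model listed in the workflow's "models" array has not been pulled locally. A small check you can run beforehand, assuming the default local Ollama endpoint, is sketched below.

# Minimal sketch: list the models currently available on a local Ollama
# instance, so the "models" array in the workflow can be updated to match.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/tags") as resp:
    tags = json.loads(resp.read())

for model in tags.get("models", []):
    print(model["name"])   # e.g. "llama3.2-vision:latest"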
What scenarios is this workflow suited for?
Advanced - Artificial Intelligence
Is it paid?
The workflow itself is completely free. However, third-party services used within the workflow (such as the OpenAI API) may incur separate charges.
Related Workflows
- Jina.ai-Based Multi-Page Website Crawler (16 nodes, Joseph LePage, Artificial Intelligence; uses Set, Xml, Code, and more)
- 🤖 AI-Powered RAG Chatbot for Documents + Google Drive + Gemini + Qdrant (50 nodes, Joseph LePage, Artificial Intelligence; uses If, Set, Wait, and more)
- YouTube Video Comment Analysis Agent (25 nodes, Joseph LePage, Artificial Intelligence; uses Set, Code, Gmail, and more)
- ✍️🌄 Your First WordPress + AI Content Creator - Quick Start (39 nodes, Joseph LePage, Artificial Intelligence; uses If, Set, Code, and more)
- 🤖🧑💻 AI Agent for Top n8n Creator Ranking Reports (49 nodes, Joseph LePage, Artificial Intelligence; uses Set, Sort, Gmail, and more)
- 5 Ways to Process Images and PDFs with Gemini AI in n8n (28 nodes, Julian Kaiser, Building Blocks; uses Set, Filter, Split Out, and more)
Workflow Information
Difficulty: Advanced
Nodes: 19
Categories: 1
Node types: 9
Author
Joseph LePage (@joe)
As an AI Automation consultant based in Canada, I partner with forward-thinking organizations to implement AI solutions that streamline operations and drive growth.
External Links
View on n8n.io →