🗨️ Ollama Chat
This is a representative workflow in the AI field, containing 14 nodes. It mainly uses Set, ChainLlm, LmOllama, and ChatTrigger nodes, combining AI capabilities to deliver smart automation. 🔐🦙🤖 Private Local Ollama Self-Hosted AI Assistant
- AI service API key (e.g., OpenAI, Anthropic, etc.)
Nodes used (14)
Category
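The full JSON export of the workflow follows. As a rough illustration of what the Basic LLM Chain, Ollama Model, and JSON to Object nodes do together, here is a minimal Python sketch. It is not part of the template; it assumes a local Ollama server at the default http://localhost:11434, the requests package, and the llama3.2:latest model already pulled. The prompt text is copied from the Basic LLM Chain node in the export below.

import json
import requests  # assumption: the requests package is installed

OLLAMA_URL = "http://localhost:11434"  # assumption: default local Ollama endpoint

# Same instruction text as the "Basic LLM Chain" node in the JSON export below.
PROMPT_TEMPLATE = (
    "Provide the user's prompt and response as a JSON object with two fields:\n"
    "- Prompt\n"
    "- Response\n\n"
    "Avoid any preamble or further explanation.\n\n"
    "This is the question: {question}"
)

def ask(question: str) -> dict:
    """Send the structured-output prompt to llama3.2 and parse its JSON reply."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/chat",
        json={
            "model": "llama3.2:latest",
            "messages": [
                {"role": "user", "content": PROMPT_TEMPLATE.format(question=question)}
            ],
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    answer_text = resp.json()["message"]["content"]
    # Mirrors the "JSON to Object" Set node: turn the model's JSON text into an object.
    # This raises if the model adds extra text around the JSON.
    return json.loads(answer_text)

if __name__ == "__main__":
    print(ask("What is n8n?"))

In the workflow itself this parsing happens declaratively: a Set node assigns {{ $json.text }} to an object field named response, and the Structured Response node formats it back into chat text.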
{
"id": "Telr6HU0ltH7s9f7",
"meta": {
"instanceId": "31e69f7f4a77bf465b805824e303232f0227212ae922d12133a0f96ffeab4fef"
},
"name": "๐จ๏ธOllama Chat",
"tags": [],
"nodes": [
{
"id": "9560e89b-ea08-49dc-924e-ec8b83477340",
"name": "์ฑํ
๋ฉ์์ง ์์ ์",
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"position": [
280,
60
],
"webhookId": "4d06a912-2920-489c-a33c-0e3ea0b66745",
"parameters": {
"options": {}
},
"typeVersion": 1.1
},
{
"id": "c7919677-233f-4c48-ba01-ae923aef511e",
"name": "๊ธฐ๋ณธ LLM ์ฒด์ธ",
"type": "@n8n/n8n-nodes-langchain.chainLlm",
"onError": "continueErrorOutput",
"position": [
640,
60
],
"parameters": {
"text": "=Provide the users prompt and response as a JSON object with two fields:\n- Prompt\n- Response\n\nAvoid any preample or further explanation.\n\nThis is the question: {{ $json.chatInput }}",
"promptType": "define"
},
"typeVersion": 1.5
},
{
"id": "b9676a8b-f790-4661-b8b9-3056c969bdf5",
"name": "Ollama ๋ชจ๋ธ",
"type": "@n8n/n8n-nodes-langchain.lmOllama",
"position": [
740,
340
],
"parameters": {
"model": "llama3.2:latest",
"options": {}
},
"credentials": {
"ollamaApi": {
"id": "IsSBWGtcJbjRiKqD",
"name": "Ollama account"
}
},
"typeVersion": 1
},
{
"id": "61dfcda5-083c-43ff-8451-b2417f1e4be4",
"name": "๋ฉ๋ชจ์ง",
"type": "n8n-nodes-base.stickyNote",
"position": [
-380,
-380
],
"parameters": {
"color": 4,
"width": 520,
"height": 860,
"content": "# ๐ฆ Ollama Chat Workflow\n\nA simple N8N workflow that integrates Ollama LLM for chat message processing and returns a structured JSON object.\n\n## Overview\nThis workflow creates a chat interface that processes messages using the Llama 3.2 model through Ollama. When a chat message is received, it gets processed through a basic LLM chain and returns a response.\n\n## Components\n- **Trigger Node**\n- **Processing Node**\n- **Model Node**\n- **JSON to Object Node**\n- **Structured Response Node**\n- **Error Response Node**\n\n## Workflow Structure\n1. The chat trigger node receives incoming messages\n2. Messages are passed to the Basic LLM Chain\n3. The Ollama Model processes the input using Llama 3.2\n4. Responses are returned through the chain\n\n## Prerequisites\n- N8N installation\n- Ollama setup with Llama 3.2 model\n- Valid Ollama API credentials\n\n## Configuration\n1. Set up the Ollama API credentials in N8N\n2. Ensure the Llama 3.2 model is available in your Ollama installation\n\n"
},
"typeVersion": 1
},
{
"id": "64f60ee1-7870-461e-8fac-994c9c08b3f9",
"name": "๋ฉ๋ชจ์ง1",
"type": "n8n-nodes-base.stickyNote",
"position": [
340,
280
],
"parameters": {
"width": 560,
"height": 200,
"content": "## Model Node\n- Name: Ollama Model\n- Type: LangChain Ollama Integration\n- Model: llama3.2:latest\n- Purpose: Provides the language model capabilities"
},
"typeVersion": 1
},
{
"id": "bb46210d-450c-405b-a451-42458b3af4ae",
"name": "๋ฉ๋ชจ์ง2",
"type": "n8n-nodes-base.stickyNote",
"position": [
200,
-160
],
"parameters": {
"color": 6,
"width": 280,
"height": 400,
"content": "## Trigger Node\n- Name: When chat message received\n- Type: Chat Trigger\n- Purpose: Initiates the workflow when a new chat message arrives"
},
"typeVersion": 1
},
{
"id": "7f21b9e6-6831-4117-a2e2-9c9fb6edc492",
"name": "๋ฉ๋ชจ์ง3",
"type": "n8n-nodes-base.stickyNote",
"position": [
520,
-380
],
"parameters": {
"color": 3,
"width": 500,
"height": 620,
"content": "## Processing Node\n- Name: Basic LLM Chain\n- Type: LangChain LLM Chain\n- Purpose: Handles the processing of messages through the language model and returns a structured JSON object.\n\n"
},
"typeVersion": 1
},
{
"id": "871bac4e-002f-4a1d-b3f9-0b7d309db709",
"name": "๋ฉ๋ชจ์ง4",
"type": "n8n-nodes-base.stickyNote",
"position": [
560,
-200
],
"parameters": {
"color": 7,
"width": 420,
"height": 200,
"content": "### Prompt (Change this for your use case)\nProvide the users prompt and response as a JSON object with two fields:\n- Prompt\n- Response\n\n\nAvoid any preample or further explanation.\nThis is the question: {{ $json.chatInput }}"
},
"typeVersion": 1
},
{
"id": "c9e1b2af-059b-4330-a194-45ae0161aa1c",
"name": "๋ฉ๋ชจ์ง5",
"type": "n8n-nodes-base.stickyNote",
"position": [
1060,
-280
],
"parameters": {
"color": 5,
"width": 420,
"height": 520,
"content": "## JSON to Object Node\n- Type: Set Node\n- Purpose: A node designed to transform and structure response data in a specific format before sending it through the workflow. It operates in manual mapping mode to allow precise control over the response format.\n\n**Key Features**\n- Manual field mapping capabilities\n- Object transformation and restructuring\n- Support for JSON data formatting\n- Field-to-field value mapping\n- Includes option to add additional input fields\n"
},
"typeVersion": 1
},
{
"id": "3fb912b8-86ac-42f7-a19c-45e59898a62e",
"name": "๋ฉ๋ชจ์ง6",
"type": "n8n-nodes-base.stickyNote",
"position": [
1520,
-180
],
"parameters": {
"color": 6,
"width": 460,
"height": 420,
"content": "## Structured Response Node\n- Type: Set Node\n- Purpose: Controls how the workflow responds to users chat prompt.\n\n**Response Mode**\n- Manual Mapping: Allows custom formatting of response data\n- Fields to Set: Specify which data fields to include in response\n\n"
},
"typeVersion": 1
},
{
"id": "fdfd1a5c-e1a6-4390-9807-ce665b96b9ae",
"name": "๊ตฌ์กฐํ๋ ์๋ต",
"type": "n8n-nodes-base.set",
"position": [
1700,
60
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "13c4058d-2d50-46b7-a5a6-c788828a1764",
"name": "text",
"type": "string",
"value": "=Your prompt was: {{ $json.response.Prompt }}\n\nMy response is: {{ $json.response.Response }}\n\nThis is the JSON object:\n\n{{ $('Basic LLM Chain').item.json.text }}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "76baa6fc-72dd-41f9-aef9-4fd718b526df",
"name": "์ค๋ฅ ์๋ต",
"type": "n8n-nodes-base.set",
"position": [
1460,
660
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "13c4058d-2d50-46b7-a5a6-c788828a1764",
"name": "text",
"type": "string",
"value": "=There was an error."
}
]
}
},
"typeVersion": 3.4
},
{
"id": "bde3b9df-af55-451b-b287-1b5038f9936c",
"name": "๋ฉ๋ชจ์ง7",
"type": "n8n-nodes-base.stickyNote",
"position": [
1240,
280
],
"parameters": {
"color": 2,
"width": 540,
"height": 560,
"content": "## Error Response Node\n- Type: Set Node\n- Purpose: Handles error cases when the Basic LLM Chain fails to process the chat message properly. It provides a fallback response mechanism to ensure the workflow remains robust.\n\n**Key Features**\n- Provides default error messaging\n- Maintains consistent response structure\n- Connects to the error output branch of the LLM Chain\n- Ensures graceful failure handling\n\nThe Error Response node activates when the main processing chain encounters issues, ensuring users always receive feedback even when errors occur in the language model processing.\n"
},
"typeVersion": 1
},
{
"id": "b9b2ab8d-9bea-457a-b7bf-51c8ef0de69f",
"name": "JSON to Object",
"type": "n8n-nodes-base.set",
"position": [
1220,
60
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "12af1a54-62a2-44c3-9001-95bb0d7c769d",
"name": "response",
"type": "object",
"value": "={{ $json.text }}"
}
]
}
},
"typeVersion": 3.4
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "5175454a-91b7-4c57-890d-629bd4e8d2fd",
"connections": {
"b9676a8b-f790-4661-b8b9-3056c969bdf5": {
"ai_languageModel": [
[
{
"node": "c7919677-233f-4c48-ba01-ae923aef511e",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"b9b2ab8d-9bea-457a-b7bf-51c8ef0de69f": {
"main": [
[
{
"node": "fdfd1a5c-e1a6-4390-9807-ce665b96b9ae",
"type": "main",
"index": 0
}
]
]
},
"c7919677-233f-4c48-ba01-ae923aef511e": {
"main": [
[
{
"node": "b9b2ab8d-9bea-457a-b7bf-51c8ef0de69f",
"type": "main",
"index": 0
}
],
[
{
"node": "76baa6fc-72dd-41f9-aef9-4fd718b526df",
"type": "main",
"index": 0
}
]
]
},
"9560e89b-ea08-49dc-924e-ec8b83477340": {
"main": [
[
{
"node": "c7919677-233f-4c48-ba01-ae923aef511e",
"type": "main",
"index": 0
}
]
]
}
}
}
How do I use this workflow?
Copy the JSON configuration above, create a new workflow in your n8n instance, select "Import from JSON", paste the configuration, and adjust the credential settings as needed.
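Before importing, it can help to confirm that Ollama is reachable and that the Llama 3.2 model is available. A minimal Python sketch, assuming Ollama runs at the default http://localhost:11434 and the requests package is installed:

import requests  # assumption: the requests package is installed

OLLAMA_URL = "http://localhost:11434"  # adjust if your Ollama credential points elsewhere

# GET /api/tags lists the models available to the local Ollama server.
models = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10).json().get("models", [])
names = [m["name"] for m in models]
print("Available models:", names)

if not any(name.startswith("llama3.2") for name in names):
    print("llama3.2 is missing - pull it first, e.g. with: ollama pull llama3.2")

The Ollama credential configured in n8n (named "Ollama account" in this export) should point at the same base URL.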
Which scenarios is this workflow suitable for?
Intermediate - Artificial Intelligence
Is it paid?
This workflow is completely free and can be imported and used directly. However, any third-party services it relies on (e.g., the OpenAI API) may require you to pay for your own usage.
Recommended related workflows
Joseph LePage
@joe
As an AI Automation consultant based in Canada, I partner with forward-thinking organizations to implement AI solutions that streamline operations and drive growth.
Share this workflow