
Easily Compare LLMs Using OpenAI and Google Sheets

Advanced

This is an automation workflow in the Engineering and AI categories, containing 21 nodes. It mainly uses the Set, SplitOut, Aggregate, Summarize, and Google Sheets nodes, combined with AI to implement smart automation: Google Sheets is used to compare the responses of different LLMs.
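
For reference, each chat turn ends up as one row in the sheet. Below is a minimal TypeScript sketch of that row shape, reconstructed from the column schema of the "Add Model Results to Google Sheet" node in the JSON further down; the interface name and the optional markers are illustrative only.

```typescript
// One evaluation row appended to the Google Sheet per chat turn.
// Field names mirror the sheet's column schema; the interface itself
// is only an illustration, not part of the workflow export.
interface LlmEvalRow {
  sessionId: string;        // original chat session id (without the model suffix)
  model_1_id: string;       // e.g. "openai/gpt-4.1"
  model_2_id: string;       // e.g. "mistralai/mistral-large"
  user_input: string;       // the prompt sent to both models
  model_1_answer: string;   // answer from the first model
  model_2_answer: string;   // answer from the second model
  model_1_eval?: string;    // filled in manually in the sheet ("Good", "Correct", "Bad", ...)
  model_2_eval?: string;
  context_model_1: string;  // prior conversation seen by model 1
  context_model_2: string;  // prior conversation seen by model 2
}
```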

Prerequisites
  • Google Sheets API credentials
Workflow Export
Copy the following JSON configuration and import it into n8n to use this workflow.
{
  "id": "",
  "meta": {
    "instanceId": "",
    "templateCredsSetupCompleted": true
  },
  "name": "Easily Compare LLMs Using OpenAI and Google Sheets",
  "tags": [],
  "nodes": [
    {
      "id": "--0",
      "name": "채팅 메시지 수신 시",
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "position": [
        -7400,
        3040
      ],
      "webhookId": "",
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.1
    },
    {
      "id": "--1",
      "name": "항목 반복 실행",
      "type": "n8n-nodes-base.splitInBatches",
      "position": [
        -5960,
        3040
      ],
      "parameters": {
        "options": {
          "reset": false
        }
      },
      "typeVersion": 3
    },
    {
      "id": "--2",
      "name": "심플 메모리",
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "position": [
        -4880,
        3000
      ],
      "parameters": {
        "sessionKey": "={{$('Set model, sessionId, chatInput, sessionIdBase').item.json.sessionId}}",
        "sessionIdType": "customKey"
      },
      "typeVersion": 1.3
    },
    {
      "id": "--3",
      "name": "채팅 메모리 관리자",
      "type": "@n8n/n8n-nodes-langchain.memoryManager",
      "position": [
        -4980,
        3180
      ],
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.1
    },
    {
      "id": "--4",
      "name": "스티키 노트",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -8120,
        2600
      ],
      "parameters": {
        "color": 5,
        "width": 640,
        "height": 1180,
        "content": "## Easily Compare LLMs Using OpenAI and Google Sheets\n\nThis workflow allows you to **easily evaluate and compare the outputs of two language models (LLMs)** before choosing one for production.\n\nIn the chat interface, both model outputs are shown side by side. Their responses are also logged into a Google Sheet, where they can be evaluated manually or automatically using a more advanced model.\n\n### Use Case\nYou're developing an AI agent, and since LLMs are non-deterministic, you want to determine which one performs best for your specific use case. This template is designed to help you compare them effectively.\n\n### How It Works\n- The user sends a message to the chat interface.\n- The input is duplicated and sent to two different LLMs.\n- Each model processes the same prompt independently, using its own memory context.\n- Their answers, along with the user input and previous context, are logged to Google Sheets.\n- You can review, compare, and evaluate the model outputs manually (or automate it later).\n- In the chat, both responses are also shown one after the other for direct comparison.\n\n### How To Use It\n- Copy this [Google Sheets template](https://docs.google.com/spreadsheets/d/1grO5jxm05kJ7if9wBIOozjkqW27i8tRedrheLRrpxf4/) (File > Make a Copy).\n- Set up your **System Prompt** and **Tools** in the **AI Agent** node to suit your use case.\n- Start chatting! Each message will trigger both models and log their responses to the spreadsheet.\n\n\n*Note: This version is set up for two models. If you want to compare more, you’ll need to extend the workflow logic and update the sheet.*\n\n### About Models\nYou can use **OpenRouter** or **Vertex AI** to test models across providers.  \nIf you're using a node for a specific provider, like OpenAI, you can compare different models from that provider (e.g., `gpt-4.1` vs `gpt-4.1-mini`).\n\n### Evaluation in Google Sheets\nThis is ideal for teams, allowing non-technical stakeholders (not just data scientists) to evaluate responses based on real-world needs.\n\nAdvanced users can automate this evaluation using a more capable model (like `o3` from **OpenAI**), but note that this will increase token usage and cost.\n\n### Token Considerations\nSince **each input is processed by two different models**, the workflow will consume more tokens overall.  \nKeep an eye on usage, especially if working with longer prompts or running multiple evaluations, as this can impact cost.\n\n"
      },
      "typeVersion": 1
    },
    {
      "id": "OpenRouter--5",
      "name": "OpenRouter 채팅 모델",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenRouter",
      "position": [
        -5180,
        3000
      ],
      "parameters": {
        "model": "={{$json.model}}"
      },
      "credentials": {
        "openRouterApi": {
          "id": "",
          "name": ""
        }
      },
      "typeVersion": 1
    },
    {
      "id": "-1-6",
      "name": "스티키 노트1",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -7220,
        2620
      ],
      "parameters": {
        "color": 7,
        "width": 360,
        "height": 580,
        "content": "## Define Models to Compare\n\nThis node defines the array of model IDs to be compared.\n\nIn this template, we compare two models using the OpenRouter API. You can modify the list by specifying the full model IDs you want to test.\n\nExample:\n**[\"openai/gpt-4.1\", \"mistralai/mistral-large\"]**\n\nIf you're using a different LLM provider (like OpenAI directly, or Google Vertex AI), make sure to update the model IDs according to that provider's naming conventions.\n\n*Note: This template is built for two models. For more, you’ll need to adjust the workflow logic and the Google Sheet structure.*\n"
      },
      "typeVersion": 1
    },
    {
      "id": "-2-7",
      "name": "스티키 노트2",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -6500,
        2620
      ],
      "parameters": {
        "color": 7,
        "width": 360,
        "height": 580,
        "content": "## Set model, sessionId, chatInput, sessionIdBase\n\nThis node prepares the variables used during the loop that queries each model.\n\n- **model**: The ID of the model being used in the current iteration.\n- **sessionId**: A unique session key combining the original session ID and model name. This ensures memory isolation per model.\n- **chatInput**: The user’s input message.\n- **sessionIdBase**: The original session ID without any model-specific suffix. Used in Sheets to group evaluations from the same session."
      },
      "typeVersion": 1
    },
    {
      "id": "model-sessionId-chatInput-sessionIdBase--8",
      "name": "model, sessionId, chatInput, sessionIdBase 설정",
      "type": "n8n-nodes-base.set",
      "position": [
        -6380,
        3040
      ],
      "parameters": {
        "options": {},
        "assignments": {
          "assignments": [
            {
              "id": "",
              "name": "model",
              "type": "string",
              "value": "={{ $json.models }}"
            },
            {
              "id": "",
              "name": "sessionId",
              "type": "string",
              "value": "={{ $('When chat message received').item.json.sessionId }}{{$json.models }}"
            },
            {
              "id": "",
              "name": "chatInput",
              "type": "string",
              "value": "={{ $('When chat message received').item.json.chatInput }}"
            },
            {
              "id": "",
              "name": "sessionIdBase",
              "type": "string",
              "value": "={{ $('When chat message received').item.json.sessionId }}"
            }
          ]
        }
      },
      "typeVersion": 3.4
    },
    {
      "id": "AI--9",
      "name": "AI 에이전트",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [
        -5480,
        3180
      ],
      "parameters": {
        "options": {
          "returnIntermediateSteps": false
        }
      },
      "typeVersion": 1.8
    },
    {
      "id": "-3-10",
      "name": "스티키 노트3",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -5600,
        3160
      ],
      "parameters": {
        "color": 7,
        "width": 540,
        "height": 520,
        "content": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n## AI Agent\n\nThis AI Agent is connected to OpenRouter Models. The model is selected dynamically from the variable `{{$json.model}}`, defined earlier.\n\nMemory is isolated per model using the `{{$('Set model, sessionId, chatInput, sessionIdBase').item.json.sessionId}}` key.\n\n**⚠️ This agent currently has no system prompt or tools configured**. If you want to test specific tasks, you must define them yourself to reflect realistic use cases."
      },
      "typeVersion": 1
    },
    {
      "id": "-4-11",
      "name": "스티키 노트4",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -5040,
        3160
      ],
      "parameters": {
        "color": 7,
        "width": 380,
        "height": 520,
        "content": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n## Chat Memory Manager\n\nThis node handles retrieval of prior context for the chat session. It helps with qualitative evaluation by storing context that’s injected into the Google Sheet.\n\nIt shares memory with the AI Agent via the “Simple Memory” node.\n\n> You can switch to Redis or Postgres memory backends if needed."
      },
      "typeVersion": 1
    },
    {
      "id": "-5-12",
      "name": "스티키 노트5",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -4640,
        3160
      ],
      "parameters": {
        "color": 7,
        "width": 380,
        "height": 760,
        "content": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n## Prepare Data for Chat and Google Sheets\n\nThis node sets the following fields:\n\n- **output**: The model's response, formatted for chat display with visual separation to make comparison easier.\n- **chatInput**: The user input that will be recorded in Google Sheets.\n- **model_answer**: The actual answer from the model being evaluated.\n- **model**: The name or ID of the model providing the answer, used for identifying performance.\n- **context**: A history of the prior conversation (excluding the latest input). If it's the user's first message, a placeholder is used.\n- **sessionId**: A unique session identifier combining model name and session, ensuring separate context windows for each model.\n- **sessionIdBase**: The original user session ID (without model suffix), useful for grouping responses from different models in Sheets."
      },
      "typeVersion": 1
    },
    {
      "id": "--13",
      "name": "채팅 응답 연결",
      "type": "n8n-nodes-base.summarize",
      "position": [
        -5300,
        2620
      ],
      "parameters": {
        "options": {},
        "fieldsToSummarize": {
          "values": [
            {
              "field": "output",
              "separateBy": "\n",
              "aggregation": "concatenate"
            }
          ]
        }
      },
      "typeVersion": 1.1
    },
    {
      "id": "-6-14",
      "name": "스티키 노트6",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -5080,
        2120
      ],
      "parameters": {
        "color": 5,
        "width": 460,
        "height": 500,
        "content": "## Add Model Results to Google Sheet\n\nThis Google Sheets step records both model responses along for evaluation.\n\n⚠️ Depending on the length of model responses, you may need to adjust row height or column width.\n\nThe template includes basic evaluation fields (`model_1_eval`, `model_2_eval`) with a dropdown like:  \n**\"Good\", \"Correct\", \"Bad\"**, but feel free to customize with more granular rating criteria."
      },
      "typeVersion": 1
    },
    {
      "id": "--15",
      "name": "평가를 위한 모델 출력 그룹화",
      "type": "n8n-nodes-base.aggregate",
      "position": [
        -5300,
        2440
      ],
      "parameters": {
        "options": {},
        "fieldsToAggregate": {
          "fieldToAggregate": [
            {
              "fieldToAggregate": "model_answer"
            },
            {
              "fieldToAggregate": "context"
            },
            {
              "fieldToAggregate": "chatInput"
            },
            {
              "fieldToAggregate": "sessionIdBase"
            },
            {
              "fieldToAggregate": "model"
            }
          ]
        }
      },
      "typeVersion": 1
    },
    {
      "id": "Google--16",
      "name": "Google 시트에 모델 결과 추가",
      "type": "n8n-nodes-base.googleSheets",
      "onError": "continueRegularOutput",
      "position": [
        -4940,
        2440
      ],
      "parameters": {
        "columns": {
          "value": {
            "sessionId": "={{ $json.sessionIdBase[0] }}",
            "model_1_id": "={{ $json.model[0] }}",
            "model_2_id": "={{ $json.model[1] }}",
            "user_input": "={{ $json.chatInput[0] }}",
            "model_1_answer": "={{ $json.model_answer[0] }}",
            "model_2_answer": "={{ $json.model_answer[1] }}",
            "context_model_1": "={{ $json.context[0] }}",
            "context_model_2": "={{ $json.context[1] }}"
          },
          "schema": [
            {
              "id": "sessionId",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "sessionId",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "model_1_id",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "model_1_id",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "model_2_id",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "model_2_id",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "user_input",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "user_input",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "model_1_answer",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "model_1_answer",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "model_2_answer",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "model_2_answer",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "model_1_eval",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "model_1_eval",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "model_2_eval",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "model_2_eval",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "context_model_1",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "context_model_1",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "context_model_2",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "context_model_2",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            }
          ],
          "mappingMode": "defineBelow",
          "matchingColumns": [],
          "attemptToConvertTypes": false,
          "convertFieldsToString": false
        },
        "options": {},
        "operation": "append",
        "sheetName": {
          "__rl": true,
          "mode": "list",
          "value": "gid=0",
          "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1grO5jxm05kJ7if9wBIOozjkqW27i8tRedrheLRrpxf4/",
          "cachedResultName": "llms_eval"
        },
        "documentId": {
          "__rl": true,
          "mode": "list",
          "value": "1grO5jxm05kJ7if9wBIOozjkqW27i8tRedrheLRrpxf4",
          "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1grO5jxm05kJ7if9wBIOozjkqW27i8tRedrheLRrpxf4/",
          "cachedResultName": "Template - Easy LLMs Eval"
        },
        "authentication": "serviceAccount"
      },
      "credentials": {
        "googleApi": {
          "id": "",
          "name": ""
        }
      },
      "typeVersion": 4.5
    },
    {
      "id": "Chat-Google--17",
      "name": "Chat 및 Google 시트용 데이터 준비",
      "type": "n8n-nodes-base.set",
      "position": [
        -4500,
        3180
      ],
      "parameters": {
        "options": {},
        "assignments": {
          "assignments": [
            {
              "id": "",
              "name": "output",
              "type": "string",
              "value": "=### `{{ $('Set model, sessionId, chatInput, sessionIdBase').item.json.model }}` answered :\n\n\n{{ $('AI Agent').item.json.output }}\n\n----------\n"
            },
            {
              "id": "",
              "name": "chatInput",
              "type": "string",
              "value": "={{ $('Set model, sessionId, chatInput, sessionIdBase').item.json.chatInput }}"
            },
            {
              "id": "",
              "name": "model_answer",
              "type": "string",
              "value": "={{ $('AI Agent').item.json.output }}"
            },
            {
              "id": "",
              "name": "model",
              "type": "string",
              "value": "={{ $('Set model, sessionId, chatInput, sessionIdBase').item.json.model }}"
            },
            {
              "id": "",
              "name": "context",
              "type": "string",
              "value": "={{\n  (() => {\n    const history = $json[\"messages\"]; // ou adapter selon ton chemin réel\n    if (!Array.isArray(history) || history.length <= 1) {\n      return \"No prior context available — likely the user's first message or memory not yet initialized.\";\n    }\n\n    const truncated = history.slice(0, -1); // on enlève le dernier échange\n    return truncated.map(pair => `Human: ${pair.human}\\nAI: ${pair.ai}`).join('\\n');\n  })()\n}}\n"
            },
            {
              "id": "",
              "name": "sessionId",
              "type": "string",
              "value": "={{ $('Loop Over Items').item.json.sessionId }}"
            },
            {
              "id": "",
              "name": "sessionIdBase",
              "type": "string",
              "value": "={{ $('Loop Over Items').item.json.sessionIdBase }}"
            }
          ]
        }
      },
      "typeVersion": 3.4
    },
    {
      "id": "--18",
      "name": "비교할 모델 정의",
      "type": "n8n-nodes-base.set",
      "position": [
        -7100,
        3040
      ],
      "parameters": {
        "options": {},
        "assignments": {
          "assignments": [
            {
              "id": "",
              "name": "=models",
              "type": "array",
              "value": "=[\"openai/gpt-4.1\", \"mistralai/mistral-large\"]"
            }
          ]
        }
      },
      "typeVersion": 3.4
    },
    {
      "id": "--19",
      "name": "모델을 항목으로 분할",
      "type": "n8n-nodes-base.splitOut",
      "position": [
        -6760,
        3040
      ],
      "parameters": {
        "options": {},
        "fieldToSplitOut": "models"
      },
      "typeVersion": 1
    },
    {
      "id": "-UI--20",
      "name": "채팅 UI용 출력 설정",
      "type": "n8n-nodes-base.set",
      "position": [
        -4940,
        2620
      ],
      "parameters": {
        "options": {},
        "assignments": {
          "assignments": [
            {
              "id": "",
              "name": "output",
              "type": "string",
              "value": "={{ $json.concatenated_output }}"
            }
          ]
        }
      },
      "typeVersion": 3.4
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "",
  "connections": {
    "AI--9": {
      "main": [
        [
          {
            "node": "--3",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "--2": {
      "ai_memory": [
        [
          {
            "node": "--3",
            "type": "ai_memory",
            "index": 0
          },
          {
            "node": "AI--9",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "--1": {
      "main": [
        [
          {
            "node": "--13",
            "type": "main",
            "index": 0
          },
          {
            "node": "--15",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "AI--9",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "--3": {
      "main": [
        [
          {
            "node": "Chat-Google--17",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenRouter--5": {
      "ai_languageModel": [
        [
          {
            "node": "AI--9",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "--19": {
      "main": [
        [
          {
            "node": "model-sessionId-chatInput-sessionIdBase--8",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "--13": {
      "main": [
        [
          {
            "node": "-UI--20",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "--18": {
      "main": [
        [
          {
            "node": "--19",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "--0": {
      "main": [
        [
          {
            "node": "--18",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "--15": {
      "main": [
        [
          {
            "node": "Google--16",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Chat-Google--17": {
      "main": [
        [
          {
            "node": "--1",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "model-sessionId-chatInput-sessionIdBase--8": {
      "main": [
        [
          {
            "node": "--1",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}
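
The most involved expression in the export is the `context` assignment in the "Prepare Data for Chat and Google Sheets" node, which flattens the prior chat history into a plain transcript before it is logged. Here is a standalone TypeScript sketch of that same logic; the type and function names are illustrative, not part of the workflow.

```typescript
// Mirrors the n8n expression that builds the `context` field: it drops the
// exchange currently being answered and renders the rest as a transcript.
interface ChatExchange {
  human: string; // user message
  ai: string;    // model reply
}

function buildContext(history: ChatExchange[] | undefined): string {
  if (!Array.isArray(history) || history.length <= 1) {
    return "No prior context available — likely the user's first message or memory not yet initialized.";
  }
  const truncated = history.slice(0, -1); // drop the latest exchange
  return truncated.map(pair => `Human: ${pair.human}\nAI: ${pair.ai}`).join("\n");
}

// Example: only completed exchanges end up in the logged context.
const context = buildContext([
  { human: "What is n8n?", ai: "A workflow automation tool." },
  { human: "Does it support LLMs?", ai: "Yes, via the LangChain nodes." },
]);
console.log(context); // "Human: What is n8n?\nAI: A workflow automation tool."
```
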
Frequently Asked Questions

How do I use this workflow?

Copy the JSON configuration above, create a new workflow in your n8n instance, choose "Import from JSON", paste the configuration, and adjust the credentials as needed.
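
Besides importing through the UI, you can also create the workflow programmatically via the n8n public REST API. The sketch below assumes the public API is enabled on your instance; `N8N_URL`, `N8N_API_KEY`, and the file path are placeholders to replace with your own values.

```typescript
// Create the workflow via the n8n public API (POST /api/v1/workflows).
// Assumes the public API is enabled; URL, API key and file path are placeholders.
import { readFile } from "node:fs/promises";

const N8N_URL = process.env.N8N_URL ?? "http://localhost:5678";
const N8N_API_KEY = process.env.N8N_API_KEY ?? "";

async function importWorkflow(path: string): Promise<void> {
  const exported = JSON.parse(await readFile(path, "utf8"));

  // Send only the fields the create endpoint accepts; read-only fields
  // such as id, active, or tags from the export are left out.
  const body = {
    name: exported.name,
    nodes: exported.nodes,
    connections: exported.connections,
    settings: exported.settings,
  };

  const res = await fetch(`${N8N_URL}/api/v1/workflows`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-N8N-API-KEY": N8N_API_KEY,
    },
    body: JSON.stringify(body),
  });

  if (!res.ok) {
    throw new Error(`Import failed: ${res.status} ${await res.text()}`);
  }
  console.log("Workflow created with id:", (await res.json()).id);
}

importWorkflow("./easily-compare-llms.json").catch(console.error);
```

After importing, you still need to attach your own OpenRouter and Google Sheets credentials in the editor before activating the workflow.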

What scenarios is this workflow suited for?

Advanced - Engineering, AI

Is it paid?

The workflow itself is completely free; you can import and use it directly. However, third-party services it relies on (such as the OpenAI API) may require you to pay for usage yourself.

Workflow Information
Difficulty
Advanced
Nodes: 21
Categories: 2
Node Types: 12
Difficulty Description

A complex workflow of 16+ nodes, intended for advanced users

Author
Dataki

@dataki

I am passionate about transforming complex processes into seamless automations with n8n. My expertise spans across creating ETL pipelines, sales automations, and data & AI-driven workflows. As an avid problem solver, I thrive on optimizing workflows to drive efficiency and innovation.

External Links
View on n8n.io
