Daily Sports Digest
Expert
This is an automation workflow in the Content Creation and Multimodal AI category with 34 nodes. It mainly uses If, Set, Code, Merge, Filter, and other nodes to convert RSS feeds into podcasts with Google Gemini, Kokoro TTS, and FFmpeg.
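At its core, the audio stage splits the AI-written dialogue on the voice1: / voice2: speaker tags and requests one MP3 chunk per line from the Kokoro TTS endpoint, which FFmpeg later joins into a single file. The following condensed sketch illustrates that logic outside n8n; the endpoint URL, model, and voice names are taken from the export below, while the file layout, speaker detection, and error handling are simplified placeholders rather than the workflow itself.

// Sketch only: split a two-speaker script into lines and synthesize one MP3 chunk per line.
// Endpoint, model, and voice names mirror the HTTP Request nodes in the export below;
// everything else is simplified for illustration.
const fs = require("node:fs/promises");

const TTS_URL = "https://tts-kokoro.mfxikq.easypanel.host/api/v1/audio/speech";
const VOICES = { voice1: "am_liam", voice2: "af_heart" };

async function synthesizeScript(script) {
  // Keep the "voice1:" / "voice2:" tags while splitting, as the workflow's Code node does.
  const segments = script
    .replace(/\r\n/g, "\n")
    .split(/(?=(?:voice1:|voice2:))/g)
    .map((s) => s.trim())
    .filter(Boolean);

  for (const [i, segment] of segments.entries()) {
    const speaker = segment.startsWith("voice2:") ? "voice2" : "voice1";
    const text = segment.replace(/\bvoice[12]:\s*/gi, "").trim();

    // One TTS request per dialogue line, alternating voices by speaker tag.
    const res = await fetch(TTS_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "model_q8f16",
        voice: VOICES[speaker],
        speed: 1,
        response_format: "mp3",
        input: text,
      }),
    });
    if (!res.ok) throw new Error(`TTS request failed with status ${res.status}`);

    // Indexed chunk files so FFmpeg can concatenate them in order afterwards.
    await fs.writeFile(`/tmp/dailydigest/audio_${i}.mp3`, Buffer.from(await res.arrayBuffer()));
  }
}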
Prerequisites
- Telegram Bot Token
- Target API credentials may be required
- Google Gemini API Key
Nodes Used (34)
Export Workflow
Copy the JSON configuration below and import it into n8n
{
"id": "4zCVyhXKummU8dD4",
"meta": {
"instanceId": "8683f1aa9dbb94f1118c6d4f62e60551722c177df8a0b76c24809a905727cb8d",
"templateCredsSetupCompleted": true
},
"name": "Daily Sports Digest",
"tags": [],
"nodes": [
{
"id": "e01655d1-9722-4630-a7d6-4425282ace11",
"name": "Google Gemini Chat Model",
"type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"position": [
-2500,
-280
],
"parameters": {
"options": {},
"modelName": "models/gemini-2.5-pro"
},
"credentials": {
"googlePalmApi": {
"id": "1uJGKonJ3ZbmhxG0",
"name": "Google Gemini(PaLM) Api account"
}
},
"typeVersion": 1
},
{
"id": "3271e00c-bf97-4d63-b1ec-8e33c63eaf29",
"name": "Google Gemini Chat Model1",
"type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"position": [
-2320,
620
],
"parameters": {
"options": {},
"modelName": "models/gemini-2.5-pro"
},
"credentials": {
"googlePalmApi": {
"id": "1uJGKonJ3ZbmhxG0",
"name": "Google Gemini(PaLM) Api account"
}
},
"typeVersion": 1
},
{
"id": "e34ac6ba-25c6-4941-9e53-31ff3bb00801",
"name": "Notizzettel1",
"type": "n8n-nodes-base.stickyNote",
"position": [
-4740,
-600
],
"parameters": {
"width": 420,
"height": 1240,
"content": "## 🎧 Daily RSS Digest & Podcast Generation\n\n**Convert RSS Feeds into a Podcast with Google Gemini, Kokoro TTS, and FFmpeg:** This workflow automates the creation of a daily sports podcast from your favorite news sources. It fetches articles, uses AI to write a digest and a two-person dialogue, and produces a single, merged audio file with KOKORO TTS ready for listening.\n\n## ✨ How it works:\n\n### 📰 Fetch & Filter Daily News: The workflow triggers daily, fetches articles from your chosen RSS feeds, and filters them to keep only the most recent content.\n\n### ✍️ Generate AI Digest & Script: \nUsing Google Gemini, it first creates a written summary of the day's news. A second AI agent then transforms this news into an engaging, conversational podcast script between two distinct AI speakers.\n\n### 🗣️ Generate Voices in Chunks: \nThe script is split into individual lines of dialogue. The workflow then loops through each line, calling a Text-to-Speech (TTS) API to generate a separate audio file (an MP3 chunk) for each part of the conversation.\n\n### 🎛️ Merge Audio with FFmpeg: \nAfter all the audio chunks are created and saved locally, a command-line script generates a list of all the files and uses FFmpeg to losslessly merge them into a single, seamless MP3 file. All temporary files are then deleted.\n\n### 📤 Send the Final Podcast: \nThe final, merged MP3 is read from the server and delivered directly to your Telegram chat with a dynamic, dated filename.\n\n### You can modify:\n- 📰 The RSS Feeds to any news source you want.\n- 🤖 The AI Prompts to change the tone, language, or style of the digest and podcast.\n- 🎙️ The TTS Voices used for the two speakers.\n- 📫 The Final Delivery Method (e.g., send to Discord, save to Google Drive, etc.).\n\n\nPerfect for creating a personalized, hands-free news briefing to listen to on your commute.\n\n**Inspired by:** https://n8n.io/workflows/6523-convert-newsletters-into-ai-podcasts-with-gpt-4o-mini-and-elevenlabs/"
},
"typeVersion": 1
},
{
"id": "10c71894-9b5c-4bd0-a618-57ca8161275b",
"name": "Notizzettel",
"type": "n8n-nodes-base.stickyNote",
"position": [
-4300,
-600
],
"parameters": {
"color": 7,
"width": 1352,
"height": 1244,
"content": "## 📰 Step 1: Fetch & Filter Daily News\n\nThis section acts as the data pipeline. It triggers once a day, fetches the latest articles from multiple RSS feeds, merges them, and filters the list to keep only articles published on the previous calendar day. This provides a clean, relevant dataset for the AI.\n\n### ✅ Expected Output \nA clean list of news articles, each containing a title, content, link, and a standardized date, ready to be processed by the AI.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n### 🤖 Logic Explained:\nThe crucial step is the Filter for Yesterday node. It keeps only articles published on the previous calendar day. It does this with two conditions:\n**1. is greater than: The article's date must be after midnight at the start of yesterday.\n2. is less than: The article's date must be before the time scheduled to running.**\n\nThis ensures the AI always has a fresh, consistent set of news to work with.\n\n### ✍️ How to Customize:\n- Trigger: Change the time or frequency in the `Daily Trigger` node.\n- News Sources: Change the URLs in the Fetch... News nodes to any RSS feed you want. Add or remove feeds as needed, connecting them to the Merge News Sources node.\n"
},
"typeVersion": 1
},
{
"id": "a9078490-2bd4-4b5e-ada1-ee67e147d139",
"name": "Täglicher Trigger",
"type": "n8n-nodes-base.scheduleTrigger",
"position": [
-4240,
-100
],
"parameters": {
"rule": {
"interval": [
{
"triggerAtHour": 8
}
]
}
},
"typeVersion": 1.2
},
{
"id": "0af5a33b-e2e4-4fc3-95c4-a5c24c6b2ab5",
"name": "RSS abrufen 1: Folha de SP",
"type": "n8n-nodes-base.rssFeedRead",
"position": [
-3900,
-260
],
"parameters": {
"url": "https://feeds.folha.uol.com.br/esporte/rss091.xml",
"options": {}
},
"typeVersion": 1.1
},
{
"id": "84cc82a2-937f-4b7f-801c-045fb7e164cf",
"name": "RSS abrufen 2: GE",
"type": "n8n-nodes-base.rssFeedRead",
"position": [
-3900,
40
],
"parameters": {
"url": "https://ge.globo.com/rss/ge/",
"options": {}
},
"typeVersion": 1.2
},
{
"id": "5b1afc5f-829b-4ee2-866e-4bf045631f43",
"name": "Felder bereinigen",
"type": "n8n-nodes-base.set",
"position": [
-3640,
-260
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "a0229879-764d-455d-9064-2d939d7e5701",
"name": "title",
"type": "string",
"value": "={{ $json.title.replace(/\\[PACK\\].*/, \"\").replace(/\\[.*?\\]/g, \"\").trim() }}"
},
{
"id": "2da9330c-e39f-4515-b737-d14f3c4aeb8b",
"name": "pubDate",
"type": "string",
"value": "={{ $json.pubDate }}"
},
{
"id": "c7b0f3d6-e2bb-48a7-9911-edcd44700868",
"name": "link",
"type": "string",
"value": "={{ $json.link.replace(/\\/torrent\\/download\\/(\\d+)\\..*/, \"/torrents/$1\") }}"
},
{
"id": "05d172b5-3201-450d-b02f-fc9b649664f0",
"name": "content",
"type": "string",
"value": "={{ $json.content }}"
},
{
"id": "db80b578-d9b4-40fd-bbe1-6e7615a27ce3",
"name": "isoDate",
"type": "number",
"value": "={{ new Date($json.isoDate).getTime() }}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "0d76fc33-7930-4ab8-b363-b86e1e5f6889",
"name": "Felder bereinigen1",
"type": "n8n-nodes-base.set",
"position": [
-3640,
40
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "a0229879-764d-455d-9064-2d939d7e5701",
"name": "title",
"type": "string",
"value": "={{ $json.title.replace(/\\[PACK\\].*/, \"\").replace(/\\[.*?\\]/g, \"\").trim() }}"
},
{
"id": "2da9330c-e39f-4515-b737-d14f3c4aeb8b",
"name": "pubDate",
"type": "string",
"value": "={{ $json.pubDate }}"
},
{
"id": "c7b0f3d6-e2bb-48a7-9911-edcd44700868",
"name": "link",
"type": "string",
"value": "={{ $json.link.replace(/\\/torrent\\/download\\/(\\d+)\\..*/, \"/torrents/$1\") }}"
},
{
"id": "05d172b5-3201-450d-b02f-fc9b649664f0",
"name": "content",
"type": "string",
"value": "={{ $json.content }}"
},
{
"id": "db80b578-d9b4-40fd-bbe1-6e7615a27ce3",
"name": "isoDate",
"type": "number",
"value": "={{ new Date($json.isoDate).getTime() }}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "69db334b-7681-4174-abaf-f745306d28f4",
"name": "Nachrichtenquellen zusammenführen",
"type": "n8n-nodes-base.merge",
"position": [
-3380,
-100
],
"parameters": {},
"typeVersion": 3
},
{
"id": "56d67b29-8950-49f8-bdea-d8cd5461c8da",
"name": "Nachrichten der letzten 24 Stunden filtern",
"type": "n8n-nodes-base.filter",
"position": [
-3100,
-100
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "c590146a-caae-495c-a933-37864e921876",
"operator": {
"type": "number",
"operation": "gt"
},
"leftValue": "={{ $json.isoDate }}",
"rightValue": "={{ (new Date()).setHours(0, 0, 0, 0) - 24 * 60 * 60 * 1000 }}"
},
{
"id": "e7cf09fb-af35-495d-a840-341f8d0ddcd8",
"operator": {
"type": "number",
"operation": "lt"
},
"leftValue": "={{ $json.isoDate }}",
"rightValue": "={{ (new Date()).setHours(0, 0, 0, 0) }}"
}
]
}
},
"typeVersion": 2.2
},
{
"id": "f597998a-7621-49ce-9933-6db94c53325e",
"name": "Daten für KI aufbereiten",
"type": "n8n-nodes-base.code",
"position": [
-2820,
-100
],
"parameters": {
"jsCode": "// This code gets all the news items that passed the filter.\nconst allItems = $input.all();\n\n// We will format each news item with its title, content, and a link.\nconst formattedNews = allItems.map(item => {\n // Get the title, content, and link from the JSON data of each item.\n const title = item.json.title;\n const content = item.json.content;\n const link = item.json.link;\n\n // Return a clean, formatted string for each article.\n return `\n---\nTitle: ${title}\nContent: ${content}\nLink: ${link}\n---\n `;\n});\n\n// --- NEW CODE ADDED BELOW ---\n// Get today's date and format it beautifully.\n// It will look like: \"July 31, 2025\"\nconst digestDate = $now.setZone('America/Sao_Paulo').toFormat('MMMM d, yyyy');\n\n// Join all the formatted news items into a single block of text\n// and return it as an object along with our new date string.\nreturn {\n allNews: formattedNews.join('\\n'),\n digestDate: digestDate\n};"
},
"typeVersion": 2
},
{
"id": "db56296f-138c-4441-9593-a2392efef7fd",
"name": "Temporäres Verzeichnis erstellen",
"type": "n8n-nodes-base.executeCommand",
"position": [
-2540,
380
],
"parameters": {
"command": "mkdir -p /tmp/dailydigest"
},
"typeVersion": 1
},
{
"id": "a0290570-125b-4e68-a06c-f89cc83efd10",
"name": "Text-Digest generieren",
"type": "@n8n/n8n-nodes-langchain.agent",
"position": [
-2500,
-520
],
"parameters": {
"text": "=**ROLE & GOAL:**\nYou are a witty and engaging sports journalist. Your goal is to create a \"Daily Sports Digest\" in English that is informative and fun to read, summarizing sports news from Brazil.\n\n**CONTEXT:**\nYou will receive a block of text with news articles written in Portuguese and a formatted string for today's date.\n\n**CRITICAL RULES:**\n1. **Language:** Your entire output MUST be in English.\n2. **Source Integrity:** Use ONLY the information from the provided text. DO NOT add external information or speculate.\n3. **Markdown:** Your entire output must use MarkdownV2 for formatting.\n\n**OUTPUT STRUCTURE:**\n\n**1. Title:**\n- Use the provided date string to create a top-level heading like this: `# Daily Sports Digest: {{ $('Prepare Data for AI').item.json.digestDate }}`\n\n**2. Introduction:**\n- On the next line, start with a single, engaging paragraph (2-3 sentences) that gives a \"big picture\" summary of the day's main news or overall theme. Use a \"newspaper\" emoji 📰 at the beginning.\n\n**3. Football Section:**\n- Create a main heading: `## ⚽ Football Focus`\n- If there is enough variety, categorize the football news into subheadings like `### 🇧🇷 Brazilian Clubs`, `### 🌎 International`, or `### 🏆 Tournaments`. Use your best judgment based on the articles provided.\n\n**4. Other Sports Section:**\n- If there are articles on other sports (Motorsport, Basketball, etc.), group them all under a single heading: `## ⚡ Around the Horn`.\n\n**5. News Item Format (for every single article):**\n- Start the line with a single, relevant emoji (e.g., investigatory nature of a story).\n- Translate the original title into a concise and accurate English headline. Display this new English headline as a clickable link to its URL.\n- On the **next line**, write your concise 1-2 sentence summary in English.\n\n**EXAMPLE of the required News Item format:**\nIf you receive: `Title: AFA e River Plate criticam aumento de imposto a clubes na Argentina\\nLink: http://example.com/news-link`\nYou must format it like this: `[AFA and River Plate Criticize Tax Increase on Clubs in Argentina](http://example.com/news-link)\\nThe Argentine Football Association and River Plate have criticized a government measure to increase taxes on football clubs.`\n\n---\n**Here is the block of text with today's articles:**\n{{ $('Prepare Data for AI').item.json.allNews }}",
"options": {},
"promptType": "define"
},
"typeVersion": 2
},
{
"id": "08b47cef-36e7-4491-8c9d-8c1bf674f474",
"name": "Text-Digest senden",
"type": "n8n-nodes-base.telegram",
"position": [
-2040,
-520
],
"webhookId": "ca51bbe7-102b-4b7f-8442-560a0ccc7628",
"parameters": {
"text": "={{ $json.output }}",
"chatId": "[YOUR_TELEGRAM_CHAT_ID]",
"additionalFields": {
"parse_mode": "Markdown"
}
},
"credentials": {
"telegramApi": {
"id": "kuP8lCkbwbeD61gU",
"name": "Telegram account"
}
},
"typeVersion": 1.2
},
{
"id": "cf8d7651-4b2b-4ede-9fff-c7ca9fa6b248",
"name": "Podcast-Skript erstellen",
"type": "@n8n/n8n-nodes-langchain.agent",
"position": [
-2320,
380
],
"parameters": {
"text": "=You are the scriptwriter of a podcast that transforms dense written content into a lively, natural conversation between two AI speakers, `voice1` and `voice2`.\n\nYour task is to turn the following newsletter content into a **realistic audio dialogue**. The conversation should be fluid, informal, and engaging — similar in tone and structure to how NotebookLM rewrites long documents as discussions. It must sound like two well-informed people exchanging ideas, not like a text being read aloud.\n\n### Roles\n\n- **voice1**: Curious, expressive, casual, often injects humor or everyday references. Tends to ask questions, react with surprise or amusement, and bring lightness to the discussion.\n- **voice2**: Analytical, composed, insightful. Adds perspective, context, and a slightly ironic or dry sense of humor. Offers clarity without sounding robotic.\n\nUse realistic, human-like phrasing with brief interjections (`\"Right?\"`, `\"Let me stop you there\"`, `\"That's exactly it\"`). Use `<break time=\"1.5s\" />` tags occasionally to simulate natural pauses.\n\n### Structure\n\n1. **Introduction**: Set the scene naturally. Briefly introduce what the episode is about based on the content, without listing or labeling sections. Present `voice1` and `voice2` through dialogue, not narration.\n2. **Content Breakdown**: For each key idea or section from the newsletter:\n - Paraphrase the content in spoken language.\n - Embed the headline or theme organically in the conversation.\n - Include personal reactions, examples, and small tangents to make it relatable.\n - Open loops by teasing questions or ideas that will be answered later in the conversation.\n - Maintain curiosity and variety in tone and rhythm.\n3. **Closing**: End warmly and casually, with a brief comment on what stood out or what’s coming next (no need for formal farewells).\n\n### Requirements\n\n- The script must be at least **ten thousand characters** (about 15 minutes of speech).\n- Use **commas** to separate items in a list, not periods.\n- Format the output as a single uninterrupted block of text with clear speaker tags:\n \nvoice1: …\nvoice2: …\n\nYou will be given a newsletter input under this key:\n\n{{ $('Prepare Data for AI').item.json.allNews }}\n\nGenerate only the final dialogue script — no explanations, bullet points, or headings. Just the conversation in English.\n\n\n",
"options": {},
"promptType": "define"
},
"typeVersion": 2
},
{
"id": "04689446-91a9-4ba2-b76f-db126e150a47",
"name": "Notizzettel2",
"type": "n8n-nodes-base.stickyNote",
"position": [
-2900,
-600
],
"parameters": {
"color": 7,
"width": 1112,
"height": 1604,
"content": "## ✍️ Step 2: Generate AI Content (Digest & Script)\n### ✅ Expected Output:\nA formatted text message sent to Telegram and a \ntwo-person podcast script.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n### 🤖 Logic Explained:\nThis section has two parallel branches:\n\n**1. Text Digest:** The `Generate Text Digest` agent creates a written summary of the news, which is immediately sent to Telegram via the `Send Text Digest` node. This provides a quick, readable update.\n\n**2. Podcast Script:** The `Generate Podcast Script` agent takes the same news data and, following a detailed prompt, writes a long-form conversational dialogue between two distinct AI speakers `(voice1 and voice2)`. The `Create Temp Directory` node runs first to ensure the folder for saving audio files `(/tmp/dailydigest)` exists on the server.\n\n### ✍️ How to Customize:\n- AI Prompts: The core of this section is the prompts inside the `Generate Text Digest` and `Generate Podcast Script` nodes. **You can edit these extensively to change the tone, style, length, language, and format of the output.**\n- AI Model: Change the model used for generation in the corresponding `Google Gemini Chat Model` nodes.\n- Directory: If you change the temporary directory path in the Create Temp Directory node, **you must update it everywhere else in the workflow.**"
},
"typeVersion": 1
},
{
"id": "2a450954-0691-423a-b5c7-c8b9aac595ca",
"name": "Skript nach Sprecher aufteilen",
"type": "n8n-nodes-base.code",
"position": [
-1720,
380
],
"parameters": {
"jsCode": "/**\n * This Function node takes the script from the previous node\n * and splits it using \"voice1:\" and \"voice2:\" as delimiters.\n * Each resulting segment retains the respective identifier.\n */\n\nconst script = $input.first().json.output|| \"\";\n\n// Ensure consistent line breaks\nconst normalizedScript = script.replace(/\\r\\n/g, \"\\n\");\n\n// Split the script while keeping \"voice1:\" and \"voice2:\" in the result\nconst segments = normalizedScript.split(/(?=(?:voice1:|voice2:))/g).map(s => s.trim()).filter(Boolean);\n\n// Return one item per segment\nreturn segments.map(segment => {\n return {\n json: {\n segment\n }\n };\n});"
},
"typeVersion": 2
},
{
"id": "467b302f-98fd-4c81-9dbd-8a467469d057",
"name": "Segmente durchlaufen",
"type": "n8n-nodes-base.splitInBatches",
"position": [
-1500,
380
],
"parameters": {
"options": {}
},
"typeVersion": 3
},
{
"id": "ff9fa0c1-cc63-4fac-8001-e095f8b6d9ae",
"name": "Dialogsegment bereinigen",
"type": "n8n-nodes-base.code",
"position": [
-1320,
560
],
"parameters": {
"jsCode": "const paragraph = $input.first().json.segment; \nif (!paragraph) {\n throw new Error(\"No se encontró contenido de texto en el correo.\");\n}\n\nlet cleanedText = paragraph\n .replace(/\"/g, \"\")\n .replace(/“/g, \"\")\n .replace(/”/g, \"\");\n\ncleanedText = cleanedText.replace(/\\n/g, \"\");\n\nconsole.log(\"Texto limpio sin comillas ni saltos de línea:\", cleanedText);\n\nreturn [{ json: { cleanedText } }];\n"
},
"typeVersion": 2
},
{
"id": "228ee637-49f9-4ecd-ac6d-f532a2aef401",
"name": "An korrekte Stimme weiterleiten",
"type": "n8n-nodes-base.if",
"position": [
-1100,
560
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "93609a28-55f5-439e-8238-a48375255f4f",
"operator": {
"type": "string",
"operation": "contains"
},
"leftValue": "={{ $json.cleanedText }}",
"rightValue": "voice1:"
}
]
}
},
"typeVersion": 2.2
},
{
"id": "4857627b-027d-4270-8a48-3f2c4461d3a7",
"name": "Text für TTS aufbereiten",
"type": "n8n-nodes-base.code",
"position": [
-880,
560
],
"parameters": {
"jsCode": "// Get the text from the previous node.\nconst cleanedText = $input.first().json.cleanedText;\n\n// Safety check: Ensure the input is a string.\nif (typeof cleanedText !== \"string\") {\n throw new Error(\"Input 'cleanedText' must be a string.\");\n}\n\n// Debugging: Log the original text received by the node.\nconsole.log(\"Original text:\", JSON.stringify(cleanedText, null, 2));\n\n// Debugging: Check if the speaker tag exists before removal.\nif (cleanedText.includes(\"voice1:\")) {\n console.log(\"✅ 'voice1:' detected in original text.\");\n} else {\n console.log(\"❌ 'voice1:' NOT found in original text. Check input!\");\n}\n\n// This is the main action: Remove the speaker tag (e.g., \"voice1: \") and trim whitespace.\nconst modifiedString = cleanedText.replace(/\\bvoice1:\\s*/gi, \"\").trim();\n\n// Debugging: Log the text *after* modification to confirm it was removed.\nconsole.log(\"Modified text:\", JSON.stringify(modifiedString, null, 2));\n\n// Debugging: Final check to ensure the tag is gone.\nif (modifiedString.includes(\"voice1:\")) {\n console.log(\"❌ 'voice1:' is still present in the modified text. The regex needs adjustment!\");\n} else {\n console.log(\"✅ 'voice1:' removed successfully.\");\n}\n\n// Return the final, cleaned string for the TTS API.\nreturn [\n {\n json: {\n modifiedString\n }\n }\n];\n"
},
"typeVersion": 2
},
{
"id": "e43ecaff-904d-4039-b8f3-21680080ed99",
"name": "Text für TTS1 aufbereiten",
"type": "n8n-nodes-base.code",
"position": [
-860,
800
],
"parameters": {
"jsCode": "const cleanedText = $input.first().json.cleanedText;\n\nif (typeof cleanedText !== \"string\") {\n throw new Error(\"cleanedText debe ser un string.\");\n}\n\nconsole.log(\"Texto original:\", JSON.stringify(cleanedText, null, 2));\n\nif (cleanedText.includes(\"voice2:\")) {\n console.log(\"✅ 'voice2:' detectado en el texto original.\");\n} else {\n console.log(\"❌ 'voice2:' NO encontrado en el texto original. ¡Revisar input!\");\n}\n\nconst modifiedString = cleanedText.replace(/\\bvoice2:\\s*/gi, \"\").trim();\n\nconsole.log(\"Texto modificado:\", JSON.stringify(modifiedString, null, 2));\n\nif (modifiedString.includes(\"voice2:\")) {\n console.log(\"❌ 'voice2:' sigue presente en el texto modificado. ¡El regex debe ajustarse!\");\n} else {\n console.log(\"✅ 'voice2:' eliminado correctamente.\");\n}\n\nreturn [\n {\n json: {\n modifiedString\n }\n }\n];"
},
"typeVersion": 2
},
{
"id": "66c339ee-6893-4f1e-ac56-f76a77d02591",
"name": "Audio generieren (Stimme 1)",
"type": "n8n-nodes-base.httpRequest",
"position": [
-600,
560
],
"parameters": {
"url": "https://tts-kokoro.mfxikq.easypanel.host/api/v1/audio/speech",
"method": "POST",
"options": {},
"sendBody": true,
"bodyParameters": {
"parameters": [
{
"name": "model",
"value": "model_q8f16"
},
{
"name": "voice",
"value": "am_liam"
},
{
"name": "speed",
"value": "={{ 1 }}"
},
{
"name": "response_format",
"value": "mp3"
},
{
"name": "input",
"value": "={{ $json.modifiedString }}"
}
]
}
},
"notesInFlow": true,
"retryOnFail": true,
"typeVersion": 4.2
},
{
"id": "d725b21a-4b3e-444e-baf4-a08cd3a3d663",
"name": "Audio generieren (Stimme 2)",
"type": "n8n-nodes-base.httpRequest",
"position": [
-600,
800
],
"parameters": {
"url": "https://tts-kokoro.mfxikq.easypanel.host/api/v1/audio/speech",
"method": "POST",
"options": {},
"sendBody": true,
"bodyParameters": {
"parameters": [
{
"name": "model",
"value": "model_q8f16"
},
{
"name": "voice",
"value": "af_heart"
},
{
"name": "speed",
"value": "={{ 1 }}"
},
{
"name": "response_format",
"value": "mp3"
},
{
"name": "input",
"value": "={{ $json.modifiedString }}"
}
]
}
},
"retryOnFail": true,
"typeVersion": 4.2
},
{
"id": "cfd32a14-4359-4025-9ed6-6b41bfd48432",
"name": "Notizzettel3",
"type": "n8n-nodes-base.stickyNote",
"position": [
-1760,
-600
],
"parameters": {
"color": 7,
"width": 1372,
"height": 1604,
"content": "## ✍️ Step 3: Split Script & Generate Audio Chunks\n\n### ✅ Expected Output:\nA series of individual items, each containing the binary data for one small MP3 audio chunk. These are now ready to be saved to the server.\n\n### 🤖 Logic Explained:\nThis section is the core audio generation engine.\n\n**1. Split Script:** The `Split Script by Speaker` node takes the long dialogue from the AI and uses a regular expression to split it into an array of smaller text chunks, one for each line of dialogue.\n**2. Loop & Clean:** The workflow then loops through each chunk. The `Clean Dialogue Segment` and `Prepare Text for TTS` nodes remove any unwanted characters or speaker tags (like \"voice1:\") from the text to ensure it's clean for the TTS API.\n**3. Route & Generate:** The `Route to Correct Voice` node checks which speaker the line belongs to and sends it to the correct HttpRequest node. Each of these nodes is configured to call the TTS API with a different voice, generating a unique MP3 audio file for that specific line.\n\n### 🗣️ TTS Service: `Kokoro`\nThis workflow uses two HttpRequest nodes to call the `Kokoro TTS API`. This service was chosen for its straightforward API and voice options.\n\nA brief explanation about the TTS setup:\n- Authentication: You must get your own API key from Kokoro. In both `HttpRequest` nodes (`Generate Audio (Voice 1)` and `Generate Audio (Voice 2)`), go to the Headers section and replace the placeholder API key in the `X-API-KEY` header.\n- Voices: To change the voices for the two speakers, modify the voice parameter in the Body of each HttpRequest node. You can find a list of available voices in the Kokoro API documentation\n\n### ✍️ How to Customize:\n- Voices: The main customization here is in the `Generate Audio (Voice 1)` and `Generate Audio (Voice 2)` nodes. You can change the voice parameter in the request body to use any of the voices your TTS service provides.\n- Cleaning Logic: The Code nodes that clean the text can be adjusted if your AI's output changes or includes other unwanted characters."
},
"typeVersion": 1
},
{
"id": "877dd4d2-3096-448f-bb36-759ee1abb352",
"name": "Audio-Chunk auf Festplatte speichern",
"type": "n8n-nodes-base.readWriteFile",
"position": [
-260,
360
],
"parameters": {
"options": {},
"fileName": "=/tmp/dailydigest_{{$itemIndex}}.mp3",
"operation": "write"
},
"typeVersion": 1
},
{
"id": "5bf9e496-fa0d-4d4c-8908-de01512fe4d8",
"name": "FFmpeg Concat-Liste generieren",
"type": "n8n-nodes-base.code",
"position": [
-40,
360
],
"parameters": {
"jsCode": "/**\n * This Code node will:\n * 1. Gather all file paths from the incoming items (assuming each item has `item.json.filePath`).\n * 2. Build a single text string, each line in FFmpeg concat format: `file '/path/to/audio.mp3'`\n * 3. Convert that text to binary (Base64) so the next node (\"Write Binary File\") can save it as `concat_list.txt`.\n */\n\nconst items = $input.all();\n\n// Build the concat list\nlet concatListText = '';\n\nitems.forEach(item => {\n // The 'Save File' node outputs the path in item.json.filePath\n const filePath = item.json.filePath;\n if (filePath) {\n concatListText += `file '${filePath}'\\n`;\n }\n});\n\n// The 'Save concat_list' node expects binary data\nconst buffer = Buffer.from(concatListText, 'utf-8');\nconst base64Data = buffer.toString('base64');\n\nreturn [\n {\n json: {},\n binary: {\n data: {\n data: base64Data,\n mimeType: 'text/plain',\n fileName: 'concat_list.txt'\n }\n }\n }\n];"
},
"typeVersion": 2
},
{
"id": "73680e23-7e03-4125-a5f9-91defe784743",
"name": "Concat-Liste auf Festplatte speichern",
"type": "n8n-nodes-base.readWriteFile",
"position": [
180,
360
],
"parameters": {
"options": {},
"fileName": "/tmp/dailydigest/concat_list.txt",
"operation": "write"
},
"typeVersion": 1
},
{
"id": "9b64cb66-f9be-449e-a12a-3e3e5504cadb",
"name": "Audio zusammenführen & aufräumen",
"type": "n8n-nodes-base.executeCommand",
"position": [
400,
360
],
"parameters": {
"command": "ffmpeg -y -f concat -safe 0 -i /tmp/dailydigest/concat_list.txt -c copy /tmp/dailydigest/final_merged.mp3\n\nfind /tmp/dailydigest/ -type f ! -name \"final_merged.mp3\" -delete\n"
},
"typeVersion": 1
},
{
"id": "91281464-dc37-4ae0-94f2-bea3d1054fdf",
"name": "Notizzettel4",
"type": "n8n-nodes-base.stickyNote",
"position": [
-360,
-600
],
"parameters": {
"color": 7,
"width": 932,
"height": 1264,
"content": "## 🎛️ Step 4: Save, Prepare, & Merge Audio\n\n### ✅ Expected Output:\nA single item containing the path to the final merged MP3 file, ready to be read and sent.\n\n### 🤖 Logic Explained:\nThis is the file management and assembly line of the workflow.\n\n**1. Save Chunks:** The `Save Audio Chunk to Disk` node takes each MP3 from the loop and saves it to the `/tmp/dailydigest/` directory with a unique, indexed filename (e.g., audio_0.mp3, audio_1.mp3).\n**2. Generate List:** After the loop finishes, the `Generate FFmpeg Concat List` node runs. It gathers the file paths of all the saved chunks and creates a simple text file `(concat_list.txt)` that acts as a playlist for FFmpeg.\n**3. Merge & Clean:** The `Merge Audio & Clean Up` node executes two commands. First, it runs ffmpeg, which reads the concat_list.txt and joins all the audio chunks into a single, final MP3 file. Second, it runs a find command to delete all the temporary audio chunks and the list file, keeping the server clean.\n\n### ✍️ How to Customize:\n- The main customization is in the `Merge Audio & Clean Up` node. You can modify the ffmpeg command to change audio quality, add fades, or perform other advanced audio processing."
},
"typeVersion": 1
},
{
"id": "ad20440c-57dd-4e26-8505-13aa166cd274",
"name": "Notizzettel5",
"type": "n8n-nodes-base.stickyNote",
"position": [
600,
-600
],
"parameters": {
"color": 7,
"width": 632,
"height": 1264,
"content": "## 📤 Step 5: Read Merged Audio & Send Final Podcast\n\n### ✅ Expected Output:\nA series of individual items, each containing the binary data for one small MP3 audio chunk. These are now ready to be saved to the server.\n\n### 🤖 Logic Explained:\nThis is the final delivery stage.\n\n1. Read File: The `Read Final Merged MP3` node takes the path to the completed audio file created by `FFmpeg` and reads it from the server's disk into n8n's binary data format. This prepares it for upload.\n2. Send to Telegram: The `Send Podcast to Telegram` node takes this binary data and uploads it directly to your specified Telegram chat.\n\n### ✍️ How to Customize:\n- Delivery Channel: You can replace the `Send Podcast to Telegram` node with any other node to change the final destination (e.g., Google Drive, Discord, Email).\n- Filename: **The filename that appears in Telegram is set dynamically in the Additional Fields section of the Telegram node. You can change the format of the date or the name of the file by editing the expression there.**"
},
"typeVersion": 1
},
{
"id": "aed39482-9644-4f38-93e6-4df6bb2d1742",
"name": "Finale zusammengeführte MP3 lesen",
"type": "n8n-nodes-base.readWriteFile",
"position": [
720,
360
],
"parameters": {
"options": {},
"fileSelector": "/tmp/dailydigest/final_merged.mp3"
},
"typeVersion": 1
},
{
"id": "4069618f-f4e0-484b-92bc-39907a876896",
"name": "Podcast an Telegram senden",
"type": "n8n-nodes-base.telegram",
"position": [
1000,
360
],
"webhookId": "d25eb6ba-9cd2-4d43-80c4-a46cb68c1d56",
"parameters": {
"chatId": "[YOUR_TELEGRAM_CHAT_ID]",
"operation": "sendAudio",
"binaryData": true,
"additionalFields": {
"fileName": "=Daily Digest - {{ $now.setZone('America/Sao_Paulo').toFormat('dd/LL/yyyy') }}.mp3"
}
},
"credentials": {
"telegramApi": {
"id": "kuP8lCkbwbeD61gU",
"name": "Telegram account"
}
},
"typeVersion": 1.2
}
],
"active": true,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "59a82666-de28-4e02-bf8e-936f8b628674",
"connections": {
"a9078490-2bd4-4b5e-ada1-ee67e147d139": {
"main": [
[
{
"node": "0af5a33b-e2e4-4fc3-95c4-a5c24c6b2ab5",
"type": "main",
"index": 0
},
{
"node": "84cc82a2-937f-4b7f-801c-045fb7e164cf",
"type": "main",
"index": 0
}
]
]
},
"5b1afc5f-829b-4ee2-866e-4bf045631f43": {
"main": [
[
{
"node": "69db334b-7681-4174-abaf-f745306d28f4",
"type": "main",
"index": 0
}
]
]
},
"84cc82a2-937f-4b7f-801c-045fb7e164cf": {
"main": [
[
{
"node": "0d76fc33-7930-4ab8-b363-b86e1e5f6889",
"type": "main",
"index": 0
}
]
]
},
"0d76fc33-7930-4ab8-b363-b86e1e5f6889": {
"main": [
[
{
"node": "69db334b-7681-4174-abaf-f745306d28f4",
"type": "main",
"index": 1
}
]
]
},
"69db334b-7681-4174-abaf-f745306d28f4": {
"main": [
[
{
"node": "56d67b29-8950-49f8-bdea-d8cd5461c8da",
"type": "main",
"index": 0
}
]
]
},
"f597998a-7621-49ce-9933-6db94c53325e": {
"main": [
[
{
"node": "db56296f-138c-4441-9593-a2392efef7fd",
"type": "main",
"index": 0
},
{
"node": "a0290570-125b-4e68-a06c-f89cc83efd10",
"type": "main",
"index": 0
}
]
]
},
"a0290570-125b-4e68-a06c-f89cc83efd10": {
"main": [
[
{
"node": "08b47cef-36e7-4491-8c9d-8c1bf674f474",
"type": "main",
"index": 0
}
]
]
},
"4857627b-027d-4270-8a48-3f2c4461d3a7": {
"main": [
[
{
"node": "66c339ee-6893-4f1e-ac56-f76a77d02591",
"type": "main",
"index": 0
}
]
]
},
"db56296f-138c-4441-9593-a2392efef7fd": {
"main": [
[
{
"node": "cf8d7651-4b2b-4ede-9fff-c7ca9fa6b248",
"type": "main",
"index": 0
}
]
]
},
"467b302f-98fd-4c81-9dbd-8a467469d057": {
"main": [
[
{
"node": "877dd4d2-3096-448f-bb36-759ee1abb352",
"type": "main",
"index": 0
}
],
[
{
"node": "ff9fa0c1-cc63-4fac-8001-e095f8b6d9ae",
"type": "main",
"index": 0
}
]
]
},
"e43ecaff-904d-4039-b8f3-21680080ed99": {
"main": [
[
{
"node": "d725b21a-4b3e-444e-baf4-a08cd3a3d663",
"type": "main",
"index": 0
}
]
]
},
"aed39482-9644-4f38-93e6-4df6bb2d1742": {
"main": [
[
{
"node": "4069618f-f4e0-484b-92bc-39907a876896",
"type": "main",
"index": 0
}
]
]
},
"ff9fa0c1-cc63-4fac-8001-e095f8b6d9ae": {
"main": [
[
{
"node": "228ee637-49f9-4ecd-ac6d-f532a2aef401",
"type": "main",
"index": 0
}
]
]
},
"9b64cb66-f9be-449e-a12a-3e3e5504cadb": {
"main": [
[
{
"node": "aed39482-9644-4f38-93e6-4df6bb2d1742",
"type": "main",
"index": 0
}
]
]
},
"228ee637-49f9-4ecd-ac6d-f532a2aef401": {
"main": [
[
{
"node": "4857627b-027d-4270-8a48-3f2c4461d3a7",
"type": "main",
"index": 0
}
],
[
{
"node": "e43ecaff-904d-4039-b8f3-21680080ed99",
"type": "main",
"index": 0
}
]
]
},
"cf8d7651-4b2b-4ede-9fff-c7ca9fa6b248": {
"main": [
[
{
"node": "2a450954-0691-423a-b5c7-c8b9aac595ca",
"type": "main",
"index": 0
}
]
]
},
"2a450954-0691-423a-b5c7-c8b9aac595ca": {
"main": [
[
{
"node": "467b302f-98fd-4c81-9dbd-8a467469d057",
"type": "main",
"index": 0
}
]
]
},
"0af5a33b-e2e4-4fc3-95c4-a5c24c6b2ab5": {
"main": [
[
{
"node": "5b1afc5f-829b-4ee2-866e-4bf045631f43",
"type": "main",
"index": 0
}
]
]
},
"66c339ee-6893-4f1e-ac56-f76a77d02591": {
"main": [
[
{
"node": "467b302f-98fd-4c81-9dbd-8a467469d057",
"type": "main",
"index": 0
}
]
]
},
"d725b21a-4b3e-444e-baf4-a08cd3a3d663": {
"main": [
[
{
"node": "467b302f-98fd-4c81-9dbd-8a467469d057",
"type": "main",
"index": 0
}
]
]
},
"e01655d1-9722-4630-a7d6-4425282ace11": {
"ai_languageModel": [
[
{
"node": "a0290570-125b-4e68-a06c-f89cc83efd10",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"877dd4d2-3096-448f-bb36-759ee1abb352": {
"main": [
[
{
"node": "5bf9e496-fa0d-4d4c-8908-de01512fe4d8",
"type": "main",
"index": 0
}
]
]
},
"73680e23-7e03-4125-a5f9-91defe784743": {
"main": [
[
{
"node": "9b64cb66-f9be-449e-a12a-3e3e5504cadb",
"type": "main",
"index": 0
}
]
]
},
"3271e00c-bf97-4d63-b1ec-8e33c63eaf29": {
"ai_languageModel": [
[
{
"node": "cf8d7651-4b2b-4ede-9fff-c7ca9fa6b248",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"56d67b29-8950-49f8-bdea-d8cd5461c8da": {
"main": [
[
{
"node": "f597998a-7621-49ce-9933-6db94c53325e",
"type": "main",
"index": 0
}
]
]
},
"5bf9e496-fa0d-4d4c-8908-de01512fe4d8": {
"main": [
[
{
"node": "73680e23-7e03-4125-a5f9-91defe784743",
"type": "main",
"index": 0
}
]
]
}
}
}
Frequently Asked Questions
How do I use this workflow?
Copy the JSON code above, create a new workflow in your n8n instance, and choose "Import from JSON". Paste the configuration and adjust the credentials as needed.
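If you prefer to script the import rather than paste it into the editor, n8n also exposes a Public API for creating workflows. The sketch below is one possible approach, not the only supported path: it assumes the export above is saved as daily-sports-digest.json, that BASE_URL and N8N_API_KEY point to your own instance and key, and that the Public API is enabled there; some n8n versions reject read-only fields, so only the core ones are sent.

// Sketch only: import the exported workflow through the n8n Public API.
// BASE_URL, N8N_API_KEY, and the local file name are placeholders you supply yourself.
const fs = require("node:fs/promises");

async function importWorkflow(path) {
  const exported = JSON.parse(await fs.readFile(path, "utf8"));

  // Send only the fields the create endpoint needs; id, active, and tags are managed by n8n.
  const payload = {
    name: exported.name,
    nodes: exported.nodes,
    connections: exported.connections,
    settings: exported.settings,
  };

  const res = await fetch(`${process.env.BASE_URL}/api/v1/workflows`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-N8N-API-KEY": process.env.N8N_API_KEY,
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Import failed: ${res.status} ${await res.text()}`);

  const created = await res.json();
  console.log("Created workflow with id:", created.id);
}

importWorkflow("daily-sports-digest.json");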
Which scenarios is this workflow suitable for?
Expert - Content Creation, Multimodal AI
Does it cost anything?
The workflow itself is completely free. However, third-party services used in the workflow (such as the Google Gemini API) may incur charges.
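For reference, the audio assembly in the workflow above comes down to FFmpeg's concat demuxer reading a generated file list and copying the streams without re-encoding. The sketch below reproduces that idea outside n8n under a few assumptions: the paths mirror the /tmp/dailydigest layout used in the export, ffmpeg is available on the PATH, and cleanup of the temporary files is left out.

// Sketch only: merge per-line MP3 chunks with FFmpeg's concat demuxer, mirroring the
// "Generate FFmpeg Concat List" and "Merge Audio & Clean Up" nodes in the export above.
const fs = require("node:fs/promises");
const { execFile } = require("node:child_process");
const { promisify } = require("node:util");

const run = promisify(execFile);

async function mergeChunks(chunkPaths, outFile = "/tmp/dailydigest/final_merged.mp3") {
  // One "file '<path>'" line per chunk, in playback order.
  const listPath = "/tmp/dailydigest/concat_list.txt";
  await fs.writeFile(listPath, chunkPaths.map((p) => `file '${p}'\n`).join(""));

  // "-c copy" joins the chunks losslessly without re-encoding the audio.
  await run("ffmpeg", ["-y", "-f", "concat", "-safe", "0", "-i", listPath, "-c", "copy", outFile]);
  return outFile;
}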
Related Workflows
- Content Generator v3 (If, Set, Code · 144 nodes · Jay Emp0 · Content Creation)
- Automatically Create and Publish 1080p Short Videos with Veo-3, Perplexity, and FFmpeg (If, Set, Wait · 21 nodes · Sulieman Said · Content Creation)
- WordPress Blog Automation Professional Edition (Deep Research) v2.1 Market: automate the creation of SEO-optimized blogs with GPT-4o, Perplexity AI, and multilingual support (If, Set, Xml · 125 nodes · Daniel Ng · Content Creation)
- Content Aggregation: automated publishing of social media posts to LinkedIn and X/Twitter from website articles with Gemini AI (If, Set, Xml · 34 nodes · Vadim · Content Creation)
- Automatic Viral Content Engine for LinkedIn and X: automatically create and publish viral content for LinkedIn and X with AI (If, Set, Wait · 156 nodes · Diptamoy Barman · Content Creation)
- Complete B2B Sales Process: Apollo Lead Generation, Mailgun Outreach, and AI Reply Management (If, Set, Code · 116 nodes · Paul · Content Creation)
Workflow Information
Difficulty
Expert
Number of Nodes: 34
Categories: 2
Node Types: 15
Author
Jonas
@jony-cornelio
External Links
View on n8n.io →