
How to Create a Chatbot for Text Generation: A Step-by-Step Guide

ASCN Team
29 March 2026

Do you want to automate your writing process? If so, you've probably spent hours reading articles in search of a complete solution, finding only bits and pieces without any real direction. We've experienced this ourselves: over more than 8 years of working with AI and automation, we have tested 43 different approaches to building text bots, from simple Python scripts to fully fledged GPT-4-based AI agents. What we've learned is that building a quality text generation chatbot, whichever approach you choose, has little to do with the trend of the moment and everything to do with understanding what the bot is being asked to do, designing its architecture, and combining disparate tools into a cohesive product that delivers value to the user. In this post, we will show you how to create your own AI-based text bot that produces high-quality results for real business needs instead of meaningless "fluff."


What Is A Text Generation ChatBot?


A text generation chatbot is an LLM-based program that generates meaningful, original text in response to user input. Unlike a static template that simply fills in supplied details (e.g., 'please send me an email'), the bot works in a more sophisticated way: it interprets the user's input, converts it into context, and produces a 'human'-sounding response. Typical uses include copywriting (creating content to promote products or services), personalising messages, and processing customer requests with a contextual understanding of their meaning.

The main technology that powers content bots includes:

  1. LLM Models (GPT-3/4, Claude, LLaMA) — the foundation for generating long-form written content.
  2. Vector Databases — where the business and contextual data is stored.
  3. API Connections — connect to CRM systems, social media/desks and Analytics systems.
  4. Prompt Engineering — crafting prompts that steer how the model produces its output.

Overall, content bots can generate written or digital content that is ready to publish. So while both types of bots answer questions, what they deliver differs significantly from a traditional Q&A bot.

GPT and AI in Automated Content Creation

GPT (Generative Pre-trained Transformer) models learned structure, style, tone, and organisation from billions of examples across many categories, enabling users to create high-quality content. ChatGPT use among marketing teams reportedly grew by 340% in 2023, sharply accelerating LLM-based content creation. AI has cut drafting time by more than 10x, although much AI-generated content still needs manual editing for accuracy and brand tone.

⚠️ Disclaimer: The 340% growth figure is cited by the authors — independent verification of this specific number is difficult. AI adoption in marketing has been confirmed by many studies, but we recommend checking the original source yourself.

The benefits of implementing AI in the process of creating text:

  • Speed: 1000 written words can be generated in between 30 and 60 seconds;
  • Scalability: one bot can handle hundreds of concurrent requests;
  • Consistency: every response will have a standard tone of voice across all communications;
  • Personalisation: responses can be tailored to specific audiences without the need for human input.

On the ASCN.AI crypto analytics site, an AI assistant answers complicated questions about tokens within 10 seconds based on an analysis of blockchain data. LLMs that rely on a web search cannot do this — the ASCN.AI team spent two years indexing the Ethereum and Solana chains to give users current on-chain answers. The result: instead of collecting data from 20+ different sources, users get a complete, organised picture on the spot.


Overview of Popular Technologies and Platforms

⚠️ Pricing disclaimer: API pricing changes frequently. The figures in the table below were current at the time of writing — always check official provider pages before making decisions.
Platform | Type | Advantages | Disadvantages | Cost
OpenAI GPT-3/4 | Closed API | High-quality text, large context window (128K tokens) | Expensive; dependent on the provider | From $0.002 per 1K tokens (GPT-3.5)
DialoGPT | Open-source | No cost; complete control over your model | Requires powerful GPUs; quality lower than GPT-4 | Free
Claude (Anthropic) | Closed API | Large context window (200K tokens and beyond); enhanced safety | Limited creativity compared to GPT-4 | From $0.001 per 1K tokens
LLaMA 2 (Meta) | Open-source | Free; can be fine-tuned for a specific task | Requires own hardware and strong technical expertise | Free
ASCN.AI NoCode | No-code platform | No programming knowledge needed; pre-built workflows and integrations | Limited compared to fully customised solutions | Free tier; from $29/month

How do I choose a platform?

  • If your goal is to have a quickly developed MVP with limited funds: DialoGPT or LLaMA 2 (however, you will require technical skills);
  • For quality content: OpenAI GPT-4 — keep in mind the costs are substantially higher;
  • If you plan to analyse long, complex documents: Claude, due to its significantly larger context window;
  • If you do not have an IT team: no-code solutions such as ASCN.AI.

Remember: before choosing a platform, identify what you need to achieve. GPT-3.5 is fine for high-quality email newsletters, but legal document content should be produced with GPT-4 and fact-checked to your organisation's standards.


Text Generation Chatbot Framework

Architecture: Layers of the Bot

The full system has four tiers of components:

  • User Interface Tier: This is where the user interacts with the bot (Telegram, web chat, etc.)
  • Logic Tier: This analyses the input request, finds the intent, and directs it.
  • Generation Tier: This is the Language Model (LLM) portion of the bot that creates the text based on a user's input and the context of the conversation.
  • Data Tier: A collection of knowledge bases, previous conversation history, and settings for the model.

The bot's workflow: the user sends a message → the interface forwards it to the Logic Tier → the Logic Tier creates an appropriate prompt with context → the LLM generates a response → the response is passed back to the user.

Without a Data Tier, the bot cannot consider your specifics and will provide a general response. By integrating with Vector Databases, the bot can significantly enhance the quality and accuracy of responses, allowing it to quickly access key information related to your organisation or industry.
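To make the tiers concrete, here is a minimal sketch of the flow, with a stub standing in for the real LLM call. All function names here are illustrative, not part of any library:

```python
def logic_tier(user_message, knowledge_base):
    # Logic Tier: analyse the request and build a prompt with context
    context = knowledge_base.get("company", "")
    return f"Context: {context}\nUser: {user_message}"

def generation_tier(prompt, llm):
    # Generation Tier: the LLM turns the prompt into text
    return llm(prompt)

def handle_message(user_message, knowledge_base, llm):
    # Interface Tier: receive the message, run it through the tiers, reply
    prompt = logic_tier(user_message, knowledge_base)
    return generation_tier(prompt, llm)

# Stub LLM standing in for a real API call
fake_llm = lambda prompt: f"[reply to: {prompt.splitlines()[-1]}]"
reply = handle_message("Write a slogan", {"company": "ASCN.AI"}, fake_llm)
print(reply)  # [reply to: User: Write a slogan]
```

Swapping `fake_llm` for a real API client is the only change needed to make this pipeline live.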

Integrating the OpenAI API

To register with the OpenAI API, visit platform.openai.com to obtain an API key. You will also need to install the Python library:

pip install openai

Below is a basic example of how you would set up a request using OpenAI's API:

from openai import OpenAI

# openai>=1.0 syntax; the legacy openai.ChatCompletion interface was removed
client = OpenAI(api_key='YOUR_API_KEY')

response = client.chat.completions.create(
    model='gpt-4',
    messages=[
        {'role': 'system', 'content': 'You are a copywriter for email newsletters.'},
        {'role': 'user', 'content': 'Write a welcome email to a new client'}
    ],
    max_tokens=500,
    temperature=0.7
)

print(response.choices[0].message.content)

Key parameters:

  • model — the version of the AI model being used;
  • temperature (0.0 to 2.0) — creativity level: 0 = strictly factual, 1.5 = creative;
  • max_tokens — the maximum length of the generated text.

Platforms like ASCN.AI enable development of flows for working with the API without writing code, using pre-built workflow blocks to create visual scripts to manage and execute API requests.

How to Process Requests and Generate Responses

There are three stages to request processing:

  1. Receive and Analyse: extract request text; classify the intent; check for toxic content.
  2. Formulate the Prompt: add system instructions; add context from the Knowledge Base; add conversation history (if available).
  3. Generate and Post-process: call the API; review the response for quality and content; send response to user.

Ways to improve the process:

  • Cache frequently used requests — reduces API load and speeds up response time;
  • Stream the response — users see the answer while it's still being generated, which is more user-friendly;
  • Batch process requests — group similar requests together for efficiency.

In one of our clients' crypto projects, long texts took 15 seconds to come back from the API. The solution: show short answers immediately and generate long answers asynchronously, sending a link to the completed document. This tripled perceived response speed.
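The first optimisation, caching, can be sketched in a few lines: identical prompts never hit the API twice. The counter below only exists to show that the second call is served from the cache.

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=1024)
def cached_generate(prompt):
    # Stands in for an expensive API call; runs once per unique prompt
    calls["count"] += 1
    return f"generated: {prompt}"

cached_generate("welcome email")
cached_generate("welcome email")  # served from cache, no second "API" call
print(calls["count"])  # 1
```

In production, key the cache on the full prompt plus model settings, and add an expiry so stale answers get refreshed.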

Developing a Chatbot to Generate Text Step by Step

Step One — Choosing the Model and Platform

The most important factors are quality, budget, and security:

  • High-quality, high-budget option — GPT-4;
  • Mid-range budget — GPT-3.5-turbo, Claude Instant;
  • Privacy/security priority — locally hosted model such as LLaMA or Mistral;
  • No IT team — no-code builder such as ASCN.AI.

A hybrid model combining GPT-3.5 with manual edits has been found to result in a 70% cost reduction compared to GPT-4 while still providing acceptable quality and speed.

Creating a Basic Bot

Here is a basic example of a Telegram bot in Python with a connection to the OpenAI API:

import telebot
from openai import OpenAI

# pyTelegramBotAPI handles Telegram; the OpenAI client (SDK v1+) handles generation
bot = telebot.TeleBot("TELEGRAM_TOKEN")
client = OpenAI(api_key="OPENAI_API_KEY")

@bot.message_handler(commands=['start'])
def start(message):
    bot.send_message(message.chat.id, "Hello! Send me a request and I'll create a text for you.")

@bot.message_handler(func=lambda message: True)
def generate_text(message):
    # Forward every other message to the LLM and relay the answer
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a copywriter for businesses."},
            {"role": "user", "content": message.text}
        ],
        max_tokens=300,
        temperature=0.8
    )
    bot.send_message(message.chat.id, response.choices[0].message.content)

# infinity_polling restarts polling automatically after network errors
bot.infinity_polling()

You can also implement a similar bot in Node.js to handle a greater volume of requests. The basic skeleton is ready — from here you tune it to your specific task.

Training and Tuning a Model

OpenAI provides a way to fine-tune your model using their platform:

  1. Create a dataset in JSONL format with 100 to 200 examples (request-response);
  2. Use the API to start fine-tuning (approximately $0.008 per 1K tokens);
  3. You will receive an adapted custom model suited to your business needs.

You can fine-tune local models using Hugging Face and a GPU with at least 24GB of VRAM. A simpler and equally effective approach is prompt engineering — writing clear instructions and providing multiple examples for the model to learn from, pulling knowledge from vector databases such as Pinecone and Weaviate.
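One training example in the chat fine-tuning JSONL format looks like this. Each line of the file is a complete JSON object with a full messages array; the texts below are placeholders:

```python
import json

# A single request-response example in chat fine-tuning format
example = {
    "messages": [
        {"role": "system", "content": "You are a copywriter for businesses."},
        {"role": "user", "content": "Write a welcome email to a new client"},
        {"role": "assistant", "content": "Hello and welcome! ..."},
    ]
}

# Each example becomes one line in dataset.jsonl
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")

# Quick sanity check: every line must parse and contain a messages array
with open("dataset.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        assert "messages" in record
```

Validating the file this way before uploading saves a failed fine-tuning run over a single malformed line.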

Here are standard model settings for specific tasks:

Task | Temperature | Max Tokens | Frequency Penalty | Presence Penalty
FAQ Bot | 0.2 | 200 | 0.3 | 0.0
Blog Post | 0.7 | 2000 | 0.7 | 0.5
Creative Advertisements | 1.3 | 300 | 1.0 | 0.8
Analytical Reports | 0.4 | 3000 | 0.5 | 0.3
Email Newsletters | 0.8 | 500 | 0.8 | 0.5
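The table above can be kept in code as a simple lookup so every request for a given task uses consistent parameters. This is a hypothetical helper with values copied from the table:

```python
# Per-task generation settings, mirroring the table above
TASK_SETTINGS = {
    "faq": {"temperature": 0.2, "max_tokens": 200,
            "frequency_penalty": 0.3, "presence_penalty": 0.0},
    "blog_post": {"temperature": 0.7, "max_tokens": 2000,
                  "frequency_penalty": 0.7, "presence_penalty": 0.5},
    "ad": {"temperature": 1.3, "max_tokens": 300,
           "frequency_penalty": 1.0, "presence_penalty": 0.8},
}

def settings_for(task):
    # Unknown tasks fall back to general blog-post settings
    return TASK_SETTINGS.get(task, TASK_SETTINGS["blog_post"])

print(settings_for("faq")["temperature"])  # 0.2
```

The returned dict can be unpacked straight into the API call, e.g. `client.chat.completions.create(model=..., messages=..., **settings_for("faq"))`.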

Temperature settings for ASCN.AI Crypto-Academy projects were set at around 0.8 for beginners to make the language more accessible, and at 0.3 for expert projects — for precision and strictness.

Testing and Debugging

Primary testing steps for a chatbot:

  • Unit Testing: prompt logic; API error handling; answer parsing;
  • Integration Testing: full request-response cycles; integration with messaging platforms;
  • Quality Assessment: checking relevance, grammar and general tone of voice.

Recommended tools for monitoring:

  • Log all requests and responses;
  • Metrics: response times, error rate, quality;
  • Prometheus + Grafana for visualisation; Sentry for tracking errors; OpenAI Usage Dashboard;
  • A/B testing for multiple prompt versions and settings.

In the Arbitrage Scanner project, we implemented retry logic for API requests, which increased the rate of successful responses by 98%.
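A sketch of the retry idea, assuming exponential backoff between attempts. Here `flaky` simulates an API that fails twice before succeeding:

```python
import time

def with_retries(call, attempts=3, base_delay=0.01):
    # Retry `call` on any exception, doubling the delay each attempt
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)

state = {"tries": 0}
def flaky():
    state["tries"] += 1
    if state["tries"] < 3:
        raise RuntimeError("transient API error")
    return "ok"

print(with_retries(flaky))  # ok
```

In production, retry only on transient errors (timeouts, rate limits), not on invalid-request errors that will fail identically every time.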

Use Cases and Case Studies

Automated Ad Text Creator

Case — QuickShock: an advertising agency running campaigns in the crypto space, where copywriters were unable to deliver on time and needed to produce 20–30 variations of ad text.

Solution: a GPT bot with prompt structure "hook + pain + solution + CTA" and Google Sheets integration. The manager inputs source data, and the bot returns 10 variants in 20 seconds — copywriters then pick the best ones and perfect them.

Outcomes: 5x faster than before, cost reduced from $50 to $10 per creative, conversion rates maintained after manual editing.

Tip: develop a catalogue of prompts based on ad format and experiment with temperature settings (1.2–1.5 works well for advertising).

Using an AI Assistant for Content Marketing

Case — ASCN.AI: analytical blog focusing on cryptocurrencies. The AI agent generates ideas, creates templates, and drafts articles.

Method: a no-code AI agent with the prompt "You are a crypto expert, create a template with H2/H3 and examples." The editor enters a topic, receives a ready template, then builds out the body content.

Results: time to prepare decreased from 3 hours to 30 minutes; templates are now more logical; editors spend more time fact-checking.

Generating News and Blog Posts with an AI Agent

Case — crypto media: automated creation of short news from Telegram channels and RSS feeds.

Method: connect RSS and Telegram APIs to a workflow; an AI agent analyses incoming news and generates a 150–200 word note (3 paragraphs) for an editor to review.

Results: time to publication decreased from 4 hours to 15 minutes, 25% increase in audience reach, editors spend more time on analysis.

Real-world examples: The Washington Post's AI bot Heliograf has published over 850 articles without human journalists; Reuters cut financial report turnaround time threefold.

Best Practices and Tips to Improve Generation Quality


Model Parameter Settings

Parameter settings significantly influence generation quality:

  • Temperature: 0.0–0.3 for factual and technical texts; 0.5–0.8 for blog articles; 1.0–1.5 for creative content;
  • Max Tokens: 1 token ≈ 0.75–1 word; 150 tokens ≈ 110–150 words; 1500–2000 tokens for long-form articles;
  • Frequency Penalty (0–1.0): reduces repeated content in your text;
  • Presence Penalty: increases variety of themes — especially useful for longer articles.

Ways to adapt a pre-trained model to your needs:

  • Prompt engineering: build a system prompt with detailed instructions and few-shot examples;
  • RAG (retrieval-augmented generation): a vector knowledge base feeds relevant fragments into the prompt;
  • Fine-tuning: retrain the model on domain-specific data.

On ASCN.AI we chose not to fine-tune but instead created a single on-chain database of all generated content — allowing us to write fresh content in real time. More cost-effective, and it works.
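A toy version of the RAG step, with simple word overlap standing in for vector search. Production systems use embeddings and a vector database such as Pinecone or Weaviate instead of this heuristic:

```python
import re

def tokens(text):
    # Lowercase word set, punctuation stripped
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, fragments):
    # Pick the fragment sharing the most words with the query
    return max(fragments, key=lambda frag: len(tokens(query) & tokens(frag)))

def build_prompt(query, fragments):
    # Feed the retrieved fragment into the prompt as grounding context
    context = retrieve(query, fragments)
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "ASCN.AI indexes the Ethereum and Solana chains.",
    "The subscription starts at $29 per month.",
]
print(build_prompt("Which chains does ASCN.AI index?", kb))
```

The key point is the shape of the final prompt: instructions, retrieved context, then the question — the model answers from your data rather than from its general training.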

Controlling Quality and Filtering Results

How to avoid unwanted content:

  • Automatic toxicity detection using OpenAI Moderation API and Perspective API; plagiarism and grammar checks;
  • Manual review of the first generations, building an error database for future reference;
  • Human-in-the-loop — human participation in reviewing and correcting prompts.

The OpenAI Moderation API significantly reduces the volume of potentially dangerous or unacceptable responses generated.
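In the openai SDK the check itself is a call to `client.moderations.create(input=text)`; the sketch below only shows how to act on the `flagged` field, using a plain dict with the same shape as the API response:

```python
def is_safe(moderation_result):
    # The Moderation API marks unacceptable input with flagged=True
    return not moderation_result["results"][0]["flagged"]

# Abridged example of the response shape
sample = {"results": [{"flagged": False, "categories": {"hate": False}}]}
print(is_safe(sample))  # True
```

Run this gate on the user's input before prompting the model, and optionally on the generated text before sending it back.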


Frequently Asked Questions (FAQ)

Security and Ethics of AI Chatbots

  • Confidentiality: never pass personal data without permission; use local models for sensitive information;
  • Prompt injection: filter all input, limit its length and validate commands;
  • Access control: manage access to the internet and your databases carefully.
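A minimal input guard along these lines, capping length and rejecting obvious instruction-override patterns. The pattern list is illustrative; real filters are much broader:

```python
import re

# Crude markers of prompt-injection attempts (illustrative, not exhaustive)
BLOCKLIST = re.compile(r"ignore\b.*\binstructions|system prompt", re.I)

def sanitize(user_input, max_len=2000):
    # Cap length first, then scan for override patterns
    text = user_input.strip()[:max_len]
    if BLOCKLIST.search(text):
        raise ValueError("possible prompt injection")
    return text

print(sanitize("Write a welcome email"))  # Write a welcome email
```

Blocklists alone are easy to evade, so combine this with a strict system prompt and limits on what the bot's tools can actually do.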

Don't let the bot run up costs unchecked — implement rate limiting. Be transparent: always let users know they're talking to a bot. Make sure responses are free of discrimination. And remember — AI is a tool to enhance professionals, not replace them.
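Rate limiting can be as simple as a sliding window per user. This hypothetical helper allows at most `limit` requests per `window` seconds:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)  # user_id -> recent request times

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[user_id]
        # Drop timestamps that have left the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

rl = RateLimiter(limit=2, window=60)
print(rl.allow("u1", now=0.0), rl.allow("u1", now=1.0), rl.allow("u1", now=2.0))
# True True False
```

Reject or queue requests when `allow` returns False — a single abusive user can otherwise burn through your entire API budget.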

Guaranteeing Uniqueness and Avoiding Errors

  • Check uniqueness using Copyscape, Advego, or Text.ru;
  • A temperature above 0.8 reduces the chance of verbatim content generation;
  • Always include a specific instruction for unique content in your prompt;
  • Check grammar and facts;
  • Monitor tone of voice and logical flow;
  • For example, in the crypto blog we added a rule to use only information from our own knowledge base.

Integrations with Messengers and Services

  • Telegram: create a bot via @BotFather, get your token, use python-telegram-bot or node-telegram-bot-api;
  • WhatsApp: via Twilio or the WhatsApp Business API;
  • Discord: Discord Developer Portal, discord.py or discord.js;
  • No-code integrations via ASCN.AI and similar platforms.

In ASCN.AI, for example, the Telegram integration allows fast delivery of token analytics with on-chain metrics.

Final Thoughts and Recommendations for Further Development

Many will tell you that GPT chatbots have a bright future ahead — and honestly, it's hard to argue. Here's where things are heading:

  • Multimodality: future models will combine text, images, audio and video in a single flow;
  • Personalisation via RAG: the bot will have access to your entire company knowledge base and respond with truly unique answers;
  • AI Agents: autonomous agents will handle full task chains — data collection, analysis, reporting and distribution, all without human involvement;
  • Local models: growing interest in self-hosted solutions for privacy and regulatory compliance;
  • Voice assistants: GPT integration with TTS will be a natural next step for many use cases.

Keep an eye on GPT-5, Claude releases, and Gemini Ultra. Try open-source alternatives. And teach your team to treat AI like an editor and assistant — not a magic button.

Get ready-made automations now
Our ready-made automation marketplace offers around 149 automations. 100+ solutions have been assembled, configured, and are ready to use. Get access to automations such as Content Factories, Premium Chatbots, Automated Sales Funnels, SEO Article Generators, and more with an ASCN.AI subscription.
Try for free