

Do you want to automate your writing process? If so, you've probably spent hours reading articles that offer only bits and pieces without any real direction. We've been there ourselves: over eight years of working with AI and automation, we have tested 43 different approaches to building text bots, from simple Python scripts to fully fledged GPT-4-based AI agents. One lesson stands out: building a quality text generation chatbot, whichever approach you choose, has little to do with whatever is trending this month and everything to do with understanding what the bot is being asked to do, designing its architecture, and combining disparate tools into a cohesive product that delivers value to the user. In this post, we'll show you how to build your own AI text bot that produces high-quality results for real business needs instead of meaningless "fluff."

A text generation chatbot is an LLM-based program that produces meaningful, original text in response to user input. Unlike rigid templates, which can only fill in text when given specific fields (e.g., "please send me an email"), the bot understands the user's input, turns it into context, and produces a human-like response. Typical uses include copywriting (creating content to promote products or services), personalising messages, and handling customer requests with a contextual understanding of their meaning.
The main technology powering content bots is the large language model (LLM). Content bots are capable of generating written or digital content that is ready to publish, so while both types of bots answer questions, the gap between what they deliver and what a traditional Q&A bot delivers is significant.
GPT (Generative Pre-trained Transformer) models enable high-quality content creation because they have learned structure, style, tone, and organisation from billions of examples across many categories. ChatGPT use among marketing teams grew by 340% in 2023, a significant acceleration in LLM-based content creation. AI has cut content drafting time by more than 10x, although much AI-generated content still needs manual editing for accuracy and brand tone.
⚠️ Disclaimer: The 340% growth figure is cited by the authors — independent verification of this specific number is difficult. AI adoption in marketing has been confirmed by many studies, but we recommend checking the original source yourself.
The benefits of implementing AI in the process of creating text:
In a project like the ASCN.AI crypto analytics site, an AI assistant can answer complicated questions about tokens within 10 seconds, based on an analysis of blockchain data. LLMs that rely on a web search cannot do this: the ASCN.AI team spent two years indexing the Ethereum and Solana chains so that users could get up-to-date on-chain answers. The result: instead of collecting data from 20+ different sources, users receive a complete, organised picture on the spot.
⚠️ Pricing disclaimer: API pricing changes frequently. The figures in the table below were current at the time of writing — always check official provider pages before making decisions.
| Platform | Type | Advantages | Disadvantages | Cost |
|---|---|---|---|---|
| OpenAI GPT-3/4 | Closed API | High-quality text, large context window (128K tokens) | Expensive and dependent on the provider | Starts at $0.002 for 1K tokens (GPT-3.5) |
| DialoGPT | Open-source | No cost and complete control over your model | Requires powerful GPUs; quality lower than GPT-4 | Free |
| Claude (Anthropic) | Closed API | Large context window (up to 200K tokens and beyond), enhanced safety | Limited creativity compared to GPT-4 | Starts at $0.001 for 1K tokens |
| LLaMA 2 (Meta) | Open-source | Free to use and can be fine-tuned for a specific task | Requires own hardware and strong technical expertise | Free |
| ASCN.AI NoCode | No-code platform | No programming knowledge necessary, pre-built workflow and integrations | Limited compared to fully customised solutions | Free tier; from $29/month |
Remember: before choosing a platform, you must first identify what you need to achieve. High-quality email newsletters are fine for GPT-3.5, but legal document content should be produced via GPT-4 and fact-checked in accordance with your organisation's standards.
The full system has four tiers of components:
- Interface Tier — the chat UI or messenger through which the user talks to the bot;
- Logic Tier — builds the prompt and manages context;
- Model Tier — the LLM that generates the response;
- Data Tier — the knowledge store (e.g., a vector database) that supplies your organisation's specifics.
The bot's workflow: the user sends a message → the interface forwards it to the Logic Tier → the Logic Tier creates an appropriate prompt with context → the LLM generates a response → the response is passed back to the user.
Without a Data Tier, the bot cannot consider your specifics and will provide a general response. By integrating with Vector Databases, the bot can significantly enhance the quality and accuracy of responses, allowing it to quickly access key information related to your organisation or industry.
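The Data Tier idea can be sketched in a few lines: retrieve the snippet of company knowledge closest to the query and inject it into the prompt. In production the vectors would come from an embedding model and live in a vector database such as Pinecone or Weaviate; the toy three-dimensional vectors and documents below are purely illustrative.

```python
import math

# Toy "knowledge base": (text, embedding) pairs. Real embeddings have
# hundreds of dimensions and are produced by an embedding model.
DOCS = [
    ("Our refund window is 30 days.",       [0.9, 0.1, 0.0]),
    ("Support is available 24/7 via chat.", [0.1, 0.9, 0.0]),
    ("We ship to 40 countries worldwide.",  [0.0, 0.1, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, docs=DOCS):
    """Return the document text closest to the query embedding."""
    return max(docs, key=lambda d: cosine(query_vec, d[1]))[0]

def build_prompt(question, query_vec):
    """Logic Tier step: prepend retrieved context to the user's question."""
    return f"Context: {retrieve(query_vec)}\n\nQuestion: {question}"

print(build_prompt("When can I return an item?", [0.8, 0.2, 0.1]))
```

The same pattern scales up unchanged: swap the list for a vector database client and the toy vectors for real embeddings.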
To get started with the OpenAI API, register at platform.openai.com and obtain an API key. You will also need to install the Python library:

```shell
pip install openai
```
Below is a basic example of a request using OpenAI's API (the openai Python SDK v1+ style):

```python
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY')  # better: read the key from an env variable

response = client.chat.completions.create(
    model='gpt-4',
    messages=[
        {'role': 'system', 'content': 'You are a copywriter for email newsletters.'},
        {'role': 'user', 'content': 'Write a welcome email to a new client'}
    ],
    max_tokens=500,
    temperature=0.7
)
print(response.choices[0].message.content)
```
Key parameters:
- model — the version of the AI model being used;
- temperature (0.0 to 2.0) — creativity level: 0 gives predictable, deterministic output, while 1.5 is highly creative;
- max_tokens — the maximum length of the generated text.

Platforms like ASCN.AI let you build flows for working with the API without writing code, using pre-built workflow blocks to create visual scripts that manage and execute API requests.
There are three stages to request processing:
Ways to improve the process:
In one of my clients' crypto projects, the API took up to 15 seconds to return long texts. Their solution? Show a short answer immediately and generate the long answer asynchronously, sending a link to the completed document. This tripled perceived response speed.
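The same "acknowledge now, deliver later" pattern can be sketched with Python's threading module. Here `generate_long_text` is a stand-in for the slow LLM call, and the document link it returns is a hypothetical placeholder.

```python
import threading
import time

def generate_long_text(topic, on_done):
    """Stand-in for a slow LLM call (the real one may take ~15 s)."""
    time.sleep(0.1)  # simulate API latency
    on_done(f"Full article about {topic} is ready (hypothetical link: /docs/{topic})")

def handle_request(topic, send):
    # 1. Reply instantly with a short acknowledgement.
    send(f"Working on '{topic}' — a link to the full text will follow.")
    # 2. Generate the long answer in a background thread.
    worker = threading.Thread(target=generate_long_text, args=(topic, send))
    worker.start()
    return worker

messages = []
worker = handle_request("tokenomics", messages.append)
worker.join()          # in a real bot you would not block here
print(messages)
```

In a real deployment `send` would post to the chat via the messenger API, and a task queue (e.g., Celery) would replace the bare thread.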
The most important factors are quality, budget, and security:
A hybrid model combining GPT-3.5 with manual edits has been found to result in a 70% cost reduction compared to GPT-4 while still providing acceptable quality and speed.
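One way to implement such a hybrid is a simple router: the cheap model by default, GPT-4 only for task types or lengths that justify the cost. The task labels and thresholds below are illustrative assumptions, not fixed rules.

```python
# Task types that justify the more expensive model; everything else goes
# to the cheaper one and is polished by a human editor afterwards.
HIGH_STAKES = {"legal", "medical", "finance_report"}

def pick_model(task_type: str, target_length: int) -> str:
    """Route a request to a model tier (illustrative thresholds)."""
    if task_type in HIGH_STAKES or target_length > 2000:
        return "gpt-4"
    return "gpt-3.5-turbo"

print(pick_model("email", 400))   # routine copy -> cheap tier
print(pick_model("legal", 400))   # high-stakes -> expensive tier
```

The returned model name can be passed straight into the `model` parameter of the API request.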
Here is a basic example of a Telegram bot in Python (using the pyTelegramBotAPI library) connected to the OpenAI API:

```python
import telebot
from openai import OpenAI

bot = telebot.TeleBot("TELEGRAM_TOKEN")
client = OpenAI(api_key="OPENAI_API_KEY")

@bot.message_handler(commands=['start'])
def start(message):
    bot.send_message(message.chat.id,
                     "Hello! Send me a request and I'll create a text for you.")

@bot.message_handler(func=lambda message: True)
def generate_text(message):
    # Forward the user's message to the LLM with a copywriter system prompt.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a copywriter for businesses."},
            {"role": "user", "content": message.text}
        ],
        max_tokens=300,
        temperature=0.8
    )
    bot.send_message(message.chat.id, response.choices[0].message.content)

bot.polling()
```
You can also implement a similar bot in Node.js to handle a greater volume of requests. The basic skeleton is ready — from here you tune it to your specific task.
OpenAI also lets you fine-tune a model through its platform.
You can fine-tune local models using Hugging Face and a GPU with at least 24 GB of VRAM. A simpler and often equally effective approach is prompt engineering: writing clear instructions and providing several examples for the model to imitate, while pulling knowledge from vector databases such as Pinecone or Weaviate.
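The core of the prompt-engineering approach is a few-shot prompt: instructions first, then worked examples, then the actual query. A minimal builder might look like this (the example pairs are placeholders):

```python
def build_few_shot(system, examples, user_input):
    """Assemble a chat `messages` list: instructions, worked examples, query."""
    messages = [{"role": "system", "content": system}]
    for prompt, completion in examples:
        messages.append({"role": "user", "content": prompt})
        messages.append({"role": "assistant", "content": completion})
    messages.append({"role": "user", "content": user_input})
    return messages

examples = [
    ("Product: coffee maker", "Wake up to barista-grade coffee at home."),
    ("Product: running shoes", "Run further, land softer."),
]
msgs = build_few_shot("You write one-line ad hooks.", examples, "Product: e-bike")
print(len(msgs))  # system + 2 examples x 2 turns + final user = 6
```

The resulting list goes straight into the `messages` parameter of a chat completion request; the model imitates the style of the example completions.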
Here are standard model settings for specific tasks:
| Task | Temperature | Max Tokens | Frequency Penalty | Presence Penalty |
|---|---|---|---|---|
| FAQ Bot | 0.2 | 200 | 0.3 | 0.0 |
| Blog Post | 0.7 | 2000 | 0.7 | 0.5 |
| Creative Advertisements | 1.3 | 300 | 1.0 | 0.8 |
| Analytical Reports | 0.4 | 3000 | 0.5 | 0.3 |
| Email Newsletters | 0.8 | 500 | 0.8 | 0.5 |
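The presets in the table above can live in code as a small config dict that feeds the request's keyword arguments. The task keys here are our own shorthand, not API values.

```python
# Per-task generation presets, mirroring the table above.
PRESETS = {
    "faq":        {"temperature": 0.2, "max_tokens": 200,  "frequency_penalty": 0.3, "presence_penalty": 0.0},
    "blog":       {"temperature": 0.7, "max_tokens": 2000, "frequency_penalty": 0.7, "presence_penalty": 0.5},
    "ads":        {"temperature": 1.3, "max_tokens": 300,  "frequency_penalty": 1.0, "presence_penalty": 0.8},
    "report":     {"temperature": 0.4, "max_tokens": 3000, "frequency_penalty": 0.5, "presence_penalty": 0.3},
    "newsletter": {"temperature": 0.8, "max_tokens": 500,  "frequency_penalty": 0.8, "presence_penalty": 0.5},
}

def request_kwargs(task, model="gpt-3.5-turbo"):
    """Merge a task preset into the kwargs for a chat completion request."""
    return {"model": model, **PRESETS[task]}

print(request_kwargs("ads"))
```

Keeping the presets in one place makes A/B-testing parameter changes a one-line edit instead of a hunt through handler code.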
Temperature settings for ASCN.AI Crypto-Academy projects were set at around 0.8 for beginners to make the language more accessible, and at 0.3 for expert projects — for precision and strictness.
Primary testing steps for a chatbot:
Recommended tools for monitoring:
In the Arbitrage Scanner project, we implemented retry logic for API requests, which raised the rate of successful responses to 98%.
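Retry logic of this kind is usually a small wrapper with exponential backoff: retry the call a few times, waiting longer after each failure. A minimal sketch, demonstrated here with a deliberately flaky stand-in for the API call:

```python
import time

def with_retries(call, attempts=4, base_delay=0.1):
    """Retry a flaky call with exponential backoff; re-raise after the last try."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)  # 0.1 s, 0.2 s, 0.4 s, ...

# Demo: a function that fails twice, then succeeds on the third attempt.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient API error")
    return "ok"

print(with_retries(flaky))  # "ok" on the third attempt
```

In production you would catch only transient errors (rate limits, timeouts) and respect any `Retry-After` header the API returns.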
Case — QuickShock: an advertising agency running campaigns in the crypto space, where copywriters were unable to deliver on time and needed to produce 20–30 variations of ad text.
Solution: a GPT bot with prompt structure "hook + pain + solution + CTA" and Google Sheets integration. The manager inputs source data, and the bot returns 10 variants in 20 seconds — copywriters then pick the best ones and perfect them.
Outcomes: 5x faster than before, cost reduced from $50 to $10 per creative, conversion rates maintained after manual editing.
Tip: develop a catalogue of prompts based on ad format and experiment with temperature settings (1.2–1.5 works well for advertising).
Case — ASCN.AI: analytical blog focusing on cryptocurrencies. The AI agent generates ideas, creates templates, and drafts articles.
Method: a no-code AI agent with the prompt "You are a crypto expert, create a template with H2/H3 and examples." The editor enters a topic, receives a ready template, then builds out the body content.
Results: time to prepare decreased from 3 hours to 30 minutes; templates are now more logical; editors spend more time fact-checking.
Case — crypto media: automated creation of short news from Telegram channels and RSS feeds.
Method: connect RSS and Telegram APIs to a workflow; an AI agent analyses incoming news and generates a 150–200 word note (3 paragraphs) for an editor to review.
Results: time to publication decreased from 4 hours to 15 minutes, 25% increase in audience reach, editors spend more time on analysis.
Real-world examples: The Washington Post's AI bot Heliograf has published over 850 articles without human journalists; Reuters cut financial-report turnaround time to a third.

Parameter settings significantly influence generation quality:
Ways to adapt a pre-trained model to your needs:
On ASCN.AI we chose not to fine-tune but instead created a single on-chain database of all generated content — allowing us to write fresh content in real time. More cost-effective, and it works.
How to avoid unwanted content:
The OpenAI Moderation API significantly reduces the volume of potentially dangerous or unacceptable responses generated.
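A cheap local pre-filter can run before the Moderation API call (in the v1 SDK that call is `client.moderations.create(input=...)`), catching obvious cases without spending a request. The blocklist below is a toy assumption; real filters are far richer.

```python
BLOCKLIST = {"scam", "exploit"}  # toy list for illustration only

def prefilter(text: str):
    """Fast local check before the Moderation API. Returns (allowed, reason)."""
    hits = [w for w in sorted(BLOCKLIST) if w in text.lower()]
    if hits:
        return False, f"blocked terms: {', '.join(hits)}"
    return True, "ok"

print(prefilter("Write a welcome email"))
print(prefilter("Write a SCAM pitch"))
```

Requests that pass the pre-filter still go through the Moderation API; the two layers together keep both costs and risk down.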
Don't let the bot run up costs unchecked — implement rate limiting. Be transparent: always let users know they're talking to a bot. Make sure responses are free of discrimination. And remember — AI is a tool to enhance professionals, not replace them.
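Rate limiting can be as simple as a sliding window per user: remember recent request timestamps and refuse once the window is full. A minimal sketch (the limit and window values are illustrative):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per user within `window` seconds."""
    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[user_id]
        while q and now - q[0] > self.window:
            q.popleft()              # drop requests outside the window
        if len(q) >= self.limit:
            return False             # over the limit: refuse the request
        q.append(now)
        return True

rl = RateLimiter(limit=2, window=60)
print([rl.allow("alice", now=t) for t in (0, 1, 2)])  # [True, True, False]
```

In the Telegram bot above, a `rl.allow(message.chat.id)` check at the top of the handler is enough to stop one user from running up your API bill.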
In ASCN.AI, for example, the Telegram integration allows fast delivery of token analytics with on-chain metrics.
Many will tell you that GPT chatbots have a bright future ahead — and honestly, it's hard to argue. Here's where things are heading:
Keep an eye on GPT-5, Claude releases, and Gemini Ultra. Try open-source alternatives. And teach your team to treat AI like an editor and assistant — not a magic button.