
AI Agent Web Search Using SearchAPI & LLM: A Complete Guide

Standard Large Language Models are limited by their training data cut-off dates, making them unreliable for real-time market analysis or current events. To build a truly intelligent AI agent, you need to bridge the gap between "stagnant knowledge" and "real-time data." In this comprehensive guide, we explore the architecture of AI agents that utilize SearchAPI and LLMs, provide actionable Python code examples, and explain how to optimize performance and ensure data privacy in your AI stack.

Author: James
Last update: 19 March 2026
Categories: Turnkey

Web search capability has evolved from a nice-to-have into an essential part of any AI application that must work with current information rather than stale knowledge from the past. For example, suppose you built an AI-assisted agent to support a trader and used ChatGPT as your only source of information. It could not tell you what is happening with Bitcoin right now, because its training data only extends to its cut-off date (April 2023 for the models discussed here). If you instead built your agent on SearchAPI and a Large Language Model (LLM), it could retrieve the most current data from trusted sites (e.g., CoinDesk for news), pull the latest on-chain metrics (i.e., transaction history), and assemble the most current report possible.

ASCN.AI has followed the evolution of SearchAPI and LLMs for more than two years, working through various iterations until we developed our own customized SearchAPI/LLM combination built on APIs. During this time we have indexed thousands of blockchain transactions, built several types of models, and concluded that every "standard" solution ultimately has to become non-standard. To achieve the best speed, accuracy, and flexibility in analysis, you need to combine SearchAPI with an LLM. That combination is the foundation of the ASCN Crypto Assistant, which processes a query in roughly 10 seconds instead of the hour of manual work it takes to analyze a trade.

The following sections describe how AI agents use web search, explain the benefits of SearchAPI and direct API access over the built-in web search tools offered by various platforms, and walk step by step through building an AI agent that performs web searches using SearchAPI and an LLM.


An AI Agent with Web Search Capability


An AI agent with web search capability decides on its own how to find the data it needs. Rather than following a fixed sequence of information-gathering steps, it has some autonomy in choosing which paths to pursue when answering a query.

Web search results are continuously refreshed as new content appears every day. A Large Language Model (LLM), by contrast, can only draw on what it was trained on, and its knowledge stops at its training cut-off (April 2023 in this example). Ask an LLM for today's dollar-to-euro exchange rate and it will either guess a number or admit it doesn't know, because exchange rates after its cut-off were never part of its training data.

ASCN.AI ran into the same limits when we began working with ChatGPT: asked about tokens launched the previous week, it did not recognize them. We therefore connected Ethereum and Solana blockchain nodes to index on-chain data through our API, and integrated search APIs for news aggregators and social networks. Now we can assemble, in about 10 seconds, a picture of the rate, trading volume, whale activity, and Telegram chatter. Connecting web search turns the agent from a calculator over historical information into an active, real-time decision-making tool.


Main Tasks and Benefits of AI Agents with Web Search

An AI agent with a web-search capability will perform three types of tasks.

  • Informational tasks: gathering information from many sources (news, exchange rates, analytics, etc.). Instead of opening and reading multiple tabs, you get a summary of all that information in about a minute.
  • Analytical tasks: analyzing text data, determining the sentiment of news, identifying positive or negative signals for growth or decline, and generating a hypothesis. When people ask, "Why did token X rise so quickly?", the agent gathers information announced across several channels, including social media, news outlets, and on-chain records of all transactions related to that token.
  • Operational tasks: the agent acts on the data it has gathered, e.g., monitoring price thresholds, alerting on potential attacks or rug pulls, and notifying the user or executing the actions it has identified.

Overall, the benefits are as follows:

  • Speed — seconds instead of hours.
  • Relevance — data is fresh, often only minutes old.
  • Completeness — the answer draws on multiple different sources.
  • Scalability — hundreds of requests can be processed in parallel.

ASCN.AI has created an agent that collects data on a token's tokenomics, its founders and team, any completed audit reports, social media and other communication about the token, and on-chain tracking data. The agent does this in 10–30 seconds, compared with the 2–3 hours it takes a person to do the same research manually. ASCN.AI clients use this solution for scoring tokens and for fast buy/sell decisions (scalping), where reaction speed is a competitive advantage.


Overview of SearchAPI and LLM (Large Language Models)

A SearchAPI provides programmatic access to search engines. Instead of searching manually, the application sends an HTTP request with the query, and the SearchAPI returns a structured JSON response containing the relevant headlines, snippets, and URLs, which the LLM then processes.

The SearchAPI makes it easy to collect large amounts of content automatically without parsing HTML, solving CAPTCHAs, or getting blocked. You pay per request and in return receive clean, reliable data much faster than you would with a custom scraper.

The LLM is a neural network such as GPT-4, Anthropic Claude, or LLaMA that has been trained on an extensive library of text documents. With this type of model, you can generate text based on specific prompts, answer questions, summarise articles, and classify content. The LLM in the ASCN.AI agent is the "brain" of the agent, processing the collected data from the SearchAPI to provide users with meaningful responses.

The SearchAPI + LLM connection works as follows:

  1. User submits a request
  2. The agent creates a search query for the SearchAPI
  3. SearchAPI returns structured results
  4. LLM processes the results, extracts key information and builds a response
  5. The user receives a response

ASCN.AI uses proprietary machine learning technology trained on Web3 data, mainly around cryptocurrency and related technologies. We have also extended search beyond public APIs to private sources such as Telegram and forum data and our own blockchain nodes. The agent does not just summarize; it delivers in-depth insights from its analysis of the most relevant market data.

Simply put, the SearchAPI is a means of accessing information from other sources, while the LLM is able to process that information and create actionable outputs for users.

Web Search as a Key Tool in AI System Architecture

Web search tools are key components of an AI system's architecture. They send requests to external sources and retrieve data for later use. A web search tool may be a SearchAPI that connects to news sites, databases, or blockchain nodes directly through HTTP requests.

Their main purpose is to let an LLM reach beyond its fixed training data and find new information continuously, giving every retrieved data point a context of "now" rather than "then".

Examples of common types of sources used in web search tools:

  • Search Engine APIs from Google, Bing, or DuckDuckGo via SearchAPI
  • Specialized APIs like news aggregator sites or Financial Platforms
  • Internal databases holding indexed information relevant to the type of request

ASCN.AI takes a hybrid approach: general inquiries go through Google and Bing via SearchAPI, while cryptocurrency-specific inquiries use ASCN.AI's own Ethereum and Solana nodes, fund databases, and Telegram parsing technology. To pick the right source for a particular query, the agent first analyzes the type of question being asked.

For example, if an individual were to ask, "What is the current BTC price?", the agent would send the request directly to the exchange API to get a real-time BTC price. However, if an individual were to ask, "Why did BTC prices fall?", then the agent would gather information from three different types of data sources (news sites, social media sites, and on-chain analytical sources) all at the same time.
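This routing decision can be sketched with a few illustrative rules (a minimal sketch; a production agent would more likely let the LLM classify the question, and the category names here are assumptions, not ASCN.AI's actual labels):

```python
import re

def classify_query(question: str) -> str:
    """Pick a data-source category based on the shape of the question."""
    q = question.lower()
    # Real-time lookups ("current X price") go straight to an exchange API.
    if re.search(r"\b(current|now|today)\b.*\bprice\b|\bprice\b.*\b(now|today)\b", q):
        return "exchange_api"
    # Causal questions need news + social + on-chain data gathered in parallel.
    if q.startswith(("why", "what caused", "what is the reason")):
        return "multi_source"
    # Everything else falls back to a generic web search.
    return "web_search"
```

For instance, `classify_query("What is the current BTC price?")` routes to the exchange API, while `classify_query("Why did BTC prices fall?")` triggers the multi-source path.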

How the Web Search Process Works


  1. Determine whether external information is needed: The LLM determines whether to use the web to find additional information. For simple queries, the response will come back immediately, whereas more complex queries will first require web searching.
  2. Create a query for the web searching agent: The agent creates the optimal web search query for the question being asked. For instance, if the question were, "What is the reason for the increase in SOL price?", the agent's web search query would become: "SOL price increase reason December 2024".
  3. Request to perform a web search via the API: The agent takes the query generated by the web search agent and submits it to the SearchAPI (and other sources) for the structured results based on that specific query.
  4. Filter out unwanted items from the search results: The web search agent will filter out any advertisements, duplicate results, or results from sources that are outdated, as well as sort the remaining results according to their relevance to the query and recency of posting.
  5. Extract the facts from the search results: The LLM examines the results and identifies the snippets containing the most pertinent facts, following links where needed to gather additional related data.
  6. Create an answer to the query: The agent compiles a complete answer, citing every source used and its publication date.
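The filtering and ranking in step 4 can be sketched as follows (a minimal illustration; the field names mirror a typical SearchAPI result but are assumptions, not any specific provider's schema):

```python
from datetime import date

def filter_and_rank(results, today=date(2024, 12, 1), max_age_days=7):
    """Drop ads, duplicate URLs, and stale items, then sort newest-first.

    `results` is a list of dicts shaped like
    {"link": str, "snippet": str, "date": date, "is_ad": bool}.
    """
    seen, kept = set(), []
    for r in results:
        if r.get("is_ad"):
            continue                          # drop advertisements
        if r["link"] in seen:
            continue                          # drop duplicate results
        if (today - r["date"]).days > max_age_days:
            continue                          # drop outdated sources
        seen.add(r["link"])
        kept.append(r)
    # Rank the remainder by recency of posting.
    return sorted(kept, key=lambda r: r["date"], reverse=True)
```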

At ASCN.AI, we have implemented an additional process whereby the agent can evaluate the validity of the publications and compare the information provided in the publications for inconsistencies. If discrepancies are found, the agent will indicate that the information presented in the publications does not definitively confirm the cause of the price rise or fall.


The Role of the Web Search API

The Web Search API is a programming interface that takes a search query from an agent and returns structured results in an HTTP response. Instead of searching manually and copy-pasting links from various sources for each article of interest, you submit a request to the API and receive JSON containing all the current headlines, snippets, and URLs.

The major API providers are Google Custom Search API and Bing Web Search API, along with aggregators such as SerpAPI and ScaleSerp. They handle the problems of collecting data yourself: changing page layouts, CAPTCHAs, usage limits, and so on.

For an AI agent, the API is its source of current data. The LLM never goes to the internet directly; the API fetches data on its behalf. The agent builds the query, receives the results, and hands them to the LLM for analysis.

At ASCN.AI, we use a Web Search API for general information and specialized APIs for cryptocurrencies and blockchain. This lets us deliver faster, more accurate answers.

Why Would You Want To Use A Web Search API Instead Of Your Platform's Built-in Tools?

While the platform's built-in search tools may be convenient for initial look-up purposes, they all have inherent limitations:

  • Lack of control: you cannot see the query parameters or the sites being used, and you cannot customize filters or prioritize particular queries.
  • Limited customization: you cannot add your own sources or change how the logic works.
  • Platform dependence: if the platform changes its policies, your results can suffer.
  • Lower efficiency: built-in search gives you no access to caching of previous searches, which costs both time and money.

By using a Web Search API, you gain total control over the information you receive: you choose your sources, build your queries, filter your results, cache your data, and add your own logic on top of the results.

After migrating to our own API stack, ASCN.AI cut the average response time from 30 seconds to 10, improved accuracy and reliability, and reduced costs through cache storage. If you are serious about building an AI product, this is the best way to get high-quality, efficient results on your own infrastructure.


Advantages of SearchAPI Integration with LLM

Control over Your Search Logic and Workflow

With direct API access to the SearchAPI, you can manage in detail: how queries are built, how many sources are included in a search, how results are filtered and ranked, and how data from previous searches is cached and reused.

For example, when ASCN.AI answers questions such as "Why is token X priced lower now than at its opening?", we run parallel queries across multiple news outlets, Twitter, and Telegram, alongside an analysis of on-chain activity via our own nodes. This provides the lowest latency for the most relevant, highest-quality information.

Consistent and Repeatable Results

Built-in search tools often return inconsistent results for the same input query because of hidden settings. The API, by contrast, returns deterministic JSON results, and you can repeat the same query to confirm that results are produced consistently.

ASCN.AI moved from built-in search, which returned unordered result sets, to the API with filtering by publication date. As a result, users now receive relevant news articles and consistent answers.

Flexibility to Use and Combine LLMs

ASCN.AI has complete control over which models are integrated via the API. You can plug in any of OpenAI's models (GPT-4, GPT-3.5), Anthropic Claude, open-source LLaMA, or Mistral, making it easy to find the right price-to-performance balance and speed for a specific task.

ASCN.AI employs a hybrid of three models: GPT-3.5 for general queries, GPT-4 for more in-depth analyses and proprietary fine-tuned models specifically built for cryptocurrencies, resulting in significant savings and improved quality of results.

Scalability and Cost Effectiveness

There are no third-party intermediaries between you and the SearchAPI, which reduces costs: you pay only for actual API requests and LLM token usage. ASCN.AI additionally caches commonly queried information (such as the BTC price) for a short duration (1 minute) to cut the number of API calls for frequently requested data.
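A short-lived cache like the 1-minute BTC-price cache described above can be sketched in a few lines (a simplified illustration; `fetcher` stands in for a real exchange-API client):

```python
import time

class TTLCache:
    """Cache hot queries for a short window to reduce API calls."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}   # key -> (expiry timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        return None        # missing or expired

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

def fetch_price(symbol, cache, fetcher):
    cached = cache.get(symbol)
    if cached is not None:
        return cached                  # served from cache: no API call
    price = fetcher(symbol)            # real call to the exchange API
    cache.set(symbol, price)
    return price
```

Repeated requests within the TTL window are served from memory, so only the first lookup per minute costs an API call.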

Increased Data Ownership and Visibility

With complete control of your stack, ASCN.AI provides visibility into the following operations:

  • Logging of all requests made to the API
  • Analysis of all data source(s) used in generating each request
  • Tracking of the time taken to execute each request

This makes error analysis and performance optimization fast. ASCN.AI keeps all debugging logs, which also serve as an archive for legal and support purposes in case of disputes.

Customisation and Expansion of Solutions

Workflows can be designed for your specific tasks. Some examples:

  • Legal — Specialised APIs (LexisNexis), filters by jurisdiction, and summaries of relevant precedents.
  • Marketing — Monitoring of mentions in social and print media, analysing sentiment, and producing trend reports.
  • Trading — Processing simultaneous requests across news, social media, and on-chain records, and cross-verifying all three sources.

Sources and algorithms can be configured specifically for your own industry.

Integration with Third-party APIs and Services

  • Internal source databases: CRM, ERP, custom indices
  • Specialised APIs: Alpha Vantage, CoinGecko, Etherscan, NewsAPI
  • Social Network APIs: Telegram, Twitter, Reddit

ASCN.AI has also integrated our own Ethereum and Solana nodes, exchange APIs, news aggregators and social media parsing engines into one platform where each agent selects a source based on its assigned task.

Advanced SearchAPI Features and Settings

  • Date, content type, and domain filters
  • Search operators — specific phrases, exclusions, and OR
  • Regional and Language-based localisation parameters
  • Dedicated Search Engines for News, Videos and Academic Articles

Combining these filters makes it possible to build searches with the greatest level of relevance.
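Combining operators programmatically can be sketched with a small helper (an illustrative function, not part of any SearchAPI SDK; the operator syntax shown is the common Google-style form):

```python
def build_query(phrase=None, all_terms=(), exclude=(), any_of=(), site=None):
    """Compose a search string from common operators: quoted exact phrase,
    OR alternatives, '-' exclusions, and a site: restriction."""
    parts = []
    if phrase:
        parts.append(f'"{phrase}"')          # exact phrase
    parts.extend(all_terms)                  # required terms
    if any_of:
        parts.append("(" + " OR ".join(any_of) + ")")
    parts.extend(f"-{term}" for term in exclude)
    if site:
        parts.append(f"site:{site}")
    return " ".join(parts)
```

For example, `build_query(phrase="SOL price increase", any_of=("reason", "cause"), exclude=("ad",), site="coindesk.com")` yields `"SOL price increase" (reason OR cause) -ad site:coindesk.com`.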


Practical Setup Guide: Getting Started with SearchAPI and LLM

Choose your providers: SearchAPI (SerpAPI, ScaleSerp), LLM models (OpenAI GPT-4, Anthropic Claude, LLaMA). Register, obtain your API keys, and review your rate limits.

Design your architecture: frontend (bot, web), backend (server) that calls the Search API and LLM, returning responses to the user.

Example of simple Python pseudocode (each helper stands for a step you implement yourself):

user_question = get_user_input()
search_query = generate_search_query(user_question)
search_results = call_search_api(search_query)
context = extract_relevant_snippets(search_results)
llm_prompt = f"Question: {user_question}\nContext: {context}\nAnswer:"
answer = call_llm(llm_prompt)
send_answer_to_user(answer)

The idea: one request, one round of processing, one answer. Advanced logic is built on top of this MVP.

Step-by-Step Setup of Web Search for AI Agents

  1. Register and obtain API keys for SearchAPI and LLM.
  2. Set up your development environment and configure environment variables with your keys.
  3. Write a function to call SearchAPI, example for SerpAPI:
import requests

def search_web(query, api_key):
    url = "https://serpapi.com/search"
    params = {
        "q": query,
        "api_key": api_key,
        "engine": "google",
        "num": 10
    }
    response = requests.get(url, params=params)
    return response.json()
  4. Create a function to call the LLM (shown here with the OpenAI 1.x SDK interface; the older openai.ChatCompletion call is deprecated):
import openai

def ask_llm(prompt, api_key):
    client = openai.OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content
  5. Connect search and response generation:
user_question = "Why did Bitcoin fall?"
search_results = search_web(user_question, SEARCH_API_KEY)
snippets = [item['snippet'] for item in search_results.get('organic_results', [])]
context = "\n".join(snippets)
prompt = f"Question: {user_question}\nContext:\n{context}\nAnswer:"
answer = ask_llm(prompt, OPENAI_API_KEY)
print(answer)

At ASCN.AI we use Telegram Bot API for the frontend and Python with FastAPI for the backend. We combine multiple APIs (news, exchanges, on-chain) for maximum completeness.
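Combining several sources into one LLM context, as described above, can be sketched as follows (a simplified illustration; the lambdas in the usage example stand in for real news, exchange, and on-chain API clients):

```python
def gather_context(question, sources):
    """Merge snippets from several independent fetchers into one labelled
    context block for the LLM prompt.

    `sources` maps a label to a callable that takes the question and
    returns a list of text snippets; empty sources are skipped.
    """
    sections = []
    for label, fetch in sources.items():
        snippets = fetch(question)
        if snippets:
            sections.append(f"[{label}]\n" + "\n".join(snippets))
    return "\n\n".join(sections)
```

Usage with stub fetchers: `gather_context("Why did Bitcoin fall?", {"news": lambda q: ["ETF outflows reported"], "exchange": lambda q: ["BTC 24h volume down 12%"]})` produces a labelled context string ready to embed in the prompt.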

How to Add and Assign a Web Search Tool in an AI Agent

If you use LangChain, AutoGPT, n8n or ASCN.AI NoCode, the setup becomes much simpler.

from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper
# Note: these imports use the classic (pre-0.1) LangChain layout; newer
# releases move these classes to langchain_community and langchain_openai.

search = SerpAPIWrapper(serpapi_api_key=SERPAPI_KEY)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Useful for finding current information"
    )
]
llm = OpenAI(temperature=0, openai_api_key=OPENAI_KEY)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
agent.run("Why did Bitcoin fall?")

In ASCN.AI NoCode this is done through the visual Workflow editor by adding HTTP Request nodes for SearchAPI and an AI Agent node with context passing.

Adding Custom Parameters (e.g. Geolocation)

SearchAPI supports additional parameters to refine your search:

  • Geolocation: gl=us (United States)
  • Interface language: hl=en
  • Time filter: tbs=qdr:w (last week)
  • Content type: news, video, images

Example request for US news from the past week:

params = {
    "q": "Bitcoin price",
    "api_key": SERPAPI_KEY,
    "engine": "google",
    "gl": "us",
    "hl": "en",
    "tbs": "qdr:w"
}

At ASCN.AI these parameters help target the right sources and make queries more precise.

Configuring Source Display and Citations

Displaying sources increases trust and legal safety. Options:

  • Inline links (list at the end)
  • Interactive footnotes in the interface
  • Metadata in JSON (answer + list of sources with dates and URLs)

At ASCN.AI we use the third option — the frontend has buttons to navigate directly to sources.
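The third option, answer plus source metadata in JSON, can be sketched like this (an illustrative shape, not ASCN.AI's actual schema; the URL is a placeholder):

```python
import json

def build_response(answer, results):
    """Package the LLM answer together with source titles, URLs, and dates,
    so a frontend can render navigation buttons for each source."""
    return {
        "answer": answer,
        "sources": [
            {"title": r["title"], "url": r["link"], "date": r["date"]}
            for r in results
        ],
    }

payload = build_response(
    "BTC fell on ETF outflow news.",
    [{"title": "Bitcoin slides", "link": "https://example.com/a",
      "date": "2024-12-01"}],
)
print(json.dumps(payload, indent=2))
```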


Use Cases in Various AI Applications

Commonly Encountered Scenarios and Tasks

  • Real-time market data: searching rates, the latest news, and analysing social media engagement.
  • Fact-checking: verifying credibility of reported news.
  • Personalisation: content selected to match users' preferences using both current and historical information.

Detailed Examples of Successful Use Cases

Case 1: Crypto Virtual Assistant ASCN.AI
Issue: Manually analysing tokens takes a significant amount of time — hours.
Solution: Agent with access to on-chain data, current news and social media.
Result: Task completed in 10 seconds instead of several hours. Used for scalping and other strategies. In the first month the agent processed more than 10,000 requests.
Case 2: Legal Company
Issue: Searching for legal precedents manually takes several days.
Solution: Use LexisNexis API to search for precedents using AI agents.
Result: 10 minutes to search for precedents versus several days, with improved accuracy.
Case 3: Marketing Agency
Issue: Manual monitoring of news and social media requires dedicated staff.
Solution: A 24/7 monitoring agent scans news and social media.
Result: Notifications arrive within hours, and dedicated monitoring staff are no longer needed.

Limitations and Recommendations for Use

  • Source quality directly impacts your answer — confirmed sources or cross-verification are better.
  • LLM hallucinations can occur when supporting context is missing — you can mitigate this by adding instructions and checks to your prompt.
  • Cost management requires caching frequently used requests, utilizing cheaper API services where possible, optimizing your queries, and limiting the volume of user queries.
  • Legal risks associated with AI must be minimized by having appropriate disclaimers and maintaining logging capabilities.

Frequently Asked Questions (FAQ)

How to ensure the accuracy and relevance of data?
Use only reliable sources, filter by date range, and cache for short periods. At ASCN.AI, articles older than 7 days are considered outdated, and rates are updated every minute.

What LLMs are best for Web Search?
GPT-3.5 — fast and low-cost for easier questions. GPT-4 and Claude 2 — for the best analytics. Fine-tuned or open-source LLMs allow for custom models and data management.

When scaling, how do I manage costs?
Cache requests frequently, optimize prompts, use cheaper models and batch process requests. Limit how many requests users can submit to manage your budget, and continue to monitor and analyze costs.

What should I do when I receive errors or exceptions while searching?
Retry when a source is unavailable, correct your queries, validate the format of responses, and switch to an alternate source once a rate limit is reached. At ASCN.AI, the system retries up to three times and falls back to other data sources if necessary.
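A retry-with-fallback policy like the one described above can be sketched as follows (a minimal illustration; the `providers` are callables standing in for real search clients):

```python
import time

def search_with_retry(query, providers, attempts=3, backoff=0.0):
    """Try each provider up to `attempts` times, falling back to the next
    source after repeated failure; raise only when every source is down."""
    last_error = None
    for provider in providers:
        for _ in range(attempts):
            try:
                return provider(query)
            except Exception as exc:   # network error, rate limit, bad format
                last_error = exc
                time.sleep(backoff)    # simple fixed delay between tries
    raise RuntimeError(f"all sources failed: {last_error}")
```

In production you would typically use exponential backoff and catch only specific exception types rather than bare `Exception`.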


Common Mistakes to Avoid When Setting Up AI Agents with Web Search

  • Not filtering sources — use whitelists and date filters to keep results trustworthy and current.
  • Not caching frequent queries — caching saves money and eliminates unnecessary requests.
  • Not monitoring for LLM hallucinations — include explicit instructions in your prompts to reduce them.
  • Not logging your requests and responses — keeping track will allow you to troubleshoot issues more efficiently.
  • Not testing with actual users — beta testing and collecting user feedback will help you continuously improve your products.

Optimizing for Performance and Quality

  • Process requests in parallel.
  • Decrease the token size of the prompts.
  • Use LLMs to reformulate questions prior to conducting searches.
  • Utilize LLMs to rank results from a search query.
  • Remove irrelevant and promotional material from search results.
  • Continuously monitor the quality of LLMs and their answers.
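The first point, parallel processing, can be sketched with a thread pool (a minimal illustration; in practice `search_fn` would be an HTTP call to the SearchAPI, so the I/O-bound requests overlap and total latency approaches that of the slowest single call):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_search(queries, search_fn, max_workers=8):
    """Run independent search calls concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(search_fn, queries))
```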

Security and Privacy Best Practices

  • Do not send personal data to public LLMs — use local models when privacy matters.
  • Encrypt your logs and only allow authorized individuals to view them.
  • Have mechanisms in place to restrict unauthorized access to your system via prompt injection.
  • Comply with all regulations regarding data privacy, including GDPR and other applicable law.
  • At ASCN.AI, we protect user data by utilizing anonymization, encryption, and deletion policies.