We’ve noticed your AI chatbot is having trouble fetching real-time info from the web. Even for simple public facts (like Boeing’s CEO or Gulfstream’s address), it’s giving a “hard refusal” or sticking only to internal links rather than searching the live web as instructed. Examples below.
Could you help us look into these four areas?
1. Search Connection: Is the Web Search/Google API active and authenticated?
2. Grounding Settings: Is the “Strictness” set too high, preventing it from looking outside uploaded files or other internal training information?
3. Tool Access: Is “Function Calling” enabled so the bot can actually trigger a search?
4. Safety Filters: Is public data (e.g., corporate, government) being accidentally flagged as restricted or private?
These areas are not all-inclusive.
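For the Tool Access item, here is a minimal sketch of what a function-calling setup could look like, assuming an OpenAI-style Chat Completions `tools` schema. The `web_search` tool name and its parameters are hypothetical, not Elfsight’s actual configuration:

```python
# Hypothetical web-search tool definition in OpenAI-style function-calling
# format. If no such tool is registered with the request, the model has no
# way to trigger a live search and can only answer from its knowledge base.
WEB_SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "web_search",  # hypothetical tool name
        "description": "Look up public factual information on the live web.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query"},
            },
            "required": ["query"],
        },
    },
}

def tools_for_request(function_calling_enabled):
    """Return the tool list to attach to a chat request."""
    return [WEB_SEARCH_TOOL] if function_calling_enabled else []
```

The point of the sketch: if the flag that gates `tools_for_request` is off, the bot’s “hard refusals” are expected behavior, because the search tool simply never reaches the model.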
Our goal is to make sure the bot can supplement our internal data with fresh industry info when needed.
Is the chatbot currently operating in a “closed knowledge” configuration where it can only respond using uploaded (or limited) content?
If so, please enable live external web retrieval for public factual information (corporate leadership, company addresses, regulatory updates, etc.), with fallback only when verification truly fails.
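The requested policy could be sketched roughly as follows. This is illustrative only; the threshold value and all function names are assumptions, not anything Elfsight has documented:

```python
# Sketch of a knowledge-base-first policy with a live-web fallback for
# public factual queries. Names and the threshold are illustrative.
KB_CONFIDENCE_THRESHOLD = 0.75  # assumed grounding "strictness" knob

def answer(query, kb_lookup, web_lookup, is_public_fact):
    text, confidence = kb_lookup(query)      # 1. try the knowledge base
    if confidence >= KB_CONFIDENCE_THRESHOLD:
        return text
    if is_public_fact(query):                # 2. fall back to the live web
        result = web_lookup(query)
        if result:
            return result
    # 3. refuse only when verification truly fails
    return "I couldn't verify that from available sources."
```

In this shape, internal data stays the priority, the web is consulted only for public facts the knowledge base cannot answer confidently, and the refusal becomes the last resort rather than the default.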
To help further, I used Elfsight’s AI Chatbot (site demo) and it behaved the same way – as expected and as reported above. So, there’s definitely an issue with your LLM’s ability to perform a safe, live web search and provide the needed answers. See below.
So, yes, your developers definitely need to look into this and fix it - if at all possible.
You were right: the AI Chatbot is currently strictly tied to the knowledge base and can’t search for information in external sources. This approach is necessary to ensure that the chatbot provides accurate, verified information and minimizes errors, allowing each user to customize the widget for their specific case.
At the moment, there is no way to change this behavior and allow external web search. However, I totally get your idea and I’ve added it to the Wishlist on your behalf: Answer questions beyond the knowledge base.
If more users support this idea, we may consider it in future updates.
Respectfully, I disagree with the rationale provided.
Yesterday, prior to your response above, I re-tested your AI Chatbot and it was responding quite well: 99% of answers requiring an external source were accurate. Today, your AI Chatbot went back to limiting answers that require external sources. Some were accurate, others were blocked. Example below (tested today).
What’s going on? Your developers must have changed something; I’m not sure what. Why the difficulty? You’re welcome to check our AI Chatbot Training Instructions, which have been optimized to use reliable, reputable external sources.
If helpful, I can give you a complete list of technical areas and commands that your developers can use to improve the chatbot and allow the use of external sources.
In short, your AI Chatbot setup needs to be reviewed and improved. This is not an issue that merits a decision based on “votes.”
Elfsight is using ChatGPT as its LLM. When I query ChatGPT directly, the answers it provides are completely different from those provided by Gemini for identical queries: ChatGPT’s answers are inaccurate or stale, while Gemini’s are accurate. See below. Ouch!
So, it appears the issue I reported above is caused in part by both the LLM (ChatGPT) and the internal directives and/or commands Elfsight developers are using.
I would also like to bet that Google has blocked or restricted ChatGPT (OpenAI) bots and scrapers from accessing Google’s search engine results.
You were actually right in your assumptions. The AI Chatbot does not answer questions beyond the knowledge base for two main reasons:
Internal instructions
In most cases the widget is meant to answer questions related to one specific business or area only.
If we loosen that connection, it could affect the quality of answers to questions that are directly related to the business. Since those are the chatbot’s main priority, we need to be careful here.
That said, I completely understand your point, and I agree that the widget would become much more powerful if it could also answer general questions. We’ll definitely keep this idea in mind for the future.
Limitations of the current LLM
The current language model, ChatGPT 5-mini, cannot handle publicly available general information reliably enough. That is why the widget is currently tied so strictly to the knowledge base. This helps avoid false or inaccurate answers.
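The “strict tie” described here typically works like a similarity cutoff: the bot answers only from retrieved knowledge-base passages whose score clears a threshold, and refuses otherwise. A minimal sketch, assuming a hypothetical scoring pipeline and cutoff value:

```python
# Illustrative strict-grounding gate: answer only from knowledge-base
# passages whose retrieval score clears a cutoff; otherwise refuse rather
# than let the model guess. The cutoff value is an assumption.
STRICTNESS_CUTOFF = 0.8  # higher = more refusals, fewer wrong answers

def grounded_answer(passages):
    """passages: list of (text, retrieval_score) pairs.

    Returns the concatenated usable passages, or None to signal a refusal.
    """
    usable = [text for text, score in passages if score >= STRICTNESS_CUTOFF]
    return " ".join(usable) if usable else None
```

This is the trade-off the support reply is describing: a high cutoff suppresses hallucinated answers from a weaker model, but it also suppresses correct answers to general questions that the knowledge base simply doesn’t cover.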
A more advanced model, ChatGPT 5, handles this much better. In fact, when your chatbot briefly started answering general questions, it was because our developers were testing your widget and had locally switched it to GPT-5.
At the moment, we cannot switch all users to a more advanced model. However, we are considering offering it as a paid add-on in the future. If there are any updates, we will share them in the Wishlist thread.