Great topic. Elfsight’s AI Chatbot is awesome. It responds well provided it’s trained properly by both Elfsight and the site Administrator.
Observations:
The AI chatbot can be quite verbose at times. Meaning, once it gives the answer the customer is seeking, many times the response includes additional questions or provides too much information (uncommanded). Is their any way to get the chatbot to “shorten” its answers?
Many times, the AI chatbot provides answers the lead the customers to say “Yes”, but the chatbot does not follow-thru properly, thus impacting the customer’s confidence in the chatbot. A frustrating moment as well. How can we fix this? For example:
Chatbot: “Would you like me to schedule an appointment for you?” Customer: “Yes” Chatbot: “Currently, I’m unable to schedule the appointment, but you can always call (phone number) to schedule an appointment.”
@Petar_Dietrich Hi Petar, thanks a lot for sharing these examples - super helpful use cases!
You’re absolutely right saying that the chatbot’s behavior depends a lot on how it’s trained and what instructions it receives.
If the chatbot feels too verbose, you can add a note to its instructions to keep responses brief and focused. For example: “Keep responses concise and focused. Avoid unnecessary questions or extra details.”
Great example of the bot offering actions it can’t actually complete The best fix is to be explicit in the instructions, for example: “Only offer actions that you can fully complete. If you cannot complete a requested action, do not suggest that you can. Instead, clearly explain the limitation and guide the user to the correct channel (phone, email, live agent, etc.).”
Have you tried to adjust the instruction to correct the chatbot’s behaviour?
If you have other questions or thoughts, you’re most welcome
I added your instructions, but the AI Chatbot is still verbose (better though). Also, it still provides questions or recommendations for which it cannot comply with. See below screenshot for another example (post-training with your suggested instructions).
In any case, rather than trying to fix everything with “training instructions,” is there another, more robust backend solution that your developers can explore (not for my case, but in general)?
Thank you!
Curious: Can you developers add to your AI Chatbot the ability to route customer requests to a given or specified email address (internal, for the business hosting the AI Chatbot)?
To fix the issue when the AI agent offers to complete the booking right in the widget, please try to use a more specific prompt:
If a user asks to schedule an appointment or booking, provide only relevant booking links or contact details. Do not offer to complete booking right in the chat.
Regarding the verbose replies, could you please share an example of such a response after adding Helga’s prompt to the Instruction? I’ll be happy to look into this
Thank you. I noticed you (or your developers) made the changes for us. Much appreciated!
Note: If you don’t mind, please DO NOT change our AI Chat bot configurator without our permission or request. This will help our company manage, control, and track better our chatbot’s content configuration.
Concerning your question, I’ll share a couple of good examples (i.e., verbose answers) when they cross my desk.
And last, thanks for the new Skills feature. I’ll explore it and give it a spin (or someone else in our team).
I completely understand that changes like moving settings to different sections and adding new tabs in the configurator can be a bit confusing. However, this is part just of our regular feature release process.
Making adjustments to each widget individually would be challenging, as it would make impossible to maintain different widget versions. Plus, it would significantly slow down our ability to develop new features, which is our top priority.
However, I also understand your point and the good news is that we always provide detailed information about these changes in our Changelog.
And if you ever miss anything or have questions, feel free to reach out to us. We’ll be happy to help
To clarify, I was not referring to updates (new settings) added by your developers to the AI Chatbot configurator. I was referring to adding custom CSS/JS code or training information to our configurator settings (like you did in the above case) without advance notice, approval, or request. If you or your developers need to add CSS/JS code or training information to our configurator for testing purposes, it’s OK to do that, but please return our configurator back to its pre-testing configuration. Makes sense now?
Thank you. As always, great widget. Keep up the great work!
Hmm. we haven’t added any CSS/JS codes to your widget recently. As for this prompt suggested, I’ve tested it in your widget, but haven’t saved it in the settings.
Do I get it right that this prompt was saved in the Instructions?
Anyways, I totally get your idea and apologize for the inconvenience. We’ll just provide the codes you need in the future and you’ll be able to publish them yourself if needed
No worries, Max. All good. If possible, simply honor my above request. Perhaps share it with your team (as an internal, company guideline or policy)?
Suggested Guideline or Policy: “It is acceptable to modify a customer’s widget configurator for testing purposes provided the customer’s widget configurator is returned back to its pre-testing configuration after testing has been completed.”
And hey, where is everybody else? Am I going to be the only person answering Helga’s topic question? I hope not!
I really appreciate your posts, they’re always full of very helpful details!
I’ve checked the thread you mentioned and see what you mean about the chat being “verbose”, and I’m really sorry about that. Worry not, we’ve already reported this behaviour to the devs - I hope we’ll have a reassuring update for you soon
One challenge I’ve personally noticed when working with an AI chatbot on a website is getting responses that feel both accurate and engaging at the same time. The bot can often answer correctly, but sometimes the tone feels a bit too robotic or generic.
For example, on one project where I share content about puzzle games, I wanted the chatbot to respond more like a helpful guide rather than just giving short factual answers. Visitors usually ask things like how the puzzle works, tips for forming longer words, or strategies for solving daily challenges.
The difficult part was making sure the chatbot:
Explains things clearly for beginners
Keeps the tone friendly and conversational
Still gives accurate puzzle rules and tips
At first, the responses were technically correct but felt a bit stiff. What helped was adding more detailed prompt instructions and example responses so the chatbot understood the style I wanted. I also included a few sample Q&A scenarios about the game, which improved the consistency a lot.
So in my case, the hardest part wasn’t just accuracy — it was balancing accuracy with a natural tone that fits the topic and audience.