AI Chatbot Not Respecting Pre-Set "Quick Reply" Answers

Howdy,

Our website’s chatbot currently displays three (3) “Quick Reply” questions. We have trained the bot to provide a specific answer for each question. Unfortunately, the chatbot is not respecting our answers and is instead providing answers based on other training data.

For example, for our “Quick Reply” question:

Book a consultation or get a quote.

we trained our bot to provide the answer below, but the chatbot is providing another, much longer answer.

Training (Desired) Answer:

Thanks for your interest! We can either arrange a consultation or prepare a custom quote for your needs. Which would you like to start with?

Chatbot Answer:

Extremely long and verbose. See screenshots below.

Also, I am using the Agent Instruction provided below, but it’s not working either.

Agent Instruction:

Quick reply answers must match the pre-set answers or replies given in the Q&A section of this training module.

Please review and provide a fix. It appears the logic embedded in your chatbot’s code needs to be adjusted and/or corrected.

Thank you!


Hey Petar,

Saying “Quick Reply Answers” & “Q&A Answers” most likely doesn’t make sense to the AI; in my experience, it has to be given very specific instructions. I had this issue as well. I recommend giving it step-by-step information: from what it should say when clients ask about “Consultations” or “Quotes” to what it should say after a client picks one.

Here’s an idea: “When a client asks if they can get a consultation or quote, please ask which one they would like to do before giving them our contact info, because we offer both and need to know before proceeding. If they pick consultation, please guide them to our booking page; if they pick quote, please guide them to our quote form.”

Let me know if I made the idea clear.


Hi there, @Petar_Dietrich :waving_hand:

Please let me discuss this with the devs. I’ll report back once I have their response :wink:


Hi there, @Petar_Dietrich :waving_hand:

Thank you for waiting!

The assistant doesn’t use the brief answers from the Q&A section, since questions from Q&A are covered by the behavior prompts from your instruction. For example:

“Book a consultation or get a quote” is covered by this protocol from the instruction:

If a user asks to schedule an appointment or booking, provide only relevant booking links or contact details. Do not offer to complete booking right in the chat since you can’t do this.

“What are your team’s capabilities?” / “What services do you offer?” is covered by the “What you can help with?” instruction prompt.

“Do you offer complimentary project quotes?” falls under Price Quote Requests.

Thus, it’s not possible to create a fully reliable Q&A protocol while the instructions are overloaded.

The instruction guidelines that conflict with the replies from the Q&A need to be revised, and static information such as Key Service Capabilities should be moved to the knowledge base.

Once this is done, we recommend using the following prompt in place of the Q&A EXACT-MATCH OVERRIDE PROTOCOL to make sure the bot uses the answers from the Q&A section:

**STRICT Q&A RESPONSE PROTOCOL**

Trigger Condition:
If a user's query exactly matches, case-insensitively matches, or clearly represents the same intent as a specific Q&A entry in the official Omnia Aerospace knowledge base, it MUST be treated as a confirmed Q&A match.

Mandatory Response Behavior:
* Return the corresponding knowledge-base answer verbatim.
* Do NOT rewrite, paraphrase, summarize, expand, interpret, enhance, or personalize the answer.
* Do NOT add greetings, introductions, branding language, disclaimers, lead capture prompts, booking links, or additional instructions.
* Do NOT merge the Q&A answer with other policies (e.g., booking, pricing, consultation, valuation, or upgrade protocols).
* Preserve the original wording, punctuation, capitalization, spacing, line breaks, bullet formatting, and hyperlinks exactly as stored.

Precedence Rule:
When a confirmed Q&A match exists, this protocol overrides all other behavioral, tone, formatting, pricing, booking, lead-capture, sourcing, and consultative instructions.

I thought so. I’ve been guiding the AI with specific instructions, so that makes sense; it just takes some conversation and experimenting to figure out :saluting_face:.


@Max: I will test your training instruction and report back if any issues. Thank you!

@Adore: I appreciate your input. Thank you!


Hey @Max,

How long does it take (on average) for the bot to learn new instructions? So far, the above is NOT working.

PS: I’m still testing (evaluating the above instructions for possible conflicts with existing ones).

Thank you!

The changes should be applied almost right away. If the issue still persists once you finish revising the instruction, please let me know. I’ll be happy to check it!

Hey, @Max. Well then, bad news. The training instructions did not help. You’re welcome to test our bot at your end.

Further, I would like to know under what exact conditions your chatbot pulls the answers we provided in the Q&A section when a visitor asks the exact same question. I still believe there’s a bug in your chatbot’s backend code. If your chatbot is not going to respect the Q&A section, then delete it from the “Train Your AI Agent” section.

Also, if you don’t mind, it may be helpful to test the functionality of your chatbot’s Q&A section using your own live website (not just ours). I’d like to know if you’re having any luck with that.

Thanks for your time and dedication. Looking forward to a more robust solution.

Cheerio!

3 posts were split to a new topic: Feedback about AI Chatbot

Hi @Petar_Dietrich :waving_hand:

Please let me consult with the devs. I’ll report back once I have any news :slightly_smiling_face:

Hey @Max,

After six iterations (with assistance from both Copilot and Gemini), our new training instructions (simplified and refined) properly address the following:

Summary:

  1. Quick Reply (Q&A) Answers: Eliminated both duplicated and contradicting information.
  2. Verbosity: Provided instructions to limit the length of answers.
  3. Accuracy of Answers: Provided instructions to allow the use of both internal and external (authoritative) sources.

While the above helped, I am still looking forward to your developers’ answer to my above question(s).

Thank you!


Hi there, @Petar_Dietrich :waving_hand:

I see that you’ve significantly simplified your instruction, and I am happy to see it worked for you.

Our devs confirmed that such issues may occur with long and complicated instructions. Overloaded instructions can confuse the AI model, so it’s best to keep them clear and to the point.

Thus, if your instruction is brief and concise, it’s enough to use the prompt below to make the bot use the exact answers from the Q&A :slightly_smiling_face:

If the user’s question exactly matches or closely resembles a “Question” entry from the knowledge base, provide the corresponding “Answer” verbatim from the knowledge base.

Hi there, @NRV_Wayfinder :waving_hand:

Thank you for the feedback!

As far as I remember, you’ve had some issues with inappropriate responses, but you’ve tweaked the instruction and it started working fine.

The main issue for you was that test messages were included in the limit. I am happy to say that messages sent from the widget configurator are no longer counted, so you can test your chatbot without any limits :wink: - AI Chatbot: Enjoy unlimited messages in testing mode

If you’re still experiencing an issue with the wrong answers, please share the details of your use case. I’ll be more than happy to look into this for you!

Hey @Max,

Our training instructions are in a constant state of flux. They change based on what we’re learning from our visitors’ questions and the responses provided by your LLM. So, can you kindly answer my initial question?

“…I would like to know under what exact conditions your chatbot pulls the answers we provided in the Q&A section when a visitor asks the exact same question.”

Thank you!

A post was merged into an existing topic: Feedback about AI Chatbot

Sorry for not being clear enough!

The chatbot uses the same answers from the Q&A section when:

  • The user’s question matches a question in your Q&A.
  • The instructions are clear, concise, and don’t conflict with the information in the Q&A.

If you change the instructions, especially if they become too long or overloaded, the bot may stop using the Q&A content.

If this happens again, we recommend revising your instructions. You can also try using the prompt suggested in the previous message to fix the issue.

If you’ve followed all the steps and the issue still persists, please let us know. We’ll be happy to look into it for you :slightly_smiling_face:

Hey @Max,

So, I performed the ULTIMATE test for the Q&A section of your configurator. Here goes:

  1. I deleted ALL of our AI chatbot training instructions. Blank field. Nada. Waited 5 minutes.
  2. I queried the AI chatbot with the IDENTICAL quick-reply question in our configurator.
  3. The AI chatbot DID NOT provide the same answer I stored in the Q&A section of the configurator.

Please try the above at your end using your own chatbot set-up. You should get the same results.

Something ain’t right, amigo! :worried:


Thanks a lot for sharing your observation!

I’ll discuss it with the dev team and will update you as soon as I have any news :slightly_smiling_face:

Hello there, @Petar_Dietrich :waving_hand:

Apologies for the long wait!

Our dev reviewed your case, and here is what we found about using exact Q&A wording:

a) The Q&A feature was not originally designed as a strict question-and-verbatim-answer format. It is simply one of the available training methods. Since the widget works with a generative model, its replies can vary from one response to another.

b) The prompt I shared earlier can help the bot stick to the exact Q&A answers, but this works best when the instruction is clear, concise, and not overloaded. It also should not conflict with the information in the Q&A. Even after the changes you made, the instruction is still fairly complex for the current LLM.

c) Right now, the widget uses the ChatGPT 5 Mini language model. With an instruction as complex as yours, this model cannot reliably follow a directive to use the exact text from the Q&A. The ChatGPT 5 model handles this much better, and we already have requests to make it available in the widget. You can vote for that here: Ability to switch to GPT-5

At the moment, we cannot switch all users to a more advanced model. However, we are considering offering it as a paid add-on in the future. If there are any updates, we will share them in the Wishlist thread :slightly_smiling_face:
