To train the chatbot, I ask questions in the Elfsight interface and, depending on the answers, feed it resources to help the AI find the right answers.
However, I have a suggestion to improve this new product:
It would be good if the interface offered an option, available to the administrator only, so that when I run these tests I can rate the AI's response (on a star scale, for example) to indicate whether the generated response is partially or fully correct.
That way, if I give a 5-star rating to an answer during a test, for example, the AI will know that this is what I expect from it and can favor that response for prospects' requests on the website.
To be more precise: my request is for a rating given by the administrator (when we receive the copy of the request), to train the AI and help it validate the best answers.
I hope that's clear.
Here's what ChatGPT thinks about it:
When you receive the email with the chat history and the rating, you can adapt your model's training accordingly.
Hi, that's not my request. In my view, website visitors can't judge whether an answer is correct or complete.
For me, it's my job to train the AI and validate its responses.
I would love the ability to review user chats and rate the AI's responses, so the AI bot knows when a response is good and, more importantly, when it is bad, and I can train it accordingly.
At least in the case of our website, given the complexity of many chatbot-generated replies, a simple thumbs-up/thumbs-down rating doesn't seem likely to be much help for training the AI.
When we see a less-than-ideal reply, we instead go to ChatGPT, upload our training documents and Instructions text, and prompt something like this (including the original faulty chatbot conversation):
“The Elfsight chatbot gave this undesired reply, when the desired reply is ‘…’. How should we update the training documents?”
We always review ChatGPT’s suggestions carefully before making any updates, which helps us improve the training documents and Instructions text more precisely.
Finally, after making an update, we re-enter the original question in the chatbot editor to confirm that it produces the desired response.
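In case anyone wants to script this review loop instead of pasting everything into the ChatGPT UI, here is a minimal sketch of the same prompt, assuming the OpenAI Python SDK; the file names and the model name are placeholders I chose for illustration, not part of Elfsight's product:

```python
# Minimal sketch of the manual review loop described above (an assumption,
# not an official Elfsight or OpenAI workflow). Requires `pip install openai`
# and an OPENAI_API_KEY environment variable.
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def suggest_training_update(faulty_conversation: str, desired_reply: str) -> str:
    """Ask ChatGPT how to revise the training documents so the chatbot
    gives the desired reply instead of the faulty one."""
    # Placeholder file names: substitute your own exported documents.
    training_docs = Path("training_documents.txt").read_text()
    instructions = Path("instructions.txt").read_text()

    prompt = (
        "Here are our chatbot's training documents:\n"
        f"{training_docs}\n\n"
        "Here is its Instructions text:\n"
        f"{instructions}\n\n"
        "The Elfsight chatbot gave this undesired reply:\n"
        f"{faulty_conversation}\n\n"
        f"The desired reply is: '{desired_reply}'.\n"
        "How should we update the training documents?"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(suggest_training_update(
        faulty_conversation="User: What are your opening hours?\nBot: I'm not sure.",
        desired_reply="We're open Monday to Friday, 9am to 6pm.",
    ))
```

As described above, we'd still review the suggestions by hand before touching the training documents, and re-test the original question in the chatbot editor afterwards.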
Glad to say that this idea is already on the Wishlist, and I've merged your comment with it.
@Paul_D, thanks for sharing your thoughts with us!
I totally understand your point, and I agree that leaving just a low rating for poor responses won’t really solve the issue.
It might be helpful to have a feature where users can provide more detailed feedback on what was wrong, and we'll think about this possibility. That way, after a report is submitted, the information could be fed back into the Instructions or knowledge base.
For good responses, the rating option would work perfectly. If the same question comes up again, the bot will already know the correct answer and deliver it smoothly without any issues.