Changes in AI Chatbot responses

Forget incorrect answers… I just had an anonymous user submit a question that should not have had an answer, and yet the chatbot replied in the affirmative! My services directory DB does not contain a reference to Lyft or Amtrak anywhere. It doesn’t list them as a service location or an agency, nor do their names appear in any service description. Your bot had no data to base its response on, and yet it answered anyway. Misinformation based on my data is one thing, but complete fabrication? It did something similar with the response about Amtrak, but at least it prefaced that one by stating that there is no specific information in the data. Still, with no source data, where is it pulling the information for these responses from?

I can appreciate it providing info not found in my directory, but the implications are profound. In addition, if a visitor uses the chatbot, gets responses like this, and then goes to the app itself to manually search for the information, they will not be able to find Lyft, and that makes my app/directory look incompetent and untrustworthy, since they’d expect to find an entry. The bot is offering info that I have no control over (and cannot vet for accuracy), since it’s being pulled from outside my uploaded DB.

After talking to CS and you, I even revised my instructions to account for hallucinations. Apparently, it didn’t help. In fact, this is worse: it’s referring to data that was never provided. Here are my instructions. I originally started with the Elfsight AI analysis and its suggested instructions. Then I adjusted them to meet my specific needs. Finally, I used another AI (Perplexity) to refine the instructions so they were customized for AI interpretation:
