Hi there, @Paul_D 
Apologies for the delayed response!
After switching to GPT-5 Mini, the model might behave differently even when given the same instructions. The new version is trained on more up-to-date data, processes it differently, and is generally more proactive and more prone to generalizing.
As for pulling information from different FAQs, this is normal behavior. The model searches all files at once by matching keywords and tries to give the most helpful response, so when questions are ambiguous or overlap, it may automatically combine information from different sources (such as FAQs or documentation), assuming they relate to the same issue.
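To illustrate the idea, here is a minimal Python sketch of keyword-based retrieval across several FAQ files. The file names, the scoring, and the `top_k` cutoff are illustrative assumptions, not the actual implementation the model uses:

```python
import re
from collections import Counter

def keyword_score(question: str, passage: str) -> int:
    """Count how often the question's keywords appear in a passage."""
    q_words = set(re.findall(r"\w+", question.lower()))
    p_words = Counter(re.findall(r"\w+", passage.lower()))
    return sum(p_words[w] for w in q_words)

def retrieve(question: str, sources: dict[str, str], top_k: int = 2) -> list[str]:
    """Score every source at once and return the best-matching ones.

    When two FAQs score similarly, both are returned, which is why
    the final answer can blend information from different files.
    """
    ranked = sorted(sources, key=lambda name: keyword_score(question, sources[name]), reverse=True)
    return ranked[:top_k]

# Hypothetical knowledge base: two FAQs that both mention "refund"
faqs = {
    "billing_faq.md": "Refunds are processed within 5 business days ...",
    "shipping_faq.md": "If a parcel is lost, request a refund via support ...",
}
print(retrieve("How do I get a refund?", faqs))
```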
The Markdown formatting issue on GPT-5 Mini is a known one; to fix it, we recommend adding this prompt to the Instructions:
## CommonMark Markdown - mandatory
- Output must be **valid CommonMark** and render correctly in any Markdown viewer.
- Use rich Markdown naturally and fluently: headings, lists (hyphen bullets), blockquotes, code blocks, *italics*, **bold**, horizontal rules, and links.
- Links must always remain intact and in proper Markdown format: `[link text](https://example.com)`.
- Never drop or modify provided links — always return them exactly as given.
- Ensure that code snippets are wrapped in fenced blocks with the correct language tag.
- Output must be delivered as raw Markdown source (no escaping, no JSON wrapping).
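If you want to verify on your side that replies actually follow these rules, a quick check might look like the sketch below. It assumes the `markdown-it-py` package and treats any fenced block without a language tag as a violation; the sample reply is made up:

```python
from markdown_it import MarkdownIt  # pip install markdown-it-py

def check_reply(markdown_source: str) -> list[str]:
    """Parse the reply as CommonMark and flag fences missing a language tag."""
    problems = []
    tokens = MarkdownIt("commonmark").parse(markdown_source)
    for token in tokens:
        if token.type == "fence" and not token.info.strip():
            problems.append("fenced code block without a language tag")
    return problems

# Made-up reply to illustrate the check
fence = "`" * 3
reply = f"Here is the snippet:\n\n{fence}\nprint('hi')\n{fence}\n"
print(check_reply(reply))  # -> ['fenced code block without a language tag']
```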
That said, customizing prompts to fit your needs is a crucial part of developing a chatbot. The default behavior of the model might not match what you’re aiming for, so you can always adjust or add instructions to guide it better.
So, we’d recommend keeping the current language model, since it’s much smarter and provides better, more thoughtful responses. Moreover, we plan to move all clients to the GPT-5 Mini model in the near future.