You’re absolutely incorrect. The explanation is not generated after the fact; it comes from a simple technique called chain of thought, where the LLM must append a running log of this kind of “reasoning” throughout the process, which has been shown to reduce the error rate on complicated queries.
The only explanations generated by a separate query are the title it gives to the conversation and the little one-sentence “progress” updates shown in certain UIs, like Gemini (others just display a default “Thinking…”).
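For anyone unfamiliar, the gist of the technique can be sketched as a prompt-construction pattern. This is a minimal illustration only; the function names and instruction wording are my own assumptions, not any particular vendor's implementation:

```python
# Minimal sketch of chain-of-thought prompting.
# Names and wording are illustrative, not a real vendor API.

def build_plain_prompt(question: str) -> str:
    # Baseline: ask for the answer directly, no intermediate reasoning.
    return f"Q: {question}\nA:"

def build_cot_prompt(question: str) -> str:
    # Chain of thought: instruct the model to write out its intermediate
    # steps before the final answer, which reduces errors on multi-step
    # problems compared to answering directly.
    return (
        f"Q: {question}\n"
        "Think step by step, writing out each intermediate step, "
        "then give the final answer on a line starting with 'Answer:'.\n"
        "A:"
    )

print(build_cot_prompt("If a train leaves at 3pm traveling 60 mph, how far has it gone by 5pm?"))
```

The model's "reasoning" log is just the text it emits in response to that instruction, produced token by token during generation, not a summary written afterwards.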