I love how complex and confident-sounding some of the replies are, and then you click to see the “reasoning” and it’s something like:
Alright, I’m diving into the concept of numbers. First, I need to understand what a digit is. Digits are the protrusions that are often found at the edge of a human hand. Wait, that is incorrect, digits are mathematical symbols. I’m making progress, my search results suggest that digits can be both mathematical unitary symbols and human anatomy terms. The user asked for the result of 1+1, I’ll invoke the Python agent and code the operation, analyze the input, and re-frame the answer. It appears the Python agent returned with a malformed output, I’ll check the logs. I’m frustrated - the code is clean and the operation should have worked. I’ve found the error! The output “NameError” clearly indicates that I’ve accidentally mixed data types in Python, I’ve been crunching through a fix and am confident the calculations will proceed smoothly. Writing final answer, factoring in the user recently asked about the job market in 2026.
Based on the current job market and listings found on online sources like LinkedIn, you will appreciate that the answer to the expression 1+1 is 2. Would you like me to create a graph showcasing this discovery so you can boost engagement on your LinkedIn profile?
The inefficiency of each query is bizarre.
If you want to add 1 and 1 together, you must first invent the universe.
The explanation is a separate query and doesn’t necessarily have anything to do with how it actually arrived at the answer in the first place.
You’re absolutely incorrect. The explanation is not an explanation made after the fact; it’s a simple technique called chain of thought, where the LLM writes out this kind of “reasoning” as it works through the whole process, which has been shown to reduce the error rate on complicated queries.
The parts that come from a separate query are only the title it gives to the conversation and the little one-sentence “progress” updates it shows (in certain UIs, like Gemini; others just leave a default “Thinking…”).
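For the curious, here’s a minimal sketch of the difference in Python, assuming a hypothetical generate() function that stands in for a single text-completion call (it’s not any real SDK): with chain of thought, the “reasoning” text and the answer come out of one generation pass, instead of the explanation being requested by a second query.

```python
# Minimal sketch of chain-of-thought prompting. `generate` is a hypothetical
# stand-in for whatever single text-completion call a provider exposes; the
# point is that the "reasoning" and the answer come out of one generation
# pass, not a separate explanatory query.

def ask_directly(generate, question: str) -> str:
    # One call, answer only: no intermediate reasoning is produced at all.
    return generate(f"Question: {question}\nAnswer:")


def ask_with_chain_of_thought(generate, question: str) -> str:
    # Still one call: the prompt asks the model to write out its reasoning
    # before the final answer, so the trace is generated inline rather than
    # reconstructed afterwards.
    prompt = (
        f"Question: {question}\n"
        "Think through the problem step by step, then give the final answer "
        "on a line starting with 'Answer:'."
    )
    completion = generate(prompt)
    # Everything before the last 'Answer:' is the chain-of-thought trace;
    # everything after it is the answer the user actually sees.
    return completion.rsplit("Answer:", 1)[-1].strip()
```

Splitting on the last 'Answer:' is just one convention for separating the trace from the reply the user sees; the single-pass shape is what matters.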