There is definitely more going on with LLMs than just direct parroting.
However, there is also an upper limit to how much an LLM can reason or calculate in a single forward pass. Since a forward pass boils down to a fixed-depth series of (mostly linear) operations, there is an upper bound on how accurately any of them can compute anything.
Most chat systems use Python behind the scenes for any actual math, but if you run a raw LLM you can see the errors grow as you move up to harder operations (addition, then multiplication, then powers, etc.).
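A minimal sketch of that kind of probe, assuming a hypothetical `query_model` function standing in for whatever raw-completion API you have (the prompt format and ranges are arbitrary choices, not any particular benchmark):

```python
import random

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a raw LLM completion call;
    # swap in your own model or API client here.
    raise NotImplementedError

def make_problems(op: str, n: int = 100, lo: int = 10, hi: int = 999):
    # Generate arithmetic prompts paired with exact ground-truth answers.
    probs = []
    for _ in range(n):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        if op == "+":
            probs.append((f"{a} + {b} =", a + b))
        elif op == "*":
            probs.append((f"{a} * {b} =", a * b))
        elif op == "^":
            b = random.randint(2, 5)  # keep exponents small
            probs.append((f"{a} ^ {b} =", a ** b))
    return probs

def accuracy(op: str) -> float:
    # Ask the model each problem and score the first token of its reply
    # against the exact answer; expect this to drop from "+" to "*" to "^".
    probs = make_problems(op)
    correct = 0
    for prompt, answer in probs:
        reply = query_model(prompt).strip().split()[0].rstrip(".,")
        if reply == str(answer):
            correct += 1
    return correct / len(probs)
```

Running `accuracy` for each operation in turn is enough to see the pattern the comment describes.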
Yes, exactly. It can do basic math, and it also doesn't really matter, because it is really good at interfacing with tools/functions for calculation anyway.
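The tool-interface pattern is easy to sketch. Here is a toy version (the `calculate` tool name and the call-dict shape are made up for illustration, not any particular vendor's API): the model emits a structured call, and the host app does the exact arithmetic and feeds the result back.

```python
import ast
import operator

# Whitelisted operators, so we evaluate arithmetic, not arbitrary code.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calculate(expression: str) -> float:
    # Exactly evaluate an arithmetic expression the model hands us,
    # instead of trusting the model to do the math in-weights.
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval"))

def handle_tool_call(call: dict) -> str:
    # Dispatch a model-emitted call like
    # {"name": "calculate", "arguments": {"expression": "1234 * 5678"}}
    # and return the exact result to splice back into the conversation.
    if call.get("name") == "calculate":
        return str(calculate(call["arguments"]["expression"]))
    raise ValueError("unknown tool")
```

The model only has to produce the expression string; the error-prone digit manipulation happens in ordinary code.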
Also, this only holds when the LLM isn't using chain-of-thought reasoning.