AI Confusion Over Which Year Was 15 Years Ago
This Gemini response seems to reveal an AI model's inner "thought" process. It is a good example of a model briefly losing its seemingly logical footing: it first denies, then affirms, the very same (mathematically simple) statement. The confusion likely arises from how large language models generate text: one word at a time, each predicted from the preceding ones based on patterns in human writing, with no stable inner logic underneath. Here, a reasoning process seems to emerge mid-sentence: the model appears to realize that the math actually does add up and "course-corrects," but because it cannot revise what it has already written, the earlier contradiction is left standing. It almost resembles a human thinking out loud (and perhaps calculating how many years have passed?). This is the strangeness of AI: fluent, yet not always coherent.
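A minimal Python sketch of this decoding loop may make the mechanism concrete. The `next_token` function below is purely hypothetical, a stand-in for a real model's decoder that here just returns canned tokens (with illustrative years; the actual years in the Gemini exchange may differ). The loop structure is the real point: the prefix only ever grows, so a correction can only be appended, never edited in.

```python
# Toy sketch of autoregressive decoding: why a model cannot retract
# tokens it has already emitted. `next_token` is hypothetical; a real
# model would score a whole vocabulary given the prefix and pick or
# sample the most likely continuation.

def next_token(prefix: list[str]) -> str:
    # Canned continuation with illustrative years (2024 - 15 = 2009),
    # mimicking the deny-then-affirm pattern described above.
    canned = ["No,", "2009", "was", "not", "15", "years", "ago.",
              "Wait,", "2024", "-", "15", "=", "2009,", "so", "it", "was."]
    return canned[len(prefix)] if len(prefix) < len(canned) else "<eos>"

prefix: list[str] = []
while True:
    token = next_token(prefix)
    if token == "<eos>":
        break
    # The prefix only grows; each token is frozen the moment it is
    # emitted, so a mid-sentence "realization" can only be appended
    # as a correction, never edited into the earlier text.
    prefix.append(token)

print(" ".join(prefix))
# No, 2009 was not 15 years ago. Wait, 2024 - 15 = 2009, so it was.
```

Nothing in the loop retracts the opening "No,": the model's only tool is to keep generating, which is exactly the deny-then-affirm pattern the response shows.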