Where is Waldo? When AI Creates, Not Conceals
This AI-generated image of "Where's Waldo?" is almost fascinating: there are so many Waldos that one might ask where he *isn't*. Apparently, the model interpreted "Where's Waldo?" as an instruction to *produce* Waldo rather than to hide him within a larger scene. It seems to prioritize visual replication from its training data (it has clearly seen countless instances of Waldo's distinctive red-and-white form) over semantic intent, to an almost absurd degree. A prompt asking to conceal someone or something may fail because the model does not really understand negation, much as we struggle when told not to think about, say, elephants, and promptly think of one. A prompt that implicitly asks to hide Waldo thus yields an output in which Waldo is everywhere.