In fact, at some point Google and OpenAI will have to face that the actual best uses for chatbots (summarizing chunks of text and some easy programming tasks, so far?) diverge from what many enthusiasts want the chatbots to be (computer demi-gods). If Gemini won’t tell you that Hitler is worse than Elon Musk, is it a failure of the chatbot that needs to be fixed, a failure of the user for prompting it to the wrong purpose, or a failure of the chatbot’s owners for trying to have their cake and eat it too? Is it a precise creative tool, a well-sourced search engine, an accurate encyclopedia, a magical scrying ball, a silly parlor trick? Google and OpenAI and their peers and boosters have marketed A.I. chatbots as all of the above--do-anything miracle tools--but these models manifestly can’t do “anything,” as John Herrman writes at Intelligencer:

The best defense the AI firms have — our products aren’t as good as we’ve implied, they reproduce and exaggerate problems that exist in the real world, they might be irresolvably strange as a concept, and no matter how we configure them, we’re going to be making unilateral and arbitrary decisions about how they should represent the world — is one that they can’t really articulate, not that it would matter much if they could. Image generators are profoundly strange pieces of software that synthesize averaged-out content from troves of existing media at the behest of users who want and expect countless different things. They’re marketed as software that can produce photos and illustrations — as both documentary and creative tools — when, really, they’re doing something less than that.
Thursday, February 29, 2024
Trapped With A Chatbot
It'll change everything!