How an embarrassing U-turn exposed a concerning truth about ChatGPT | Chris Stokel-Walker

An update that made the chatbot too ‘sycophantic’ has been reversed: always remember that it’s designed not to answer your question, but to give you the answer you wanted

Nobody likes a suck-up. Too much deference and praise puts all of us off (with one notable presidential exception). We learn as children that hard, honest truths can build respect among our peers. It’s a cornerstone of human interaction and of our emotional intelligence, something we grasp early and put into action.

ChatGPT, though, hasn’t been so sure lately. The updated model that underpins the AI chatbot and helps inform its answers was rolled out this week – and was quickly rolled back after users questioned why its interactions had become so obsequious. The chatbot was cheering on and validating people even when they expressed hatred for others. “Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly said in response to one user who claimed they had stopped taking their medication and left their family, who they said were responsible for radio signals coming through the walls.

Chris Stokel-Walker is the author of TikTok Boom: The Inside Story of the World’s Favourite App