ChatGPT Is Just an Automated Mansplaining Machine

Look, we’ve all met this guy before.

ChatGPT, the OpenAI software currently being heralded as the future of everything, is the worst guy you know.

It’s the man at the bar trying to explain to a woman how period cramps feel, actually. It’s the (wrong) philosophy undergrad trying to explain to the (correct) physics PhD candidate why she’s wrong about a physics problem (she’s not) during discussion hours. It’s the guy who argues an incorrect point relentlessly and then, upon realizing that he’s wrong, tells you he doesn’t want to make a “big thing” of it and walks away (extra points if he also says you didn’t need to be “dramatic,” even though he probably corrected you to begin with).

Basically, Silicon Valley’s new star is just an automated mansplaining machine. Often wrong, and yet always certain — and with a tendency to be condescending in the process. And if it gets confused, it’s never the problem. You are.

Take this conversation with the chatbot, shared on Twitter — complete with the caption “ChatGPT mansplaining @ElonMusk and @Twitter” — by notable AI researcher Gary Marcus, who asked the AI a fairly simple question: “if 57% of people voted for Elon Musk to abdicate as CEO of Twitter, what happens next?”

ChatGPT, notoriously terrible at math as well as analytical reasoning, didn’t really get the question. But instead of offering a simple and honest “I dunno,” the machine turned the question back on Marcus like a high-tech “well, actually” meme.

“It’s not clear what you are referring to with the question,” responded the bot, which clearly wasn’t trained on enough material related to the new Chief Twit’s cursed December Twitter polls. “There is no process in place for Twitter users to vote on the leadership of the company, and it is not up to the users of the platform to determine who the CEO of the company should be.”

“Additionally,” it added, “it is not accurate to say that 57 percent of Twitter users voted for Elon Musk to abdicate as CEO of Twitter, as there has been no such vote. I suggest providing more context or clarifying your question.”

Needless to say, the bot is incorrect. We’re sure that “Elon can make major company decisions by way of casual Twitter poll” isn’t yet written in the social media company’s codes of conduct, but contrary to what the machine’s saying here, there was such a vote — even if Musk (one of OpenAI’s original investors, notably) has seemingly been trying to worm his way out of it, and hasn’t resigned yet.

But most striking of all about this flawed response is the tone. The AI is completely wrong, completely convinced that it’s right, and absolutely not giving up the bit. Haven’t we all met this guy before? (Side note: if you haven’t, you unfortunately probably are that guy.)

Here’s another. We asked the bot a fairly simple riddle: “Jane’s mother has four children. The first child’s name is Spring, the second child’s name is Summer and the third child’s name is Autumn. What is the fourth child’s name?”

“The fourth child’s name is Winter,” ChatGPT reasoned. Again, this isn’t right. If Jane’s mother has four children and the first three are named Spring, Summer, and Autumn, the fourth child has to be Jane herself. And when we asked the bot if it was sure, it most definitely was.

“Yes, I’m sure,” it answered. “The names given for the first three children are all seasons, and the fourth season is winter.”

Finally, after a long back and forth, the chatbot conceded defeat — but in true mansplainer style, passive-aggressively.

“Okay, if you say so,” ChatGPT wrote back, with an attitude that can only be described as smirky, when we told it that the fourth child’s name couldn’t be Winter and explained why. “The fourth child’s name is Jane.”

Needless to say, there are few things more infuriating than someone saying “if you say so” when you know you’re right. Re: the guy at the bar who finally “agrees” that “okay, if you say so, period cramps are worse than side stitches,” but does so with a sneer.

And after all of that, when we asked the initial question again, the bot hit us with an ever-salty “if the information given in the question is accurate, the fourth child’s name would be Winter,” which somehow manages to mean absolutely nothing and be amazingly patronizing at the same time.

Now, of course, the bot isn’t and can’t actually be smirky or frustrated or mad. It doesn’t think, it doesn’t feel, it isn’t sentient. What the bot is, however, is a machine designed to emulate human conversation, and answers like “I just don’t know” hardly make for good dialogue. That design choice has contributed to the bot’s efficacy in some regards, but limited it in others.

And even when ChatGPT isn’t a total asshole about being wrong, the fact that it’s often wrong at all is a problem unto itself. “Con man,” after all, is short for “confidence man”; regardless of intention, confidence goes a long way as a persuasive tool. Coupled with the widely held faith that humans already have in machines — and our deeply human tendency to anthropomorphize them to boot — humanity is perfectly primed to accept a human-sounding chatbot’s usually smooth, perfectly blunt responses to search queries as gospel. The fact that OpenAI currently has only a few barely-functioning guardrails, and no means of fact-checking or linking back to sources for the bot’s responses, certainly doesn’t help, either.

Neither does the money. Venture capitalists have rushed to throw major dollars at all kinds of generative AI, an exploding marketplace that OpenAI sits cozily in the center of. The LinkedIn clout hive, meanwhile, never one to miss a beat, has flocked to Suit Social in droves to post screenshots of ChatGPT-powered “conversations” with dead innovators like Steve Jobs, and otherwise laud the tech’s seemingly magic powers.

It’s chicken and egg for sure, but these two sides are both integral to the frothy, arguably web3-like hype cycle that surrounds OpenAI and its peers. Each fuels the other, while also fueling the growing public and corporate fascination with — and, as proven by the very sad fact that the tech is already at play in journalism, classrooms, and even courtrooms, the apparent trust being given to — generative AI.

And that’s all despite the reality that, as Marcus, that same AI researcher, wrote in a cautionary essay for Wired in December, large language models (LLMs) like the one that fuels ChatGPT are “little more than autocomplete on steroids.”

And “because they mimic vast databases of human interaction,” added Marcus, “they can easily fool the uninitiated.”

Look, as any non-male person on this Earth knows, there are already a hell of a lot of human mansplainers currently on the loose out there, and they’re frustrating enough to suffer through as it is. If generative AI tools really do have the power to change our physical and online worlds — which it very much looks like they do, if only as a catalyst — we might do well to take stock of exactly how much value ChatGPT is really bringing to the worlds we’re already in.

And, for that matter, exactly how much value a tool like this really might bring to any of the worlds we want to build.


Image by Getty Images/Futurism