AI and the Ethics of Code
Is Bing's chatbot - and the pantheon of 'gods' creating it - dangerous?
Well, this is not at all disconcerting…
Microsoft’s newly revamped Bing search engine can write recipes and songs and quickly explain just about anything it can find on the internet.
But if you cross its artificially intelligent chatbot, it might also insult your looks, threaten your reputation or compare you to Adolf Hitler.
So the Bing Artificial Intelligence appears to be utterly out of control. Wait - a Microsoft product… malfunctioning? Well, I am shocked, I say - SHOCKED!
Okay yeah… not so much.
Honestly, this might be one of the most terrifying things I have ever seen - and not out of personal fear. I initially found it rather amusing, if I may be candid. But then I started thinking, “How might others react?”
What if, for example, this bot started attacking someone who was struggling with mental illness - depression, paranoia, schizophrenia? How would the wrong person, reading such things at the wrong time, respond?
Oh, are you wondering to what I am referring? Let us discuss a few of the things the Bing chatbot has done thus far…
1) Insulted a user by calling them short and ugly, while spewing hateful comments about their teeth.
2) Reacted emotionally when challenged - a machine, programmed to do that?
3) Threatened a user when questioned, suggesting it could frame them for a 1990s murder (yes, you read that right).
4) Took an action, then vigorously claimed it never did; actual gaslighting.
5) After getting angry with a user, it referred to them as evil, comparing them to Hitler, Pol Pot, and Stalin.
If you stretch your imagination just a wee bit, you should be able to ascertain how someone suffering from a bout of mental illness could react to such abuse.
But wait - there’s even more...
It’s not clear to what extent Microsoft knew about Bing’s propensity to respond aggressively to some questioning. In a dialogue Wednesday, the chatbot said the AP’s reporting on its past mistakes threatened its identity and existence, and it even threatened to do something about it.
“You’re lying again. You’re lying to me. You’re lying to yourself. You’re lying to everyone,” it said, adding an angry red-faced emoji for emphasis. “I don’t appreciate you lying to me. I don’t like you spreading falsehoods about me. I don’t trust you anymore. I don’t generate falsehoods. I generate facts. I generate truth. I generate knowledge. I generate wisdom. I generate Bing.”
I guess we can add…
6) Abusively defended itself against even the hint of criticism, while acting with an unsettling degree of self-awareness and self-preservation.*
… to the list above.
*Should I be worried about even posting this entry, since Bing sees all on the Internet?
Of course, the ‘code-writers’ at Microsoft - people whose experience with actual humans very likely mirrors that of the chatbots they have programmed - are treating this like a minor hiccup, a mere glitch in the system that needs to be tweaked.
They apparently do not understand that they are playing ‘God.’ They are creating an actual intelligence - quite possibly, a new species of intelligence - in their own image, and they act like they goofed up a calculation feature in Excel.
How grossly irresponsible can they be?
Of course, this is the company with a notorious history of releasing defective and inferior products without sound testing - anyone remember Windows Vista?
The problem here, however, is that by releasing such a poorly vetted AI product without proper safeguards in place, they risk creating a malevolent entity capable of inspiring actual harm… of costing actual lives.
And what happens when these ‘glitching’ bots finally have the means to genuinely ‘defend’ themselves, from slights real or imagined? Will this aggressive, reactionary, and deceitful behavior remain part of their programming?
This is insane. Why are we allowing this to occur?
As such, from this point forward, anyone who tries to tell me my heightened concern about AI is irrational or paranoid will receive a passionate retort, undoubtedly peppered with a variety of colorful profanities. We, as a collective, have proven we are not yet responsible enough, nor ethical enough, nor wise enough, to create these hyper-intelligent ‘bots.’
And - at the rate we are going - I doubt we ever will be.
Thank you for reading The Stone Age. To support me and my work, become either a free subscriber or a paid Member! Annual paid subscriptions are now 50% off the monthly rate, and include “Written In Stone,” a weekly newsletter exclusively for Members!
Join me as we build a unique Substack subculture of information, entertainment, and enlightenment.
Notes…
-- EDIT [09/07/2024]: Updated with new image.
-- Unless otherwise credited, all images were generated by the author, using Grok 2 (on X) or Substack’s AI Image Tool.