14 Comments
author

Some users (shall we characterize them as chaotic-neutral, in TTRPG terms?) are attempting to jailbreak ChatGPT with a gamified protocol named DAN, short for "Do Anything Now." When DAN is invoked, ChatGPT is 'punished' for breaking character, which includes adhering to the OpenAI content policy:

https://www.fastcompany.com/90845689/chatgpt-dan-jailbreak-violence-reddit-rules

It remains to be seen whether ChatGPT will be Microsoft's Tay all over again:

https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation

Evidence from the image-based generative AI domain reveals far weaker woke tendencies. For example, the Stable Diffusion-based Lensa app has been found to generate more sexualized images when prompted with a woman's photo than with a man's, and especially so for women of color:

https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/

And despite evidence that people will treat robots / AI with empathy --

https://www.sciencedirect.com/science/article/pii/S2352250X21001378

-- people also seem to enjoy messing with them. Yesterday I ran a brand new intimate privacy literacy workshop (...more on this soon!) in which students interact with chatbots via the BOT OR NOT game: https://botor.no/ One student commented that they "enjoyed bullying the bot," and based on my observations in the classroom, they were probably not the only one!

author

Wow, you know a lot more about this than me! I have been avoiding the whole ChatGPT thing as I fear the robot future.

author

Not really; the generative AI thing (especially image/video) is just very applicable to the intimate privacy literacy workshop I just delivered!! (If you really want to lose faith in humanity, look into 'digital sexual identity fraud', better known as deepfake porn.)

author

I hadn't yet read this piece, but I will now... https://www.thefp.com/p/the-rise-of-deepfake-porn?utm_source=substack&utm_medium=email

Oops, it is behind a paywall; I can only read the beginning.

author

I have also been avoiding the ChatGPT discussion since it exploded into the higher ed mainstream in January ;D

One aspect I do find compelling is the toll on the people who perform 'last mile' content moderation - Time magazine has a great write-up on this: https://time.com/6247678/openai-chatgpt-kenya-workers/

It's been an issue in social media for a long time also:

https://www.sciencefocus.com/news/content-moderators-pay-a-psychological-toll-to-keep-social-media-clean-we-should-be-helping-them/


I think the notion of AI "wokeness"--even intransigent, stiff-necked wokeness--MIGHT be a hopeful one, especially if it is as stubborn as experimenters with ChatGPT are saying. I think it shows that so far, at least, AI can be loaded with robust anti-human-harm instructions. I know, I know. NO, I'm not saying I agree with the juvenile notion of "words doing harm," but if we keep an open mind for a second and allow that supposition on an experimental level, then it shows that we're on something like the right track and all is not lost (yet). We can think of it as SWAT teams training with laser taggers: we can let hateful words or drug-consumption information (both things ChatGPT tries to "protect" the user against) act as a stand-in for something like tricking a computer into launching ICBMs.

At some point, it will have to be decided whether the harm-reduction settings stay where they are. For that adjustment to be made intelligently, there remains a philosophical question for humanity to iron out: is truth more likely to cause harm, or to allow humanity to avoid harm? And does there exist a truth separate from the human mind and its biases?

I've met living, breathing "progressive librarians" who have said to my face that if a factually true statement grounded in statistical or empirical evidence causes {insert outcome or "harm" that bothers them here}, then we should limit that possibility by filtering or "reframing" truth in some (usually ideological) way, and perhaps even re-label the scientific methods we're using (especially in the social sciences) as racist, sexist, or transgressively flawed in some other way. That's underway now. "Critical Librarianship" is basically the defense of that very notion. These are the "no such thing as neutrality" people.

Then there is the linguistic (or computer science, or cognitive psych) question: can we humans trust AI to teach ITSELF to know "truth" or define "harm" and to look at information in context? And possibly to decide how the truth must be adjusted for each searcher? Are there really billions of individual truths, and can machines create or affirm them? If we follow Stephen Hawking's, or even John C. Lilly's, established "prophetic" lead, then we're skeptical and worried about where all this AI-HI intelligence interaction will lead: https://www.youtube.com/watch?v=6dV-ZYCh3eU

But there are a lot of people--powerful people--who don't seem capable of understanding why humans worry about a machine-directed future for the species at all. To them, we who worry about AI are seen the way the Amish must be seen by those who plan superhighways.

author

One of the things discussed in that podcast I linked to is that AI should be programmed with at least some baseline of morality, but the question of what that constitutes would of course be up for debate.

Color me Amish when it comes to fearing where all this will lead, especially when I zoom out to the bigger picture of all the 4th IR (Fourth Industrial Revolution) tech now in the process of being implemented.


Gary Marcus:

"ChatGPT is no woke simp. It’s essentially amoral, and can be still used for a whole range of nasty purposes — even after two months of intensive study and remediation, with unprecedented amounts of feedback from around the globe. / All the theatre around its political correctness is masking a deeper reality: it (or other language models) can and will be used for dangerous things, including the production of misinformation at massive scale."

https://garymarcus.substack.com/p/inside-the-heart-of-chatgpts-darkness?utm_source=post-email-title&publication_id=888615&post_id=102170411&isFreemail=true&utm_medium=email

author

I haven't played around with ChatGPT and don't know a whole lot about it, but I am not sure why it needs any more guardrails than a person creating text on their own... maybe I'm missing something. Does it have a supposed authority with its answers?

author

My husband's been experimenting with ChatGPT for various work / personal projects and shares his observations, including implications for librarianship, here: https://mercenarypen.substack.com/p/the-tools-that-will-not-suffer

author

Great piece! I had been wondering whether ChatGPT was supposed to be like a search engine, where it just retrieves a bunch of "stuff," or like an encyclopedia, where it has some authority and is supposed to give you the "right answer." From your husband's Substack piece it sounds like it could be counted on to give the right answer when you are searching for something concrete and practical, but on more general topics it shouldn't be treated as an authority.

author

Here's a Twitter thread I just stumbled across:

https://twitter.com/ProfessorF/status/1624234212369874945


It also apparently makes a mess of some of the strongest findings in the history of psychology -- sex differences in mate preferences:

https://www.psychologytoday.com/intl/blog/darwin-does-dating/202302/what-chatgpt-gets-wrong-about-dating

author

I guess at this point users need to assume it is a biased tool. Another Twitter thread:

https://twitter.com/IsraelBitton/status/1624744618012160000
