Brainiac On Banjo: Dues For Artificial Intelligence

“And now you dare to look me in the eye. Those crocodile tears are what you cry. It’s a genuine problem, you won’t try to work it out at all, you just pass it by.” Substitute, written by Pete Townshend

Image created by Jay Vollmar for The Washington Post

I’m about to ask a serious question that should, and eventually will, become central to the artificial intelligence story. It has to do with the conflation of reality and the effluvia of computer-created content.

First, I need to report the backstory that generated my concerns. It’s a tough story revolving around one of the societal taboos that most certainly should be taboo — but it’s not the actions of the perpetrator with which I take issue. This is a closed case: the criminal pleaded guilty and was sentenced.

This is a discussion topic, not an analysis of disgusting acts that the defendant says he committed. I’m discussing a point that rests at a legal and a moral juncture, at least in my mind. Here’s the news story, as reported in The Guardian last Friday.

CONTENT WARNING – A text version of a news report concerning images of child abuse follows.

A Tasmanian man has been jailed for at least 10 months after police found hundreds of files depicting child abuse – including content generated using artificial intelligence – on his computer.

The 48-year-old Gravelly Beach man was jailed for two years, with a non-parole period of 10 months, in the supreme court in Tasmania on Tuesday.

Police raided his home in the state’s Tamar Valley region in May and found hundreds of files depicting child abuse on his computer.

A significant amount of it was generated using artificial intelligence, marking the first time police had located and seized AI-generated child abuse material in Tasmania, the Australian federal police (AFP) said on Saturday.

The raids came after multiple reports from the US National Center for Missing and Exploited Children about an Australian downloading child abuse material from a website and social media platform…

The question of sanctioning child pornography is not at issue here, and besides, as I noted, the prisoner pleaded guilty. The above excerpt from The Guardian states a “significant amount” of the child pornography was generated using artificial intelligence. Artificial intelligence is, by definition, not real: it’s a sham. As The Who told us decades ago, it’s “a substitute for the real guy.” Pornography adapts to whatever new medium comes along. Porn produced with artificial intelligence is still porn, for good and/or for bad.

But… is AI-generated child pornography a crime?

Not to be glib, but no children are harmed in the production of AI kiddie porn. You might perceive that anybody who creates child pornography is, well, let’s say greatly disturbed, and we should be wary of such people, but should the production and/or ownership of that material be a crime?

If it is a crime, is artwork created by humans that depicts naked children also child pornography? You’ve got to draw some very wavy lines to navigate through that morass. Cherubs and all sorts of cute little naked babies have been depicted in all sorts of media since Gutenberg learned how to read backwards. But is AI-generated sick fantasy actually kiddie porn? Which children’s naughty bits are not allowed? Why would it be okay to publish them in medical textbooks or religious publications, but not okay to possess them in your library? This is tricky.

Let me offer an example that is much closer to the hearts and minds of the usual readers of Brainiac On Banjo. In Superman: The Movie (1978), baby Kal-El crash-lands on Earth, his rocket ship (or whatever that thing was) pops open, and an unclothed proto-Clark Kent stands up, revealing that the only package he took with him into space was the one between his legs. Is that not child pornography? The photo of nude Kal is right here; it’s not AI, it’s real, and now it is on your computer. (You’re welcome.) Quantity aside, how is that different from the behavior of the incarcerated Tasmanian pervert?

Given the current and growing actions of the Christian Nationalists and other religious fundamentalists here in America, this is a real issue. Our laws tell us what is acceptable, but things get messy when you get into court and try to establish purpose as exculpatory. It gets messier still on appeal. And when it hits the Supreme Court, well, given the nature of that body and the majority of its wardens, considerations of “purpose,” “context” and “applied use” only interfere with their uncompromising lust for totalitarian religious subjugation.

I don’t yet have a position on the matter. This is very complicated, and we might as well start the discussion now. We cannot un-invent artificial intelligence, and my guess is that the next time you’re on an airplane and the guy in the middle seat is wearing Apple Vision Pro goggles, you’re going to wonder what he’s looking at.

Again with The Who lyric: “It’s a genuine problem, you won’t try to work it out at all, you just pass it by.”

One thought on “Brainiac On Banjo: Dues For Artificial Intelligence”

  1. Yes, AI-generated pornography should be a crime because it DOES harm children. AI learns how to create child porn by viewing huge amounts of actual photos of actual children who were abused. In that way it is different from a human artist, who could conceivably create an image from imagination without ever having seen a photo or video of a child being abused. AI has no imagination. It only has the capacity to process specific things it has “seen” and generate similar images. And children who were actually abused are the basis of that process.

Thoughts?