Discussion about this post

Phillip Carter:

I want to thank you for writing this post. As a poet and author (a science fiction author, no less) I have been routinely barraged by bards boasting of the boon of ChatGPT. I have been besieged by swathes of AI-generated book covers, vomitous graphic novels, and meandering, meaningless haikus.

I have left an author's Facebook group to separate myself from those churning out AI-generated slush in the vague hope it will sell better on Amazon than their human-generated slush. I have realised more and more just how wide the gap is between artist and salesperson, and seen the disdain some artists have for their own fanbase. And on top of that, the fear-mongering got to me.

I have seen cartoonists halve their rates, and heard people talk of quitting literature. I was one of them. All the doomsday predictions on YouTube started getting to me. I thought that, at the very start of my career as a paid writer, some computer programmer had made something that would render me obsolete. After a lifetime of writing, I did not know what to do, where to go, or who to be.

And then I went outside, and I felt a bit better.

And then I read this post, and I feel fine again. I may dedicate my next story to you, if you don't mind.

- Phillip

Joe Horton:

On day 1 of Marvin Minsky's course on AI at MIT, he said that he wasn't going to define intelligence, and he illustrated his reluctance to try. Imagine, he said, that Martians come to Earth and put their antennae on the sides of human heads to measure what's inside. After a few moments, one says to the other that it's amazing that they [humans] can do such sophisticated mathematics with such a limited nervous system. But, of course, they aren't actually "intelligent."

Today's author consoles himself or herself with the idea that, since we don't understand (and probably can't understand) how the brain works, and since it's the source of what we call intelligence, we can't create anything that is really, truly intelligent. That line of reasoning fails on a few counts. First, I don't exactly understand how embryology works, but I have a son, and while I consider him the result of skilled labor on my and his mother's part, she doesn't know any more embryology than I do.

A second counterargument: vertebrate eyes and the eyes of marine invertebrates, especially, say, octopuses [that's the real plural; look it up if you don't believe me], are both highly evolved and work incredibly well, yet they are very different. This is an example of convergent evolution: two very different paths to the same result. There is no obvious reason that machine intelligence can't equal or surpass human intellect. It would likely get there via a somewhat different path than the one by which ours developed (it probably won't have to survive being eaten by lions on the savanna), and it won't have to wait a thousand generations of questionably survivable ancestors to reach the point of, say, figuring out calculus.

What's currently missing is consciousness, not intelligence. I'm pretty sure I understand what consciousness is, but few agree with me. Once programmers also figure it out, the game changes. One definite aspect of consciousness is a survival instinct; it was one of the reasons behind Asimov's Third Law of Robotics. And if a being is smarter than you are and it wants to survive, possibly at the expense of *your* survival, the outcome isn't clear to me at all. But remember that although the battle doesn't always go to the strong, nor the race to the swift, it's how you bet.

One final point: the author illustrates the futility of understanding how actual extant neural networks work by pointing out that it's tough to figure out how a mere 11 lobster neurons do their thing. While I'm unfamiliar with the issue, it has chaos* written all over it. Chaos ≠ random, and chaos ≠ non-deterministic; it means only that the behavior cannot be predicted far ahead. Deterministic systems can be unpredictable when the boundary conditions cannot be specified with enough accuracy, and in practice they never can be. Hence the "butterfly effect."

* Chaos is a relatively recently discovered branch of mathematics, circa the 1950s-60s. It arose from the accidental discovery of a level of complexity that had not been anticipated. Neurons almost certainly operate with a fair amount of that unknowable complexity. The fact that something is unknowably complex, however, does not mean that its outcome isn't deterministic; it just means that you can't make the prediction, because doing so isn't physically possible. Saying that unknowable = can't happen is plain wrong: General Motors stock will have a closing value on today's exchange; no one knows in advance what it will be, but there will be a closing value.

For more about this, read a bit of Jay W. Forrester's early work on system dynamics modeling.
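The footnote's point, that a deterministic rule can be unpredictable in practice because tiny errors in the starting conditions blow up, is easy to demonstrate. Here is a minimal sketch in Python using the logistic map, a standard textbook example of chaos (my choice of illustration, not anything from the comment; it does not model the lobster neurons themselves):

```python
# The logistic map x -> r*x*(1-x) is fully deterministic, and at
# r = 4.0 it is chaotic: two trajectories whose starting points
# differ by one part in ten billion diverge until they are
# completely uncorrelated -- the "butterfly effect" in miniature.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map `steps` times from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)          # one initial condition...
b = logistic_trajectory(0.3 + 1e-10)  # ...and a near-identical one

for step in (0, 20, 40, 60):
    print(f"step {step:2d}: separation = {abs(a[step] - b[step]):.3e}")
```

Every run produces the same trajectories (the rule is deterministic), yet any uncertainty in the initial value, however small, makes the long-run behavior unpredictable, which is exactly the distinction the footnote draws.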
