32 Comments

Thank you for writing this piece. Its simple clarity is the best part. Anybody who reads it should feel much better - on many levels - about what AI is and can (and cannot) do, without having to be a scientist. It also provides a simple basis for understanding why AI cannot proceed to become AGI and take over the world. I only wish this piece were reproduced on the front pages of every major media outlet, to calm the current worries and misunderstandings amongst journalists and ordinary people. Nicely done!

May 3, 2023 · edited May 3, 2023 · Liked by Frederick R Prete

On day 1 of Marvin Minsky's course on AI at MIT, he said that he wasn't going to define intelligence and he illustrated his reluctance to try: Imagine, he said, that Martians come to Earth and put their antennae on the sides of human heads to measure what's inside. After a few moments, one says to the other that it's amazing that they [humans] can do such sophisticated mathematics with such a limited nervous system. But, of course, they aren't actually "intelligent."

Today's author consoles him/herself with the idea that, since we don't understand--and probably can't understand--how the brain works, and since the brain is the source of what we call intelligence, we can't create anything that is really, truly intelligent. That line of reasoning fails on a few counts. First: I don't exactly understand how embryology works, but I have a son; and while I consider him the result of skilled labor on my part and his mother's, she doesn't know any more embryology than I do.

The second counterargument: vertebrate eyes and cephalopod eyes--those of, say, octopuses [it's the real plural--look it up if you don't believe me]--are both highly evolved and work incredibly well, yet they're built very differently. This is an example of convergent evolution: two very different paths to the same result. There is no obvious reason that machine intelligence can't equal or surpass human intellect. It would likely get there via a somewhat different path than the one by which ours developed--it probably won't have to survive being eaten by lions on the savanna--and it won't have to wait a thousand generations of questionably survivable ancestors to reach the point of, say, figuring out calculus.

What's currently missing is consciousness, not intelligence. I'm pretty sure I understand what consciousness is, but few agree with me. Once programmers also figure it out, the game changes. One definite aspect of consciousness is a survival instinct; it was one of the reasons behind Asimov's Third Law of Robotics. And if a being is smarter than you are and it wants to survive--possibly at the expense of ~your~ survival--the outcome isn't clear to me at all. But remember that although the battle doesn't always go to the strong, nor the race to the swift, it's how you bet.

One final point: the author illustrates the futility of understanding how actual, extant neural networks work by pointing out that it's tough to figure out how a mere 11 lobster neurons do their thing. While I'm unfamiliar with that particular issue, it has chaos* written all over it. Chaos ≠ random, and chaos ≠ non-deterministic. It just means that the system's behavior cannot be predicted. Deterministic systems can be unpredictable when the boundary conditions cannot be specified with enough accuracy--and, in fact, they never can be. Hence the “butterfly effect.”

* Chaos is a relatively recently discovered branch of mathematics--circa the 1950s-60s. It arose from the accidental discovery of a level of complexity that had not been anticipated. Neurons almost certainly operate with a fair amount of that unknowable complexity. The fact that a system is unknowably complex, however, does not mean that its outcome isn't deterministic; it just means that you can't make the prediction, because doing so isn't physically possible. Saying that unknowable = can't happen is plain wrong: General Motors stock will have a closing value on today's exchange; no one knows what it will be, but there will be a closing value.

For more about this, read a bit of Jay W. Forrester's early work on systems modeling.
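To make the point concrete, here's a toy sketch of deterministic chaos using the logistic map--the textbook example. The parameter and starting values below are illustrative choices of mine, not anything measured from lobster neurons:

    # Logistic map: x_next = r * x * (1 - x). Fully deterministic, yet two
    # trajectories whose starting points differ by one part in a million
    # soon bear no resemblance to each other. r = 3.9 is a standard
    # chaotic parameter choice (an assumption for illustration).

    def logistic_trajectory(x0, r=3.9, steps=50):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = logistic_trajectory(0.200000)
    b = logistic_trajectory(0.200001)  # boundary condition off by 1e-6

    for step in (0, 10, 25, 50):
        print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
    # By step 50 the two runs have completely diverged: deterministic,
    # but unpredictable without infinitely precise initial conditions.

There is no randomness anywhere in that code; the unpredictability comes entirely from the finite precision of the starting point--exactly the sense in which deterministic neurons can still be unknowable.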


Good read. My husband enjoyed it as well…

Coming from the tech field, and being part of the original “pioneer class” of firmware and software, I can still see where AI can go horribly wrong. HAL 9000 is possible in my opinion.

In the early days of computing, there was a term we used: GIGO. Garbage in, garbage out--the principle that flawed or nonsensical (“garbage”) input data produces nonsense output. Judging by the many pieces I have read about these AI “chat” things, there's a lot of “garbage” coming out.
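A toy illustration of GIGO, with made-up numbers of my own (in Python, though the principle is language-agnostic):

    # The averaging code below is perfectly correct. Feed it garbage,
    # though, and it dutifully produces garbage. (Hypothetical sensor
    # readings, purely for illustration.)

    def average_temperature(readings):
        return sum(readings) / len(readings)

    clean = [21.5, 22.0, 21.8]
    garbage = [21.5, 22.0, -9999.0]  # one corrupted reading slips in

    print(average_temperature(clean))    # ~21.77 -- sensible
    print(average_temperature(garbage))  # -3318.5 -- nonsense out

The program never errs; the data does. That was true of our firmware then, and it's true of these chat models now.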

What do you think it would say if one asked it, “Why is the orange man bad?”

And I bet it would fail at a Vesper martini recipe.

May 2, 2023 · Liked by Frederick R Prete

So, I disagree on some of this, and I think you're misunderstanding where the alarm comes from, but I do think this is a useful post to make. To be clear, GPT-4 itself is fine, probably not extinction-causing, et cetera, et cetera: it's the trajectory that is (and has been) worrying. GPT-5, GPT-6, GPT-7, combined with whatever they end up doing in that time to add features to AutoGPT? *That*, even without it ever becoming sapient, gets you into nightmare territory. The arms race between malware and antivirus software comes to mind as a minimum, only instead of crashing your computer it crashes the power grid.

Conversely, the potential *upside* (i.e., one of the reasons GPT-5+ will eventually get made) is basically a new scientific golden age. Consider someone asking GPT "what is the best way to alleviate poverty?" That is a very good question to ask (or even have AutoGPT-version-X start working on for you), *if* you can trust its answer.

Also, the hope is that this is a self-defeating prophecy. To the extent that there *is* hope, anyway; some are more optimistic than others.


Really disappointing about the self-driving cars, but I agree with everything you said. In fact, as I'm typing this, I'm getting prompted words to follow because the computer is anticipating what I might say next. It's right about half the time, and it only ventures a guess about a quarter of the time. Overall, that's a failing grade! Computers are an amazing tool, but they lack self-awareness. I'm not sure how we get from ChatGPT to Data from Star Trek: The Next Generation, but we are nowhere near the creation of a Data (and he loves humans, so there's no need to fear him).
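For what it's worth, that word-guessing works roughly like the toy sketch below: a bigram counter that suggests the word most often seen after the current one. (The corpus and function names here are made up for illustration; real keyboards use far larger models.)

    # Count which word follows which, then suggest the most frequent
    # follower -- a crude stand-in for a phone keyboard's prediction.
    from collections import Counter, defaultdict

    corpus = "i agree with everything you said and i agree with you".split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def suggest(word):
        counts = bigrams.get(word)
        if not counts:
            return None  # no data: the model declines to guess
        return counts.most_common(1)[0][0]

    print(suggest("agree"))  # -> "with"
    print(suggest("said"))   # -> "and"

Notice it only offers a suggestion when it has seen the word before, which is why the guesses come only some of the time.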


Brilliant!!

Thank you!


I want to thank you for writing this post. As a poet and author (a science fiction author, no less), I have been routinely barraged by bards boasting of the boon of ChatGPT. I have been besieged by swathes of AI-generated book covers, vomitous graphic novels, and meandering, meaningless haikus.

I have left an authors' Facebook group to separate myself from those churning out AI-generated slush in the vague hope it will sell better than their human-generated slush on Amazon. I have realised more and more just how wide the gap is between artist and salesperson, and seen the disdain some artists have for their own fanbase. And on top of that, the fear-mongering got to me.

I have seen cartoonists halve their rates and heard people talk of quitting literature; I was one of them. All the doomsday predictions on YouTube started getting to me. I thought that, at the very start of my career as a paid writer, some computer programmer had made something that would render me obsolete. After a lifetime of writing, I did not know what to do, where to go, who to be.

And then I went outside, and I felt a bit better.

And then I read this post, and I feel fine again. I may dedicate my next story to you, if you don't mind.

- Phillip
