32 Comments

Thank you for writing this piece. Its simple clarity is the best part. Anybody who reads it should feel much better - on many levels - about what AI is and can (and cannot) do without having to be a scientist. It also provides a simple basis for understanding why AI cannot proceed to become AGI and take over the world. I only wish this piece were reproduced on the front pages of every major media outlet, to calm the current worries and misunderstandings amongst journalists and ordinary people. Nicely done!

author

Thank you… that was very kind of you!

May 3, 2023 · edited May 3, 2023 · Liked by Frederick R Prete

On day 1 of Marvin Minsky's course on AI at MIT, he said that he wasn't going to define intelligence and he illustrated his reluctance to try: Imagine, he said, that Martians come to Earth and put their antennae on the sides of human heads to measure what's inside. After a few moments, one says to the other that it's amazing that they [humans] can do such sophisticated mathematics with such a limited nervous system. But, of course, they aren't actually "intelligent."

Today's author consoles him/herself with the idea that, since we don't understand--and probably can't understand--how the brain works, and since it's the source of what we call intelligence, we can't create anything that is really, truly intelligent. That line of reasoning fails on a few counts. I don't exactly understand how embryology works, but I have a son, and while I consider that he was the result of skilled labor on my and his mother's part, she doesn't know any more embryology than I do.

A second counterargument: vertebrate eyes and the eyes of marine invertebrates, especially of, say, octopuses [it's the real plural--look it up if you don't believe me], are both extremely evolved and work incredibly well, but they're very different. This is an example of convergent evolution: two very different paths to the same result. There is no obvious reason that machine intelligence can't equal or surpass human intellect. It would likely get there via a somewhat different path than the one by which ours developed--it probably won't have to survive being eaten by lions on the savanna--and it doesn't have to wait for a thousand generations of questionably survivable ancestors to reach the point of, say, figuring out calculus.

What's currently missing is consciousness, not intelligence. I'm pretty sure I understand what consciousness is, but few agree with me. Once programmers also figure it out, the game changes. One definite aspect of consciousness is a survival instinct. It was one of the reasons behind Asimov's Third Law of robotics. And if a being is smarter than you are and it wants to survive--possibly at the expense of ~your~ survival--the outcome isn't clear to me at all. But remember that although the battle doesn't always go to the strong, nor the race to the swift, it's how you bet.

One final point: the author illustrates the futility of understanding how actual extant neural networks work by pointing out that it's tough to figure out how a mere 11 lobster neurons do their thing. While I'm unfamiliar with the issue, it has chaos* written all over it. Chaos ≠ random, and it ≠ non-deterministic. It just means that it cannot be predicted. Deterministic things might be unpredictable if the boundary conditions cannot be specified with enough accuracy. And, in fact, they never can be. Hence the "butterfly effect."

* Chaos is a relatively recently discovered branch of mathematics--circa the 1950s-60s. It arose from an accidental discovery of a level of complexity that had not been anticipated before. Neurons almost certainly operate with a fair amount of that unknowable complexity. The fact that it's unknowably complex, however, does not mean that the outcome isn't deterministic. It just means that you can't make the prediction because it isn't physically possible. Saying that unknowable = can't happen is plain wrong: General Motors stock will have a closing value on today's exchange; no one knows what it will be, but there will be a closing value.

For more about this, read a bit of Jay W. Forrester's early work on systems modeling.


You don't need to know how embryology works to reproduce. But you cannot simply give birth to AI as if by accident. Even human beings are a billion years in the making.


Well, yeah. But now that AI has been created, its evolution begins. I don't know how long it took early protobacteria to divide, but most current ones do it roughly twice/hour, so that's nearly 50 generations/day: they do in a day what humans take at minimum half a millennium to do. Think about that. Now consider that even the slowest computers make bacteria look like glaciers.
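The generation arithmetic above works out in a few lines. The two-divisions-per-hour figure is from the comment; the 20-year human generation span is an assumption added for illustration:

```python
# Rough generation arithmetic. The 2 divisions/hour figure is the comment's;
# the 20-year human generation span is an assumption for illustration.
divisions_per_hour = 2
generations_per_day = divisions_per_hour * 24          # "nearly 50" per day

human_generation_years = 20
equivalent_human_years = generations_per_day * human_generation_years

print(generations_per_day)     # 48
print(equivalent_human_years)  # 960 -- comfortably over half a millennium
```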

AI didn't happen by accident. Not by a long shot. In fact, the idea is at least a century and a half old: Ada Lovelace [Lord Byron's daughter and only legitimate child] was the first I know of to posit the idea that machines could do very complex tasks, even to the point of thinking. Not a billion years, but over a century and a half. We hit the knee [d/dt = 1] of the exponential curve recently. The outcome isn't easily predictable, but given how governments and the media work, there are definitely scary possible scenarios.

author

I do agree with your point about governments and the media. That is a concern. Thanks for the reply!

author

good point.

author

Thanks for the thoughtful and thought-provoking response. I really appreciate it. You made several interesting (and very good) points.

My main point is that while an artificial intelligence system can do things that mimic what appears to be "intelligence" at the surface (output) level, that does not mean that they are doing the same things in the same ways that natural systems do. It's important to make that distinction. (Few do.) Otherwise, people will treat artificial intelligence as if it is something that it clearly is not (i.e., intelligent). That is, if we see the systems as "intelligent", we would be more likely to take their advice at face value, correct? Conversely, if we think that people's brains work the way computers (or software) work, we will be misguided in how we deal with biological systems like our spouses, our partners, our children, and our pets. They are different. I see this mistake played out every day in how people treat each other based on complete misunderstandings about how brains create behavior. Brains are not computer programs, and they don't do things like computer programs.

Also, I'm not saying that it's not theoretically possible (at some time in the future) to understand how brains work. I didn't mean to give the impression that we might never know. We just don't have the technology to get down to the details at this moment. Again, I see this misunderstanding played out every day when people (i.e., scientists) try to explain behavior by looking at a CAT scan, for instance, or an fMRI. We have to understand the limitations (especially the predictive and explanatory limitations) of these technologies. Although, some mathematicians and physicists would argue that non-ergodic processes simply can't be thoroughly understood in terms of the analytical tools we now possess.

I do agree with you that non-ergodic systems are unpredictable. But, you may be using the term "deterministic" tautologically... in which case I would agree with you. Although, I'm not sure what you mean by the "butterfly" effect.

Your point about convergent evolution is interesting and thought-provoking. I need to think about that one for a while. However, I am agnostic about the consciousness issue. Even if the software said "Hey, I'm conscious!", we would have no way of knowing. And, I'm still wrestling with exactly what the word consciousness means and what the neural (information processing) underpinnings of it actually are.

My final point is one that is little addressed in the AI debate. It's the issue having to do with the biological instantiation of what we call "intelligence." It is an emergent property of biological systems with epistemological and evolutionary histories. I don't see that being duplicated artificially. However, perhaps its output could be convincingly imitated. But, there is a universe of difference between those two. Many people take issue with my point of view regarding AI — and other people's points of view — and the debate is fascinating. But, usually, the biology is left out. I think it needs to be considered.

Thanks for giving me a lot to think about. I do appreciate it. Sincerely, Frederick

May 5, 2023 · Liked by Frederick R Prete

A few thoughts about your comments...I've been thinking about the brain all my life, and especially for my career: I retired from doing neurointerventional surgery in 2013, so that's pretty much the world I've inhabited for most of my professional life. Before I went to medical school, I graduated from MIT in chemistry, so I got exposed to that environment during formative years. But even before then, when I was an early teenager, I wondered about a lot of things, among them how you get from a blob of matter to free will and consciousness. It took me the better part of 25 years to get answers that make sense to me, but I'm nothing if not persistent.

First, to free will... Just about no one doubts that he has it. We decide to do something, and we do it, or conversely, ~not~ to do something, and so we don't. But where does this being that I call "me" live in the brain? It's not like the being in Men In Black operating the levers of the alien's body. CT and MR scanning--and every other imaging modality up to and including brain autopsies--shows that that's plainly not what's happening. We sleep. We dream. We have zero control over those brain activities. They just...happen. We awaken and somehow our brain function sorta knits itself back into the real material (and mental/emotional) world and we go along in a more or less rational way until we get sleepy and return to a dream state.

(Parenthetically, I also wondered how we know that we aren't dreaming. Short answer: we can't know that. All we can KNOW is the Cartesian conclusion: since I'm thinking, I am. But I digress.)

Anyway, the brain's a busy thing, and it's been busy since before birth. At birth is when the real action starts to happen, though. Consider a newborn: it moves randomly, cries, eats, sleeps, and excretes. Not much of a repertoire of actions, but that's all it's got. Over a few weeks to months it discovers things, like being able to move an arm and a leg. And how to turn over in bed. Instead of just waiting around for a bottle or breast to appear, it can reach out for it with more or less--but increasing--dexterity. Bambino might associate a food source with alleviation of a feeling that happens to be hunger, but it only knows that association exists; it doesn't understand it beyond experiencing hunger and slaking it. As it gets a little older, movement gets more refined, appetites become more complex, but the arm still goes out to get food and brings it to the mouth so ingestion can happen.

I have yet to meet a normal person who consciously wills all these actions while eating. We just do it. If anyone asked you, you'd say that you chose to pick up your fork and knife, slice off a piece of steak or cauliflower, put it into your mouth, chew it, and swallow it. It's not something you dissect into pieces; you're just aware of the aggregate behavior, which you generally want to do. And you have the idea that you've chosen it.

I'd argue that the brain is doing most of these actions in the background. You become aware of an action that's about to happen just before it happens. And well over 99% of the time, that's exactly what does happen. Since you nearly always* experience the thought immediately before the action, there is an overwhelming sense that the thought ~caused~ the action. But in reality, it probably didn't: they're epiphenomenal, meaning that they're both effects of the same cause.

So, free will? Probably not. The real question isn't whether it exists; it's why we believe we have it when we can't possibly.

What's consciousness? Like time, we know it, but it's really tough to define. [Ever try to define the word "time"? If you haven't, you should.] Assuming you have normal, stereoscopic vision, do the following experiment: cover one eye and look around for maybe a minute. You see the world in 3D, but it's a sort of flat 3D. Perceptual psychologists call this "cyclopic depth perception," and they've done a lot of research on it. There are many cues that tell us about where things are in 3-space, but they can fool you. A fatter pencil will seem to be closer to you than a thinner one. Something that appears to be obscuring part of something else will seem to be closer. But knowing that, you can rig the system so that you put an actually fat pencil farther away than an actually thin one. Your brain assumes that they're the same size in reality and does the calculation.

After that minute, uncover the covered eye and look around again. The stereoscopic 3D world virtually shimmers with depth. Keep this idea in mind.
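The size-distance inference described with the two pencils can be sketched numerically. This is a minimal illustration, not a model of the visual system; the pencil widths and viewing distances are invented values:

```python
import math

# If the brain assumes two pencils are physically the same width, the one
# subtending the larger visual angle is judged closer -- which can be rigged
# to be wrong. Widths (meters) and distances are invented illustration values.
def visual_angle(width_m: float, distance_m: float) -> float:
    """Angle (radians) subtended at the eye by an object of a given width."""
    return 2 * math.atan(width_m / (2 * distance_m))

fat_but_far = visual_angle(0.012, 0.50)    # fat pencil, half a meter away
thin_but_near = visual_angle(0.007, 0.35)  # thin pencil, actually closer

# The fat pencil subtends the larger angle, so it is (wrongly) judged nearer.
print(fat_but_far > thin_but_near)  # True
```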

The brain does things over a very wide range of speeds. Visual and sound processing happens at unimaginably high FLOP [floating point operations] rates in order for us to perceive a smoothly continuous world. Everything has to happen faster than dream rates, and it does. At the other end of the speed spectrum are things like deep emotions, deeper thoughts and problems, and the like. These are things that might take years, even decades to resolve. And everything else happens at intermediate speeds.

To me, consciousness is the brain's experience of the range of interactions with the world, both the internal and external worlds, simultaneously. It's the experiential version of stereoscopic depth perception applied to all your experiences and senses.

And this tells us how to make computers conscious.

Back in the '60s at MIT, someone wrote a program called "Doctor," which you could interact with from a terminal. It would have a realtime written conversation with you. And at first, it could pass Turing's test: it seemed to be an intelligent, gently probing psychologist. I figured out a way to expose its inner workings by repeatedly typing in the same input, only to find that each group of inputs triggered a specific set of responses that were little more than a list. If there were ten entries on the list of responses to, say, asking it how it felt, you'd get those ten responses. On the 11th ask, you'd get the first one again, and so on. It quickly became no more than a "magic 8-ball." https://magic-8ball.com
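The list-cycling behavior the repeated-input experiment exposed can be sketched in a few lines. This is a hypothetical illustration, not the original Doctor code; the trigger phrase and canned replies are invented:

```python
import itertools

# Minimal sketch of the canned-response behavior described above: each
# trigger phrase maps to a fixed list of replies, cycled in order.
RESPONSES = {
    "how do you feel": itertools.cycle([
        "Why do you ask how I feel?",
        "Let's talk about your feelings instead.",
        "Does my state of mind concern you?",
    ]),
}

def reply(prompt: str) -> str:
    for trigger, canned in RESPONSES.items():
        if trigger in prompt.lower():
            return next(canned)
    return "Tell me more."

# Asking the same thing repeatedly walks the list, then wraps around --
# the "magic 8-ball" behavior.
first = reply("How do you feel?")
reply("How do you feel?")
reply("How do you feel?")
print(reply("How do you feel?") == first)  # True: the list has wrapped
```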

So, you say, see? I told you computers couldn't be intelligent. Wait, I say: try the same thing with a human. Most won't last as long as the computer. Most people aren't particularly intelligent either. I see nothing especially holy about human thought, such as it is. AI scares the cookies out of me, but only because it's likely to be more competent than average human thought. When AIs figure out that the solution to a lot of problems is that there are too many people, the outcome isn't going to be pretty.

Lastly, the "butterfly effect." Mathematical chaos has shown that infinitesimally tiny differences between two initial conditions can result in widely divergent downstream consequences after even short time periods. If you're unfamiliar with this, I recommend James Gleick's highly readable and immensely educational book, Chaos: Making a New Science. When I read it, there were parts that I flat didn't believe. They made no sense at all. But I wrote a little code myself to debunk it and found that everything in it is exactly correct. It's very sobering.
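The sensitive-dependence claim can be reproduced in a few lines. The logistic map and the specific constants here are a standard textbook example, not the commenter's original code:

```python
# Sensitive dependence on initial conditions, sketched with the logistic map
# x -> r*x*(1-x) in its chaotic regime (r = 4).
def orbit(x: float, r: float = 4.0, steps: int = 50) -> float:
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = orbit(0.2)
b = orbit(0.2 + 1e-9)  # starting point differs by one part in a billion

# After 50 steps the two trajectories typically bear no resemblance to each
# other, even though every step is perfectly deterministic.
print(abs(a - b))
```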

JH

* Why just almost 100%? There have been a few times in my life, usually when playing table tennis, when my body made shots before I was aware what it was doing. And they were always great shots! I admired what it had done, but they hadn't been planned consciously. Similarly, in difficult surgeries, sometimes solutions to highly complex problems appeared to me unbidden.

Emotions are, pretty much by definition, irrational things. They just happen. You can't choose them. No one chooses to love someone or to hate them; those are the brain's responses to stimuli that are either intensely pleasant or unpleasant. In the former instances, perhaps helped along with the secretion of a bit of oxytocin.

author
May 5, 2023 · edited May 5, 2023 · Author

This was very enjoyable and thought-provoking, indeed. Thank you very much for your thoughts (and your time). I fundamentally agree with you on most points; I do have some (small) differences of opinion as to the way that cognition and behavior develop in people... You seem to be a bit more of a behaviorist than I am. But that's neither here nor there. As I said, we're in fundamental agreement. Regarding the issue of free will and consciousness, I'm agnostic on many points. However, I can't help thinking that if a question has gone unsolved for millennia, maybe it's the wrong question. I've come to think that "free will versus no free will" is a false dichotomy (like the nature-nurture question). But, from a neurobiological point of view, I'm still struggling to crystallize my thinking. Likewise, with consciousness. When people talk about consciousness, they generally mean reflexive self-consciousness. That's the more difficult issue to think about neurobiologically, and I'm still trying to crystallize my thoughts on that issue, too.

I certainly understand your point of view on emotions. I don't know that I would categorize them as irrational… Again, rational/irrational don't seem like categories of thought that have much external validity. I think they are great metaphors, but much of our thinking is not "rational" in the classic sense. And, I think that the term begs the question.

I understand the butterfly effect within the context of your comments. Thanks for clarifying what you meant. I heard it originally in a philosophy course long ago in which the professor claimed that by throwing a piece of chalk over his shoulder, he changed the shoreline in China. A bit of hyperbole, don't you think? I do, however, understand your use of the term and agree with it.

I'm also not sure precisely what I think about the relationship between chaos and non-ergodic systems as they apply to natural neural systems like brains.... I'm still thinking about that one, too.

In any event, I would enjoy continuing the conversation. Thanks for sharing your thoughts!

May 5, 2023 · Liked by Frederick R Prete

Grab a copy of Isaac Asimov's novel, The End of Eternity, for more about the butterfly effect. In it, "Technicians" are people who travel through/outside of time and make minuscule changes at earlier times that effect wholesale changes later. A change might be something as mundane as putting a brush on a countertop in a slightly different position than the one in which they found it. Hyperbolic, to be sure, but it's been said that while "eureka!" is a famous exclamation after a success, a lot more scientific insights have come after someone said, "hmm...that's odd."

Your prof couldn't make the prediction that he'd change the shoreline of China (with any assurance) by throwing the chalk over his shoulder, but that's not the same as saying that it wouldn't happen. It might even yet. When the chalk struck the floor, its vibrations might have been the straw that broke the camel's back and triggered an underwater earthquake. The resultant tsunami could easily have changed a coastline. Anywhere. Unlikely in the extreme, but entirely possible.

author

I understand your argument. I just think that the laws of physics get in the way of the chalk example...


Think about it from a Heisenbergian perspective: you never know where an electron is exactly. And looking at the probability density function, it can be almost anywhere in the universe, albeit most locations have vanishingly small probabilities. So, it's reasonable to think that sooner or later, an event, perhaps incredibly minuscule, will cause an earthquake to happen. No telling how small, or when, or where it might be....


Good read. My husband enjoyed it as well…

Coming from the tech field, and being part of the original “pioneer class” of firmware and software, I can still see where AI can go horribly wrong. HAL 9000 is possible in my opinion.

In the early days of computing, there was a term we used: GIGO. Garbage in, garbage out: flawed or nonsense (garbage) input data produces nonsense output. With as many pieces as I have read about these AI "CHAT" "things", there's a lot of "garbage" out.

What do you think it would say if one asked it “why is the orange man bad?”.

And I bet it would fail at a Vesper martini recipe.

author

You may be right about the martini thing. I'd rather have a martini with you and your husband than with a robot. I agree that it could all go south. Computers are tools, like hammers. You can build a beautiful house with a hammer, or you can whack someone on the head with it. That's a people issue. In the end, though, it's not going to go south because it spontaneously morphs into some kind of human-like intelligence. It needs to be used wisely, but we can't confuse it with "intelligence" unless we want to change the definition of the word. Enjoy that martini responsibly!

May 2, 2023 · Liked by Frederick R Prete

So, I disagree on some of this, and I think you're misunderstanding where the alarm comes from, but I do think this is a useful post to make. To be clear, GPT-4 itself is fine, probably not extinction-causing, et cetera, et cetera: it's the trajectory that is (and has been) worrying. GPT-5, GPT-6, GPT-7, combined with whatever they end up doing in that time to add features to AutoGPT? *That*, even without it ever becoming sapient, gets you into nightmare territory. The arms race between malware and antivirus software comes to mind as a minimum, only instead of crashing your computer it crashes the power grid. Conversely, the potential *upside* (i.e., one of the reasons GPT-5+ will eventually get made) is basically "new scientific golden age" - consider someone asking GPT "what is the best way to alleviate poverty?": this is a very good question to ask (or even have AutoGPT-version-X start doing for you), *if* you can trust its answer.

Also, the hope is that this is a self-defeating prophecy. To the extent that there *is* hope, anyway; some are more optimistic than others.

author

Actually, I agree with you. It's a tool. It needs to be used wisely. However, I think it's a mistake to confuse what it's doing with what natural systems do (like you or me). It did come up with a good recipe for that Old-Fashioned, however. Now, if it could only go into the kitchen and make it for me. Thanks for the comment!

May 4, 2023 · Liked by Frederick R Prete

Sure; submarines don't swim like dolphins, and helicopters fly very differently than hummingbirds. AI solves problems differently than biological intelligences do, at least in part because it has very different senses and priorities (consider what senses your stomach has; it's blind and deaf, but still distinguishes between stimuli and reacts accordingly). Also in part because we haven't tried to hook it up to the robots (yet) - so far, drones still have human pilots in the loop somewhere.

author

Points well taken... I just want to be careful not to confuse my submarine with my dolphin, or with another human being... Remember what happened to John Lilly and Margaret

https://everythingisbiology.substack.com/p/dont-have-sex-with-your-dolphin-even


Really disappointing about the self-driving cars, but I agree with everything you said. In fact, as I'm typing this, I get those prompted words to follow because the computer is anticipating what I might say next. It's right about half the time, and it only guesses about a quarter of the time. Overall, that's a failing grade! Computers are an amazing tool, but they lack self-awareness. I'm not sure how we go from ChatGPT to Data from Star Trek: The Next Generation, but we are nowhere near the creation of a Data (and he loves humans so there's no need to fear him).

author

I agree completely!


Brilliant!!

Thank you!


I want to thank you for writing this post. As a poet and author (a science fiction author nonetheless) I have been routinely barraged by bards boasting of the boon of ChatGPT. I have been besieged by swathes of AI-generated book covers, vomitous graphic novels, and meandering, meaningless haikus.

I have left an author's Facebook group to separate myself from those churning out AI-generated slush in the vague hope it will sell better than their human-generated slush on Amazon. I have realised more and more just how wide the gap is between artist and salesperson, and seen the disdain some artists have for their own fanbase. And on top of that, the fear mongering got to me.

I have seen cartoonists halve their artist's rates, heard people talk of quitting literature. I was one of them. All the doomsday predictions on YouTube started getting to me. I thought that, at the very start of my career as a paid writer, some computer programmer had made something that would render me obsolete. After a lifetime of writing, I did not know what to do, where to go, who to be.

And then I went outside, and I felt a bit better.

And then I read this post, and I feel fine again. I may dedicate my next story to you, if you don't mind.

- Phillip

author

Well, it would be a privilege to be a character in one of your stories. Trust me, my friend, AI will not replace you or your creative voice. But, as you said, it can generate a lot of bland slush. The only drawback to that is that many people can't distinguish between good, creative writing and slush. Now that I think of it, maybe ChatGPT will only replace the bad writers! The good writers will be safe...


A character in a story. Now that's a good idea. I would love to. I'm writing about a shapeshifter at the moment and whilst I know it has next to no basis in biological reality, I am trying my best to make it realistic by seeing if we have anything close in this universe.

And I have to agree, many people can't discern between good and bland. I have thought a lot about the kind of people who have been gleefully replacing their own work with AI churnalism (I may be expanding the meaning of that word, but it is woefully underused). Many of them place an emphasis on quantity over quality, on getting Amazon's algorithm to notice them by the sheer volume they can pump out, or pretend they have pumped out. The result is that their books sell, get less than favourable reviews, and they have to rush to push out another one.

My goal as a writer is for re-reads to be as rewarding as first reads. With that in mind I tend to write like a 3D printer. The shape of a universe or plot appears first, then characters, then relationships, then the skeleton and the physics of the universe, its internal logic. I was briefly consumed by AI panic and I want to thank you again for helping relieve it.

Would you mind if I email you a short story and perhaps ask a biology question?

author

Feel free to email me! I'll do my best with the biology question… I didn't know there'd be a test, so I haven't studied yet.... Does spelling count? LOL


Long time no see. I have since delved more into the AI side of things and it seems there is a definite movement now for readers away from bad, collaged literature. Us weirdos might have our day yet!


I pryde myself on being one of few writers who isn't too fussed if someone can spell or not, so you're safe!

deleted · May 2, 2023 · Liked by Frederick R Prete
Comment deleted
author

Yeah, me too. I agree with the AI people who say we will never have completely autonomous cars without completely closed roadways. And, we've got to get those bicyclists out of the way…


Still waiting for my jet pack….

author

It’s on the Amazon truck. Will be there shortly.

author

Did your jet pack arrive? My son works at Amazon… I can have him track the package.
