ChatGPT-∞ v Human Mind @ Work
Updated: May 9
The current iterations of generative ai do produce approximations of a middle-style, educated human, and Max Tegmark, of MIT, sees signs of reasoning in GPT-4. But what will GPT-7 be? I can't speak to the definition of AGI, but I don't think it's close yet. Still, the canyon between current ai and human consciousness is so vast that there is something I can say: current ai models won't do "human mind" mode, not with a different architecture, not with more iterations, and not with more data.
Nobel laureate Roger Penrose has been criticized, but he brought science closer to objectivity in the creative sense (see my last post). He believes consciousness is connected with the collapse of the wave function. If that's true, then while the brain may be largely computational, the emergence of human consciousness involves something truly quantum, as in non-computational. And in that case, no ramped-up neural network or quantum computer (as currently conceived) would produce human consciousness.
Penrose and anesthesiologist Stuart Hameroff are looking at small structures in the brain that may be capable of non-computational behavior, and that might, hopefully, turn out to help explain human consciousness. In the interview linked above, Penrose answers a question I think is pertinent to ai in relation to consciousness. More generally, the question was whether the consciousness we connect with the organ, the brain, might be possible elsewhere, or whether the structures in the brain are necessary or unique.
Basically, Penrose had already connected the collapse of the wave function with the brain. Hameroff identified structures in the brain that may prove adequately capable of the necessary behavior (microtubules; pyramidal cells). Furthermore, these structures appear in the cerebrum, which is associated with consciousness. The cerebellum, by contrast, is full of neurons, even more than the cerebrum, but it doesn't have these specialized structures.
I'm neither a computer scientist nor a neuroscientist, but on that basis alone I would compare modern computing concepts to the cerebellum rather than the cerebrum. As for Penrose, he said that making consciousness artificially would require a computer that can "orchestrate" the collapse of the wave function, or otherwise operate non-computationally, like the brain does.
Quantum computers utilize quantum mechanics, but they aren't quantum in how they would navigate the ruliad, the object of all possibility. I'm talking way, way over my head there, so I'll just come back to Penrose, who addressed this directly: no, quantum computers, as currently conceived, couldn't be capable of consciousness.
That makes me think more about what Tegmark said about reason. Is reason something apart from consciousness? Or is Tegmark wrong, and ai isn't even really reasoning? That, I can't tell you.
But the human mind is safe. It's really, really safe, as something precious.
I'd like to talk about how this impacts my work. In short, I want my writing students to know, at a fundamental level, that they have value, and that this is what I want to see in their work: their contribution to the conversation, not just reiteration, juxtaposition, or rearranging. There's a gap between Penrose and that idea, and it remains on my mind.
I asked GPT-3.5 to fact-check this, and it pretty much gave me a glossary (bulleted below). I also used it to get rid of typos, but two-thirds of its suggestions were matters of style. I won't say that's a case of chatgpt v human mind. For my previous post, I had asked GPT-4 to grapple with the ideas of Paul Davies; it did a lot of evading, but there was progress to the conversation. It is incredible. But I'm also interested to read, talk, and write more about what the shape and method of the human contribution is, because more and more that will be all we're good at. Here's a thought from computer scientist Lex Fridman on how ai interfaces with computer programming as a profession:
A few other highly interesting stories:
Update on the Moloch race: big tech v. the smaller, open-source ai models ordinary folks are meaningfully tinkering with.
HeyPi is an amazing ai that converses very differently from any ChatGPT iteration so far. (Go to their website and click "meet pi"; only log in if you want to. I have no affiliation; use at your own risk.)
Stephen Wolfram on how simple rules can do complex things, and on making things computational . . . backdrops to ai concepts (he doesn't address open-source models, but those come to mind more so than GPT).
Sal Khan, of Khan Academy, on the idea of creating ai tutors.
GPT-3.5's "glossary" of sorts (I had actually asked it for a fact check on a first draft of this):
- Max Tegmark is a physicist and cosmologist at MIT, and he has written about the possibility of AI becoming conscious and reasoning in his book "Life 3.0."
- Roger Penrose is a mathematician and physicist, who has proposed a theory called "orchestrated objective reduction" (Orch OR), which posits that consciousness arises from quantum processes in the brain. However, his theory is not widely accepted among scientists, and there is little empirical evidence to support it.
- Stuart Hameroff is a physician and anesthesiologist who has collaborated with Penrose on the Orch OR theory.
- Microtubules are tiny tube-like structures found in cells that play a role in cell division and other cellular processes. Pyramidal cells are a type of neuron found in the cerebral cortex of the brain.
- The cerebellum is a region of the brain that is involved in motor control and coordination, while the cerebrum is responsible for conscious thought and higher brain functions.
- Quantum computers use quantum mechanics to perform calculations that would be impossible for classical computers, but they are not yet capable of simulating the complexity of human consciousness.
- The idea that human consciousness arises from quantum processes in the brain is a controversial theory that is still the subject of much debate and research in the scientific community.
- It is true that current AI models are not capable of true reasoning, although they can perform certain tasks that might be mistaken for reasoning, such as recognizing patterns and making decisions based on data.
- The opinions expressed in the text are those of the author, and they may not be representative of the scientific consensus or the views of experts in the field.