
Optimism for a global organization on AI governance? . . . IDKAT . . .

Updated: May 13, 2023

Longtime AI researcher Gary Marcus was one of the first people I heard cast doubt on AI progress. Initially, I heard him disparage existing generative AI models, though he does think AI can eventually get there, to AGI. No, what Marcus proposes seems even more impossible than machines competing with humanity: global agreement on how to shape AI and our relationships with it.


As for what AI could eventually do, Marcus thinks the two aspects of AI need to merge in a way similar to how the brain merges intuitive and rational systems into one. Or, at least, that's how I simplified, for myself, what Marcus depicted in his slides as he spoke:

[Slide: merger of cerebrum and cerebellum, like a merged AI, according to Gary Marcus]

[Slide: merger of symbolic and neural network approaches in one AI model]
He says AI that emphasizes learning doesn't have a strong handle on the truth. He shows examples where AIs have tricked people or written fictional narratives that appear perfectly true. Marcus points out many present and possible dangers to economies, public spaces, media, politics, and more, especially given bad actors and the nature of current capital incentives. We already have loads of misinformation and clickbait; now quality quizzes, for example, can be churned out just by plopping chunks of text into a box and hitting enter. Take a quiz on this blog post.


So that's why Marcus wants a merger between large language models and symbolic systems that emphasize explicit reasoning: to make a complete and balanced bot. He says he doesn't know how to do that, but he believes it's possible.
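I'm not an AI engineer, so take this only as a toy sketch of how I picture that merger, not anything Marcus actually proposed: a "neural" half proposes a fluent answer, and a symbolic half checks it with explicit rules. The neural_propose stub below stands in for a real language model call, and the whole example is my own made-up illustration.

# Toy sketch of the "neural proposes, symbolic checks" idea.
# The "neural" half is stubbed out; in practice it would be an LLM call.
import ast
import operator

# Symbolic half: a tiny, explicit arithmetic evaluator (no guessing allowed).
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def symbolic_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression by walking its syntax tree."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def neural_propose(question: str) -> str:
    """Stand-in for a language model's fluent but unverified answer."""
    return "329"  # imagine the model confidently guessing here

def answer(question: str, expr: str) -> str:
    proposed = neural_propose(question)
    checked = symbolic_eval(expr)       # explicit reasoning, not pattern-matching
    if float(proposed) != checked:
        return f"Corrected: {checked}"  # the symbolic side overrules the guess
    return proposed

print(answer("What is 17 * 19?", "17 * 19"))  # -> Corrected: 323

The point of the toy is only the division of labor: the fluent guesser gets checked by something that reasons step by step.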


With my antiquated technical training (mid-90s, lol), Feynman fandom, and a lot of reading, I do suspect there are reasons to be skeptical of that aim. Is such a technological merger really possible?


Marcus's main evidence is that the brain does it (i.e., merges these two structures), so we should be able to figure out how to get a computer to do it. Meanwhile, the premise of my recent post about the implications of Roger Penrose being right was that achieving artificial consciousness (not AGI, but consciousness) would require a totally new computer science, beyond current quantum computers or even next-gen concepts: a whole new, non-computational approach.


So, yes, the brain does, at the very least, merge several systems; how soon will a computer do it? I feel like AGI could be something less than human consciousness. But, to me, Marcus's view merely re-emphasizes just how hard it is to achieve a complete artificial being.


BAD ACTORS AND MISDIRECTED INCENTIVES

That doesn't change any of what he said about bad actors or misdirected incentives. His answer to those, which he's also hopeful about, is a new global organization focused on AI governance.


His evidence that this is possible is a survey showing overwhelming public support for AI regulation. But what does that have to do with forming a global organization that actually impacts governmental choices? I've never created such an organization, but I feel like geopolitics is trending toward de-globalization right now. How will so many players come together on anything?


But Gary Marcus should not stop, for a few reasons:

1. Do good and die doing it. It's just hard to imagine the world coming together over this anytime soon. That doesn't mean I'm scared or negative; I don't have Marcus's outlook on the impact. I guess I hope something more organic is happening here, from my vantage point as a little writing prof in GR, MI. LOL. But trying to bring people together on this might bring people together. Why not try?

2. I'm sure some folks are running too fast. A few people tugging at the backs of their shirts may slow the whole thing down a bit to a medium pace, relative to what's going on. Too fast + too slow = just right-ish?

3. I had felt really safe in the humanities as a writing instructor, in the sense that I teach information literacy, reading, thinking, synthesizing, collaborating, self-efficacy, and the other processes involved in writing. I'd played with GPT-4 and found uses, but not a rival. But then, as I looked at open-source models and the ease with which small models can be trained, on a small scale, to do one job very well, this dawned on me: train an AI on my past feedback on past student projects. I'm a dynamic, thoughtful professor, but I teach introductory writing. I can't see AI being me in how my feedback addresses 1) the specific student according to my relationship with them and 2) the text relative to the student's goals and the course objectives. But an AI could be trained and given enough information to beat me in reliability, for sure. It doesn't need coffee or a nap. This just dawned on me today, but I can't see how it isn't quite possible; a rough sketch of what I mean follows below. I'm looking at it, maybe, somewhat like you are: I'll use the AI to give better feedback, then the AI will use me to give better feedback, and then I will retire.
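Here's that rough sketch, assuming a small open model and a pile of anonymized draft/feedback pairs. I haven't actually built this; the model name (distilgpt2), the two example pairs, and the output directory are all placeholders I made up for illustration.

# Rough sketch, not a finished tool: fine-tune a small open model on
# (student draft excerpt -> my past feedback) pairs using Hugging Face libraries.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Hypothetical, anonymized training pairs from past courses.
pairs = [
    {"draft": "My thesis is that social media is bad.",
     "feedback": "Your claim is broad; name a specific effect and audience."},
    {"draft": "In conclusion, there are many sides to this issue.",
     "feedback": "Push past summary here: what should the reader do or rethink?"},
]
texts = [f"DRAFT: {p['draft']}\nFEEDBACK: {p['feedback']}" for p in pairs]

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")  # placeholder small model
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Turn the pairs into a tokenized dataset for causal language modeling.
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="feedback-model", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=5e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()  # afterwards, prompt with "DRAFT: ...\nFEEDBACK:" to draft comments

A real version would need far more examples, student consent, and a human still reading every comment, but the mechanics are that simple, which is the part that dawned on me.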


+ OPEN SOURCE THE WAY AROUND THE MOLOCH? OR NO?

The other "slow it down" expert I've listened to is Max Tegmark, and he also mostly references the Moloch situation among big tech companies. But, then, I got hyped on open source until I spoke with someone in the field.


I can't characterize it in detail, but, basically, open source can move at a higher velocity, with smaller models in a lot of hands. However, big tech companies still have an edge in access to data and the ability to train at a much larger, centralized scale.


I'm still wondering and watching to see whether open source might be powerful enough to give grassroots, popular ideas the sort of power usually only wielded by corporations and governments. I guess I'm more hopeful, you could say.


