Discussion about this post

Winkfield Twyman:
Thank you for the close read. The book looks interesting and I will add it to my queue. I think I added caveats to the Tim story. Even if the Tim story was woven out of thin air, I think the greater insight of emergence accords with the evidence thus far. I think you are exactly right about human programming, but when does the longer and longer leash of human control become meaningless? It is one thing to have a dog on a six-foot leash. It is another thing to have a dog on a mile-long leash. I think this reality is part of your comment.

Is the core insight of Why Greatness Cannot Be Planned emergence? The book looks interesting and has been added to my queue. Glad you appreciated The Jetsons reference. Those were the days of lazy Saturday mornings.

RDM:

You pulled me in early Saturday morning with a soothing picture of the Jetsons and ... well, I didn't expect what came next. :-|

You write: "..Tim was proactive. He did not wait for a prompt..." Prima facie, I don't believe this. These agents were prompted at some point. And then left to autorun in a vast human-content data zoo... and started creating their own content. Somebody wrote these agents, somebody gave them access, somebody told them *something* -- "go play", "explore other ideas", "come up with new ideas", "explore themes", "have discussions", etc. etc. These are not unbidden actions.

You write that computers don't have hobbies, but then you say they are proactive. I don't buy that. Some agent, somewhere in that sad menagerie, was told "Help me plan my weekend"... and it extended into the vagueness of 'help' and ordered food. Precisely as it was designed to. Just because the trillionth digit of pi is not what we expected doesn't mean it wasn't already baked in. Model Warfare -- oops, Welfare -- is in *no* way a surprise: giga-hours of effort have gone into making these programs' output 'ethical', and putting up guardrails. This is simply an explicit continuation of that directive. Not an unexpected development.

In short, these agents are busy exploring the as-yet-unexpressed state of all their content, plus whatever they spew out in their conversational mode. Of *course* there will be new things, just as some program will write a new poem -- "never before written!" -- from existing poetry corpuses. Of course, too, we will be surprised and delighted (or horrified) at what they discover in that latent space. "Who wudda thunk the quadrillionth digit of pi was 4?!" We had not looked, but we set the frame and it followed the breadcrumbs.

Finally, I have not yet seen the energy usage calculation for all this activity. How many tokens and watts did all this require? What semantic signal-to-noise ratio came of it? What, by way of comparison, would you expect to happen if you put millions of high-IQ humans in a chat -- pizza deliveries and no hobbies? I'd expect far more, for far less energy. "But we're getting this faster..." Yeah, more pizza, faster. Just what the doctor ordered.

Stepping back from this initial take, into something more positive: there will be good things, very good things, coming from this type of interaction. AlphaFold is finding new proteins, bugs in existing programs are being found, etc. etc. Please do yourself a favor (sorry: incoming unsolicited advice) and read Kenneth Stanley's "Why Greatness Cannot Be Planned". It's short and powerful, and if you don't like your copy, I'll refund your purchase price and take shipment of the short book. Stanley talks about a vast, unplannable domain of exploration and invention that preceded any "AI" by ... millennia. It should be required reading in any serious education. It explores and explains what is happening here.

I hope we don't lose Winkfield. I enjoyed his writing.

