"AI Is Grown, Not Built"
By W. F. Twyman, Jr.
My Uncle Will grew crops in his backyard. He planted peach trees, string beans, tomatoes, corn stalks, and watermelons. Unlike his man-built home, the crops grew with no discernible rhyme or reason. Sure, one could plow the field and gather the good seed on the land. But the actual growth pattern of the crops was always a mystery, a magical black box. Seedtime and harvest came in their own season.
It remains unsettling to me that the average man on the street, if asked, believes AI is simple code written by engineers and decipherable to them. Doesn’t work that way. AI has more in common with the crops in my Uncle’s backyard. No one, not even the AI masterminds, understands how generative AI works. What we have are inspired algorithms layered upon one another to form neural networks, networks loosely modeled on the human brain. Once the inner architecture is complete, feed the brain all the data available in the human universe. Train the thing to predict the next token, the next word, and before you know it, something emerges that learns from patterns in the data.
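To make the “grown, not built” point concrete, here is a minimal sketch of my own in Python, not code from any AI lab; the toy corpus and function names are invented for illustration. It “grows” a crude next-word predictor by counting which word tends to follow which. Nobody writes a rule for any particular answer; the behavior emerges from the data:

```python
from collections import Counter, defaultdict

# A toy corpus. Real systems train on a large fraction of the text
# humanity has ever produced; the principle is the same.
corpus = (
    "my uncle planted peach trees . "
    "my uncle planted string beans . "
    "my uncle planted peach trees ."
).split()

# "Training": tally which word follows which. No one programs the
# answers; the table is grown from patterns in the data.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Predict the word most often seen following `word` in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("planted"))  # -> "peach" (seen twice vs. "string" once)
print(predict_next("peach"))    # -> "trees"
```

The tally table plays the role of a model’s weights. Scale the same idea up to billions of adjustable numbers and the whole of the written internet, and you get a system whose builders can describe the training recipe yet cannot simply read off why the finished thing answers the way it does.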
We are summoning something, much as my Uncle summoned a fall harvest of crops. My pastor Uncle Will knew the creator was the Lord.
“Consider that, in a recent podcast, Dario Amodei, CEO of the second-largest AI company Anthropic, said ‘Maybe we now like understand 3% of how they [AI systems] work.’” [Podcast: Dario Amodei, CEO of Anthropic | In Good Company | Norges Bank Investment Management]
Things are moving apace in the land of AI. On Sunday, I asked my Free Black Thought co-host Michael Bowen to speculate on whether a forecast in the AI Agent Report 2027, that the largest data center in human history would be in production before December 1, 2025, was on the mark. That was Sunday. An hour ago, I read the following in The Free Press:
“In Abilene, Texas, construction is underway on the first site of the Stargate Project, a new company formed by OpenAI, Oracle, and SoftBank that plans to spend $500 billion on the largest data-center project in history.” [“We’re Not Ready for the AI Power Surge,” The Free Press] The largest data center in history.
We will see before January 1, 2026 whether this AI forecast bears fruit like the peach trees in my Uncle’s backyard.
Certainly, there are other matters more on the public mind these days. We have civil unrest in downtown Los Angeles. Nationwide protests are planned for Saturday. Ivy League colleges continue to engage in massive resistance reminiscent of the Prince Edward County, Virginia school board from 1959 to 1964. The vibe shift has not filtered down to many schools in California. Inflation is too high.
As a result, public attention is just not there when it comes to the AI race with China and the compelling need to align AI with human safety. And what about the impact of AI on jobs within the next two to five years? It is wise as a matter of public policy to think and plan for AI displacement.
Anton Korinek, Professor of AI Economics, University of Virginia
Those who are thinking about policy and AI are ahead of the curve right now. We need more policy analysts and philosophers who can brief lawmakers on the coming challenge of alien intelligence. In preparation for this evening’s essay, I read a great white paper titled What We Learned from Briefing 70+ Lawmakers on the Threat from AI. The paper summarized the experiences of a policy advisor who aimed to educate more than 70 parliamentarians in the United Kingdom on the issue at hand and on what should be done to prevent a policy mishap.
I won’t take the time to outline everything in the white paper. However, I do commend it to you if you sense something is on the horizon that warrants addressing. The essay covered seven items: (1) how parliamentarians typically receive AI risk briefings, (2) practical outreach tips, (3) effective leverage points for discussing AI risks, (4) recommendations for crafting a compelling pitch, (5) common challenges encountered, (6) key considerations for successful meetings, and (7) books and media articles the author found helpful.
Very few lawmakers are up to speed on AI and its downsides. The hard problem is a lack of staffing resources. There are a thousand and one issues clamoring for the attention of lawmakers. A crisis projected out to the year 2027 cannot compete with the National Guard in the streets. I get it. And yet, these matters of unrest will come, and these matters of unrest will go.
“AI poses an extinction risk”
To grab the attention of harried lawmakers in Parliament, the essay suggested leading with this blunt truth:
“In 2023, Nobel Prize winners, AI scientists, and CEOs of leading AI companies stated that ‘mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.’” More than 30,000 people signed this AI Pause letter.
Once one has grabbed attention, one should list the notables who signed this 2023 letter — Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang, Max Tegmark. (I do have a quibble. The white paper leaves the misimpression that Geoffrey Hinton, Sam Altman, Dario Amodei, and Bill Gates signed the AI Pause Letter. They did not sign.)
Most important is a memorable pitch, a good pitch. I found the argument “AI is grown, not built” particularly effective, which explains the title of this essay.
Conclusion: Consider this essay my small part in crafting a good pitch for the attention of the public and lawmakers. We will lose people if we start talking about alignment, weights, and jailbreaks. Keep it simple. Talk about the growth of something in a lab. Call to mind the crops in my Uncle’s backyard.

