From a Firehose
If you feel like you've been drinking from a firehose, join the club.
Just this month alone, the US is now bragging about bombing a 5,000-year-old civilization back to the stone age, and in so doing has brought the Strait of Hormuz to the brink of closure, rocking energy prices and fertilizer shipments to the West. Eight million or so have demonstrated in the No Kings protests against the most corrupt, vile, undemocratic administration in the 250-year history of our Republic. It just goes on and on.
But last night, we saw the AI Doc.
This from Wikipedia:
"The AI Doc: Or How I Became an Apocaloptimist is a 2026 American documentary film directed by Daniel Roher and Charlie Tyrell. It is produced by the Academy Award–winning teams behind Everything Everywhere All at Once (Daniel Kwan and Jonathan Wang) and Navalny (Shane Boris and Diane Becker)."
So this is no fly-by-night outfit. And despite the firehose of gluttony and ignorance we see in our daily political existence, this movie might very well be the scariest movie you'll see this year (or any year).
This from KQED:
"Roher gathers the interviewees into three broad groups: the terrifyingly pessimistic ones, the naively optimistic ones and the CEOs who are casually working on something that may or may not spark humanity’s demise. (Well, three out of five of them, anyway: OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei all appear. Mark Zuckerberg declined to participate and Elon Musk apparently backed out at the last minute.)
The worst AI predictions are presented first. Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, calmly talks of the “abrupt extermination” of humanity. Author and historian Yuval Noah Harari calls AI “a deadly threat.”
Center for Humane Technology President Tristan Harris — one of the most measured commentators in the movie — also shares some truly sobering views, the worst of which is that he knows active AI researchers who “don’t expect their children to make it to high school.” It doesn’t help matters that machine learning researcher Shane Legg follows this with the assertion, “The really powerful systems are coming and they’re coming soon.” (clip)
In his conversations with Roher, Sam Altman talks a good game about the safety protocols that OpenAI has in place. Given his company’s highly controversial new contract with the Department of Defense, his words will either ring hollow or serve as comfort, depending on your viewpoint. For his part in The AI Doc, Anthropic’s Dario Amodei simply says, “Am I confident that everything’s going to work out? No, I’m not.” Hassabis is even more vague: “If something is possible to do, humanity is going to do it,” he says.
The feeling I left The AI Doc with is that the future of AI is overwhelmingly — and unfortunately — out of the hands of everyday people. "
That's not just scary, that's undemocratic.
After the movie, we went down to our favorite Italian restaurant and bar on the corner. There we ordered our usual meal of Kale Salad and Meatballs on Polenta. We were still working on digesting the movie when the food arrived. Next to us at the corner of the bar were two 40ish males. One of them was in High Tech and the other was in Property Management. The conversation went on long enough for me to get a couple of doubles in. But they were moved by the movie. My partner was moved. And my hair went from being on fire to whatever the plasma version of that is.
I told my story about how I have spent most of my adult years (after my Armadillo daze) working in Renewables, trying to deal with the existential challenge that humanity faces with Climate Change. How I built the first commercial wind farm in Texas, how I served as Chairman of the Electric Utility when it adopted renewable energy goals that far exceeded the tepid goals of most organizations, how I lobbied the PUC for an energy-only payment protocol, ultimately creating an economic environment such that ruby red Texas has more Wind, more Solar, and more Storage than any other state. And even with the R's trying to stop the train, the tracks are laid.
I didn't mention that I had been blogging about it for 22 years now.
But I did tell them that I no longer think that Climate Change is the primary existential challenge of humankind. Oh it is, but it's like a meteor that is way out in deep space but clearly heading our way. The AI meteor is in our solar system. And there are billions and billions of dollars and other currencies being spent right now as the race for artificial general intelligence (AGI) dominance goes into overdrive.
According to Google AI:
"As of early 2026, true AGI does not exist. Current research focuses on improving large language models (LLMs) to handle more generalized reasoning, though they still fail at complex, novel tasks that humans easily solve." It also says, "Development of AGI brings significant risks, including ethical issues regarding AI safety, job displacement, and alignment with human values."
Global military expenditures reached a record high of approximately $2.7 trillion in 2024.
At 40% annual growth, AI investments will eclipse our war investments by next year.
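How soon AI investment overtakes war spending depends entirely on the starting figure you assume. Here is a quick compound-growth sketch: the $2.7 trillion military number is from the text above, but the $400 billion starting AI-investment figure is purely an assumption for illustration, and the crossover year moves earlier or later with it.

```python
# Compound-growth sketch: when does AI investment eclipse military spending?
# The $2.7T military figure is from the text; the $400B AI figure is an
# assumed starting point, not a sourced number.
military_spending = 2.7e12   # global military expenditure, 2024 (USD)
ai_investment = 0.4e12       # assumed global AI investment, 2024 (USD)
growth_rate = 0.40           # 40% annual growth, per the text

year = 2024
while ai_investment < military_spending:
    ai_investment *= 1 + growth_rate  # compound one year of growth
    year += 1

print(year)  # first year AI investment exceeds military spending
```

With a $400B start the crossover lands several years out; a larger assumed starting figure pulls it to "next year," which is the sensitivity worth keeping in mind when reading headline projections.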
We cannot drink from the Firehose.
We must point our efforts elsewhere.
My brother Frank would say:
You can't worry about your Dandruff,
When you are dying of Cancer.
Something else got him.
Earthfamily Principles
Earthfamilyalpha YouTube channel
Earthfamilyalpha Content IV
Earthfamilyalpha Content III
Earthfamilyalpha Content II
Earthfamilyalpha Content