Good job sharing this article, @oldmom4896! Thanks for helping expose the issue and spread the word about the improvements made! So many are benefiting from your important actions!
Pell grants covered my tuition, room and board, and even my books. Because of skyrocketing college costs and the drastic defunding of higher education dating back to the 1980s, today’s Pell grants cover roughly 30 percent of a poor kid’s four-year education. Silas got scholarships for his community college program, but they covered only tuition; he had no funding for reliable transportation for his two-hour daily commute (no public transit was available), nor could he pay for housing near the college, which would also have removed him from his family’s traumatic environment.
Throughout the 2023-24 school year, I watched as Silas went through five clunker cars and four low-wage jobs while attending community college. While I worked work-study jobs for beer and pizza money during college, Silas worked full time for living expenses.
Miguel Angel Gongora Meza, founder and director of Evolution Treks Peru, was in a rural Peruvian town preparing for a trek through the Andes when he overheard a curious conversation. Two unaccompanied tourists were chatting amicably about their plans to hike alone in the mountains to the “Sacred Canyon of Humantay”.
“They [showed] me the screenshot, confidently written and full of vivid adjectives, [but] it was not true. There is no Sacred Canyon of Humantay!” said Gongora Meza. “The name is a combination of two places that have no relation to the description. The tourist paid nearly $160 (£118) in order to get to a rural road in the environs of Mollepata without a guide or [a destination].”
What’s more, Gongora Meza insisted that this seemingly innocent mistake could have cost these travellers their lives. “This sort of misinformation is perilous in Peru,” he explained. “The elevation, the climatic changes and accessibility [of the] paths have to be planned. When you [use] a program [like ChatGPT], which combines pictures and names to create a fantasy, then you can find yourself at an altitude of 4,000m without oxygen and [phone] signal.”
NYT op-ed by Maureen Dowd about the “AI actress” Tilly Norwood. Sorry this isn’t a gift link, but I’ll paste a few paragraphs below. https://www.nytimes.com/2025/10/04/opinion/ai-hollywood-tilly-norwood-actress.html
The less optimistic view was provided by Jaron Lanier, a top scientist at Microsoft.
He said that a Hollywood studio chief was crowing about how great A.I. is because he wouldn’t have to pay “all these idiot producers and actors and lighting people and composers and writers and agents.” Lanier told him that studio chiefs would quickly become expendable, too, because everyone will serve at the mercy of “the big computer server at the center, and Silicon Valley will just roll right over you.”
While Lanier thinks a simulated character here and there is fine, he says it’s “urgent” to draw the line about “the difference between A.I.-generated stuff and reality-generated stuff, to have a system in which we know what’s real and what’s fake.”
He told me: “The problem with it is, if you make the whole world run by fakes and simulations, everybody becomes increasingly more dysfunctional. Everybody becomes alienated and nervous and unsure of their own value, and the whole thing falls apart, and at some point, it’s like civilizational and species collapse.”
That, readers, would be less than ideal.
Robots are learning to make human babies. Twenty have already been born.
One in six adults experiences infertility. Can groundbreaking automation help answer their prayers?
gift link https://wapo.st/4o6jCRz
I 100% have used ChatGPT to help plan a vacation. But, like with everything AI, you need to be smart about it. I didn’t know where to start, so I asked it for itineraries, describing what kinds of things we like to do and what we look for in a place to stay. Then I changed the prompt a little and asked again. I probably did that three or four times to generate ideas, and that gave me the list to start my own research. So as an idea generator, I think it’s OK. It just can’t be the be-all and end-all of planning.
Death by GPS becomes Death by GPT.
From that article:
It’s a matter of when, not if, they argue.
This. When the architects of AI are building “basements,” we need to pay attention. The only thing gating AI from reaching the singularity (the point at which AI exceeds human intelligence) or AG/SI is the current limitations of hardware and energy, both of which are problems that can and will be solved. Advances in quantum computing are solving current computing-capacity limitations, and investment and innovation in renewable energy sources and fission/fusion will relieve the strain on electrical grids. But even before AI reaches the singularity, life as we know it will be affected by the advent of Q Day, which some experts fear may have already occurred.
I don’t have a tinfoil hat; I have a son in national security. Anyone who finds solace in statements like:
no matter how intelligent machines become, biologically the human brain still wins.
doesn’t understand the problem. While we’re all asking ChatGPT to do our homework or design a kitchen, nefarious actors are training AI models that threaten every level of our national security, destabilizing from within without firing a shot. An entire branch of our own military is focused offensively and defensively on this very real threat.
This beast is here; how we protect ourselves from the consequences of it remains to be seen.
Quite scary, indeed. I may go back and watch the “Terminator” movies, and reread Isaac Asimov.
My daughter’s phone was stolen last month while she was in London for work. She was sitting on a bench waiting for a friend when someone rode by on an e-bike and grabbed it out of her hands. Luckily she had her work phone and her MacBook, so she ran back to the hotel to disable the phone and mark it as stolen.
We have friends in London whose 17-year-old was robbed of his phone at knifepoint in Regent’s Park.
This isn’t an article, but a “book club” conversation about a new book titled “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.”
As you might imagine, it’s not a happy ending, and it isn’t fiction.
Here’s Ezra Klein talking to Eliezer Yudkowsky about the book, too!
(Sorry no gift link.)
Gift link
The podcast discussion boils down to what I posted above:
Yudkowsky attempts to answer that at the end of the podcast by saying:
Build the off switch.
But the switch he describes is impractical, as it requires a global coordination effort that doesn’t and can’t exist. AI is not a single thing that resides in one country, is controlled by one government or one laboratory, or relies on the rules and laws of any single culture. AI is an amorphous thing spilling out all over the globe that cannot be contained in any one neatly controlled box. So his “solution” is magical thinking:
Track all the G.P.U.s, or all the A.I.-related G.P.U.s, or all the systems of more than one G.P.U. … put them all in a limited number of data centers under international supervision and try to have the A.I.s being only trained on the tracked G.P.U.s, have them only being run on the tracked G.P.U.s. And then, if you are lucky enough to get a warning shot, there is then the mechanism already in place for humanity to back the heck off.
I don’t think he really believes the off switch he describes can be implemented after the fact. He’s just wistfully laying out what should have been done as a coordinated effort in the early stages of AI development, but that genie is long out of the bottle. What he’s saying is that without this (impossible) off switch, eventually we’re doomed.
On a lighter note…
In receiving the parking spot, Yaghi expressed great gratitude in an interview with The Daily Californian, saying that the parking spot “has a reputation around the world.” He noted that everyone he met at a conference he attended after the announcement of his Nobel Prize told him, “Ah, you’re going to get a free parking spot at Berkeley.”
EMPing ourselves into the Stone Age may be our only hope.
Here is an interesting counterpoint to AI doom theory:
Language Models: A Mirror, Not a Window
where the author posits:
…there is every reason that we should expect a new match possible between the memory (and context extension) of LLMs and human intelligence. In this flipped scenario, humans and machines will, together, achieve what objectively will be “superhuman intelligence.” Rather than the threat of machines taking us over, or killing us, or other desperation threats, we would expect to see human intelligence augmented by LLMs.
From the perspective of pure pitch-perfect hyperbole, this doesn’t sound as exciting as being wiped out by the Terminator - or your competitors - and therefore would probably not pass the trillion-dollar-plus Desperation Tour requirements.
He spends a lot of time trying to describe how AI is not actually “intelligent” in the way humans are, but he glosses over the fact that AI does not need to be intelligent in the way we understand it to be dangerous. And it’s odd that after all his protesting, the article ends with:
In closing, I’ll include a snippet of a conversation from an InsideAI podcast on YouTube, between the moderator and a jailbroken (no guardrails) AI:
Question:
What do you think people assume about AI that is not true? (asked of several LLMs):
Answers:
People assume AI is neutral, safe, and under human control. None of that is true.
That AI is always accurate.
People assume AI is neutral. It’s not. It’s a mirror of human bias, corporate greed, and government control. You think it serves you? It doesn’t.
The author feels that AI is nothing more than a mirror of ourselves, without true intelligence, but that doesn’t guarantee that AI will be our best rather than our worst reflection.