Sources: AGI

Links to things I want to remember

Book Review: Why Machines Will Never Rule the World

Saturday, June 3rd, 2023

ACX’s 2023 book review contest offers another winner: a nicely balanced, only partly skeptical review of Why Machines Will Never Rule the World by Jobst Landgrebe and Barry Smith. The book argues against AGI on the grounds that math-based, deterministic computing systems can’t replicate the fundamentally non-linear, complex systems that produce biological intelligence.

Douglas Rushkoff's Take on the AI Hype Bubble

Saturday, June 3rd, 2023

The captain of Team Human realizes that the tech-bros’ clamour about AI risk, and their calls for AI regulation, are just another play for legislative aid in capturing yet another market…

ChatGPT Struggles with Grammar Test

Monday, March 13th, 2023

One of several antidote articles showing the limitations of LLMs in action. This is a good example of cognitive rather than merely technical limitation: a useful counterpoint to the many commenters who are easily captivated by the apparent facility of LLMs with natural language, tending ineluctably toward anthropomorphism and onward to predicting the enslavement or extinction of humans by our new AGI overlords.

Cory Doctorow: The AI Hype Bubble

Saturday, March 11th, 2023

Cory Doctorow applies his usual cutting analysis to the AI hype that has overwhelmed the common sense of most commenters, especially tech enthusiasts and a few credulous academics. ChatGPT is new and impressive, so it shouldn’t be surprising that the media are over-reporting it; what is surprising is the unself-conscious, unreflective attention from people who should know better. The technology is new, we don’t know what it is for yet, and, as usual, we will have to wait to find out. What isn’t going to happen immediately is the extinction of humans by chat-bot AGI; what probably will happen is that corporate hacks will figure out ways to use it to put pressure on labour and to inflate their ability to extract capital from credulous legislators.

This is exactly the kind of thing Cory is good at reminding us to be careful of, and it is a pretty good shot across the bow of AI-hype credulity.

Why Scott Aaronson Isn’t Terrified of AI

Wednesday, March 8th, 2023

Scott works at OpenAI, and he isn’t terrified. He is one of the best open-minded, common-sense defenders of not freaking out that I have come across. The article isn’t full of the deep, abstract, or complex arguments commonly found on this subject; he lays out his position both rationally and on the basis of intuition, concluding that we don’t know what’s going to happen but it probably won’t be as bad as those most infected by tech-anxiety think, and that AI may even pull our asses out of the climate-change fire if we are very lucky.

The State of AGI Anxiety

Friday, March 3rd, 2023

A broad overview of arguments arising from a group of mostly technologists, with some scientists, psychologists, and Genius-Rich-Guys. It is useful as a reference to people speaking within a certain channel on both sides of the AI-alignment/end-of-the-world anxiety spectrum. The author is passionate and worried, like many in his camp, and wants us to worry too.

Most usefully, this article highlights many of the strange unspoken assumptions that lie beneath AI discussions: about the nature of intelligence, consciousness, and super-intelligence, and about the natural objectives, priorities, and optimizations of intelligent things (humans mostly, possible aliens, and soon computers). One senses a deep anxiety that requires metaphysical therapy, stat.

AI: Actual Idiots

Wednesday, April 19th, 2017

From before the LLM furore, this is one of the sanest considerations of what AI means, what it doesn’t, and how we should be thinking about it.

Can’t resist including this gem of an image from the article: Thiel and Musk as Shredder and Krang