Sources: LLM

Links to things I want to remember

Book Review: Why Machines Will Never Rule the World

Saturday, June 3rd, 2023

ACX’s 2023 book review contest offers another winner: a nicely balanced, only partly skeptical review of Why Machines Will Never Rule the World by Jobst Landgrebe and Barry Smith. The book argues against AGI on the grounds that math-based, deterministic computing systems can’t replicate the fundamentally non-linear, complex systems that produce biological intelligence.

Douglas Rushkoff's Take on the AI Hype Bubble

Saturday, June 3rd, 2023

The Captain of Team Human realizes that the tech-bros’ clamour about AI risk, and for AI regulation, is just another play for legislative help in capturing another market…

LLMs and Ambiguity

Saturday, May 20th, 2023

Thoughtful and provocative response to claims of implicit gender bias in GPT-4. The critical and refreshing claims:

  1. researchers are too prone to anthropomorphizing LLMs, both in their interactions with them and in their interpretations of the results, and
  2. LLMs are not as good at language disambiguation as we think.

The lesson we need to keep in mind: language is full of ambiguity that we don’t appreciate because we can negotiate it without much effort, but computers, no matter how impressive, can’t yet.

ChatGPT Struggles with Grammar Test

Monday, March 13th, 2023

One of several antidote articles showing the limitations of LLMs in action. This is a good example not so much of a technical limitation as of a cognitive one, and a useful counterpoint to the many commenters who are easily captivated by the apparent facility of LLMs with natural language, tending ineluctably toward anthropomorphism and onward to predicting the enslavement or extinction of humans by our new AGI overlords.

Cory Doctorow: The AI Hype Bubble

Saturday, March 11th, 2023

Cory Doctorow applies his usual cutting analysis to the AI hype that has overwhelmed the common sense of most commenters, especially tech enthusiasts and a few credulous academics. ChatGPT is new and impressive, so it shouldn’t be surprising that the media are over-reporting it; what is surprising is the unselfconscious and unreflective attention from people who should know better. The technology is new, we don’t know what it is for yet, and, as usual, we will have to wait to find out. What isn’t going to happen immediately is the extinction of humans by chat-bot AGI; what probably will happen is that corporate hacks will figure out ways to use it to put pressure on labour and to inflate their ability to extract capital from credulous legislators.

This is exactly the kind of thing Cory is good at reminding us to be wary of, and this piece is a pretty good shot across the bow of AI-hype credulity.

Why Scott Aaronson Isn’t Terrified of AI

Wednesday, March 8th, 2023

Scott works at OpenAI, and he isn’t terrified. He is one of the best open-minded, common-sense defenders of not freaking out that I have come across. The article isn’t full of the deep, abstract, or complex arguments commonly found in writing on this subject; he describes his position both rationally and on the basis of intuition, and he comes down on the side of thinking that we don’t know what’s going to happen, but that it probably won’t be as bad as those most infected by tech-anxiety fear, and that AI may even pull our asses out of the climate-change fire if we are very lucky.

The State of AGI Anxiety

Friday, March 3rd, 2023

A broad overview of arguments from a group of mostly technologists, along with some scientists, psychologists, and Genius-Rich-Guys. It is useful as a reference to the people speaking within a certain channel on both sides of the AI-alignment/end-of-the-world anxiety spectrum. The author is passionate and worried, like many in his camp, and wants us to worry too.

Most usefully, this article highlights many of the strange unspoken assumptions that lie beneath AI discussions: assumptions about the nature of intelligence, consciousness, and super-intelligence, and about the natural objectives, priorities, and optimizations of intelligent things (mostly humans, possible aliens, and soon computers). One senses a deep anxiety that requires metaphysical therapy, stat.

ChatGPT: some thoughts by Bill Benzon

Monday, February 27th, 2023

There is so much writing on ChatGPT and LLMs that it is hard to keep a list — something I started to do in a desultory way but quickly found overwhelming. This example is one I like. It is provocative without falling for casual anthropomorphism, end-of-the-worldism, or goofy enthusiasm. It does, however, go in for some goofy testing of ChatGPT with literary-device experiments and some conceptual guidance by way of Claude Lévi-Strauss. This is a good one to include for open-minded fun and education.