Sources: AI
Links to things I want to remember
Problems with Music and Technology
Wednesday, July 17th, 2024
Interesting consideration of Rick Beato’s recent video arguing, in part, that music is getting worse. DeLong compares Beato’s argument to John Philip Sousa’s 1908 complaint that the infernal talking machines were stealing music from the people. Beato and Ted Gioia have been working this theme lately and, while I’m a big fan of both and I think they have legitimate grounds for concern, I can’t help feeling that their arguments aren’t quite fully considered. The rescue of music from the ease and mediocrity of AI, as proposed, has the whiff of precisely the kind of control and professionalization that Sousa was arguing against, and suggesting to young people that they should just do it the way we used to sounds guaranteed to provoke eye-rolls…
The inclusion of Sousa reminds me of a quite different use of the same Sousa argument by Lawrence Lessig in his 2007 TED talk arguing for freedom of the commons with respect to music — Laws that choke creativity.
I suspect, and hope, that this conversation goes on and sees some refinement.
Why We Really Like Brains...
Tuesday, July 2nd, 2024
Very useful and coherent argument that our standard conception of intelligence, and our many attempts to define it, are deeply anthropocentric and misguided in their tendency to look for intelligence as a thing present in entities we want to call intelligent, or as a property or product of some part of anatomy — like the brain.
An excellent antidote to the usual discussions of and assumptions about intelligence, and refreshingly free of firm conclusions.
Singularity: the Black Hole of Post-Post-Modernity
Saturday, May 4th, 2024
Strange, dark reflections on the strange, dark state of modern culture. Nice, rambling essay connecting Vico, Kafka, AI, rationalists, and the rise of cheerful end-times techno-optimists and state-pessimists. Conclusion: the metaphors that drive the techno-culture of the moment have gone wrong; we need better metaphors.
Book Review: Why Machines Will Never Rule the World
Saturday, June 3rd, 2023
ACX’s 2023 book review contest offers another winner. A nicely balanced, only partly skeptical review of Why Machines Will Never Rule the World by Jobst Landgrebe and Barry Smith. The book argues against AGI on the grounds that math-based, deterministic computing systems can’t replicate the fundamentally non-linear, complex systems that produce biological intelligence.
Douglas Rushkoff's Take on the AI Hype Bubble
Saturday, June 3rd, 2023
The Captain of Team Human realizes that the clamour from the tech-bros about AI risk and for AI regulation is just another play for legislative aid in capturing another market…
LLMs and Ambiguity
Saturday, May 20th, 2023
Thoughtful and provocative response to claims of implicit gender bias in GPT-4. The critical and refreshing claims:
- researchers are too prone to anthropomorphizing LLMs in interactions and interpretations, and
- LLMs are not as good at language disambiguation as we think.
The lesson we need to keep in mind: language is full of ambiguity that we don’t appreciate because we can negotiate it without much effort, but computers, no matter how impressive, can’t yet do the same.
Maciej Ceglowski on Superintelligence
Saturday, March 25th, 2023
Excellent and entertaining talk about the preposterous popularity of the idea of Superintelligence among the tech elite. Old but still relevant.
Why Scott Aaronson Isn’t Terrified of AI
Wednesday, March 8th, 2023
Scott works at OpenAI and he isn’t terrified. He is one of the best open-minded, common-sense defenders of not freaking out that I have come across. The article isn’t full of deep or abstract or complex arguments, as are commonly found in articles on this subject; he describes his position both rationally and on the basis of intuition, and he comes down on the side of thinking we don’t know what’s going to happen but it probably won’t be as bad as those most infected by tech-anxiety think — and AI may even pull our asses out of the climate-change fire if we are very lucky.
The State of AGI Anxiety
Friday, March 3rd, 2023
A broad overview of arguments arising from a group of mostly technologists, with some scientists, psychologists, and Genius-Rich-Guys. It is useful as a reference to people speaking within a certain channel on both sides of the AI-alignment/end-of-the-world anxiety spectrum. The author is passionate and worried, like many in his camp, and wants us to worry too.
Most usefully, this article highlights many of the strange unspoken assumptions that lie beneath AI discussions — about the nature of intelligence, consciousness, super-intelligence, and about the natural objectives, priorities, and optimizations of intelligent things (mostly including humans, possible aliens, and soon computers). One senses a deep anxiety that requires metaphysical therapy, stat.
ChatGPT: some thoughts by Bill Benzon
Monday, February 27th, 2023
There is so much writing on ChatGPT and LLMs that it is hard to keep a list — something I started to do in a desultory kind of way before quickly getting overwhelmed. This is one example I like. It is provocative without falling for casual anthropomorphism, end-of-the-worldism, or goofy enthusiasm. It does, however, go in for some goofy testing of ChatGPT with literary device experimentation and some conceptual guidance by way of Claude Lévi-Strauss. This is a good one to include for open-minded fun and education.
Noah Smith Interviews Kevin Kelly
Monday, February 20th, 2023
Nice update from Kevin Kelly — turns out, unsurprisingly, that Noah is a fan. Good interview, covering a broad range of topics of interest to Kelly.
Secretaries: a Forgotten History
Friday, January 20th, 2023
A guest-post on Noah’s substack, mainly about jobs in the age of AI, speculating that the job of secretary may have a revival as an important human role in the AI future. What most struck me was the description of the origins of the secretarial job, which is far different from our typical, gendered, vaguely dismissive assumptions of a trivial task rooted in low-skill servitude. Robbins makes the point by reminding us (of something I didn’t know in the first place) of the original meaning of secretary: “The ‘secretary’ literally means ‘person entrusted with secrets,’ from the medieval Latin secretarius, the trusted officer who writes the letters and keeps the records.”
Robbins’ brief history of the job’s descent toward its current status is surprising and refreshing.
AI: Actual Idiots
Wednesday, April 19th, 2017
From before the LLM furore, this is one of the sanest considerations of what AI means, what it doesn’t, and how we should be thinking about it.
Can’t resist including this gem of an image from the article: