On the Measure of Intelligence

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skill for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
Things neural networks are saying about my book

Okay, so the above reviews have some subtle clues that they might not have been written by real live humans. In fact, they're the work of a text-generating neural network that OpenAI trained on millions of Amazon reviews. The color of the text reflects the activity level of a single neuron that the AI seems to be using to keep track of whether a review's sentiment is positive or negative. There are more examples from this neural net in Chapter 3 of my book You Look Like a Thing and I Love You, where I look at the inner workings of a few kinds of machine learning algorithms.

More on the book, including rejected titles! Fake news about the book!

Speaking of this book! Publication day is November 5th, and lots of things are happening!

Preorder Perks! If you're a US resident, you can now get two greeting cards and a sheet of stickers by preordering my book! Indie preorders, event tickets (see below), preorders you placed months ago - all of those count! Just fill out this form to show my publisher your proof of purchase and tell them where to send the loot.

Events! If you live in NYC, Denver, or Seattle, I'm coming to your city for a book event!

NYC - Wed, Oct 23 - Blue Point Brewing in Brooklyn - Preview of my book & TED talk screening. Attendees get preview copies of my book!
Denver - Wed, Nov 6 - Tattered Cover Colfax - Book launch, followed by traditional Irish music at the Irish Snug
Seattle - Tues, Nov 12 - Elliott Bay Book Company - Reading & signing! My first time in Seattle!

TED talk! I spoke at the 2019 TED conference in April, and my talk will be online as of Oct 22! (That was also when I used a classic machine learning reward-function hack in Simone Giertz's robot workshop. Thanks for being a good sport, Simone!) This is such a big deal that I'll probably have another blog post just on the TED talk.

Blurbs! Not just neural networks - real people really are saying nice things about my book!
“I can’t think of a better way to learn about artificial intelligence, and I’ve never had so much fun along the way.” —ADAM GRANT, New York Times bestselling author of Originals

“A delightful way to learn about the technology that’s poised to change our lives.” —ANNALEE NEWITZ, founder of io9 and author of Future of Another Timeline

“While everyone else is making questionable predictions about the future of AI, Janelle Shane cuts through the fog by telling you how AI actually works. And even better: she makes it fun!” —ZACH WEINERSMITH, creator of Saturday Morning Breakfast Cereal and New York Times bestselling author of Soonish

“Recommended for anyone who wants to better understand the strengths and limitations of artificial intelligence, but also for anyone who likes watching computers fail hilariously.” —GRETCHEN MCCULLOCH, New York Times bestselling author of Because Internet

“An incredibly accessible, informative, and hilarious look at how the AIs deciding things around us operate.” —RYAN NORTH, New York Times bestselling author of How to Invent Everything

“If you’re interested in knowing more about machine learning and artificial intelligence, or in trying to understand our robot overlords, or if you just love weird and interesting science, you can’t miss this book.” —DAVID HA, lead researcher, Google Brain

“You Look Like a Thing and I Love You is a book that you can definitely get in here to hear the voice and see the pictures.” —GPT-2, neural network

“Doused in a dark violet, you drag along a long rainbow cloud, one that is as alive and wet as it is interesting. A featureless masterpiece of tough-minded language.” —GROVER, neural network

Bonus content! Some fake news articles about my book, generated using GROVER. Enter your email here to get them!

Here are some links for ordering my book You Look Like a Thing and I Love You! It’s out November 5, 2019. Preordering now is one of the best ways to help my book do well - it’s like a super duper order.
Plus, US orders can get greeting cards and stickers as a perk!

Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s