Publishers Weekly
07/27/2020
Christian (The Most Human Human), a writer and lecturer on technology-related issues, delivers a riveting and deeply complex look at artificial intelligence and the significant challenge in creating computer models that “capture our norms and values.” Machines that use mathematical and computational systems to learn are everywhere in modern life, Christian writes, and are “steadily replacing both human judgment and explicitly programmed software” in decision-making. Some of those decisions, however, are unreliable, as Christian shows through scrupulous research. Facial recognition systems can be “wildly inaccurate for people of one race or gender but not another” and perform particularly poorly at correctly identifying Black women. Meanwhile, risk assessment software, which helps decide bail, parole, and even sentencing for criminal defendants, has been widely adopted nationwide without being extensively audited. Though it’s tempting to assume a doom-and-gloom outlook while reading of these problems, Christian refreshingly insists that “our ultimate conclusions need not be grim,” as a new subset of computer scientists “focused explicitly on the ethics and safety of machine-learning” is working to bridge the gap between human values and AI learning styles. Lay readers will find Christian’s revealing study to be a helpful guide to an urgent problem in tech. (Oct.)
Stuart Russell
"A fascinating, provocative, and insightful tour of all the ways that AI goes wrong and all the ways people are trying to fix it. Essential reading if you want to understand where our world is heading."
Mike Krieger
"This is the book on artificial intelligence we need right now. Brian Christian takes us on a technically fluent (yet widely accessible) journey through the most important questions facing AI and humanity. It is thought-provoking and vital reading for anyone interested in our future."
Wall Street Journal - David A. Shaywitz
"The disconnect between intention and results—between what mathematician Norbert Wiener described as 'the purpose put into the machine' and 'the purpose we really desire'—defines the essence of 'the alignment problem.' Brian Christian, an accomplished technology writer, offers a nuanced and captivating exploration of this white-hot topic, giving us along the way a survey of the state of machine learning and of the challenges it faces."
5 Books that Inspired Microsoft CEO Satya Nadella This Year - Fast Company - Satya Nadella
"...clear and compelling...The storytelling here moves us from the theoretical to the practical while attempting to answer one of our industry's most pressing questions: How do we teach machines, and what should we teach them?"
Martin Rees
"Brian Christian is a fine writer and has produced a fascinating book. AI seems destined to become, for good or ill, increasingly prominent in our lives. We should be grateful for this balanced and hype-free perspective on its scope and limits."
Jaan Tallinn
"An abundantly researched and captivating book that explores the road humanity has taken to create a successor for itself—a road that’s rich with surprising discoveries, unexpected obstacles, ingenious solutions and, increasingly, hard questions about the soul of our species."
Jennifer Pahlka
"The Alignment Problem should be required reading for anyone influencing policy where algorithms are in play—which is everywhere. But unlike much required reading, the book is a delight to read, a playful romp through personalities and relatable snippets of science history that put the choices of our present moment into context."
Cathy O’Neil
"A new field has emerged that responds to and scrutinizes the vast technological shifts represented by our modern, virtual, algorithmically defined world. In The Alignment Problem, Brian Christian masterfully surveys the ‘AI fairness’ community, introducing us to some of its main characters; some of its historical roots in science, philosophy, and activism; and crucially, many of its philosophical quandaries and limitations."
James Barrat
"A deeply enjoyable and meticulously researched account of how computer scientists and philosophers are defining the biggest question of our time: how will we create intelligent machines that will improve our lives rather than complicate or even destroy them? There’s no better book than The Alignment Problem at spelling out the issues of governing AI safely."
Kirkus Reviews
2020-07-16
The latest examination of the problems and pitfalls of artificial intelligence.
Computer scientist Christian begins this technically rich but accessible discussion of AI with a very real problem: when researchers programmed an algorithm to teach a machine analogies and substitutions, the phrase “doctor – man + woman” came back with the answer “nurse,” while “shopkeeper – man + woman” came back with “housewife.” An algorithm designed to examine and label photographs returned the caption “gorillas” when shown a photo of two African Americans. It happened that one of those men was a programmer himself, and he said, “It’s not even the algorithm at fault. It did exactly what it was designed to do.” In other words, the algorithm reflects human biases, just as algorithms do when examining criminal records, often leading to machine-assisted sentencing recommendations that overwhelmingly give Whites lighter punishments than Blacks and Latinos, and just as color calibration programs for TVs and movie screens are indexed to white skin. So how do we teach machines to be reliable and bias-free? Christian considers models of human learning, such as those developed by Jean Piaget, whom Christian finds mistaken on a couple of key assumptions but still a useful guide. He recalls that Alan Turing wondered why machine-learning programs were geared as if the machines were adults instead of children. Children, of course, learn by mistakes and accidents and by emulating adult doings “that would lead to the interesting result,” but can a machine? On that score, Christian ponders how self-driving vehicles are taught to be autonomous, making decisions that are logical—but logical to a machine mind, not a human one. “Perhaps, rather than painstakingly trying to hand-code the things we care about,” writes the author, “we should develop machines that simply observe human behavior and infer our values and desires from that—a task easier said than done.”
An intriguing exploration of AI, which is advancing faster than—well, than we are.