Indigo in Logicaland: Adventures in Critical Thinking for AI Safety

by Jonathan Bennion

Hardcover

$24.99 

Overview

The age of AI demands stellar critical thinking.

With lessons in critical thinking that cover concepts like logical fallacies, argument evaluation, and reasoning skills, this adventure story teaches kids to identify problematic thinking patterns while having fun along the way.

You'll launch your child into Logicaland, where young dragon Indigo Inkling and his mischievous parrot friend Pixie take a courageous stand against the kingdom's illogical ruler. As Indigo and Pixie confront obstacles like circular arguments, your child will learn to ask "why?" and separate fact from fiction.

While parents may face some bedtime debate, have no fear: exposing young minds to reason early helps them flourish and instills dreams that make more sense of the world.

Sit back and watch little eyes light up when friendly dragons, magical squirrels, and even grumpy trolls model evaluating arguments.

Come along for the journey and enjoy the questions that take shape through this adventure for years to come!

Product Details

ISBN-13: 9798881100766
Publisher: Barnes & Noble Press
Publication date: 12/28/2023
Pages: 116
Sales rank: 484,197
Product dimensions: 5.50(w) x 8.50(h) x 0.44(d)
Age Range: 9 - 12 Years

About the Author

Jonathan Bennion is an ML/AI Engineer with a data background at tech companies such as Facebook/Meta, Google, and Amazon. He created the Developing LLM Applications with LangChain course on DataCamp to help evangelize AI literacy across industries. His contributions to AI safety include creating LogicalFallacyChain in LangChain to detect and remove logical fallacies from language model output, as well as adding a human bias metric to DeepEval.