How We Learn: Why Brains Learn Better Than Any Machine . . . for Now Paperback – February 2, 2021
An illuminating dive into the latest science on our brain's remarkable learning abilities and the potential of the machines we program to imitate them
The human brain is an extraordinary learning machine. Its ability to reprogram itself is unparalleled, and it remains the best source of inspiration for recent developments in artificial intelligence. But how do we learn? What innate biological foundations underlie our ability to acquire new information, and what principles modulate their efficiency?
In How We Learn, Stanislas Dehaene works at the boundary of computer science, neurobiology, and cognitive psychology to explain how learning really works and how to make the best use of the brain’s learning algorithms in our schools and universities, as well as in everyday life and at any age.
- Print length: 352 pages
- Language: English
- Publisher: Penguin Books
- Publication date: February 2, 2021
- Dimensions: 5.52 x 0.74 x 8.43 inches
- ISBN-10: 0525559906
- ISBN-13: 978-0525559900
Customers who bought this item also bought:
- Reading in the Brain: The New Science of How We Read (Paperback)
- The Number Sense: How the Mind Creates Mathematics, Revised and Updated Edition (Paperback)
- Seeing the Mind: Spectacular Images from Neuroscience, and What They Reveal about Our Neuronal Selves (Hardcover)
- Language at the Speed of Sight (Paperback)
- How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking (Paperback)
- Proust and the Squid: The Story and Science of the Reading Brain (Paperback)
Editorial Reviews
Review
“[An] expert overview of learning . . . Never mind our opposable thumb, upright posture, fire, tools, or language; it is education that enabled humans to conquer the world . . . Dehaene's fourth insightful exploration of neuroscience will pay dividends for attentive readers.”--Kirkus Reviews
“[Dehaene] rigorously examines our remarkable capacity for learning. The baby brain is especially awesome and not a ‘blank slate’ . . . Dehaene’s portrait of the human brain is fascinating.”--Booklist
“A richly instructive [book] for educators, parents, and others interested in how to most effectively foster the pursuit of knowledge.” --Publishers Weekly
Praise for Reading in the Brain:
"Splendid...Dehaene reveals how decades of low-tech experiments and high-tech brain-imaging studies have unwrapped the mystery of reading and revealed its component parts...A pleasure to read. [Dehaene] never oversimplifies; he takes the time to tell the whole story, and he tells it in a literate way."—The Wall Street Journal
"Masterful...a delight to read and scientifically precise."—Nature
Praise for Consciousness and the Brain:
"Ambitious . . . Dehaene offers nothing less than a blueprint for brainsplaining one of the world's deepest mysteries. . . . [A] fantastic book."—The Washington Post
"Dehaene is a maestro of the unconscious."—Scientific American Mind
"Brilliant... Essential reading for those who want to experience the excitement of the search for the mind in the brain."—Nature
Excerpt. © Reprinted by permission. All rights reserved.
Seven Definitions of Learning
What does "learning" mean? My first and most general definition is the following: to learn is to form an internal model of the external world.
You may not be aware of it, but your brain has acquired thousands of internal models of the outside world. Metaphorically speaking, they are like miniature mock-ups more or less faithful to the reality they represent. We all have in our brains, for example, a mental map of our neighborhood and our home: all we have to do is close our eyes and envision them with our thoughts. Obviously, none of us was born with this mental map; we had to acquire it through learning.
The richness of these mental models, which are, for the most part, unconscious, exceeds our imagination. For example, you possess a vast mental model of the English language, which allows you to understand the words you are reading right now and guess that plastovski is not an English word, whereas swoon and wistful are, and dragostan could be. Your brain also includes several models of your body: it constantly uses them to map the position of your limbs and to direct them while maintaining your balance. Other mental models encode your knowledge of objects and your interactions with them: knowing how to hold a pen, write, or ride a bike. Others even represent the minds of others: you possess a vast mental catalog of people who are close to you, their appearances, their voices, their tastes, and their quirks.
These mental models can generate hyper-realistic simulations of the universe around us. Did you ever notice that your brain sometimes projects the most authentic virtual reality shows, in which you can walk, move, dance, visit new places, have brilliant conversations, or feel strong emotions? These are your dreams! It is fascinating to realize that all the thoughts that come to us in our dreams, however complex, are simply the product of our free-running internal models of the world.
But we also dream up reality when awake: our brain constantly projects hypotheses and interpretative frameworks on the outside world. This is because, unbeknownst to us, every image that appears on our retina is ambiguous: whenever we see a plate, for instance, the image is compatible with an infinite number of ellipses. If we see the plate as round, even though the raw sense data picture it as an oval, it is because our brain supplies additional data: it has learned that the round shape is the most likely interpretation. Behind the scenes, our sensory areas ceaselessly compute with probabilities, and only the most likely model makes it into our consciousness. It is the brain's projections that ultimately give meaning to the flow of data that reaches us from our senses. In the absence of an internal model, raw sensory inputs would remain meaningless.
Learning allows our brain to grasp a fragment of reality that it had previously missed and to use it to build a new model of the world. It can be a part of external reality, as when we learn history, botany, or the map of a city, but our brain also learns to map the reality internal to our bodies, as when we learn to coordinate our actions and concentrate our thoughts in order to play the violin. In both cases, our brain internalizes a new aspect of reality: it adjusts its circuits to appropriate a domain that it had not mastered before.
Such adjustments, of course, have to be pretty clever. The power of learning lies in its ability to adjust to the external world and to correct for errors. But how does the brain of the learner "know" how to update its internal model when, say, it gets lost in its neighborhood, falls off its bike, loses a game of chess, or misspells the word ecstasy? We will now review seven key ideas that lie at the heart of present-day machine-learning algorithms and that may apply equally well to our brains: seven different definitions of what "learning" means.
Learning Is Adjusting the Parameters of a Mental Model
Adjusting a mental model is sometimes very simple. How, for example, do we reach out to an object that we see? In the seventeenth century, René Descartes (1596-1650) had already guessed that our nervous system must contain processing loops that transform visual inputs into muscular commands (see the figure on the next page). You can experience this for yourself: try grabbing an object while wearing somebody else's glasses, preferably someone who is very nearsighted. Even better, if you can, get a hold of prisms that shift your vision a dozen degrees to the left and try to catch the object. You will see that your first attempt is completely off: because of the prisms, your hand reaches to the right of the object that you are aiming for. Gradually, you adjust your movements to the left. Through successive trial and error, your gestures become more and more precise, as your brain learns to correct the offset of your eyes. Now take off the glasses and grab the object: you'll be surprised to see that your hand goes to the wrong location, now way too far to the left!
So, what happened? During this brief learning period, your brain adjusted its internal model of vision. A parameter of this model, one that corresponds to the offset between the visual scene and the orientation of your body, was set to a new value. During this recalibration process, which works by trial and error, what your brain did can be likened to what a hunter does in order to adjust his rifle's viewfinder: he takes a test shot, then uses it to adjust his scope, thus progressively shooting more and more accurately. This type of learning can be very fast: a few trials are enough to correct the gap between vision and action. However, the new parameter setting is not compatible with the old one; hence the systematic error we all make when we remove the prisms and return to normal vision.
Undeniably, this type of learning is a little particular, because it requires the adjustment of only a single parameter (viewing angle). Most of our learning is much more elaborate and requires adjusting tens, hundreds, or even thousands of millions of parameters (every synapse in the relevant brain circuit). The principle, however, is always the same: it boils down to searching, among myriad possible settings of the internal model, for those that best correspond to the state of the external world.
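The prism-adaptation loop described above can be sketched as a one-parameter model adjustment. The learning rate, trial count, and the 12-degree shift below are illustrative assumptions, not figures from the book:

```python
# One-parameter learning: recalibrating reach after prisms shift vision.
# The offset parameter plays the role of the brain's internal correction;
# the learning rate and shift size are illustrative assumptions.

def adapt(true_target, perceived_shift, trials=20, learning_rate=0.5):
    """Iteratively adjust a single offset parameter from reach errors."""
    offset = 0.0  # the internal model's correction, initially uncalibrated
    for _ in range(trials):
        reach = true_target + perceived_shift + offset  # where the hand goes
        error = reach - true_target                     # the observed miss
        offset -= learning_rate * error                 # correct the model
    return offset

# With a 12-degree prism shift, the learned offset converges toward -12:
offset = adapt(true_target=0.0, perceived_shift=12.0)
print(round(offset, 3))  # -12.0

# Removing the prisms while keeping the adapted offset reproduces the
# aftereffect: the hand now misses on the opposite side by the same amount.
aftereffect = 0.0 + 0.0 + offset
print(round(aftereffect, 3))  # -12.0
```

The same loop, with millions of parameters updated from each error, is the skeleton of the richer learning discussed next.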
An infant is born in Tokyo. Over the next two or three years, its internal model of language will have to adjust to the characteristics of the Japanese language. This baby's brain is like a machine with millions of settings at each level. Some of these settings, at the auditory level, determine which inventory of consonants and vowels is used in Japanese and the rules that allow them to be combined. A baby born into a Japanese family must discover which phonemes make up Japanese words and where to place the boundaries between those sounds. One of the parameters, for example, concerns the distinction between the sounds /R/ and /L/: this is a crucial contrast in English, but not in Japanese, which makes no distinction between Bill Clinton's election and his erection. . . . Each baby must thus fix a set of parameters that collectively specify which categories of speech sounds are relevant for his or her native language.
A similar learning procedure is duplicated at each level, from sound patterns to vocabulary, grammar, and meaning. The brain is organized as a hierarchy of models of reality, each nested inside the next like Russian dolls, and learning means using the incoming data to set the parameters at every level of this hierarchy. Let's consider a high-level example: the acquisition of grammatical rules. Another key difference the baby must learn between Japanese and English concerns the order of words. In a canonical sentence with a subject, a verb, and a direct object, the English language first states the subject, then the verb, and finally its object: "John + eats + an apple." In Japanese, on the other hand, the most common order is subject, then object, then verb: "John + an apple + eats." What is remarkable is that the order is also reversed for prepositions (which logically become post-positions), possessives, and many other parts of speech. The sentence "My uncle wants to work in Boston" thus becomes mumbo jumbo worthy of Yoda from Star Wars: "Uncle my, Boston in, work wants," which makes perfect sense to a Japanese speaker.
Fascinatingly, these reversals are not independent of one another. Linguists think that they arise from the setting of a single parameter called the "head position": the defining word of a phrase, its head, is always placed first in English (in Paris, my uncle, wants to live), but last in Japanese (Paris in, uncle my, live wants). This binary parameter distinguishes many languages, even some that are not historically linked (the Navajo language, for example, follows the same rules as Japanese). In order to learn English or Japanese, one of the things that a child must figure out is how to set the head position parameter in his internal language model.
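As a toy illustration of how one binary setting can reorder whole phrases, here is a sketch of a hypothetical `linearize` helper; the phrase structures are simplified assumptions, not a linguist's formalism:

```python
# Toy illustration of the "head position" parameter: a single binary setting
# decides whether the head of each phrase comes first (English-like) or
# last (Japanese-like). The helper and examples are illustrative assumptions.

def linearize(head, complement, head_first):
    """Order a phrase's head and complement according to one parameter."""
    return [head] + complement if head_first else complement + [head]

# Verb phrase: head "eats", complement ["an", "apple"]
print(" ".join(linearize("eats", ["an", "apple"], head_first=True)))
# English order: eats an apple
print(" ".join(linearize("eats", ["an", "apple"], head_first=False)))
# Japanese-like order: an apple eats

# Prepositional phrase: head "in", complement ["Boston"]
print(" ".join(linearize("in", ["Boston"], head_first=True)))   # in Boston
print(" ".join(linearize("in", ["Boston"], head_first=False)))  # Boston in
```

One flipped flag reorders verb phrases and prepositional phrases alike, mirroring how the single head-position parameter propagates through a grammar.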
Learning Is Exploiting a Combinatorial Explosion
Can language learning really be reduced to the setting of some parameters? If this seems hard to believe, it is because we are unable to fathom the extraordinary number of possibilities that open up as soon as we increase the number of adjustable parameters. This is called the "combinatorial explosion": the exponential increase that occurs when you combine even a small number of possibilities. Suppose that the grammar of the world's languages can be described by about fifty binary parameters, as some linguists postulate. This yields 2^50 combinations, which is over one million billion possible languages, or a 1 followed by fifteen zeros! The syntactic rules of the world's three thousand languages easily fit into this gigantic space. However, in our brain, there aren't just fifty adjustable parameters, but an astoundingly larger number: eighty-six billion neurons, each with about ten thousand synaptic contacts whose strength can vary. The space of mental representations that opens up is practically infinite.
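A quick check of the arithmetic: fifty binary parameters do yield on the order of a million billion settings.

```python
# Fifty binary grammar parameters yield 2**50 possible settings.
n_languages = 2 ** 50
print(n_languages)             # 1125899906842624
print(n_languages > 10 ** 15)  # True: over one million billion
```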
Human languages heavily exploit these combinations at all levels. Consider, for instance, the mental lexicon: the set of words that we know and whose model we carry around with us. Each of us has learned about fifty thousand words with the most diverse meanings. This seems like a huge lexicon, but we manage to acquire it in about a decade because we can decompose the learning problem. Indeed, considering that these fifty thousand words are on average two syllables, each consisting of about three phonemes, taken from the forty-four phonemes in English, the binary coding of all these words requires less than two million elementary binary choices ("bits," whose value is 0 or 1). In other words, all our knowledge of the dictionary would fit in a small 250-kilobyte computer file (each byte comprising eight bits).
This mental lexicon could be compressed to an even smaller size if we took into account the many redundancies that govern words. Drawing six letters at random, like "xfdrga," does not generate an English word. Real words are composed of a pyramid of syllables that are assembled according to strict rules. And this is true at all levels: sentences are regular collections of words, which are regular collections of syllables, which are regular collections of phonemes. The combinations are both vast (because one chooses among several tens or hundreds of elements) and bounded (because only certain combinations are allowed). To learn a language is to discover the parameters that govern these combinations at all levels.
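The lexicon estimate above can be reproduced as back-of-the-envelope arithmetic; the figures (50,000 words, two syllables per word, three phonemes per syllable, 44 English phonemes) are the ones the passage assumes:

```python
import math

# Back-of-the-envelope version of the mental-lexicon estimate from the text.
# Each phoneme is one choice among 44, costing log2(44) bits of information.
words = 50_000
phonemes_per_word = 2 * 3                  # two syllables of three phonemes
bits_per_phoneme = math.log2(44)           # about 5.46 bits per choice
total_bits = words * phonemes_per_word * bits_per_phoneme

print(round(total_bits))             # under two million bits, as the text says
print(round(total_bits / 8 / 1000))  # roughly 200 kilobytes uncompressed
```

This matches the text's claim that the whole lexicon fits in a small file of about 250 kilobytes, before any compression of the redundancies between words.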
In summary, the human brain breaks down the problem of learning by creating a hierarchical, multilevel model. This is particularly obvious in the case of language, from elementary sounds to the whole sentence or even discourse, but the same principle of hierarchical decomposition is reproduced in all sensory systems. Some brain areas capture low-level patterns: they see the world through a very small temporal and spatial window, thus analyzing the smallest patterns. For example, in the primary visual area, the first region of the cortex to receive visual inputs, each neuron analyzes only a very small portion of the retina. It sees the world through a pinhole and, as a result, discovers very low-level regularities, such as the presence of a moving oblique line. Millions of neurons do the same work at different points in the retina, and their outputs become the inputs of the next level, which thus detects "regularities of regularities," and so on and so forth. At each level, the scale broadens: the brain seeks regularities on increasingly vast scales, in both time and space. From this hierarchy emerges the ability to detect increasingly complex objects or concepts: a line, a finger, a hand, an arm, a human body . . . no, wait, two, there are two people facing each other, a handshake. . . . It is the first Trump-Macron encounter!
Learning Is Minimizing Errors
The computer algorithms that we call "artificial neural networks" are directly inspired by the hierarchical organization of the cortex. Like the cortex, they contain a pyramid of successive layers, each of which attempts to discover deeper regularities than the previous one. Because these consecutive layers organize the incoming data in deeper and deeper ways, they are also called "deep networks." Each layer, by itself, is capable of discovering only an extremely simple part of the external reality (mathematicians speak of a linearly separable problem, i.e., each neuron can separate the data into only two categories, A and B, by drawing a straight line through them). Assemble many of these layers, however, and you get an extremely powerful learning device, capable of discovering complex structures and adjusting to very diverse problems. Today's artificial neural networks, which take advantage of the advances in computer chips, are also deep, in the sense that they contain dozens of successive layers. These layers become increasingly insightful and capable of identifying abstract properties the further away they are from the sensory input.
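The point about linear separability can be made concrete with the classic XOR example: no single linear unit can compute it, but adding just one layer suffices. The weights below are hand-set for illustration, not learned:

```python
# A single linear unit can only carve its inputs with a straight line, so it
# cannot compute XOR. Stacking one extra layer solves it. The thresholds and
# weights here are hand-set for illustration, not learned by an algorithm.

def step(x):
    """Threshold unit: fires (1) when its weighted input is positive."""
    return 1 if x > 0 else 0

def xor_two_layer(a, b):
    # Hidden layer: two linearly separable subproblems, OR and AND.
    h_or = step(a + b - 0.5)
    h_and = step(a + b - 1.5)
    # Output layer: OR but not AND, which is exactly XOR.
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_two_layer(a, b))
# The four (a, b) points cannot be split by any one line into the XOR
# classes; the hidden layer re-maps them so the output unit can.
```

Each hidden unit draws one straight line; the combination of the two lines carves out a region no single line could.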
Let's take the example of the LeNet algorithm, created by the French pioneer of neural networks, Yann LeCun (see figure 2 in the color insert). As early as the 1990s, this neural network achieved remarkable performance in the recognition of handwritten characters. For years, Canada Post used it to automatically process handwritten postal codes. How does it work? The algorithm receives the image of a written character as an input, in the form of pixels, and it proposes, as an output, a tentative interpretation: one out of the ten possible digits or twenty-six letters. The artificial network contains a hierarchy of processing units that look a bit like neurons and form successive layers. The first layers are connected directly with the image: they apply simple filters that recognize lines and curve fragments. The layers higher up in the hierarchy, however, contain wider and more complex filters. Higher-level units can therefore learn to recognize larger and larger portions of the image: the curve of a 2, the loop of an O, or the parallel lines of a Z . . . until we reach, at the output level, artificial neurons that respond to a character regardless of its position, font, or case. All these properties are not imposed by a programmer: they result entirely from the millions of connections that link the units. These connections, once adjusted by an automated algorithm, define the filter that each neuron applies to its inputs: their settings explain why one neuron responds to the number 2 and another to the number 3.
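As a minimal sketch of what a first-layer filter might do, here is a tiny template matcher that scans an image for an oblique line. The filter shape, toy image, and scoring are illustrative assumptions, not LeNet's actual learned filters:

```python
# Minimal sketch of a low-level visual filter: slide a small template over
# the pixels and respond where it matches. The 3x3 diagonal template and the
# toy 4x4 image are illustrative assumptions, not LeNet's learned filters.

filter_diag = [(0, 0), (1, 1), (2, 2)]  # pixels forming an oblique line

image = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]

def response(image, top, left):
    """Sum of the pixel values under the filter at one window position."""
    return sum(image[top + r][left + c] for r, c in filter_diag)

# Scan every 3x3 window; the strongest response marks the detected line.
responses = {
    (top, left): response(image, top, left)
    for top in range(len(image) - 2)
    for left in range(len(image[0]) - 2)
}
best = max(responses, key=responses.get)
print(best, responses[best])  # (0, 0) 3: a full diagonal match at the corner
```

A real first layer applies many such filters at every position; the next layer then combines their outputs into curves, loops, and eventually whole characters.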
Product details
- Publisher : Penguin Books (February 2, 2021)
- Language : English
- Paperback : 352 pages
- ISBN-10 : 0525559906
- ISBN-13 : 978-0525559900
- Item Weight : 2.31 pounds
- Dimensions : 5.52 x 0.74 x 8.43 inches
- Best Sellers Rank: #117,380 in Books (See Top 100 in Books)
- #187 in Medical Cognitive Psychology
- #350 in Cognitive Psychology (Books)
- #522 in Biology (Books)
About the author
![Stanislas Dehaene](https://cdn.statically.io/img/m.media-amazon.com/images/S/amzn-author-media-prod/mb8ik2rtv2ravjiagvbr7ihtgj._SY600_.jpg)
Professor Stanislas Dehaene holds the Chair of Experimental Cognitive Psychology at the Collège de France, Paris. He directs the INSERM-CEA Cognitive Neuroimaging Unit at NeuroSpin in Saclay, south of Paris, France's advanced brain imaging research center. He is also the president of the Scientific Council for Education of the French Ministry of Education.
Stanislas Dehaene is recognized as one of Europe’s most prominent brain scientists. He is well known for his pioneering studies of “the number sense,” the innate brain circuits that we share with other primates and that allow us to understand numbers and mathematics. He is also a specialist in reading and uncovered the function of the “visual word form area,” a left-hemisphere region that specializes in letters when we learn to read. Those discoveries have fostered his strong interest in learning and education. With his wife Ghislaine Dehaene-Lambertz, he has made fundamental discoveries on infants’ brain organization for language, and on how education in mathematics, reading, and bilingualism shapes the human brain. He has also observed some of the earliest “signatures of consciousness,” i.e. patterns of brain responses that are unique to conscious processing and can be used to diagnose coma and vegetative-state patients.
Prof. Dehaene has accumulated numerous awards and prizes. In 2014, he was awarded the Grete Lundbeck Brain Prize, a €1 million award widely considered the Nobel Prize of the field (with G. Rizzolatti and T. Robbins). He is also a member of eight academies: the US National Academy of Sciences, the American Philosophical Society, the Pontifical Academy of Sciences, the French Académie des Sciences, the British Academy, Academia Europaea, the Royal Academies for Science and the Arts of Belgium, and the European Molecular Biology Organization (EMBO).
With an h-index of 173, Prof. Dehaene is a Thomson Reuters highly cited researcher. His research has been featured in numerous publications, including a full-length portrait in the New Yorker (“The Numbers Guy,” by Jim Holt, 2008). He is the author of five books, three television documentaries, and over 400 scientific publications in journals such as Science, Nature, Nature Neuroscience, and PNAS. Seventy of his articles have been cited more than 500 times.
His books have been widely successful, translated into fifteen languages, and several have received awards for best science writing:
• The Number Sense (1999): Jean Rostand award
• Reading in the Brain (2009): a Washington Post science book of the year
• Consciousness and the Brain (2013): Grand Prix RTL-Lire for best science book of the year
• How We Learn: Why Brains Learn Better Than Any Machine . . . for Now (2020), Penguin Viking: Book of the Year, French Society for Neurology
• Seeing the Mind (2023), MIT Press
Customer reviews
Top reviews from the United States
How We Learn is Stanislas Dehaene’s fourth book that I have read, and it does not disappoint. Dehaene effortlessly and compassionately moves between the abstract and the useful, carefully and methodically guiding the reader through a veritable mountain range of information from fields as different as neuroscience and education. And The Wall Street Journal got it right for this book as well when it declared (of Reading In The Brain) that Dehaene “never oversimplifies; he takes the time to tell the whole story; and he tells it in a literate way.”
All in all this is an incredible book, whether you’re interested in neuroscience, education, how brain plasticity and literacy are related, AI or even the brains of babies. There’s really something in it for everyone, whether you’re looking to apply your knowledge to study (or help someone else study) more effectively, or improve your own understanding of how the brain works. Dehaene is on the cutting edge, and he’s incredibly compassionate without ever being tendentious or moralistic. Below is a more detailed breakdown.
How We Learn is divided into three parts. Part One answers the question “What is Learning?” In the first chapter he discusses seven definitions of learning. One of the most interesting definitions (which isn’t even included among the first seven) is “Learning is inferring the grammar of a domain” in which he submits: “Characteristic of the human species is a relentless search for abstract rules, high-level conclusions that are extracted from a specific situation and subsequently tested on new observations” (35).
In Chapter 2 Dehaene wrestles for 20 pages with “Why our brain learns better than current machines,” continuing the discussion of learning all the while. Dehaene emphatically disagrees with the belief that “machines are about to overtake us” (27). A handful of the things he argues humans still do much better includes: Learning Abstract concepts; Data-efficient learning; Social learning; One-trial Learning; and, Systematicity and the language of thought.
In Part 2 Dehaene delves into “How Our Brain Learns.” This is the most scientifically granular section and, for many more technical readers, may be the most interesting. The neuroscience underpinning the four chapters in Part 2 is where Dehaene really shows off how dynamic a mind he has. Essentially, human thought is itself a kind of symbolic language. Furthermore, the literacy of thought starts almost as soon as a baby starts to develop as a fetus. By the time a baby is born, it is an incredibly well-developed instrument ready for its second (rather than first) phase of life, for which it has been preparing for three seasons. Dehaene’s thoughts and work on infants alone in this book are well worth ten times its price.
Part Three, more of the applied education section, starts with the “Four Pillars of Learning”: Attention (Ch 7, about 30 pages), Active Engagement (Ch 8, about 20 pages), Error Feedback (Ch 9, about 20 pages), Consolidation (Ch 10, about 15 pages). Each of these chapters is a combinatory mine of research, experimental data and studies, as well as practical advice for learners and teachers, reminiscent of Brown, Roediger and McDaniel’s excellent book Make It Stick.
The following are some kernels of very useful information from Chapters 7-10:
“The intellectual quotient [IQ] is just a behavioral ability, and as such, it is far from being unchangeable by education. Like any of our abilities, IQ rests on specific brain circuits whose synaptic weights can be changed by training” (167).
“A passive organism does not learn” (178).
“To learn, our brain must first form a hypothetical mental model [algorithm] of the outside world, which it then projects onto its environment and puts to a test by comparing its predictions to what it receives from the senses. This algorithm implies an active, engaged, and attentive posture. Motivation is essential: we learn well only if we have a clear goal and we fully commit to reaching it” (178).
“While it is crucial for students to be motivated, active, and engaged, this does not mean they should be left to their own devices” (184).
“Pure discovery learning, the idea that children can teach themselves, is one of the many educational myths that have been debunked but still remain curiously popular. […] Two other major misconceptions are linked to it: the myth of the digital native [and] the myth of learning styles” (185).
“Zero error, zero learning,” but… “We do not need an actual error in order to learn—all we need is an internal sign that travels in the brain” (204)
“It would be wrong, therefore, to believe that what matters most for learning is to make a lot of mistakes […] What matters is receiving explicit feedback that reduces the learner’s uncertainty. […] The theory of error backpropogation predicts: every unexpected event leads to corresponding adjustment of the internal model of the world" (205).
“This is the golden rule: it is always better to spread out the training periods rather than cram them into a single run. […] Decades of psychological research show that if you have a fixed amount of time to learn something, spacing out the lessons is a much more effective strategy than grouping them” (218).
“Sleep and leaning are strongly linked” (228).
“Computer scientists have already designed several learning algorithms that mimic the sleep/wake cycle” (231).
“From an educational perspective there is little doubt that improving the length and quality of sleep can be an effective intervention for all children, especially those with learning difficulties” (235).
Part Three ends with the Dehaene’s “Conclusion: Reconciling Education with Neuroscience.” He conveniently provides a bullet point summary as well as “Thirteen Take-Home Messages to Optimize Children’s Potential.” Here they are, without their supporting paragraphs.
Do not underestimate children.
Take advantage of the brain’s sensitivity periods.
Enrich the environment.
Rescind the idea that all children are different.
Pat attention to attention.
Keep children active, curious, engaged, and autonomous.
Make every school day enjoyable.
Encourage efforts.
Help students deepen their thinking.
Set clear learning objectives.
Accept and correct mistakes.
Practice regularly.
Let students sleep.
Dehaene ends with his insistence that “schools should devote more time to parents training,” and that “scientists must engage with teachers and schools in order to consolidate the growing field of educational science” (244).
![Customer image](https://cdn.statically.io/img/images-na.ssl-images-amazon.com/images/G/01/x-locale/common/transparent-pixel._V192234675_.gif)
Reviewed in the United States on January 29, 2020
How We Learn is Stanislas Dehaene’s fourth book that I have read, and it does not disappoint. Dehaene effortlessly and compassionately moves between the abstract and the useful, carefully and methodically guiding the reader through a veritable mountain range of information from fields as different as neuroscience and education. And The Wall Street Journal got it right for this book as well when it declared (of Reading In The Brain) that Dehaene “never oversimplifies; he takes the time to tell the whole story; and he tells it in a literate way.”
All in all this is an incredible book, whether you’re interested in neuroscience, education, how brain plasticity and literacy are related, AI or even the brains of babies. There’s really something in it for everyone, whether you’re looking to apply your knowledge to study (or help someone else study) more effectively, or improve your own understanding of how the brain works. Dehaene is on the cutting edge, and he’s incredibly compassionate without ever being tendentious or moralistic. Below is a more detailed breakdown.
How We Learn is divided into three parts. Part One answers the question “What is Learning?” In the first chapter he discusses seven definitions of learning. One of the most interesting definitions (which isn’t even included among the first seven) is “Learning is inferring the grammar of a domain” in which he submits: “Characteristic of the human species is a relentless search for abstract rules, high-level conclusions that are extracted from a specific situation and subsequently tested on new observations” (35).
In Chapter 2 Dehaene wrestles for 20 pages with “Why our brain learns better than current machines,” continuing the discussion of learning all the while. He emphatically disagrees with the belief that “machines are about to overtake us” (27). Among the things he argues humans still do much better: learning abstract concepts, data-efficient learning, social learning, one-trial learning, and systematicity and the language of thought.
In Part 2 Dehaene delves into “How Our Brain Learns.” This is the most scientifically granular section and, for many technical readers, may be the most interesting. The neuroscience underpinning its four chapters is where Dehaene really shows how dynamic a mind he has. Essentially, human thought is itself a kind of symbolic language, and the literacy of thought begins almost as soon as the fetus starts to develop. By the time a baby is born, it is an incredibly well-developed instrument, ready for its second (rather than first) phase of life after roughly three trimesters of preparation. Dehaene’s thoughts and work on infants alone are well worth ten times the book’s price.
Part Three, the more applied education section, starts with the “Four Pillars of Learning”: Attention (Ch 7, about 30 pages), Active Engagement (Ch 8, about 20 pages), Error Feedback (Ch 9, about 20 pages), and Consolidation (Ch 10, about 15 pages). Each of these chapters is a rich mine of research, experimental data, and studies, as well as practical advice for learners and teachers, reminiscent of Brown, Roediger, and McDaniel’s excellent book Make It Stick.
The following are some kernels of very useful information from Chapters 7-10:
“The intellectual quotient [IQ] is just a behavioral ability, and as such, it is far from being unchangeable by education. Like any of our abilities, IQ rests on specific brain circuits whose synaptic weights can be changed by training” (167).
“A passive organism does not learn” (178).
“To learn, our brain must first form a hypothetical mental model [algorithm] of the outside world, which it then projects onto its environment and puts to a test by comparing its predictions to what it receives from the senses. This algorithm implies an active, engaged, and attentive posture. Motivation is essential: we learn well only if we have a clear goal and we fully commit to reaching it” (178).
“While it is crucial for students to be motivated, active, and engaged, this does not mean they should be left to their own devices” (184).
“Pure discovery learning, the idea that children can teach themselves, is one of the many educational myths that have been debunked but still remain curiously popular. […] Two other major misconceptions are linked to it: the myth of the digital native [and] the myth of learning styles” (185).
“Zero error, zero learning,” but… “We do not need an actual error in order to learn—all we need is an internal sign that travels in the brain” (204).
“It would be wrong, therefore, to believe that what matters most for learning is to make a lot of mistakes […] What matters is receiving explicit feedback that reduces the learner’s uncertainty. […] The theory of error backpropagation predicts: every unexpected event leads to corresponding adjustment of the internal model of the world” (205).
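The mechanism these two quotes describe — project a prediction, compare it against feedback, and adjust the internal model in proportion to the surprise — can be sketched as a minimal delta-rule learner. This is my own illustrative sketch of the general idea, not code from the book; the variable names and numbers are arbitrary:

```python
def update(weights, inputs, target, learning_rate=0.1):
    """One prediction-feedback-correction step (delta rule)."""
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = target - prediction  # the internal "surprise" signal
    # Zero error, zero learning: a perfect prediction changes nothing.
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]
```

Repeated rounds of explicit feedback shrink the error step by step, which is the sense in which feedback “reduces the learner’s uncertainty.”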
“This is the golden rule: it is always better to spread out the training periods rather than cram them into a single run. […] Decades of psychological research show that if you have a fixed amount of time to learn something, spacing out the lessons is a much more effective strategy than grouping them” (218).
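The spacing principle in this quote is what spaced-repetition schedulers automate. A minimal sketch, assuming a Leitner-style doubling rule (my illustration, not an algorithm from the book):

```python
def next_interval(days, recalled):
    """Double the review gap after a successful recall;
    drop back to one day after a lapse."""
    return days * 2 if recalled else 1

# A run of successful reviews spreads the sessions further and further apart:
schedule, days = [], 1
for _ in range(5):
    days = next_interval(days, recalled=True)
    schedule.append(days)
# schedule is now [2, 4, 8, 16, 32]
```

The widening gaps are the point: the same five sessions crammed into one day would produce far weaker retention than this expanding schedule.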
“Sleep and learning are strongly linked” (228).
“Computer scientists have already designed several learning algorithms that mimic the sleep/wake cycle” (231).
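Dehaene doesn’t detail those algorithms in this passage, but the family he alludes to alternates an online “wake” phase (gather experience) with an offline “sleep” phase (replay it to consolidate). The sketch below is a toy illustration of that two-phase structure only — the buffer-and-replay design and the averaging model are my own simplification:

```python
import random

def wake_phase(buffer, experiences):
    """'Wake': interact with the world and store what happened."""
    buffer.extend(experiences)

def sleep_phase(buffer, model, replays=200, rate=0.5):
    """'Sleep': replay stored experiences offline, nudging the model
    toward each observed input-outcome pair."""
    for _ in range(replays):
        x, y = random.choice(buffer)
        model[x] = model.get(x, 0.0) + rate * (y - model.get(x, 0.0))
    return model
```

Replaying a small buffer many times lets the model consolidate far more than a single pass over the raw experience would — a rough analogue of what memory replay during sleep is thought to do.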
“From an educational perspective there is little doubt that improving the length and quality of sleep can be an effective intervention for all children, especially those with learning difficulties” (235).
Part Three ends with Dehaene’s “Conclusion: Reconciling Education with Neuroscience.” He conveniently provides a bullet-point summary as well as “Thirteen Take-Home Messages to Optimize Children’s Potential.” Here they are, without their supporting paragraphs.
Do not underestimate children.
Take advantage of the brain’s sensitivity periods.
Enrich the environment.
Rescind the idea that all children are different.
Pay attention to attention.
Keep children active, curious, engaged, and autonomous.
Make every school day enjoyable.
Encourage efforts.
Help students deepen their thinking.
Set clear learning objectives.
Accept and correct mistakes.
Practice regularly.
Let students sleep.
Dehaene ends with his insistence that “schools should devote more time to parents training,” and that “scientists must engage with teachers and schools in order to consolidate the growing field of educational science” (244).
![Customer image](https://cdn.statically.io/img/images-na.ssl-images-amazon.com/images/I/61pEfpDxJaL._SY88.jpg)
In terms of depth and writing style it's approachable for the average science reader, if maybe a little dry. I would say it's somewhere between the pop-science level of discourse and a serious college text or a book written for scientists and doctors. Books by Steven Pinker and others give more in-depth treatment to specific neural processes, like how the brain stores and makes use of its own symbology, but there's a price to be paid in those kinds of books: namely, you sometimes need to re-read passages to really understand them, which generally is not required here.
Bottom line: I liked this book enough and learned enough that I will be buying more of the author's books on cognition and the brain as a problem solving machine. I think any will be a safe bet if you're into reading about the human brain and how it works.
Top reviews from other countries
The book describes many of neuroscience's advances as they apply to pedagogy and teaching methods, and how to proceed at different stages of life to make education more efficient and of higher quality. It also shows how scientific experiments knock down the myths and traditions we cling to in traditional teaching, which ought to be revisited.
I myself always thought I learned best visually, an idea the research discards, as the author demonstrates.
The four pillars of learning: attention, active engagement, examination and discussion of errors (errors are a fundamental part of the learning process), and consolidation of knowledge. With these points well described in the book, I believe it is possible to find the most effective methods. Best of all, the book does not even pretend to be a definitive answer, but rather a starting point for discussing pedagogical improvement.
The Kindle edition is excellent, with no problems detected; however, the language is not the simplest, which may demand extra attention while reading, or even a fairly advanced level of English to fully grasp the ideas.
Recommended for parents, teachers, or anyone interested in the topic who wants to expand their knowledge of the science of cognitive methods.