
Scientists map brains of the blind to solve mysteries of human brain specialization

Jerusalem, January 23, 2015 – Studying the brain activity of blind people, scientists at the Hebrew University of Jerusalem are challenging the standard view of how the human brain specializes to perform different kinds of tasks, and shedding new light on how our brains can adapt to the rapid cultural and technological changes of the 21st century.

Research Highlights:

  1. Understanding the brain activity of the blind can help solve one of the oddest puzzles of the human brain: how can tasks such as reading and recognizing numerical symbols have their own brain regions if these concepts were invented only several thousand years ago (negligible on an evolutionary timescale)? What was the job of these regions before their invention?
  2. New research published today in Nature Communications demonstrates that vision is not a prerequisite for “visual” cortical regions to develop these preferences.
  3. This stands in contrast to the main current theory explaining this specialization, which suggests these regions were adapted from other visual tasks, such as recognizing the angles of lines and their intersections.
  4. These results show that the required condition is not sensory-based (vision) but rather connectivity- and processing-based. For example, blind people reading Braille using their fingers will still use the “visual” areas.
  5. This research reveals unique connectivity patterns between the visual-number-form-area (VNFA) and quantity-processing areas in the right hemisphere, and between the visual-word-form-area (VWFA) and language-processing areas in the left hemisphere.
  6. This type of mechanism can help explain how our brain adapts quickly in an era of constant cultural and technological innovation.

The accepted view in previous decades was that the brain is divided into distinct regions mainly by the sensory input that activates them, such as the visual cortex for sight and the auditory cortex for sound. Within these large regions, sub-regions have been defined that are specialized for specific tasks such as the “visual word form area,” a functional brain region believed to identify words and letters from shape images even before they are associated with sounds or meanings. Similarly, there is another area that specializes in number symbols.

Now, a series of studies at the Hebrew University’s Amedi Lab for Brain and Multisensory Research challenges this view using unique tools known as Sensory Substitution Devices (SSDs).

The Amedi Lab is headed by Prof. Amir Amedi in the Department of Medical Neurobiology at the Institute for Medical Research Israel-Canada at the Hebrew University’s Faculty of Medicine. The Lab is also a founding member of the Hebrew University’s Edmond & Lily Safra Center for Brain Science.

Sensory Substitution Devices take information from one sense and present it in another, for example enabling blind people to “see” by using other senses such as touch or hearing. By using a smartphone or webcam to translate a visual image into a distinct soundscape, SSDs enable blind users to form a mental image of objects, including their physical dimensions and color. With intense training (now available online at www.amedilab.com), blind users can even “read” letters by identifying their distinct soundscapes.
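To give a concrete sense of how such a visual-to-auditory translation can work in principle, the sketch below shows one simple, hypothetical mapping (an illustration of the general idea, not the lab’s actual algorithm): the image is scanned column by column from left to right, the vertical position of each pixel sets the pitch of a tone, and its brightness sets that tone’s loudness.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        min_freq=200.0, max_freq=4000.0):
    """Convert a 2-D grayscale image (values in 0..1) into a mono soundscape.

    Hypothetical sketch of a sensory-substitution mapping:
    - columns are scanned left to right over `duration` seconds,
    - higher rows map to higher sine-wave frequencies,
    - pixel brightness controls the loudness of each frequency.
    This illustrates the general principle only, not the EyeMusic algorithm.
    """
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    # Logarithmically spaced frequencies; the top image row gets the highest pitch.
    freqs = np.geomspace(max_freq, min_freq, n_rows)
    sound = []
    for col in range(n_cols):
        column = image[:, col]                           # brightness of each row
        tones = np.sin(2 * np.pi * freqs[:, None] * t)   # one sine wave per row
        chunk = (column[:, None] * tones).sum(axis=0)    # weight tones by brightness
        sound.append(chunk)
    sound = np.concatenate(sound)
    # Normalize to [-1, 1] so the result can be written out as audio.
    return sound / (np.abs(sound).max() + 1e-9)
```

Played back through any standard audio library, the resulting waveform is a short soundscape that a trained listener could learn to associate with the original shape.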

“These devices can help the blind in their everyday life,” explains Prof. Amir Amedi, “but they also open unique research opportunities by letting us see what happens in brain regions normally associated with one sense, when the relevant information comes from another.”

Amedi’s team was interested in whether blind subjects using sensory substitution would, like sighted people, use the visual-word-form-area sub-region of the brain to identify shape images, or whether this area is specialized exclusively for visual reading with the eyes.

In a new paper published today in Nature Communications as “A number-form area in the blind,” Sami Abboud and colleagues in the Amedi Lab show that these same “visual” brain regions are used by blind subjects who are actually “seeing” through sound. According to lead researcher Sami Abboud, “These regions are preserved and functional even among the congenitally blind who have never experienced vision.”

The researchers used functional MRI (fMRI) to study the brains of blind subjects in real time while they used an SSD to identify objects by their sound. They found that when it comes to recognizing letters, body postures and more, specialized brain areas are activated by the task at hand, rather than by the sense (vision or hearing) being used.

The Amedi team examined a recently identified area in the brain’s right hemisphere known as the ‘Visual Number Form Area.’ The very existence of such an area, as distinct from the visual-word-form-area, is surprising, since symbols such as ‘O’ can be used either as the letter O or as the number zero, despite being visually identical.

Previous attempts to explain why both the word and number areas exist, such as the ‘neuronal recycling’ hypothesis of Dehaene and Cohen (2007), suggest that in the distant past these areas were specialized for other visual tasks, such as recognizing small lines, their angles and intersections, and were therefore best suited to take on reading and number recognition. However, this new work shows that congenitally blind users of sensory substitution devices still have these exact same areas, suggesting that vision is not the key to their development.

“Beyond the implications for neuroscience theory, these results also offer us hope for visual rehabilitation,” says Amedi. “They suggest that by using the right technology, even non-invasively, we can re-awaken the visually deprived brain to process tasks considered visual, even after many years of blindness.”

But if the specific sensory input channel is not the key to developing these brain regions, why do these functions develop in their specific anatomical locations? The new research points to unique connectivity patterns between the visual-word-form-area and language-processing areas, and between the visual-number-form-area and quantity-processing areas.

Amedi suggests, “This means that the main criterion for a reading area to develop is not the letters’ visual symbols, but rather the area’s connectivity to the brain’s language-processing centers. Similarly, a number area will develop in a region that already has connections to quantity-processing regions.”

“If we take this one step further,” adds Amedi, “this connectivity-based mechanism might explain how brain areas could have developed so quickly on an evolutionary timescale. We’ve only been reading and writing for several thousand years, but the connectivity between relevant areas allowed us to create unique new centers for these specialized tasks. This same ‘cultural recycling’ of brain circuits could also be true for how we will adapt to new technological and cultural innovations in the current era of rapid innovation, even approaching the potential of the Singularity.”

The research was supported by a European Research Council grant; the Gatsby Charitable Foundation; the James S. McDonnell Foundation scholar award; the Israel Science Foundation; and the Edmond and Lily Safra Center for Brain Sciences (ELSC) Vision center grant.

About the Amedi Lab for Brain and Multisensory Research:

The Amedi Lab for Brain and Multisensory Research is headed by Prof. Amir Amedi in the Department of Medical Neurobiology at the Institute for Medical Research Israel-Canada (IMRIC) at the Hebrew University of Jerusalem’s Faculty of Medicine. The Lab is also a founding member of the Hebrew University’s Edmond & Lily Safra Center for Brain Science (ELSC).

The Lab focuses on understanding the human brain, brain rehabilitation and plasticity, with an emphasis on helping the blind and visually impaired. Several patented devices developed in the lab help people who are blind identify objects and navigate using a technique called “Sensory Substitution” (mainly ‘seeing’ by translating an image taken from a simple smartphone or webcam into sound, with no need for special hardware).

EyeMusic is a tool that provides visual information through a musical auditory experience. Using a camera mounted on their glasses, users hear musical notes that create a mental image of the visual scene in front of them. Results include enabling blind users to find objects such as shoes in a cluttered room, choose a red apple out of a bowl of green ones, and more. For a clear explanation video in TEDx format, see http://goo.gl/Lcb7QV. Mastering the EyeMusic requires intensive training (at least 20-30 hours for basic practical use). To make this technology more widely available to the public, the team has recently created a new online training website (www.amedilab.com) and made EyeMusic freely available for download on iTunes and Google Play.

Another device developed at the Amedi Lab is the EyeCane, which uses an algorithm to translate distance into sound and vibrations. The EyeCane aims to boost mobility and navigation for the visually disabled, augmenting the traditional white cane with increased range (up to 5 m), wider angles and greater unobtrusiveness. Within 5 minutes of training, users can successfully navigate, detect and avoid obstacles, and estimate distances. Recently published EyeCane research demonstrated that using the EyeCane distinctly improves users’ mobility patterns. “Our users no longer cling to the walls,” explains Shachar Maidenbaum, one of the researchers working on this project. “Usually the blind avoid large open spaces since they don’t have ‘anchors’ in them, but the expanded sensory information from the EyeCane lets them easily walk down the center of a corridor or cut through the center of large rooms.”
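As a rough illustration of the distance-to-feedback idea, the hypothetical sketch below maps a measured distance to a beep rate and a vibration strength, with faster and stronger feedback as an obstacle gets closer; the specific numbers and the mapping itself are assumptions for illustration, not the EyeCane’s actual design.

```python
def distance_to_feedback(distance_m, max_range_m=5.0):
    """Map a measured distance (in meters) to a beep rate and vibration strength.

    Hypothetical sketch of the general EyeCane idea: closer obstacles produce
    faster beeps and stronger vibration; beyond max_range_m no feedback is given.
    """
    if distance_m >= max_range_m:
        return {"beeps_per_second": 0.0, "vibration_strength": 0.0}
    proximity = 1.0 - distance_m / max_range_m    # 0 (far) .. 1 (touching)
    return {
        "beeps_per_second": 1.0 + 9.0 * proximity,   # 1 Hz when far, 10 Hz when close
        "vibration_strength": proximity,             # 0..1 motor duty cycle
    }

# Example: an obstacle 1.5 m away -> roughly 7.3 beeps per second, 0.7 vibration strength
print(distance_to_feedback(1.5))
```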

For information contact:

Dov Smith
Hebrew University Foreign Press Liaison
02-5882844 / +972-54-8820860
dovs@savion.huji.ac.il
