Smartphones become ‘eye-phones’ with low-cost devices

Posted on 4th April 2014 by Pacific ClearVision Institute in General | Retina

Stanford researchers have developed inexpensive adapters that enable a smartphone to capture high-quality images of the front and back of the eye.

Researchers at the Stanford University School of Medicine have developed two inexpensive adapters that enable a smartphone to capture high-quality images of the front and back of the eye. The adapters make it easy for anyone with minimal training to take a picture of the eye and share it securely with other health practitioners or store it in the patient’s electronic record.

“Think Instagram for the eye,” said one of the developers, assistant professor of ophthalmology Robert Chang, MD.

The researchers see this technology as an opportunity to increase access to eye-care services as well as to improve the ability to advise on patient care remotely.

Ophthalmology resident David Myung, MD, PhD, lead author of two papers describing the development of and clinical experience with the devices, began the project with Chang about two years ago, just before starting his residency at Stanford. The papers were published online March 7 in the Journal of Mobile Technology in Medicine.

The standard equipment used to photograph the eye is expensive — costing up to tens of thousands of dollars — and requires extensive training to use properly. Primary care physicians and emergency department staff often lack this equipment, and although it is readily available in ophthalmologists’ offices, it is sparse in rural areas throughout the world.

Improved care

“Adapting smartphones for the eye has the potential to revolutionize the delivery of eye care — in particular, to provide it in places where it’s less accessible,” said Myung. “Whether it’s in the emergency department, where patients often have to wait a long time for a specialist, or during a primary-care physician visit, this new workflow will improve the quality of care for our patients, especially in the developing world where ophthalmologists are few and far between.

“A picture is truly worth a thousand words,” he added. “Imagine a car accident victim arriving in the emergency department with an eye injury resulting in a hyphema — blood inside the front of her eye. Normally the physician would have to describe this finding in her electronic record with words alone. Smartphones today not only have the camera resolution to supplement those words with a high-resolution photo, but also the data-transfer capability to upload that photo securely to the medical record in a matter of seconds.”

Chang, who is the senior author of the two papers, added that ophthalmology is a highly image-oriented field. “With smartphone cameras now everywhere, and a small, inexpensive attachment that helps the ancillary health-care staff to take a picture needed for an eye consultation, we should be able to lower the barrier to tele-ophthalmology,” he said.

Adapters are available to attach a smartphone to a slit lamp — a microscope with an adjustable, high-intensity light — to capture images of the front of the eye. But Myung found this process time-consuming and inconvenient, even with commercially available adapters designed for the purpose. Given the fast pace of patient care, he wanted point-and-shoot ability in seconds, not minutes, with instant upload to a secure server. More importantly, the team envisioned a device readily usable by any health-care practitioner, not just eye doctors. So Myung decided to bypass the slit lamp, a complicated piece of equipment.

“I started entertaining the idea of a pocket-sized adapter that makes the phone do most of the heavy lifting,” he said. After numerous iterations, he found a combination of magnification and lighting elements that worked.

Using ‘Legos’

“It took some time to figure out how to mount the lens and lighting elements to the phone in an efficient yet effective way,” said Myung, who built the prototypes with inexpensive parts purchased almost exclusively online, including plastic caps, plastic spacers, LEDs, switches, universal mounts, macro lenses and even a handful of Legos.

After successfully imaging the front of the eye, he then focused on visualizing the inside lining of the back of the eye, called the retina. “Taking a photo of the retina is harder because you need to focus light through the pupil to reach inside the eye,” said Myung.

To optimize the view through a dilated pupil, Myung used optics theory to determine the perfect working distance and lighting conditions for a simple adapter that connects a conventional examination lens to a phone. Myung and chief ophthalmology resident Lisa He, MD, shot hundreds of photos with various iterations of the adapter, consulting with Chang and Mark Blumenkranz, MD, retina specialist and chair of the ophthalmology department, until they got it right. Then Stanford mechanical engineering graduate student Alexandre Jais constructed computerized models of these “screwed-and-glued” prototypes to produce 3D-printed versions. Jais made the first of these prototypes on his own 3D printer before moving to the Stanford Product Realization Lab to manufacture higher-resolution adapters.

Chief resident He is leading a clinical study grading the quality of images taken using the adapters in the Stanford emergency department. A second study, spearheaded by resident Brian Toy, MD, will test the ability of the adapters to track eye disease in patients with diabetes.

Myung and Chang have recently been awarded seed grants from the School of Medicine and the Stanford Biodesign Program to fund the production of the initial batch of adapters, currently dubbed EyeGo, for distribution and continued evaluation. The initial adapters will be available for purchase for research purposes only while the team seeks guidance from the Food and Drug Administration. “We have gotten the production cost of each type of adapter to under $90 but the goal is to make it even lower in the future,” Chang said. Recently, a team from the University of Melbourne in Australia used the two adapters on a medical mission trip to Ethiopia and told Chang they were excited about the results.

Myung, Chang, Jais and He co-authored both articles and Blumenkranz co-authored the article on the retinal-imaging adapter. Stanford’s Office of Technology and Licensing is managing the intellectual property.

Research presents new hope of early diagnosis of major cause of blindness


Diabetic retinopathy is a common complication of diabetes, occurring when high blood sugar levels damage the cells in the retina at the back of the eye.

The disease is the most common cause of sight loss in people of working age. It is estimated that in England every year 4,200 people are at risk of blindness caused by diabetic retinopathy, with 1,280 new cases identified annually.

As part of the Retinal Vascular Modelling, Measurement and Diagnosis (REVAMMAD) project led by the University of Lincoln, UK, Marie Curie Researcher Georgios Leontidis is investigating new methods for the early screening and diagnosis of the disease by developing computer models which can detect small changes in the blood vessels of the eye.

Funded by the European Union’s 7th Framework (FP7) Marie Curie Initial Training Network program, the University of Lincoln has been awarded 900,000 euros from the 3.8 million euro budget to lead the project and to develop retinal imaging and measurement training and research.

It aims to improve the diagnosis, prognosis and prevention of diseases such as diabetes, hypertension, stroke, coronary heart disease and retinal diseases.

All people with diabetes are at some risk of developing diabetic retinopathy, regardless of whether their condition is controlled by diet, tablets or insulin. Diabetic retinopathy progresses over time, but it may not cause symptoms until it is advanced and sight is already threatened.

The retina is a light-sensitive layer of tissue lining the inner surface of the eye. The optics of the eye create an image of the visual world on the retina, in much the way a camera forms an image on film. Diabetes affects the structure of the retinal vessel walls, making them stiffer. At an advanced stage this causes them to break, creating haemorrhages and microaneurysms, which are the first signs of diabetic retinopathy.

Georgios, an Electronics and Computer Engineer within the University of Lincoln’s School of Computer Science, is investigating the effects of diabetes on the retina’s vessel walls and how this ultimately affects the flow of blood in the whole vasculature of the retina.

He said: “Here at the University of Lincoln, our efforts focus on analysing images of diabetic patients before the first stage of diabetic retinopathy. In that way we want to see what changes diabetes causes to the retina vessels and how these changes progress to retinopathy. We will then try to correlate the standard features we extract from these images with functional changes that occur, such as abnormality in blood pressure, blood flow volume and blood flow velocity, as well as to associate them with some risk factors like age, type of diabetes, duration of diabetes, gender and smoking.”
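The workflow Georgios describes, extracting standard features from retinal images and correlating them with functional changes and risk factors, can be illustrated with a toy calculation. Everything below is a hypothetical sketch, not REVAMMAD code: the vessel widths, disease durations and the choice of feature are all invented for illustration.

```python
import math

# Toy sketch: correlate one vessel feature (mean arteriolar width, in µm)
# extracted from retinal images with one risk factor (years since diabetes
# diagnosis). All numbers are made up for illustration.
vessel_width_um = [105, 98, 112, 120, 93, 108, 117, 101]
diabetes_years  = [2,   1,  6,   9,  1,  5,   8,   3]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(vessel_width_um, diabetes_years)
```

With these made-up numbers the correlation comes out strongly positive; a real study would use validated vessel-segmentation measurements, far larger samples and multivariate models that also account for age, blood pressure, gender and smoking.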

Two players produce destructive cascade of diabetic retinopathy


The retina can be bombarded by reactive oxygen species in diabetes, prompting events that destroy healthy blood vessels, form leaky new ones and ruin vision.

Now researchers have learned that those chemically reactive molecules must come from both the bone marrow and the retinal cells themselves to cause such serious consequences.

“It’s a cascade that requires two players to signal the next event that causes the damage,” said Dr. Ruth Caldwell, cell biologist at the Vascular Biology Center at the Medical College of Georgia at Georgia Regents University.

The good news is the finding also provides two new points for intervention, said Dr. Modesto Rojas, MCG postdoctoral fellow and first author of the study in the journal PLOS ONE.

Excessive glucose in the blood prompts excessive production of reactive oxygen species, or ROS, and the light-sensitive retina is particularly vulnerable. Caldwell’s research team had previously documented that ROS from white blood cells produced by the bone marrow as well as from retinal cells were the major instigators in diabetic retinopathy, a leading cause of blindness worldwide. But they weren’t sure which mattered most.

So they looked at several different scenarios, including mice lacking the ability to produce ROS by either the retinal or white blood cells, and found that if either were lacking, future damage was essentially eliminated. “One alone can’t do it,” said Caldwell, the study’s corresponding author. “They did not develop the early signs of diabetic retinopathy that we were measuring.”

While blocking ROS production by retinal cells could be difficult, drugs already exist that reduce activation of white blood cells. Those cells not only make ROS, but also adhere to blood vessel walls in the retina that become sticky in diabetes, Rojas said. In fact, a study published in October 2013 in PLOS ONE showed that neutrophil inhibitory factor could block the vascular lesions that are a hallmark of diabetic retinopathy without hurting the immunity of diabetic mice. The MCG scientists note that decreased activation does not impact the immune protection white blood cells also provide.

Next steps include studying those drugs in their animal models and learning more about how ROS causes the collateral damage that can destroy vision. “All of this is some sort of wound-healing response gone wrong,” Caldwell said.

ROS, natural byproducts of the body’s use of oxygen, have healthy roles in the body, including cell signaling, but they are destructive at the high levels that result from disease states such as diabetes.

New structure in dogs’ eye linked to blinding retinal diseases


In humans, a tiny area in the center of the retina called the fovea is critically important to viewing fine details. Densely packed with cone photoreceptor cells, it is used while reading, driving and gazing at objects of interest. Some animals have a similar feature in their eyes, but researchers believed that among mammals the fovea was unique to primates — until now.

University of Pennsylvania vision scientists report that dogs, too, have an area of their retina that strongly resembles the human fovea. What’s more, this retinal region is susceptible to genetic blinding diseases in dogs just as it is in humans.

“It’s incredible that in 2014 we can still make an anatomical discovery in a species that we’ve been looking at for the past 20,000 years and that, in addition, this has high clinical relevance to humans,” said William Beltran, an assistant professor of ophthalmology in Penn’s School of Veterinary Medicine and co-lead author of the study with Artur Cideciyan, research professor of ophthalmology in Penn’s Perelman School of Medicine.

“It is absolutely exhilarating to be able to investigate this very specialized and important part of canine central vision that has such unexpectedly strong resemblance to our own retina,” Cideciyan added.

Additional coauthors included Penn Vet’s Karina E. Guziewicz, Simone Iwabe, Erin M. Scott, Svetlana V. Savina, Gordon Ruthel and senior author Gustavo D. Aguirre; Perelman’s Malgorzata Swider, Lingli Zhang, Richard Zorger, Alexander Sumaroka and Samuel G. Jacobson; and the Penn School of Dental Medicine’s Frank Stefano.

The paper was published in the journal PLOS ONE.

The word “fovea” comes from the Latin for “pit”: in humans and many other primates, the inner layers of the retina are thin in this area, while the outer layers are packed with cone photoreceptor cells. This inner-layer thinning is believed to give the foveal cone cells privileged access to light.

Dogs were known to have what is called an area centralis, a region around the center of the retina with a relative increase in cone photoreceptor cell density. But dogs lack the pit formation that humans have, and it was believed that their peak cone density didn’t come close to matching what is seen in primates: the highest density previously reported in dogs was 29,000 cones per square millimeter, compared with more than 100,000 cones per square millimeter in the human and macaque foveas.

It turns out that previous studies in dogs had missed a minuscule region of increased cell density. In this study, while examining the retina of a dog with a mutation that causes a disease akin to a form of X-linked retinal degeneration in humans, the Penn researchers noticed a thinning of the retinal layer that contains photoreceptor cells.

Zeroing in on this region, they examined retinas of normal dogs using advanced imaging techniques, including confocal scanning laser ophthalmoscopy, optical coherence tomography and two-photon microscopy. By enabling the scientists to visualize different layers of the retina, these techniques allowed them to identify a small area of peak cone density and then estimate cone numbers by counting the cells in this unique area.

Based on their observations, the researchers found that cone densities reached more than 120,000 cells per square millimeter in a never-before-described fovea-like region of the area centralis — a density on par with that of primate foveas.

“There’s no real landmark for this area like there is in humans,” Aguirre said, “so to discover such a density was unexpected.”

They also recognized that the “output side” of this cone-dense region corresponded with an area of dense retinal ganglion cells, which transmit signals to the brain.

Human patients with macular degeneration experience a loss of photoreceptor cells — the rods and cones that process light — at or near the fovea, resulting in a devastating loss of central vision.

To see whether the fovea-like region was similarly affected in dogs, the Penn researchers used the same techniques they had employed to study normal dogs to examine animals that had mutations in two genes (BEST1 and RPGR) that can lead to macular degeneration in humans.

In both cases, the onset of disease affected the fovea-like region in dogs in a very similar way to how the diseases present in humans — with central retinal lesions appearing earlier than lesions in the peripheral retina.

“Why the fovea is susceptible to early disease expression for certain hereditary disorders and why it is spared under other conditions is not known,” Cideciyan said. “Our findings, which show the canine equivalent of a human genetic disease affecting an area of the retina that is of extreme importance to human vision, are very promising from the human point of view. They could allow for translational research by allowing us to test treatments for human foveal and macular degenerative diseases in dogs.”

In addition, the discovery offers insight into a rare human condition known as fovea plana, in which people have normal visual acuity but no “pit” in their fovea. In other words, their fovea resembles that of dogs, challenging the previously held assumption that lack of tissue and blood vessels overlaying the fovea is a prerequisite for the high resolution of vision.

The fact that dogs have a fovea-like area of dense photoreceptor cells may also indicate that dogs are seeing more acutely than once suspected.

“This gives us a structural basis to support the idea that dogs might have a higher visual acuity than has been measured so far,” Beltran said. “It could even be the case that some breeds have an especially high density of cells and could be used as working dogs for particular tasks that require high-level sight function.”

Looking ahead, the researchers may focus on this fovea-like area in studies of therapies for not only X-linked retinal degeneration and Best disease but also other sight-related problems affecting the macula and fovea.

Scientists visualize new treatments for retinal blindness


A new report published online in The FASEB Journal may lead the way toward new treatments or a cure for proliferative retinopathies, a common cause of blindness. Specifically, scientists have discovered that the body’s innate immune system does more than help ward off external pathogens. It also helps remove sight-robbing abnormal blood vessels, while leaving healthy cells and tissue intact. This discovery is significant because the retina is part of the central nervous system and its cells cannot be replaced once lost. Identifying ways to leverage the innate immune system to “clean out” abnormal blood vessels in the retina may lead to treatments that could prevent or delay blindness, or restore sight.

“Our findings begin to identify a new role of the innate immune system by which endogenous mediators selectively target the pathologic retinal vasculature for removal,” said Kip M. Connor, Ph.D., a researcher involved in the work from the Department of Ophthalmology at Harvard Medical School and the Massachusetts Eye and Ear Infirmary Angiogenesis Laboratory in Boston, MA. “It is our hope that future studies will allow us to develop specific therapeutics that harness this knowledge, resulting in a greater visual outcome and quality of life for patients suffering from diabetic retinopathy or retinopathy of prematurity.”

To make this discovery, Connor and colleagues compared two groups of mice, a genetically modified group which lacked activity in the innate immune complement system, and a normal group with a fully functional innate immune system. Researchers placed both groups in an environment that induced irregular blood vessel growth in the eye, mimicking what happens in many human ocular diseases. The mice that were lacking a functional innate immune system developed significantly more irregular blood vessels than the normal mice, indicating that the complement system is a major regulator of abnormal blood vessel growth within the eye. Importantly, in the normal mice, scientists were able to visualize the immune system targeting and killing only the irregular blood vessels while leaving healthy cells unharmed.

“Knowing how the complement system works to keep our retinas clean is an important first-step toward new treatments that could mimic this activity,” said Gerald Weissmann, M.D., Editor-in-Chief of The FASEB Journal. “It’s a new understanding of how proliferative retinopathies rob us of sight, and promises to let us see the path ahead clearly.”

Exercise may slow progression of retinal degeneration


Moderate aerobic exercise helps to preserve the structure and function of nerve cells in the retina after damage, according to an animal study appearing February 12 in The Journal of Neuroscience. The findings suggest exercise may be able to slow the progression of retinal degenerative diseases.

Age-related macular degeneration, one of the leading causes of blindness in the elderly, is caused by the death of light-sensing nerve cells in the retina called photoreceptors. Although several studies in animals and humans point to the protective effects of exercise in neurodegenerative diseases or injury, less is known about how exercise affects vision.

Machelle Pardue, PhD, together with her colleagues Eric Lawson and Jeffrey H. Boatright, PhD, at the Atlanta VA Center for Visual and Neurocognitive Rehabilitation and Emory University, ran mice on a treadmill for two weeks before and after exposing the animals to bright light that causes retinal degeneration. The researchers found that treadmill training preserved photoreceptors and retinal cell function in the mice.

“This is the first report of simple exercise having a direct effect on retinal health and vision,” Pardue said. “This research may one day lead to tailored exercise regimens or combination therapies in treatments of blinding diseases.”

In the current study, the scientists trained mice to run on a treadmill for one hour per day, five days per week, for two weeks. After the animals were exposed to toxic bright light — a commonly used model of retinal degeneration — they exercised for two more weeks. The exercised animals lost only half as many photoreceptor cells as animals that spent the equivalent amount of time on a stationary treadmill.

Additionally, the retinal cells of exercised mice were more responsive to light and had higher levels of a growth- and health-promoting protein called brain-derived neurotrophic factor (BDNF), which previous studies have linked to the beneficial effects of exercise. When the scientists blocked the receptors for BDNF in the exercised mice, they discovered that retinal function in the exercised mice was as poor as in the inactive mice, effectively eliminating the protective effects of the aerobic exercise.

“These findings further our current understanding of the neuroprotective effects of aerobic exercise and the role of BDNF,” explained Michelle Ploughman, PhD, who studies the effects of exercise on the healthy and diseased brain at Memorial University of Newfoundland, and was not involved with this study. “People who are at risk of macular degeneration or have early signs of the disease may be able to slow down the progression of visual impairment,” she added.

Computer models help decode cells that sense light without seeing


Researchers have found that the melanopsin pigment in the eye is potentially more sensitive to light than its more famous counterpart, rhodopsin, the pigment that allows for night vision.

For more than two years, the staff of the Laboratory for Computational Photochemistry and Photobiology (LCPP) at Ohio’s Bowling Green State University (BGSU) have been investigating melanopsin, a retinal pigment capable of sensing light changes in the environment, informing the nervous system and synchronizing it with the day/night rhythm. Most of the study’s complex computations were carried out on powerful supercomputer clusters at the Ohio Supercomputer Center (OSC).

The research recently appeared in the Proceedings of the National Academy of Sciences, in an article edited by Arieh Warshel, Ph.D., of the University of Southern California. Warshel and two other chemists received the 2013 Nobel Prize in Chemistry for developing multiscale models for complex chemical systems, the same techniques that were used in conducting the BGSU study, “Comparison of the isomerization mechanisms of human melanopsin and invertebrate and vertebrate rhodopsins.”

“The retina of vertebrate eyes, including those of humans, is the most powerful light detector that we know,” explains Massimo Olivucci, Ph.D., a research professor of Chemistry and director of LCPP in the Center for Photochemical Sciences at BGSU. “In the human eye, light coming through the lens is projected onto the retina where it forms an image on a mosaic of photoreceptor cells that transmits information from the surrounding environment to the brain’s visual cortex. In extremely poor illumination conditions, such as those of a star-studded night or ocean depths, the retina is able to perceive intensities corresponding to only a few photons, which are indivisible units of light. Such extreme sensitivity is due to specialized photoreceptor cells containing a light sensitive pigment called rhodopsin.”

For a long time, it was assumed that the human retina contained only photoreceptor cells specialized in dim-light and daylight vision, according to Olivucci. However, recent studies revealed the existence of a small number of intrinsically photosensitive nervous cells that regulate non-visual light responses. These cells contain a rhodopsin-like protein named melanopsin, which plays a role in the regulation of unconscious visual reflexes and in the synchronization of the body’s responses to the dawn/dusk cycle, known as circadian rhythms or the “body clock,” through a process known as photoentrainment.

The melanopsin density in the vertebrate retina is 10,000 times lower than the rhodopsin density, and the melanopsin-containing cells capture a million-fold fewer photons than the visual photoreceptors. For these cells to function at all, then, melanopsin itself may be more sensitive than rhodopsin. Understanding the mechanism that makes this extreme light sensitivity possible appears to be a prerequisite to the development of new technologies.
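The two ratios in that argument can be combined in a simple back-of-envelope way. The ratios are the ones quoted in the article; the derived per-pigment figure is only an illustrative framing, not a measurement from the paper.

```python
# Back-of-envelope framing of the sensitivity argument. The two input ratios
# are quoted in the article; the derived figure is illustrative only.
density_ratio = 10_000            # rhodopsin density / melanopsin density
photon_capture_ratio = 1_000_000  # photons captured: visual photoreceptors vs melanopsin cells

# Even after allowing for there being 10,000-fold less melanopsin, each unit
# of pigment still faces a roughly 100-fold photon deficit, so melanopsin
# must make each captured photon count for more to signal reliably.
per_pigment_deficit = photon_capture_ratio / density_ratio
```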

Both rhodopsin and melanopsin are proteins containing a derivative of vitamin A, which serves as an “antenna” for photon detection. When a photon is detected, the proteins are set in an activated state, through a photochemical transformation, which ultimately results in a signal being sent to the brain. Thus, at the molecular level, visual sensitivity is the result of a trade-off between two factors: light activation and thermal noise. It is currently thought that light-activation efficiency (i.e., the number of activation events relative to the total number of detected photons) may be related to its underlying speed of chemical transformation. On the other hand, the thermal noise depends on the number of activation events triggered by ambient body heat in the absence of photon detection.

“Understanding the mechanism that determines this seemingly amazing light sensitivity of melanopsin may open up new pathways in studying the evolution of light receptors in vertebrates and, in turn, the molecular basis of diseases such as seasonal affective disorder,” Olivucci said. “Moreover, it provides a model for developing sub-nanoscale sensors approaching the sensitivity of a single photon.”

For this reason, the LCPP group — working together with Francesca Fanelli, Ph.D., of Italy’s Università di Modena e Reggio Emilia — has used the methodology developed by Warshel and his colleagues to construct computer models of human melanopsin, bovine rhodopsin and squid rhodopsin. The models were constructed by BGSU research assistant Samer Gozem, Ph.D., BGSU visiting graduate student Silvia Rinaldi, who has now completed her doctorate, and visiting research assistant Federico Melaccio, Ph.D. — both visiting from Italy’s Università di Siena. The models were used to study the activation of the pigments and show that melanopsin’s light activation is the fastest and its thermal activation the slowest, as expected for maximum light sensitivity.

The computer models of human melanopsin and of bovine and squid rhodopsin provide further support for a theory reported by the LCPP group in the September 2012 issue of the journal Science, which explained the correlation between thermal noise and perceived color, a concept first proposed by the British neuroscientist Horace Barlow in 1957. Barlow suggested the existence of a link between the color of light perceived by a sensor and its thermal noise, and established that the minimum possible thermal noise is achieved when the absorbed light has a wavelength around 470 nanometers, which corresponds to blue light.

“This wavelength and corresponding bluish color matches the wavelength that has been observed and simulated in the LCPP lab,” said Olivucci. “In fact, our calculations also indicate that a shift from blue to even shorter wavelengths (i.e. indigo and violet) will lead to an inversion of the trend and an increase of thermal noise towards the higher levels seen for a red color. Therefore, melanopsin may have been selected by biological evolution to stand exactly at the border between two opposite trends to maximize light sensitivity.”

The melanopsin research project was funded jointly by the BGSU Center for Photochemical Sciences and the College of Arts & Sciences, and, together with grants from the National Science Foundation and the Human Frontier Science Program, helped create the LCPP.

Image perception in the blink of an eye


Imagine seeing a dozen pictures flash by in a fraction of a second. You might think it would be impossible to identify any images you see for such a short time. However, a team of neuroscientists from MIT has found that the human brain can process entire images that the eye sees for as little as 13 milliseconds — the first evidence of such rapid processing speed.

That speed is far faster than the 100 milliseconds suggested by previous studies. In the new study, which appears in the journal Attention, Perception, and Psychophysics, researchers asked subjects to look for a particular type of image, such as “picnic” or “smiling couple,” as they viewed a series of six or 12 images, each presented for between 13 and 80 milliseconds.

“The fact that you can do that at these high speeds indicates to us that what vision does is find concepts. That’s what the brain is doing all day long — trying to understand what we’re looking at,” says Mary Potter, an MIT professor of brain and cognitive sciences and senior author of the study.

This rapid-fire processing may help direct the eyes, which shift their gaze three times per second, to their next target, Potter says. “The job of the eyes is not only to get the information into the brain, but to allow the brain to think about it rapidly enough to know what you should look at next. So in general we’re calibrating our eyes so they move around just as often as possible consistent with understanding what we’re seeing,” she says.

Other authors of the paper are former MIT postdoc Brad Wyble, now at Pennsylvania State University, postdoc Carl Hagmann, and research assistant Emily McCourt.

Rapid identification

After visual input hits the retina, the information flows into the brain, where information such as shape, color, and orientation is processed. In previous studies, Potter has shown that the human brain can correctly identify images seen for as little as 100 milliseconds. In the new study, she and her colleagues decided to gradually increase the speeds until they reached a point where subjects’ answers were no better than if they were guessing. All images were new to the viewers.

The researchers expected they might see a dramatic decline in performance around 50 milliseconds, because other studies have suggested that it takes at least 50 milliseconds for visual information to flow from the retina to the “top” of the visual processing chain in the brain and then back down again for further processing by so-called “re-entrant loops.” These processing loops were believed necessary to confirm identification of a particular scene or object.

However, the MIT team found that although overall performance declined, subjects continued to perform better than chance as the researchers dropped the image exposure time from 80 milliseconds to 53 milliseconds, then 40 milliseconds, then 27, and finally 13 — the fastest possible rate with the computer monitor being used.
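These exposure times are not arbitrary: a monitor can only show an image for a whole number of refresh frames, which is why 13 milliseconds was the fastest possible rate. A minimal sketch of the arithmetic, assuming a 75 Hz display (a hypothetical refresh rate; the article states only that 13 ms was the monitor's limit):

```python
# On a display, each exposure must last a whole number of refresh frames.
# Assuming a 75 Hz monitor (an assumption), one frame lasts 1000/75 ≈ 13.3 ms.
REFRESH_HZ = 75
frame_ms = 1000 / REFRESH_HZ

# Frame counts that reproduce the exposure times quoted in the study
for frames in (6, 4, 3, 2, 1):
    print(f"{frames} frame(s) -> {frames * frame_ms:.0f} ms")
# -> 80, 53, 40, 27, and 13 ms, matching the sequence in the text
```

This would explain why the steps shrink unevenly (80, 53, 40, 27, 13) rather than halving cleanly: each step drops one whole refresh frame.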

“This didn’t really fit with the scientific literature we were familiar with, or with some common assumptions my colleagues and I have had for what you can see,” Potter says.

Potter believes one reason for the subjects’ better performance in this study may be that they were able to practice fast detection as the images were presented progressively faster, even though each image was unfamiliar. The subjects also received feedback on their performance after each trial, allowing them to adapt to this incredibly fast presentation. At the highest rate, subjects were seeing new images more than 20 times as fast as vision typically absorbs information.

“We think that under these conditions we begin to show more evidence of knowledge than in previous experiments where people hadn’t really been expecting to find success, and didn’t look very hard for it,” Potter says.

The findings are consistent with a 2001 study from researchers at the University of Parma and the University of St. Andrews, who found that neurons in the brains of macaque monkeys that respond to specific types of images, such as faces, could be activated even when the target images were each presented for only 14 milliseconds in a rapid sequence.

“That was the only background that suggested maybe 14 milliseconds was sufficient to get something meaningful into the brain,” Potter says.

One-way flow

The study offers evidence that “feedforward processing” — the flow of information in only one direction, from the retina through the visual processing centers in the brain — is enough for the brain to identify concepts without having to do any further feedback processing.

It also suggests that while the images are seen for only 13 milliseconds before the next image appears, part of the brain continues to process those images for longer than that, Potter says, because in some cases subjects weren’t asked whether a specified image was present until after they had seen the sequence.

“If images were wiped out after 13 milliseconds, people would never be able to respond positively after the sequence. There has to be something in the brain that has maintained that information at least that long,” she says.

This ability to identify images seen so briefly may help the brain as it decides where to focus the eyes, which dart from point to point in brief movements called fixations about three times per second, Potter says. Deciding where to move the eyes can take 100 to 140 milliseconds, so very high-speed understanding must occur before that.

The researchers are now investigating how long visual information presented so briefly can be held in the brain. They are also scanning subjects’ brains with a magnetoencephalography (MEG) scanner during the task to see what brain regions are active when a person successfully completes the identification task.