Reza Shadmehr, Ph.D., professor of biomedical engineering at Hopkins, says the artificial brain in the computer, like its natural counterpart, is guided in part by a special branch of statistics known as Bayesian probability theory. A Bayesian probability is a subjective “opinion” that measures a learner’s individual degree of belief in a particular outcome when that outcome is uncertain. Applied to the workings of a brain, the idea is that the brain uses what it already knows to “predict” or “believe” that something new will happen, then uses that information to help make it so.
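The Bayesian idea of weighing prior belief against new evidence can be sketched in a few lines. This is a generic illustration assuming Gaussian beliefs, not the actual model described in the article:

```python
# Minimal sketch of a Bayesian belief update, assuming Gaussian
# (bell-curve) beliefs. The learner's prior "opinion" about a quantity
# is combined with a noisy new observation; the posterior belief is a
# precision-weighted average of the two.

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian observation."""
    k = prior_var / (prior_var + obs_var)  # weight given to new evidence
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1 - k) * prior_var
    return post_mean, post_var

# A confident prior (small variance) shifts only slightly toward a
# noisy observation; an uncertain prior would shift much more.
mean, var = bayes_update(prior_mean=0.0, prior_var=1.0, obs=10.0, obs_var=9.0)
print(mean, var)  # posterior mean lies between prior and observation
```

The key point mirrored in the article: what the learner already “believes” determines how strongly any single new experience changes its prediction.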
The computer model, Shadmehr says, almost precisely duplicates the results of experiments that tested the ability of monkeys to visually track rapid flashes of light. Such experiments are a staple in studying how the brain controls movement.
Initially, the animal learner made large errors, but it also stored information about its mistakes in a memory bank so it could adapt and make more accurate predictions the next time around. Each time the learner repeated the task, it sifted through the prior knowledge in its memory banks and made a prediction about how to move, which in turn was also memorized. While short-term memory was periodically purged, repeated errors were transferred to a long-term memory bank.
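The loop described above (a short-term store that is purged and a long-term store that accumulates repeated errors) resembles a two-timescale, error-driven learner. A minimal sketch follows; the learning and retention rates here are illustrative assumptions, not parameters of the published model:

```python
# Hypothetical two-timescale learner: a "fast" (short-term) state that
# adapts strongly to each error but decays quickly, and a "slow"
# (long-term) state that absorbs repeated errors gradually and retains
# them. All rates below are illustrative.

def adapt(target, trials, fast_rate=0.4, slow_rate=0.05,
          fast_retention=0.6, slow_retention=0.995):
    fast, slow = 0.0, 0.0
    errors = []
    for _ in range(trials):
        prediction = fast + slow      # combined prediction of the movement
        error = target - prediction   # mistake on this trial
        errors.append(error)
        # short-term memory adapts strongly but is largely "purged"
        fast = fast_retention * fast + fast_rate * error
        # long-term memory changes slowly but keeps what it stores
        slow = slow_retention * slow + slow_rate * error
    return errors

errs = adapt(target=1.0, trials=50)
print(abs(errs[0]), abs(errs[-1]))  # early errors are large, later ones small
```

Early trials produce large errors; with repetition the long-term state takes over and the prediction error shrinks, matching the qualitative behavior the article describes.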
The computer learner was tasked with “looking” at a spot of light. Then all the lights were turned off. The spot of light was turned on again and the computer learner was again asked to look at that same spot. The learner’s speed and pattern in adapting its movements matched the experimental results of the monkeys almost perfectly. This new tool could make it possible to predict the best ways to teach new movements and help design physical therapy regimens for the disabled or impaired.
MEDICA.de; Source: Johns Hopkins Medical Institutions