In a second segment, Tracy Bedrosian of the Neurotechnology Innovations Translator talks about how the amount of time spent being licked by mom might be linked to changes in the genetic code of hippocampal neurons in mouse pups.
Their model is validated against the coding properties of real neurons by means of their location in what the authors call the "coding space."
Each cell in the image (both neurons and the support cells known as glia) was color-coded by hand, a process that took 150 hours.
Neurons in the hippocampus code smooth changes in the shape of a room by an abrupt change from a firing pattern characteristic of one distinct shape category to another.
By using "artificial neurons" — essentially lines of code, software — with neural network models, they can parse out the various elements that go into recognizing a specific place or object.
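To make "essentially lines of code" concrete, here is a minimal sketch of a single artificial neuron. The inputs, weights, and the scene-feature framing are illustrative assumptions, not taken from any model described in the article.

```python
# A minimal artificial neuron: a weighted sum of inputs passed through a
# sigmoid activation. All numbers below are made up for illustration.
import math

def artificial_neuron(inputs, weights, bias):
    """Return a 'firing rate' between 0 and 1 for the given inputs."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes to (0, 1)

# Toy "features" of a scene (e.g., edges, colors, textures), chosen arbitrarily:
features = [0.9, 0.1, 0.4]
weights = [1.5, -2.0, 0.7]
activation = artificial_neuron(features, weights, bias=-0.5)
print(activation)
```

Stacking many such units into layers, and tuning the weights from data, is what lets neural network models tease apart the elements of recognizing a place or object.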
He mentioned mouse studies by Chris Fiorillo, now at the Korea Advanced Institute of Science and Technology (KAIST), who inserted genetic sequences that code for a light-sensitive protein called channelrhodopsin-2 into dopamine-producing neurons of mice.
In a paper published in PLOS Computational Biology in May, computational neuroscientists in the United Kingdom and Australia found that when neural networks using an algorithm for sparse coding called Products of Experts, invented by Hinton in 2002, are exposed to the same abnormal visual data as live cats (for example, the cats and neural networks both see only striped images), their neurons develop almost exactly the same abnormalities.
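The core idea of sparse coding is that each input is represented by only a few strongly active units. The sketch below shows a generic sparse-coding step — keeping only the strongest filter responses — and is not Hinton's Products of Experts model from the paper; the dictionary and signal are synthetic.

```python
# Generic sparse-coding illustration (NOT Products of Experts): project an
# input onto a dictionary of basis "filters", then zero out weak responses
# so only a few units stay active.
import numpy as np

rng = np.random.default_rng(0)
dictionary = rng.standard_normal((10, 16))               # 10 made-up filters
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

# Toy input built mostly from filter 3, plus a little noise:
signal = dictionary[3] * 2.0 + 0.05 * rng.standard_normal(16)

responses = dictionary @ signal                          # filter responses
threshold = 0.5
sparse_code = np.where(np.abs(responses) > threshold, responses, 0.0)

print(int(np.count_nonzero(sparse_code)), "of", len(sparse_code), "units active")
```

If the network only ever sees striped images, the learned filters skew toward stripe-like patterns — the kind of input-driven abnormality the study compared against real cat neurons.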
If this view is correct, meaningful messages might be conveyed not just by hordes of neurons screaming in unison but by a small group of cells whispering, perhaps in a terse temporal code.
Summary: Recent advances in systems neuroscience have motivated a shift of perspective on the neural code, from static single-neuron codes such as tuning functions to codes that are governed by the structure and dynamics of population activity.
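The contrast in the summary can be sketched in a few lines: a classic tuning function describes one neuron's rate as a function of the stimulus, while a population code reads the stimulus out of the joint activity pattern. The Gaussian tuning shape, the eight preferred orientations, and the population-vector decoder below are illustrative assumptions, not a specific model from the literature.

```python
# Single-neuron tuning curves vs. a population-vector readout (illustrative).
import numpy as np

angles = np.linspace(0, np.pi, 8, endpoint=False)    # preferred orientations

def tuning_curve(stimulus, preferred, width=0.5):
    """Static single-neuron code: firing rate as a function of the stimulus."""
    return np.exp(-((stimulus - preferred) ** 2) / (2 * width ** 2))

def population_decode(stimulus):
    """Population code: recover the stimulus from the whole activity pattern."""
    rates = tuning_curve(stimulus, angles)
    # Vector average over preferred orientations (angles doubled because
    # orientation wraps at pi, not 2*pi).
    x = np.sum(rates * np.cos(2 * angles))
    y = np.sum(rates * np.sin(2 * angles))
    return 0.5 * np.arctan2(y, x) % np.pi

stimulus = 0.8
decoded = population_decode(stimulus)
```

No single unit pins down the stimulus here; it is the pattern across the population that carries the message.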