For instance, an experimental neural net at Mount Sinai called Deep Patient can forecast whether a patient will receive a particular diagnosis within the next year, months before a doctor would make the call.
What makes today's deep neural nets at once powerful and capricious is their ability to find patterns in huge amounts of data.
Having studied experimental psychology as an undergraduate at Cambridge, Hinton was enthusiastic about neural nets, which were software constructs that took their inspiration from the way networks of neurons in the brain were thought to work.
Neural nets are good at recognizing patterns — sometimes as good as or better than we are at it.
"It was not formulated in those terms," LeCun recalls, "because at that time it was very difficult to publish a paper if you mentioned the word 'neurons' or 'neural nets.'"
At the time, neural nets were out of favor.
"It's very difficult to find out why [a neural net] made a particular decision," says Alan Winfield, a robot ethicist at the University of the West of England, Bristol.
"[Deep neural nets] can be really good but they can also fail in mysterious ways," says Anders Sandberg, a senior research fellow at the University of Oxford's Future of Humanity Institute.
For example, by carving up an image of a cat and feeding a neural net the pieces one at a time, a programmer can get a good idea of which parts — tail, paws, fur patterns or something unexpected — lead the computer to make a correct classification.
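The piece-by-piece probing described above can be sketched with the closely related occlusion technique: instead of feeding the pieces in, each region is blanked out in turn, and a large drop in the classifier's score marks a region the network relies on. Everything here is a toy stand-in (the `classify_cat` scoring function is hypothetical, not a trained net):

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: responds strongly to
# bright pixels in the upper-left quadrant of an 8x8 "image".
def classify_cat(image):
    return image[:4, :4].mean()

def occlusion_map(image, patch=4):
    """Score drop when each patch is blanked out: large drops mark the
    regions the classifier depends on for a correct classification."""
    base = classify_cat(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # blank one region
            heat[i // patch, j // patch] = base - classify_cat(occluded)
    return heat

img = np.ones((8, 8))
heat = occlusion_map(img)
# Only the upper-left patch shows a score drop, because the toy
# classifier never looks anywhere else.
```

Run against a real network, the same loop produces a heat map over tail, paws, and fur patterns rather than quadrants of a synthetic image.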
"The neural networks we tested — three publicly available neural nets and one that we developed ourselves — were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy," said the study's lead author Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC.
If you think of this neural net as a sequence of steps, where you're processing information at each step and feeding it to the next one, then one of the goals from the algorithmic standpoint is to reduce that to the smallest number of steps yet get the same results.
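The step-reduction idea has a minimal concrete instance: two consecutive linear steps collapse into a single step with identical results, because composing linear maps yields a linear map. This is a sketch with made-up weights, not the system the article describes; the nonlinearities between real layers are what prevent this collapse in general:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for two consecutive linear steps.
W1 = rng.normal(size=(4, 4))
W2 = rng.normal(size=(4, 4))

def two_step(x):
    # Each step feeds its output to the next: W2 @ (W1 @ x).
    return W2 @ (W1 @ x)

# Merging the two matrices gives one step with the same results:
# W2 @ (W1 @ x) == (W2 @ W1) @ x by associativity of matrix products.
W_merged = W2 @ W1

def one_step(x):
    return W_merged @ x

x = rng.normal(size=4)
assert np.allclose(two_step(x), one_step(x))
```

With a nonlinearity (say, a ReLU) between the two multiplications, the merge is no longer exact, which is why shrinking a deep net while preserving its behavior is a genuine research problem rather than simple algebra.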
Contestants could feed the program their own sound files and analyze the neural net's simulated bursts of activity, or they could look at archived responses to sound files that Brody and Hopfield had presented to the neural net.
Once the system includes more neurons and the kinks are worked out, it could supply data centers, autonomous cars, and national security services with neural nets that are orders of magnitude faster than existing designs, while using orders of magnitude less power, according to the study's two primary authors, Yichen Shen, a physicist, and Nicholas Harris, an electrical engineer, both at MIT.
The score feeds back through the neural net, so the robot can learn which movements are better for the task at hand.
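The feedback loop can be sketched in miniature. This is simple hill climbing on a single movement parameter with a made-up score function, not the robot's actual learning rule (a real system would propagate the score back through the network's weights):

```python
import random

random.seed(0)

# Hypothetical task score: movements closer to 0.7 score higher.
def score(movement):
    return -(movement - 0.7) ** 2

best = 0.0
best_score = score(best)
for _ in range(200):
    candidate = best + random.gauss(0.0, 0.1)  # try a perturbed movement
    s = score(candidate)
    if s > best_score:            # the score feeds back: keep what worked
        best, best_score = candidate, s

# best drifts toward 0.7 as higher-scoring movements are kept.
```

The same try-score-keep loop, scaled up to thousands of weights and driven by gradients instead of random perturbation, is how the robot's network learns which movements suit the task.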
"Compared to conventional neural probes, the much-reduced dimensions of our NET-e probes allow us to implant the devices at previously unattainable high densities without damage to brain tissue," Wei tells nanotechweb.org.