Dr. Takayuki Osogami
Artificial neural networks have long been studied in the hope of building a machine with the human capability to learn. Today’s attempts at artificial neural networks build upon Hebb’s rule, which Dr. Donald Hebb proposed in 1949 to describe how neurons adjust the strength of their connections. Since Hebb, other “rules” of neural learning, such as spike-timing dependent plasticity (STDP), have been introduced to refine Hebb’s rule. All of this helps us understand our brains, but it also makes developing artificial neurons even more challenging.
A biological neural network is too complex to map exactly onto an artificial one. But IBM mathematician Takayuki Osogami and his team at IBM Research-Tokyo might have found a way forward by developing artificial neurons that mathematically mimic STDP to learn words, images, and even music. Takayuki’s mathematical neurons form a new kind of artificial neural network, a dynamic Boltzmann machine (DyBM), that can learn information from multiple contexts through training.
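The DyBM’s precise learning rules are laid out in the team’s paper; purely as a back-of-the-envelope illustration of the STDP idea it draws on, the Python sketch below strengthens a connection when the presynaptic neuron fires shortly before the postsynaptic one and weakens it when the order is reversed. The constants and exponential window here are generic textbook choices, not the DyBM’s actual parameters.

```python
import math

def stdp_update(weight, dt, a_plus=0.05, a_minus=0.05, tau=20.0):
    """Toy STDP rule. dt = t_post - t_pre in milliseconds.

    If the presynaptic spike precedes the postsynaptic spike (dt > 0),
    the connection is potentiated; otherwise it is depressed.
    All constants are illustrative, not taken from the DyBM paper.
    """
    if dt > 0:
        return weight + a_plus * math.exp(-dt / tau)
    return weight - a_minus * math.exp(dt / tau)

w = 0.5
print(stdp_update(w, dt=+5.0))  # pre fired 5 ms before post -> weight grows
print(stdp_update(w, dt=-5.0))  # post fired first           -> weight shrinks
```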
The team taught seven artificial neurons the word “SCIENCE” (one artificial neuron per bit) in the form of a bitmap image. So the image of “SCIENCE” becomes:
00111001100111011110100010011001111
01000010010010010000100010100101000
01000010000010010000110010100001000
00110010000010011100101010100001110
00001010000010010000100110100001000
00001010010010010000100010100101000
01110001100111011110100010011001111.
The “1s” equate to the lines making up the
letters, while the “0s” translate to the white space around the letters.
What these seven neurons can do all at once is read and write 7-bit information. The word “SCIENCE” is thus expressed as a sequence of 35 seven-bit patterns, a 245-bit monochrome bitmap image. The seven neurons read and memorized each 7-bit slice of the image; “0100010”, for example, is the tenth column in the learning order, and the neurons recall the slices in the order they learned them. By memorizing the word from left to right and from right to left, the neurons could recognize “SCIENCE” forward and backward, or in any order, much as we might solve a word jumble or crossword puzzle.
Figure 1: The DyBM successfully learns two target sequences and retrieves a particular sequence when an associated cue is presented. (Credit: Scientific Reports)
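To make the encoding concrete, here is a small Python sketch (my own, not the team’s code) that turns the seven rows printed above into a 7 × 35 array and reads it one column at a time, the way the seven neurons consume one 7-bit slice per time step:

```python
import numpy as np

# The seven 35-bit rows of the "SCIENCE" bitmap quoted in the article.
rows = [
    "00111001100111011110100010011001111",
    "01000010010010010000100010100101000",
    "01000010000010010000110010100001000",
    "00110010000010011100101010100001110",
    "00001010000010010000100110100001000",
    "00001010010010010000100010100101000",
    "01110001100111011110100010011001111",
]

bitmap = np.array([[int(b) for b in row] for row in rows])   # shape (7, 35)
columns = [bitmap[:, t] for t in range(bitmap.shape[1])]     # one 7-bit slice per step
backward = columns[::-1]                                     # same slices, right to left

print(bitmap.size)                      # 245 bits in total
print("".join(map(str, columns[9])))    # "0100010", the tenth column in learning order
```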
More neurons. More memories.
Things get more complicated (and interesting) when these artificial neurons learn about different topics in different formats, such as the human evolution image below. Takayuki’s team put 20 artificial neurons to the task of learning this image, which shows how we humans have evolved, from left to right. Why 20 artificial neurons this time? Each column of the human evolution image consists of 20 bits, so the team matched one neuron to each bit in a column.
These neurons learned how the pieces of the image line up in the correct order of evolution, from apes to Homo sapiens. As Takayuki runs the simulation, the neurons learn more over time, detecting the mistakes they make and correcting them with each pass. After each simulation, the neurons generate an image to show their progress in re-creating the target. It took just 19 seconds for the 20 neurons to learn the image correctly, as mapped out below.
Figure 2: The DyBM learned the target sequence of human evolution. (Credit: Scientific Reports)
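The article does not include the team’s training code, and the DyBM’s update equations live in the paper; the sketch below is only a stand-in that captures the loop described above: present the sequence, detect the mistakes, correct them, and repeat until the reconstruction matches. The 40-column random “image” is a placeholder, since the evolution bitmap itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the 20-bit-per-column evolution image (not reproduced in
# the article): 40 random binary columns, 20 bits each.
target = rng.integers(0, 2, size=(40, 20)).astype(float)
T, n = target.shape

W = np.zeros((T, n))   # one adjustable value per (time step, neuron) pair
lr = 0.5

for sim in range(1, 101):                      # repeated simulations
    mistakes = 0
    for t in range(T):
        guess = (W[t] > 0.5).astype(float)     # the neurons' current column t
        error = target[t] - guess              # detect the mistakes made
        W[t] += lr * error                     # and correct them
        mistakes += int(np.abs(error).sum())
    if mistakes == 0:                          # whole image reproduced exactly
        print(f"re-created the image after {sim} simulations")
        break
```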
Images and text are one thing, but neurons encompass all the senses. So Takayuki’s team put 12 of their artificial neurons to work learning music. Using a simplified version of the German folk song “Ich bin ein Musikante”, each neuron was assigned to one of 12 notes (Fa, So, La, Ti, Do, Re, Mi, Fa, So, La, Ti, Do). After 900,000 training sessions, the neurons learned the sequential patterns of tones well enough to generate a simplified version of the song.
The neurons learn music much as we might: repetition from beginning to end, to the point of memorization. Currently, the 12 neurons only generate quarter notes, but simply doubling the neurons to 24 gives the system the ability to handle half notes as well.
Figure 3: The DyBM learned the target music. (Credit: Scientific Reports)
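The same column-per-time-step idea carries over to music: each of the 12 neurons stands for one note, so a melody becomes a sequence of one-hot 12-bit patterns, one per quarter note. In the sketch below the note names follow the article’s list (the apostrophes marking the upper octave are my own notation), and the short phrase being encoded is made up purely to show the format, not taken from the folk song.

```python
import numpy as np

# One neuron per note, in the order listed in the article; apostrophes mark
# the upper octave (added here only to keep the 12 names distinct).
notes = ["Fa", "So", "La", "Ti", "Do", "Re", "Mi",
         "Fa'", "So'", "La'", "Ti'", "Do'"]
index = {name: i for i, name in enumerate(notes)}

def encode(melody):
    """Turn a list of note names into a sequence of one-hot 12-bit columns,
    one column per quarter note and one bit per neuron."""
    pattern = np.zeros((len(melody), len(notes)), dtype=int)
    for t, name in enumerate(melody):
        pattern[t, index[name]] = 1
    return pattern

# A made-up phrase, purely to illustrate the encoding.
phrase = ["Do", "Re", "Mi", "Fa'", "Mi", "Re", "Do"]
print(encode(phrase))
```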
Takayuki’s DyBM not only memorizes and recalls sequential patterns; it can also detect anomalies in sequential patterns and make predictions about future ones. It could be used to predict driving risks through car-mounted cameras, generate new music, or even detect and correct grammatical errors in text. Takayuki, whose work is currently funded by the Japan Science and Technology Agency’s Core Research for Evolutionary Science and Technology (CREST) program, hopes to advance his DyBM by integrating it with reinforcement learning techniques so that it can act optimally on the basis of such anomaly detection and prediction.
The scientific paper “Seven neurons memorizing sequences of alphabetical images via spike-timing dependent plasticity” by Takayuki Osogami and Makoto Otsuka appears in Scientific Reports of the Nature Publishing Group, September 16, 2015, DOI: 10.1038/srep14149.
Labels: artificial intelligence, artificial neurons, hebb's rule, ibm research tokyo