from networkworld.com: In the never-ending quest to get computers to truly process,
understand and reason, scientists at DARPA want to look more deeply into how computers can mimic a
key portion of our brain.
The military’s advanced research group recently put out a call, or
Request For Information (RFI), on how it could develop systems that go beyond
machine learning, Bayesian techniques, and graphical models to solve
“extraordinarily difficult recognition problems in real-time.”
Current systems offer partial solutions to this problem, but are
limited in their ability to efficiently scale to larger, more complex
datasets, DARPA said. “They are also compute intensive, exhibit limited
parallelism, require high precision arithmetic, and, in most cases, do
not account for temporal data.”
What DARPA is interested in is mimicking a portion of the
brain known as the neocortex, which is used in higher brain
functions such as sensory perception, motor commands, spatial reasoning,
conscious thought and language. Specifically, DARPA said it is looking
for information that provides new concepts and technologies for
developing what it calls a “Cortical Processor” based on Hierarchical Temporal Memory (HTM).
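The RFI itself contains no code, but the core HTM idea (learning which sparse patterns tend to follow which, so a system can predict the next input and flag surprises in a data stream) can be sketched in a few lines of Python. Everything below, from the class name to the Hebbian-style update, is an illustrative assumption rather than DARPA's or Numenta's implementation.

```python
import numpy as np

# Illustrative sketch of an HTM-like temporal memory, not DARPA's or
# Numenta's code: inputs are sparse binary patterns (sets of active bit
# indices), and the model learns which pattern tends to follow which, so
# it can predict the next input and flag anomalies when prediction fails.

class TinyTemporalMemory:
    def __init__(self, n_bits=1024, learn_rate=0.1):
        self.n_bits = n_bits
        self.learn_rate = learn_rate
        # transition[i, j]: strength of "bit i active now -> bit j active next"
        self.transition = np.zeros((n_bits, n_bits), dtype=np.float32)

    def predict(self, active_bits):
        # Sum transition strengths from the currently active bits; the
        # highest-scoring bits form the predicted next pattern.
        scores = self.transition[list(active_bits)].sum(axis=0)
        k = len(active_bits)  # keep the prediction as sparse as the input
        return set(np.argsort(scores)[-k:])

    def learn(self, prev_bits, next_bits):
        # Hebbian-style update: reinforce the transitions that actually occurred.
        for i in prev_bits:
            self.transition[i, list(next_bits)] += self.learn_rate

    def anomaly(self, predicted, actual):
        # Fraction of the actual bits that were not predicted (0 = fully expected).
        actual = set(actual)
        return 1.0 - len(predicted & actual) / max(len(actual), 1)
```

Real HTM systems add hierarchy, spatial pooling and per-column dendritic segments; the point of the toy is only that temporal prediction over sparse codes is cheap, local and incremental.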
“Although a thorough understanding of how the cortex works is beyond
current state of the art, we are at a point where some basic algorithmic
principles are being identified and merged into machine learning and
neural network techniques. Algorithms inspired by neural models, in
particular neocortex, can recognize complex spatial and temporal
patterns and can adapt to changing environments. Consequently, these
algorithms are a promising approach to data stream filtering and
processing and have the potential for providing new levels of
performance and capabilities for a range of data recognition problems,”
DARPA stated. “The cortical computational model should be fault
tolerant to gaps in data, massively parallel, extremely power efficient,
and highly scalable. It should also have minimal arithmetic precision
requirements, and allow ultra-dense, low power implementations.”
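None of those properties are quantified in the RFI, but the appeal of sparse binary codes is easy to show: pattern matching becomes overlap counting on bits, which needs no high-precision arithmetic and degrades gracefully when parts of the input are missing. The snippet below is our own illustration, with made-up sizes, not anything specified by DARPA.

```python
import numpy as np

# Our own illustration of DARPA's wish list (low precision, fault tolerance):
# patterns are sparse binary vectors, matching is an AND plus a bit count,
# and losing a quarter of the active bits still leaves a clear match.

rng = np.random.default_rng(0)

def sparse_code(n_bits=2048, n_active=40):
    """A random sparse distributed representation: n_active of n_bits set."""
    code = np.zeros(n_bits, dtype=bool)
    code[rng.choice(n_bits, size=n_active, replace=False)] = True
    return code

stored = sparse_code()                      # a remembered pattern
noisy = stored.copy()
lost = rng.choice(np.flatnonzero(stored), size=10, replace=False)
noisy[lost] = False                         # simulate gaps: 10 of 40 bits dropped

overlap = np.count_nonzero(stored & noisy)  # integer AND-and-count, no floats
print(overlap, "of 40 bits still match")    # 30, far above the chance overlap
```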
Some of the questions DARPA is looking to answer include:
- What are the capabilities and limitations of HTM-like algorithms for addressing real large-scale applications?
- What algorithm or algorithms would a cortical processor execute?
- What opportunities are there for significant improvements in power
efficiency and speed that can be achieved by leveraging recent advances
in dense memory structures, such as multi-level floating gates,
processors in memory, or 3D integration?
- What is the best trade-off between flexibility (or configurability) and performance?
- Is it possible to build specialized architectures that demonstrate
sufficient performance, price and power advantages over mainline
commercial silicon to justify their design and construction?
- What new capabilities could a cortical processor enable that would result in a new level of application performance?
- What entirely new applications might be possible if a cortical processor were available to you?
- What type of metric could be used for measuring performance and suitability to task? (One purely hypothetical example is sketched below.)
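The article does not answer any of these questions. Purely as an illustration of what such a metric might look like, the sketch below scores a streaming recognizer by accuracy per watt, so a slightly less accurate but far more efficient processor can still come out ahead. Both the function and the numbers are hypothetical, not anything DARPA proposes.

```python
def accuracy_per_watt(correct, total, avg_power_watts):
    """Hypothetical figure of merit for a streaming recognizer:
    recognition accuracy divided by average power draw."""
    return (correct / total) / avg_power_watts

# A hypothetical low-power cortical processor vs. a conventional system:
print(accuracy_per_watt(92, 100, 0.5))   # 1.84  (92% accurate at 0.5 W)
print(accuracy_per_watt(95, 100, 50.0))  # 0.019 (95% accurate at 50 W)
```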
The new RFI is only part of the research and development DARPA has
been doing to build what it calls a new kind of computer with similar
form and function to the mammalian brain. Such artificial brains would
be used to build robots whose intelligence matches that of mice and
cats, DARPA says.
Recently, IBM said it had created DARPA-funded prototype chips that could mimic brain-like actions.
The prototype chips will give computers mind-like abilities to
make decisions by collating and analyzing immense amounts of data,
similar to how humans gather and understand a series of events,
Dharmendra Modha, project leader for IBM Research, told the IDG News Service.
The experimental chips, modeled around neural systems, mimic the
brain’s structure and operation through silicon circuitry and advanced
algorithms.
IBM hopes reverse-engineering the brain into a chip could forge
computers that are highly parallel, event-driven and frugal in their power
consumption, Modha said. The machines would be a sharp departure from
modern computers, which have scaling limitations and require explicit
programming by humans to generate results.
Like the brain, IBM’s prototype chips can dynamically rewire to
sense, understand and act on information fed via sight, hearing, taste,
smell and touch, or through other sources such as weather and
water-supply monitors. The chips will help discover patterns based on
probabilities and associations, all while rivaling the brain’s compact
size and low power usage, Modha said.
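The article gives no detail on the chips' programming model, but the flavor of event-driven, dynamically rewiring computation can be conveyed with a toy leaky integrate-and-fire neuron whose synapses strengthen whenever they help trigger a spike. Everything in this sketch, including the parameter values, is an assumption for illustration and not IBM's design.

```python
import numpy as np

# Toy event-driven neuron (an illustration, not IBM's chip design):
# inputs arrive as spike events, the neuron integrates them with leak,
# fires when a threshold is crossed, and "rewires" by strengthening the
# synapses that contributed to each firing (a simple Hebbian rule).

class SpikingNeuron:
    def __init__(self, n_inputs, threshold=1.0, leak=0.9, learn_rate=0.05):
        self.weights = np.full(n_inputs, 0.2)
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak
        self.learn_rate = learn_rate

    def step(self, input_spikes):
        """input_spikes: boolean array, True where an input fired this tick."""
        self.potential = self.potential * self.leak + self.weights[input_spikes].sum()
        if self.potential >= self.threshold:
            self.potential = 0.0
            # Dynamic rewiring: reinforce the synapses that just caused a spike.
            self.weights[input_spikes] += self.learn_rate
            return True
        return False

# Feed the neuron a repeating input pattern; it fires and its wiring adapts.
neuron = SpikingNeuron(n_inputs=8)
pattern = np.array([True, True, False, False, True, False, False, True])
for t in range(5):
    fired = neuron.step(pattern)
    print(t, fired, neuron.weights.round(2))
```

On a real neuromorphic chip this kind of update would happen locally at each synapse and only when spikes arrive, which is roughly where the parallelism and power savings described above would come from.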