DARPA wants computers that fuse with higher human brain function

from networkworld.com: In the never-ending quest to get computers to process, really
understand and actually reason, scientists at DARPA want to look more deeply into how computers can mimic a
key portion of our brain.

The military’s advanced research group recently put out a call, or
Request For Information (RFI), on how it could develop systems that go beyond
machine learning, Bayesian techniques, and graphical technology to solve
“extraordinarily difficult recognition problems in real-time.”

Current systems offer partial solutions to this problem, but are
limited in their ability to efficiently scale to larger, more complex
datasets, DARPA said. "They are also compute intensive, exhibit limited
parallelism, require high precision arithmetic, and, in most cases, do
not account for temporal data."


DARPA is interested in mimicking a portion of the brain known as the
neocortex, which is involved in higher brain functions such as sensory
perception, motor commands, spatial reasoning, conscious thought and
language. Specifically, DARPA said it is looking for information that
provides new concepts and technologies for developing what it calls a
"Cortical Processor" based on Hierarchical Temporal Memory (HTM).
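
The RFI does not say how such a Cortical Processor would be programmed, but the HTM idea of learning sequences over sparse binary codes can be sketched in a few lines. The following is a hypothetical, heavily simplified illustration only; the encode function, the SequenceMemory class and every parameter in it are invented here and are not DARPA's or Numenta's actual algorithms.

```python
# Minimal, illustrative sketch of HTM-style sequence learning -- not DARPA's
# or Numenta's implementation. Inputs are encoded as sparse binary codes, and
# a first-order transition memory predicts which code should follow.
import random

def encode(symbol, n_bits=256, n_active=8):
    """Deterministically encode a symbol as a sparse set of active bit indices."""
    rng = random.Random(symbol)                 # seed on the symbol itself
    return frozenset(rng.sample(range(n_bits), n_active))

class SequenceMemory:
    def __init__(self):
        self.transitions = {}                   # previous code -> next code

    def learn(self, sequence):
        """Store the pairwise transitions observed in a sequence of symbols."""
        codes = [encode(s) for s in sequence]
        for prev, nxt in zip(codes, codes[1:]):
            self.transitions[prev] = nxt

    def predict(self, symbol):
        """Return the sparse code the memory expects to follow this symbol."""
        return self.transitions.get(encode(symbol), frozenset())

memory = SequenceMemory()
memory.learn(["A", "B", "C", "D"])
# After seeing "B", the memory predicts the sparse code previously paired with "C".
print(memory.predict("B") == encode("C"))       # True
```

Real HTM systems learn high-order sequences using cells and dendritic segments rather than a simple transition table, but the sketch captures the basic flavor: sparse codes in, predicted sparse codes out.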


“Although a thorough understanding of how the cortex works is beyond
current state of the art, we are at a point where some basic algorithmic
principles are being identified and merged into machine learning and
neural network techniques. Algorithms inspired by neural models, in
particular neocortex, can recognize complex spatial and temporal
patterns and can adapt to changing environments. Consequently, these
algorithms are a promising approach to data stream filtering and
processing and have the potential for providing new levels of
performance and capabilities for a range of data recognition problems,”
DARPA stated.  “The cortical computational model should be fault
tolerant to gaps in data, massively parallel, extremely power efficient,
and highly scalable. It should also have minimal arithmetic precision
requirements, and allow ultra-dense, low power implementations.”
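
Two of the properties DARPA lists, fault tolerance to gaps in data and minimal arithmetic precision, fall naturally out of sparse binary representations, where recognition amounts to counting overlapping bits. The toy sketch below uses invented patterns and an arbitrary overlap threshold purely to show the principle.

```python
# A hedged sketch of why sparse binary codes need only minimal arithmetic
# precision and tolerate gaps in data: recognition reduces to counting
# overlapping bits, and dropping some active bits still leaves the best match.
# The patterns and threshold here are made up for illustration.

stored = {
    "cat":  {3, 17, 42, 99, 150, 201, 230, 250},
    "car":  {5, 17, 60, 80, 120, 201, 222, 240},
    "tree": {1, 30, 55, 77, 140, 180, 210, 245},
}

def classify(observed_bits, min_overlap=4):
    """Return the stored label whose sparse code shares the most active bits."""
    best_label, best_overlap = None, 0
    for label, code in stored.items():
        overlap = len(code & observed_bits)     # integer set intersection only
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label if best_overlap >= min_overlap else None

# A degraded "cat" input with two active bits missing is still recognized.
noisy_cat = {3, 42, 99, 150, 230, 250}
print(classify(noisy_cat))                      # cat
```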

Some of the questions DARPA is looking to answer include:


  • What are the capabilities and limitations of HTM-like algorithms for addressing real large-scale applications?
  • What algorithm or algorithms would a cortical processor execute?
  • What opportunities are there for significant improvements in power
    efficiency and speed that can be achieved by leveraging recent advances
    in dense memory structures, such as multi-level floating gates,
    processors in memory, or 3D integration?
  • What is the best trade-off between flexibility (or configurability) and performance?
  • Is it possible to build specialized architectures that demonstrate
    sufficient performance, price and power advantages over mainline
    commercial silicon to justify their design and construction?
  • What new capabilities could a cortical processor enable that would result in a new level of application performance?
  • What entirely new applications might be possible if a cortical processor were available to you?
  • What type of metric could be used for measuring performance and suitability to task?


The new RFI is only part of the research and development DARPA has
been doing to build what it calls a new kind of computer with similar
form and function to the mammalian brain. Such artificial brains would
be used to build robots whose intelligence matches that of mice and
cats, DARPA says.


Recently IBM said it created DARPA-funded prototype chips that could mimic brain-like actions.


The prototype chips will give computers mind-like abilities to
make decisions by collating and analyzing immense amounts of data,
similar to humans gathering and understanding a series of events,
Dharmendra Modha, project leader for IBM Research, told the IDG News Service.
The experimental chips, modeled around neural systems, mimic the
brain's structure and operation through silicon circuitry and advanced
algorithms.


IBM hopes reverse-engineering the brain into a chip could forge
computers that are highly parallel, event-driven and passive on power
consumption, Modha said. The machines will be a sharp departure from
modern computers, which have scaling limitations and require set
programming by humans to generate results.
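
To make the contrast with conventional, clock-driven processors concrete, a leaky integrate-and-fire neuron, a common abstraction in neuromorphic computing, only does work when an input event arrives and only emits output when a threshold is crossed. The sketch below is a generic textbook model with made-up constants, not a description of IBM's chips.

```python
# Hedged illustration of the event-driven style neuromorphic designs aim for:
# a leaky integrate-and-fire neuron does no work except when input spikes
# arrive, and produces output only when its membrane potential crosses a
# threshold. The constants below are arbitrary and purely illustrative.

def integrate_and_fire(input_spikes, weight=0.4, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron fires."""
    potential, output_spikes = 0.0, []
    for t, spike in enumerate(input_spikes):
        potential *= leak                        # passive decay between events
        if spike:                                # work happens only on events
            potential += weight
        if potential >= threshold:
            output_spikes.append(t)
            potential = 0.0                      # reset after firing
    return output_spikes

# A sparse input train: the neuron fires only after enough closely spaced spikes.
print(integrate_and_fire([1, 0, 1, 1, 0, 0, 0, 1, 1, 1]))   # [3, 9]
```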


Like the brain, IBM’s prototype chips can dynamically rewire to
sense, understand and act on information fed via sight, hearing, taste,
smell and touch, or through other sources such as weather and
water-supply monitors.
The chips will help discover patterns based on
probabilities and associations, all while rivaling the brain’s compact
size and low power usage, Modha said.
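
"Discovering patterns based on probabilities and associations" can be loosely illustrated by Hebbian-style co-occurrence counting: features that appear together strengthen their connection. The feature names and data below are invented for illustration and have nothing to do with IBM's design.

```python
# A toy sketch of association learning: count how often input features occur
# together ("cells that fire together wire together"). Purely illustrative.
from collections import Counter
from itertools import combinations

observations = [
    {"dark_clouds", "humidity_high", "rain"},
    {"dark_clouds", "rain"},
    {"clear_sky", "humidity_low"},
    {"dark_clouds", "humidity_high", "rain"},
]

# Strengthen the "connection" between any two features seen together.
associations = Counter()
for features in observations:
    for a, b in combinations(sorted(features), 2):
        associations[(a, b)] += 1

# The strongest learned association.
print(associations.most_common(1))   # [(('dark_clouds', 'rain'), 3)]
```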

One response to “DARPA wants computers that fuse with higher human brain function”

  1. Edward Siegel

    Not new and not "news"!!! What Hopfield and student Sejnowski never realized nor gave credit for is as follows: Artificial neural-network (ANN) artificial-intelligence, patterned on biological neural networks (BNN), was alive and well long before 1980, when physicist Edward Siegel [consulting with Richard Feynman (Caltech) for ANN AI pioneer Charles Rosen (Machine-Intelligence) & Irwin Wunderman (H.P.) & Vesco Marinov & Adolph Smith (Exxon Enterprises/A.I.)] discovered trendy much-hyped "quantum-computing" by two steps: (1) "EUREKA": realization that the ANN by-rote on-node switching sigmoid-function 1/[1 + e^(E/T)] ~ 1/[1 + e^(hw/kT)] is Fermi-Dirac quantum-statistics 1/[e^(hw/kT) + 1], dominated by the Pauli exclusion-principle forcing non-optimal local-minima (example: the periodic-table's chemical-elements), forcing slow, memory-costly computational-complexity Boltzmann-machine plus simulated-annealing, but permitting quantum-tunneling from non-optimal local-minima to the optimal global-minimum!!! (2) "SHAZAM": quantum-statistics "supersymmetry"-transmutation from Fermi-Dirac to Bose-Einstein, 1/[e^(hw/kT) + 1] -> 1/[e^(hw/kT) - 1] ~ 1/f power-spectrum, with no local-minima and permitting Bose-Einstein condensation (BEC) via a noise-induced phase-transition (NIT). Frohlich biological BEC & BNN 1/f-"noise"~"generalized-susceptibility" power-spectrum concurred!!! Siegel's work [IBM Conference on Computers & Mathematics, Stanford (1986); Symposium on Fractals, MRS Fall Meeting, Boston (1989) = five seminal papers!!!] was used without any attribution whatsoever as/by the "Page-Brin" PageRank [R. Belew, Finding Out About, Cambridge (2000)], Google's first search-engine!!! Siegel empirically rediscovered Aristotle's "square-of-opposition" in physics and mathematics, which three-dimensionally tic-tac-toe diagrams synonyms (functors) versus antonyms (morphisms) versus analogy/metaphor. Amazingly, neuroimager Jan Wedeen has recently clinically discovered just such a three-dimensional network of neurons which dominates human brain thinking. Siegel "FUZZYICS=CATEGORYICS=PRAGMATYICS"/Category-Semantics Cognition for physics/mathematics is a purposely-simple variant of Altshuler-Tsurakov-Lewis "TRIZ" (Russian acronym: "Method of Inventive Problem-Solving") embodied in the softwares Invention-Machine (Boston) and Ideation (Michigan) for engineers inventing optimality!
    Dr. Edward Siegel
    "physical-mathematicist"
    CATEGORYSEMANTICS@GMAIL.COM
    (206) 659-0235
