Institute of Cognitive Science
Osnabrück University

Lecture by Prof. Kwabena Boahen, Head of Brains in Silicon Lab, Stanford University, on:

"Scaling Knowledge Processing from 2D Chips to 3D Brains"

Time: Thursday, 30.05.2024, 19:00 (sharp) – 22:00
Location: 66/E34 - Barbarastraße 12, 49076 Osnabrück

Kwabena Boahen is Professor of Bioengineering and of Electrical Engineering at Stanford University and one of the key innovators and godfathers of neuromorphic computing. He holds a B.S. and M.S. in electrical engineering from Johns Hopkins University and earned his Ph.D. in computation and neural systems from the California Institute of Technology (Caltech). His doctoral work involved designing and fabricating a silicon chip that emulates the functioning of the retina. This research laid the foundation for his groundbreaking contributions to neuromorphic engineering. In his talk, he will present a new generation of computer chips that work in three dimensions.

If you’d like to explore more, you can find his publications on Google Scholar or visit the Brains in Silicon Lab at Stanford University, where he leads research in this exciting area.


As a computer's processors increase in number, they process data at a higher rate and exchange results across a greater distance. Thus, the computer consumes energy at a rate that increases quadratically. In contrast, as a brain's neurons increase in number, it consumes energy at a rate that increases not quadratically but rather just linearly. Thus, an 86B-neuron human brain consumes not 2 terawatts but rather just 25 watts. To scale linearly rather than quadratically, the brain follows two design principles.
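The abstract's figures can be checked with a quick back-of-the-envelope calculation (a sketch: the 25 W and 86-billion-neuron numbers come from the abstract above; the per-neuron power budget is simply derived from them):

```python
# Back-of-the-envelope check: with the same per-neuron power budget,
# linear scaling yields ~25 W at 86e9 neurons, while quadratic scaling
# would yield ~2 TW, matching the abstract's figures.
N = 86e9                  # neurons in a human brain (from the abstract)
P_linear = 25.0           # watts at N neurons under linear scaling
p = P_linear / N          # implied per-neuron power, ~2.9e-10 W

P_quadratic = p * N ** 2  # cost if total power grew quadratically instead
print(f"linear:    {P_linear:.0f} W")
print(f"quadratic: {P_quadratic / 1e12:.2f} TW")  # ~2.15 TW
```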

First, pack neurons in three dimensions (3D) rather than just two (2D). This principle shortens wires and thus reduces the energy a signal consumes as well as the heat it generates.
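A rough sense of the wire-length savings (an illustrative sketch, assuming the units sit on a regular grid and typical wires span a fixed fraction of the grid's extent):

```python
# Illustrative sketch: linear extent of a grid of N units in 2D vs 3D.
# Wire lengths track this extent, so 3D packing shortens them by ~N**(1/6).
N = 86e9
extent_2d = N ** (1 / 2)  # side length of a square grid, in unit spacings
extent_3d = N ** (1 / 3)  # side length of a cubic grid
print(f"2D extent: {extent_2d:.0f} spacings")
print(f"3D extent: {extent_3d:.0f} spacings")
print(f"wires roughly {extent_2d / extent_3d:.0f}x shorter in 3D")
```

For 86 billion units the ratio N**(1/6) works out to roughly 66, i.e. wires about 66 times shorter in 3D than in a 2D layout of the same unit count.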

Second, scale the number of signals per second as the square-root of the number of neurons rather than linearly. This principle matches the heat generated to the surface area available and thus avoids overheating.
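Qualitatively, the heat-balance point can be sketched as follows. The assumptions here are mine, not the abstract's: each signal dissipates a fixed energy, and the dissipating surface of a 3D volume of N neurons grows as N to the 2/3 power.

```python
# Hedged sketch: heat density under linear vs square-root signal scaling,
# assuming heat tracks total signal rate and the dissipating surface of a
# 3D volume of N units grows as N**(2/3).
def heat_per_area(N, signal_exponent):
    signals = N ** signal_exponent  # total signals per second
    surface = N ** (2 / 3)          # surface area of a 3D volume of N units
    return signals / surface        # heat density (arbitrary units)

for N in (1e6, 1e9, 86e9):
    lin = heat_per_area(N, 1.0)     # signals scale linearly with N
    sub = heat_per_area(N, 0.5)     # signals scale as the square root of N
    print(f"N={N:.0e}  linear: {lin:.1f}  sqrt: {sub:.3f}")
# Under linear signaling, heat density grows as N**(1/3) and overheats;
# under square-root signaling it falls as N**(-1/6) and stays bounded.
```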

I will illustrate how we could apply these two principles to design AI hardware that runs not with megawatts in the cloud but rather with watts on a phone.