Ed Boyden: The brain is like a computer, and we can fix it with nanorobots
Synthetic biology has the potential to replace or improve drug therapies for a wide range of brain disorders
Ed Boyden heads the Synthetic Neurobiology Group at MIT Media Lab. He is working on developing technologies and tools for “analysing and engineering brain circuits” – to reveal which brain neurons are involved in different cognitive processes and to use this knowledge to treat brain disorders.
What is synthetic neurobiology?
The synthetic biology part is about taking molecules from the natural world and figuring out how to make them into little machines that we can use to address complex brain problems.
Moreover, if we can synthesise the computation of the brain and write information to it, that allows us to test our understanding of the brain and fix disorders by controlling the processes within – running a piece of software on the brain as if it were a computer.
The brain as computer… we probably shouldn’t be surprised that your initial training was in electrical engineering and physics?
Training as a physicist was very helpful because you are trained to think about things both at a logical and intuitive level. Electrical engineering was great too because neurons are electrical devices and we have to think about circuits and networks. I was interested in big unknowns and the brain is one of the biggest, so building tools that allow us to regard the brain as a big electrical circuit appealed to me.
So do you have a “circuit board” of the brain?
It’s not even known how many kinds of cells there are in the brain. If you were looking for a periodic table of the brain, there is no such thing. I really like to think of the brain as a computer. Let’s take an iPhone – there are millions around the world, and they all have the same map, but at this moment they are all doing different computations – from firing birds at walls to reading an email. You need more than just a map to understand a computation.
So how do you find out about the functions of the different neurons?
We have a collaboration with a team at Georgia Institute of Technology to build robots to help us analyse the brain at single-cell resolution. We hope to use these robots to harvest the contents of cells to figure out what their properties are. The tip of this robot is a millionth of a metre wide.
And what would you do with the data?
One strategy we are working on is what you might call high throughput screening (HTS) for the living brain. HTS has been used for decades to, for example, screen for genes important for a biological process. But how do you do it in the living brain? We are working on technologies like these robots or three-dimensional interfaces, which would allow you to target information to thousands of points in the brain, so you could determine which circuits are important to a given cognitive process or to fixing a disorder.
Robots and interfaces – sounds invasive.
Some degree of invasiveness might not be the end of the world – 250,000 people have some kind of neural implant already, such as deep brain stimulators or cochlear implants. Some people perceive that an invasive treatment done subtly could be more desirable than something you have to wear all the time, like a helmet.
Have your techniques been used in live experiments?
In a collaboration led by Alan Horsager from the University of Southern California, we tried to restore vision to a blind eye. There are lots of examples of blind eyes where the photoreceptors have gone: in such a case, there are no drugs you can give because there’s nothing for them to bind to. So we thought, why don’t we build an entire suite of tools that would deliver the gene for a light-activated protein into a targeted set of cells and try to restore visual behaviour?

Neurons are electrical devices. Normally, photosensory cells in the retina capture light and transform it into electrical signals, which can then be processed by the retina and relayed to the brain. But what if the photosensory cells are gone? What we did was take a light-sensitive protein from a species of green algae, which converts light into electrical signals, and installed it in spared cells in the retina of a blind mouse. Then the newly photosensitive cells in the retina could capture light. Basically, the previously blind retina became a camera.

We found that a blind mouse which couldn’t solve a maze problem could, once its retina was made light sensitive, navigate a fairly complex maze and go right to the target. Does this show the mouse has conscious vision? I don’t know if we can really say that, but it does show these mice can make cognitive use of visual information.
How far are we from using these techniques on humans?
My lab is focused on inventing the tools. But among the people pursuing blindness treatments, there are at least five groups who have stated plans or started ventures to take these technologies to humans.
What are the advantages of these technologies over drugs?
They can help solve problems where drugs can’t, and maybe they can help people find better drugs. There are many disorders where a specific kind of cell in the brain is atrophied or degenerates. If we can get information to that cell, then we might be able to correct a brain disorder more accurately while minimising side effects. A drug might affect cells that are normal as well as the cells that need to be fixed, causing side effects.
And these tools could also be used to aid drug discovery?
Drugs have a lot of good things about them – they are portable and non-invasive, and they don’t need a specialist to administer them. Suppose we could go through the brain with an array of light sources and track down which specific molecules on specific cells are most impactful for treating a disorder. If we can find a drug that binds to that molecule (although only about one in 10 molecules is bindable), maybe we could develop drugs that affect specific classes of cell in the brain and not others.