The first computers were biological: they had two arms, two legs and 10 fingers. “Computer” was a job title, not the name of a machine. The occupation vanished after programmable, electronic calculating machines emerged in the late 1940s. We have thought of computers as electronic devices ever since.

Over the past 15 years or so, however, biology has been making a comeback of sorts in computing. Scientists in universities and biotech start-ups believe they are close to advancing the first biocomputers from mere research objects to useful, real-world tools. These systems, built out of genes, proteins and cells, include basic elements of computer logic: IF/THEN tests, AND and OR operations, even simple arithmetic operations. Some systems include primitive digital memories. Given appropriate biological inputs, these living computers generate (mostly) predictable outputs.

Within about the next five years, the first biological computers might be used as sensitive and accurate diagnostics and therapeutics for human diseases, including cancer, inflammatory diseases and rare metabolic disorders. We and others who are engineering these cellular logic systems envision a future—one not far off—in which they are safe and smart enough to treat disorders as well as identify them. The technology could make it possible to produce complex chemicals, such as biofuels and pharmaceuticals, in novel ways that are faster and less expensive than today’s methods. It might allow us to respond to spills by lacing contaminated ecosystems with organisms designed to monitor and degrade toxins.

That is not to say that biocomputing technology is now advanced. On the contrary, the field is in its infancy. Don’t think iPhone—think Colossus.

Colossus was one of the first programmable electronic computers. Had you walked into Bletchley Park, the top-secret code-breaking center north of London where Colossus began operating in 1944, you would have seen it whirring away, paper tape streaming over pulleys, 1,600 vacuum tubes humming. By today’s standards, Colossus was laughably primitive. It filled a room—hence the name. It could do only a few kinds of calculations and could not store its own program. It took days or weeks to design, load and test a new program. Operators had to physically rewire the machine each time.

Despite its limitations, Colossus was able to break the encryption the Nazis used to encode their most important messages. That clunky toddler of a computer helped to win a World War. And its descendants propelled civilization, decades later, from the industrial age to the information age.

The most impressive cellular computers made so far are actually much simpler, slower and less capable than Colossus. Like the earliest electronic, digital computers, they do not always work, they run only the simplest programs and they are not reprogrammable outside the laboratory. But we see in this technology some of the same transformative potential for society that digital electronics had in its formative years. Even a tiny bit of smarts, applied cleverly, can create near-magical results in a living system.

Cellular computers are not likely to ever replace the electronic and optical variety. Biology will not win any races against solid-state physics. But the chemistry of life has a unique power of its own, and it can interface with the natural world—much of which, after all, runs on biology—in ways that electronic systems cannot.

Switch On, Switch Off

Every cell in your body is, in some sense, a little computer. The cell receives inputs, often in the form of biochemical molecules attaching to its surface. It processes these inputs through intricate cascades of molecular interactions. Sometimes those reactions affect the activity level of one or more genes in the cell’s DNA—that is, how much a given gene is “expressed” by being transcribed into RNA and then translated into multiple copies of the protein molecule the gene encodes. This analog, chemical computation generates outputs: a squirt of hormone from a gland cell, an electrical impulse from a nerve cell, a stream of antibodies from an immune cell, and so on.

As synthetic biologists, we aim to exploit those natural information-processing abilities of cells to run programs that we design. We aspire to go well beyond conventional genetic engineering that just “knocks out” a gene, or cranks up its expression, or inserts a gene or two from one species into cells of a different species. Our goal is to be able to quickly and reliably tailor the behavior of many different varieties of cells (or populations of cells) in much the same way that an electrical engineer designs a circuit board: by choosing standardized parts from a catalog and wiring them together. Unfortunately, biology is different from electronics in ways that frustrate that ambition; more on that later.

The field has made slow but considerable progress. The first big advances came in 2000. That year James Collins and his colleagues at Boston University stitched together two mutually repressing genes to make a genetic switch that can be toggled between two stable states—a one-bit digital memory. In addition, a group led by Michael Elowitz, then at Princeton University, engineered a rudimentary oscillator into a strain of the bacterium Escherichia coli. The transformed microbe blinked like a Christmas light as a fluorescence gene turned on and off periodically.
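To make the toggle-switch idea concrete, here is a minimal numerical sketch of two mutually repressing genes, written in Python. It uses a standard pair of Hill-type repression equations; the parameter values, pulse timing and inducer mechanism are illustrative assumptions, not measurements from the Collins circuit.

```python
# Minimal sketch of a two-gene toggle switch (a one-bit biological memory).
# Each protein represses the other's gene. All parameters are illustrative
# assumptions, not values from any published circuit.
def simulate_toggle(u0, v0, inducer_pulse=None, alpha=10.0, n=2.0,
                    dt=0.01, steps=5000):
    """Euler-integrate du/dt = alpha/(1 + v^n) - u and the symmetric
    equation for v. inducer_pulse = (start, stop, target) transiently
    inactivates one repressor, which is how the switch is toggled."""
    u, v = u0, v0
    for step in range(steps):
        t = step * dt
        ku = kv = 1.0                      # repression efficiencies
        if inducer_pulse:
            start, stop, target = inducer_pulse
            if start <= t < stop:          # inducer neutralizes one repressor
                if target == "u":
                    ku = 0.05
                else:
                    kv = 0.05
        du = alpha / (1.0 + (kv * v) ** n) - u
        dv = alpha / (1.0 + (ku * u) ** n) - v
        u, v = u + du * dt, v + dv * dt
    return u, v

state = simulate_toggle(u0=5.0, v0=0.1)                    # settles with u HIGH
print("stored bit:", 1 if state[0] > state[1] else 0)      # -> 1
state = simulate_toggle(*state, inducer_pulse=(5.0, 15.0, "u"))
print("after toggle:", 1 if state[0] > state[1] else 0)    # -> 0, and it stays
```

Because the two high/low states are each self-reinforcing, the bit persists after the inducer pulse ends, which is exactly what makes the circuit a memory rather than a momentary sensor.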

By 2003 Ron Weiss, then at Princeton, had designed a “Goldilocks” biocircuit that causes a cell to light up when the concentration of an environmental compound is just right: not too high, not too low. That system linked together four inverters, which change a HIGH signal to a LOW signal, and vice versa.
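One way to picture the logic, as a rough sketch rather than the actual wiring of Weiss's circuit, is as two threshold sensors plus an inverter: the reporter turns on only when the signal clears the low threshold and the high-threshold sensor, after inversion, also reads HIGH. The threshold values below are invented for illustration.

```python
# Rough sketch of a "Goldilocks" band detector. The thresholds and the
# two-sensor-plus-inverter arrangement are illustrative assumptions, not
# the topology of the published four-inverter circuit.
def sensor(signal, threshold):
    return signal > threshold       # crude digital sensor: HIGH above threshold

def inverter(x):
    return not x                    # HIGH in becomes LOW out, and vice versa

def goldilocks_reporter(signal, low=2.0, high=8.0):
    # Glow only when the signal is above the low threshold AND the inverted
    # high-threshold sensor is also HIGH (i.e., the signal is not too high).
    return sensor(signal, low) and inverter(sensor(signal, high))

for level in (0.5, 5.0, 20.0):
    print(level, "->", "glow" if goldilocks_reporter(level) else "dark")
```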

A few years later Adam Arkin and his colleagues at the University of California, Berkeley, came up with a heritable form of memory that, when triggered, uses enzymes called recombinases to snip small sections out of the DNA, flip them backward and then put them back into place. The modified DNA segment passes from a cell to its daughters when that cell divides—a useful feature, considering that many bacteria reproduce every hour or two.
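A toy model makes the storage mechanism easy to see: the orientation of one DNA segment holds a single bit, and cell division copies that orientation to both daughters. The class below is invented purely for illustration; it is not a model of Arkin's specific construct.

```python
# Toy model of heritable, recombinase-based memory: the orientation of a
# DNA segment stores one bit and is inherited on cell division. Invented
# for illustration; not a model of any specific published system.
class MemorySegment:
    def __init__(self, flipped=False):
        self.flipped = flipped          # False = original orientation

    def recombinase_flip(self):
        # A triggered recombinase excises the segment, reverses it and
        # reinserts it; here that is simply toggling an orientation flag.
        self.flipped = not self.flipped

    def bit(self):
        return 1 if self.flipped else 0

    def divide(self):
        # Both daughters inherit the same DNA, so the recorded bit persists.
        return MemorySegment(self.flipped), MemorySegment(self.flipped)

cell = MemorySegment()
cell.recombinase_flip()                 # some input event triggers the flip
daughter_a, daughter_b = cell.divide()
print(daughter_a.bit(), daughter_b.bit())   # -> 1 1: the memory is heritable
```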

Crafting single-operation parts is one thing; cobbling many parts into an integrated system is much trickier but much more useful. Synthetic biologists have created genetic parts to perform all the basic Boolean operations of digital logic (AND, OR, NOT, XOR, and so on). By 2011 two groups of researchers had inserted individual logic gates into bacterial cells and programmed the cells to communicate with one another through chemical “wires,” essentially creating multicellular computers.
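As a toy illustration of that idea, imagine one engineered cell that computes AND and, when its output is HIGH, secretes a diffusible chemical, plus a second cell that senses the chemical and inverts it; together the pair computes NAND. The cells and the signaling molecule here are hypothetical stand-ins, not the strains used in those experiments.

```python
# Toy "multicellular" computer: cell A computes AND and broadcasts a
# chemical; cell B senses that chemical and inverts it, so the pair of
# cells computes NAND. Hypothetical stand-ins for the real strains, which
# typically communicate with quorum-sensing molecules.
class AndCell:
    def respond(self, input_a, input_b):
        return input_a and input_b      # True = signaling chemical secreted

class InverterCell:
    def respond(self, chemical_present):
        return not chemical_present     # reporter ON only when no signal

def multicellular_nand(a, b):
    chemical = AndCell().respond(a, b)  # the "wire" is a diffusing molecule
    return InverterCell().respond(chemical)

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", multicellular_nand(a, b))
```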

Martin Fussenegger, Simon Ausländer and their colleagues at the Swiss Federal Institute of Technology Zurich then assembled such parts to create still more advanced systems that could perform simple arithmetic. One of us (Lu), working with Collins, George Church of Harvard Medical School, and others, combined heritable memory units into a cascade to yield an engineered strain of E. coli that can count to three. The memory state remains intact in this system from one generation of cell to the next. That is a crucial feature because it allows information about past biochemical events to be stored for retrieval at some reasonably distant time in the future. In principle, the counter we made could be enhanced to reach higher numbers and to record important biological events, such as cell division or cellular suicide.
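The counting logic itself can be sketched as a cascade of one-bit memory elements in which each input pulse sets only the next unset element, and a reporter fires once the last element is set. This is a schematic of the logic only, under that assumption; it is not a description of the actual genetic construct.

```python
# Schematic of a pulse counter built from a cascade of one-bit memory
# units: each pulse sets the next unset unit, and the reporter turns on
# once the final unit is set. Purely illustrative.
class CascadeCounter:
    def __init__(self, depth=3):
        self.units = [False] * depth    # heritable memory elements

    def pulse(self):
        for i, already_set in enumerate(self.units):
            if not already_set:
                self.units[i] = True    # only the next unit in line flips
                break

    def reporter_on(self):
        return all(self.units)          # e.g., fluorescence after 3 pulses

counter = CascadeCounter(depth=3)
for n in (1, 2, 3):
    counter.pulse()
    print(n, "pulses ->", "GLOW" if counter.reporter_on() else "dark")
```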

A Feature and a Bug

Biological computing has begun moving beyond proof-of-concept demonstrations; potential real-world applications are now in sight. Within the past several years we and others have found many ways to engineer sensors, logic operators and memory components into genetic circuits that can carry out truly useful tasks in living cells.

In 2011, for example, a group that included Weiss, now at the Massachusetts Institute of Technology, Zhen Xie, now at Tsinghua University in China, and Yaakov Benenson of the Swiss Federal Institute of Technology Zurich created a far more advanced genetic logic system that can force a cell to self-destruct if it contains a specific cancerous signature. The genetic circuit monitors the levels of six different biological signals—in this case, short pieces of RNA called microRNAs that regulate gene expression. The six microRNA signals form a distinct signature of human-derived cancer cells known as HeLa cells. When the circuit is in a HeLa cell, it triggers a genetic kill switch and produces a protein that directs the cell to commit suicide. In a non-HeLa cell, the circuit is inactive and does not trigger cell suicide.
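The decision rule amounts to a single AND over the six microRNA levels: the kill switch fires only if every marker that should be high in a HeLa cell is high and every marker that should be low is low. The microRNA names and thresholds in the sketch below are placeholders, and the real circuit encodes this logic in molecular interactions rather than software.

```python
# Sketch of the classifier's decision rule: trigger the kill switch only
# when every "HeLa-high" microRNA is abundant AND every "HeLa-low"
# microRNA is scarce. Names and thresholds are placeholders, not the
# published values.
HELA_HIGH = ("miR_A", "miR_B")                       # expected high in HeLa
HELA_LOW = ("miR_C", "miR_D", "miR_E", "miR_F")      # expected low in HeLa

def kill_switch(mirna_levels, high_cut=1.0, low_cut=0.2):
    looks_like_hela = (
        all(mirna_levels[m] > high_cut for m in HELA_HIGH) and
        all(mirna_levels[m] < low_cut for m in HELA_LOW)
    )
    return looks_like_hela               # True -> express the suicide protein

hela_like = {"miR_A": 2.1, "miR_B": 1.8, "miR_C": 0.05,
             "miR_D": 0.02, "miR_E": 0.10, "miR_F": 0.01}
healthy_like = {"miR_A": 0.3, "miR_B": 0.2, "miR_C": 1.5,
                "miR_D": 0.9, "miR_E": 1.2, "miR_F": 0.8}
print(kill_switch(hela_like), kill_switch(healthy_like))   # -> True False
```

Requiring all six conditions at once is what keeps the circuit from killing healthy cells that happen to share one or two microRNA levels with the cancerous signature.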

Other research groups, including our own, have demonstrated biocomputing circuits that can perform basic arithmetic (addition or subtraction), compute ratios or logarithms, convert two-bit digital signals to analog output levels of a protein, and record and transmit the on/off states of all their logic gates from the parent cell to its children.
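Of those operations, the two-bit digital-to-analog conversion is the easiest to write down: imagine each bit gating a promoter of a different strength, with the output protein level being their sum. The promoter strengths below are arbitrary illustrative numbers, not measured values.

```python
# Sketch of a 2-bit digital-to-analog converter: each input bit gates a
# promoter of a different strength driving the same output protein.
# Promoter strengths are arbitrary illustrative units.
def protein_output(bit1, bit0, strong_promoter=2.0, weak_promoter=1.0):
    return bit1 * strong_promoter + bit0 * weak_promoter

for bits in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(bits, "->", protein_output(*bits))   # four distinct analog levels
```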

Last year our group, along with Christopher Voigt’s group, both at M.I.T., developed a biocomputing microbe that works inside a mammal’s gut. We used mice as test subjects, but the bacterial species we modified, Bacteroides thetaiotaomicron, is found naturally and at very high levels in the gut of roughly half of adult humans. Previously, Pamela Silver of Harvard Medical School and her colleagues engineered E. coli to operate in the mouse gut.

The biocircuit turns the bacterium into a spy. While the microbe loiters inside the gut, it uses part of its DNA like a notebook to detect whether it has bumped into a predetermined chemical. We targeted innocuous compounds that we could feed to the mice, but the target could easily be a toxic molecule or biomarker present only when the host has a particular disease.

After ingesting the compounds, the mice excrete the surveillance bacteria in their droppings. In those microbes that recorded an exposure to the target, the circuits trigger production of luciferase, an enzyme that glows in the dark. The telltale glow is faint, but we can see it under a microscope.

It is not hard to imagine how such biocomputing systems could be helpful to people who have a gut condition, such as inflammatory bowel disease (IBD). Soon we may be able to program innocuous, naturally occurring bacteria to seek out and report on early signs of cancer or IBD. The devices could change the color of the stool—or add a chemical to it that is detectable by using an inexpensive kit similar to a home pregnancy test.

The Hard Parts of Wetware

Cellular sentries like those we just described do not need much computational power to greatly improve on the diagnostic tests already available. An IF/THEN test, a few AND and OR gates, and one or two bits of persistent memory are sufficient. That is fortunate because biocomputer engineers face a long list of hard challenges that electronic computer engineers never had to deal with.

Compared with the gigahertz speeds of electronic circuits, for example, biology proceeds at a snail’s pace. When we apply inputs to our genetic systems, it typically takes hours for the output to emerge. Fortunately, many biological events of interest do not operate on extremely short timescales. Nevertheless, researchers continue to look for faster ways to compute in living cells.

Communication poses a separate problem. In conventional computers, avoiding cacophony is easy: you simply connect components by wires. When many components have to share a wire, you can give each one its own little window of time to speak or listen by synchronizing each part to a universal clock signal.

But biology is wireless, and there is no master clock. Communication within and between cells is inherently noisy, like radio. One reason for the noise is that biological parts use chemicals rather than physical wires to signal one another. All the components that use any particular chemical “channel” can talk at the same time. What is worse, the underlying chemical reactions that send and receive signals are themselves noisy; biochemistry is a game of probabilities. Designing systems that compute reliably despite noisy signals is a continual challenge.

These issues especially plague biocomputing systems that use analog computing, as many do, because, like slide rules, they depend on values (the levels of proteins or RNAs) that can vary nearly continuously. Digital systems, in contrast, process signals that are either HIGH or LOW, TRUE or FALSE. Although that makes digital logic more robust to noise, far fewer biological parts are available that work this way.
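A quick numerical sketch illustrates the point: add the same random chemical noise to an analog protein level and to a thresholded HIGH/LOW reading of it, and the analog value is corrupted far more often than the bit is flipped. The levels, threshold and noise magnitude here are arbitrary assumptions.

```python
# Why thresholded (digital) signals tolerate noise better than analog ones:
# perturb the same protein level with random noise and compare how often
# the analog reading drifts versus how often the HIGH/LOW bit flips.
# All numbers are arbitrary illustrative assumptions.
import random
random.seed(1)

TRUE_LEVEL = 8.0      # intended protein concentration (arbitrary units)
THRESHOLD = 5.0       # anything above this counts as HIGH

analog_errors = digital_errors = 0
for _ in range(10_000):
    noisy = TRUE_LEVEL + random.gauss(0.0, 2.0)           # noisy expression
    if abs(noisy - TRUE_LEVEL) > 1.0:                     # analog value off
        analog_errors += 1
    if (noisy > THRESHOLD) != (TRUE_LEVEL > THRESHOLD):   # bit flipped
        digital_errors += 1

print("analog readings off by more than 1 unit:", analog_errors)
print("digital bits flipped:", digital_errors)
```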

The biggest problem we face is unpredictability, which is a polite way of saying ignorance. Electrical engineers have numerical models that predict, with near-perfect precision, what a new circuit design will do before they build it. Biologists simply do not understand enough about how cells work—even simple ones like bacteria—to make the same kind of predictions. We feel our way forward, largely by trial and error, and we often find that when our systems function, they do so only for a while. Then they fall apart. Many times we do not understand why.

But we are learning—and one important reason to build computers out of cells is that this process of building, testing and debugging biological computers can uncover subtleties of cellular biology and genetics that no one had noticed before.

Birth of a New Machine

It may take decades to conquer all these challenges; some, such as the relatively slow speed of biological processing, may be forever intractable. Thus, it seems unlikely that biocomputing will grow in performance at the same exponential trajectory that digital electronic computing did. We do not expect that biological computers will ever be faster than conventional computers for mathematical computation or pushing data around. Biocomputer engineers do benefit, however, from an ever accelerating increase in the rate at which we can read and synthesize raw DNA. Like Moore’s law, that trend reduces the time it takes us to design, build, test and refine our gene circuits every year.

Although it is still early days, commercially viable applications of biocomputing are coming. Cells can navigate living tissue, discriminate among complex chemical signals, and stimulate growth and healing in ways that no microchip ever could. If biocomputer diagnostics work well, the next logical step is to use them to treat disease when and where they detect it.

Cancer treatment clinics have already started isolating immune system cells known as T cells from patients who have blood cancer, inserting genes into the T cells that direct them to kill the cancer and then injecting them back into the body. Researchers are now working to add logic to the genetic package that gets loaded into the T cells so that they can recognize multiple cancer signatures and be equipped with off switches that doctors can use to control them. Many other kinds of cancer might become treatable by this approach.

In 2013 Collins and Lu got together with several other biologists to found Synlogic, a company to commercialize medicines that use modified probiotic bacteria that can be safely swallowed. The start-up is now refining biocomputers intended to treat phenylketonuria and urea cycle disorders, two rare but serious metabolic disorders that affect people from birth. Animal trials have begun, with encouraging results.

As we gain deeper insight into how the microbiome affects human health, we should find engineered bacteria to be beneficial therapeutics for a widening array of diseases—not just cancer but also inflammatory, metabolic and cardiovascular disorders. With growing experience and an ever increasing library of bioparts, “smart” medicines will become more common and more powerful. Moreover, the technology seems likely to spread from medicine to other areas. In the energy sector, smart bugs may be efficient producers of biofuels. In chemical and materials engineering, biocomputers may prove useful in synthesizing products that are currently hard to make or in exerting just-in-time control over biomanufacturing. In environmental conservation, biocomputers could monitor remote locations for cumulative exposure to toxic substances and then perform remediation.

The field is rapidly evolving—literally. Almost certainly, the most amazing uses of biocomputing have yet to be conceived.