Jim Holt’s smartphone is not all that smart. It has a mapping application he uses to find restaurants, but when he’s finished searching, the app continues to draw so much power and memory that he can’t even do a simple thing like send a text message, complains Holt, an engineer at Freescale Semiconductor.

Holt’s phone highlights a general problem with computing systems today: one part of the system does not know what the other is doing. Each program gobbles what resources it can, and the operating system is too stupid to realize that the one app the user cares about at the moment is getting squeezed out. This issue plagues not only smartphones but personal computers and supercomputers, and it will keep getting worse as more machines rely on multicore processors. Unless the various components of a computer learn to communicate their availability and needs to one another, the future of computing may not be able to live up to its glorious past.

Holt and his collaborators in Project Angstrom, a Massachusetts Institute of Technology–led research consortium, have come up with an answer: the “self-aware” computer. In conventional computers, the hardware, software and operating system (the go-between for hardware and software) cannot easily tell what the other components are doing, even though they are all running inside the same machine. An operating system, for example, does not know if a video-player application is struggling, even though someone watching the video would certainly notice the jerky picture.

Last year an M.I.T. team released Application Heartbeats, research software that monitors how all the different applications are faring. It can tell, for instance, that video software is running at a pokey 15 frames per second, not an optimal 30.
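The core idea behind Heartbeats is simple enough to sketch: an application declares a performance goal, emits a "heartbeat" each time it finishes a unit of work (say, one video frame), and a monitor computes the observed rate against the target. The following Python sketch is illustrative only; the real Application Heartbeats framework is a C library, and all names here are hypothetical.

```python
import time

class HeartbeatMonitor:
    """Hypothetical sketch of an Application Heartbeats-style monitor.

    The application declares a target rate (e.g. 30 frames per second)
    and calls heartbeat() once per unit of work. The monitor reports
    the observed rate so an operating system could compare it to the goal.
    """

    def __init__(self, target_rate, window=10):
        self.target_rate = target_rate   # desired heartbeats per second
        self.window = window             # how many recent beats to average over
        self.timestamps = []

    def heartbeat(self, now=None):
        """Record one completed unit of work (e.g. one decoded frame)."""
        self.timestamps.append(time.monotonic() if now is None else now)
        self.timestamps = self.timestamps[-self.window:]

    def observed_rate(self):
        """Average heartbeats per second over the recent window."""
        if len(self.timestamps) < 2:
            return 0.0
        elapsed = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / elapsed if elapsed > 0 else 0.0

    def struggling(self):
        """True when the app is running below its declared goal."""
        return self.observed_rate() < self.target_rate

# A video decoder managing only a pokey 15 fps against its 30 fps target:
monitor = HeartbeatMonitor(target_rate=30)
for i in range(10):
    monitor.heartbeat(now=i / 15)        # one beat every 1/15 of a second
```

The key design point is that the application reports progress in its own terms (frames, transactions, requests), so the operating system can reason about whether the user-visible goal is being met rather than guessing from CPU load alone.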

The idea is to eventually make operating systems that can detect when applications are running unacceptably slowly and consider potential solutions. If the computer had a full battery, perhaps the operating system would direct more computing power to the app. If not, maybe the operating system would tell the application to use a lower-quality but more efficient set of instructions. The operating system would learn from experience, so it might fix the problem faster the second time around. And a self-aware computer would be able to juggle complex goals such as “run these three programs but give priority to the first one” and “save energy as much as possible, as long as it doesn’t interfere with this movie I’m trying to watch.”
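The decision logic described above amounts to a feedback rule: compare the observed rate to the goal, then pick a remedy based on the energy budget, trying a remembered fix first. This Python sketch uses hypothetical action names; the actual Angstrom policies are far more sophisticated.

```python
def choose_remedy(observed_rate, target_rate, battery_full, cached_fix=None):
    """Hypothetical sketch of the adaptive policy described above.

    If the app is slow, a self-aware OS could either throw more
    computing power at it (when energy is plentiful) or ask the app
    to switch to a cheaper, lower-quality set of instructions. A fix
    remembered from a previous episode is tried first.
    """
    if observed_rate >= target_rate:
        return "no_action"
    if cached_fix is not None:           # learned from experience
        return cached_fix
    if battery_full:
        return "allocate_more_cores"     # spend energy to hit the goal
    return "use_cheaper_algorithm"       # trade quality for efficiency

# Video at 15 fps against a 30 fps goal, battery full:
remedy = choose_remedy(15, 30, battery_full=True)
```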

The next step is to design a follow-on operating system that can tailor the resources going to any one program. If video were running slowly, the operating system would allocate more power to it. If it were running at 40 frames per second, however, the computer might shunt power elsewhere, because movies do not look better to the human eye at 40 frames per second than they do at 30. “We’re able to save 40 percent of power over standard practice today,” says Henry Hoffmann, a doctoral student in computer science at M.I.T. who is working on the software.
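Hoffmann’s power-saving rule — don’t spend power on frames the eye can’t appreciate — can be expressed as a simple band around the 30 fps goal. This is a minimal sketch with made-up parameter values, not the project’s actual controller.

```python
def adjust_power(observed_fps, current_power, min_fps=30, max_fps=35, step=0.1):
    """Hypothetical sketch: raise power when video drops below its
    30 fps goal, and reclaim power when it overshoots, since 40 fps
    looks no better to the human eye than 30."""
    if observed_fps < min_fps:
        return current_power * (1 + step)   # speed the video back up
    if observed_fps > max_fps:
        return current_power * (1 - step)   # shunt power elsewhere
    return current_power                     # goal met: leave it alone
```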

Self-aware systems will not only make computers smarter, they could prove essential for managing ever more complex computers in the future, says Anant Agarwal, the project’s lead scientist. Over the past decade computer engineers have added more and more basic computing units, called cores, to computers. Today’s computers have two to four cores, but future machines will use anywhere from dozens to thousands of cores. That would make splitting up computational work among the cores, which programmers now do explicitly, nearly impossible. A self-aware system will take that burden off the programmer, adjusting the program’s core use automatically.
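Automatic core allocation can be sketched as the same feedback idea applied to parallelism: instead of the programmer deciding how to split work across cores, the runtime grows or shrinks a program’s core count to track its heartbeat target. The function and its parameters below are hypothetical illustrations, not the Angstrom allocator.

```python
def adjust_cores(observed_rate, target_rate, cores, max_cores, tolerance=0.1):
    """Hypothetical sketch of automatic core allocation. The runtime
    compares the program's heartbeat rate to its goal and recruits or
    releases cores, within a tolerance band to avoid constant churn."""
    if observed_rate < target_rate * (1 - tolerance) and cores < max_cores:
        return cores + 1     # falling behind: recruit another core
    if observed_rate > target_rate * (1 + tolerance) and cores > 1:
        return cores - 1     # overshooting: free a core for other programs
    return cores             # close enough to the goal: hold steady
```

On a future thousand-core machine, a loop like this would run per program, relieving the programmer of deciding core counts by hand.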

Being able to handle so many cores may bring about a whole new level of computing speed, paving the way for a continuation of the trends toward ever faster machines. “As we have very large numbers of cores, we have to have some level of self-aware systems,” says John Villasenor, a professor of electrical engineering at the University of California, Los Angeles, who is not involved in Project Angstrom. “I think you’ll see some elements of this in the next couple of years.”