Dartmouth Engineer - The Magazine of Thayer School of Engineering

Lab Reports

Powerful Platform for Analog Computing

A transistor, conceived of in digital terms, has two states: on and off, which can represent the 1s and 0s of binary arithmetic.

But in analog terms, the transistor has an infinite number of states, which could, in principle, represent an infinite range of mathematical values. Digital computing, for all its advantages, leaves most of transistors’ informational capacity on the table.

[Illustration by Jose-Luis Olivares/MIT]

In recent years, analog computers have proven to be much more efficient at simulating biological systems than digital computers. But existing analog computers have to be programmed by hand, a complex process that would be prohibitively time-consuming for large-scale simulations.

At a recent Association for Computing Machinery conference on programming language design and implementation, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory and Dartmouth College presented a new compiler for analog computers, a program that translates between high-level instructions written in a language intelligible to humans and the low-level specifications of circuit connections in an analog computer.

The work could help pave the way to highly efficient, highly accurate analog simulations of entire organs, if not organisms.

“At some point, I just got tired of the old digital hardware platform,” says Martin Rinard, an MIT professor of electrical engineering and computer science and a coauthor on the paper describing the new compiler. “The digital hardware platform has been very heavily optimized for the current set of applications. I want to go off and fundamentally change things and see where I can get.”

The first author on the paper is Sara Achour, a graduate student in electrical engineering and computer science advised by Rinard. They’re joined by Rahul Sarpeshkar, the Thomas E. Kurtz Professor and professor of engineering, physics, and microbiology and immunology at Dartmouth.

Sarpeshkar, a former MIT professor who joined the Thayer faculty in 2015, has long studied the use of analog circuits to simulate cells. “I happened to run into Rahul at a party and he told me about this platform he had,” Rinard says. “And it seemed like a very exciting new platform.”

The compiler takes as input differential equations, which biologists frequently use to describe cell dynamics, and translates them into voltages and current flows across an analog chip. In principle, it works with any programmable analog device for which it has a detailed technical specification, but in their experiments, the researchers used the specifications for an analog chip that Sarpeshkar developed.
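
To make the compiler's input concrete, here is a minimal sketch, in Python rather than the tool's actual input language, of the kind of model a biologist might supply; the ODE class, species names, and rate constants are invented for illustration.

```python
# Hypothetical sketch of a compiler input: the names and classes below are
# illustrative, not the actual interface from the paper.
from dataclasses import dataclass

@dataclass
class ODE:
    variable: str   # a chemical species, encoded on-chip as a voltage/current
    rhs: str        # right-hand side of d(variable)/dt

# A three-species mass-action model: A + B -> C, with C degrading.
model = [
    ODE("A", "-k1*A*B"),         # A is consumed when A and B react
    ODE("B", "-k1*A*B"),         # B is consumed at the same rate
    ODE("C", "k1*A*B - k2*C"),   # C is produced, then slowly degrades
]

# The compiler's job: assign each variable to a signal on the chip and each
# term to an analog block, within the limits of the chip's specification.
```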

The researchers tested their compiler on five sets of differential equations commonly used in biological research. On the simplest test set, with only four equations, the compiler took less than a minute to produce an analog implementation; on the most complicated, with 75 differential equations, it took close to an hour. But designing an implementation by hand would have taken much longer.

Differential equations are ideally suited to describing chemical reactions in the cell, since the rate at which two chemicals react is a function of their concentrations. By Kirchhoff's laws, the voltages and currents across an analog circuit must balance out. If those voltages and currents encode variables in a set of differential equations, then varying one will automatically vary the others. If the equations describe changes in chemical concentration over time, then varying the inputs over time yields a complete solution to the full set of equations.
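
As a worked example, consider a standard mass-action reaction (illustrative, not one of the paper's benchmark models): two species A and B combining to form C at rate constant k.

```latex
\frac{d[C]}{dt} = k\,[A]\,[B],
\qquad
\frac{d[A]}{dt} = \frac{d[B]}{dt} = -k\,[A]\,[B]
```

If [A], [B], and [C] are encoded as currents, Kirchhoff's current law forces the currents at each node to balance continuously, which is exactly the coupling the equations demand.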

A digital circuit, by contrast, needs to slice time into thousands or even millions of tiny intervals and solve the full set of equations for each of them. And each transistor in the circuit can represent only one of two values, instead of a continuous range of values. “With a few transistors, cytomorphic analog circuits can solve complicated differential equations—including the effects of noise—that would take millions of digital transistors and millions of digital clock cycles,” Sarpeshkar says.
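
To make the contrast concrete, here is a minimal sketch of what the digital approach amounts to: a forward-Euler solver that re-evaluates every equation at each of a million tiny time steps. The step size and rate constants are arbitrary illustrative values.

```python
# Forward-Euler integration of the mass-action model above: a digital
# machine must recompute every equation at each of many tiny time slices.
k1, k2 = 2.0, 0.5
A, B, C = 1.0, 1.0, 0.0
dt, steps = 1e-5, 1_000_000   # a million tiny intervals

for _ in range(steps):
    rate = k1 * A * B
    A += dt * (-rate)
    B += dt * (-rate)
    C += dt * (rate - k2 * C)

print(f"A={A:.4f}, B={B:.4f}, C={C:.4f}")
```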

From the specification of a circuit, the researchers’ compiler determines what basic computational operations are available to it; Sarpeshkar’s chip includes circuits that are already optimized for types of differential equations that recur frequently in models of cells.

The compiler includes an algebraic engine that can redescribe an input equation in terms that make it easier to compile. To take a simple example, the expressions a(x + y) and ax + ay are algebraically equivalent, but one might prove much more straightforward than the other to represent within a particular circuit layout.
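
The idea is easy to demonstrate with an off-the-shelf symbolic-algebra library; SymPy here merely stands in for the compiler's own algebraic engine, which the paper does not describe as SymPy-based.

```python
import sympy as sp

a, x, y = sp.symbols("a x y")
factored = a * (x + y)
expanded = sp.expand(factored)   # a*x + a*y, algebraically equivalent

print(sp.count_ops(factored))    # 2 operations: one multiply, one add
print(sp.count_ops(expanded))    # 3 operations: two multiplies, one add
```

The factored form needs one multiplier block and one adder; the expanded form needs two multipliers and one adder. Which is cheaper depends on which blocks the target chip has free.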

Once it has a promising algebraic redescription of a set of differential equations, the compiler begins mapping elements of the equations onto circuit elements. Sometimes, when it’s trying to construct circuits that solve multiple equations simultaneously, it will run into snags and will need to backtrack and try alternative mappings.
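
A toy sketch of such a backtracking search, with invented Term and Block types standing in for the compiler's real data structures:

```python
# Toy backtracking mapper: place each equation term on a free analog block
# of the matching kind; undo and retry when a choice leaves a later term
# unplaceable. Purely illustrative, not the paper's implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    name: str
    kind: str   # e.g. "multiplier" or "integrator"

@dataclass(frozen=True)
class Block:
    ident: int
    kind: str

def map_terms(terms, free_blocks, assignment=None):
    assignment = assignment if assignment is not None else {}
    if not terms:
        return assignment                         # every term placed
    term, rest = terms[0], terms[1:]
    for block in [b for b in free_blocks if b.kind == term.kind]:
        assignment[term] = block
        remaining = [b for b in free_blocks if b is not block]
        if map_terms(rest, remaining, assignment) is not None:
            return assignment
        del assignment[term]                      # snag: backtrack, try another
    return None                                   # no mapping from this state

terms = [Term("k1*A*B", "multiplier"), Term("dC/dt", "integrator")]
blocks = [Block(0, "multiplier"), Block(1, "integrator")]
print(map_terms(terms, blocks))
```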

But in the researchers’ experiments, the compiler took between 14 and 40 seconds per equation to produce workable mappings, which suggests that it’s not getting hung up on fruitless hypotheses.

“‘Digital’ is almost synonymous with ‘computer’ today, but that’s actually kind of a shame,” says Adrian Sampson, an assistant professor of computer science at Cornell University. “Everybody knows that analog hardware can be incredibly efficient—if we could use it productively. This paper is the most promising compiler work I can remember that could let mere mortals program analog computers.”

—Adapted from an article by Larry Hardesty of the MIT News Office and used with permission.

Integrating Renewables into the Grid

Thayer Professor Amro Farid and researchers from MIT and the United Arab Emirates’ Masdar Institute of Science and Technology have developed critical new formulas for the smooth integration of renewable energy into the electric grid.

Integrating renewable energy into the electric power system is essential for reducing carbon dioxide emissions. But the variability of renewable energy presents challenges for balancing energy sources feeding the grid. In order to get the balance right, power grid operators must procure operating reserves, a form of excess generation capacity. Yet, while insufficient operating reserves put the grid’s reliability at risk, excessive capacity is costly.

[Photograph courtesy iStockphoto]

“Maintaining reserve capacity is relatively easy when electricity is generated by fossil fuels, because we can control how much we put into the system,” Farid says. “In contrast, the amount of renewables available to the grid at any time, like the weather that influences them, must be predicted.”

According to Farid, the new formulas he and his collaborators developed tell exactly how much reserve capacity the power grid should have, depending on the amount of renewables on the grid and various parameters, such as market time steps and forecast errors, that influence how the grid is operated. The formulas can be used to calculate the requirements for each type of operating reserves: load following (the extra power-generating capacity needed in case demand exceeds what was predicted), ramping (the ability to flexibly raise or lower power generation), and regulation (power-generation capacity controlled by automatic feedback loops).
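
The paper's formulas themselves are not reproduced here, but a common textbook-style approximation conveys the flavor: size reserves as a multiple of the combined standard deviation of independent forecast errors. All numbers below are invented for illustration.

```python
# Illustrative reserve sizing -- NOT the formulas from the paper. A common
# rule of thumb covers n standard deviations of combined forecast error.
import math

def reserve_mw(sigma_load, sigma_wind, sigma_solar, n_sigma=3.0):
    """Reserve (MW) covering n-sigma of independent forecast errors."""
    combined = math.sqrt(sigma_load**2 + sigma_wind**2 + sigma_solar**2)
    return n_sigma * combined

# Example: 100 MW load error, 80 MW wind error, 40 MW solar error.
print(f"{reserve_mw(100, 80, 40):.0f} MW of reserves")   # ~402 MW
```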

The formulas are detailed in “An A Priori Analytical Method for the Determination of Operating Reserve Requirements,” forthcoming in the International Journal of Electrical Power & Energy Systems. Farid coauthored the article with Aramazd Muzhikyan of the Masdar Institute and Kamal Youcef-Toumi of MIT.

“These formulas,” Farid says, “will be useful to the energy industry and policy makers.”

Large Fields, Micro Details

Researchers from Dartmouth and the National Institute of Standards and Technology (NIST) have developed a way to quickly scan large fields of view for microscopic-level details, an imaging technique with surgical and other applications.

“Doctors use a microscope to determine if tissue is normal, but during surgery they can’t use a microscope everywhere,” says Professor Stephen Kanick, a member of the research team. “This approach lets the surgeon image the full field to show areas of abnormal microstructure and therefore show where to point the biopsy needle.”  

Sensitive to differences in tissue microstructure—such as density, particle size, and characteristics of the extracellular matrix—the new technique takes just minutes to image structural features over a field of view on the order of square centimeters. “It’s like if you were using Google Earth and you wanted to determine which houses are actually clusters of condominiums,” explains Kanick. “Our approach could tell you where to look for the condos without having to do the work of zooming in over and over again.”
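
As a purely illustrative sketch of that triage step (the parameter, thresholds, and data are invented, not drawn from the paper), one can flag pixels of a wide-field parameter map that fall outside the range measured in normal tissue:

```python
# Toy wide-field triage: flag pixels whose scattering parameter falls
# outside the range seen in normal tissue. Values are simulated.
import numpy as np

rng = np.random.default_rng(0)
scatter_map = rng.normal(1.0, 0.1, size=(512, 512))   # per-pixel parameter
scatter_map[200:260, 300:380] += 0.5                   # simulated abnormal patch

normal_lo, normal_hi = 0.7, 1.3
suspicious = (scatter_map < normal_lo) | (scatter_map > normal_hi)
print(f"{suspicious.mean():.1%} of the field flagged for closer inspection")
```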

The technique is detailed in “Wide-field quantitative imaging of tissue microstructure using sub-diffuse spatial frequency domain imaging,” published in Optica, The Optical Society’s journal for high-impact optics research. Authors include researchers from Thayer’s Optics in Medicine group, Dartmouth-Hitchcock, and NIST.

The research continues. Says first coauthor David “Bo” McClatchy ’13, a Thayer PhD candidate, “We want to translate the optical properties data into maps that are even easier to interpret.”

