New Software Promises to Radically Simplify Chip Design
Breakthrough in graph analysis could help engineers keep pace with Moore’s law
In 1975, Intel co-founder Gordon Moore predicted that the number of transistors on a single microchip would double every two years. He was right: silicon wafers that in the 1970s held just a few thousand transistors now routinely contain tens of billions of components. That makes chip design an extraordinarily complex and costly process, with teams of engineers working for months to design and test the circuitry of each individual chip.
Now, a breakthrough from researchers at Stevens Institute of Technology promises to streamline the chip-design process. Using a branch of mathematics known as spectral graph theory, electrical and computer engineering associate professor Zhuo Feng and his team have developed algorithms that make it easier to understand the relationships between billions of different chip components, and that also provide rich insights into many other kinds of data. “From microchip design to social networks, we live in a world of interconnected information,” Feng says. “Our algorithms let us manage that data more efficiently than ever before.”
The team’s approach interprets networks of information, or graphs, as numerical matrices. Exploring mathematical properties of these matrices, known as eigenvalues and eigenvectors, provides insights into the graph’s underlying structure, even if the specific data making up the graph is unknown. “This is a powerful, mathematically sound, and highly efficient method for approximating big networks,” Feng explains. “It lets us create a high-quality proxy of a graph that preserves important information about the graph’s structure.”
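To see how eigenvectors can reveal structure without any knowledge of the underlying data, consider a minimal sketch (an illustration of the general principle, not the Stevens team's code): a tiny graph made of two clusters joined by a single link, analyzed using nothing but its Laplacian matrix.

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by one bridge edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]

n = 6
A = np.zeros((n, n))            # adjacency matrix of the graph
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Graph Laplacian: degree matrix minus adjacency matrix.
L = np.diag(A.sum(axis=1)) - A

eigvals, eigvecs = np.linalg.eigh(L)

# The eigenvector of the second-smallest eigenvalue (the Fiedler vector)
# takes opposite signs on the two natural communities, splitting the graph
# without ever being told what the vertices represent.
fiedler = eigvecs[:, 1]
clusters = (fiedler > 0).astype(int)   # nodes 0-2 in one group, 3-5 in the other
```

The same computation scales conceptually to any matrix-encoded network, which is why the approach applies equally to circuits and social graphs.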
Based on these algorithms, Feng has created a series of software tools to simplify, enrich, or condense graphs while retaining information about their structure and contents. One such tool, the Graph Spectral Sparsifier (GRASS), “sparsifies” graphs by stripping away the less important connections between vertices, or data points, a bit like simplifying a roadmap by removing side streets and strengthening major highways.
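GRASS's actual algorithm is considerably more sophisticated, but the flavor of spectral sparsification can be sketched with a standard importance measure, effective resistance, which scores an edge highly when few alternative paths exist between its endpoints (this generic scoring is an illustrative assumption, not necessarily GRASS's exact criterion):

```python
import numpy as np

# Toy graph: two triangles joined by a single bridge edge (2, 3), unit weights.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),
         (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0),
         (2, 3, 1.0)]
n = 6

# Build the graph Laplacian and its pseudoinverse.
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w
Lp = np.linalg.pinv(L)

def effective_resistance(u, v):
    """Spectral importance of edge (u, v): high when few alternate paths exist."""
    e = np.zeros(n); e[u], e[v] = 1.0, -1.0
    return float(e @ Lp @ e)

scores = {(u, v): w * effective_resistance(u, v) for u, v, w in edges}
# The bridge is irreplaceable (resistance 1.0, a "highway"), while each triangle
# edge has parallel paths (resistance 2/3, a "side street") and can be dropped
# first with little change to the graph's spectral structure.
keep = sorted(scores, key=scores.get, reverse=True)[:5]
```

Dropping the lowest-scoring edges yields a sparser graph whose Laplacian spectrum, and hence electrical and structural behavior, stays close to the original's.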
Chip designers can use GRASS to distill complex circuits into manageable schematics that can be easily fine-tuned and tested, while retaining all the original information about the locations of individual components. That radically simplifies the chip design process, allowing an integrated circuit with billions of elements to be analyzed in just a few hours. “This is clearly both very promising and very powerful,” Feng says.
Crucially, the algorithms underpinning GRASS are future-proof, and will deliver efficient insights no matter how crowded microchips become in the future. “Our method scales almost linearly, so if the number of chip components doubles, our run time will roughly double,” Feng explains. That’s a vast improvement over past efforts to automate chip design, which saw run times increase much more rapidly when chips doubled in complexity. “Because chip complexity doubles every other year, the scalability of our method is the single most important thing about it,” Feng says.
The Stevens team’s algorithms also power another software tool, Graph Spectral Learning at Scale (GRASPEL), that introduces a spectral method to discern structures in disorganized and potentially high-dimensional data sets. That’s important for several data-mining and machine-learning problems such as image recognition, since it’s possible to sort images into different categories, and thus determine their contents, by spotting patterns in the underlying data that makes up the images.
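The core idea of discerning structure in raw, unlabeled data can be illustrated with a generic spectral-clustering sketch (GRASPEL itself learns a sparse graph with its own spectral criteria; the fully connected similarity graph below is a simplifying assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
# Unlabeled data: two clouds of 2-D points centered at (0, 0) and (3, 3).
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)),
               rng.normal(3.0, 0.3, (10, 2))])
n = len(X)

# Turn raw data points into a graph: edge weights decay with distance.
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
A = np.exp(-D**2 / 2.0)
np.fill_diagonal(A, 0.0)

# Spectral step: the Laplacian's second eigenvector separates the two
# clusters, with no labels or category definitions supplied in advance.
L = np.diag(A.sum(axis=1)) - A
_, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
```

The same recipe extends to high-dimensional inputs such as image feature vectors, which is why spectral structure discovery matters for tasks like image recognition.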
GRASPEL will be useful for chip design, too, allowing engineers to understand the relationships between underlying circuit design parameters such as wire geometries, transistor sizes, power, speed, and cost, and to optimize microcircuitry based on previously unrecognized patterns in their data. “Right now we’re still laying the foundations, but we hope that a few years from now chip designers will be using these tools regularly,” Feng says.
Finally, Feng’s team has collaboratively developed a tool called GraphZoom that efficiently extracts low-dimensional vector representations of huge graphs. The GraphZoom framework allows a wide variety of big graph datasets, from nanoscale chip networks to an entire social network, to be processed hundreds of times faster than before. Since GraphZoom encapsulates a hierarchy of spectrally similar graph representations at different scales, it can also produce results up to 20 percent more accurate than previous approaches. “This is all very new, but it will have a wide range of applications shortly,” Feng says.
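The hierarchy at the heart of such a framework is built by repeatedly coarsening the graph. A minimal sketch of one coarsening step, using a simple greedy pairing heuristic (an assumption for illustration, not GraphZoom's actual matching rule), shows how a graph shrinks while its overall shape survives:

```python
# A 6-node ring graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
n = 6

# Coarsening: greedily pair each unmatched node with an unmatched neighbor,
# then contract every pair into a single coarse node.
coarse_id, next_id = {}, 0
for u, v in edges:
    if u not in coarse_id and v not in coarse_id:
        coarse_id[u] = coarse_id[v] = next_id
        next_id += 1
for u in range(n):                 # any leftover node becomes its own coarse node
    if u not in coarse_id:
        coarse_id[u] = next_id
        next_id += 1

# Re-map edges onto the coarse graph, dropping self-loops.
coarse_edges = {(min(coarse_id[u], coarse_id[v]), max(coarse_id[u], coarse_id[v]))
                for u, v in edges if coarse_id[u] != coarse_id[v]}
# The 6-node ring coarsens to a 3-node triangle, preserving its cycle structure.
```

Embedding the smallest graph in the hierarchy is cheap, and the result can then be refined back up through the levels to the full graph, which is the intuition behind the large speedups.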
The software has been made publicly available for use by academic researchers, and is also being tested by industry partners such as Nvidia. Next, Feng plans to study spectral algorithms for confronting more exotic datasets such as hypergraphs, in which a single edge may connect multiple vertices, or data points, at once, as well as directed graphs, in which connections flow in specific directions. “Even for these very complex graphs, we’re confident we can develop faster algorithms,” Feng says. “Our method will let us tackle all kinds of graphs far more efficiently than was ever previously possible.”
Learn more about the Department of Electrical & Computer Engineering at Stevens:
Department of Electrical & Computer Engineering