William & Mary

Computer science: As Moore’s Law slows down, the Insight Architecture Lab accelerates

  • Accelerator insight: Members of William & Mary’s Insight Architecture Lab, including (from left) Gurunath Kadam and Hongyuan Liu, Ph.D. students, and Adwait Jog, assistant professor of computer science. Photo by Stephen Salpukas

Adwait Jog sat down at a table in McGlothlin-Street Hall last semester and delivered his verdict on the status of a long-standing observation that has predicted the expansion of computing power for decades.

“Moore’s Law is slowing down,” he said. Jog went on to explain that the 1965 observation, which projected that the number of transistors on a chip would keep doubling at a rapid, regular rate, has run its course.

For the foreseeable future at least, computer scientists can’t address most issues by throwing more silicon at the problem. And Moore’s Law expired at a most inconvenient time: just when the demands of big-data applications began to pile up.

Jog is an assistant professor in William & Mary’s Department of Computer Science. He and other computer scientists are working to make computers more efficient by improving the architecture of the machines themselves, an approach necessary for handling computational projects ranging from machine learning to genomics.

As the era of Moore’s Law ends, the limitations of improvements to the central processing unit — the CPU — become clear. New approaches to faster, more secure and more efficient computing are being developed. Jog’s Insight Architecture Lab is exploring the potential of accelerators, components that boost particular functions of a computer. Their work is supported by a number of funding institutions, notably the National Science Foundation.

The members of the Insight Architecture Lab have outlined their accelerator advances in papers that have been accepted at top-tier computing-architecture conferences including the IEEE Symposium on High Performance Computer Architecture and the ACM International Conference on Architectural Support for Programming Languages and Operating Systems.

Accelerators were developed as specialist elements within the architecture of a computer, Jog explains. A common example is the GPU — graphics processing unit — which works with the CPU to boost display performance.

Accelerators offer many virtues that make them ripe for a greater role in a post-Moore’s Law age. For one thing, their design is optimal for high-throughput and for parallelism — dividing code to run through many processors simultaneously. They handle large sets of data well, operate at high speed and are more energy-efficient. But accelerators do have a single drawback, and it’s a large one.
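The parallelism described above can be sketched in ordinary Python, with a process pool standing in for an accelerator’s many cores. This is a toy illustration of the divide-and-run-simultaneously pattern, not actual GPU code:

```python
from concurrent.futures import ProcessPoolExecutor

def scale(chunk):
    # The same simple operation applied to every element of a chunk;
    # on a GPU, thousands of cores would each handle one element at once.
    return [2 * x for x in chunk]

if __name__ == "__main__":
    data = list(range(8))
    # Divide the work, run the pieces simultaneously, gather the results.
    chunks = [data[i:i + 2] for i in range(0, len(data), 2)]
    with ProcessPoolExecutor() as pool:
        results = [y for part in pool.map(scale, chunks) for y in part]
    print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The pattern pays off only when the per-element work is independent, which is exactly the kind of workload accelerators are built for.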

“Accelerators are very good at doing one thing, but they’re not general purpose,” he said. “There is always a tension between general purpose processors and specialized architectures.”

Jog and his team of graduate students are working to lessen that tension between the specialized and the general. Members of the Insight Architecture Lab prepared for months to present their progress and ideas on accelerator-based solutions in a series of papers at a set of top-tier computing-architecture conferences. The members of the lab, all Ph.D. students, are Mohamed Ibrahim, Haonan Wang, Gurunath Kadam and Hongyuan Liu.

Some accelerators are more specialized than others. The GPU is a general-purpose accelerator, Jog explained: specialized in comparison to the CPU, but classified as a generalist in the accelerator world. The ultra-specialists of computer architecture are known as domain-specific accelerators.

The Insight Architecture Lab is working to overcome the challenges posed by domain-specific accelerators in order to incorporate the ultra-specialized units into next-generation computing.

Because each domain-specific device is designed and programmed for a narrow computing purpose, there are too many different kinds for it to be practical to incorporate a large number of these specialized circuits into an efficient computing architecture. In addition, domain-specific design presents another layer of challenges for the implementation of abductive logic programming.

Last, but not least, Jog notes that there is very little understanding when it comes to protection against faults and hacking attacks on domain-specific accelerators.

Jog characterizes the lab’s approach as the “Three I’s”:

  • Improve general-purpose accelerators
  • Infuse domain-specific hardware into general-purpose accelerators
  • Immunize the circuits against attacks and faults

Jog explains that their work on the first and second I’s is directed at the AP, or Automata Processor, a new device introduced a few years ago by Micron. The AP, he explains, is part of the trend toward improving computer operation by developing more specialized architecture.
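The Automata Processor accelerates finite-automata matching — the machinery behind pattern searching — directly in hardware. To give a rough sense of the workload, here is a minimal software simulation of a nondeterministic finite automaton; the machine and its patterns are invented for illustration, not Micron’s actual programming interface:

```python
def nfa_match(transitions, accept, text, start=0):
    """Simulate a nondeterministic finite automaton over `text`.

    `transitions` maps (state, symbol) to a set of successor states.
    The AP evaluates many such machines against a data stream in
    parallel hardware; this loop does it one step at a time in software.
    """
    states = {start}
    for ch in text:
        nxt = set()
        for s in states:
            nxt |= transitions.get((s, ch), set())
        states = nxt
    return bool(states & accept)

# A toy machine over the alphabet {a, b} that accepts strings ending in "ab"
trans = {
    (0, "a"): {0, 1},  # stay put, or bet this "a" starts the final "ab"
    (0, "b"): {0},
    (1, "b"): {2},     # "ab" completed: accepting state
}
print(nfa_match(trans, {2}, "aab"))  # True
print(nfa_match(trans, {2}, "abb"))  # False
```

Evaluating thousands of such machines against the same byte stream at once is what makes dedicated automata hardware attractive for tasks like genomics and network intrusion detection.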

Computer efficiency is often achieved through a concept known as virtualization. Jog explained virtualization by comparing computer memory to the allocation of corporate parking spaces. Like the slots in a corporate parking lot, computer memory is a finite, precious resource, and virtualization becomes more challenging as datasets grow.

“You can allocate more parking permits than you have parking spaces,” he said. “Then you just hope that people won’t get in long lines and end up going home because they can’t find a space.

“But if you know when each employee is due to arrive and leave work, then you can allocate more efficiently,” he added.
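Jog’s parking-lot analogy — issuing more permits than there are spaces — can be sketched as a toy overcommitting allocator. The class and its names are invented for illustration; real virtual memory works at page granularity with hardware support:

```python
class ParkingLot:
    """Toy model of overcommitment: permits (virtual capacity) may
    exceed spaces (physical capacity), betting not everyone shows up."""

    def __init__(self, spaces, permits):
        self.spaces = spaces          # physical resource
        self.permits_left = permits   # virtual resource, may exceed spaces
        self.occupied = set()

    def issue_permit(self):
        if self.permits_left == 0:
            return False
        self.permits_left -= 1
        return True

    def arrive(self, car):
        # Only at arrival does a permit need a real space -- like a
        # virtual page needing a physical frame when first touched.
        if len(self.occupied) < self.spaces:
            self.occupied.add(car)
            return True
        return False  # the overcommit bet lost: this driver "goes home"

lot = ParkingLot(spaces=2, permits=3)
print([lot.issue_permit() for _ in range(3)])  # [True, True, True]
print([lot.arrive(c) for c in "abc"])          # [True, True, False]
```

Knowing arrival and departure times — access patterns, in memory terms — is what lets a system overcommit aggressively without drivers going home.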

The Automata work was a beginning, and the Insight Architecture Lab is moving beyond that work.

“What we want to do is take some of the common things we used for Automata and put them in the GPU,” he said. The idea is to make the GPU do more work and become more efficient, so the architecture doesn’t rely on a multitude of domain-specific accelerators.

“I mean, I don’t want thousands of different accelerators in my system! I may not have a budget for putting accelerators for say both genomics and machine learning,” Jog said. “What I envision is incorporating a few accelerators — and I want to make them more capable and secure.”

He said the development of a line of deft, efficient and secure accelerators is a major focus of his group. And they’ve made progress: they have designed a GPU-based accelerator that takes “nice pieces from different domains,” as Jog puts it, and was demonstrated to perform 26 times better than the previous GPU implementation.

The lab is also addressing the security of the new accelerators (the third I). Jog said that in the age of virtualized cloud-based computing, many users access the same hardware, and not all of those users are legitimate. Some are there to steal data, which makes security an imperative.

“Security is a challenge,” he said. “You want efficiency; you want security. How can we get both? This is difficult.”

Gurunath Kadam, a member of the Insight Architecture Lab, has made a special study of accelerator security. Jog explained that one class of attempts to get into a system is called the side-channel attack. Basically, a side-channel attack is a probe based on a series of inferences, in this case centered on computing time.

“Here is a simple example,” Jog said. “You have a one and a zero, OK? I don’t know whether you have one or zero. But I know that if you do a certain computation with a one, it takes 10 seconds and a zero takes 20 seconds. So if I see your computation takes 10 seconds, I know you have a one.”
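Jog’s one-bit example generalizes: if execution time depends on secret data, an observer can read the secret off the clock. Here is a minimal sketch of the idea, using a step counter in place of wall-clock time; the early-exit comparison and the attacker loop are illustrative inventions, not a real exploit:

```python
def leaky_equals(secret, guess):
    """Compare character by character, exiting at the first mismatch.
    The step count stands in for elapsed time: the further the guess
    matches, the longer the comparison "runs"."""
    steps = 0
    for s, g in zip(secret, guess):
        if s != g:
            return False, steps  # early exit leaks where the mismatch is
        steps += 1
    return len(secret) == len(guess), steps

def recover(secret, alphabet="abc", length=3):
    """Rebuild the secret one character at a time by keeping whichever
    candidate makes the comparison take the longest."""
    known = ""
    for _ in range(length):
        def timing(c):
            guess = known + c + "x" * (length - len(known) - 1)
            return leaky_equals(secret, guess)[1]
        known += max(alphabet, key=timing)
    return known

print(recover("cab"))  # cab
```

The attacker never sees the secret directly; timing alone narrows the search from exponential to linear in the secret’s length.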

One defense against side-channel attacks is to essentially “hide the clock,” by making all computations take the same amount of time. But the security of hiding the clock comes with a certain trade-off in efficiency. And there are always new ways to try to infiltrate a system. Kadam says the lab aims to be proactive.
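“Hiding the clock” amounts to examining every byte no matter where a mismatch occurs, so runtime depends only on the input length, never on the secret’s contents. A minimal constant-time comparison sketch (in real Python code, the standard library’s `hmac.compare_digest` serves this purpose):

```python
def constant_time_equals(secret, guess):
    """Examine every byte, accumulating differences instead of exiting
    early. Runtime depends only on length, not on where bytes differ."""
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= ord(s) ^ ord(g)  # becomes nonzero if any pair differs
    return diff == 0

print(constant_time_equals("cab", "cab"))  # True
print(constant_time_equals("cab", "cac"))  # False
```

The trade-off Jog mentions is visible here: the loop always does maximum work, giving up the efficiency of an early exit in exchange for a silent clock.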

“Our main goal is to discover new attacks, if any, and offer solutions,” he said.