William & Mary

The road to artificial intelligence is paved with calculus

Teaching computers to learn: A 2017 Ph.D. graduate of W&M’s Department of Computer Science, Marty White returned to his alma mater to teach a course on neural networks, a key subfield of artificial intelligence. Photo by Adrienne Berard

Marty White scrawled three words on a whiteboard: Good. Bad. Ugly.

The three adjectives served as parting wisdom for a dozen William & Mary students seated in McGlothlin-Street Hall. White was wrapping up the final class of the semester for his course “Neural Networks for Machine Learning.”

A 2017 Ph.D. graduate of W&M’s Department of Computer Science, White returned to his alma mater to teach after he heard the department wanted to offer another course on machine learning, a key subset of artificial intelligence.

“I believed neural networks could serve as the perfect backdrop for a class studying what learning from data means and how to do it well,” White said. “Since the fundamentals draw from calculus, probability, statistics, and linear algebra, the first part of the course is pretty intense, but I was interested in returning to teach because I had some ideas on how to manage this complexity.”

White’s last lecture of the semester centered on practical and ethical pitfalls within neural networks. In the Russian nesting doll of AI, neural networks are somewhere near the center. They rest inside a multitude of fields, the principal one being machine learning, the range of techniques used to give computer systems the ability to learn.

Before computers develop intelligence, they need to learn. In order to learn, they need to process information via neural networks.

Such networks can be used for good, like detecting cancer, White said. They can also be bad. White gave the example of Amazon Alexa inadvertently sharing obscene material with a toddler. And they can be downright ugly, like when network-backed bots undermine democracies.

“With something like Twitter,” White said, “everyone can follow anyone or anything. Things can get ugly because it touches a lot of people instantaneously.”

White designed the cross-listed graduate/undergraduate course to provide fundamentals for teaching computers how to learn from experience. He is offering the class again in the fall, and it is already at capacity.

“Artificial intelligence has many subfields,” White said. “This course covered a focused subset of a much bigger picture.”

For being a small subset, neural networks touch nearly every aspect of modern society. They are at the core of programs like Siri that respond to speech. Digital security systems that scan social media for threats rely on neural networks. Researchers use neural networks to make sense of million-image databases.

“Under the hood, the technology for vision, speech or language applications, it’s all neural networks these days,” White said.

In an era in which such technology has become ubiquitous, White is candid about the fact that every innovation changes the playing field. He stresses the need to prepare for jobs that may not exist yet. Computer scientists have to be flexible, willing to respond to rapid advances in their discipline.

“From the beginning of this class, I emphasized abstract thinking,” White said. “I started this course by saying neural networks at their core are function approximators. You’ve got inputs, some processing and then you get outputs. At the end of the day, what we’re doing is we’re taking data and we’re learning that process, that function.”
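White’s “function approximator” framing can be made concrete with a small sketch: a one-hidden-layer network trained by gradient descent to approximate a target function from data. The target function, layer sizes, learning rate and step count below are illustrative choices, not taken from the course.

```python
import numpy as np

# A minimal sketch of a neural network as a function approximator:
# inputs -> processing -> outputs, with the "processing" learned from data.
rng = np.random.default_rng(0)

X = rng.uniform(-3, 3, size=(200, 1))   # inputs (data)
y = np.sin(X)                           # the function we want to learn

W1 = rng.normal(0, 0.5, size=(1, 16))   # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    # Forward pass: compute the network's current guess at the function.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backward pass: gradients of the mean squared error.
    grad_pred = 2 * err / len(X)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T * (1 - h ** 2)   # tanh derivative
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient descent update: nudge the weights to reduce the error.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final training mean squared error: {mse:.4f}")
```

Nothing about this loop is specific to sine; swap in different data and the same machinery learns a different function, which is White’s point.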

It seems simple enough, but it’s one of the most complex areas of computer science. Even a foundational grasp of neural networks requires relatively advanced mathematics, White said.

A neural network has to be built around the type of information it will receive. Take, for example, a neural network designed to analyze maps. If the input information is topography, the programmer has to create a neural network capable of processing imagery. If the input information includes towns and street names, that requires a sequence modeler capable of processing language. 

“When you’re presented with a problem and you hear something like ‘how many’ or ‘how much,’ those phrases mean something very specific,” White said. “They should get you started on the approach that you’re ultimately going to use to solve the problem.”

Myriad challenges arise when building a neural network, but White explains that the biggest problem facing his students may have nothing to do with programming mechanics. The real challenge, he says, lies in ethics: the human side of artificial intelligence.

“All the stuff that we do in machine learning is ultimately about finding patterns,” White said. “So how do we find patterns? We group things together, we analyze things over time.”

That process requires abstraction, taking individual data points and using them to chart larger trends and patterns. It becomes easy to forget that those discrete points are often living, breathing people, White said.
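The “grouping things together” that White describes can be sketched with k-means clustering: an algorithm that finds groups in data that carries no labels at all. The two-dimensional blobs and the choice of two clusters below are invented for illustration.

```python
import numpy as np

# A hedged sketch of pattern-finding on unlabeled data: k-means clustering.
# The algorithm is never told which point belongs to which group.
rng = np.random.default_rng(1)

# Two blobs of points, centered at (0, 0) and (4, 4), stacked with no labels.
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[4, 4], scale=0.5, size=(50, 2)),
])

k = 2
centers = data[[0, -1]].copy()   # initialize from two data points

for _ in range(20):
    # Group: assign each point to its nearest center.
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Refine: move each center to the mean of its group.
    centers = np.array([data[labels == c].mean(axis=0) for c in range(k)])

print("recovered centers:\n", centers.round(1))
```

The algorithm recovers the two group centers from raw coordinates alone, a small illustration of why, as the student quote below notes, even unlabeled data can be powerful: each point here could just as easily be a person.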

“One concept that has stayed with me from this class is how powerful data can be, even if the data is unlabeled,” said Ian Wright ’19, who will spend the summer interning for a company that uses neural networks to help detect and respond to cybersecurity threats.

Another student will be working on information processing with W&M’s research lab AidData, while other students will stay on with the department as researchers. Whatever his students decide to do going forward, White tells them to maintain sensitivity toward the data they use and to design neural networks with ethics at the forefront.

“Ultimately, the course is called Neural Networks for Machine Learning,” White said. “But I think the course was always designed to be much more than that.”