Behrooz Tahmasebi, an MIT PhD student in the Department of Electrical Engineering and Computer Science (EECS) and an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL), was taking a mathematics course on differential equations in late 2021 when a glimmer of inspiration struck. In that class, he learned for the first time about Weyl's law, which had been formulated 110 years earlier by the German mathematician Hermann Weyl. Tahmasebi realized it might have some relevance to the computer science problem he was then wrestling with, even though the connection appeared, on the surface, to be thin at best. Weyl's law, he says, provides a formula that measures the complexity of the spectral information, or data, contained within the fundamental frequencies of a drum head or guitar string.

Tahmasebi was, at the same time, thinking about measuring the complexity of the input data to a neural network, wondering whether that complexity could be reduced by taking into account some of the symmetries inherent to the dataset. Such a reduction, in turn, could facilitate, as well as speed up, machine learning processes.

Weyl's law, conceived about a century before the boom in machine learning, had traditionally been applied to very different physical situations, such as those concerning the vibrations of a string or the spectrum of electromagnetic (black-body) radiation given off by a heated object. Nevertheless, Tahmasebi believed that a customized version of that law might help with the machine learning problem he was pursuing. And if the approach panned out, the payoff could be considerable.

He spoke with his advisor, Stefanie Jegelka, an associate professor in EECS and an affiliate of CSAIL and the MIT Institute for Data, Systems, and Society, who believed the idea was definitely worth looking into. As Tahmasebi saw it, Weyl's law had to do with gauging the complexity of data, and so did this project. But Weyl's law, in its original form, said nothing about symmetry.

He and Jegelka have now succeeded in modifying Weyl's law so that symmetry can be factored into the assessment of a dataset's complexity. "To the best of my knowledge," Tahmasebi says, "this is the first time Weyl's law has been used to determine how machine learning can be enhanced by symmetry."

The paper he and Jegelka wrote earned a "Spotlight" designation when it was presented at the December 2023 Conference on Neural Information Processing Systems (NeurIPS), widely regarded as the world's top conference on machine learning.

This work, comments Soledad Villar, an applied mathematician at Johns Hopkins University, "shows that models that satisfy the symmetries of the problem are not only correct but also can produce predictions with smaller errors, using a small amount of training points. [This] is especially important in scientific domains, like computational chemistry, where training data can be scarce."

In their paper, Tahmasebi and Jegelka explored the ways in which symmetries, or so-called "invariances," could benefit machine learning. Suppose, for example, the goal of a particular computer run is to pick out every image that contains the numeral 3. That task becomes much easier, and goes much quicker, if the algorithm can identify the 3 regardless of where it is placed in the box (whether it's exactly in the center or off to the side) and whether it is pointed right-side up, upside down, or oriented at a random angle. An algorithm equipped with the latter capability can take advantage of the symmetries of translation and rotation, meaning that a 3, or any other object, is not changed in itself by altering its position or by rotating it around an arbitrary axis. It is said to be invariant to those shifts. The same logic applies to algorithms charged with identifying dogs or cats. A dog is a dog is a dog, one might say, irrespective of how it is embedded within an image.
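The translation half of this idea can be sketched in a few lines: a detector that scans every offset of an image responds the same way no matter where the pattern sits. This is a toy illustration, not code from the paper; the grids and the 2x2 stand-in for the numeral 3 are made up for the example.

```python
def contains_pattern(grid, pattern):
    """Translation-invariant detection: scan every offset for the pattern."""
    gh, gw = len(grid), len(grid[0])
    ph, pw = len(pattern), len(pattern[0])
    return any(
        all(grid[i + a][j + b] == pattern[a][b]
            for a in range(ph) for b in range(pw))
        for i in range(gh - ph + 1)
        for j in range(gw - pw + 1)
    )

three = [[1, 1],   # a crude 2x2 stand-in for the numeral 3
         [0, 1]]

# The same pattern placed at two different positions in a 4x4 grid.
centered = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 0]]
corner = [[1, 1, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]

print(contains_pattern(centered, three), contains_pattern(corner, three))
# True True: the detector is indifferent to where the pattern appears
```

A rotation-invariant detector would additionally scan the rotated copies of the pattern, at extra cost; the point of the paper is to quantify how much of that cost symmetry lets you avoid paying in data.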

The point of the entire exercise, the authors explain, is to exploit a dataset's intrinsic symmetries in order to reduce the complexity of machine learning tasks. That, in turn, can lead to a reduction in the amount of data needed for learning. Concretely, the new work answers the question: How much less data is needed to train a machine learning model if the data contain symmetries?

There are two ways of achieving a gain, or benefit, by capitalizing on the symmetries present. The first has to do with the size of the sample to be looked at. Let's imagine that you are charged, for instance, with analyzing an image that has mirror symmetry, the right side being an exact replica, or mirror image, of the left. In that case, you don't have to look at every pixel; you can get all the information you need from half of the image, a factor of two improvement. If, on the other hand, the image can be partitioned into 10 identical parts, you can get a factor of 10 improvement. This kind of boosting effect is linear.
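The linear gain can be made concrete with a toy "folding" step: a mirror-symmetric row collapses to its left half, and a signal made of k identical parts collapses to one part. These helper functions are hypothetical illustrations, not the paper's algorithm.

```python
def fold_mirror(row):
    """For a row whose right half mirrors its left, keep only the left half."""
    half = row[: len(row) // 2]
    if row != half + half[::-1]:
        raise ValueError("row is not mirror symmetric")
    return half

def fold_periodic(signal, k):
    """For a signal made of k identical parts, keep only one part."""
    part = signal[: len(signal) // k]
    if signal != part * k:
        raise ValueError("signal is not k-fold periodic")
    return part

row = [3, 1, 4, 1, 5] + [5, 1, 4, 1, 3]      # mirror symmetric
print(len(row), len(fold_mirror(row)))        # 10 5: a factor-of-2 reduction

sig = [7, 2] * 10                             # 10 identical parts
print(len(sig), len(fold_periodic(sig, 10)))  # 20 2: a factor-of-10 reduction
```

In each case the reduction factor equals the number of symmetric copies, which is exactly what "linear" means here.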

To take another example, imagine you are sifting through a dataset, trying to find sequences of blocks that have seven different colors: black, blue, green, purple, red, white, and yellow. Your job becomes much easier if you don't care about the order in which the blocks are arranged. If the order mattered, there would be 5,040 different combinations to look for. But if all you care about are sequences of blocks in which all seven colors appear, then you have reduced the number of things, or sequences, you are searching for from 5,040 to just one.
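The arithmetic behind that count is 7 factorial, and quotienting out the ordering symmetry collapses all 5,040 arrangements into a single equivalence class. A quick check in Python:

```python
from itertools import permutations

colors = ["black", "blue", "green", "purple", "red", "white", "yellow"]

# If order matters, every arrangement of the seven colors is distinct.
ordered = list(permutations(colors))
print(len(ordered))        # 5040, i.e., 7 factorial

# If only the set of colors matters, all permutations are equivalent.
unordered = {frozenset(seq) for seq in ordered}
print(len(unordered))      # 1
```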

Tahmasebi and Jegelka discovered that a different kind of gain, an exponential one, can be reaped for symmetries that operate over many dimensions. This advantage is related to the notion that the complexity of a learning task grows exponentially with the dimensionality of the data space. Making use of a multidimensional symmetry can therefore yield a disproportionately large return. "This is a new contribution that is basically telling us that symmetries of higher dimension are more important because they can give us an exponential gain," Tahmasebi says.
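As a stylized illustration (not the paper's actual bound), the difficulty of learning is often modeled as growing like (1/eps)**d in the data dimension d, where eps is the desired resolution. Under that standard curse-of-dimensionality model, a symmetry that effectively removes k dimensions buys a (1/eps)**k gain, which is exponential in k:

```python
def cover_size(dim, eps):
    """Grid points needed to cover the unit cube [0, 1]^dim at resolution
    eps, a standard stand-in for how learning difficulty scales with
    dimension (the curse of dimensionality)."""
    return round(1 / eps) ** dim

d, k, eps = 10, 3, 0.1
full = cover_size(d, eps)          # symmetry ignored: 10**10 points
reduced = cover_size(d - k, eps)   # symmetry removes k effective dimensions
print(full // reduced)             # 1000, i.e., (1/eps)**k = 10**3
```

Compare this with the mirror-symmetry example, where the gain was a fixed multiplicative factor: here the gain itself blows up as the resolution sharpens, because it compounds across the removed dimensions.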

The NeurIPS 2023 paper that he wrote with Jegelka contains two theorems that were proved mathematically. "The first theorem shows that an improvement in sample complexity is achievable with the general algorithm we provide," Tahmasebi says. The second theorem complements the first, he adds, "showing that this is the best possible gain you can get; nothing else is achievable."

He and Jegelka have provided a formula that predicts the gain one can obtain from a particular symmetry in a given application. A virtue of this formula is its generality, Tahmasebi notes. "It works for any symmetry and any input space." It works not only for symmetries that are known today, but it could also be applied in the future to symmetries that are yet to be discovered. The latter prospect is not too farfetched to consider, given that the search for new symmetries has long been a major thrust in physics. That suggests that, as more symmetries are found, the methodology introduced by Tahmasebi and Jegelka should only get better over time.

According to Haggai Maron, a computer scientist at Technion (the Israel Institute of Technology) and NVIDIA who was not involved in the work, the approach presented in the paper "diverges substantially from related previous works, adopting a geometric perspective and employing tools from differential geometry. This theoretical contribution lends mathematical support to the emerging subfield of 'Geometric Deep Learning,' which has applications in graph learning, 3D data, and more. The paper helps establish a theoretical basis to guide further developments in this rapidly expanding research area."
