The DRP is running in Spring Term 2024!

Stay tuned for updates!

Representation Theory of Finite Groups

Graduate Mentor: Beth-Anne Castellano

Representation theory allows us to study abstract algebraic structures such as groups by "linearizing" them. In particular, a representation of a group is a vector space equipped with a linear action of that group. This allows us to associate matrices to group elements. As it turns out, representations of finite groups can be completely understood by studying their building blocks, called "irreducible representations." Moreover, "characters," which are traces of matrices, give us all of the data we need to distinguish representations of the same group. In this reading course, we'll marry our knowledge of linear and abstract algebra to explore the fundamental ideas of this theory and gain a new perspective on groups! (Depending on time and the interests of the student, we could also explore the representation theory of the symmetric group, which is very combinatorial.)
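To get a small taste of the subject, here is a Python sketch (my own illustration, not taken from the suggested reference) computing the character of the permutation representation of S_3, where the character value of a permutation matrix is just its number of fixed points:

```python
from itertools import permutations

# The permutation representation of S_3 sends each permutation g to the
# 3x3 matrix permuting basis vectors; the trace of that matrix (the
# character value) equals the number of fixed points of g.
group = list(permutations(range(3)))  # all 6 elements of S_3

def character(g):
    """Trace of the permutation matrix of g = number of fixed points."""
    return sum(1 for i, gi in enumerate(g) if gi == i)

# <chi, chi> = (1/|G|) * sum_g chi(g)^2 equals the sum of squared
# multiplicities of the irreducible constituents.
norm_sq = sum(character(g) ** 2 for g in group) / len(group)
print(norm_sq)  # 2.0
```

The value 2 reflects the fact that the permutation representation decomposes as the trivial representation plus the 2-dimensional "standard" irreducible, each appearing once.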

Suggested References: Representations and Characters of Groups (James and Liebeck)

Prerequisites: Math 31 or the equivalent

Parallel Computing and Scientific Machine Learning

Postdoc Mentor: Lizuo Liu

In technical computing, machine learning and scientific computing are the two main branches. Machine learning, with techniques like convolutional neural networks, drives data-driven analytics, while scientific computing relies on differential equation models to simulate scientific laws. Recently, a convergence of these disciplines has emerged as scientific machine learning, showcasing advancements such as using neural networks to accelerate simulations of partial differential equations. Our program explores these methods, addressing challenges like backpropagating through neural networks defined by differential equations and optimizing algorithms for performance. Students will learn to integrate numerical techniques efficiently across fields, culminating in a project where they apply these methods to a problem in scientific machine learning and produce publication-quality analyses.
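As a minimal sketch of the core idea, using nothing beyond the standard library: a parameter of a differential-equation model is learned by gradient descent on a loss that is itself computed by running the solver. (All names and hyperparameters here are illustrative, and the gradient is approximated by finite differences rather than true backpropagation through the solver.)

```python
# Recover the decay rate a of dy/dt = -a*y from "observed" data by
# gradient descent on a loss computed via a forward-Euler simulation.

def simulate(a, y0=1.0, dt=0.01, steps=100):
    """Forward-Euler solve of dy/dt = -a*y; returns y at the final time."""
    y = y0
    for _ in range(steps):
        y += dt * (-a * y)
    return y

true_a = 2.0
target = simulate(true_a)      # synthetic observation from the true parameter

a, lr, eps = 0.5, 5.0, 1e-6    # initial guess, step size, finite-difference step
for _ in range(200):
    loss = (simulate(a) - target) ** 2
    # Finite-difference surrogate for d(loss)/da:
    grad = ((simulate(a + eps) - target) ** 2 - loss) / eps
    a -= lr * grad

print(round(a, 3))  # recovered rate, close to true_a = 2.0
```

Scientific machine learning replaces the finite-difference step with exact gradients obtained by differentiating through (or around) the solver, which is what makes larger models tractable.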

Suggested References:

Prerequisites: N/A

The Temperley-Lieb algebra and applications to knot theory and quantum information theory

Postdoc Mentor: Michael Montgomery

The Temperley-Lieb algebra was originally discovered as a tool for solving problems in statistical mechanics, but through the work of Vaughan Jones, Louis Kauffman, and others, connections with knot theory were found. In the process, it gained a pictorial description as an algebra of string diagrams. We will start with the basics of what the Temperley-Lieb algebra is and how to define the Jones knot polynomial with it. From there we can decide to focus on connections to quantum information theory or study the Temperley-Lieb algebra and planar algebras more generally.
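One concrete, easily checked fact (a standard result, stated here independently of the listed references): the diagram basis of TL_n consists of noncrossing perfect matchings of 2n boundary points, so the dimension of TL_n is the n-th Catalan number. A short Python check:

```python
from functools import lru_cache
from math import comb

# Basis diagrams of the Temperley-Lieb algebra TL_n are noncrossing perfect
# matchings of 2n boundary points, so dim TL_n is the n-th Catalan number.

@lru_cache(None)
def noncrossing_matchings(n):
    """Count noncrossing perfect matchings of 2n points in a row."""
    if n == 0:
        return 1
    # Pair the first point with the (2k+2)-nd; the 2k points enclosed and
    # the 2(n-1-k) points outside must each be matched without crossings.
    return sum(noncrossing_matchings(k) * noncrossing_matchings(n - 1 - k)
               for k in range(n))

for n in range(1, 8):
    catalan = comb(2 * n, n) // (n + 1)
    assert noncrossing_matchings(n) == catalan

print([noncrossing_matchings(n) for n in range(1, 6)])  # [1, 2, 5, 14, 42]
```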

Suggested References: Temperley-Lieb Algebra: From Knot Theory to Logic and Computation via Quantum Mechanics (Samson Abramsky) and Representations of the Temperley-Lieb algebra (Anne Moore)

Prerequisites: Math 24 or 31 would be helpful, but anyone interested should reach out.

An Introduction to Statistical Learning, with Applications in Python

Postdoc Mentor: Ciaran Schembri

"An Introduction to Statistical Learning, with Applications in Python" is a comprehensive book that introduces readers to the field of statistical learning. It covers fundamental concepts and techniques in machine learning and statistical modelling, providing a practical approach with examples and exercises. The book discusses topics such as linear regression, classification methods, resampling methods, model selection and regularization, tree-based methods, support vector machines, unsupervised learning, and more.
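To give a flavor of the book's starting point, here is a small sketch (my own, not the book's code) of simple linear regression fit with the closed-form least-squares formulas:

```python
import random

# Ordinary least squares with a single predictor: choose b0, b1 to
# minimize sum (y_i - (b0 + b1*x_i))^2, which has a closed-form solution.
random.seed(0)
x = [i / 10 for i in range(50)]
y = [2.0 * xi + 1.0 + random.gauss(0, 0.1) for xi in x]  # y = 1 + 2x + noise

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar
print(round(b0, 2), round(b1, 2))  # close to the true intercept 1 and slope 2
```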

Suggested References: "An Introduction to Statistical Learning, with applications in python" by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani.

Prerequisites: Math 13, Math 20 and Math 22 or equivalent.

Markov Chain Monte Carlo and Bayesian Computation

Graduate Mentor: Jonathan Lindbloom

The use of Markov Chain Monte Carlo (MCMC) methods for solving problems arising in Bayesian computation is ubiquitous. Our guiding question for this DRP is: how can we characterize a high-dimensional probability distribution for which we possess only the unnormalized density function? We will begin with an in-depth study of Monte Carlo methods and the theoretical basis for the Metropolis-Hastings algorithm. We will then proceed with a survey of many specialized instances of MCMC algorithms, including: random walk MH, adaptive methods, Langevin algorithms, Gibbs sampling, Hamiltonian Monte Carlo, the NUTS sampler, and ensemble methods such as parallel tempering and affine-invariant samplers. To supplement our theoretical understanding, we will implement some of these methods from scratch to sample the posteriors of some simple Bayesian models. This program would be a great jumping-off point for anyone interested in the intersection of applied mathematics, computational science, and statistics.
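As a taste of what "implement from scratch" looks like, here is a minimal random-walk Metropolis-Hastings sketch (my own illustration; the standard-normal target and all tuning constants are assumptions for the example):

```python
import math
import random

# Random-walk Metropolis-Hastings targeting a density known only up to its
# normalizing constant -- here log p(x) = -x^2/2 + const, a standard normal.
random.seed(1)

def log_density(x):                     # unnormalized log target
    return -0.5 * x * x

def metropolis_hastings(n_samples, step=1.0, x0=0.0):
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0, step)                # symmetric proposal
        log_alpha = log_density(proposal) - log_density(x)  # log acceptance ratio
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = proposal                                    # accept the move
        samples.append(x)                                   # record current state
    return samples

samples = metropolis_hastings(20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # should be near 0 and 1
```

Note that the sampler never needs the normalizing constant: only ratios of the unnormalized density enter the acceptance step, which is exactly why MCMC suits Bayesian posteriors.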

Suggested References: Monte Carlo Statistical Methods (Robert and Casella)

Prerequisites: Math 13, Math 22/24, Math 20/40/60, and familiarity with a programming language

Krylov methods for solving linear systems

Graduate Mentor: Jonathan Lindbloom

The ability to solve large linear systems efficiently is critical throughout the applied and computational sciences. Unfortunately, the methods for solving linear systems that are learned in an introductory course on linear algebra are typically not suited to such systems. In this DRP, we will study Krylov iterative methods, which are well-suited to large-scale problems; for this reason, they were named one of the top 10 algorithms of the 20th century.
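As a preview, here is a sketch of the conjugate gradient method, the best-known Krylov method for symmetric positive-definite systems (plain Python with lists as vectors; a real implementation would never form or store a dense matrix like this):

```python
# Conjugate gradients: each iteration needs only matrix-vector products,
# which is what makes Krylov methods practical for very large systems.

def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                          # residual b - A*x for x = 0
    p = r[:]                          # initial search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)       # exact line search along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # A-conjugate update
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]          # symmetric positive definite
b = [1.0, 2.0]
x = cg(A, b)
print(x)  # solves A x = b; the exact solution is [1/11, 7/11]
```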

Suggested References: Iterative Methods and Preconditioners for Systems of Linear Equations (Ciaramella & Gander)

Prerequisites: Math 13, Math 22/24. Math 56/76 is helpful background but not required.

Reinforcement Learning and Markov Decision Processes

Graduate Mentor: Mark Lovett

Markov decision processes are best described as processes in which a player tries to find a policy (strategy) that maximizes their overall utility; as such, Markov decision processes are the foundation of reinforcement learning (RL). With the renewed interest in AI and machine learning, applications of RL have expanded and already include agents trying to maximize their returns in the stock market, finding the optimal utilization of a resource distribution, and robotics; however, there is still much that is not understood about RL and its dynamics. A fundamental understanding of Markov decision processes allows us to deeply explore RL and its applications. This project will draw on multiple resources to help students gain a fundamental understanding of reinforcement learning and Markov decision processes in general. We will start with the fundamentals of reinforcement learning and Markov decision processes before, hopefully, moving on to some hands-on simulations.
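As a tiny illustration (a made-up two-state example, not drawn from any particular reference), value iteration finds the optimal policy of a Markov decision process by repeatedly applying the Bellman update:

```python
# A two-state MDP: in state 0 the agent can "stay" (reward 0) or "go"
# (reward 1, moving to state 1); state 1 is absorbing with reward 2.
# transitions[s][a] = (reward, next_state); deterministic for simplicity.
transitions = {
    0: {"stay": (0.0, 0), "go": (1.0, 1)},
    1: {"stay": (2.0, 1)},
}
gamma = 0.9  # discount factor

V = {0: 0.0, 1: 0.0}
for _ in range(200):  # Bellman updates until (approximate) convergence
    V = {s: max(r + gamma * V[s2] for r, s2 in acts.values())
         for s, acts in transitions.items()}

# Greedy policy with respect to the converged values:
policy = {s: max(acts, key=lambda a: acts[a][0] + gamma * V[acts[a][1]])
          for s, acts in transitions.items()}
print(policy[0])  # "go": reward 1 now, then 2 per step forever after
```

Here V[1] converges to 2/(1-gamma) = 20 and V[0] to 1 + gamma*20 = 19, so "go" beats "stay" in state 0.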

Suggested References:

Prerequisites: Math 13, Math 24, Math 23; programming experience in Python is encouraged

Matroids

Graduate Mentor: Ben Adenbaum

Matroids are objects that provide a framework for thinking about the abstract notion of independence. The general theory of these objects provides a unified way to understand aspects of graph theory, linear algebra, and combinatorics.
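A small Python illustration (my own example): the three edges of a triangle and three suitable vectors define the same matroid, with "independent" meaning "forest" on the graph side and "linearly independent" on the vector side:

```python
from itertools import combinations

# Graphic matroid of a triangle, realized by vectors: edge {i,j} maps to
# the vector with +1 in slot i and -1 in slot j. A set of edges is
# independent (a forest) exactly when its vectors are linearly independent.
edges = {"e01": (1, -1, 0), "e12": (0, 1, -1), "e02": (1, 0, -1)}

def rank(rows):
    """Rank via Gaussian elimination over floats (3 columns, tiny examples)."""
    rows = [list(map(float, r)) for r in rows]
    rk, col = 0, 0
    while rk < len(rows) and col < 3:
        pivot = next((i for i in range(rk, len(rows))
                      if abs(rows[i][col]) > 1e-9), None)
        if pivot is None:
            col += 1
            continue
        rows[rk], rows[pivot] = rows[pivot], rows[rk]
        for i in range(len(rows)):
            if i != rk and abs(rows[i][col]) > 1e-9:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

independent = [set(S) for k in range(4) for S in combinations(edges, k)
               if rank([edges[e] for e in S]) == k]
# Every proper subset of edges is a forest; the full triangle is a cycle
# (equivalently, e01 + e12 - e02 = 0).
assert {"e01", "e12", "e02"} not in independent
print(len(independent))  # 7 of the 8 subsets are independent
```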

Suggested References: Matroids: A Geometric Introduction (Gordon and McNulty)

Prerequisites: Math 22 or 28 or 38.

... tiles, tiles, tiles, ...

Graduate Mentor: Brian Mintz

What is it about repeating patterns that have captivated humanity since time immemorial? From intricate Islamic mosaics to the mind-bending works of M.C. Escher, tessellation has a universal, unending appeal. To mathematicians, there is another layer to this that is perhaps even more beautiful. In this project, we'll take a tour of the theory underlying regular patterns, discussing symmetries as an introduction to group theory, and the recent breakthrough of the "Einstein" aperiodic monotile.

Suggested References: "The Tiling Book: An Introduction to the Mathematical Theory of Tilings" by Colin Adams

Prerequisites: None.

Social Choice Theory

Graduate Mentor: Brian Mintz

What makes an election "fair"? While 30% of Massachusetts voters are Republicans, none of their representatives are. But this isn't a result of gerrymandering; rather, it is an unavoidable fact of their distribution: it is impossible to draw a Republican-majority district in that state. Mathematically speaking, trying to combine preferences in a group is not a trivial task. One fundamental result in social choice theory is Arrow's impossibility theorem, which states that certain reasonable properties of a fair voting system, such as the lack of a dictator, are inherently incompatible. This project will take a deep dive into different models of aggregating preferences, such as ranked choice voting, and the fascinating mathematics therein.
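A quick Python illustration of the basic obstruction, using the classic textbook example of a Condorcet cycle:

```python
# Three voters with "rock-paper-scissors" preferences: every candidate
# loses some head-to-head matchup, so majority rule alone cannot pick a
# winner -- the kind of obstruction behind Arrow's impossibility theorem.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def prefers(ballot, x, y):
    """True if this ballot ranks x above y."""
    return ballot.index(x) < ballot.index(y)

def pairwise_winner(x, y):
    votes_x = sum(prefers(b, x, y) for b in ballots)
    return x if votes_x * 2 > len(ballots) else y

print(pairwise_winner("A", "B"))  # A beats B (2 of 3 voters)
print(pairwise_winner("B", "C"))  # B beats C
print(pairwise_winner("C", "A"))  # C beats A -- a cycle!
```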

Suggested References: "A Primer in Social Choice Theory" by Wulf Gaertner

Prerequisites: None.

Genetic Algorithms

Graduate Mentor: Brian Mintz

Evolution is nature's optimization algorithm. Through mutation and selection, and a lot of time, an astounding array of organisms and traits have evolved. Taking inspiration from this, researchers have devised similar algorithms to solve complex problems. We'll investigate the applications of this approach to a variety of problems, as well as how the underlying theory compares with other optimization techniques like gradient ascent or linear programming.
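As a sketch of the basic loop (selection, crossover, mutation), here is a minimal genetic algorithm for the toy "one-max" problem; all parameter choices are illustrative:

```python
import random

# One-max: evolve a population of bitstrings toward the all-ones string,
# with fitness simply the number of ones.
random.seed(42)
LENGTH, POP, GENS, MUT = 30, 40, 60, 0.02

def fitness(bits):
    return sum(bits)

def select(pop):
    """Tournament selection: the fitter of two random individuals."""
    a, b = random.choice(pop), random.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(a, b):
    cut = random.randrange(1, LENGTH)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - b if random.random() < MUT else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best))  # near the optimum of 30
```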

Suggested References: "Introduction to Evolutionary Computing" by Agoston E. Eiben and J.E. Smith

Prerequisites: Math 8, some coding experience will be very helpful, but not required.

Elliptic curves and modular forms

Postdoc Mentor: Eran Assaf

In how many ways can you represent a number as a sum of four squares? Can you hear the shape of a drum? What are all the integers x, y, z satisfying x^n + y^n = z^n for n a positive integer? Is the number 1 + 1/2^3 + 1/3^3 + 1/4^3 + … rational? All these questions (and more) are answered by the beautiful theory of modular forms, which we will explore in this reading course. We will learn about elliptic curves, modular curves, and classical modular forms in a hands-on way, with a view towards the modularity theorem: every rational elliptic curve arises from a modular form. This program would be good preparation for those interested in pursuing research in number theory in the future.
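The four-squares question has a concrete answer that modular forms explain: Jacobi's four-square theorem says the number of ordered representations of n as a sum of four integer squares is 8 times the sum of the divisors of n not divisible by 4. A brute-force Python check of this standard result:

```python
from math import isqrt

def r4_brute_force(n):
    """Count ordered (x, y, z, w) in Z^4 with x^2+y^2+z^2+w^2 = n."""
    r = isqrt(n)
    return sum(1
               for x in range(-r, r + 1)
               for y in range(-r, r + 1)
               for z in range(-r, r + 1)
               for w in range(-r, r + 1)
               if x * x + y * y + z * z + w * w == n)

def r4_formula(n):
    """Jacobi: 8 times the sum of divisors of n not divisible by 4."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

for n in range(1, 21):
    assert r4_brute_force(n) == r4_formula(n)
print(r4_formula(1), r4_formula(2))  # 8 24
```

The formula drops out of the fact that the theta series counting these representations is a modular form of weight 2, one of the first payoffs of the theory.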

Suggested References: Elliptic Curves, Modular Forms and L-functions (Álvaro Lozano-Robledo)

Prerequisites: Math 25, Math 31, Math 35, Math 43.