
**Abstract:**
Since the beginning of the COVID-19 pandemic, there has been considerable interest and discussion, in both the popular media and the scientific/medical literature, on pooled testing for COVID-19. Indeed, in June 2020, the FDA released guidelines on pooled testing procedures, which are now available to diagnostic laboratories.

Pooled testing, as described in the popular press and the FDA's ruling, is one model for combinatorial group testing. In this talk, I will discuss a variety of mathematical models for combinatorial group testing, including the design of both the pooling matrix and the decoding algorithms. I will cover major mathematical and algorithmic results in combinatorial group testing and then address what these mathematical results have to say about the practical application of pooled testing. The mathematical tools span a variety of areas from error correcting codes to expander graphs. As with many scientific and technological endeavors, the gap between theory and practice is enormous. (While many of the popular press articles detailed the origins of combinatorial group testing, many left out that it was never actually used in its original form!) On a more positive note, I will give some examples of where combinatorial group testing is used in “theory applications.”
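To make the abstract's "pooling matrix plus decoding algorithm" pairing concrete, here is a minimal sketch of nonadaptive group testing with COMP (Combinatorial Orthogonal Matching Pursuit) decoding. The pooling design below is an illustrative assumption, not the FDA protocol or the talk's construction: 25 items laid out on a 5×5 grid, pooled by rows, columns, and mod-5 diagonals, for 15 tests in total. Because any two items share at most one pool, the design is 2-disjunct, and COMP recovers up to 2 defectives exactly.

```python
def make_pools(q=5):
    """Row, column, and diagonal pools over a q x q grid (q prime)."""
    pools = []
    for r in range(q):                              # row pools
        pools.append({q * r + c for c in range(q)})
    for c in range(q):                              # column pools
        pools.append({q * r + c for r in range(q)})
    for d in range(q):                              # diagonal pools: (r + c) = d mod q
        pools.append({q * r + c for r in range(q) for c in range(q)
                      if (r + c) % q == d})
    return pools

def comp_decode(pools, results, n_items):
    """COMP: clear every item that appears in some negative pool;
    whatever remains is declared defective."""
    cleared = set()
    for pool, positive in zip(pools, results):
        if not positive:
            cleared |= pool
    return set(range(n_items)) - cleared

pools = make_pools()
defectives = {7, 18}                                  # ground truth to recover
results = [bool(pool & defectives) for pool in pools]  # simulated pool outcomes
recovered = comp_decode(pools, results, 25)
print(recovered == defectives)   # True: exact recovery, 15 tests for 25 items
```

The disjunctness of the matrix is what makes the trivially simple COMP decoder correct; weaker matrices require more sophisticated decoding, which is one of the trade-offs the talk addresses.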


**Abstract:**
The Discrete Fourier Transform (DFT) is a fundamental component of numerous computational techniques in signal processing and scientific computing. The most popular means of computing the DFT is the Fast Fourier Transform (FFT). However, with the emergence of big data problems, in which the size of the processed data sets can easily exceed terabytes, the “Fast” in Fast Fourier Transform is often no longer fast enough. In addition, in many big data applications it is hard to acquire a sufficient amount of data in order to compute the desired Fourier transform in the first place. The Sparse Fourier Transform (SFT) addresses the big data setting by computing a compressed Fourier transform using only a subset of the input data, in time sub-linear in the data set size. The goal of this talk is to survey these recent developments, to explain the basic techniques with examples and applications in big data, to demonstrate trade-offs in empirical performance of the algorithms, and to discuss the connection between the SFT and other techniques for massive data analysis such as streaming algorithms and compressive sensing.
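A sketch of the core primitive behind many SFT algorithms: recovering a 1-sparse spectrum from two time samples instead of all N. If x[n] = a·exp(2πi·f·n/N), then x[1]/x[0] = exp(2πi·f/N), so the phase of that ratio reveals the frequency f. (The values of N, f, and a below are illustrative assumptions; full SFT algorithms hash the spectrum into buckets that are each approximately 1-sparse and apply this idea per bucket, which is omitted here.)

```python
import cmath

N = 1024          # signal length (assumed for illustration)
f, a = 217, 3.0   # the single active frequency and its amplitude

def sample(n):
    """One time-domain sample of the 1-sparse signal."""
    return a * cmath.exp(2j * cmath.pi * f * n / N)

# Two samples suffice: the phase of x[1]/x[0] encodes the frequency.
x0, x1 = sample(0), sample(1)
f_hat = round(N * cmath.phase(x1 / x0) / (2 * cmath.pi)) % N
a_hat = abs(x0)
print(f_hat, a_hat)   # 217 3.0
```

This is where the sublinearity comes from: the number of samples and the running time scale with the sparsity of the spectrum rather than with N.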

**Abstract:**
Given a set of distances among points, determining the metric representation most “consistent” with the input distances, or the metric that best captures the relevant geometric features of the data, is a key step in many machine learning algorithms. In this talk, we discuss a number of variants of this problem, ranging from convex optimization problems with metric constraints to sparse metric repair.
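One simple variant of the repair problem can be sketched directly: in "decrease-only" metric repair, given a nonnegative symmetric dissimilarity matrix that violates the triangle inequality, the shortest-path (Floyd–Warshall) closure yields the pointwise-largest matrix dominated by the input that is a metric. The example matrix below is an assumption for illustration; this sketch does not cover the sparse or general repair variants discussed in the talk.

```python
def metric_repair(d):
    """Floyd-Warshall closure of a square distance matrix (list of lists).
    Returns a repaired copy satisfying the triangle inequality."""
    n = len(d)
    d = [row[:] for row in d]          # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Shorten d[i][j] whenever the detour through k is shorter.
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

# Distances with one violation: d(0,2) = 5 > d(0,1) + d(1,2) = 2.
d = [[0, 1, 5],
     [1, 0, 1],
     [5, 1, 0]]
repaired = metric_repair(d)
print(repaired[0][2])   # 2: the violating entry is lowered to the path length
```

The sparse variants mentioned in the abstract instead ask to change as few entries as possible, which is combinatorially much harder than this all-pairs shortest-path relaxation.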