**Note.** This talk should be accessible to both graduate students and undergraduates. I will supply necessary background information.

**Abstract.** The problem of finding efficient representations of digital signals
draws on ideas from probability and statistics, information theory,
harmonic and functional analysis, measure theory, and psychophysics.
Not only is the problem mathematically intriguing, it is of great
practical significance with the current widespread adoption of digital
telecommunication systems.

I will first give an overview of linear transform coding, a well-understood paradigm that is at the heart of industry-standard coders such as JPEG and MPEG. Transform coders model images as wide-sense stationary random vectors with a particular covariance structure. Images are transformed into a basis in which the variance is concentrated into a small number of coefficients. Images are then stored by saving discretized versions of these high-variance coefficients. Linear transform coders are very efficient at removing image redundancy due to global linear interpixel relationships, but considerable additional redundancy exists within images. Finding efficient methods of exploiting this additional redundancy is an open problem.
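
The transform-code-and-quantize pipeline above can be sketched in a few lines. The sketch below is purely illustrative and is not drawn from the talk: it applies an orthonormal DCT-II (the decorrelating basis used by JPEG) to a single block and keeps only the largest-magnitude coefficients; the function names, block size, and keep-the-top-`keep` rule are my own assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: rows are basis vectors.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def transform_code(block, keep):
    # Transform into the DCT basis, zero all but the `keep`
    # largest-magnitude coefficients, and transform back.
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                 # separable 2-D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0    # crude "quantization"
    return C.T @ coeffs @ C                  # inverse transform
```

For natural image blocks, most of the variance lands in a few low-frequency coefficients, so small `keep` values already give good reconstructions; a real coder would additionally discretize the retained coefficients rather than keep them at full precision.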

Fractal image compression schemes address the problem of utilizing higher order image redundancy by making use of redundancy across scales. Fractal schemes, which arose from the study of iterated function systems, operate on a set of principles very different from those employed by transform coders. An image is encoded by first constructing a contraction map of which the image is an approximate fixed point, and then storing the parameters of this map. Images are decoded by iterating the stored map to its fixed point. Fractal schemes have proved difficult to analyze using conventional tools and are poorly understood.
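
As an illustrative sketch only (not the speaker's scheme), here is a toy 1-D fractal block coder in the spirit described above: each range block is approximated by an affine, contractive map of a downsampled domain block from elsewhere in the signal, and decoding simply iterates the stored maps from an arbitrary starting signal until it settles near the fixed point. All names and parameter choices here are hypothetical.

```python
import numpy as np

def encode(signal, rb=4):
    # For each range block of length rb, search all domain blocks of
    # length 2*rb for the best affine fit: range ≈ s*downsample(domain) + o.
    n = len(signal)
    code = []
    for r0 in range(0, n, rb):
        r = signal[r0:r0 + rb]
        best = None
        for d0 in range(0, n - 2 * rb + 1, rb):
            d = signal[d0:d0 + 2 * rb].reshape(rb, 2).mean(axis=1)
            s, o = np.polyfit(d, r, 1)       # least-squares affine fit
            s = np.clip(s, -0.9, 0.9)        # enforce contractivity
            err = np.sum((s * d + o - r) ** 2)
            if best is None or err < best[0]:
                best = (err, d0, s, o)
        code.append(best[1:])                # store (d0, s, o) per block
    return code

def decode(code, n, rb=4, iters=30, x0=None):
    # Iterate the stored contraction map toward its fixed point.
    x = np.zeros(n) if x0 is None else np.asarray(x0, float).copy()
    for _ in range(iters):
        y = np.empty(n)
        for i, (d0, s, o) in enumerate(code):
            d = x[d0:d0 + 2 * rb].reshape(rb, 2).mean(axis=1)
            y[i * rb:(i + 1) * rb] = s * d + o
        x = y
    return x
```

Because every stored scale factor satisfies |s| < 1 and downsampling by averaging is nonexpansive, the decode iteration is a contraction: starting signals far apart converge to the same fixed point, which is the decoded approximation of the original.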

Why does fractal image compression work? More precisely, what statistical description of image data is implicit in fractal coding? This fundamental question is the central issue I will address. I will introduce a new wavelet-based analytical framework for block-based fractal compression schemes. Within this framework I will draw upon insights from the well-established transform coder paradigm. I will show that fractal block coders are a form of Haar wavelet subtree quantization scheme. The behavior of wavelet subtree statistics for a simple statistical texture model motivates a generalization of fractal block coding to smooth wavelet bases. Our generalized coder outperforms the best fractal schemes in the literature by a significant margin. Our new framework also gives insight into the convergence properties of fractal block coders, and it leads us to develop an unconditionally convergent scheme with a fast decoding algorithm.
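
To make the Haar-subtree view concrete, here is a minimal 1-D orthonormal Haar decomposition; a subtree rooted at a detail coefficient at one scale collects the coefficients at the same location across all finer scales, and it is such subtrees that, per the claim above, fractal block coders quantize. This sketch and its names are my illustrative assumption, not material from the talk.

```python
import numpy as np

def haar_1d(x):
    # One orthonormal Haar step: coarse averages and detail coefficients.
    x = np.asarray(x, float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)
    det = (x[0::2] - x[1::2]) / np.sqrt(2)
    return avg, det

def haar_transform(x):
    # Full decomposition: scaling coefficient plus detail bands,
    # returned coarsest band first.
    bands = []
    while len(x) > 1:
        x, d = haar_1d(x)
        bands.append(d)
    return x[0], bands[::-1]
```

Since the transform is orthonormal, it preserves energy, so variance concentrated in a subtree at one spatial location corresponds directly to localized structure in the signal.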

In conclusion, I will describe some recent work with Dennis Healy et al. on the fast k-bases algorithm, a new joint classification and coding scheme with applications to efficient image coding and to fast medical image acquisition.

**Tea.** High tea will be served at 3:30pm in the Lounge.

**Emmy's.** Certain refreshments will be available at the Emmy's after the talk.