Charles J. Amick was an applied mathematician at the University of Chicago who died in 1991, at the age of 39. The Lecture Series was begun in 1993 as a means of honoring his painfully brief life.
2020 Speaker: Fraydoun Rezakhanlou (UC Berkeley)
Lecture 1: Small Random Perturbation of Dynamical System: Reversible Model
Monday, February 10, 2020, 4:00pm - Eckhart 202
Abstract: Dynamical systems that are perturbed by small random noise are known to exhibit metastable behavior. The average transition time between metastable states of a reversible diffusion process is described at the logarithmic scale by Arrhenius' law. The Eyring-Kramers formula classically provides a subexponential prefactor to this exponential estimate in the case of reversible diffusions. A scaling limit of the diffusion yields a Markov chain whose states are the metastable states and whose jump rates are described with the aid of the Eyring-Kramers formula.
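For reference, the two estimates named in the abstract can be written out explicitly. For a reversible diffusion \(dX_t = -\nabla V(X_t)\,dt + \sqrt{2\varepsilon}\,dB_t\) making a transition from a local minimum \(m\) over a nondegenerate saddle \(\sigma\) (the notation here is standard but is supplied for illustration, not quoted from the abstract):

```latex
% Arrhenius' law: the mean transition time \tau at logarithmic scale
\lim_{\varepsilon \to 0} \varepsilon \log \mathbb{E}_m[\tau] = V(\sigma) - V(m).

% Eyring--Kramers formula: the subexponential prefactor, where
% \lambda_-(\sigma) is the unique negative eigenvalue of \nabla^2 V(\sigma)
\mathbb{E}_m[\tau] = \frac{2\pi}{|\lambda_-(\sigma)|}
  \sqrt{\frac{|\det \nabla^2 V(\sigma)|}{\det \nabla^2 V(m)}}
  \, e^{(V(\sigma)-V(m))/\varepsilon}\,\bigl(1 + o(1)\bigr).
```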
Lecture 2: Small Random Perturbation of Dynamical System: Irreversible Model
Tuesday, February 11, 2020, 4:00pm - Eckhart 202
Abstract: In more probabilistic language, Arrhenius' law is a large deviation principle that can be established even for irreversible diffusions by the Freidlin-Wentzell theory. When the drift of the diffusion is a gradient, the model is reversible and the metastable states correspond to the minima of the potential. Otherwise the quasi-potential of Freidlin and Wentzell plays the role of the potential. When the drift is a certain perturbation of a gradient vector field, the metastability phenomenon is expected to occur, though this has not been rigorously verified except in dimension one. Some interesting new behavior has been observed when the perturbation is a Hamiltonian vector field in dimension two.
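For context, the Freidlin-Wentzell quasi-potential mentioned above is defined (in standard notation, not quoted from the abstract) via the large deviation action: for a diffusion with drift \(b\),

```latex
V(x, y) = \inf\Bigl\{ \tfrac{1}{2} \int_0^T
  \bigl|\dot\varphi(t) - b(\varphi(t))\bigr|^2 \, dt
  \;:\; \varphi(0) = x,\ \varphi(T) = y,\ T > 0 \Bigr\}.
```

When \(b = -\nabla U\), one recovers \(V(x, y) = 2\bigl(U(y) - U(x)\bigr)\) for \(y\) in the basin of attraction of a stable equilibrium \(x\), consistent with the reversible picture of Lecture 1.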
Lecture 3: Metastability and Condensation
Wednesday, February 12, 2020, 4:00pm - Eckhart 202
Abstract: Zero Range Processes (ZRPs) are stochastic particle systems that are particularly tractable mathematically because of their simple interaction mechanism and explicit equilibrium states. If the particles reside on a periodic lattice of \(L\) sites, we may recast the ZRP as a random walk on an \((L-1)\)-dimensional lattice (a discrete analog of the models discussed in the previous lectures). When the interaction between particles is sufficiently attractive, particles tend to pile up at one site, i.e., condensation occurs. As the number of particles \(N\) and the number of sites \(L\) increase with \(N/L > \rho_c\), for a critical density \(\rho_c\), particles condense at a unique random site \(x(t)\) for most of the time \(t\). After a suitable rescaling of time, the location of the condensate follows a macroscopic evolution given by a Lévy process. The Lévy measure of this process can be explicitly described in terms of the microscopic details of the underlying ZRP.
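For concreteness, the "simple interaction mechanism" of a ZRP is captured by its generator (standard notation, supplied here rather than quoted from the abstract): a particle leaves site \(x\) at a rate \(g(\eta(x))\) depending only on the occupation number at \(x\),

```latex
(\mathcal{L} f)(\eta) = \sum_{x, y} p(y - x)\, g(\eta(x))
  \bigl[ f(\eta^{x,y}) - f(\eta) \bigr],
```

where \(\eta^{x,y}\) is the configuration \(\eta\) with one particle moved from \(x\) to \(y\), and \(p\) is the jump kernel of a single random walk. A standard condensing choice is \(g(k) = 1 + b/k\) with \(b > 2\), which is attractive enough to produce a finite critical density \(\rho_c\).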
2018 Speaker: L. Craig Evans (UC Berkeley)
Lecture 1: Riccati equations and nonlinear PDE
Tuesday, March 6, 2018, 4:00pm - Eckhart 202
Abstract: In this expository lecture, I will discuss some partial differential equation analogs of Riccati-equation tricks from ordinary differential equation theory.
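As a reminder of the ODE-side trick being referenced (a standard computation, not taken from the lecture itself): the logarithmic substitution \(u = w'/w\) converts a linear second-order equation into a Riccati equation, and conversely,

```latex
w'' + q(t)\, w = 0, \qquad u = \frac{w'}{w}
\quad\Longrightarrow\quad
u' + u^2 + q(t) = 0.
```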
Lecture 2: Weak KAM 1: Formalism and Foundations
Wednesday, March 7, 2018, 4:00pm - Eckhart 202
Abstract: I will briefly describe both the Lagrangian and Hamiltonian approaches to so-called weak KAM theory and explain how interesting information about dynamics is encoded within certain associated nonlinear partial differential equations.
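The "certain associated nonlinear partial differential equations" are, in standard weak KAM notation (supplied here, not quoted from the abstract), the cell (ergodic) problem for a Hamiltonian \(H\):

```latex
H\bigl(x, P + D_x v(x, P)\bigr) = \bar{H}(P),
```

where \(\bar{H}\) is the effective Hamiltonian and the weak (viscosity) solutions \(v\) encode dynamical information, for example about the Aubry and Mather sets.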
Lecture 3: Weak KAM 2: Recent Progress and Conjectures
Thursday, March 8, 2018, 4:00pm - Eckhart 202
Abstract: I will continue with various themes of the previous lecture and will, in particular, discuss some recent conjectures concerning weak KAM theory and distinguished solutions of related matrix Riccati equations.
2016 Speaker: Jennifer Chayes (Microsoft Research)
Lecture 1: Modeling and Estimating Massive Networks: Overview
Friday, October 28, 2016, 4:00pm–5:00pm, Ryerson 251
Abstract: There are numerous examples of sparse massive networks, including the Internet, WWW and online social networks. How do we model and learn these networks? In contrast to conventional learning problems, where we have many independent samples, it is often the case for these networks that we can get only one independent sample. How do we use a single snapshot today to learn a model for the network, and hence predict a similar, but larger network in the future? In the case of relatively small or moderately sized networks, it’s appropriate to model the network parametrically, and attempt to learn these parameters. For massive networks, a non-parametric representation is more appropriate. I review the theory of graph limits (graphons), developed over the last decade, to describe limits of dense graphs and, more recently, sparse graphs of unbounded degree, such as power-law graphs. I then show how to use these graphons to give consistent estimators of non-parametric models of sparse networks, and moreover how to do this in a way that protects the privacy of individuals on the network. This first lecture gives an overview of results, while the second two focus more on details and methods.
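As an illustration of the graphon viewpoint (a minimal sketch, not code from the lecture; the function name and the sparsity parameter `rho` are invented here), a graphon \(W\colon [0,1]^2 \to [0,1]\) generates a random graph by sampling a uniform label for each vertex and connecting each pair independently:

```python
import random

def sample_graphon_graph(W, n, rho=1.0, seed=None):
    """Sample an n-vertex W-random graph.

    Draw labels U_i ~ Uniform[0,1] and connect i < j independently
    with probability min(1, rho * W(U_i, U_j)).  Taking rho = rho_n -> 0
    is one way to pass from the dense to the sparse regime.
    """
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, rho * W(u[i], u[j])):
                edges.add((i, j))
    return edges

# A constant graphon W(x, y) = p recovers the Erdos-Renyi model G(n, p):
# er_edges = sample_graphon_graph(lambda x, y: 0.5, 100, seed=0)
```

Estimating the graphon \(W\) from a single sampled graph is, in this language, exactly the "one snapshot" learning problem described above.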
Lecture 2: Limits and Stochastic Models for Sparse Massive Networks
Monday, October 31, 2016, 4:00pm–5:00pm, Eckhart 202
Abstract: Graphons are obtained as limits of sequences of graphs, and provide non-parametric ways of modeling and generating networks. In this lecture, I focus on analytical methods to obtain graphons as limits of sparse networks of unbounded degree. These networks include growing power-law networks in which vertices can disconnect from previous friends and contacts (known as non-projective sequences of networks), like Facebook and Google. A fundamental tool in the study of networks and other random structures is the Szemerédi Regularity Lemma, which has been used extensively in combinatorics and other fields to discover apparent order within random structures. We show how the conventional (dense network) form of the lemma breaks down for sparse networks, and how to extend it in the sparse case.
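The limit theory alluded to here is usually phrased in terms of the cut norm (standard definition, supplied for reference rather than quoted from the abstract):

```latex
\|W\|_\square = \sup_{S, T \subseteq [0,1]}
  \left| \int_{S \times T} W(x, y)\, dx\, dy \right|,
```

together with the cut metric \(\delta_\square(W, W') = \inf_\phi \|W - W'^{\phi}\|_\square\), where the infimum runs over measure-preserving bijections \(\phi\) of \([0,1]\). The weak form of the Regularity Lemma says that every graph is approximated in this metric by a step function with few steps.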
Lecture 3: Exchangeability and Estimation of Sparse Massive Networks
Tuesday, November 1, 2016, 4:00pm–5:00pm, Eckhart 206
Abstract: Graphons are obtained as limits of sequences of graphs, and provide non-parametric ways of modeling and generating networks. In this final lecture, I give an alternative way to model massive sparse networks, in this case networks which retain all previous connections as they grow. While the approach of Lecture 2 was more analytic, this lecture focuses on statistical considerations. I show how to extend the classic de Finetti Theorem for infinite exchangeable sequences, and the later Aldous-Hoover Theorem for infinite exchangeable dense arrays, to the case of sparse arrays. I also provide proofs of how to use graphons to consistently estimate massive sparse networks, and in certain cases to do this in a way that preserves the privacy of the individuals on the network.
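For reference, the Aldous-Hoover Theorem mentioned here states (in standard notation, not quoted from the abstract) that any jointly exchangeable random array \((X_{ij})\) admits the representation

```latex
X_{ij} = f(\alpha, \xi_i, \xi_j, \zeta_{ij}),
```

for a measurable function \(f\) and i.i.d. uniform random variables \(\alpha\), \((\xi_i)\), \((\zeta_{ij})\). For dense graphs this is precisely sampling from a (random) graphon; the lecture's subject is the analogous representation in the sparse setting.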
Past Amick Lecturers include: Andrew Majda, Joseph Keller, John Ball, Martin Kruskal, Paul Roberts, David Ruelle, John Guckenheimer, Percy Deift, Keith Moffatt, Ingrid Daubechies, Yann Brenier, Felix Otto, Claude Bardos, George Papanicolaou, Michael Brenner, Ronald Coifman, Pierre-Louis Lions, Claude Le Bris, L. Mahadevan, Jeffrey Rauch, Peter Constantin, Emmanuel Candès, and Amit Singer.