Gautam Iyer : Anomalous diffusion in fast cellular flows
- Probability ( 117 Views )In 1953, G. I. Taylor estimated the effective dispersion rate of a solute diffusing in the presence of a laminar flow in a pipe. It turns out that the length scales involved in typical pipes are too short for Taylor's result to apply. The goal of my talk will be to establish a preliminary estimate for the effective dispersion rate in a model problem at time scales much shorter than those required in Taylor's result. Precisely, I will study a diffusive tracer in the presence of a fast cellular flow. The main result (joint with A. Novikov) shows that the variance at intermediate time scales is of order $\sqrt{t}$. This was conjectured by W. Young, and is consistent with an anomalous diffusive behaviour.
Jasmine Foo : Modeling diversity in tumor populations and implications for drug resistance
- Probability ( 102 Views )In this talk I will discuss a branching process model developed to study intra-tumor diversity (i.e. the variation amongst the cells within a single tumor). This variation arises due to heritable (epi)genetic alterations which can confer changes in cellular fitness and drug response. In the asymptotic (t-> infinity) regime, we study the growth rates of the population as well as some ecological measures of diversity in the tumor. In the second half of the talk I will discuss applications of this model to studying the emergence of drug resistant populations in Chronic Myeloid Leukemia (CML). (Joint work w/K. Leder, J. Mayberry, R. Durrett, F. Michor)
Rene Carmona : Mean Field Games: theory and applications
- Probability ( 104 Views )We review the Mean Field Game (MFG) paradigm introduced independently by Caines-Huang-Malhame and Lasry-Lions ten years ago, and we illustrate its relevance for applications with a couple of examples (bird flocking and room exit). We then review the probabilistic approach based on Forward-Backward Stochastic Differential Equations (FBSDEs), and we derive the Master Equation from a version of the chain rule (Ito's formula) for functions over flows of probability measures. Finally, we give a new form to the extension of MFGs to the case of major and minor players and, at least in the finite state space case, we describe an application to virus contagion (e.g. cyber security).
Michael Grabchak : Tempered Stable Distributions: Properties and Extensions
- Probability ( 108 Views )Tempered stable distributions were introduced in Rosinski (2007) as models that look like infinite variance stable distributions in some central region, but have lighter (i.e. tempered) tails. We extend this class of models to allow for more variety in the tails. While some cases no longer correspond to stable distributions, they serve to make the class more flexible, and certain subclasses have been shown to provide a good fit to data. To characterize the possible tails we give detailed results about the finiteness of various moments. We also give necessary and sufficient conditions for the tails to be regularly varying. This last part allows us to characterize the domain of attraction to which a particular tempered stable distribution belongs. We then characterize the weak limits of sequences of tempered stable distributions. We conclude by discussing a mechanism by which distributions that are stable-like in some central region, but with lighter tails, show up in applications.
Markos Katsoulakis : Accelerated Kinetic Monte Carlo methods: hierarchical parallel algorithms and coarse-graining
- Probability ( 101 Views )In this talk we present two intimately related approaches to speeding up molecular simulations via Monte Carlo methods. First, we discuss coarse-graining algorithms for systems with complex, and often competing, particle interactions, in both the equilibrium and non-equilibrium settings, which rely on multilevel sampling and communication. Second, we address mathematical, numerical and algorithmic issues arising in the parallelization of spatially distributed Kinetic Monte Carlo simulations, by developing a new hierarchical operator splitting of the underlying high-dimensional generator, as a means of decomposing the computational load and communication between multiple processors efficiently and systematically. The common theme in both methods is the desire to identify and decompose the particle system into components that communicate minimally, so that local information can either be described by suitable coarse variables (coarse-graining) or computed locally on individual processors within a parallel architecture.
Andrea Agazzi : Large Deviations Theory for Chemical Reaction Networks
- Probability ( 100 Views )The dynamics of a set of chemical reactions are usually modeled by mass action kinetics as a system of polynomial ordinary differential equations. This model treats the state space of the system as a continuum, whereas chemical reactions represent interactions of a discrete set of molecules. We study large fluctuations of the stochastic mass action kinetics model through Freidlin-Wentzell theory. The application of this theory to our framework requires justification, in particular because the model is not uniformly Lipschitz. We therefore find, using tools of Lyapunov stability theory, a set of sufficient conditions for the applicability of large deviations theory to this framework, and prove that these conditions are satisfied by a large class of chemical reaction networks identified exclusively on the basis of their topological structure.
Jonathon Peterson : Quantitative CLTs for random walks in random environments
- Probability ( 96 Views )The classical central limit theorem (CLT) states that for sums of a large number of i.i.d. random variables with finite variance, the distribution of the rescaled sum is approximately Gaussian. However, the statement of the central limit theorem doesn't give any quantitative error estimates for this approximation. Under slightly stronger moment assumptions, quantitative bounds for the CLT are given by the Berry-Esseen estimates. In this talk we will consider similar questions for CLTs for random walks in random environments (RWRE). That is, for certain models of RWRE it is known that the position of the random walk has a Gaussian limiting distribution, and we obtain quantitative error estimates on the rate of convergence to the Gaussian distribution for such RWRE. This talk is based on joint works with Sungwon Ahn and Xiaoqin Guo.
Rick Durrett : Diffusion limit for the partner model at the critical value
- Probability ( 92 Views )The partner model is an SIS epidemic in a population with random formation and dissolution of partnerships, in which disease transmission occurs only within partnerships. Foxall, Edwards, and van den Driessche found the critical value and studied the subcritical and supercritical regimes. Recently Foxall has shown that (if there are enough initial infecteds) the critical model survives for time \(O(N^{1/2})\). Here we improve that result by proving the convergence of \(i_N(t)=I(tN^{1/2})/N^{1/2}\) to a limiting diffusion. We do this by showing that in the first \(O(1)\) units of time, this four dimensional process collapses to two dimensions: the numbers of SI and II partnerships are constant multiples of the number of infected singles \(I_t\). The other variable \(Y_t\), the total number of singles, behaves like an Ornstein-Uhlenbeck process on a time scale of \(O(1)\) and averages out in the limit theorem for \(i_N(t)\). This is joint work with Anirban Basak and Eric Foxall.
Shankar Bhamidi : Two philosophies for random graphs and networks: Local weak convergence and scaling limits
- Probability ( 99 Views )The last few years have witnessed an explosion in the number of mathematical models for random graphs and networks, as well as models for dynamics on these network models. In this context I would like to exhibit the power of two well known philosophies in attacking problems in random graphs and networks: First, local weak convergence: The idea of local neighborhoods of probabilistic discrete structures (such as random graphs) converging to the local neighborhood of limiting infinite objects has been known for a long time in the probability community and has proved to be remarkably effective in proving convergence results in many different situations. Here we shall give a wide range of examples of the above methodology. In particular, we shall show how the above methodology can be used to tackle problems of flows through random networks, where we have a random network with nodes communicating via least cost paths to other nodes. We shall show, in some models on the completely connected network, how the above methodology allows us to prove the convergence of the empirical distribution of edge flows, exhibiting how macroscopic order emerges from microscopic rules. Also, we shall show how for a wide variety of random trees (uniform random trees, preferential attachment trees arising from a wide variety of attachment schemes, models of trees from statistical physics, etc.) the above methodology shows the convergence of the spectral distribution of the adjacency matrix of these trees to a limiting non random distribution function. Second, scaling limits: For the analysis of critical random graphs, one often finds that properly associated walks corresponding to the exploration of the graph encode a wide array of information (including the size of the maximal components). In this context we shall extend work of Aldous on Erdos-Renyi critical random graphs to the context of inhomogeneous random graph models.
If time permits we shall describe the connection between these models and the multiplicative coalescent, arising from models of coagulation in the physical sciences.
Lingjiong Zhu : Self-Exciting Point Processes
- Probability ( 108 Views )Self-exciting point processes are simple point processes that have been widely used in neuroscience, sociology, finance and many other fields. In many contexts, self-exciting point processes can model the complex systems in the real world better than the standard Poisson processes. We will discuss the Hawkes process, the most studied self-exciting point process in the literature. We will talk about the limit theorems and asymptotics in different regimes. Extensions to Hawkes processes and other self-exciting point processes will also be discussed.
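To make the self-excitation mechanism concrete, here is a minimal simulation of a linear Hawkes process with exponential kernel via Ogata-style thinning; the function name `simulate_hawkes` and all parameter choices are illustrative, not from the talk.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate a linear Hawkes process on [0, T] with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    using Ogata's thinning: between events the intensity only decays,
    so its current value is a valid upper bound for rejection sampling."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        # Upper bound on the intensity until the next accepted event
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)          # candidate arrival time
        if t > T:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:    # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events
```

With branching ratio alpha/beta < 1 the process is stable and the long-run event rate is mu / (1 - alpha/beta), which is a quick sanity check on simulated counts.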
Davar Khoshnevisan : A macroscopic multifractal analysis of parabolic stochastic PDEs
- Probability ( 111 Views )We will show that the solutions to a large family of stochastic PDEs that behave as the linear heat equation develop large-scale space-time peaks on infinitely-many different scales. We formalize this assertion by appealing to the Barlow-Taylor theory of macroscopic fractals. We will also present some earlier work on fixed-time results for comparison purposes. This talk is based on a paper and a work in progress with Kunwoo Kim (Technion) and Yimin Xiao (Michigan State University).
Daniel Sanz-Alonso : Bayes as Optimization
- Probability ( 148 Views )In this talk I will revisit the idea of viewing the Bayesian update as a variational problem. I will show how the variational interpretation is helpful in establishing the convergence of Bayesian models, and in defining and analysing diffusion processes that have the posterior as invariant measure. I will illustrate the former by proving a consistency result for graph-based Bayesian semi-supervised learning in the large unlabelled data-set regime, and the latter by suggesting new optimality criteria for the choice of metric in Riemannian MCMC.
Dane Johnson : Large deviations, moderate deviations, and importance sampling
- Probability ( 104 Views )Importance sampling is an accelerated Monte Carlo algorithm that can reduce variance when estimating small probabilities. The design of the algorithm involves the choice of a change of measure, and based on this choice the performance can range from substantially better than standard Monte Carlo to substantially worse. One approach to choosing a change of measure involves embedding the problem of interest in a sequence of processes that satisfies a large deviations principle, and then basing the change of measure on subsolutions to the Hamilton-Jacobi-Bellman equation associated with the large deviations rate function. This approach has the benefit of guaranteeing a certain level of asymptotic performance based on the subsolution, but different embeddings can lead to different rate functions, subsolutions, and consequently different algorithms. I will contrast the strengths and weaknesses of two different embeddings, one using a scaling commonly referred to as the standard large deviations scaling and the other using a scaling referred to as moderate deviations.
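A toy instance of the variance-reduction idea: estimating the Gaussian tail probability P(Z > a) by sampling from the exponentially tilted law N(a, 1), the mean shift suggested by the large deviations rate function x^2/2. The function name and parameters are mine, a minimal sketch rather than the embeddings discussed in the talk.

```python
import math
import random

def is_estimate_tail(a, n_samples=100_000, seed=0):
    """Importance-sampling estimate of P(Z > a) for Z ~ N(0,1),
    drawing Y ~ N(a, 1) and reweighting by the likelihood ratio
    dP/dQ(y) = phi(y) / phi(y - a) = exp(-a*y + a^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        y = rng.gauss(a, 1.0)                        # sample from the tilted law
        if y > a:
            total += math.exp(-a * y + a * a / 2.0)  # likelihood ratio weight
    return total / n_samples
```

For a = 4 standard Monte Carlo would need on the order of 10^7 samples to see even a handful of hits, while the tilted estimator concentrates samples where the event happens and pays for it with small weights.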
Matthew Kahle : Homology of geometric random complexes
- Probability ( 148 Views )There has been a flurry of recent activity in studying the topology of point cloud data. However, there is a feeling that we are lacking rigorous null hypotheses to compare with the results. This is one motivation for the following: Take n points, independently and identically distributed in R^d, according to some distribution (for example, a standard normal distribution). Connect them if they are close (within distance epsilon, a function of n), and then build the Cech complex or Rips complex. What can one say about the homology of this complex as n approaches infinity? Or the persistent homology with respect to the radius? Using a variety of techniques, including Poissonization, Stein's method, and discrete Morse theory, we are able to identify phase transitions, and for certain ranges of epsilon prove central limit theorems for the Betti numbers. This is joint work with Gunnar Carlsson and Persi Diaconis.
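The zeroth Betti number of such a complex is simply the number of connected components of the underlying geometric graph, which a union-find pass computes directly. This toy version (function name and parameters are mine) uses uniform rather than Gaussian points and a brute-force O(n^2) distance check.

```python
import random

def components_rgg(n, eps, d=2, seed=0):
    """Betti-0 of a random geometric graph: n i.i.d. uniform points in
    [0,1]^d, joined by an edge when within distance eps; returns the
    number of connected components via union-find with path compression."""
    rng = random.Random(seed)
    pts = [tuple(rng.random() for _ in range(d)) for _ in range(n)]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if sum((a - b) ** 2 for a, b in zip(pts[i], pts[j])) <= eps * eps:
                parent[find(i)] = find(j)  # merge the two components
    return len({find(i) for i in range(n)})
```

Varying eps as a function of n traces out the phase transitions the talk describes: for tiny eps every point is isolated, for large eps a single component remains.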
Junchi Li : Axelrod's Model
- Probability ( 104 Views )Axelrod's model is a voter model in which individuals have multiple opinions and neighbors interact at a rate proportional to the fraction of opinions they share. I will describe physicists' predictions about the behavior of this model, recent results of Lanchier in one dimension, and a new result I have proved about the two dimensional case.
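A minimal discrete-time sketch of the interaction rule on a one-dimensional cycle (the talk concerns the one- and two-dimensional lattice versions); all names and parameter choices here are illustrative.

```python
import random

def axelrod_step(state, rng):
    """One interaction of Axelrod's model on a cycle: pick an agent and a
    random neighbor; with probability equal to the fraction of opinions
    they share, the agent copies one opinion on which they still differ."""
    n = len(state)
    i = rng.randrange(n)
    j = (i + rng.choice([-1, 1])) % n            # random neighbor on the cycle
    F = len(state[i])
    shared = sum(a == b for a, b in zip(state[i], state[j]))
    if 0 < shared < F and rng.random() < shared / F:
        k = rng.choice([f for f in range(F) if state[i][f] != state[j][f]])
        state[i] = state[i][:k] + (state[j][k],) + state[i][k + 1:]

def simulate(n=20, F=3, q=2, steps=20_000, seed=0):
    """Run the model from i.i.d. uniform opinions: n agents, F opinion
    coordinates, each taking one of q values."""
    rng = random.Random(seed)
    state = [tuple(rng.randrange(q) for _ in range(F)) for _ in range(n)]
    for _ in range(steps):
        axelrod_step(state, rng)
    return state
```

Note that agents sharing no opinions never interact, which is what allows frozen multicultural configurations to coexist with consensus in the predictions mentioned above.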
Eric Foxall : The compulsive gambler with allowances
- Probability ( 119 Views )We consider a process in which a finite set of n agents continually receive a 1 dollar allowance and gamble their fortunes, all in, with one another at a constant rate. This is a variation on the existing compulsive gambler process; in that process, initial fortunes are prescribed and no further allowances are given out. For our process, we find that after some time the distribution of wealth settles into a pattern in which most people have only a few dollars, a few are very wealthy, and a single person possesses most of the cash currently present in the population. In addition, eventually the only way to attain first rank is by winning a bet against the current champion. Moreover, if agents play a fair game, i.e., the probability of winning a bet is proportional to the players' fortunes, the title of champion is assumed by every player infinitely often, although it changes less and less frequently as time goes on. Finally, by examining the process from both the perspective of typical fortune, and that of large fortune, we can go one step further and obtain two distinct limiting processes as n --> infty, with each one admitting a detailed description of its dynamics.
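A discrete-time caricature of the process, alternating allowance rounds with single all-in bets under the fair-game rule; this is only a sketch of the continuous-time dynamics described above, with illustrative names and parameters.

```python
import random

def gamble(n=50, rounds=10_000, seed=0):
    """Compulsive gambler with allowances, discretized: each round every
    agent receives a 1-dollar allowance, then one uniformly chosen pair
    bets all-in, the winner chosen with probability proportional to the
    two fortunes (the fair-game rule).  Returns fortunes sorted
    largest-first."""
    rng = random.Random(seed)
    fortune = [1] * n
    for _ in range(rounds):
        for i in range(n):
            fortune[i] += 1                      # allowances
        i, j = rng.sample(range(n), 2)           # a uniformly random pair bets
        pot = fortune[i] + fortune[j]
        if rng.random() < fortune[i] / pot:      # win prob proportional to fortune
            fortune[i], fortune[j] = pot, 0
        else:
            fortune[i], fortune[j] = 0, pot
    return sorted(fortune, reverse=True)
```

Inspecting the sorted output for moderate n already shows the pattern described in the abstract: a single dominant fortune, a few wealthy agents, and a majority holding only their recent allowances.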
Josh Socolar : Exhaustive Percolation and Random Boolean Networks
- Probability ( 98 Views )The nature of dynamical processes occurring on a directed network can depend qualitatively on the logic implemented at each node. In large random networks where nodes act as Boolean gates, there is a phase transition from quiescent to chaotic behavior as the average degree and/or the probabilities of assigning the different logic functions are varied. To understand the behavior at the transition, we are led to a special type of percolation problem in which the relevant question is not whether a cascade spans the system, but whether it covers the entire system. I will introduce the problem of exhaustive percolation, outline a method for solving it, and describe its application to random Boolean networks.
Jim Nolen : Normal approximation for a random resistor network
- Probability ( 92 Views )I will describe a central limit theorem for the rate of energy dissipation in a random network of resistors. In the continuum setting the model is an elliptic PDE with random conductivity coefficient. In the large network limit, homogenization occurs and the random dissipation rate can be approximated well by a normal random variable having the same mean and variance. I'll give error estimates for this approximation in total variation norm which have optimal scaling. The analysis is based on Stein's method and a recent result of Sourav Chatterjee.
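In the simplest one-dimensional caricature of such a network, the dissipation rate through a series chain of resistors held at a fixed potential difference reduces to the effective conductance times the squared voltage. The sketch below (function names and the uniform conductance law are my choices, not the talk's model) generates samples of this random dissipation rate.

```python
import random

def chain_dissipation(conductances, V=1.0):
    """Energy dissipation rate of a series chain of resistors with the
    given edge conductances, held at potential difference V: resistances
    add in series, so C_eff = 1 / sum(1/c_e) and the dissipation is
    C_eff * V^2."""
    c_eff = 1.0 / sum(1.0 / c for c in conductances)
    return c_eff * V * V

def random_dissipation(n_edges, rng):
    """One sample of the dissipation rate in a random environment:
    i.i.d. uniform conductances on [1, 2] (an illustrative choice)."""
    return chain_dissipation([rng.uniform(1.0, 2.0) for _ in range(n_edges)])
```

Repeating `random_dissipation` over many independent environments gives an empirical distribution whose closeness to a matched Gaussian is exactly the kind of statement the Berry-Esseen-type bounds in the talk quantify (there for the genuinely multidimensional network).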
Santosh Vempala : Logconcave Random Graphs
- Probability ( 145 Views )We propose the following model of a random graph on $n$ vertices. Let F be a distribution in R_+^{n(n-1)/2} with a coordinate for every pair ij with 1 \le i < j \le n. Then G_{F,p} is the distribution on graphs with n vertices obtained by picking a random point X from F and defining a graph on n vertices whose edges are pairs ij for which X_{ij} \le p. The standard Erdős-Rényi model is the special case when F is uniform on the 0-1 unit cube. We determine basic properties such as the connectivity threshold for quite general distributions. We also consider cases where the X_{ij} are the edge weights in some random instance of a combinatorial optimization problem. By choosing suitable distributions, we can capture random graphs with interesting properties such as triangle-free random graphs and weighted random graphs with bounded total weight. This is joint work with Alan Frieze (CMU) and Juan Vera (Waterloo). The talk will be self-contained and no prior knowledge of random graphs is assumed.
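The sampling mechanism is easy to state in code: draw X from F, one coordinate per vertex pair, and keep the edges ij with X_{ij} <= p. With independent uniform coordinates this recovers the Erdős-Rényi special case; the helper names below are mine.

```python
import random

def sample_graph(n, p, sample_X, seed=0):
    """Sample from G_{F,p}: draw X from F (a dict keyed by pairs (i, j)
    with i < j) and keep edge ij iff X_ij <= p."""
    rng = random.Random(seed)
    X = sample_X(n, rng)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if X[(i, j)] <= p}

def uniform_cube(n, rng):
    """F = uniform on the unit cube: i.i.d. U[0,1] coordinates, so
    sample_graph(n, p, uniform_cube) is exactly Erdos-Renyi G(n, p)."""
    return {(i, j): rng.random() for i in range(n) for j in range(i + 1, n)}
```

Other choices of `sample_X` (correlated or constrained coordinates) give the triangle-free and bounded-total-weight examples mentioned above; only the distribution F changes, not the sampler.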
Roberto I. Oliveira : Estimating graph parameters via multiple random walks
- Probability ( 132 Views )What can one say about a graph from multiple (short) random walk trajectories on it? In this talk we consider algorithms that only "see" walk trajectories and the degrees along the way. We will show that the number of vertices, edges and mixing time can be all estimated with a number of RW steps that is sublinear in the size of the graph and in its mixing or relaxation time. Our bounds on the number of RW steps are optimal up to constant or polylog factors. We also argue that such algorithms cannot "know when to stop", and discuss additional conditions that circumvent this limitation. To analyse our results, we rely on novel bounds for random walk intersections. The lower bounds come from a family of explicit constructions.
Pascal Maillard : Interval fragmentations with choice
- Probability ( 124 Views )Points fall into the unit interval according to a certain rule, splitting it up into fragments. An example rule is the following: at each step, two points are randomly drawn from the unit interval and the one that falls into the smaller (or larger) interval is discarded, while the other one is kept. This process is inspired by the so-called "power of choice" paradigm originating in the computer science literature on balanced load allocation models. The question of interest is how much the rule affects the geometry of the point cloud. With Elliot Paquette [1] we introduced a general version of this interval fragmentation model and showed that the empirical distribution of rescaled interval lengths converges almost surely to a deterministic probability measure. I will report on this work as well as on work in progress [2] where we show that the empirical measure of the points converges almost surely to the uniform distribution. The proofs involve techniques from stochastic approximation, non-linear integro-differential equations, ergodic theory for Markov processes and perturbations of semigroups on L^p spaces, amongst other things. [1] Maillard, P., & Paquette, E. (2016). Choices and intervals. Israel Journal of Mathematics, 212(1), 337–384. [2] Maillard, P., & Paquette, E. (in preparation). Interval fragmentations with choice: equidistribution and the evolution of tagged fragments
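The example rule can be simulated directly by keeping the fragment endpoints in a sorted list; this is a sketch with illustrative names, following the two-point rule described above.

```python
import bisect
import random

def fragment(steps, keep="larger", seed=0):
    """Interval fragmentation with choice: at each step two uniform points
    are drawn and the one falling in the larger (or smaller) of its two
    candidate fragments is kept; the other is discarded.  Returns the
    sorted list of fragment endpoints, including 0 and 1."""
    rng = random.Random(seed)
    pts = [0.0, 1.0]                        # fragment endpoints, kept sorted

    def frag_len(x):
        i = bisect.bisect(pts, x)
        return pts[i] - pts[i - 1]          # length of the fragment containing x

    for _ in range(steps):
        u, v = rng.random(), rng.random()
        if keep == "larger":
            pick = max((u, v), key=frag_len)
        else:
            pick = min((u, v), key=frag_len)
        bisect.insort(pts, pick)
    return pts
```

Comparing the empirical length distribution of `fragment(n, "larger")` against plain uniform insertion shows how strongly one bit of choice per point reshapes the geometry, which is the question the talk addresses rigorously.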
Shishi Luo : Multiscale evolutionary dynamics: A measure-valued process perspective
- Probability ( 106 Views )Evolution by natural selection can act at multiple biological levels, often in opposing directions. This is particularly the case for pathogen evolution, which occurs both within the host it infects and via transmission between hosts, and for the evolution of cooperative behavior, where individually advantageous strategies are disadvantageous at the group level. In mathematical terms, these are multiscale systems characterized by stochasticity at each scale. We show how a simple and natural formulation of this can be viewed as a ball-and-urn (measure-valued) process. This equivalent process has very nice mathematical properties, namely it converges weakly to either (i) the solution of an analytically tractable integro-partial differential equation or (ii) a Fleming-Viot process. We can then study properties of these limiting objects to infer general properties of multilevel selection.
Didong Li : Learning & Exploiting Low-Dimensional Structure in High-Dimensional Data
- Probability ( 223 Views )Data lying in a high dimensional ambient space are commonly thought to have a much lower intrinsic dimension. In particular, the data may be concentrated near a lower-dimensional subspace or manifold. There is an immense literature focused on approximating the unknown subspace and the unknown density, and exploiting such approximations in clustering, data compression, and building of predictive models. Most of the literature relies on approximating subspaces and densities using a locally linear, and potentially multiscale, dictionary with Gaussian kernels. In this talk, we propose a simple and general alternative, which instead uses pieces of spheres, or spherelets, to locally approximate the unknown subspace. I will also introduce a curved kernel called the Fisher–Gaussian (FG) kernel which outperforms multivariate Gaussians in many cases. Theory is developed showing that spherelets can produce lower covering numbers and mean square errors for many manifolds, as well as the posterior consistency of the Dirichlet process mixture of the FG kernels. Time permitting, I will also talk about an ongoing project about stochastic differential geometry.
Wesley Pegden : The fractal nature of the Abelian Sandpile
- Probability ( 114 Views )The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor. Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research.
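The toppling rule itself is a few lines of code: repeatedly pick a site with at least 4 chips and send one chip to each neighbor until the configuration is stable. A minimal sketch on a finite grid with open boundary (chips toppled off the edge are lost; names are mine):

```python
def stabilize(grid):
    """Stabilize an Abelian sandpile configuration on an n x n grid with
    open boundary: any site holding >= 4 chips topples, giving one chip
    to each of its (up to) four neighbors.  By the abelian property the
    final configuration does not depend on the toppling order."""
    n = len(grid)
    g = [row[:] for row in grid]
    unstable = [(i, j) for i in range(n) for j in range(n) if g[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if g[i][j] < 4:                      # may have been toppled already
            continue
        g[i][j] -= 4
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n:    # chips off the edge are lost
                g[a][b] += 1
                if g[a][b] >= 4:
                    unstable.append((a, b))
        if g[i][j] >= 4:
            unstable.append((i, j))
    return g
```

Starting from a large pile at a single site and rendering the stable configuration by chip count reproduces the striking fractal patterns the talk is about.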