public 02:29:55

Leonid Berlyand : Flux norm approach to finite-dimensional homogenization approximation with non-separated scales and high contrast

  -   Applied Math and Analysis ( 152 Views )

Classical homogenization theory deals with mathematical models of strongly inhomogeneous media described by PDEs with rapidly oscillating coefficients of the form A(x/\epsilon), \epsilon → 0. The goal is to approximate this problem by a homogenized (simpler) PDE with slowly varying coefficients that do not depend on the small parameter \epsilon. The original problem has two scales: fine O(\epsilon) and coarse O(1), whereas the homogenized problem has only a coarse scale. The homogenization of PDEs with periodic or ergodic coefficients and well-separated scales is now well understood. In joint work with H. Owhadi (Caltech) we consider the most general case of arbitrary L∞ coefficients, which may contain infinitely many scales that are not necessarily well separated. Specifically, we study scalar and vectorial divergence-form elliptic PDEs with such coefficients. We establish two finite-dimensional approximations to the solutions of these problems, which we refer to as finite-dimensional homogenization approximations. We introduce a flux norm and establish an error estimate in this norm with an explicit and optimal error constant that is independent of the contrast and regularity of the coefficients. A proper generalization of the notion of cell problems is the key technical issue in our analysis. The results described above are obtained as an application of the transfer property as well as a new class of elliptic inequalities which we conjecture. These inequalities play the same role in our approach as the div-curl lemma does in classical homogenization. They are closely related to the question of H^2 regularity of solutions of elliptic non-divergence form PDEs with non-smooth coefficients.
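For orientation, the classical two-scale setting that the abstract contrasts with can be written as follows (a standard textbook statement, not taken verbatim from the talk):

```latex
% Oscillatory problem on a domain \Omega with source f:
-\nabla\cdot\big(A(x/\epsilon)\,\nabla u_\epsilon\big) = f \quad \text{in } \Omega,
\qquad u_\epsilon = 0 \ \text{on } \partial\Omega.
% Homogenized problem with constant effective coefficients A^{*}:
-\nabla\cdot\big(A^{*}\,\nabla u\big) = f,
\qquad u_\epsilon \rightharpoonup u \ \text{weakly in } H^1_0(\Omega) \ \text{as } \epsilon \to 0.
```

The flux-norm approach replaces A(x/\epsilon) by a general A(x) in L∞, so no such scale separation between O(\epsilon) and O(1) is assumed.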

public 01:29:58

Courtney Paquette : Algorithms for stochastic nonconvex and nonsmooth optimization

  -   Applied Math and Analysis ( 123 Views )

Nonsmooth and nonconvex loss functions are often used to model physical phenomena, provide robustness, and improve stability. While convergence guarantees in the smooth, convex settings are well-documented, algorithms for solving large-scale nonsmooth and nonconvex problems remain in their infancy.

I will begin by isolating a class of nonsmooth and nonconvex functions that can be used to model a variety of statistical and signal processing tasks. Standard statistical assumptions on such inverse problems often endow the optimization formulation with an appealing regularity condition: the objective grows sharply away from the solution set. We show that under such regularity, a variety of simple algorithms, including subgradient and Gauss-Newton-type methods, converge rapidly when initialized within constant relative error of the optimal solution. We illustrate the theory and algorithms on the real phase retrieval problem, and survey a number of other applications, including blind deconvolution and covariance matrix estimation.
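To make the sharp-growth setting concrete, here is a minimal sketch (not the speaker's code) of the Polyak subgradient method applied to real phase retrieval, minimizing f(x) = mean_i |⟨a_i, x⟩² − b_i|; the problem sizes, seed, and 10% relative-error initialization are illustrative assumptions:

```python
import numpy as np

def phase_retrieval_polyak(A, b, x0, iters=500):
    """Polyak subgradient method for f(x) = mean_i |(a_i.x)^2 - b_i|.
    Uses the known optimal value f* = 0 (noiseless measurements)."""
    x = x0.astype(float).copy()
    m = len(b)
    for _ in range(iters):
        r = (A @ x) ** 2 - b                  # residuals
        fval = np.abs(r).mean()
        if fval == 0:
            break
        # subgradient: (1/m) sum_i sign(r_i) * 2 (a_i.x) a_i
        g = (2.0 / m) * A.T @ (np.sign(r) * (A @ x))
        gn = g @ g
        if gn == 0:
            break
        x -= (fval / gn) * g                  # Polyak step length
    return x

rng = np.random.default_rng(0)
n, m = 20, 120                                # 6x oversampled Gaussian model
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = (A @ x_star) ** 2                         # phaseless measurements
# initialize within constant relative error of the signal, as the theory requires
x0 = x_star + 0.1 * rng.standard_normal(n)
x_hat = phase_retrieval_polyak(A, b, x0)
# recovery is only possible up to a global sign
err = min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star))
print(err / np.linalg.norm(x_star))
```

Under sharp growth the error contracts geometrically, which is the rapid local convergence the abstract refers to.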

One of the main advantages of smooth optimization over its nonsmooth counterpart is the potential to use a line search for improved numerical performance. A long-standing open question is how to design a line-search procedure in the stochastic setting. In the second part of the talk, I will present a practical line-search method for smooth stochastic optimization that has rigorous convergence guarantees and requires only knowable quantities for implementation. While traditional line-search methods rely on exact computations of the gradient and function values, our method assumes that these values are available up to some dynamically adjusted accuracy that holds with some sufficiently high, but fixed, probability. We show that the expected number of iterations to reach an approximate stationary point matches the worst-case efficiency of typical first-order methods, while for convex and strongly convex objectives it achieves the rates of deterministic gradient descent.
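The mechanism can be sketched as an Armijo-type backtracking line search driven by minibatch estimates: accept a trial step when the estimated sufficient-decrease condition holds and grow the step size, otherwise shrink it. This is a schematic under illustrative assumptions (toy least-squares problem, batch size, and constants chosen for demonstration), not the paper's algorithm verbatim:

```python
import numpy as np

def stochastic_line_search(grad_est, f_est, x0, t0=1.0, c=1e-4,
                           shrink=0.5, grow=2.0, t_max=2.0, iters=200):
    """Backtracking line search where gradient and function values are
    only stochastic estimates (e.g. minibatch averages)."""
    x, t = x0.astype(float).copy(), t0
    for _ in range(iters):
        g = grad_est(x)
        trial = x - t * g
        # estimated Armijo sufficient-decrease test
        if f_est(trial) <= f_est(x) - c * t * (g @ g):
            x, t = trial, min(grow * t, t_max)   # success: move, grow step
        else:
            t *= shrink                          # failure: shrink step
    return x

# toy finite-sum least squares: f(x) = mean_i 0.5 (a_i.x - b_i)^2
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_star = rng.standard_normal(5)
b = A @ x_star                                   # noiseless, so min f = 0

def minibatch(k=32):
    idx = rng.integers(0, len(b), size=k)
    return A[idx], b[idx]

def grad_est(x):
    Ai, bi = minibatch()
    return Ai.T @ (Ai @ x - bi) / len(bi)

def f_est(x):
    Ai, bi = minibatch()
    return 0.5 * np.mean((Ai @ x - bi) ** 2)

x_hat = stochastic_line_search(grad_est, f_est, np.zeros(5))
print(np.linalg.norm(x_hat - x_star))
```

The key point the talk addresses is that the test uses only estimated quantities, which must be accurate enough with sufficiently high probability for the convergence guarantees to go through.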

public 01:14:52

Joshua Vogelstein : Consistent Graph Classification applied to Human Brain Connectome Data

  -   Mathematical Biology ( 171 Views )

Graphs are becoming a favorite mathematical object for the representation of data. Yet statistical pattern recognition has focused almost entirely on vector-valued data in Euclidean space. Graphs, however, live in graph space, which is non-Euclidean, so most inference techniques are not even defined for graph-valued data. Previous work on the classification of graph-valued data typically follows one of two recipes: (1) vectorize the adjacency matrices of the graphs and apply standard machine learning techniques, or (2) compute some number of graph invariants (e.g., clustering coefficient or degree distribution) for each graph and then apply standard machine learning techniques. We follow a different recipe, based in the probabilistic theory of pattern recognition. First, we define a joint graph-class model. Given this model, we derive classifiers which we prove are consistent; that is, they converge to the Bayes optimal classifier. Specifically, we build two consistent classifiers for graph-valued data, a parametric and a non-parametric version. In a sense, these classifiers span the spectrum of complexity: the former is consistent for graphs sampled from relatively simple random graph distributions, the latter for graphs sampled from (nearly) any random graph distribution. Although both classifiers assume that all our graphs have labeled vertices, we generalize these results to also incorporate unlabeled graphs, as well as weighted graphs and multigraphs. We apply these graph classifiers to human brain data. Specifically, using diffusion MRI, we can obtain large brain graphs (10,000 vertices) for each subject, where vertices correspond to voxels. We then coarsen the graphs spatially to obtain smaller (70-vertex) graphs per subject. Using <50 subjects, we are able to achieve nearly 85% classification accuracy, with results interpretable to neurobiologists with regard to the brain regions of interest.
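For intuition, here is a minimal sketch of a parametric classifier in this spirit. It assumes, purely as an illustration and not necessarily as the talk's model, that each class generates graphs with independent Bernoulli edges; edge probabilities are estimated from training graphs and a plug-in likelihood-ratio rule labels a new graph (adjacency symmetry is ignored for brevity):

```python
import numpy as np

def fit_edge_probs(graphs, eps=1e-3):
    """MLE edge probabilities under an independent-edge (Bernoulli) model,
    clipped away from 0 and 1 so log-likelihoods stay finite."""
    p = np.mean(graphs, axis=0)
    return np.clip(p, eps, 1 - eps)

def log_likelihood(g, p):
    """Log-likelihood of adjacency matrix g under edge-probability matrix p."""
    return np.sum(g * np.log(p) + (1 - g) * np.log(1 - p))

def classify(g, p0, p1, prior0=0.5):
    """Plug-in Bayes rule: pick the class with the higher posterior score."""
    s0 = log_likelihood(g, p0) + np.log(prior0)
    s1 = log_likelihood(g, p1) + np.log(1 - prior0)
    return 0 if s0 >= s1 else 1

# toy example: two classes of 20-vertex random graphs differing in edge density
rng = np.random.default_rng(2)
n, m = 20, 100
P0, P1 = np.full((n, n), 0.2), np.full((n, n), 0.4)
train0 = (rng.random((m, n, n)) < P0).astype(float)
train1 = (rng.random((m, n, n)) < P1).astype(float)
p0, p1 = fit_edge_probs(train0), fit_edge_probs(train1)
test = (rng.random((n, n)) < P1).astype(float)   # a graph drawn from class 1
print(classify(test, p0, p1))
```

Consistency in the abstract's sense means that, as the number of training graphs grows, such a plug-in rule approaches the Bayes optimal classifier for the assumed model class.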

public 01:39:40

Frederic Lechenault : Experimental investigation of equilibration properties in model granular subsystems

  -   Nonlinear and Complex Systems ( 168 Views )

We experimentally investigate the statistical features of the stationary states reached by two idealized granular liquids able to exchange volume. The system consists of two binary mixtures of the same number of soft disks, hence covering the same area, but with different surface properties. The disks sit on a horizontal air table, which provides ultra-low friction at the cell bottom, and are separated by a mobile wall. Energy is injected into the system by means of an array of randomly activated coil bumpers standing at the edges of the cell. Due to the energy injection, the system acts like a slow liquid and eventually jams at higher packing fraction. We characterize the macroscopic states by studying the motion of the piston. We find that its average position is different from one half and is a non-monotonic function of the overall packing fraction, which reveals the crucial role played by the surface properties in the corresponding density of states. We then study the bulk statistics of the packing fraction and the dynamics in each subsystem. We find that the measured quantities do not equilibrate, and become dramatically different as the overall packing fraction is increased beyond the onset of supercooling. However, the local fluctuations of the packing fraction are uniquely determined by its average, and hence independent of the interaction between disks. We then focus on the mixing properties of such an assembly. We characterize mixing by computing the topological entropy of the braids formed by the stationary trajectories of the grains at each pressure. This quantity is shown to be well defined, very sensitive to the onset of supercooling (reflecting the dynamical arrest of the assembly), and to equilibrate in the two subsystems. Joint work with Karen Daniels.

public 01:34:51

Math Slam : Math Slam

  -   Graduate/Faculty Seminar ( 116 Views )


public 01:09:19

Benoit Charbonneau : Hilbert series and K-polynomials

  -   Colloquium ( 235 Views )

public 59:53

Jason Ferguson : PRUV Research

  -   Presentations ( 167 Views )

public 01:29:47

Marija Vucelja : A glass transition in population genetics: Emergence of clones in populations

  -   Nonlinear and Complex Systems ( 208 Views )

The fields of evolution and population genetics are undergoing a renaissance, due to the abundance of sequencing data. On the other hand, the existing theories are often unable to explain the experimental findings. It is not clear what sets the time scales of evolution, whether for antibiotic resistance, the emergence of new animal species, or the diversification of life. The emerging picture of genetic evolution is that of a strongly interacting stochastic system with large numbers of components far from equilibrium. In this talk, I plan to focus on clone competition and discuss the diversity of a random population that undergoes selection and recombination (sexual reproduction). Recombination reshuffles genetic material, while selection amplifies the fittest genotypes. If recombination is more rapid than selection, a population consists of a diverse mixture of many genotypes, as is observed in many populations. In the opposite regime, selection can amplify individual genotypes into large clones, and the population reaches so-called "clonal condensation". I hope to convince you that our work provides a qualitative explanation of clonal condensation. I will point out the similarity between clonal condensation and the freezing transition in the Random Energy Model of spin glasses. I will conclude with a summary of our present understanding of the clonal condensation phenomenon and describe future directions and connections to statistical physics.

public 43:08

James Colliander : Crowdmark presentation

  -   Presentations ( 311 Views )

public 01:24:58

no seminar : Thanksgiving

  -   Probability ( 118 Views )