Elliot Cartee : Control-Theoretic Models of Environmental Crime
- Mathematical Biology ( 172 Views )We present two models of perpetrators' decision-making in extracting resources from a protected area. It is assumed that the authorities conduct surveillance to counter the extraction activities, and that perpetrators choose their post-extraction paths to balance the time/hardship of travel against the expected losses from a possible detection. In our first model, the authorities are assumed to use ground patrols, and the protected resources are confiscated as soon as the extractor is observed with them. The perpetrators' path-planning is modeled using optimal control of a randomly terminated process. In our second model, the authorities use aerial patrols, with the apprehension of perpetrators and confiscation of resources delayed until their exit from the protected area. In this case, the path-planning is based on multi-objective dynamic programming. Our efficient numerical methods are illustrated on several examples with complicated geometry and terrain of protected areas, non-uniform distribution of protected resources, and spatially non-uniform detection rates due to aerial or ground patrols.
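The trade-off in the ground-patrol model can be caricatured on a grid (this is only a toy sketch, not the authors' continuous optimal-control formulation): each step costs one unit of travel time plus the entered cell's detection rate times the value of the carried resources, and Dijkstra's algorithm finds the cheapest exit path. The detection field `psi`, the carried `value`, and the geometry are all invented for the illustration.

```python
import heapq

def cheapest_exit_path(psi, start, exits, value=10.0):
    """Dijkstra on a grid: step cost = travel time (1) plus
    expected loss (detection rate of the entered cell * carried value)."""
    rows, cols = len(psi), len(psi[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue
        if node in exits:
            path = [node]                 # reconstruct the optimal path
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + value * psi[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf"), []

# A 5x5 area with heavy surveillance down the middle column:
# every exit path must pay the detection penalty at least once.
psi = [[0.0] * 5 for _ in range(5)]
for r in range(5):
    psi[r][2] = 0.8
cost, path = cheapest_exit_path(psi, start=(2, 0), exits={(2, 4)})
```

Any path from the left edge to the right edge crosses the patrolled column exactly once at minimum, so the optimal cost is 4 travel steps plus one detection penalty of 0.8 × 10.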
Leonid Berlyand : Flux norm approach to finite-dimensional homogenization approximation with non-separated scales and high contrast
- Applied Math and Analysis ( 152 Views )
Classical homogenization theory deals with mathematical models of strongly inhomogeneous media described by PDEs with rapidly oscillating coefficients of the form A(x/\epsilon), \epsilon → 0. The goal is to approximate this problem by a homogenized (simpler) PDE with slowly varying coefficients that do not depend on the small parameter \epsilon. The original problem has two scales: fine O(\epsilon) and coarse O(1), whereas the homogenized problem has only a coarse scale.

The homogenization of PDEs with periodic or ergodic coefficients and well-separated scales is now well understood. In joint work with H. Owhadi (Caltech) we consider the most general case of arbitrary L∞ coefficients, which may contain infinitely many scales that are not necessarily well separated. Specifically, we study scalar and vectorial divergence-form elliptic PDEs with such coefficients. We establish two finite-dimensional approximations to the solutions of these problems, which we refer to as finite-dimensional homogenization approximations. We introduce a flux norm and establish the error estimate in this norm with an explicit and optimal error constant independent of the contrast and regularity of the coefficients. A proper generalization of the notion of cell problems is the key technical issue in our consideration.

The results described above are obtained as an application of the transfer property as well as a new class of elliptic inequalities which we conjecture. These inequalities play the same role in our approach as the div-curl lemma in classical homogenization. They are closely related to the issue of H^2 regularity of solutions of elliptic non-divergence-form PDEs with nonsmooth coefficients.
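In one space dimension the homogenized coefficient is available in closed form (the harmonic mean of a(y) over a period), which makes the classical well-separated-scale picture easy to verify numerically. The sketch below illustrates only that baseline setting, not the flux-norm results of the talk; the coefficient a(y) = 2 + sin(2πy) and the grid sizes are arbitrary choices.

```python
import numpy as np

def solve_dirichlet(a_mid, h, f=1.0):
    """Finite-difference solve of -(a u')' = f on (0,1), u(0)=u(1)=0.
    a_mid[i] is the coefficient at the midpoint between nodes i and i+1."""
    n = len(a_mid)  # number of intervals
    main = (a_mid[:-1] + a_mid[1:]) / h**2
    A = (np.diag(main)
         - np.diag(a_mid[1:-1] / h**2, 1)
         - np.diag(a_mid[1:-1] / h**2, -1))
    return np.linalg.solve(A, np.full(n - 1, f))

eps, n = 0.05, 1000
h = 1.0 / n
x_mid = (np.arange(n) + 0.5) * h
a = 2.0 + np.sin(2 * np.pi * x_mid / eps)   # oscillates on the fine scale eps
u_eps = solve_dirichlet(a, h)

# Homogenized coefficient: the harmonic mean of a over one period (= sqrt(3) here).
y = np.linspace(0.0, 1.0, 10001)
a_star = 1.0 / np.mean(1.0 / (2.0 + np.sin(2 * np.pi * y)))

x = np.arange(1, n) * h
u_hom = x * (1 - x) / (2 * a_star)          # exact solution of -a* u'' = 1
err = np.max(np.abs(u_eps - u_hom)) / np.max(u_hom)
```

The oscillatory solution tracks the smooth homogenized one up to an O(\epsilon) corrector, so the relative error above is a few percent for eps = 0.05.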
Lan-Hsuan Huang : Constant mean curvature foliations for isolated systems in general relativity
- Geometry and Topology ( 125 Views )We will discuss the existence and uniqueness of the foliation by stable spheres with constant mean curvature for asymptotically flat manifolds satisfying the Regge-Teitelboim condition at infinity. This work generalizes the earlier results of Huisken/Yau, Ye, and Metzger. We will also discuss the concept of the center of mass in general relativity.
Courtney Paquette : Algorithms for stochastic nonconvex and nonsmooth optimization
- Applied Math and Analysis ( 123 Views )Nonsmooth and nonconvex loss functions are often used to model physical phenomena, provide robustness, and improve stability. While convergence guarantees in the smooth, convex settings are well-documented, algorithms for solving large-scale nonsmooth and nonconvex problems remain in their infancy.
I will begin by isolating a class of nonsmooth and nonconvex functions that can be used to model a variety of statistical and signal processing tasks. Standard statistical assumptions on such inverse problems often endow the optimization formulation with an appealing regularity condition: the objective grows sharply away from the solution set. We show that under such regularity, a variety of simple algorithms, including subgradient and Gauss-Newton-type methods, converge rapidly when initialized within constant relative error of the optimal solution. We illustrate the theory and algorithms on the real phase retrieval problem, and survey a number of other applications, including blind deconvolution and covariance matrix estimation.
One of the main advantages of smooth optimization over its nonsmooth counterpart is the potential to use a line search for improved numerical performance. A long-standing open question is to design a line-search procedure in the stochastic setting. In the second part of the talk, I will present a practical line-search method for smooth stochastic optimization that has rigorous convergence guarantees and requires only knowable quantities for implementation. While traditional line-search methods rely on exact computations of the gradient and function values, our method assumes that these values are available up to some dynamically adjusted accuracy that holds with some sufficiently high, but fixed, probability. We show that the expected number of iterations to reach an approximate stationary point matches the worst-case efficiency of typical first-order methods, while for convex and strongly convex objectives it achieves the rates of deterministic gradient descent.
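The deterministic skeleton that the stochastic procedure generalizes is classical backtracking with the Armijo sufficient-decrease condition. The sketch below shows only that deterministic baseline (exact gradients and function values), on a hypothetical quadratic test problem; the stochastic method of the talk replaces these exact values with probabilistically accurate estimates.

```python
import numpy as np

def armijo_gradient_descent(f, grad, x0, c1=1e-4, shrink=0.5, tol=1e-8, max_iter=500):
    """Gradient descent with a backtracking (Armijo) line search:
    the step is shrunk until a sufficient-decrease condition holds."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # Backtrack until f decreases by at least c1 * t * ||g||^2.
        while f(x - t * g) > f(x) - c1 * t * (g @ g) and t > 1e-12:
            t *= shrink
        x = x - t * g
    return x

# Hypothetical smooth test problem: a strongly convex quadratic.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = armijo_gradient_descent(f, grad, np.zeros(2))
```

The iterate converges to the unique minimizer, the solution of A x = b.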
Haizhao Yang : Data-driven fast algorithms in applied harmonic analysis and numerical linear algebra
- Applied Math and Analysis ( 116 Views )Exploring data structures (e.g., periodicity, sparsity, low-rankness) is a universal method for designing fast algorithms in scientific computing. In the first part of this talk, I will show how this idea is applied to the analysis of oscillatory data in applied harmonic analysis. These fast algorithms have been applied to data analysis problems ranging from materials science to medicine and art. In the second part, I will discuss how this idea works in some basic numerical linear algebra routines like matrix multiplications and decompositions, with an emphasis on electronic structure calculation.
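A concrete, generic instance of exploiting low-rankness is the randomized range finder: sketch the matrix with a random test matrix, orthonormalize, and project. This is the standard Halko-Martinsson-Tropp-style prototype, offered only as an illustration of the idea, not as the speaker's specific algorithms; the matrix sizes and ranks are invented.

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, rng=None):
    """Rank-k approximation via a randomized range finder:
    sketch A with a Gaussian test matrix, orthonormalize, project."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                      # small (k + oversample) x n matrix
    return Q, B                      # A is approximated by Q @ B

# A matrix with exact rank 15: the approximation is accurate to round-off.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 15)) @ rng.standard_normal((15, 200))
Q, B = randomized_low_rank(A, k=15, rng=1)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

The cost is dominated by two thin matrix products instead of a full decomposition, which is the point of structure-exploiting fast algorithms.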
Jan Wehr : Noise-induced drift---theory and experiment
- Probability ( 120 Views )Recent experiments show that an overdamped Brownian particle in a diffusion gradient experiences an additional drift. Equivalently, the Langevin equation describing the particle's motion should be interpreted according to the "anti-Ito" definition of stochastic integrals. I will explain this effect mathematically by studying the zero-mass limit of the stochastic Newton's equation modeling the particle's motion and, using a multiscale expansion, extend the analysis to a wide class of equations, including systems with colored noise and delay terms, interpreting recent electrical circuit experiments. The results were obtained in a collaboration with experimental physicists in Stuttgart: Giovanni Volpe, Clemens Bechinger, Laurent Helden and Thomas Brettschneider, as well as with the mathematics graduate students at the University of Arizona: Scott Hottovy and Austin McDaniel.
Greg Baker : Accelerating Liquid Layers
- Applied Math and Analysis ( 174 Views )A pressure difference across a liquid layer will accelerate it. For incompressible and inviscid motion, it is possible to describe the motion of the surfaces through boundary integral techniques. In particular, dipole distributions can be used together with an external flow that specifies the acceleration. The classical Rayleigh-Taylor instability and the creation of bubbles at an orifice are two important applications. A new method for the numerical approximation of the boundary integrals removes the difficulties associate with surfaces in close proximity.
Joshua Vogelstein : Consistent Graph Classification applied to Human Brain Connectome Data
- Mathematical Biology ( 171 Views )Graphs are becoming a favorite mathematical object for the representation of data. Yet statistical pattern recognition has focused almost entirely on vector-valued data in Euclidean space. Graphs, however, live in graph space, which is non-Euclidean, so most inference techniques are not even defined for graph-valued data. Previous work on the classification of graph-valued data typically follows one of two recipes. (1) Vectorize the adjacency matrices of the graphs and apply standard machine learning techniques. (2) Compute some number of graph invariants (e.g., clustering coefficient or degree distribution) for each graph, and then apply standard machine learning techniques. We follow a different recipe based in the probabilistic theory of pattern recognition. First, we define a joint graph-class model. Given this model, we derive classifiers which we prove are consistent; that is, they converge to the Bayes optimal classifier. Specifically, we build two consistent classifiers for graph-valued data, a parametric and a non-parametric version. In a sense, these classifiers span the spectrum of complexity: the former is consistent for graphs sampled from relatively simple random graph distributions, while the latter is consistent for graphs sampled from (nearly) any random graph distribution. Although both classifiers assume that all our graphs have labeled vertices, we generalize these results to also incorporate unlabeled graphs, as well as weighted graphs and multigraphs. We apply these graph classifiers to human brain data. Specifically, using diffusion MRI, we can obtain large brain-graphs (10,000 vertices) for each subject, where vertices correspond to voxels. We then coarsen the graphs spatially to obtain smaller (70-vertex) graphs per subject. Using fewer than 50 subjects, we are able to achieve nearly 85% classification accuracy, with results interpretable to neurobiologists with regard to the brain regions of interest.
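Recipe (1) from the abstract can be sketched in a few lines. Everything below is hypothetical: two classes of Erdos-Renyi graphs with different edge densities stand in for the two brain-graph classes, and a nearest-centroid rule stands in for "standard machine learning techniques."

```python
import numpy as np

def sample_er(n, p, rng):
    """Adjacency matrix of an Erdos-Renyi graph G(n, p)."""
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    return A + A.T

def vectorize(A):
    """Recipe (1): flatten the upper triangle of the adjacency matrix."""
    return A[np.triu_indices(A.shape[0], 1)]

rng = np.random.default_rng(0)
n, per_class = 30, 40
X0 = np.array([vectorize(sample_er(n, 0.2, rng)) for _ in range(per_class)])
X1 = np.array([vectorize(sample_er(n, 0.5, rng)) for _ in range(per_class)])

# Nearest-centroid classifier trained on half of each class.
c0, c1 = X0[:20].mean(0), X1[:20].mean(0)
test_set = np.vstack([X0[20:], X1[20:]])
labels = np.array([0] * 20 + [1] * 20)
pred = (np.linalg.norm(test_set - c1, axis=1)
        < np.linalg.norm(test_set - c0, axis=1)).astype(int)
accuracy = (pred == labels).mean()
```

This naive approach works here only because the two random-graph distributions are easy to separate; the consistency theory in the talk addresses what can be guaranteed in general.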
Sanjeevi Krishnan : Directed Poincare Duality
- Geometry and Topology ( 124 Views )The max-flow min-cut theorem, traditionally applied to problems of maximizing the flow of commodities along a network (e.g. oil in pipelines) and minimizing the costs of disrupting networks (e.g. dam construction), has found recent applications in information processing. In this talk, I will recast and generalize max-flow min-cut as a form of twisted Poincare duality for spacetimes and more singular "directed spaces." Flows correspond to the top-dimensional homology, with local coefficients valued in sheaves of semigroups, of directed spaces. Cuts correspond to certain distinguished sections of a dualizing sheaf. Thus max-flow min-cut dualities extend to higher-dimensional analogues of flows, higher-dimensional analogues of directed graphs (e.g. dynamical systems), and constraints more complicated than upper bounds. I will describe the formal result, including a construction of directed sheaf homology, and some real-world applications.
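The classical finite statement being generalized can be made concrete with a standard Edmonds-Karp implementation on a small invented network (this is textbook material, not anything specific to the talk). In the example, the cut around the source has capacity 3 + 2 = 5, and the computed maximum flow attains it.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.
    cap: dict of dicts of edge capacities."""
    # Residual capacities, including reverse edges initialized to 0.
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow          # no augmenting path: flow equals min cut
        # Find the bottleneck and push flow along the path.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Toy network: the cut {s} has capacity 3 + 2 = 5.
cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
```

The sheaf-theoretic duality in the talk recovers this equality as the dimension-one case.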
Frederic Lechenault : Experimental investigation of equilibration properties in model granular subsystems
- Nonlinear and Complex Systems ( 168 Views )We experimentally investigate the statistical features of the stationary states reached by two idealized granular liquids able to exchange volume. The system consists of two binary mixtures of the same number of soft disks, hence covering the same area, but with different surface properties. The disks sit on a horizontal air table, which provides ultra-low friction at the cell bottom, and are separated by a mobile wall. Energy is injected into the system by means of an array of randomly activated coil bumpers arranged along the edges of the cell. Due to the energy injection, the system acts like a slow liquid and eventually jams at higher packing fractions. We characterize the macroscopic states by studying the motion of the piston. We find that its average position is different from one half, and a nonmonotonic function of the overall packing fraction, which reveals the crucial role played by the surface properties in the corresponding density of states. We then study the bulk statistics of the packing fraction and the dynamics in each subsystem. We find that the measured quantities do not equilibrate, and become dramatically different as the overall packing fraction is increased beyond the onset of supercooling. However, the local fluctuations of the packing fraction are uniquely determined by its average, and hence independent of the interaction between disks. We then focus on the mixing properties of such an assembly. We characterize mixing by computing the topological entropy of the braids formed by the stationary trajectories of the grains at each pressure. This quantity is shown to be well defined, very sensitive to the onset of supercooling (reflecting the dynamical arrest of the assembly), and to equilibrate in the two subsystems. Joint work with Karen Daniels.
Jesse Kass : What is the limit of a line bundle on a nonnormal variety?
- Algebraic Geometry ( 150 Views )On a nonnormal variety, the limit of a family of line bundles is not always a line bundle. What is the limit? I will present an answer to this question and give some applications. If time permits, I will discuss connections with Néron models, autoduality, and recent work of R. Hartshorne and C. Polini.
Xiaoqian Xu : Suppression of chemotactic explosion by mixing
- Applied Math and Analysis ( 165 Views )Chemotaxis plays a crucial role in a variety of processes in biology and ecology. One of the most studied PDE models of chemotaxis is the Keller-Segel equation, which describes a population density of bacteria or mold that are chemically attracted to a substance they secrete. However, solutions of the Keller-Segel equation can exhibit dramatic collapsing behavior; in other words, there exist initial data leading to finite-time blow-up. In this talk, we will discuss the possible effects resulting from the interaction of chemotactic and fluid transport processes; namely, we will consider the Keller-Segel equation with an additional advection term modeling ambient fluid flow. We will prove that the presence of the fluid can prevent singularity formation. We will discuss two classes of flows that have this explosion-arresting property. Both classes are known to be very efficient mixers.
Gerandy Brito : Alon's conjecture in random bipartite biregular graphs with applications
- Probability ( 163 Views )This talk concerns the spectral gap of random regular graphs. We prove that almost all bipartite biregular graphs are almost Ramanujan by providing a tight upper bound for the second eigenvalue of their adjacency operators. The proof relies on a technique introduced recently by Massoulié, which we developed for random regular graphs. The same analysis allows us to recover hidden communities in random networks via spectral algorithms.
Matthew Hirn : Diffusion maps for changing data
- Applied Math and Analysis ( 111 Views )Recently there has been a large body of research that utilizes nonlinear mappings into low-dimensional spaces in order to organize potentially high-dimensional data. Examples include, but are not limited to, locally linear embedding (LLE), ISOMAP, Hessian LLE, Laplacian eigenmaps, and diffusion maps. In this talk we will focus on the latter, and in particular consider how to generalize diffusion maps to the setting in which we are given a data set that evolves over time or changes depending on some set of parameters. Along with describing the current theory, various synthetic and real-world examples will be presented to illustrate these ideas in practice.
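The static construction that the talk generalizes can be sketched in a few lines of linear algebra: build a Gaussian kernel, normalize it into a diffusion operator, and embed the data with its leading nontrivial eigenvectors. This minimal version (density normalization omitted, two invented Gaussian clusters as data) is only the baseline, not the time-varying extension discussed in the talk.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Minimal diffusion map: Gaussian kernel, eigenvectors of the
    (symmetrically normalized) random-walk operator, eigenvalue-weighted."""
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-D2 / eps)
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))         # D^{-1/2} K D^{-1/2}
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(d)[:, None]        # right eigenvectors of D^{-1} K
    # Skip the trivial constant coordinate; weight by eigenvalues.
    return psi[:, 1:n_coords + 1] * vals[1:n_coords + 1]

rng = np.random.default_rng(0)
cluster1 = rng.normal(0.0, 0.4, size=(40, 2))
cluster2 = rng.normal(4.0, 0.4, size=(40, 2))
emb = diffusion_map(np.vstack([cluster1, cluster2]), eps=2.0)
```

For two well-separated clusters the first diffusion coordinate is nearly piecewise constant, with opposite signs on the two clusters.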
Paul Tupper : The Relation Between Shadowing and Approximation in Distribution
- Applied Math and Analysis ( 152 Views )In computational physics, molecular dynamics refers to the computer simulation of a material at the atomic level. I will consider classical deterministic molecular dynamics in which large Hamiltonian systems of ordinary differential equations are used, though many of the same issues arise with other models. Given its scientific importance, there is very little rigorous justification of molecular dynamics. From the viewpoint of numerical analysis, it is surprising that it works at all. The problem is that individual trajectories computed by molecular dynamics are accurate for only small time intervals, whereas researchers trust the results over very long time intervals. It has been conjectured that molecular dynamics trajectories are accurate over long time intervals in some weak statistical sense. Another conjecture is that numerical trajectories satisfy the shadowing property: that they are close over long time intervals to exact trajectories with different initial conditions. I will explain how these two views are actually equivalent to each other, after we suitably modify the concept of shadowing.
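The gap between trajectory accuracy and statistical accuracy shows up already in the simplest possible setting: a harmonic oscillator integrated with velocity Verlet, the standard molecular dynamics time stepper. In this toy run (all parameters invented for illustration), the pointwise trajectory error grows to order one while the energy, a statistical observable, stays accurate throughout.

```python
import numpy as np

def velocity_verlet(x0, v0, dt, steps, accel):
    """Velocity Verlet, the standard time stepper in molecular dynamics."""
    xs = np.empty(steps + 1)
    vs = np.empty(steps + 1)
    xs[0], vs[0] = x0, v0
    a = accel(x0)
    for i in range(steps):
        xs[i + 1] = xs[i] + vs[i] * dt + 0.5 * a * dt**2
        a_new = accel(xs[i + 1])
        vs[i + 1] = vs[i] + 0.5 * (a + a_new) * dt
        a = a_new
    return xs, vs

# Harmonic oscillator x'' = -x with exact solution cos(t), run to T = 1000.
dt, steps = 0.1, 10_000
xs, vs = velocity_verlet(1.0, 0.0, dt, steps, lambda x: -x)
t = dt * np.arange(steps + 1)
traj_err = np.max(np.abs(xs - np.cos(t)))     # phase drift: grows to O(1)
energy = 0.5 * vs**2 + 0.5 * xs**2
energy_err = np.max(np.abs(energy - 0.5))     # stays O(dt^2) for all time
```

The numerical trajectory is individually "wrong" at long times yet shadows the dynamics statistically, which is exactly the tension the talk resolves.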
Giang Tran : Sparsity-Inducing Methods for Nonlinear Differential Equations
- Applied Math and Analysis ( 135 Views )Sparsity plays a central role in recent developments of many fields such as signal and image processing, compressed sensing, statistics, and optimization. In practice, sparsity is promoted through the addition of an L1 norm (or related quantity) as a constraint or penalty in a variational model. Motivated by the success of sparsity-inducing methods in imaging and the information sciences, there is a growing interest in exploiting sparsity in dynamical systems and partial differential equations. In this talk, we will investigate the connections between compressed sensing, sparse optimization, and numerical methods for nonlinear differential equations. In particular, we will discuss sparse modeling as well as the advantages of sparse optimization in solving various differential equations arising from the physical and data sciences.
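A small instance of sparsity-promoting model discovery for differential equations: given trajectory data and a library of candidate terms, a sequential thresholded least-squares loop (a simple stand-in for an L1-penalized fit) recovers the few active terms. The logistic equation, the library, and the use of analytic derivatives are all illustration choices, not the talk's specific methods.

```python
import numpy as np

def sparse_regression(Theta, dxdt, threshold=0.1, iters=10):
    """Sequential thresholded least squares: fit, zero out small
    coefficients, refit on the surviving columns."""
    xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]
    return xi

# Data from the logistic equation x' = x - x^2 (derivatives taken
# analytically here to keep the sketch free of differentiation noise).
x = np.linspace(0.1, 0.9, 50)
dxdt = x - x**2
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate library
xi = sparse_regression(Theta, dxdt)
```

Only the x and x² columns survive the thresholding, so the sparse fit recovers the governing equation exactly.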
Jayce Getz : An approach to nonsolvable base change
- Presentations ( 198 Views )In the 1970s, inspired by the work of Saito and Shintani, Langlands gave a definitive treatment of base change for automorphic representations of the general linear group in two variables along prime degree extensions of number fields. To give some idea of the depth and utility of his work, one need only remark that some consequences of it were crucial in Wiles' proof of Fermat's last theorem. In this talk we will report on work in progress on base change for automorphic representations of GL(2) along nonsolvable Galois extensions of number fields. We will attempt to explain this assuming only a little algebraic number theory.
Marija Vucelja : A glass transition in population genetics: Emergence of clones in populations
- Nonlinear and Complex Systems ( 208 Views )The fields of evolution and population genetics are undergoing a renaissance, due to the abundance of sequencing data. On the other hand, the existing theories are often unable to explain the experimental findings. It is not clear what sets the time scales of evolution, whether for antibiotic resistance, the emergence of new animal species, or the diversification of life. The emerging picture of genetic evolution is that of a strongly interacting stochastic system with large numbers of components far from equilibrium. In this talk, I plan to focus on clone competition and discuss the diversity of a random population that undergoes selection and recombination (sexual reproduction). Recombination reshuffles genetic material while selection amplifies the fittest genotypes. If recombination is more rapid than selection, a population consists of a diverse mixture of many genotypes, as is observed in many populations. In the opposite regime, selection can amplify individual genotypes into large clones, and the population reaches the so-called "clonal condensation". I hope to convince you that our work provides a qualitative explanation of clonal condensation. I will point out the similarity between clonal condensation and the freezing transition in the Random Energy Model of spin glasses. I will conclude with a summary of our present understanding of the clonal condensation phenomena and describe future directions and connections to statistical physics.
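The amplification of clones by selection can be caricatured with a Wright-Fisher-type resampling simulation without recombination (all numbers invented; this is a toy, not the model analyzed in the talk). With selection switched on over a random fitness landscape, a single genotype condenses into a dominant clone; the neutral run stays a diverse mixture.

```python
import numpy as np

def evolve(freqs, fitness, s, generations, N, rng):
    """Wright-Fisher resampling with selection strength s and no
    recombination: each generation, N offspring are drawn with
    probability proportional to frequency * exp(s * fitness)."""
    for _ in range(generations):
        w = freqs * np.exp(s * fitness)
        counts = rng.multinomial(N, w / w.sum())
        freqs = counts / N
    return freqs

rng = np.random.default_rng(0)
G, N = 50, 500                       # number of genotypes, population size
fitness = rng.normal(size=G)         # random fitnesses (a REM-like landscape)
init = np.full(G, 1.0 / G)

neutral = evolve(init, fitness, s=0.0, generations=30, N=N, rng=rng)
selected = evolve(init, fitness, s=1.0, generations=30, N=N, rng=rng)
```

The largest clone frequency plays the role of the condensate fraction: small under neutral drift, dominant under strong selection, echoing the freezing transition analogy.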