I will discuss variational problems arising in machine learning and their limits as the number of data points goes to infinity. Consider point clouds obtained as random samples of an underlying "ground-truth" measure. A graph representing the point cloud is obtained by assigning weights to edges based on the distances between points. Many machine learning tasks, such as clustering and classification, can be posed as minimizing functionals on such graphs. We consider functionals involving graph cuts and graph Laplacians and their limits as the number of data points goes to infinity. In particular, we establish for which graph constructions the minimizers of the discrete problems converge to a minimizer of a functional defined in the continuum setting. The talk is primarily based on joint work with Nicolas Garcia Trillos, as well as on works with Xavier Bresson, Moritz Gerlach, Matthias Hein, Thomas Laurent, James von Brecht and Matt Thorpe.
Sparsity plays a central role in recent developments of many fields such as signal and image processing, compressed sensing, statistics, and optimization. In practice, sparsity is promoted through the addition of an L1 norm (or related quantity) as a constraint or penalty in a variational model. Motivated by the success of sparsity-inducing methods in imaging and information sciences, there is a growing interest in exploiting sparsity in dynamical systems and partial differential equations. In this talk, we will investigate the connections between compressed sensing, sparse optimization, and numerical methods for nonlinear differential equations. In particular, we will discuss sparse modeling as well as the advantages of sparse optimization in solving various differential equations arising from the physical and data sciences.
Many problems arising from scientific and engineering applications can be naturally formulated as optimization problems, most of which are nonconvex. For nonconvex problems, obtaining even a local minimizer is computationally hard in theory, let alone the global minimizer. In practice, however, simple numerical methods often work surprisingly well in finding high-quality solutions for specific problems at hand.
In this talk, I will describe our recent effort in bridging the mysterious theory-practice gap for nonconvex optimization. I will highlight a family of nonconvex problems that can be solved to global optimality using simple numerical methods, independent of initialization. This family has the characteristic global structure that (1) all local minimizers are global, and (2) all saddle points have directions of negative curvature. Problems lying in this family cover various applications across machine learning, signal processing, scientific imaging, and more. I will focus on two examples we worked out: learning sparsifying bases for massive data and recovery of complex signals from phaseless measurements. In both examples, the benign global structure allows us to derive geometric insights and computational results that are inaccessible from previous methods. In contrast, alternative approaches to solving nonconvex problems often entail either expensive convex relaxation (e.g., solving large-scale semidefinite programs) or delicate problem-specific initializations.
Completing and enriching this framework is an active research endeavor that is being undertaken by several research communities. At the end of the talk, I will discuss open problems to be tackled to move forward.
An effort at UNC seeks to understand key mechanisms in the lung related to defense against pathogens. In diseases ranging from cystic fibrosis to asthma, these mechanisms are highly compromised, requiring therapeutic strategies that one would like to be able to quantify or even predict in some way. The Virtual Lung Project has focused on one principal component of lung defense: "the mucus escalator," as it is called in physiology texts. My goal in this lecture, with apologies to Tina Turner, is to give a long-winded answer to "what's math got to do with it?", and at the same time to convey how this collaboration is influencing the applied mathematics experience at UNC.
In many applications, ranging from computer animation to biology, one wants to quantify how similar two surfaces are to each other. In the last few years, the Gromov-Hausdorff distance has been applied to this problem; this gives good results, but turns out to be very heavy computationally. This talk proposes a different approach, in which (disk-like) 2-dimensional surfaces (typically embedded in 3-dimensional Euclidean space) are first mapped conformally to the unit disk, and the corresponding conformal densities are then compared via optimal mass transportation. This mass transportation problem differs from the standard case in that we require the solution to be invariant under global Moebius transformations. The metric we construct also defines meaningful intrinsic distances between pairs of "patches" in the two surfaces, which allows automatic alignment of the surfaces. Numerical experiments on "real-life" surfaces, demonstrating possible applications in the natural sciences, will be shown as well. This is joint work with Yaron Lipman.
It is well-known that the asymptotic complexity of matrix-matrix product and matrix inversion is given by the rank of a 3-tensor, recently shown to be at most O(n^2.3728639) by Le Gall. This approach is attractive because a rank decomposition of that 3-tensor gives an explicit algorithm that is guaranteed to be the fastest possible, and its tensor nuclear norm quantifies the optimal numerical stability. There is also an alternative approach due to Cohn and Umans that relies on embedding matrices into group algebras. We will see that the tensor decomposition and group algebra approaches, when combined, allow one to systematically discover fast(est) algorithms. We will determine the exact (as opposed to asymptotic) tensor ranks, and correspondingly the fastest algorithms, for products of circulant, Toeplitz, Hankel, and other structured matrices. This is joint work with Ke Ye (Chicago).
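As a standard illustration (not the structured-matrix results of the talk) of how a rank decomposition of the matrix-multiplication tensor yields an explicit algorithm, consider Strassen's classical rank-7 decomposition of the 2x2 product tensor, which multiplies two 2x2 matrices with 7 scalar multiplications instead of 8:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications, following Strassen's
    rank-7 decomposition of the 2x2 matrix-multiplication tensor.
    A and B are nested tuples ((a11, a12), (a21, a22))."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The 7 rank-one "products" of the decomposition:
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine into the entries of C = A @ B:
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))
```

Applied recursively to block matrices, this decomposition is what gives the O(n^log2 7) asymptotic complexity; an exact tensor rank for a structured class plays the same role for that class.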
Mathematical models incorporating random forcing, and the resulting stochastic differential equations (SDEs), are becoming increasingly important. However, general principles and techniques for their robust and efficient numerical approximation lag far behind the corresponding ODE theory. In both cases the idea of adaptivity, that is, using varying timesteps to improve convergence, is a key element. In this talk I will describe an approach based upon (low-order) Milstein-type methods using multiple error controls. The idea is to monitor various terms in the truncation error, both deterministic and stochastic, and then to construct an algorithm that is robust enough to work efficiently in the presence of deterministic/diffusion-dominated regimes and differing accuracy requirements. Such an approach also has other benefits, such as improved numerical stability properties. No knowledge of stochastic calculus will be assumed.
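As background, a minimal sketch of the fixed-step Milstein scheme that such adaptive methods build upon (the function and parameter names here are mine, and the adaptive error-control logic of the talk is not shown). For geometric Brownian motion the exact pathwise solution is known, so the scheme can be checked directly:

```python
import math
import random

def milstein_gbm(x0, a, b, T, n, rng):
    """Fixed-step Milstein scheme for the scalar SDE dX = a*X dt + b*X dW.
    Here the diffusion is b(x) = b*x, so b(x)*b'(x) = b**2 * x, which gives
    the Milstein correction term 0.5*b**2*x*(dW**2 - h).
    Returns (X_T, W_T) so the path can be compared with the exact solution
    driven by the same Brownian increments."""
    h = T / n
    x, w = x0, 0.0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))  # Brownian increment ~ N(0, h)
        x += a * x * h + b * x * dw + 0.5 * b * b * x * (dw * dw - h)
        w += dw
    return x, w

rng = random.Random(0)
xT, wT = milstein_gbm(1.0, 0.05, 0.2, 1.0, 4000, rng)
# Exact GBM solution along the same Brownian path:
exact = math.exp((0.05 - 0.5 * 0.2 ** 2) * 1.0 + 0.2 * wT)
```

An adaptive variant in the spirit of the talk would estimate the deterministic and stochastic truncation-error terms at each step and shrink or grow h accordingly, rather than fixing n in advance.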
Perspectives from computational algebra and numerical optimization are brought to bear on a scientific application and a data science application. In the first part of the talk, I will discuss cryo-electron microscopy (cryo-EM), an imaging technique to determine the 3-D shape of macromolecules from many noisy 2-D projections, recognized by the 2017 Chemistry Nobel Prize. Mathematically, cryo-EM presents a particularly rich inverse problem, with unknown orientations, extreme noise, big data and conformational heterogeneity. In particular, this motivates a general framework for statistical estimation under compact group actions, connecting information theory and group invariant theory. In the second part of the talk, I will discuss tensor rank decomposition, a higher-order variant of PCA broadly applicable in data science. A fast algorithm is introduced and analyzed, combining ideas of Sylvester and the power method.
In this talk we discuss an algorithm to overcome the curse of dimensionality in possibly nonconvex, time- and state-dependent Hamilton-Jacobi (HJ) partial differential equations. These may arise from optimal control and differential game problems, and are generally difficult to solve numerically in high dimensions.
A major contribution of our work is to instead consider an optimization problem over a single vector of the same dimension as the state space of the HJ PDE. To do so, we adopt a new approach based on Hopf-type formulas. The sub-problems are then independent and can be implemented in an embarrassingly parallel fashion, which is ideal for perfect scaling in parallel computing.
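To make the idea concrete, here is a toy sketch (my own simplification, not the algorithm of the talk): for u_t + H(grad u) = 0 with convex initial data g, the Hopf formula u(x,t) = sup_p [ <x,p> - g*(p) - t H(p) ] turns each evaluation of u at a point (x,t) into an independent optimization over a single vector p of the same dimension as x. The model case below uses H(p) = |p|^2/2 and g(x) = |x|^2/2 (so g*(p) = |p|^2/2), for which u(x,t) = |x|^2 / (2(1+t)) in closed form:

```python
def hopf_value(x, t, steps=200, lr=0.1):
    """Evaluate u(x,t) = sup_p [ <x,p> - g*(p) - t*H(p) ] by gradient ascent,
    for the model case H(p) = |p|^2/2, g*(p) = |p|^2/2.
    The objective x.p - (1+t)*|p|^2/2 is concave, so ascent converges."""
    p = [0.0] * len(x)
    for _ in range(steps):
        # gradient of the objective in p is x - (1+t)*p
        p = [pi + lr * (xi - (1.0 + t) * pi) for pi, xi in zip(p, x)]
    return (sum(xi * pi for xi, pi in zip(x, p))
            - 0.5 * (1.0 + t) * sum(pi * pi for pi in p))

# Each grid point is an independent sub-problem, so this loop could be
# distributed across processes with no communication (embarrassingly parallel).
grid = [((float(i), float(j)), 0.5) for i in range(3) for j in range(3)]
values = [hopf_value(x, t) for x, t in grid]
```

The key point is that nothing here scales with a spatial mesh in high dimensions: the cost per evaluation depends only on the dimension of p and the optimizer's iteration count.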
The algorithm is proposed to overcome the curse of dimensionality when solving high-dimensional HJ PDEs. Our method is expected to have applications in control theory, differential game problems, and elsewhere. This approach can be extended to the computation of a Hamilton-Jacobi equation in the Wasserstein space, and is expected to have applications in mean field control problems, optimal transport and mean field games.
Irene Gamba: Approximations to boundary value problems for nonlinear collisional kinetic plasma models
We will discuss recent approximations to boundary value problems for nonlinear systems of Boltzmann or Landau (for Coulombic interactions) equations coupled to the Poisson equation. The proposed approximation methods involve hybrid schemes of spectral and Galerkin type, where conservation of flow invariants is achieved through a constrained minimization problem. We will discuss some analytical and computational issues related to these approximations.
The first reaction-diffusion equation developed and studied is the Fisher-KPP equation. Introduced in 1937, this population-dynamics model accounts for the spatial spreading and growth of a species. Various generalizations of this model have been studied in the eighty years since its introduction, including a model with non-local reaction for the cane toads of Australia introduced by Benichou et al. I will begin the talk by giving an extended introduction to the Fisher-KPP equation and the typical behavior of its solutions. Afterwards, I will describe the new model for the cane toads equations and give new results regarding this model. In particular, I will show how the model may be viewed as a perturbation of a local equation using a new Harnack-type inequality, and I will discuss the super-linear in time propagation of the toads. The talk is based on joint work with Bouin and Ryzhik.
The emergence of massive data sets, over the past twenty or so years, has led to the development of Randomized Numerical Linear Algebra. Fast and accurate randomized matrix algorithms are being designed for applications like machine learning, population genomics, astronomy, nuclear engineering, and optimal experimental design. We give a flavour of randomized algorithms for the solution of least squares/regression problems. Along the way we illustrate important concepts from numerical analysis (conditioning and pre-conditioning), probability (concentration inequalities), and statistics (sampling and leverage scores).
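A minimal sketch of one such randomized idea, "sketch-and-solve" (all names here are mine, and a random-sign sketch stands in for the leverage-score sampling mentioned above): replace the tall problem min ||Ax - b|| by the much smaller min ||S(Ax - b)|| for a random sketching matrix S, and solve that instead.

```python
import random

def solve_2x2(G, c):
    """Solve the 2x2 linear system G x = c by Cramer's rule."""
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    return [(c[0] * G[1][1] - c[1] * G[0][1]) / det,
            (G[0][0] * c[1] - G[1][0] * c[0]) / det]

def lstsq_2col(A, b):
    """Least squares for an n x 2 matrix A via the normal equations."""
    G = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
    c = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(2)]
    return solve_2x2(G, c)

def sketched_lstsq(A, b, m, rng):
    """Sketch-and-solve: apply a random +/-1 sketch S with m << n rows,
    then solve the small problem min ||S A x - S b||."""
    SA, Sb = [], []
    for _ in range(m):
        signs = [rng.choice((-1.0, 1.0)) for _ in range(len(A))]
        SA.append([sum(s * r[j] for s, r in zip(signs, A)) for j in range(2)])
        Sb.append(sum(s * bi for s, bi in zip(signs, b)))
    return lstsq_2col(SA, Sb)

rng = random.Random(1)
# Tall synthetic problem: fit y ~ x0 + 2*x1 plus small noise.
A = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(2000)]
b = [r[0] + 2.0 * r[1] + 0.01 * rng.gauss(0, 1) for r in A]
x_full = lstsq_2col(A, b)                   # exact least-squares solution
x_sketch = sketched_lstsq(A, b, 200, rng)   # approximate, from 200 sketched rows
```

The concentration inequalities mentioned in the abstract are what guarantee that, with high probability, the sketched solution is nearly as good as the full one; conditioning of A governs how the guarantee degrades.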
We report on an experimental and theoretical study of molecularly thin polymer Langmuir layers on the surface of a Stokesian subfluid. Langmuir layers can have multiple phases (fluid, gas, liquid crystal, isotropic or anisotropic solid); at phase boundaries a line tension force is observed. By comparing theory and experiment we can estimate this line tension. We first consider two co-existing fluid phases; specifically, a localized phase embedded in an infinite secondary phase. When the localized phase is stretched (by a transient stagnation flow), it takes the form of a bola consisting of two roughly circular reservoirs connected by a thin tether. This shape relaxes to the minimum energy configuration of a circular domain. The tether is never observed to rupture, even when it is more than a hundred times as long as it is wide. We model these experiments by taking previous descriptions of the full hydrodynamics (primarily those of Stone & McConnell and Lubensky & Goldstein), identifying the dominant effects via dimensional analysis, and reducing the system to a more tractable form. The result is a free boundary problem where motion is driven by the line tension of the domain and damped by the viscosity of the subfluid. The problem has a boundary integral formulation which allows us to numerically simulate the tether relaxation; comparison with the experiments allows us to estimate the line tension in the system. We also report on incorporating dipolar repulsion into the force balance and simulating the formation of "labyrinth" patterns.
Sleep-wake cycling is an example of switching between discrete states in the mammalian brain. Based on experimental data on the activity of populations of neurons, we develop a mathematical model. The model incorporates several different time scales: firing of action potentials (milliseconds), sleep and wake bout times (seconds), and developmental time (days). Bifurcation diagrams of the deterministic dynamical system give the occupancy-time distributions in the corresponding stochastic system. The model correctly predicts that forebrain regions help to stabilize the wake state and thus modify the wake bout distribution.
The nascent technique of 4D printing has the potential to revolutionize manufacturing in fields ranging from organs-on-a-chip to architecture to soft robotics. By expanding the palette of 3D printable materials to include stimuli-responsive inks, 4D printing promises precise control over patterned shape transformations. With the goal of creating a new manufacturing technique, we have recently introduced a biomimetic printing platform that enables direct control of local anisotropy in both the elastic moduli and the swelling response of the ink.
We have drawn inspiration from nastic plant movements to design a phytomimetic ink and printing process that enables patterned dynamic shape change upon exposure to water, and possibly other external stimuli. Our novel fiber-reinforced hydrogel ink enables local control over anisotropies not only in the elastic moduli, but more importantly in the swelling. Upon hydration, the hydrogel changes shape according to the arbitrarily complex microstructure imparted during the printing process.
To use this process as a design tool, we must solve the inverse problem of prescribing the pattern of anisotropies required to generate a given curved target structure. We show how to do this by constructing a theory of anisotropic plates and shells that can respond to local metric changes induced by anisotropic swelling. A series of experiments corroborate our model by producing a range of target shapes inspired by the morphological diversity of flower petals.
The need to deduce reduced computational models from discrete observations of complex systems arises in many climate and engineering applications. The challenges come mainly from memory effects due to the unresolved scales and nonlinear interactions between resolved and unresolved scales, and from the difficulty in inference from discrete data.
We address these challenges by introducing a discrete-time stochastic parametrization framework, through which we construct discrete-time stochastic models that can take memory into account. We show by examples that the resulting stochastic reduced models can capture the long-time statistics and make accurate short-term predictions. The examples include the Lorenz 96 system (a simplified model of the atmosphere) and the Kuramoto-Sivashinsky equation of spatiotemporally chaotic dynamics.
The Patlak-Keller-Segel equations (PKS) are widely applied to model the chemotaxis phenomena in biology. It is well-known that if the total mass of the initial cell density is large enough, the PKS equations exhibit finite time blow-up. In this talk, I present some recent results on applying additional fluid flows to suppress chemotactic blow-up in the PKS equations. These are joint works with Jacob Bedrossian and Eitan Tadmor.
The heart is a coupled electro-fluid-mechanical system. The contractions of the cardiac muscle are stimulated and coordinated by the electrophysiology of the heart; these contractions in turn affect the electrical function of the heart by altering the macroscopic conductivity of the tissue and by influencing stretch-activated transmembrane ion channels. In this talk, I will present mathematical models and adaptive numerical methods for describing cardiac mechanics, fluid dynamics, and electrophysiology, as well as applications of these models and methods to cardiac fluid-structure and electro-mechanical interaction. I will also describe novel models of cardiac electrophysiology that go beyond traditional macroscopic (tissue-scale) descriptions of cardiac electrical impulse propagation by explicitly incorporating details of the cellular microstructure into the model equations. Standard models of cardiac electrophysiology, such as the monodomain or bidomain equations, account for this cellular microstructure in only a homogenized or averaged sense, and we have demonstrated that such homogenized models yield incorrect results in certain pathophysiological parameter regimes. To obtain accurate model predictions in these parameter regimes without resorting to a fully cellular model, we have developed an adaptive multiscale model of cardiac conduction that uses detailed cellular models only where needed, while resorting to the more efficient macroscale equations where those equations suffice. Applications of these methods will be presented to simulating cardiac and cardiovascular dynamics in whole heart models, as well as in detailed models of cardiac valves and novel models of aortic dissection. Necessary physiological details will be introduced as needed.
The detailed behavior of solutions to the biharmonic equation on regions with corners has been historically difficult to characterize. It is conjectured by Osher (and proven in certain special cases) that the Green's function for the biharmonic equation on regions with corners has infinitely many oscillations in the vicinity of each corner. In this talk, we show that, when the biharmonic equation is formulated as a boundary integral equation, the solutions are representable by rapidly convergent series of elementary functions which oscillate with a frequency proportional to the logarithm of the distance from the corner. These representations are used to construct highly accurate and efficient Nyström discretizations, significantly reducing the number of degrees of freedom required for solving the corresponding integral equations. We illustrate the performance of our method with several numerical examples.
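For orientation, oscillation at a frequency proportional to the logarithm of the distance is the kind of behavior produced by complex corner exponents: the real and imaginary parts of $r^{\alpha + i\beta}$ are $r^{\alpha}\cos(\beta \log r)$ and $r^{\alpha}\sin(\beta \log r)$. Schematically (with hypothetical coefficients, not the precise expansion of the talk), the solution near a corner at distance $r$ behaves like
$$
u(r,\theta) \sim \sum_j r^{\alpha_j}\bigl[A_j(\theta)\cos(\beta_j \log r) + B_j(\theta)\sin(\beta_j \log r)\bigr],
$$
which oscillates ever faster as $r \to 0$ while each term is still an elementary function, consistent with the rapidly convergent series described above.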
Functionalized polymer membranes have a strong affinity for solvent, imbibing it to form charge-lined networks which serve as charge-selective ion conductors in a host of energy conversion applications. We present a continuum model, based upon a reformulation of the Cahn-Hilliard free energy, which incorporates solvation energy and counter-ion entropy to stabilize a host of network morphologies. We derive geometric evolution for codimension-one bilayers and codimension-two pore morphologies and show that the system possesses a simple mechanism for competitive evolution of co-existing networks through the common far-field chemical potential.
A common question arising in the study of graphs is how to partition nodes into well-connected clusters. One standard methodology is known as spectral clustering and utilizes an eigenvector embedding as a starting point for clustering the nodes. Given that embedding, we present a new algorithm for spectral clustering based on a column-pivoted QR factorization. Our method is simple to implement, direct, scalable, and requires no initial guess. We also provide theoretical justification for our algorithm and experimentally demonstrate that its performance tracks recent information theoretic bounds for exact recovery in the stochastic block model. Algorithmically, the origins of this work are in methods for building a localized basis for Kohn-Sham orbitals, and we briefly discuss those connections.
In order to understand the nonlinear stability of many types of time-periodic traveling waves on unbounded domains, one must overcome two main difficulties: the presence of zero eigenvalues that are embedded in the continuous spectrum and the time-periodicity of the associated linear operator. I will outline these issues and show how they can be overcome in the context of time-periodic Lax shocks in systems of viscous conservation laws. The method involves the development of a contour integral representation of the linear evolution, similar to that of a strongly continuous semigroup, and detailed pointwise estimates on the resultant Green's function, which are sufficient for proving nonlinear stability under the necessary assumption of spectral stability.
This talk focuses on two instances in which scientific fields outside mathematics benefit from incorporating the geometry of the data. In each instance, the application area motivates the need for new mathematical approaches and algorithms and leads to interesting new questions. (1) A method to determine and predict drug treatment effectiveness for patients based on their baseline information. This motivates building a function-adapted diffusion operator on high-dimensional data X when the function F can only be evaluated on large subsets of X, as well as defining a localized filtration of F and estimating values of F at a finer scale than is naively reliable. (2) The current empirical success of deep learning in imaging and medical applications, in which theory and understanding lag far behind. By assuming the data lie near low-dimensional manifolds and building local wavelet frames, we improve on existing theory that breaks down when the ambient dimension is large (the regime in which deep learning has seen the most success).
The use of PDE models to describe complex systems in the social sciences, such as socio-economic segregation and crime, has been popularized during the past decade. In this talk I will introduce some PDE models which can be seen as basic models for a variety of social phenomena. I will then discuss how these models can be used to explore and gain understanding of the real-world systems they describe. For example, we learn that a population's innate views toward criminal activity can play a significant role in the prevention of crime-wave propagation.
Edges are noticeable features in images which can be extracted from noisy data using different variational models. The analysis of such models leads to the question of representing general L^2-data as the divergence of uniformly bounded vector fields.
We use a multi-scale approach to construct uniformly bounded solutions of div(U)=f for general f in the critical regularity space L^2(T^2). The study of this equation and related problems was motivated by results of Bourgain & Brezis. The intriguing critical aspect here is that although the problems are linear, the construction of their solutions is not. These constructions are special cases of a rather general framework for solving linear equations in critical regularity spaces. The solutions are realized in terms of nonlinear hierarchical representations $U = \sum_j u_j$ which we introduced earlier in the context of image processing, yielding a multi-scale decomposition of images U.
In this talk, I will present a kernel-free boundary integral (KFBI) method for variable-coefficient elliptic partial differential equations on complex domains. The KFBI method is a generalization of the standard boundary integral method. But, unlike the standard boundary integral method, the KFBI method does not need an analytical expression for the kernel of the boundary integral operator or the Green's function associated with the elliptic PDE, so it is not limited to constant-coefficient PDEs. The KFBI method solves the discrete integral equations by an iterative method, in which only part of the matrix-vector multiplication involves the discretization of the boundary integral. With the KFBI method, the evaluation of the boundary integral is replaced by interpolation from a structured-grid solution to an equivalent interface problem, which is solved quickly by a Fourier transform or geometric multigrid based fast elliptic solver. Numerical examples for Dirichlet and Neumann BVPs, interface problems with different conductivity constants, and the Poisson-Boltzmann equations will be presented.
Electrophysiological recordings of neurons in the cortex of mammals reveal a ubiquitous high degree of irregularity of single neuron activity. The mechanisms and functional role of this irregular activity remain the subject of debate. Here, I will describe simplified models of networks of neurons, and analytical tools that can be used to understand their dynamics. Under some conditions, such networks can be described using a system of coupled Fokker-Planck equations (one for each class of neurons composing the network), in which the drift and diffusion terms depend on the probability flux at firing threshold. Provided specific conditions on network connectivity are satisfied, these models reproduce some of the landmark features observed in experiments (highly irregular firing at low rates, weak correlations between neurons, wide distributions of firing rates). Interestingly, these networks show a rich diversity of irregular states (chaotic or not, asynchronous or synchronous).
Finite difference methods are awkward for solving boundary value problems, such as the Dirichlet problem, with general boundaries, but they are well suited for interface problems, which have prescribed jumps in an unknown across a general interface or boundary. The two problems can be connected through potential theory: The Dirichlet boundary value problem is converted to an integral equation on the boundary, and the integrals can be thought of as solutions to interface problems. Wenjun Ying et al. have developed a practical method for solving the Dirichlet problem, and more general ones, by solving interface problems with finite difference methods and iterating to mimic the solution of the integral equation. We will describe some analysis which proves that a simplified version of Ying's method works. A recent view of classical potential theory leads to a finite difference version of the theory in which, remarkably, the crude discrete operators have much of the structure of the exact operators. This simplified method produces the Shortley-Weller solution of the Dirichlet problem. Details can be found at arxiv.org/abs/1803.08532 .