Quicklists
public 01:34:50

Selim Esedoglu : Algorithms for anisotropic mean curvature flow of networks, with applications to materials science

  -   Applied Math and Analysis ( 98 Views )

Motion by mean curvature for a network of surfaces arises in many applications. An important example is the evolution of microstructure in a polycrystalline material under heat treatment. Most metals and ceramics are of this type: they consist of many small single-crystal pieces of differing orientation, called grains, that are stuck together. A famous model proposed by Mullins in the 1960s describes the dynamics of the network of surfaces that separate neighboring grains from one another in such a material as gradient descent for a weighted sum of the (possibly anisotropic) areas of the surfaces. The resulting dynamics is motion by weighted mean curvature for the surfaces in the network, together with certain conditions that need to be satisfied at junctions along which three or more surfaces may intersect. Typically, many topological changes occur during the evolution, as grains shrink and disappear, pinch off, or junctions collide. A very elegant algorithm for the motion by mean curvature of a surface -- known as threshold dynamics -- was given by Merriman, Bence, and Osher: it generates the whole evolution by alternating two simple operations, convolution with a Gaussian kernel and thresholding. It also works for networks, provided that all surfaces in the network have isotropic surface energies with equal weights. Its correct extension to the more general setting of unequal weights and possibly anisotropic (normal-dependent) surface energies remained elusive, despite keen interest in this setting from materials scientists. In joint work with Felix Otto, we give a variational formulation of the original threshold dynamics algorithm by identifying a Lyapunov functional for it. In turn, the variational formulation shows how to extend the algorithm correctly to the more general settings that are of interest to materials scientists (joint work with Felix Otto and Matt Elsey). Examples of how to use the new algorithms to investigate unsettled questions about grain size distribution and its evolution will also be given.
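
As a rough illustration of the two alternating operations mentioned above, here is a minimal sketch of the original isotropic, two-phase MBO threshold dynamics scheme (not the network or anisotropic extensions discussed in the talk). The grid size, time step, and use of SciPy's gaussian_filter are illustrative choices, not taken from the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mbo_step(chi, dt, dx):
    """One threshold-dynamics step for a single interface (two phases).

    chi : 0/1 array, characteristic function of one phase.
    dt  : time step (diffusion time used for the convolution).
    dx  : grid spacing, used to express the Gaussian width in pixels.
    """
    # Step 1: convolve with the heat kernel at time dt (std dev sqrt(2*dt)).
    sigma = np.sqrt(2.0 * dt) / dx
    u = gaussian_filter(chi.astype(float), sigma=sigma)
    # Step 2: threshold at 1/2 to obtain the new phase.
    return (u > 0.5).astype(float)

# Shrinking-circle sanity check: under mean curvature flow a circle loses
# area at the constant rate 2*pi, which this scheme should approximate.
n, dx, dt = 256, 1.0 / 256, 1e-4
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
chi = ((x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.3 ** 2).astype(float)
for _ in range(100):
    chi = mbo_step(chi, dt, dx)
print("remaining area fraction:", chi.mean())
```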

public 01:14:48

Ben Murphy : Random Matrices, Spectral Measures, and Transport in Composite Media

  -   Applied Math and Analysis ( 112 Views )

We consider composite media with a broad range of scales, whose effective properties are important in materials science, biophysics, and climate modeling. Examples include random resistor networks, polycrystalline media, porous bone, the brine microstructure of sea ice, ocean eddies, melt ponds on the surface of Arctic sea ice, and the polar ice packs themselves. The analytic continuation method provides Stieltjes integral representations for the bulk transport coefficients of such systems, involving spectral measures of self-adjoint random operators which depend only on the composite geometry. On finite bond lattices or discretizations of continuum systems, these random operators are represented by random matrices and the spectral measures are given explicitly in terms of their eigenvalues and eigenvectors. In this lecture we will discuss various implications and applications of these integral representations. We will also discuss computations of the spectral measures of the operators, as well as statistical measures of their eigenvalues. For example, the effective behavior of composite materials often exhibits large changes associated with transitions in the connectedness or percolation properties of a particular phase. We demonstrate that an onset of connectedness gives rise to striking transitional behavior in the short and long range correlations in the eigenvalues of the associated random matrix. This, in turn, gives rise to transitional behavior in the spectral measures, leading to observed critical behavior in the effective transport properties of the media.
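
For orientation, a standard form of the Stieltjes integral representation referred to above, for a two-phase medium with component conductivities sigma_1 and sigma_2, is sketched below. The notation is the conventional one from the homogenization literature (Golden-Papanicolaou), not necessarily the exact notation used in the talk.

```latex
% Stieltjes representation of the effective conductivity \sigma^* of a
% two-phase composite; \mu is the spectral measure of a self-adjoint
% random operator that depends only on the composite geometry.
F(s) \;=\; 1 - \frac{\sigma^{*}}{\sigma_{2}}
      \;=\; \int_{0}^{1} \frac{d\mu(\lambda)}{s-\lambda},
\qquad
s \;=\; \frac{1}{1 - \sigma_{1}/\sigma_{2}}.
```

On a finite lattice the operator is a random matrix, and the measure mu reduces to a weighted sum of point masses at its eigenvalues, which is the setting of the computations described in the lecture.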

public 01:14:42

Rongjie Lai : Understanding Manifold-structured Data via Geometric Modeling and Learning

  -   Applied Math and Analysis ( 105 Views )

Analyzing and inferring the underlying global intrinsic structure of data from local information is critical in many fields. In practice, coherent structure allows us to model data as low-dimensional manifolds, represented as point clouds, in a possibly high-dimensional space. Unlike image and signal processing, which handle functions on flat domains with well-developed tools for processing and learning, manifold-structured data sets are far more challenging because of their complicated geometry. For example, the same geometric object can take very different coordinate representations due to the variety of embeddings, transformations, or representations (imagine the same human body shape in different poses, which are nearly isometric embeddings of the same surface). These ambiguities form an infinite-dimensional isometry group and make higher-level tasks in manifold-structured data analysis and understanding even more challenging. To overcome these ambiguities, I will first discuss modeling-based methods. This approach uses geometric PDEs that adapt to the intrinsic manifold structure of data and extracts various invariant descriptors to characterize and understand data through solutions of differential equations on manifolds. Inspired by recent developments in deep learning, I will also discuss our recent work on a new way of defining convolution on manifolds and demonstrate its potential for geometric deep learning on manifolds. This geometric way of defining convolution provides a natural combination of modeling and learning on manifolds. It enables further applications of comparing, classifying, and understanding manifold-structured data when combined with recent advances in deep learning.
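
As a generic illustration only (not the speaker's construction), one common way to define a convolution-like operation directly on a point cloud is spectrally, through the eigenvectors of a graph Laplacian built from local neighborhoods. The k-nearest-neighbor graph, Gaussian edge weights, and polynomial filter below are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import eigsh

def graph_laplacian(points, k=10, eps=0.1):
    """Symmetric normalized Laplacian of a k-NN graph on a point cloud."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)        # first neighbor is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    cols = idx[:, 1:].ravel()
    w = np.exp(-(dists[:, 1:].ravel() ** 2) / eps)  # Gaussian edge weights
    W = coo_matrix((w, (rows, cols)), shape=(len(points),) * 2)
    W = 0.5 * (W + W.T)                             # symmetrize
    d = np.asarray(W.sum(axis=1)).ravel()
    Dinv = diags(1.0 / np.sqrt(d))
    return diags(np.ones(len(points))) - Dinv @ W @ Dinv

def spectral_conv(points, signal, filter_coeffs, n_eig=50):
    """Filter a per-point signal by reshaping its graph-Laplacian spectrum."""
    L = graph_laplacian(points)
    lam, U = eigsh(L, k=n_eig, which="SM")          # smallest eigenpairs
    g = np.polyval(filter_coeffs[::-1], lam)        # polynomial filter g(lambda)
    return U @ (g * (U.T @ signal))

# Toy usage: low-pass filter a noisy signal defined on a random point cloud.
pts = np.random.rand(500, 3)
sig = np.sin(4 * np.pi * pts[:, 0]) + 0.3 * np.random.randn(500)
smoothed = spectral_conv(pts, sig, filter_coeffs=[1.0, -0.8])  # g(l) = 1 - 0.8 l
```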

public 01:34:50

Suncica Canic : Mathematical modeling for cardiovascular stenting

  -   Applied Math and Analysis ( 179 Views )

The speaker will talk about several projects that are taking place in an interdisciplinary endeavor between the researchers in the Mathematics Department at the University of Houston, the Texas Heart Institute, Baylor College of Medicine, the Mathematics Department at the University of Zagreb, and the Mathematics Department of the University of Lyon 1. The projects are related to non-surgical treatment of aortic abdominal aneurysm and coronary artery disease using endovascular prostheses called stents and stent-grafts. Through a collaboration between mathematicians, cardiovascular specialists and engineers we have developed a novel mathematical model to study blood flow in compliant (viscoelastic) arteries treated with stents and stent-grafts. The mathematical tools used in the derivation of the effective, reduced equations utilize asymptotic analysis and homogenization methods for porous media flows. The existence of a unique solution to the resulting fluid-structure interaction model is obtained by using novel techniques to study systems of mixed, hyperbolic-parabolic type. A numerical method, based on the finite element approach, was developed, and numerical solutions were compared with the experimental measurements. Experimental measurements based on ultrasound and Doppler methods were performed at the Cardiovascular Research Laboratory located at the Texas Heart Institute. Excellent agreement between the experiment and the numerical solution was obtained. This year marks a giant step forward in the development of medical devices and in the development of the partnership between mathematics and medicine: the FDA (the United States Food and Drug Administration) is getting ready to, for the first time, require mathematical modeling and numerical simulations to be used in the development of peripheral vascular devices. The speaker acknowledges research support from the NSF, NIH, and Texas Higher Education Board, and donations from Medtronic Inc. and Kent Elastomer Inc.

public 01:34:42

Jacob Bedrossian : Positive Lyapunov exponents for 2d Galerkin-Navier-Stokes with stochastic forcing

  -   Applied Math and Analysis ( 399 Views )

In this talk we discuss our recently introduced methods for obtaining strictly positive lower bounds on the top Lyapunov exponent of high-dimensional stochastic differential equations such as the weakly damped Lorenz-96 (L96) model or Galerkin truncations of the 2d Navier-Stokes equations (joint with Alex Blumenthal and Sam Punshon-Smith). This hallmark of chaos has long been observed in these models; however, no mathematical proof had previously been given for any type of deterministic or stochastic forcing. The method we proposed combines (A) a new identity connecting the Lyapunov exponents to a Fisher information of the stationary measure of the Markov process tracking tangent directions (the so-called "projective process"); and (B) an L1-based hypoelliptic regularity estimate showing that this (degenerate) Fisher information is an upper bound on some fractional regularity. For L96 and GNSE, we then further reduce the lower bound on the top Lyapunov exponent to proving that the projective process satisfies Hörmander's condition. I will also discuss recent work with Sam Punshon-Smith on verifying this condition for the 2d Galerkin-Navier-Stokes equations in a rectangular, periodic box of any aspect ratio, using special structure of matrix Lie algebras and ideas from computational algebraic geometry.
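
For context, the top Lyapunov exponent and the projective process mentioned above can be written in the following standard form (for a flow phi^t with linearization D_x phi^t acting on tangent vectors); this is the textbook definition rather than anything specific to the talk.

```latex
% Top Lyapunov exponent of the linearization v_t = D_x\varphi^t v_0,
% and the associated projective process on the unit sphere.
\lambda_1 \;=\; \lim_{t\to\infty} \frac{1}{t}\,
           \log \bigl\lvert D_x\varphi^{t}\, v_0 \bigr\rvert ,
\qquad
\pi_t \;=\; \frac{v_t}{\lvert v_t\rvert} \in \mathbb{S}^{d-1}.
```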

public 01:34:47

Julia Kimbell : Applications of upper respiratory tract modeling to risk assessment, medicine, and drug delivery

  -   Applied Math and Analysis ( 146 Views )

The upper respiratory tract is the portal of entry for inhaled air and anything we breathe in with it. For most of us, the nasal passages do most of the work of cleansing, humidifying, and warming inhaled air, using a lining of highly vascularized tissue coated with mucus. This tissue is susceptible to damage from inhaled material, can adversely affect quality of life if deformed or diseased, and is a potential route of systemic exposure via circulating blood. To understand nasal physiology and the effects of inhalants on nasal tissue, information on airflow, gas uptake, and particle deposition patterns is needed for both laboratory animals and humans. This information is often difficult to obtain in vivo but may be estimated with three-dimensional computational fluid dynamics (CFD) models. At CIIT Centers for Health Research (CIIT-CHR), CFD models of nasal airflow and inhaled gas and particle transport have been used to test hypotheses about mechanisms of toxicity, help extrapolate laboratory animal data to people, and make predictions for human health risk assessments, as well as to study surgical interventions and nasal drug delivery. In this talk, an overview of CIIT-CHR's nasal airflow modeling program will be given, with the goal of illustrating how CFD modeling can help researchers clarify, organize, and understand the complex structure, function, physiology, pathobiology, and utility of the nasal airways.

public 01:34:32

Ioannis Kevrekidis : No Equations, No Variables, No Parameters, No Space, No Time -- Data, and the Crystal Ball Modeling of Complex/Multiscale Systems

  -   Applied Math and Analysis ( 176 Views )

Obtaining predictive dynamical equations from data lies at the heart of science and engineering modeling, and is the linchpin of our technology. In mathematical modeling one typically progresses from observations of the world (and some serious thinking!) first to the selection of variables, then to equations for a model, and finally to the analysis of the model to make predictions. Good mathematical models give good predictions (and inaccurate ones do not) --- but the computational tools for analyzing them are the same: algorithms that typically operate on closed-form equations.
While the skeleton of the process remains the same, today we witness the development of mathematical techniques that operate directly on observations --- on data --- and appear to circumvent the serious thinking that goes into selecting variables and parameters and deriving accurate equations. The process may then appear to the user a little like making predictions by "looking into a crystal ball". Yet the "serious thinking" is still there, and it uses the same --- and some new --- mathematics: it goes into building algorithms that "jump directly" from data to the analysis of the model (which is now not available in closed form) so as to make predictions. Our work here presents a couple of efforts that illustrate this "new" path from data to predictions. It really is the same old path, but it is traveled by new means.

public 01:34:47

Alexandr Labovschii : High accuracy numerical methods for fluid flow problems and turbulence modeling

  -   Applied Math and Analysis ( 99 Views )

We present several high accuracy numerical methods for fluid flow problems and turbulence modeling.

First, we consider a stabilized finite element method for the Navier-Stokes equations with second-order temporal accuracy. The method requires only the solution of one linear system (arising from an Oseen problem) per time step.
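
A representative time discretization with exactly these two properties (second-order in time, one linearized Oseen solve per step) is Crank-Nicolson with a linearly extrapolated convecting velocity, sketched below; this is shown only to illustrate the structure and is not claimed to be the stabilized scheme of the thesis.

```latex
% Crank-Nicolson with extrapolated convecting velocity: one Oseen-type
% linear solve per time step, second-order accurate in time.
\frac{u^{n+1}-u^{n}}{\Delta t}
  + \Bigl(\tfrac{3}{2}u^{n}-\tfrac{1}{2}u^{n-1}\Bigr)\!\cdot\!\nabla\, u^{n+\frac12}
  - \nu \Delta u^{n+\frac12} + \nabla p^{n+\frac12} = f^{n+\frac12},
\qquad
\nabla\!\cdot\! u^{n+1}=0,
\qquad
u^{n+\frac12} := \tfrac12\bigl(u^{n+1}+u^{n}\bigr).
```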

We proceed by introducing a family of defect correction methods for the time-dependent Navier-Stokes equations, aimed at higher Reynolds numbers. The method presented is unconditionally stable, computationally cheap, and gives an accurate approximation to the quantities sought.

Next, we present a defect correction method with increased temporal accuracy. The method is applied to the evolutionary transport problem; it is proven to be unconditionally stable, and the desired temporal accuracy is attained at no extra computational cost.
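
The common idea behind the two defect-correction paragraphs above is to repeatedly solve a cheap, stable low-order problem against the residual of a more accurate operator. The following is a generic sketch of that principle on a linear system (not the specific Navier-Stokes or transport schemes of the thesis); the diagonal "low-order" solver and the test matrix are illustrative choices.

```python
import numpy as np

def defect_correction(A_high, A_low_solve, b, n_corrections=2):
    """Generic defect-correction iteration (illustrative only).

    A_high      : accurate (target) operator, here a dense matrix.
    A_low_solve : callable solving the cheap, stable low-order problem.
    Iterating x <- x + A_low^{-1} (b - A_high x) drives the cheap solution
    toward the solution of the accurate problem, one correction at a time.
    """
    x = A_low_solve(b)                      # predictor: low-order solve
    for _ in range(n_corrections):
        defect = b - A_high @ x             # residual w.r.t. the accurate operator
        x = x + A_low_solve(defect)         # correct with another cheap solve
    return x

# Toy usage: the "low-order" operator is just the diagonal of A.
rng = np.random.default_rng(0)
A = np.eye(50) + 0.01 * rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = defect_correction(A, lambda r: r / np.diag(A), b, n_corrections=20)
print("residual norm:", np.linalg.norm(b - A @ x))
```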

We then turn to turbulence modeling in coupled Navier-Stokes systems, namely magnetohydrodynamics (MHD). We consider the mathematical properties of a model for the simulation of the large eddies in turbulent, viscous, incompressible, electrically conducting flows. We prove existence, uniqueness, and convergence of solutions for the simplest closed MHD model. Furthermore, we show that the model preserves the properties of the 3D MHD equations.

Lastly, we consider the family of approximate deconvolution models (ADM) for turbulent MHD flows. We prove existence, uniqueness and convergence of solutions, and derive a bound on the modeling error. We verify the physical properties of the models and provide the results of the computational tests.
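
For orientation, the van Cittert deconvolution operator that underlies the ADM family is typically defined as follows, where G is the spatial filter and u-bar = Gu is the filtered field; this is the standard construction from the LES literature and is not necessarily the exact variant analyzed here.

```latex
% van Cittert approximate deconvolution of order N: D_N \bar{u}
% approximates the unfiltered field u, with \bar{u} = G u.
D_N \;=\; \sum_{n=0}^{N} (I - G)^{n},
\qquad
D_N\,\overline{u} \;=\; u \;+\; \mathcal{O}\!\bigl((I-G)^{N+1}u\bigr).
```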