public 01:34:43

Bruce Donald : Some mathematical and computational challenges arising in structural molecular biology

  -   Applied Math and Analysis ( 304 Views )

Computational protein design is a transformative field with exciting prospects for advancing both basic science and translational medical research. New algorithms blend discrete and continuous mathematics to address the challenges of creating designer proteins. I will discuss recent progress in this area and some interesting open problems. I will motivate this talk by discussing how, by using continuous geometric representations within a discrete optimization framework, broadly neutralizing anti-HIV-1 antibodies were computationally designed that are now being tested in humans: the designed antibodies are currently in eight clinical trials (see https://clinicaltrials.gov/ct2/results?cond=&term=VRC07&cntry=&state=&city=&dist= ), one of which is Phase 2a (NCT03721510). These continuous representations model the flexibility and dynamics of biological macromolecules, which are an important structural determinant of function. However, reconstruction of biomolecular dynamics from experimental observables requires the determination of a conformational probability distribution. These distributions are not fully constrained by the limited information available from experiments, making the problem ill-posed in the sense of Hadamard: there is no unique solution, and multiple or even infinitely many solutions may exist. To make the problem well-posed, it must be regularized by imposing (hopefully reasonable) assumptions. I will present new ways to both represent and visualize correlated inter-domain protein motions (see Figure). We use Bingham distributions, based on a quaternion fit to circular moments of a physics-based quadratic form. To find the optimal distribution, we designed an efficient, provable branch-and-bound algorithm that exploits the structure of analytical solutions to the trigonometric moment problem. Hence, continuous conformational PDFs can be determined directly from NMR measurements. The representation works especially well for multi-domain systems with broad conformational distributions. Ultimately, this method has parallels to other branches of applied mathematics that balance discrete and continuous representations, including physical geometric algorithms, robotics, computer vision, and robust optimization. I will advocate for using continuous distributions for protein modeling, and describe future work and open problems.
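The Bingham distribution mentioned above is a density on the unit quaternions (the 3-sphere) of the form p(q) ∝ exp(qᵀAq). The sketch below is an illustration under standard conventions, not the authors' code; the function names and the Monte Carlo normalizer are hypothetical. It evaluates the unnormalized log-density and checks antipodal symmetry, the property that makes the distribution well suited to rotations, since q and -q represent the same rotation.

```python
import numpy as np

def bingham_logpdf_unnorm(q, A):
    """Unnormalized log-density of a Bingham distribution on the unit
    3-sphere (unit quaternions): log p(q) = q^T A q + const, where A is
    a symmetric 4x4 concentration matrix."""
    return q @ A @ q

def mc_log_normalizer(A, n=100_000, seed=0):
    """Monte Carlo estimate of the log normalizing constant (up to the
    surface area of S^3), using uniform samples on the sphere obtained
    by normalizing 4-D Gaussian draws."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, 4))
    q = x / np.linalg.norm(x, axis=1, keepdims=True)
    vals = np.einsum('ni,ij,nj->n', q, A, q)
    return np.log(np.mean(np.exp(vals)))

# A concentrated around the identity quaternion e = (1, 0, 0, 0):
A = np.diag([0.0, -10.0, -10.0, -10.0])
e = np.array([1.0, 0.0, 0.0, 0.0])
r = np.array([0.0, 1.0, 0.0, 0.0])  # a rotation far from the mode

# Antipodal symmetry p(q) = p(-q), and a higher density at the mode:
assert np.isclose(bingham_logpdf_unnorm(e, A), bingham_logpdf_unnorm(-e, A))
assert bingham_logpdf_unnorm(e, A) > bingham_logpdf_unnorm(r, A)
```

The branch-and-bound fit to circular moments described in the abstract is the hard part and is not shown here; this only illustrates the representation being fit.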

public 01:34:51

Bruce Pitman : CANCELLED

  -   Applied Math and Analysis ( 180 Views )


public 01:09:47

Casey Rodriguez : The Radiative Uniqueness Conjecture for Bubbling Wave Maps

  -   Applied Math and Analysis ( 191 Views )

One of the most fundamental questions in partial differential equations is that of regularity and the possible breakdown of solutions. We will discuss this question for solutions to a canonical example of a geometric wave equation: energy-critical wave maps. Breakthrough works of Krieger-Schlag-Tataru, Rodnianski-Sterbenz and Raphaël-Rodnianski produced examples of wave maps that develop singularities in finite time. These solutions break down by concentrating energy at a point in space (via bubbling a harmonic map) but have a regular limit, away from the singular point, as time approaches the final time of existence. The regular limit is referred to as the radiation. This mechanism of breakdown occurs in many other PDEs, including energy-critical wave equations, Schrödinger maps and Yang-Mills equations. A basic question is the following: can we give a precise description of all bubbling singularities for wave maps, with the goal of finding the natural unique continuation of such solutions past the singularity? In this talk, we will discuss recent work (joint with J. Jendrej and A. Lawrie) which is the first to directly and explicitly connect the radiative component to the bubbling dynamics by constructing and classifying bubbling solutions with a simple form of prescribed radiation. Our results serve as an important first step in formulating and proving the following Radiative Uniqueness Conjecture for a large class of wave maps: every bubbling solution is uniquely characterized by its radiation, and thus every bubbling solution can be uniquely continued past blow-up time while conserving energy.
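For readers unfamiliar with the model (this setup is standard but not spelled out in the abstract): in the energy-critical case the unknown is a map u : R^{1+2} → S² ⊂ R³ solving

\[
\partial_t^2 u - \Delta u \;=\; \bigl(|\nabla u|^2 - |\partial_t u|^2\bigr)\,u,
\qquad
E(u) \;=\; \frac{1}{2}\int_{\mathbb{R}^2}\bigl(|\partial_t u|^2 + |\nabla u|^2\bigr)\,dx .
\]

The energy E is invariant under the scaling u(t,x) ↦ u(t/λ, x/λ) exactly in two space dimensions, which is the sense in which the problem is energy critical; the "bubbles" are rescaled finite-energy harmonic maps concentrating at a point.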

public 01:34:50

Hongkai Zhao : Approximate Separability of Green's Functions for the Helmholtz Equation in the High Frequency Limit

  -   Applied Math and Analysis ( 183 Views )

The approximate separability of Green's functions of differential operators is a basic and important question in the analysis of differential equations, the development of efficient numerical algorithms, and imaging. Being able to approximate a Green's function as a sum of a few separable terms is equivalent to a low-rank property of the corresponding numerical solution operators, which allows for matrix compression and fast solution techniques. Green's functions of coercive elliptic differential operators have been shown to be highly separable, and the resulting low-rank property of the discretized systems has been exploited to develop efficient numerical algorithms. However, the case of the Helmholtz equation in the high frequency limit is more challenging, both mathematically and numerically. We introduce new tools, based on the relation between two Green's functions with different source points and a tight dimension estimate for the best linear subspace approximating a set of almost orthogonal vectors, to prove new lower bounds on the number of terms in separable representations of the Green's function of the Helmholtz operator in the high frequency limit. Upper bounds are also derived. We give explicit sharp estimates for cases that are common in practice and present numerical examples. This is joint work with Bjorn Engquist.
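The low-rank claim can be probed numerically. The sketch below is an illustration only, not the paper's construction; the point counts, cube placement, and tolerance are arbitrary choices. It forms the free-space 3-D Helmholtz Green's function between two well-separated point clouds and compares its numerical rank at low and high wavenumber.

```python
import numpy as np

def helmholtz_green_matrix(X, Y, k):
    """Free-space 3-D Helmholtz Green's function e^{ikr} / (4 pi r)
    evaluated between target points X and source points Y."""
    r = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return np.exp(1j * k * r) / (4 * np.pi * r)

def numerical_rank(G, tol=1e-6):
    """Number of singular values above tol relative to the largest."""
    s = np.linalg.svd(G, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(1)
X = rng.random((200, 3))                      # targets in the unit cube
Y = rng.random((200, 3)) + [3.0, 0.0, 0.0]    # well-separated source cube

rank_low = numerical_rank(helmholtz_green_matrix(X, Y, k=1.0))
rank_high = numerical_rank(helmholtz_green_matrix(X, Y, k=40.0))
assert rank_high >= rank_low   # separability degrades as k grows
```

At a fixed tolerance the rank grows with k, reflecting the loss of separability in the high frequency limit that the lower bounds quantify.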

public 01:14:39

Lucy Zhang : Modeling and Simulations of Fluid and Deformable-Structure Interactions in Bio-Mechanical Systems

  -   Applied Math and Analysis ( 164 Views )

Fluid-structure interactions exist in many aspects of our daily lives. Some biomedical engineering examples are blood flowing through a blood vessel and blood pumping in the heart. Fluid interacting with moving or deformable structures poses additional numerical challenges because of the complexity of handling transient and simultaneous interactions between the fluid and solid domains. Obtaining stable, effective, and accurate solutions is not trivial, and traditional methods available in commercial software often generate numerical instabilities.

In this talk, a novel numerical solution technique, the Immersed Finite Element Method (IFEM), is introduced for solving complex fluid-structure interaction problems in various engineering fields. The fluid and solid domains are fully coupled, yielding accurate and stable solutions. The variables in the two domains are interpolated via a delta function, which enables the use of non-uniform grids in the fluid domain and thus allows arbitrary geometries and boundary conditions. This method extends the capability and flexibility to solve various biomedical, traditional mechanical, and aerospace engineering problems with detailed and realistic mechanics analysis. Verification problems will be shown to validate the accuracy and effectiveness of this numerical approach. Several biomechanical problems will be presented: 1) blood flow in the left atrium and left atrial appendage, the main source of blood clots in patients with atrial fibrillation, where the function of the appendage is determined through fluid-structure interaction analysis; and 2) blood cell and cell-cell interactions under different flow shear rates, where the formation of cell aggregates can be predicted for a given physiologic shear rate.
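The fluid-solid coupling hinges on the discretized delta function mentioned above. As a minimal 1-D illustration (IFEM itself uses a reproducing-kernel delta; this sketch substitutes the simpler 4-point cosine kernel common in immersed-boundary methods, and all names are illustrative), here is how a point force on the structure is spread to the fluid grid and a grid field is interpolated back:

```python
import numpy as np

def cosine_delta(r):
    """4-point cosine kernel phi(r): a smoothed Dirac delta with
    support |r| < 2, with r measured in grid spacings. The discretized
    delta is delta_h(x) = phi(x/h) / h."""
    return np.where(np.abs(r) < 2.0,
                    0.25 * (1.0 + np.cos(np.pi * r / 2.0)), 0.0)

def spread(F, X, grid, h):
    """Spread a Lagrangian point force F at position X onto the
    Eulerian grid: f_i = F * delta_h(x_i - X)."""
    return F * cosine_delta((grid - X) / h) / h

def interpolate(u, X, grid, h):
    """Interpolate a grid field u back to the point X:
    U = sum_i u_i * delta_h(x_i - X) * h = sum_i u_i * phi((x_i - X)/h)."""
    return np.sum(u * cosine_delta((grid - X) / h))

h = 0.05
grid = np.arange(0.0, 1.0, h)
X = 0.431  # an off-grid structure point

# The kernel weights form a partition of unity, so spreading conserves
# total force and interpolation reproduces constant fields exactly.
w = cosine_delta((grid - X) / h)
assert np.isclose(w.sum(), 1.0)
assert np.isclose(np.sum(spread(1.0, X, grid, h)) * h, 1.0)
assert np.isclose(interpolate(np.full_like(grid, 3.0), X, grid, h), 3.0)
```

The same spread/interpolate pair, applied in 2-D or 3-D with the solid mesh immersed in the fluid grid, is what lets the structure take an arbitrary shape independent of the fluid discretization.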

public 01:24:58

Ju Sun : When Are Nonconvex Optimization Problems Not Scary?

  -   Applied Math and Analysis ( 156 Views )

Many problems arising from scientific and engineering applications can be naturally formulated as optimization problems, most of which are nonconvex. For nonconvex problems, even obtaining a local minimizer is computationally hard in theory, let alone the global minimizer. In practice, however, simple numerical methods often work surprisingly well in finding high-quality solutions for specific problems at hand.

In this talk, I will describe our recent effort in bridging the mysterious theory-practice gap for nonconvex optimization. I will highlight a family of nonconvex problems that can be solved to global optimality using simple numerical methods, independent of initialization. This family has the characteristic global structure that (1) all local minimizers are global, and (2) all saddle points have directional negative curvatures. Problems lying in this family cover various applications across machine learning, signal processing, scientific imaging, and more. I will focus on two examples we worked out: learning sparsifying bases for massive data and recovery of complex signals from phaseless measurements. In both examples, the benign global structure allows us to derive geometric insights and computational results that are inaccessible from previous methods. In contrast, alternative approaches to solving nonconvex problems often entail either expensive convex relaxation (e.g., solving large-scale semidefinite programs) or delicate problem-specific initializations.

Completing and enriching this framework is an active research endeavor that is being undertaken by several research communities. At the end of the talk, I will discuss open problems to be tackled to move forward.
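The benign structure described above, all local minimizers global and every saddle point exhibiting negative curvature, can be seen on a toy function (illustrative only; this is not the dictionary-learning or phase-retrieval objective from the talk): plain gradient descent from a random initialization moves off the strict saddle and reaches a global minimizer.

```python
import numpy as np

def f(p):
    """Toy benign landscape: global minimizers at (+1, 0) and (-1, 0),
    and a single strict saddle at the origin (f_xx(0,0) = -4 < 0)."""
    x, y = p
    return (x**2 - 1.0)**2 + y**2

def grad(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

# Plain gradient descent from a random initialization.
rng = np.random.default_rng(0)
p = rng.standard_normal(2)
for _ in range(5000):
    p = p - 0.02 * grad(p)

# Independent of initialization (almost surely), we land at +-(1, 0):
assert abs(abs(p[0]) - 1.0) < 1e-6 and abs(p[1]) < 1e-6
```

In one dimension this is transparent; the point of the talk is that the same two geometric properties can be verified for genuinely high-dimensional problems such as sparse dictionary learning and phase retrieval.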

public 01:34:49

Xiantao Li : The Mori-Zwanzig formalism for the reduction of complex dynamics models

  -   Applied Math and Analysis ( 128 Views )

Mathematical models of complex physical processes often involve a large number of degrees of freedom as well as events occurring on different time scales, so direct simulations based on these models face a tremendous challenge. The focus of this talk is the Mori-Zwanzig (MZ) projection formalism for reducing the dimension of a complex dynamical system. The goal is to mathematically derive a reduced model with far fewer variables that still captures the essential properties of the system. In many cases, this formalism also eliminates fast modes and makes it possible to explore events over longer time scales. The models directly derived from the MZ projection are typically too abstract to be practically implemented. We will first discuss cases where the model can be simplified to generalized Langevin equations (GLE). Furthermore, we introduce systematic numerical approximations to the GLE in which the fluctuation-dissipation theorem (FDT) is automatically satisfied. More importantly, these approximations lead to a hierarchy of reduced models with increasing accuracy, which would also be useful for adaptive model refinement (AMR). Examples, including the NLS equation, atomistic models of materials defects, and molecular models of proteins, will be presented to illustrate the potential applications of the methods.
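A standard illustration of this kind of reduction (a sketch with made-up parameters, not from the talk; the stochastic forcing that the FDT would require is deliberately omitted) is that a GLE with an exponential memory kernel embeds into a small Markovian system via one auxiliary variable:

```python
import numpy as np

# GLE for a harmonic coordinate q with exponential memory kernel
# K(t) = (c / tau) * exp(-t / tau):
#   dq/dt = v
#   dv/dt = -q - \int_0^t K(t - s) v(s) ds
# The memory integral M(t) = \int_0^t K(t - s) v(s) ds satisfies
#   dM/dt = -M / tau + (c / tau) v,
# so the non-Markovian GLE becomes a 3-variable Markovian system.

c, tau, dt, T = 1.0, 0.5, 1e-3, 20.0
q, v, M = 1.0, 0.0, 0.0
E0 = 0.5 * (q**2 + v**2)

for _ in range(int(T / dt)):       # forward Euler, for illustration
    dq = v
    dv = -q - M
    dM = -M / tau + (c / tau) * v
    q, v, M = q + dt * dq, v + dt * dv, M + dt * dM

E = 0.5 * (q**2 + v**2)
assert E < E0   # the memory term dissipates energy (no FDT noise here)
```

In the full framework the eliminated degrees of freedom also contribute a random force tied to K(t) by the FDT; the hierarchy of approximations in the talk makes that pairing automatic.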

public 02:29:55

Leonid Berlyand : Flux norm approach to finite-dimensional homogenization approximation with non-separated scales and high contrast

  -   Applied Math and Analysis ( 164 Views )

Classical homogenization theory deals with mathematical models of strongly inhomogeneous media described by PDEs with rapidly oscillating coefficients of the form A(x/ε), ε → 0. The goal is to approximate this problem by a homogenized (simpler) PDE with slowly varying coefficients that do not depend on the small parameter ε. The original problem has two scales: fine O(ε) and coarse O(1), whereas the homogenized problem has only a coarse scale. The homogenization of PDEs with periodic or ergodic coefficients and well-separated scales is now well understood. In joint work with H. Owhadi (Caltech) we consider the most general case of arbitrary L∞ coefficients, which may contain infinitely many scales that are not necessarily well separated. Specifically, we study scalar and vectorial divergence-form elliptic PDEs with such coefficients. We establish two finite-dimensional approximations to the solutions of these problems, which we refer to as finite-dimensional homogenization approximations. We introduce a flux norm and establish an error estimate in this norm with an explicit and optimal error constant independent of the contrast and regularity of the coefficients. A proper generalization of the notion of cell problems is the key technical issue in our consideration. The results described above are obtained as an application of the transfer property as well as a new class of elliptic inequalities which we conjecture. These inequalities play the same role in our approach as the div-curl lemma in classical homogenization. They are closely related to the issue of H^2 regularity of solutions of elliptic PDEs in non-divergence form with non-smooth coefficients.
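The classical setting in the opening sentences has a compact 1-D illustration (a textbook example, not from the paper; the coefficient, grid sizes, and sample counts are arbitrary choices): for -(a(x/ε) u')' = 1 on (0,1) with u(0) = u(1) = 0, the correct effective coefficient is the harmonic mean of a, not the arithmetic mean.

```python
import numpy as np

def solve_dirichlet(a_half, f, h):
    """Finite-difference solve of -(a u')' = f on (0,1), u(0)=u(1)=0,
    with a sampled at staggered midpoints: a_half[i] = a(x_{i+1/2})."""
    n = len(f)                      # number of interior unknowns
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = a_half[i] + a_half[i + 1]
        if i > 0:
            A[i, i - 1] = -a_half[i]
        if i < n - 1:
            A[i, i + 1] = -a_half[i + 1]
    return np.linalg.solve(A / h**2, f)

eps, n = 0.05, 999
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)                 # interior nodes
xh = np.linspace(h / 2, 1.0 - h / 2, n + 1)    # staggered midpoints
a = lambda y: 2.0 + np.sin(2.0 * np.pi * y)    # oscillatory coefficient

u_eps = solve_dirichlet(a(xh / eps), np.ones(n), h)

# Candidate effective coefficients: harmonic vs (incorrect) arithmetic mean.
y = np.linspace(0.0, 1.0, 10_000, endpoint=False)
a_harm = 1.0 / np.mean(1.0 / a(y))    # = sqrt(3) for this coefficient
a_arith = np.mean(a(y))               # = 2
u_star = lambda ab: x * (1.0 - x) / (2.0 * ab)  # homogenized solution for f=1

err_harm = np.max(np.abs(u_eps - u_star(a_harm)))
err_arith = np.max(np.abs(u_eps - u_star(a_arith)))
assert err_harm < err_arith   # the harmonic mean is the correct effective a
```

The paper's point is what happens beyond this classical picture: without periodicity or scale separation there is no single effective coefficient, and the flux-norm framework supplies the finite-dimensional approximation instead.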

public 01:29:58

Courtney Paquette : Algorithms for stochastic nonconvex and nonsmooth optimization

  -   Applied Math and Analysis ( 134 Views )

Nonsmooth and nonconvex loss functions are often used to model physical phenomena, provide robustness, and improve stability. While convergence guarantees in the smooth, convex settings are well-documented, algorithms for solving large-scale nonsmooth and nonconvex problems remain in their infancy.

I will begin by isolating a class of nonsmooth and nonconvex functions that can be used to model a variety of statistical and signal processing tasks. Standard statistical assumptions on such inverse problems often endow the optimization formulation with an appealing regularity condition: the objective grows sharply away from the solution set. We show that under such regularity, a variety of simple algorithms, such as subgradient and Gauss-Newton-type methods, converge rapidly when initialized within constant relative error of the optimal solution. We illustrate the theory and algorithms on the real phase retrieval problem, and survey a number of other applications, including blind deconvolution and covariance matrix estimation.
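As an illustration of the sharp-growth regime (a sketch with assumed problem sizes; the Polyak step used below is one standard subgradient rule when the minimal value is known, and is not claimed to be the talk's exact algorithm): the robust phase-retrieval loss is nonsmooth and nonconvex, yet subgradient steps from an initialization with small relative error make rapid progress.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 80
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = (A @ x_true)**2                  # phaseless (sign-free) measurements

def f(x):
    """Robust (ell_1) phase-retrieval loss: sharp away from +-x_true."""
    return np.mean(np.abs((A @ x)**2 - b))

def subgrad(x):
    """One subgradient of f at x."""
    r = (A @ x)**2 - b
    return A.T @ (np.sign(r) * 2.0 * (A @ x)) / m

# Polyak subgradient steps (the minimal value 0 is known), initialized
# within small relative error of the solution, as the theory requires.
x = x_true + 0.1 * rng.standard_normal(n)
f0 = f(x)
for _ in range(300):
    g = subgrad(x)
    gn = g @ g
    if gn == 0:
        break
    x = x - (f(x) / gn) * g

assert f(x) < 0.1 * f0   # rapid convergence under the sharpness condition
```

Started far from the solution set, the same iteration can stall at bad stationary points, which is exactly why the local initialization hypothesis appears in the theory.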

One of the main advantages of smooth optimization over its nonsmooth counterpart is the potential to use a line search for improved numerical performance. A long-standing open question is how to design a line-search procedure in the stochastic setting. In the second part of the talk, I will present a practical line-search method for smooth stochastic optimization that has rigorous convergence guarantees and requires only knowable quantities for implementation. While traditional line-search methods rely on exact computations of the gradient and function values, our method assumes that these values are available only up to some dynamically adjusted accuracy that holds with a sufficiently high, but fixed, probability. We show that the expected number of iterations to reach an approximate stationary point matches the worst-case efficiency of typical first-order methods, while for convex and strongly convex objectives it achieves the rates of deterministic gradient descent.
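For reference, the deterministic building block being generalized is classical Armijo backtracking; the stochastic method of the talk replaces the exact f and gradient below with probabilistically accurate estimates whose accuracy is adjusted on the fly, which this sketch does not implement.

```python
import numpy as np

def armijo_backtrack(f, grad_f, x, t0=1.0, beta=0.5, c=1e-4):
    """Classical Armijo backtracking: shrink the step until the
    sufficient-decrease condition f(x - t g) <= f(x) - c t ||g||^2
    holds, then take the step."""
    g = grad_f(x)
    t = t0
    while f(x - t * g) > f(x) - c * t * (g @ g):
        t *= beta
    return x - t * g, t

# Minimize a mildly ill-conditioned quadratic with line-searched descent.
Q = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ Q @ x
grad_f = lambda x: Q @ x

x = np.array([1.0, 1.0])
for _ in range(200):
    x, _ = armijo_backtrack(f, grad_f, x)

assert np.linalg.norm(x) < 1e-6   # converged to the minimizer at 0
```

The difficulty in the stochastic setting is that the sufficient-decrease test above compares two function values that are no longer known exactly, so a naive port of this loop can accept or reject steps incorrectly; controlling that failure probability is the subject of the talk.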