public 01:34:43

Bruce Donald : Some mathematical and computational challenges arising in structural molecular biology

  -   Applied Math and Analysis ( 304 Views )

Computational protein design is a transformative field with exciting prospects for advancing both basic science and translational medical research. New algorithms blend discrete and continuous mathematics to address the challenges of creating designer proteins. I will discuss recent progress in this area and some interesting open problems. I will motivate this talk by discussing how, by using continuous geometric representations within a discrete optimization framework, broadly neutralizing anti-HIV-1 antibodies were computationally designed that are now being tested in humans; the designed antibodies are currently in eight clinical trials (see https://clinicaltrials.gov/ct2/results?cond=&term=VRC07&cntry=&state=&city=&dist= ), one of which is Phase 2a (NCT03721510). These continuous representations model the flexibility and dynamics of biological macromolecules, which are an important structural determinant of function. However, reconstructing biomolecular dynamics from experimental observables requires determining a conformational probability distribution. These distributions are not fully constrained by the limited information available from experiments, making the problem ill-posed in the sense of Hadamard: no unique solution exists, and multiple (or even infinitely many) distributions may be consistent with the data. The problem must therefore be regularized by making (hopefully reasonable) assumptions. I will present new ways to both represent and visualize correlated inter-domain protein motions (see Figure). We use Bingham distributions, based on a quaternion fit to circular moments of a physics-based quadratic form. To find the optimal distribution, we designed an efficient, provable branch-and-bound algorithm that exploits the structure of analytical solutions to the trigonometric moment problem.
Hence, continuous conformational PDFs can be determined directly from NMR measurements. The representation works especially well for multi-domain systems with broad conformational distributions. Ultimately, this method has parallels to other branches of applied mathematics that balance discrete and continuous representations, including physical geometric algorithms, robotics, computer vision, and robust optimization. I will advocate for using continuous distributions for protein modeling, and describe future work and open problems.
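To make the Bingham representation concrete, the Python sketch below evaluates an unnormalized Bingham density over unit quaternions and estimates its normalizing constant by Monte Carlo on the 3-sphere. The function names and the Monte Carlo normalization are illustrative assumptions; this is not the talk's branch-and-bound algorithm.

```python
import numpy as np

def bingham_unnorm(q, A):
    """Unnormalized Bingham density exp(q^T A q) for a unit quaternion q.

    The density is antipodally symmetric (q and -q give the same value),
    matching the two-to-one cover of rotations by unit quaternions.
    """
    q = np.asarray(q, dtype=float)
    q = q / np.linalg.norm(q)
    return float(np.exp(q @ A @ q))

def bingham_normconst_mc(A, n=200_000, seed=0):
    """Monte Carlo estimate of the normalizing constant over the unit 3-sphere."""
    rng = np.random.default_rng(seed)
    qs = rng.normal(size=(n, 4))
    qs /= np.linalg.norm(qs, axis=1, keepdims=True)  # uniform samples on S^3
    area_s3 = 2 * np.pi**2                           # surface area of the unit 3-sphere
    return area_s3 * np.mean(np.exp(np.einsum('ni,ij,nj->n', qs, A, qs)))
```

With A = 0 the density is uniform and the normalizing constant reduces to the surface area of S^3, which gives a quick sanity check on the sampler.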

public 01:09:47

Casey Rodriguez : The Radiative Uniqueness Conjecture for Bubbling Wave Maps

  -   Applied Math and Analysis ( 191 Views )

One of the most fundamental questions in partial differential equations is that of regularity and the possible breakdown of solutions. We will discuss this question for solutions to a canonical example of a geometric wave equation: energy critical wave maps. Breakthrough works of Krieger-Schlag-Tataru, Rodnianski-Sterbenz and Raphaël-Rodnianski produced examples of wave maps that develop singularities in finite time. These solutions break down by concentrating energy at a point in space (via bubbling a harmonic map) but have a regular limit, away from the singular point, as time approaches the final time of existence. The regular limit is referred to as the radiation. This mechanism of breakdown occurs in many other PDE including energy critical wave equations, Schrödinger maps and Yang-Mills equations. A basic question is the following: can we give a precise description of all bubbling singularities for wave maps with the goal of finding the natural unique continuation of such solutions past the singularity? In this talk, we will discuss recent work (joint with J. Jendrej and A. Lawrie) which is the first to directly and explicitly connect the radiative component to the bubbling dynamics by constructing and classifying bubbling solutions with a simple form of prescribed radiation. Our results serve as an important first step in formulating and proving the following Radiative Uniqueness Conjecture for a large class of wave maps: every bubbling solution is uniquely characterized by its radiation, and thus every bubbling solution can be uniquely continued past blow-up time while conserving energy.
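For context, the energy-critical wave map equation, written here in its standard textbook form for maps u : R^{1+2} -> S^2 (not taken from the talk itself), reads:

```latex
% Wave maps $u : \mathbb{R}^{1+2} \to \mathbb{S}^2 \subset \mathbb{R}^3$, $|u| \equiv 1$:
\partial_t^2 u - \Delta u = \left(|\nabla u|^2 - |\partial_t u|^2\right) u,
% with conserved energy, scale-invariant in spatial dimension 2
% (hence "energy critical"):
E(u) = \frac{1}{2}\int_{\mathbb{R}^2} \left(|\partial_t u|^2 + |\nabla u|^2\right)\,dx .
```

The right-hand side is the Lagrange multiplier enforcing the constraint |u| = 1, obtained from u . (d_t^2 u - Delta u) = |nabla u|^2 - |d_t u|^2.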

public 01:34:43

Ruiwen Shu : Flocking hydrodynamics with external potentials

  -   Applied Math and Analysis ( 128 Views )

We study the large-time behavior of a hydrodynamic model which describes the collective behavior of a continuum of agents, driven by pairwise alignment interactions with additional external potential forcing. The external force tends to compete with alignment, which makes the large-time behavior very different from the original Cucker-Smale (CS) alignment model, and far more interesting. Here we focus on uniformly convex potentials. In the particular case of \emph{quadratic} potentials, we are able to treat a large class of admissible interaction kernels, $\phi(r) \gtrsim (1+r^2)^{-\beta}$ with `thin' tails $\beta \leq 1$ --- thinner than the usual `fat-tail' kernels encountered in CS flocking, $\beta\leq\nicefrac{1}{2}$: we discover unconditional flocking with exponential convergence of velocities \emph{and} positions towards a Dirac mass traveling as a harmonic oscillator. For general convex potentials, we impose a necessary stability condition, requiring a large enough alignment kernel to avoid crowd dispersion. We prove, by hypocoercivity arguments, that both the velocities \emph{and} positions of smooth solutions must flock. We also prove the existence of global smooth solutions for one and two space dimensions, subject to critical thresholds in initial configuration space. It is interesting to observe that global smoothness can be guaranteed for sub-critical initial data, independently of a priori knowledge of the large-time flocking behavior.
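The discrete-agent analogue of this model is easy to simulate. The Python sketch below (an illustrative toy, not the talk's hydrodynamic analysis) integrates a one-dimensional Cucker-Smale system with a thin-tail kernel and a quadratic confining potential; velocities and positions of the agents contract toward a common harmonic oscillation, mirroring the flocking result.

```python
import numpy as np

def simulate_cs(N=20, T=50.0, dt=0.01, beta=0.25, seed=0):
    """Semi-implicit Euler simulation of a 1D Cucker-Smale particle system
    with quadratic external potential U(x) = |x|^2 / 2 (force -x)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=N)                    # initial positions
    v = rng.normal(size=N)                    # initial velocities
    phi = lambda r: (1.0 + r**2) ** (-beta)   # 'thin-tail' alignment kernel
    for _ in range(int(T / dt)):
        dx = x[None, :] - x[:, None]
        # mean-field alignment: average of phi(|x_j - x_i|) (v_j - v_i) over j
        align = (phi(np.abs(dx)) * (v[None, :] - v[:, None])).mean(axis=1)
        v = v + dt * (align - x)              # alignment + harmonic confinement
        x = x + dt * v
    return x, v
```

After time T = 50 the spread of both positions and velocities across agents is essentially zero, while the center of mass keeps oscillating harmonically.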

public 01:34:52

Lek-Heng Lim : Multilinear Algebra and Its Applications

  -   Applied Math and Analysis ( 119 Views )

In mathematics, the study of multilinear algebra is largely limited to properties of a whole space of tensors --- tensor products of k vector spaces, modules, vector bundles, Hilbert spaces, operator algebras, etc. There is also a tendency to take an abstract coordinate-free approach. In most applications, instead of a whole space of tensors, we are often given just a single tensor from that space; and it usually takes the form of a hypermatrix, i.e.\ a k-dimensional array of numerical values that represents the tensor with respect to some coordinates/bases determined by the units and nature of measurements. How, then, could one analyze this single tensor? If the order of the tensor k = 2, then the hypermatrix is just a matrix and we have access to a rich collection of tools: rank, determinant, norms, singular values, eigenvalues, condition number, etc. This talk is about the case when k > 2. We will see that one may often define higher-order analogues of common matrix notions rather naturally: tensor ranks, hyperdeterminants, tensor norms (Hilbert-Schmidt, spectral, Schatten, Ky Fan, etc), tensor eigenvalues and singular values, etc. We will discuss the utility as well as difficulties of various tensorial analogues of matrix problems. In particular we shall look at how tensors arise in a variety of applications including: computational complexity, control engineering, mathematical biology, neuroimaging, quantum computing, signal processing, spectroscopy, and statistics.
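One such higher-order analogue can be computed directly. The sketch below (an illustrative power-type iteration in the spirit of the higher-order power method; names and the convergence schedule are assumptions, not from the talk) looks for a Z-eigenpair of a symmetric order-3 hypermatrix, the k = 3 counterpart of the matrix eigenproblem:

```python
import numpy as np

def z_eigenpair(T, iters=200, seed=0):
    """Power-type iteration for a Z-eigenpair of a symmetric order-3 tensor T:
    seeks (lam, x) with T.x^2 = lam * x and |x| = 1."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)  # the contraction T.x^2
        x = y / np.linalg.norm(y)             # renormalize, like matrix power iteration
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return lam, x
```

On a rank-one symmetric tensor T = lam * a (x) a (x) a the iteration recovers a and lam exactly, just as matrix power iteration recovers the dominant eigenvector; unlike the matrix case, convergence for general tensors is not guaranteed without modifications such as shifts.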

public 01:14:47

Cynthia Rudin : 1) Regulating Greed Over Time: An Important Lesson For Practical Recommender Systems and 2) Prediction Uncertainty and Optimal Experimental Design for Learning Dynamical Systems

  -   Applied Math and Analysis ( 113 Views )

I will present work from these two papers: 1) "Regulating Greed Over Time" (Stefano Traca and Cynthia Rudin, 2015), finalist for the 2015 IBM Service Science Best Student Paper Award; and 2) "Prediction Uncertainty and Optimal Experimental Design for Learning Dynamical Systems" (Benjamin Letham, Portia A. Letham, Cynthia Rudin, and Edward Browne), Chaos, 2016.
There is an important aspect of practical recommender systems that we noticed while competing in the ICML Exploration-Exploitation 3 data mining competition. The goal of the competition was to build a better recommender system for Yahoo!'s Front Page, which provides personalized news article recommendations. The main strategy we used was to carefully control the balance between exploiting good articles and exploring new ones in the multi-armed bandit setting. This strategy was based on our observation that there were clear trends over time in the click-through rates of the articles. At certain times, we should explore new articles more often, and at other times, we should reduce exploration and just show the best articles available. This led to dramatic performance improvements.
As it turns out, the observation we made in the Yahoo! data is in fact pervasive in settings where recommender systems are currently used. This observation is simply that certain times are more important than others for correct recommendations to be made. This affects the way exploration and exploitation (greed) should change in our algorithms over time. We thus formalize a setting where regulating greed over time can be provably beneficial. This is captured through regret bounds and leads to principled algorithms. The end result is a framework for bandit-style recommender systems in which certain times are more important than others for making a correct decision.
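The idea of regulating greed over time can be illustrated with a minimal epsilon-greedy bandit whose exploration rate follows a time schedule. This sketch is a toy stand-in for the principled algorithms described above, not the paper's method; the schedule, arm means, and function names are all assumptions.

```python
import numpy as np

def time_modulated_eps_greedy(means, horizon, eps_schedule, seed=0):
    """Epsilon-greedy on a Bernoulli bandit where the exploration probability
    eps_schedule(t) varies over time ('regulating greed over time')."""
    rng = np.random.default_rng(seed)
    K = len(means)
    counts = np.zeros(K)
    values = np.zeros(K)                          # running mean reward per arm
    rewards = []
    for t in range(horizon):
        if rng.random() < eps_schedule(t):
            arm = int(rng.integers(K))            # explore a random arm
        else:
            arm = int(np.argmax(values))          # exploit current best estimate
        r = float(rng.random() < means[arm])      # Bernoulli reward
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
        rewards.append(r)
    return np.array(rewards), counts
```

A schedule such as `lambda t: 0.5 if t < 200 else 0.05` explores heavily at first (when estimates are poor and the stakes of a wrong recommendation are low) and then becomes greedy, concentrating pulls on the best arm.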
If time permits I will discuss work on measuring uncertainty in parameter estimation for dynamical systems. I will present "prediction deviation," a new metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provide a good fit for the observed data, yet have maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty.
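For the simplest one-parameter model the prediction-deviation idea can be worked out in closed form. The sketch below (a toy analogue under the stated assumptions, not the paper's general optimization) computes, for the model y = a*x, the widest gap in predictions at a new point x0 between any two slopes whose squared loss is within tau of the best fit:

```python
import numpy as np

def prediction_deviation_linear(x, y, x0, tau):
    """Toy 'prediction deviation' for y = a*x with squared loss.

    Since L(a) = L* + sxx * (a - a_hat)^2, the set of slopes with
    L(a) <= L* + tau is the interval a_hat +/- sqrt(tau / sxx), and the
    maximal prediction gap at x0 is its width times |x0|.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.sum(x * x)
    a_hat = np.sum(x * y) / sxx               # least-squares slope
    half_width = np.sqrt(tau / sxx)           # feasible half-interval of slopes
    return 2.0 * half_width * abs(x0), (a_hat - half_width, a_hat + half_width)
```

Adding a new observation increases sxx and shrinks the deviation, which is exactly the quantity an optimal experimental design would target.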