
Jill Pipher : Geometric discrepancy theory: directional discrepancy in 2-D

  -   Applied Math and Analysis ( 91 Views )

Discrepancy theory originated with some apparently simple questions about sequences of numbers. The discrepancy of an infinite sequence is a quantitative measure of how far it is from being uniformly distributed. Precisely, an infinite sequence \{a_1, a_2, \ldots\} is said to be uniformly distributed in [0,1] if, for every subinterval [s,t] \subseteq [0,1],
\lim_{n\to\infty} \frac{1}{n} \big| \{a_1, a_2, \ldots, a_n\} \cap [s,t] \big| = t - s.
If a sequence \{a_k\} is uniformly distributed, then it is also the case that for every (Riemann) integrable function f on [0,1],
\lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^{n} f(a_k) = \int_0^1 f(x)\,dx.
Thus, uniformly distributed sequences provide good numerical schemes for approximating integrals. For example, if \alpha is any irrational number in [0,1], then the sequence of fractional parts a_k := \{k\alpha\} is uniformly distributed. Classical Fourier analysis enters here, in the form of Weyl's criterion. The discrepancy of a sequence with respect to its first n entries is
D(\{a_k\}, n) := \sup_{0 \le s < t \le 1} \big| |\{a_1, a_2, \ldots, a_n\} \cap [s,t]| - n(t-s) \big|.
If a sequence \{a_k\} is uniformly distributed, then D(\{a_k\}, n)/n \to 0 as n \to \infty. Van der Corput posed the following question: does there exist a sequence which is so uniformly distributed that D(\{a_k\}, n) is bounded by a constant for all n? In 1945, van Aardenne-Ehrenfest proved that the answer is no: she established a lower bound, valid for every sequence, showing that the discrepancy cannot remain bounded. Later, Roth showed that the discrepancy problem for sequences has an equivalent geometric formulation in terms of a notion of discrepancy in two dimensions. The problem in two dimensions, which is the focus of this talk, is this: given a collection of N points in the unit square [0,1]^2, how can we quantify the idea that it is uniformly distributed in the square, and which collections of points achieve the lowest possible discrepancy? There are many reasons to be interested in discrepancy theory, both pure and applied: sets of low discrepancy figure prominently in numerical applications, from engineering to finance. This talk focuses primarily on theoretical issues involving measuring discrepancy in two and higher dimensions.
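
To make the definitions above concrete, here is a small numerical sketch (my own illustration, not material from the talk): it generates the Weyl sequence a_k = \{k\alpha\} with \alpha the golden-ratio conjugate, approximates D(\{a_k\}, n) by a brute-force scan over a grid of intervals [s,t] rather than the exact supremum, and uses the sample average to approximate an integral, as the equidistribution statement above guarantees. The function names and the grid resolution are arbitrary choices made for this sketch.

import numpy as np

def weyl_sequence(n, alpha=(np.sqrt(5) - 1) / 2):
    # first n fractional parts a_k = {k*alpha}; alpha is irrational (golden-ratio conjugate)
    return np.modf(alpha * np.arange(1, n + 1))[0]

def interval_discrepancy(a, grid=2000):
    # brute-force approximation of D({a_k}, n): scan (s, t) over a uniform grid
    # instead of taking the exact supremum over all intervals [s, t]
    a = np.sort(a)
    n = len(a)
    ts = np.linspace(0.0, 1.0, grid + 1)
    best = 0.0
    for i, s in enumerate(ts):
        counts = np.searchsorted(a, ts[i:], side="right") - np.searchsorted(a, s, side="left")
        best = max(best, np.max(np.abs(counts - n * (ts[i:] - s))))
    return best

n = 10_000
a = weyl_sequence(n)
print("D(a, n) / n   =", interval_discrepancy(a) / n)   # tends to 0 as n grows
print("mean of a_k^2 =", np.mean(a ** 2))                # approximates the integral of x^2 over [0,1], i.e. 1/3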


Lek-Heng Lim : Multilinear Algebra and Its Applications

  -   Applied Math and Analysis ( 110 Views )

In mathematics, the study of multilinear algebra is largely limited to properties of a whole space of tensors --- tensor products of k vector spaces, modules, vector bundles, Hilbert spaces, operator algebras, etc. There is also a tendency to take an abstract coordinate-free approach. In most applications, instead of a whole space of tensors, we are often given just a single tensor from that space; and it usually takes the form of a hypermatrix, i.e., a k-dimensional array of numerical values that represents the tensor with respect to some coordinates/bases determined by the units and nature of measurements. How, then, can one analyze such a single tensor? If the order of the tensor is k = 2, then the hypermatrix is just a matrix and we have access to a rich collection of tools: rank, determinant, norms, singular values, eigenvalues, condition number, etc. This talk is about the case when k > 2. We will see that one may often define higher-order analogues of common matrix notions rather naturally: tensor ranks, hyperdeterminants, tensor norms (Hilbert-Schmidt, spectral, Schatten, Ky Fan, etc.), tensor eigenvalues and singular values, etc. We will discuss the utility as well as the difficulties of various tensorial analogues of matrix problems. In particular, we shall look at how tensors arise in a variety of applications including: computational complexity, control engineering, mathematical biology, neuroimaging, quantum computing, signal processing, spectroscopy, and statistics.
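
As a small illustration of one of the tensorial analogues mentioned above, the sketch below runs a standard higher-order power iteration (alternating least squares) to compute a rank-one approximation sigma * (u o v o w) of a 3-way hypermatrix; at convergence sigma plays the role of the leading tensor singular value (spectral norm). This is a generic textbook scheme written for illustration, not code from the talk; the function name, iteration count, and test tensor are arbitrary, and the iteration only guarantees a stationary point, not the global optimum.

import numpy as np

def rank_one_approx(T, iters=200, seed=0):
    # higher-order power iteration (ALS) for a rank-one approximation of a 3-way hypermatrix T
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(d) for d in T.shape)
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
    for _ in range(iters):
        u = np.einsum("ijk,j,k->i", T, v, w); u /= np.linalg.norm(u)   # update each factor in turn,
        v = np.einsum("ijk,i,k->j", T, u, w); v /= np.linalg.norm(v)   # holding the other two fixed
        w = np.einsum("ijk,i,j->k", T, u, v); w /= np.linalg.norm(w)
    sigma = np.einsum("ijk,i,j,k->", T, u, v, w)   # value of the multilinear form at (u, v, w)
    return sigma, u, v, w

T = np.random.default_rng(1).standard_normal((4, 5, 6))
sigma, u, v, w = rank_one_approx(T)
print("leading tensor singular value (local optimum):", sigma)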


Yossi Farjoun : Solving Conservation Law and Balance Equations by Particle Management

  -   Applied Math and Analysis ( 101 Views )

Conservation equations are at the heart of many interesting and important problems. Examples come from physics, chemistry, biology, traffic, and many other fields. Analytically, hyperbolic equations have a beautiful structure due to the existence of characteristics. These provide the possibility of transforming a conservation PDE into a system of ODEs and thus greatly reducing the computational effort required to solve such problems. However, even in one dimension, one encounters problems after a short time.

The most obvious difficulty that needs to be dealt with has to do with the creation of shocks, or in other words, the crossing of characteristics. With a particle-based method one would like to avoid a situation in which one particle overtakes a neighboring one. However, since shocks are inherent to many hyperbolic equations and relevant to the problems one would like to solve, it would be better not to "smooth away" the shock but rather to find a good representation of it and a good solution for the offending particles.
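
The following toy sketch (my own illustration, not the speaker's conservative particle scheme) shows both ingredients just described for Burgers' equation u_t + (u^2/2)_x = 0: particles carrying fixed values move along characteristics, and a shock announces itself when one particle overtakes its neighbor, which is exactly the situation a particle-management step must resolve. The initial data, particle count, and output times are arbitrary choices.

import numpy as np

# particles carry fixed values u_i and move along characteristics x_i'(t) = f'(u_i) = u_i
x0 = np.linspace(0.0, 2 * np.pi, 50)   # initial particle positions
u0 = np.sin(x0)                        # values carried by the particles
for t in (0.5, 1.0, 1.5):
    x = x0 + t * u0                    # exact solution of the characteristic ODEs
    crossed = np.any(np.diff(x) < 0)   # a particle has overtaken its neighbor: a shock has formed
    print(f"t = {t}: particles crossed (shock formed)? {crossed}")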

In this talk I will present a new particle-based method for solving (one-dimensional, scalar) conservation law equations. The guiding principle of the method is the conservation property of the underlying equation. The basic method is conservative, entropy-decreasing, variation-diminishing, and exact away from shocks. A recent extension allows solving equations with a source term and also provides "exact" solutions to the PDE. The method compares favorably to other benchmark solvers, for example CLAWPACK, and requires less computational power to reach the same resolution. A few examples will be shown to illustrate the method with its various extensions. Because the method is currently limited to one-dimensional scalar equations, the main application we are looking at is traffic flow on a large network. Though we hope to extend the method to systems or to higher dimensions (each of these extensions has its own set of difficulties), I would be happy to discuss further possible applications or suggestions for extensions.


Laura Miller : Scaling effects in heart development: Changes in bulk flow patterns and the resulting forces

  -   Applied Math and Analysis ( 92 Views )

When the heart tube first forms, the Reynolds number describing intracardial flow is only about 0.02. During development, the Reynolds number increases to roughly 1000. The heart continues to beat and drive the fluid during its entire development, despite significant changes in fluid dynamics. Early in development, the atrium and ventricle bulge out from the heart tube, and valves begin to form through the expansion of the endocardial cushions. As a result of changes in geometry, conduction velocities, and material properties of the heart wall, the fluid dynamics and resulting spatial patterns of shear stress and transmural pressure change dramatically. Recent work suggests that these transitions are significant because fluid forces acting on the cardiac walls, as well as the activity of myocardial cells which drive the flow, are necessary for correct chamber and valve morphogenesis.
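
For orientation, the Reynolds number here is Re = U L / \nu, with characteristic flow speed U, characteristic length L, and kinematic viscosity \nu. The snippet below recovers the two orders of magnitude quoted above from assumed characteristic scales; the numerical values of U, L, and \nu are illustrative guesses chosen only to reproduce those orders of magnitude, not measurements from the talk.

# Reynolds number Re = U * L / nu at two developmental stages (illustrative values only)
def reynolds(U, L, nu):
    # U: characteristic speed [m/s], L: characteristic length [m], nu: kinematic viscosity [m^2/s]
    return U * L / nu

nu_blood = 3.0e-6                                                     # assumed kinematic viscosity of blood
print("early heart tube:", reynolds(U=1e-3, L=5e-5, nu=nu_blood))     # ~0.02
print("mature heart:    ", reynolds(U=1.0,  L=3e-3, nu=nu_blood))     # ~1000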

In this presentation, computational fluid dynamics was used to explore how spatial distributions of the normal forces and shear stresses acting on the heart wall change as the endocardial cushions grow, as the Reynolds number increases, and as the cardiac wall increases in stiffness. The immersed boundary method was used to simulate the fluid-structure interaction between the cardiac wall and the blood in a simplified model of a two-dimensional heart. Numerical results are validated against simplified physical models. We find that the presence of chamber vortices is highly dependent upon cardiac cushion height and Reynolds number. Increasing cushion height also drastically increases the shear stress acting on the cushions and the normal forces acting on the chamber walls.


Leonid Berlyand : Flux norm approach to finite-dimensional homogenization approximation with non-separated scales and high contrast

  -   Applied Math and Analysis ( 152 Views )

Classical homogenization theory deals with mathematical models of strongly inhomogeneous media described by PDEs with rapidly oscillating coefficients of the form A(x/\epsilon), \epsilon \to 0. The goal is to approximate this problem by a homogenized (simpler) PDE with slowly varying coefficients that do not depend on the small parameter \epsilon. The original problem has two scales: fine O(\epsilon) and coarse O(1), whereas the homogenized problem has only a coarse scale. The homogenization of PDEs with periodic or ergodic coefficients and well-separated scales is now well understood. In joint work with H. Owhadi (Caltech) we consider the most general case of arbitrary L^\infty coefficients, which may contain infinitely many scales that are not necessarily well separated. Specifically, we study scalar and vectorial divergence-form elliptic PDEs with such coefficients. We establish two finite-dimensional approximations to the solutions of these problems, which we refer to as finite-dimensional homogenization approximations. We introduce a flux norm and establish an error estimate in this norm with an explicit and optimal error constant independent of the contrast and regularity of the coefficients. A proper generalization of the notion of cell problems is the key technical issue in our approach. The results described above are obtained as an application of the transfer property as well as a new class of elliptic inequalities which we conjecture. These inequalities play the same role in our approach as the div-curl lemma in classical homogenization. They are closely related to the issue of H^2 regularity of solutions of non-divergence-form elliptic PDEs with nonsmooth coefficients.
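
For contrast with the general setting of the talk, the classical periodic case it generalizes can be illustrated in one dimension, where the homogenized coefficient of -(a(x/\epsilon) u')' = f is simply the harmonic mean of a over one period. The sketch below (an illustration of the classical theory, not of the flux-norm approach) compares the oscillatory and homogenized solutions on the coarse scale; the finite-volume discretization, coefficient, and parameter values are arbitrary choices made for this example.

import numpy as np

def solve_dirichlet(a_face, f, h):
    # finite-volume solve of -(a u')' = f on (0,1) with u(0) = u(1) = 0,
    # where the coefficient a is sampled at the cell interfaces
    n = len(f)
    main = (a_face[:-1] + a_face[1:]) / h**2
    off = -a_face[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

n, eps = 2000, 0.01                                 # grid size and oscillation period (arbitrary)
h = 1.0 / (n + 1)
x_face = np.linspace(h / 2, 1 - h / 2, n + 1)       # cell interfaces
a_eps = 2.0 + np.sin(2 * np.pi * x_face / eps)      # rapidly oscillating coefficient a(x/eps)
f = np.ones(n)

u_eps = solve_dirichlet(a_eps, f, h)
y = np.linspace(0.0, 1.0, 100_001)
a_hom = 1.0 / np.mean(1.0 / (2.0 + np.sin(2 * np.pi * y)))    # harmonic mean over one period (= sqrt(3))
u_hom = solve_dirichlet(np.full(n + 1, a_hom), f, h)
print("max |u_eps - u_hom| =", np.abs(u_eps - u_hom).max())   # small: the solutions agree up to O(eps)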