We will start by defining the Jones polynomial of a knot, and discussing some of its applications. We will then explain a refinement of the Jones polynomial, called Khovanov homology, and give some applications of this refinement. We will conclude by discussing a further refinement, called a Khovanov homotopy type; this part is joint work with Sucharit Sarkar.
In his 1932 paper, Eugene Wigner introduced the now famous Wigner function in order to compute quantum corrections to classical equilibrium distributions. We show how to extend this program and compute semiclassical approximations to quantum mechanical equilibrium distributions for slow, semiclassical degrees of freedom coupled to fast, quantum mechanical degrees of freedom. The main examples are molecules and electrons in crystalline solids; we will focus on the thermodynamics of the Hofstadter model as an application of the general results. The semiclassical formulas contain, in addition to quantum corrections similar to those of Wigner, also modifications of the classical Hamiltonian system used in the approximation: the classical energy and the Liouville measure on classical phase space turn out to have non-trivial expansions in the semiclassical parameter. This talk is based on joint work with Stefan Teufel.
In the 1970s, inspired by the work of Saito and Shintani, Langlands gave a definitive treatment of base change for automorphic representations of the general linear group in two variables along prime degree extensions of number fields. To give some idea of the depth and utility of his work, one need only remark that some consequences of it were crucial in Wiles' proof of Fermat's last theorem. In this talk we will report on work in progress on base change for automorphic representations of GL(2) along nonsolvable Galois extensions of number fields. We will attempt to explain this assuming only a little algebraic number theory.
My research focuses on three areas of evolutionary biology: the structure of viral populations, the evolution of drug resistance, and phylogenetics. Knowledge of the diversity of viral populations is important for understanding disease progression, vaccine design, and drug resistance, yet this diversity is poorly understood. New technologies (pyrosequencing) allow us to read short, error-prone DNA sequences from an entire population at once. I will show how to assemble the reads into genomes using graph theory, allowing us to determine the population structure. Next, I will describe a new class of graphical models inspired by poset theory that describe the accumulation of (genetic) events with constraints on the order of occurrence. Applications of these models include calculating the risk of drug resistance in HIV and understanding cancer progression. Finally, I'll describe a polyhedral method for determining the sensitivity of phylogenetic algorithms to changes in the parameters. We will analyze several datasets where small changes in parameters lead to completely different trees and see how discrete geometry can be used to average out the uncertainty in parameter choice.
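As a toy illustration of the graph-theoretic assembly idea (a de Bruijn-style sketch, not the speaker's actual pipeline; reads, genome, and the greedy walk are all invented for this example):

```python
from collections import defaultdict

def debruijn_edges(reads, k):
    """Each k-mer in a read adds an edge from its (k-1)-mer prefix to its suffix."""
    edges = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges[kmer[:-1]].add(kmer[1:])
    return edges

def greedy_contig(edges, start):
    """Spell a contig by walking the graph from `start` along unique out-edges."""
    contig, node = start, start
    while len(edges.get(node, ())) == 1:
        nxt = next(iter(edges[node]))
        edges[node] = set()        # consume the edge so the walk terminates
        contig += nxt[-1]
        node = nxt
    return contig

# Overlapping toy reads that tile the genome ACGTTGCA
reads = ["ACGTTG", "GTTGCA"]
contig = greedy_contig(debruijn_edges(reads, k=4), "ACG")
print(contig)   # ACGTTGCA
```

Real viral data requires handling sequencing errors and repeats, which is where the population-structure analysis in the talk goes well beyond this sketch.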
The probabilistic method has redefined functional analysis in high dimensions. Random spaces and operators are to analysis what random graphs are to combinatorics. They provide a wealth of examples that are otherwise hard to construct, suggest what situations we should view as typical, and they have far-reaching applications, most notably in convex geometry and computer science. With the increase of our knowledge about random structures we begin to wonder about their universality. Is there a limiting picture as the dimension increases to infinity? Is this picture unique and independent of the distribution? What are the deterministic implications of probabilistic methods? This talk will survey progress on some of these problems, in particular a proof of the conjecture of von Neumann and Goldstine on random operators and connections to the Littlewood-Offord problem in additive combinatorics.
For Markov random fields, temporal mixing, the time it takes for the Glauber dynamics to approach its stationary distribution, is closely related to phase transitions in the spatial mixing properties of the measure, such as uniqueness and the reconstruction problem. Such questions connect ideas from probability, statistical physics and theoretical computer science. I will survey some recent progress in understanding the mixing time of the Glauber dynamics as well as related results on spatial mixing. Partially based on joint work with Elchanan Mossel.
The Heegaard Floer package provides a robust tool for studying contact 3-manifolds and their subspaces. Within the sphere of Heegaard Floer homology, several invariants of Legendrian and transverse knots have been defined. The first such invariant, constructed by Ozsvath, Szabo and Thurston, was defined combinatorially using grid diagrams. The second invariant was obtained by geometric means using open book decompositions by Lisca, Ozsvath, Stipsicz and Szabo. We show that these two previously defined invariants agree. Along the way, we define a third, equivalent Legendrian/transverse invariant which arises naturally when studying transverse knots which are braided with respect to an open book decomposition.
A real matrix is totally nonnegative if every minor is nonnegative. The classical Edrei-Thoma theorem classifies totally nonnegative infinite Toeplitz matrices, and is related to problems in representation theory, combinatorics and probability. I will discuss progress towards two variations on this theorem: to block-Toeplitz matrices, and to finite Toeplitz matrices. Both of these variations connect the classical theory to loop groups.
This paper reports the results of using two sets of ranking data, one from actual elections and the other from surveys of voters, to examine whether the outcomes of three-candidate vote-casting processes follow a discernible pattern. Six statistical models that make different assumptions about such a pattern are evaluated. Both data sets suggest that a spatial model describes an observable pattern much better than any of the other five models. The results imply that any conclusions about the probability of voting events reached on the basis of models other than the spatial model (for example, on the basis of the impartial anonymous culture) are suspect. (Joint work with Florenz Plassmann)
This talk is motivated by the question "Given a stratified map, how are path components of the fiber organized?" Studying path components necessitates cosheaves, but the stratified assumption provides an elegant combinatorial description using MacPherson's entrance path category, which also controls the associated Leray sheaves. One of the goals of this talk will be to provide a self-contained exposition of these ideas, using a minimal amount of mathematical background. The talk will follow loosely a recent paper with Amit Patel, which is available on the arXiv at http://arxiv.org/abs/1603.01587. Connections with applied topology will also be described.
Generalized complex structures, introduced by Hitchin in 2003, interpolate between symplectic and complex structures. In this talk, I will discuss a reduction procedure unifying symplectic reduction and holomorphic quotients. In general, this reduction procedure presents new features that will be illustrated in examples (e.g., the generalized reduction of a symplectic structure can be complex, and 3-form twists can appear in the quotient). If time permits, I will also discuss a super-geometric viewpoint to generalized reduction.
Clouds and precipitation are among the most challenging aspects of weather and climate prediction. Moreover, our mathematical and physical understanding of clouds is far behind our understanding of a "dry" atmosphere in which water vapor is neglected. In this talk, in working toward overcoming these challenges, we present new results on clouds and precipitation from two perspectives: first, in terms of the partial differential equations (PDEs) for atmospheric fluid dynamics, and second, in terms of stochastic models. A new asymptotic limit will be described, and it leads to new PDEs for a precipitating version of the quasi-geostrophic equations, now including phase changes of water. Also, a new energy will be presented for an atmosphere with phase changes, and it provides a generalization of the quadratic energy of a "dry" atmosphere. Finally, it will be shown that the statistics of clouds and precipitation can be described by stochastic differential equations and stochastic PDEs. As one application, it will be shown that, under global warming, the most significant change in precipitation statistics is seen in the largest events -- which become even larger and more probable -- and the distribution of event sizes conforms to the stochastic models.
Integral equation methods are frequently used in the numerical solution of elliptic boundary value problems. After giving a brief overview of the advantages and disadvantages of such methods vis-a-vis more direct techniques like finite element methods, I will discuss two problems which arise in integral equation methods. In both cases, I take a contrarian position. The first is the discretization of integral operators on singular domains (e.g., surfaces with edges and curves with corners). The consensus opinion holds that integral equations given on such domains are exceedingly difficult to discretize and that sophisticated analysis, often specific to a particular boundary value problem, is required. I will explain that, in fact, the efficient solution of a broad class of such problems can be effected using an elementary approach. Exterior scattering problems given on planar domains with tens of thousands of corner points can be solved to 12 digit accuracy on my two year old desktop computer in a matter of hours. The second problem I will discuss is the evaluation of the singular integrals which arise from the discretization of weakly singular integral operators given on surfaces. Exponentially convergent algorithms for evaluating these integrals have been described in the literature and it is widely regarded as a "solved" problem. I will explain why this is not so and describe an approach which yields only algebraic convergence, but nonetheless performs better in practice than standard exponentially convergent methods.
The Langlands program is a far-reaching collection of conjectures that relate different areas of mathematics including number theory and representation theory. A fundamental problem on the representation theory side of the Langlands program is the construction of all (irreducible, smooth, complex) representations of certain matrix groups, called p-adic groups. In my talk I will introduce p-adic groups and provide an overview of our understanding of their representations, with an emphasis on recent progress. I will also briefly discuss applications to other areas, e.g. to automorphic forms and the global Langlands program.
We argue that there exists a derived equivalence between Calabi-Yau threefolds obtained by taking hyperplane sections (of the appropriate codimension) of the Grassmannian G(2,7) and the Pfaffian Pf(7). The existence of such an equivalence has been conjectured in physics for almost ten years, as the two families of Calabi-Yau threefolds are believed to have the same mirror. It is the first example of a derived equivalence between Calabi-Yau threefolds which are provably non-birational.
In a recent paper, Brendle and Marques proved that on certain geodesic balls in the standard hemisphere, there do not exist small metric deformations of the standard metric which increase the scalar curvature in the interior and the mean curvature on the boundary. Such a result was motivated by the Euclidean and hyperbolic positive mass theorems. More interestingly, this result is false on the hemisphere itself, which is shown by Brendle-Marques-Neves' remarkable counterexample to Min-Oo's conjecture. In this talk, we provide a few remarks on Brendle and Marques' theorem. We show that their theorem remains valid on slightly larger geodesic balls; it also holds on certain convex domains; moreover, with a volume constraint imposed, a variation of their theorem holds on the hemisphere. This is joint work with Luen-Fai Tam.
Calculation of portfolio loss distributions is an important part of credit risk management in all large banking institutions. Mathematically, this calculation is tantamount to efficiently computing the probability distribution of the sum of a very large number of correlated random variables. Typical Monte Carlo aggregation models apply brute force computation to this problem and suffer from two main drawbacks: lack of speed and lack of transparency for further credit risk analysis. I will describe an attempt to ameliorate these drawbacks via an asymptotic probabilistic method based on the Central Limit Theorem. I will next describe capital allocation, a process of attributing risk to individual transactions or subportfolios of a given portfolio. In so doing, I will state axioms for coherent risk measures. These axioms place the notion of risk measurement and diversification on a firm mathematical foundation. I will then describe axioms for capital allocation via coherent risk measures, and illustrate the ideas with efficient computational formulae for allocating capital based on a couple of commonly used risk measures. In the course of this talk, which will be geared towards graduate students, I will attempt to give a flavor of industrial research and the role of applied mathematics in industry.
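A minimal sketch of the CLT-based idea (not the speaker's method; the one-factor Gaussian portfolio, the parameter values, and the threshold are all invented): match the first two moments of the correlated loss sum to a normal distribution, and compare the resulting tail probability against brute-force Monte Carlo.

```python
import math
import random

def clt_tail_prob(means, cov, threshold):
    """Approximate P(sum_i X_i > threshold) for correlated losses X_i by
    matching the first two moments of the sum to a normal distribution."""
    n = len(means)
    mu = sum(means)
    var = sum(cov[i][j] for i in range(n) for j in range(n))
    z = (threshold - mu) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # standard normal tail

# One-factor Gaussian portfolio: X_i = sqrt(rho)*Z + sqrt(1-rho)*eps_i.
n, rho = 50, 0.2
cov = [[1.0 if i == j else rho for j in range(n)] for i in range(n)]
p_clt = clt_tail_prob([0.0] * n, cov, threshold=30.0)

# Brute-force Monte Carlo for comparison.
random.seed(0)
trials, hits = 20_000, 0
for _ in range(trials):
    z = random.gauss(0, 1)
    loss = sum(math.sqrt(rho) * z + math.sqrt(1 - rho) * random.gauss(0, 1)
               for _ in range(n))
    if loss > 30.0:
        hits += 1
print(p_clt, hits / trials)   # the two estimates should be close
```

The closed-form approximation is instantaneous, while the Monte Carlo loop is the "brute force" the abstract refers to; real portfolios involve non-Gaussian losses and far larger systems, which is where the asymptotic corrections beyond the plain CLT matter.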