Matroids are combinatorial devices designed to encode the combinatorial structure of hyperplane arrangements. Combinatorialists have developed many invariants of matroids. I will explain that there is reason to believe that most of these invariants are related to computations in the K-theory of the Grassmannian. In particular, I will explain work of mine limiting the complexity of Hacking, Keel and Tevelev's "very stable pairs", which compactify the moduli of hyperplane arrangements. This talk should be understandable both to those who don't know matroids, and to those who don't know K-theory.
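The link between vector configurations (equivalently, hyperplane arrangements) and matroids can be made concrete in a few lines. The sketch below, using an illustrative 2x4 matrix of my own choosing, builds the matroid of a column configuration and verifies the basis-exchange axiom by brute force; it is a toy illustration, not part of the work described above.

```python
from itertools import combinations
import numpy as np

# Columns of M are four vectors in R^2; the associated matroid records
# which subsets of columns are linearly independent.
M = np.array([[1, 0, 1, 1],
              [0, 1, 1, 2]])

def rank(subset):
    """Matroid rank of a subset = rank of the chosen columns."""
    if not subset:
        return 0
    return np.linalg.matrix_rank(M[:, list(subset)])

n, r = M.shape[1], np.linalg.matrix_rank(M)
bases = [set(s) for s in combinations(range(n), r) if rank(s) == r]

# Basis exchange: for bases A, B and any a in A \ B, some b in B \ A
# makes (A \ {a}) | {b} a basis again.
for A in bases:
    for B in bases:
        for a in A - B:
            assert any((A - {a}) | {b} in bases for b in B - A)

print(len(bases))  # every pair of these vectors is independent, so 6 bases
```

Since any two of the four chosen vectors are linearly independent, this configuration realizes the uniform matroid U(2,4).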
Essential dimension is an invariant introduced by Buhler and Reichstein to measure how many parameters are needed to define an algebraic object such as a field extension or an algebraic curve over a field. I will describe joint work with Vistoli and Reichstein which studies essential dimension in the case where the algebraic objects are represented by a stack. I will also give examples of applications in the theory of quadratic forms.
A celebrated 19th century result of Cayley and Salmon is that a smooth cubic surface over the complex numbers contains exactly 27 lines. By contrast, over the real numbers, the number of real lines depends on the surface. A classification was obtained by Segre, but it is a recent observation of Benedetti-Silhol, Finashin-Kharlamov, Horev-Solomon and Okonek-Teleman that a certain signed count of lines is always 3. We extend this count to an arbitrary field k using an Euler number in A1-homotopy theory. The resulting count is valued in the Grothendieck-Witt group of non-degenerate symmetric bilinear forms. (No knowledge of A1-homotopy theory will be assumed in the talk.) This is joint work with Jesse Kass.
A real matrix is totally nonnegative if every minor of it is nonnegative. The classical Edrei-Thoma theorem classifies totally nonnegative infinite Toeplitz matrices, and is related to problems in representation theory, combinatorics and probability. I will discuss progress towards two variations on this theorem: to block-Toeplitz matrices, and to finite Toeplitz matrices. Both of these variations connect the classical theory to loop groups.
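As a concrete toy check (an illustration of the definition, not of the classification): the sequence a_k = 1/k! has generating function e^x, which lies in the Edrei-Thoma class, so finite sections of its Toeplitz matrix should pass a brute-force minor test.

```python
from itertools import combinations
from math import factorial
import numpy as np

# Toeplitz matrix T[i, j] = a_{i-j} built from a_k = 1/k! (a_k = 0 for
# k < 0); its generating function e^x lies in the Edrei-Thoma class.
n = 5
a = lambda k: 1.0 / factorial(k) if k >= 0 else 0.0
T = np.array([[a(i - j) for j in range(n)] for i in range(n)])

def is_totally_nonnegative(A, tol=1e-12):
    """Brute-force check that every square minor of A is >= -tol
    (the tolerance absorbs floating-point roundoff on zero minors)."""
    m = min(A.shape)
    for k in range(1, m + 1):
        for rows in combinations(range(A.shape[0]), k):
            for cols in combinations(range(A.shape[1]), k):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

print(is_totally_nonnegative(T))  # True
```

Checking all minors is exponential in the matrix size; the point of theorems like Edrei-Thoma is precisely to replace such checks with a classification.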
We will start by defining the Jones polynomial of a knot, and discussing some of its applications. We will then explain a refinement of the Jones polynomial, called Khovanov homology, and give some applications of this refinement. We will conclude by discussing a further refinement, called a Khovanov homotopy type; this part is joint work with Sucharit Sarkar.
We describe an automatic chaos verification scheme based on set oriented numerical methods, which is especially well suited to the study of area and volume preserving diffeomorphisms. The novel feature of the scheme is an iterative algorithm for approximating connecting orbits between collections of hyperbolic fixed and periodic points with increasing accuracy. The algorithm is geometric rather than graph theoretic in nature and, unlike existing methods, does not require the computation of chain recurrent sets. We give several example computations in dimensions two and three.
This work is motivated by a fundamental problem in sensor networks -- the need to aggregate redundant sensor data across a network. We focus on a simple problem of enumerating targets with a network of sensors that can detect nearby targets, but cannot identify or localize them. We show a clear, clean relationship between this problem and the topology of constructible sheaves. In particular, an integration theory from sheaf theory that uses Euler characteristic as a measure provides a computable, robust, and powerful tool for data aggregation.
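The flavor of Euler-characteristic integration can be sketched in one dimension (a toy example with hypothetical interval targets, not the sheaf-theoretic machinery of the work itself): if each target's support is contractible, the Euler integral of the sensor counting function recovers the number of targets, even when the supports overlap heavily.

```python
import numpy as np

# Targets are closed intervals on a line (each support contractible).
# h(x) = number of targets covering x, sampled on a fine grid.
targets = [(1.0, 3.0), (2.0, 5.0), (2.5, 4.0), (7.0, 9.0)]
grid = np.linspace(0.0, 10.0, 10001)
h = sum(((lo <= grid) & (grid <= hi)).astype(int) for lo, hi in targets)

def euler_integral(h):
    """Integral of h with respect to Euler characteristic:
    sum over levels s >= 1 of chi({h >= s}); on the line, chi of a
    union of closed intervals is its number of connected components."""
    total = 0
    for s in range(1, int(h.max()) + 1):
        level = (h >= s).astype(int)
        # components = number of 0 -> 1 transitions, plus one if the
        # level set starts at the left edge of the grid
        total += int(np.sum(np.diff(level) == 1)) + int(level[0] == 1)
    return total

print(euler_integral(h))  # recovers the number of targets: 4
```

Note that neither the raw count of connected components of the covered region (2 here) nor the maximum of h (3 here) gives the right answer; the level-set sum does.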
We consider the question: ``How bad can the deformation space of an object be?'' (Alternatively: ``What singularities can appear on a moduli space?'') The answer seems to be: ``Unless there is some a priori reason otherwise, the deformation space can be arbitrarily bad.'' We show this for a number of important moduli spaces. More precisely, up to smooth parameters, every singularity that can be described by equations with integer coefficients appears on moduli spaces parameterizing: smooth projective surfaces (or higher-dimensional manifolds); smooth curves in projective space (the space of stable maps, or the Hilbert scheme); plane curves with nodes and cusps; stable sheaves; isolated threefold singularities; and more. The objects themselves are not pathological, and are in fact as nice as can be. This justifies Mumford's philosophy that even moduli spaces of well-behaved objects should be arbitrarily bad unless there is an a priori reason otherwise. I will begin by telling you what ``moduli spaces'' and ``deformation spaces'' are. The complex-minded listener can work in the holomorphic category; the arithmetic listener can think in mixed or positive characteristic. This talk is intended to be (mostly) comprehensible to a broad audience.
"Numerical Approximation of Layer Potentials Along Curve Segments"
Let M be a manifold with a non-vanishing vector field. The homology of the space of loops in M carries a natural Lie bialgebra structure, described by Sullivan as string topology operations. If M is a surface, these operations were originally defined by Goldman and Turaev. We study formal descriptions of these Lie bialgebras. More precisely, for surfaces these Lie bialgebras are formal in the sense that they are isomorphic (after completion) to their algebraic analogues (Schedler's necklace Lie bialgebras) built from the homology of the surface. For higher-dimensional manifolds we give a similar description, which turns out to depend on the Chern-Simons partition function.
This talk is based on joint work with A. Alekseev, N. Kawazumi, Y. Kuno and T. Willwacher.
In a recent paper, Brendle and Marques proved that on certain geodesic balls in the standard hemisphere, there are no small deformations of the standard metric which increase the scalar curvature in the interior and the mean curvature on the boundary. This result was motivated by the Euclidean and hyperbolic positive mass theorems. More interestingly, the result is false on the hemisphere itself, as shown by Brendle-Marques-Neves' remarkable counterexample to Min-Oo's conjecture. In this talk, we make a few remarks on Brendle and Marques' theorem. We show that their theorem remains valid on slightly larger geodesic balls; it also holds on certain convex domains; moreover, with a volume constraint imposed, a variant of their theorem holds on the hemisphere. This is joint work with Luen-Fai Tam.
We argue that there exists a derived equivalence between Calabi-Yau threefolds obtained by taking hyperplane sections (of the appropriate codimension) of the Grassmannian G(2,7) and the Pfaffian Pf(7). The existence of such an equivalence has been conjectured in physics for almost ten years, as the two families of Calabi-Yau threefolds are believed to have the same mirror. It is the first example of a derived equivalence between Calabi-Yau threefolds which are provably non-birational.
FDS (fds.duke.edu) is a content management system (CMS) widely used by schools and departments across Duke to maintain their faculty research- and teaching-related web pages and reports. In this talk we'll cover some fundamentals of FDS and give a short tutorial on the FDS templates. We hope this talk will help everyone (webmasters, web developers and designers, FDS group managers, and interested faculty/staff members) to use FDS better.
Representation stability is an exciting new area that combines ideas from commutative algebra and representation theory. The meta-idea is to combine a sequence of objects together using some newly defined algebraic structure, and then to translate abstract properties about this structure to concrete properties about the original object of study. Finite generation is a particularly important property, which translates to the existence of bounds on algebraic invariants, or some predictable behavior. I'll discuss some examples coming from topology (configuration spaces) and algebraic geometry (secant varieties).
A Diophantine equation is a polynomial equation in several variables, generally with integer coefficients, like x^3 + y^3 = z^3. Provably finding all integer solutions of a Diophantine equation is a storied mathematical problem that is easy to state and notoriously difficult to solve. The method of Chabauty--Coleman is one particularly successful technique for ruling out extraneous solutions of a certain class of Diophantine equations. The method is p-adic in nature, and involves producing p-adic analytic functions that vanish on all integer-valued solutions. I will discuss work with Katz and Zureick-Brown on finding uniform bounds on the number of rational points on a curve of fixed genus, defined over a number field, subject to a (conjecturally weak) restriction on its Jacobian. The same technique also makes progress on the uniform Manin-Mumford conjecture on the size of torsion packets on curves of fixed genus.
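To fix ideas, here is the easy half of the problem for the cubic above: searching a finite box for solutions. The search finds nothing nontrivial, but no finite search can prove nonexistence everywhere; closing that gap is exactly what provable methods such as Chabauty--Coleman address (for suitable curves).

```python
# Search a box for nontrivial integer solutions of x^3 + y^3 = z^3,
# i.e. solutions with x, y, z all nonzero.
B = 200
cubes = {z**3: z for z in range(-B, B + 1) if z != 0}  # nonzero cubes

solutions = [(x, y, cubes[x**3 + y**3])
             for x in range(-B, B + 1) if x != 0
             for y in range(-B, B + 1) if y != 0
             if x**3 + y**3 in cubes]

print(solutions)  # [] -- consistent with Fermat, but the box proves nothing beyond itself
```

The dictionary of cubes turns the naive triple loop into a double loop with O(1) lookups, but the logical limitation is unchanged: brute force can only ever certify a finite region.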
Calculation of portfolio loss distributions is an important part of credit risk management in all large banking institutions. Mathematically, this calculation is tantamount to efficiently computing the probability distribution of the sum of a very large number of correlated random variables. Typical Monte Carlo aggregation models apply brute force computation to this problem and suffer from two main drawbacks: lack of speed and lack of transparency for further credit risk analysis. I will describe an attempt to ameliorate these drawbacks via an asymptotic probabilistic method based on the Central Limit Theorem. I will next describe capital allocation, a process of attributing risk to individual transactions or subportfolios of a given portfolio. In so doing, I will state axioms for coherent risk measures. These axioms place the notion of risk measurement and diversification on a firm mathematical foundation. I will then describe axioms for capital allocation via coherent risk measures, and illustrate the ideas with efficient computational formulae for allocating capital based on a couple of commonly used risk measures. In the course of this talk, which will be geared towards graduate students, I will attempt to give a flavor of industrial research and the role of applied mathematics in industry.
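The asymptotic idea can be caricatured in a few lines. The sketch below assumes a one-factor Gaussian model (a standard textbook toy, not the speaker's production method): conditioning on the common factor makes defaults independent, a normal (CLT) approximation applies conditionally, and one integrates over the factor; brute-force Monte Carlo aggregation is included for comparison. All parameter values are illustrative.

```python
import numpy as np
from math import erf, sqrt

# Standard normal CDF, vectorized (avoids a scipy dependency).
Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))

# One-factor model: obligor i defaults when
# sqrt(rho)*Z + sqrt(1-rho)*eps_i > c, with Z a common factor shared by
# all obligors, so defaults are correlated through Z.
n, p, rho = 1000, 0.02, 0.2
c = 2.0537489106318225  # approx Phi^{-1}(1 - p): each obligor defaults w.p. ~p

def p_cond(z):
    """Conditional default probability given the common factor Z = z."""
    return 1.0 - Phi((c - sqrt(rho) * z) / sqrt(1.0 - rho))

def tail_prob_clt(k):
    """P(total defaults > k): normal (CLT) approximation conditional
    on Z, followed by one-dimensional quadrature over Z."""
    zs = np.linspace(-8.0, 8.0, 4001)
    dz = zs[1] - zs[0]
    dens = np.exp(-zs**2 / 2.0) / sqrt(2.0 * np.pi)
    pz = p_cond(zs)
    mu = n * pz
    sd = np.sqrt(np.maximum(n * pz * (1.0 - pz), 1e-12))
    cond_tail = 1.0 - Phi((k - mu) / sd)
    return float(np.sum(cond_tail * dens) * dz)

def tail_prob_mc(k, trials=200_000, seed=0):
    """Brute-force Monte Carlo aggregation, for comparison."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(trials)
    defaults = rng.binomial(n, p_cond(z))
    return float(np.mean(defaults > k))

print(tail_prob_clt(40), tail_prob_mc(40))
```

The speed and transparency claims are visible even in this caricature: the CLT route replaces hundreds of thousands of portfolio simulations with a single one-dimensional integral in which the dependence on each parameter is explicit.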
The Langlands program is a far-reaching collection of conjectures that relate different areas of mathematics including number theory and representation theory. A fundamental problem on the representation theory side of the Langlands program is the construction of all (irreducible, smooth, complex) representations of certain matrix groups, called p-adic groups. In my talk I will introduce p-adic groups and provide an overview of our understanding of their representations, with an emphasis on recent progress. I will also briefly discuss applications to other areas, e.g. to automorphic forms and the global Langlands program.
Integral equation methods are frequently used in the numerical solution of elliptic boundary value problems. After giving a brief overview of the advantages and disadvantages of such methods vis-a-vis more direct techniques like finite element methods, I will discuss two problems which arise in integral equation methods. In both cases, I take a contrarian position. The first is the discretization of integral operators on singular domains (e.g., surfaces with edges and curves with corners). The consensus opinion holds that integral equations given on such domains are exceedingly difficult to discretize and that sophisticated analysis, often specific to a particular boundary value problem, is required. I will explain that, in fact, the efficient solution of a broad class of such problems can be effected using an elementary approach. Exterior scattering problems given on planar domains with tens of thousands of corner points can be solved to 12-digit accuracy on my two-year-old desktop computer in a matter of hours. The second problem I will discuss is the evaluation of the singular integrals which arise from the discretization of weakly singular integral operators given on surfaces. Exponentially convergent algorithms for evaluating these integrals have been described in the literature and it is widely regarded as a "solved" problem. I will explain why this is not so and describe an approach which yields only algebraic convergence, but nonetheless performs better in practice than standard exponentially convergent methods.