Ioannis Kevrekidis : No Equations, No Variables, No Parameters, No Space, No Time -- Data, and the Crystal Ball Modeling of Complex/Multiscale Systems

  - Applied Math and Analysis (176 Views)

Obtaining predictive dynamical equations from data lies at the heart of science and engineering modeling, and it is the linchpin of our technology. In mathematical modeling one typically progresses from observations of the world (and some serious thinking!) first to the selection of variables, then to equations for a model, and finally to the analysis of the model to make predictions. Good mathematical models give good predictions (and inaccurate ones do not), but the computational tools for analyzing them are the same: algorithms that typically operate on closed-form equations.
While the skeleton of the process remains the same, today we witness the development of mathematical techniques that operate directly on observations (data) and appear to circumvent the serious thinking that goes into selecting variables and parameters and deriving accurate equations. The process may then appear to the user a little like making predictions by "looking into a crystal ball". Yet the "serious thinking" is still there, and it uses the same, and some new, mathematics: it goes into building algorithms that "jump directly" from data to the analysis of the model (which is now not available in closed form) so as to make predictions. Our work here presents a couple of efforts that illustrate this "new" path from data to predictions. It really is the same old path, but it is traveled by new means.
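To make this path concrete, here is a minimal, self-contained Python sketch of the generic idea, not the specific algorithms of the talk: a one-step linear map is fitted to observed snapshots of an unknown system by plain least squares, in the spirit of dynamic mode decomposition, and the fitted map is then iterated forward to predict beyond the data. The example system, noise level, and prediction horizon are all illustrative choices.

```python
# Minimal sketch of "equations-free" prediction: fit a linear one-step map
# x_{t+1} ~= A x_t from observed snapshots (a DMD-like surrogate), then
# iterate it forward to predict.  Illustrative only -- not the specific
# algorithms discussed in the talk.
import numpy as np

rng = np.random.default_rng(0)

# "Observations of the world": snapshots of an unknown 2-D system
# (here a lightly damped rotation, but the fitting code never sees that).
theta, decay = 0.1, 0.99
A_true = decay * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
X = np.empty((2, 200))
X[:, 0] = [1.0, 0.0]
for t in range(199):
    X[:, t + 1] = A_true @ X[:, t] + 0.001 * rng.standard_normal(2)

# Fit the one-step map directly from data (least squares); no equations given.
A_fit, *_ = np.linalg.lstsq(X[:, :-1].T, X[:, 1:].T, rcond=None)
A_fit = A_fit.T

# "Crystal ball": iterate the fitted map to predict 50 steps past the data.
x = X[:, -1].copy()
for _ in range(50):
    x = A_fit @ x
print("predicted state 50 steps ahead:", x)
```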

Cynthia Rudin : 1) Regulating Greed Over Time: An Important Lesson For Practical Recommender Systems and 2) Prediction Uncertainty and Optimal Experimental Design for Learning Dynamical Systems

  - Applied Math and Analysis (102 Views)

I will present work from two papers: 1) Regulating Greed Over Time, by Stefano Traca and Cynthia Rudin, finalist for the 2015 IBM Service Science Best Student Paper Award; and 2) Prediction Uncertainty and Optimal Experimental Design for Learning Dynamical Systems, by Benjamin Letham, Portia A. Letham, Cynthia Rudin, and Edward Browne, Chaos, 2016.
There is an important aspect of practical recommender systems that we noticed while competing in the ICML Exploration-Exploitation 3 data mining competition. The goal of the competition was to build a better recommender system for Yahoo!'s Front Page, which provides personalized news article recommendations. The main strategy we used was to carefully control the balance between exploiting good articles and exploring new ones in the multi-armed bandit setting. This strategy was based on our observation that there were clear trends over time in the click-through rates of the articles. At certain times we should explore new articles more often, and at other times we should reduce exploration and simply show the best articles available. This led to dramatic performance improvements.
As it turns out, the observation we made in the Yahoo! data is in fact pervasive in settings where recommender systems are currently used. This observation is simply that certain times are more important than others for correct recommendations to be made. This affects the way exploration and exploitation (greed) should change in our algorithms over time. We thus formalize a setting where regulating greed over time can be provably beneficial. This is captured through regret bounds and leads to principled algorithms. The end result is a framework for bandit-style recommender systems in which certain times are more important than others for making a correct decision.
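As a toy illustration of regulating greed over time, the sketch below runs an epsilon-greedy bandit whose exploration rate is lowered during hypothetical "important" periods and raised otherwise. The click-through rates, the importance schedule, and the horizon are invented for the example; the paper's algorithms and regret analysis are not reproduced here.

```python
# Toy sketch of "regulating greed over time": an epsilon-greedy bandit whose
# exploration rate is reduced during high-importance periods (when showing the
# current best article matters most) and raised otherwise.  The schedule and
# reward model are made up for illustration only.
import numpy as np

rng = np.random.default_rng(1)
true_ctr = np.array([0.030, 0.045, 0.025])      # unknown click-through rates
clicks = np.zeros(3)
shows = np.zeros(3)

def epsilon(t):
    # Hypothetical importance schedule: explore less during "peak" times.
    peak = (t % 1000) < 300
    return 0.02 if peak else 0.20

total_clicks = 0
for t in range(20000):
    if rng.random() < epsilon(t):
        arm = int(rng.integers(3))              # explore a random article
    else:
        est = clicks / np.maximum(shows, 1)     # exploit the best estimate
        arm = int(np.argmax(est))
    reward = rng.random() < true_ctr[arm]       # simulated click
    shows[arm] += 1
    clicks[arm] += reward
    total_clicks += reward

print("clicks collected:", total_clicks)
```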
If time permits, I will discuss work on measuring uncertainty in parameter estimation for dynamical systems. I will present "prediction deviation," a new metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provide a good fit to the observed data, yet have maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty.
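To convey the structure of that optimization problem, here is a rough sketch under assumed choices of model (an exponential decay), fit tolerance, and prediction time: two parameter vectors are sought so that each fits the observed data to within a tolerance of the best attainable loss, while their predictions at a future time differ as much as possible. It is meant only to show the shape of the computation, not the paper's method in detail.

```python
# Sketch of the "prediction deviation" idea: find two parameter sets that BOTH
# fit the observed data acceptably well, yet disagree as much as possible on a
# prediction of interest.  Model, tolerance, and prediction time are assumed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def model(theta, t):
    a, b = theta
    return a * np.exp(-b * t)

# Sparse, noisy observations of an exponential decay.
t_obs = np.array([0.0, 0.5, 1.0, 1.5])
y_obs = model([2.0, 0.8], t_obs) + 0.05 * rng.standard_normal(t_obs.size)

def loss(theta):
    return np.sum((model(theta, t_obs) - y_obs) ** 2)

fit = minimize(loss, x0=[1.0, 1.0])        # best attainable fit
best_loss, theta_hat = fit.fun, fit.x
delta = 0.10                               # allow fits within 10% of the best
T_pred = 5.0                               # time at which predictions are compared

def objective(z):
    # Maximize the (signed) gap between the two models' predictions at T_pred;
    # by symmetry the optimum matches the maximal absolute deviation.
    th1, th2 = z[:2], z[2:]
    return -(model(th1, T_pred) - model(th2, T_pred))

constraints = [
    {"type": "ineq", "fun": lambda z: (1 + delta) * best_loss - loss(z[:2])},
    {"type": "ineq", "fun": lambda z: (1 + delta) * best_loss - loss(z[2:])},
]
res = minimize(objective, x0=np.concatenate([theta_hat, theta_hat]),
               constraints=constraints, method="SLSQP")
print("prediction deviation at t =", T_pred, ":", -res.fun)
```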