Many systems employ sensors to interpret the environment. The target-tracking task is to gather sensor data from the environment and then to partition these data into tracks, each produced by a single target. The goal of sensor fusion is to gather data from a heterogeneous collection of sensors (e.g., audio and video) and fuse them together in a way that improves the performance of the sensor network at some task of interest. This talk summarizes two recent efforts that incorporate mildly sophisticated mathematics into the general sensor arena, and also comments on the joys and pitfalls of trying to apply math for customers who care much more about the results than the math. First, a key problem in tracking is to 'connect the dots': more precisely, to take a piece of sensor data at a given time and associate it with a previously existing track (or declare that it belongs to a new object). We use topological data analysis (TDA) to form data-association likelihood scores and integrate these scores into a well-respected algorithm called Multiple Hypothesis Tracking. Tests on simulated data show that TDA adds significant value over the baseline, especially when the sensor data are noisy. Second, we propose a very general and entirely unsupervised sensor fusion pipeline that uses recent techniques from diffusion geometry and wavelet theory to compress and then fuse time series of arbitrary dimension arising from disparate sensor modalities. The goal of the pipeline is to differentiate classes of time-ordered behavior sequences, and we demonstrate its performance on a well-studied digit sequence database. This talk represents joint work with many people, including Chris Tralie, Nathan Borggren, Sang Chin, Jesse Clarke, Jonathan deSena, John Harer, Jay Hineman, Elizabeth Munch, Andrew Newman, Alex Pieloch, David Porter, David Rouse, Nate Strawn, Adam Watkins, Michael Williams, Lihan Yao, and Peter Zulch.
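
To make the data-association idea concrete, the following is a minimal illustrative sketch, not the method presented in the talk: it assumes a precomputed topological dissimilarity matrix (topo, a stand-in for whatever score TDA would supply), mixes it with a standard kinematic (Mahalanobis) term via an assumed weight alpha, and solves the resulting assignment problem; the helper name association_costs and all parameter choices are hypothetical.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import mahalanobis

    def association_costs(track_predictions, track_covariances, detections, topo_scores, alpha=0.5):
        """Cost matrix mixing a kinematic (Mahalanobis) term with a
        precomputed topological dissimilarity term (hypothetical input)."""
        n_tracks, n_dets = len(track_predictions), len(detections)
        cost = np.zeros((n_tracks, n_dets))
        for i, (pred, cov) in enumerate(zip(track_predictions, track_covariances)):
            vi = np.linalg.inv(cov)
            for j, det in enumerate(detections):
                kinematic = mahalanobis(pred, det, vi)
                cost[i, j] = (1 - alpha) * kinematic + alpha * topo_scores[i, j]
        return cost

    # Toy example: two existing tracks, three new detections.
    preds = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
    covs = [np.eye(2), np.eye(2)]
    dets = [np.array([0.2, -0.1]), np.array([5.1, 4.8]), np.array([9.0, 9.0])]
    topo = np.random.rand(2, 3)   # placeholder for a TDA-derived dissimilarity
    rows, cols = linear_sum_assignment(association_costs(preds, covs, dets, topo))
    print(list(zip(rows, cols)))  # track index -> associated detection index

In a full tracker the unmatched detection would seed a new track; here the sketch only shows how a topology-derived score could enter the assignment cost alongside the kinematic gate.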