public 01:34:59

Andrew Brouwer : Harnessing environmental surveillance: mathematical modeling in the fight against polio

  -   Mathematical Biology ( 213 Views )

Israel experienced an outbreak of wild poliovirus type 1 (WPV1) in 2013-14, detected through environmental surveillance of the sewage system. No cases of acute flaccid paralysis were reported, and the epidemic subsided after a bivalent oral polio vaccine (bOPV) campaign. As we approach global eradication, polio will increasingly be detected only through environmental surveillance. However, we have lacked the theory to translate environmental surveillance into public health metrics; it is a priori unclear how much environmental surveillance can even say about population-level disease dynamics. We developed a framework to convert quantitative polymerase chain reaction (qPCR) cycle threshold data into scaled WPV1 and OPV1 concentrations for inference within a deterministic, compartmental infectious disease transmission model. We used differential algebra and profile likelihood techniques to perform identifiability analysis, that is, to assess how much information exists in the data for the model, and to quantify inference uncertainty. From the environmental surveillance data, we estimated the epidemic curve and transmission dynamics, determining that the outbreak likely happened much faster than previously thought. Our mathematical modeling approach brings public health relevance to environmental data that, if systematically collected, can guide eradication efforts.
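The two quantitative ingredients mentioned here, converting qPCR cycle thresholds into scaled concentrations and driving a deterministic compartmental transmission model, can be illustrated with a minimal sketch. This is standard qPCR arithmetic and a textbook SIR step, not the authors' actual pipeline; the function names, the reference Ct, and all parameter values are illustrative assumptions.

```python
def ct_to_relative_concentration(ct, ct_ref=30.0, efficiency=2.0):
    """Convert a qPCR cycle-threshold (Ct) value into a concentration
    scaled relative to a reference Ct. Each amplification cycle multiplies
    the template by `efficiency`, so a lower Ct means exponentially more
    starting material: concentration is proportional to efficiency**(-Ct)."""
    return efficiency ** (ct_ref - ct)

def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of a basic deterministic SIR model, the
    simplest stand-in for the compartmental transmission model."""
    new_infections = beta * s * i * dt
    recoveries = gamma * i * dt
    return s - new_infections, i + new_infections - recoveries, r + recoveries

# A sample 5 cycles below the reference Ct holds about 2**5 = 32x more template.
print(ct_to_relative_concentration(25.0))  # 32.0
```

In the paper's setting the scaled concentrations would enter the likelihood of the transmission model; here the two pieces are shown side by side only to make the conversion arithmetic concrete.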

public 01:14:52

Joshua Vogelstein : Consistent Graph Classification applied to Human Brain Connectome Data

  -   Mathematical Biology ( 193 Views )

Graphs are becoming a favorite mathematical object for representing data. Yet statistical pattern recognition has focused almost entirely on vector-valued data in Euclidean space. Graphs, however, live in graph space, which is non-Euclidean, so most inference techniques are not even defined for graph-valued data. Previous work on the classification of graph-valued data typically follows one of two recipes: (1) vectorize the adjacency matrices of the graphs and apply standard machine learning techniques, or (2) compute some number of graph invariants (e.g., clustering coefficient or degree distribution) for each graph and then apply standard machine learning techniques. We follow a different recipe, grounded in the probabilistic theory of pattern recognition. First, we define a joint graph-class model. Given this model, we derive classifiers that we prove are consistent; that is, they converge to the Bayes-optimal classifier. Specifically, we build two consistent classifiers for graph-valued data, a parametric and a nonparametric version. In a sense, these classifiers span the spectrum of complexity: the former is consistent for graphs sampled from relatively simple random graph distributions, while the latter is consistent for graphs sampled from (nearly) any random graph distribution. Although both classifiers assume that all graphs have labeled vertices, we generalize these results to incorporate unlabeled graphs as well as weighted graphs and multigraphs. We apply these graph classifiers to human brain data. Specifically, using diffusion MRI, we obtain large brain graphs (10,000 vertices) for each subject, where vertices correspond to voxels. We then coarsen the graphs spatially to obtain smaller (70-vertex) graphs per subject. Using fewer than 50 subjects, we achieve nearly 85% classification accuracy, with results interpretable to neurobiologists with regard to the brain regions of interest.
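Recipe (2) above, graph invariants fed to a standard learner, can be sketched in a few lines. This is a toy illustration of that baseline recipe, not the speaker's consistent classifiers; the invariants chosen (edge density, mean degree) and the nearest-centroid rule are deliberately simple assumptions.

```python
import math

def invariants(adj):
    """Two simple graph invariants for a 0/1 adjacency matrix of a simple
    undirected graph: edge density and mean degree."""
    n = len(adj)
    degrees = [sum(row) for row in adj]
    edges = sum(degrees) / 2
    return (edges / (n * (n - 1) / 2), sum(degrees) / n)

def fit_centroids(graphs, labels):
    """Average the invariant vectors of each class, giving a
    nearest-centroid classifier in invariant space."""
    by_class = {}
    for adj, y in zip(graphs, labels):
        by_class.setdefault(y, []).append(invariants(adj))
    return {y: tuple(sum(col) / len(col) for col in zip(*feats))
            for y, feats in by_class.items()}

def classify(centroids, adj):
    """Assign a new graph to the class with the nearest invariant centroid."""
    f = invariants(adj)
    return min(centroids, key=lambda y: math.dist(f, centroids[y]))

# Toy training data: a complete graph vs. a path graph on four vertices.
k4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
p4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
centroids = fit_centroids([k4, p4], ["dense", "sparse"])
```

The abstract's point is that such recipes discard the graph structure the joint graph-class model retains; the sketch only makes the baseline concrete.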

public 01:14:53

Stephan Huckemann : Statistical challenges in shape prediction of biomolecules

  -   Mathematical Biology ( 176 Views )

The three-dimensional, higher-order structure of biomolecules determines their functionality. While the primary structure is fairly easy to assess, reconstruction of higher-order structure is costly and often requires elaborate correction of atomic clashes, which is frequently not fully successful. Using RNA data, we describe a purely statistical method that learns error correction, drawing power from a two-scale approach. Our microscopic scale describes single suites by the dihedral angles of individual atom bonds; here, addressing the challenge of torus principal component analysis (PCA) leads to a fundamentally new approach to PCA, building on the principal nested spheres of Jung et al. (2012). Based on an observed relationship with a mesoscopic scale, with landmarks describing several suites, we use Fréchet means for angular shape and size-and-shape to correct within-suite backbone-to-backbone clashes. We validate the method by comparison with reconstructions obtained from simulations approximating biophysical chemistry, and we illustrate its power with the RNA example of SARS-CoV-2.

This is joint work with Benjamin Eltzner, Kanti V. Mardia and Henrik Wiechers.

Literature:

Eltzner, B., Huckemann, S. F., Mardia, K. V. (2018): Torus principal component analysis with applications to RNA structure. Ann. Appl. Statist. 12(2), 1332–1359.

Jung, S., Dryden, I. L., Marron, J. S. (2012): Analysis of principal nested spheres. Biometrika, 99(3), 551–568.

Mardia, K. V., Wiechers, H., Eltzner, B., Huckemann, S. F. (2022). Principal component analysis and clustering on manifolds. Journal of Multivariate Analysis, 188, 104862, https://www.sciencedirect.com/science/article/pii/S0047259X21001408

Wiechers, H., Eltzner, B., Mardia, K. V., Huckemann, S. F. (2021). Learning torus PCA based classification for multiscale RNA backbone structure correction with application to SARS-CoV-2. To appear in the Journal of the Royal Statistical Society, Series C, bioRxiv https://doi.org/10.1101/2021.08.06.455406
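The Fréchet means for angular data that the abstract relies on can be illustrated in the simplest case, a set of angles on the unit circle. This is a brute-force one-dimensional sketch of the general idea, not the torus PCA or principal-nested-spheres machinery of the papers above; the grid resolution is an arbitrary assumption.

```python
import math

def frechet_mean_circle(angles, grid=3600):
    """Intrinsic (Fréchet) mean of angles on the unit circle: the point
    minimizing the sum of squared geodesic (arc-length) distances to the
    data. Found here by brute-force grid search, which is fine for
    illustration though not for serious use."""
    def geodesic(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)
    candidates = (2 * math.pi * k / grid for k in range(grid))
    return min(candidates,
               key=lambda c: sum(geodesic(c, a) ** 2 for a in angles))

# Angles clustered near 0 but wrapping across it: a naive arithmetic mean
# of the raw values is pulled far away, while the Fréchet mean stays near 0.
angles = [0.1, 0.2, 2 * math.pi - 0.15]
m = frechet_mean_circle(angles)
```

The same definition, minimizing summed squared geodesic distance, carries over to the torus of dihedral angles and to shape spaces; only the distance changes.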

public 01:14:48

Steven Baer : Multiscale Modeling of Neural Subcircuits and Feedback Mechanisms in the Outer Plexiform Layer of the Retina

  -   Mathematical Biology ( 143 Views )

Visual processing begins in the outer plexiform layer of the retina, where bipolar, horizontal, and photoreceptor cells interact. In vertebrates, the onset of dim backgrounds can enhance the small-spot flicker responses of retinal horizontal cells, a response called background-induced flicker enhancement. The underlying feedback mechanism is unclear, but competing hypotheses have been proposed. One is the GABA hypothesis, which states that the inhibitory neurotransmitter GABA, released from horizontal cells, mediates the feedback by blocking calcium channels. Another is the ephaptic hypothesis, which contends that calcium entry is regulated by changes in the electrical potential within the intersynaptic space between cones and horizontal cells. In this study, a continuum spine model of cone-horizontal cell synaptic circuitry is formulated. The model captures two spatial scales: the scale of an individual synapse and the scale of the receptive field, involving hundreds to thousands of synapses. We show that the ephaptic mechanism produces reasonable qualitative agreement with the temporal dynamics exhibited in flicker enhancement experiments. We find that although GABA produces enhancement, this mechanism alone is insufficient to reproduce the experimental results. We view this multiscale continuum approach as a first step toward a multi-layer mathematical model of retinal circuitry, which would include the other ‘brain nuclei’ within the retina: the inner plexiform layer, where bipolar, amacrine, interplexiform, and ganglion cells interact.
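The cone-to-horizontal-cell feedback loop at the heart of both hypotheses can be caricatured as a two-variable negative-feedback system. This toy sketch is not the continuum spine model of the talk: the linear feedback form, the forward-Euler integration, and every parameter value are illustrative assumptions, shown only to make concrete how feedback reshapes the steady cone response.

```python
def simulate(t_end=10.0, dt=0.001, drive=1.0, g_fb=2.0):
    """Forward-Euler integration of a toy cone/horizontal-cell loop.
    The cone signal c is driven by light (`drive`) and inhibited by
    horizontal-cell feedback h with gain `g_fb`; h in turn tracks c.
    All forms and parameters are illustrative assumptions."""
    c, h = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dc = drive - c - g_fb * h   # light drive minus decay and feedback
        dh = c - h                  # horizontal cell relaxes toward the cone
        c, h = c + dc * dt, h + dh * dt
    return c, h

# With feedback, the steady cone response settles near drive / (1 + g_fb);
# opening the loop (g_fb = 0) lets it rise to the full drive.
c_closed, _ = simulate()
c_open, _ = simulate(g_fb=0.0)
```

Even this caricature shows why feedback gain matters for response amplitude; discriminating between the GABA and ephaptic mechanisms requires the spatially resolved, two-scale model described in the abstract.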