public 01:24:58

Ken Kamrin : A hierarchy of continuum models for granular flow

  -   Applied Math and Analysis ( 98 Views )

Granular materials are common in everyday life but have historically been difficult to model. This has direct ramifications owing to the prominent role granular media play in many industries and in terrain dynamics. One can attempt to track every grain with discrete particle methods, but realistic systems are often too large for this approach and a continuum model is desired. However, granular media display unusual behaviors that complicate the continuum treatment: they can behave like a solid, flow like a liquid, or separate into a "gas", and the rheology of the flowing state displays remarkable subtleties that have long resisted modeling. To address these challenges, in this talk we develop a family of continuum models and solvers, providing quantitative modeling capabilities for a variety of applications, from general flow problems to specific techniques for intrusion, impact, driving, and locomotion in grains.

When calculating flows in general cases, a rather significant nonlocal effect is evident, which is well described by our recent nonlocal model accounting for grain cooperativity within the flow rule. This model enables us to capture a number of seemingly disparate manifestations of particle-size effects in granular flows, including: (i) the wide shear bands observed in many inhomogeneous flows, (ii) the apparent strengthening exhibited by thin layers of grains, and (iii) the fluidization caused by far-away motion of a boundary. On the other hand, to model only the intrusion forces on submerged objects, we will show, and explain why, many of the experimentally observed results can be captured by a much simpler tension-free frictional plasticity model. This approach gives rise to some surprisingly simple general tools, including granular Resistive Force Theory and a broad set of scaling laws inherent to the problem of granular locomotion. These scalings are validated experimentally and in discrete particle simulations, suggesting a new down-scaled paradigm for granular locomotive design, on earth and beyond, to be used much like scaling laws in fluid mechanics.
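As a rough sketch of what a nonlocal flow rule of this kind looks like (following the published nonlocal granular fluidity literature; the exact form, constants, and boundary conditions used in the talk may differ), the local μ(I)-type rheology is augmented by a diffusive term in a "fluidity" field g:

    \dot\gamma = g\,\mu,
    \qquad
    g = g_{\mathrm{loc}}(\mu, P) + \xi^{2}(\mu)\,\nabla^{2} g,
    \qquad
    \xi(\mu) = \frac{A\,d}{\sqrt{\lvert \mu - \mu_{s}\rvert}},

    g_{\mathrm{loc}}(\mu, P) =
    \begin{cases}
      \dfrac{\sqrt{P/\rho_{s}}}{d}\,\dfrac{\mu - \mu_{s}}{b\,\mu}, & \mu > \mu_{s},\\[6pt]
      0, & \mu \le \mu_{s}.
    \end{cases}

Here μ is the local shear-to-normal stress ratio, P the pressure, γ̇ the shear rate, d the grain diameter, ρ_s the grain density, μ_s the static yield ratio, and A, b dimensionless material parameters. The Laplacian term transmits flow information over distances of a few grain diameters, which is how a model of this type produces wide shear bands, strengthening of thin layers, and fluidization by remote boundary motion as in items (i)-(iii) above.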

public 01:14:44

Johann Guilleminot : Stochastic Modeling and Simulations of Random Fields in Computational Nonlinear Mechanics

  -   Applied Math and Analysis ( 94 Views )

Accounting for system-parameter and model uncertainties in computational models is a highly topical issue at the interface of computational mechanics, materials science and probability theory. In addition to the construction of efficient (e.g. Galerkin-type) stochastic solvers, the construction, calibration and validation of probabilistic representations are now widely recognized as key ingredients for performing accurate and robust simulations. This talk is specifically focused on the modeling and simulation of spatially-dependent properties in both linear and nonlinear frameworks. Information-theoretic models for matrix-valued random fields are first introduced. These representations are typically used, in solid mechanics, to define tensor-valued coefficients in elliptic stochastic partial differential operators. The main concepts and tools are illustrated, throughout this part, by considering the modeling of elasticity tensors fluctuating over nonpolyhedral geometries, as well as the modeling and identification of random interfaces in polymer nanocomposites. The latter application relies, in particular, on a statistical inverse problem coupling large-scale Molecular Dynamics simulations and a homogenization procedure. We then address the probabilistic modeling of strain energy functions in nonlinear elasticity. Here, constraints related to the polyconvexity of the potential are notably taken into account in order to ensure the existence of a stochastic solution. The proposed framework is finally exemplified by considering the modeling of various soft biological tissues, such as human brain and liver tissues.
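As a toy illustration of what simulating a spatially dependent random property can look like in code (a minimal lognormal-field sketch in Python, assuming a squared-exponential covariance; this is not the information-theoretic construction, the nanocomposite identification procedure, or the polyconvex strain-energy model discussed in the talk), one can draw positive, spatially correlated realizations of a scalar modulus field on a 1D grid as follows:

    import numpy as np

    def sample_modulus_field(x, mean_E=1.0e9, cv=0.2, corr_len=0.1,
                             n_samples=5, seed=0):
        """Draw lognormal realizations of a spatially correlated modulus field E(x).

        x        : 1D array of grid points
        mean_E   : target mean of E
        cv       : coefficient of variation of E
        corr_len : correlation length of the underlying Gaussian field
        """
        rng = np.random.default_rng(seed)
        # Lognormal parameters matching the requested mean and coefficient of variation
        sigma2 = np.log(1.0 + cv**2)
        mu = np.log(mean_E) - 0.5 * sigma2
        # Squared-exponential covariance of the underlying Gaussian field
        dx = x[:, None] - x[None, :]
        C = sigma2 * np.exp(-0.5 * (dx / corr_len) ** 2)
        # Factor the covariance (small jitter keeps the Cholesky factorization stable)
        L = np.linalg.cholesky(C + 1e-8 * np.eye(len(x)))
        z = rng.standard_normal((len(x), n_samples))
        return np.exp(mu + L @ z)  # positive by construction; one realization per column

    if __name__ == "__main__":
        x = np.linspace(0.0, 1.0, 200)
        E = sample_modulus_field(x)
        print(E.shape, E.min(), E.max())

Each column of the returned array is one realization that could be fed into a deterministic solve, or into a sampling-based or Galerkin-type stochastic solver; the exponential map is just one simple way of enforcing the positivity that ellipticity of the underlying operator requires.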

public 01:34:32

Ioannis Kevrekidis : No Equations, No Variables, No Parameters, No Space, No Time -- Data, and the Crystal Ball Modeling of Complex/Multiscale Systems

  -   Applied Math and Analysis ( 184 Views )

Obtaining predictive dynamical equations from data lies at the heart of science and engineering modeling, and is the linchpin of our technology. In mathematical modeling one typically progresses from observations of the world (and some serious thinking!) first to selection of variables, then to equations for a model, and finally to the analysis of the model to make predictions. Good mathematical models give good predictions (and inaccurate ones do not) --- but the computational tools for analyzing them are the same: algorithms that typically operate on closed-form equations.
While the skeleton of the process remains the same, today we witness the development of mathematical techniques that operate directly on observations --- data, and appear to circumvent the serious thinking that goes into selecting variables and parameters and deriving accurate equations. The process then may appear to the user a little like making predictions by "looking into a crystal ball". Yet the "serious thinking" is still there and uses the same --- and some new --- mathematics: it goes into building algorithms that "jump directly" from data to the analysis of the model (which is now not available in closed form) so as to make predictions. Our work here presents a couple of efforts that illustrate this "new" path from data to predictions. It really is the same old path, but it is traveled by new means.
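For concreteness, here is a deliberately simple Python sketch of the "data straight to predictions" path described above: observe trajectories of a system whose equations we pretend not to know, fit a surrogate right-hand side by regression, and integrate the surrogate to forecast. The toy oscillator, the polynomial feature library, and all parameters are illustrative assumptions, not the equation-free or manifold-learning machinery of the talk.

    import numpy as np

    def observe(x0, dt, n_steps):
        """Generate 'observations' from a system whose equations we pretend not to know."""
        def rhs(x):  # hypothetical truth: a damped pendulum, for demonstration only
            return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])
        xs = [np.asarray(x0, dtype=float)]
        for _ in range(n_steps):  # classical fourth-order Runge-Kutta integration
            x = xs[-1]
            k1 = rhs(x); k2 = rhs(x + 0.5*dt*k1); k3 = rhs(x + 0.5*dt*k2); k4 = rhs(x + dt*k3)
            xs.append(x + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4))
        return np.array(xs)

    def fit_surrogate(X, dt, degree=3):
        """Least-squares fit of dX/dt as a polynomial in the observed state."""
        dXdt = (X[2:] - X[:-2]) / (2.0 * dt)       # centered finite differences
        states = X[1:-1]
        def feats(s):                              # monomial feature library, total degree <= 3
            s = np.atleast_2d(s)
            return np.stack([s[:, 0]**i * s[:, 1]**j
                             for i in range(degree + 1)
                             for j in range(degree + 1 - i)], axis=1)
        coefs, *_ = np.linalg.lstsq(feats(states), dXdt, rcond=None)
        return lambda s: feats(s) @ coefs

    def predict(model, x0, dt, n_steps):
        """Integrate the learned surrogate forward with simple Euler steps."""
        xs = [np.asarray(x0, dtype=float)]
        for _ in range(n_steps):
            xs.append(xs[-1] + dt * model(xs[-1]).ravel())
        return np.array(xs)

    if __name__ == "__main__":
        dt = 0.01
        data = observe([1.0, 0.0], dt, 2000)       # "measured" trajectory
        surrogate = fit_surrogate(data, dt)        # data -> model, no closed-form equations
        forecast = predict(surrogate, data[-1], dt, 500)
        print(forecast[-1])

The "serious thinking" referred to above lives in the choices this sketch hard-codes: which variables to observe, which feature library to regress on, and how to validate the resulting surrogate before trusting its predictions.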