
    Machine Learning and Inference Methods for Surrogate Modeling and Inexpensive Characterization of Elastodynamic Systems

    This thesis has two main focuses: (1) surrogate modeling of elastodynamic systems, and (2) inference methods for the inexpensive characterization of elastodynamic systems. Elastodynamics is the study of how and why materials move and deform when they are subject to time-varying loads, covering a wide range of applications from architected materials to telecommunications, seismology, sound isolation, non-destructive evaluation, and medical imaging. Here, we explore how to model elastodynamics more efficiently, and what we can infer about our environment by observing elastodynamic behaviors. The next generation of material engineering aims to leverage advanced multi-functional control over elastodynamic behaviors, but is currently limited by the large computational cost of purely physics-based modeling methods. Surrogate models aim to alleviate this cost by providing a data-driven approach for evaluating engineered material systems more efficiently. However, most current surrogate models lack certain useful traits, diminishing their potential for real-world use. This thesis begins by surveying the current state of surrogate modeling techniques and establishing a set of state-of-the-art traits that greatly augment the utility of surrogate models, offering a perspective on the future direction of the field. Next, GPR-dispersion, a data-driven surrogate model based on Gaussian process regression for computing dispersion relations, is developed. The model exhibits several of the aforementioned traits, including representation invariance, data efficiency, direct incorporation of physical theories, and the provision of both uncertainty estimates on its predictions and gradients for compatibility with gradient-based design optimization methods. GPR-dispersion is evaluated against both deep learning and traditional physics-based models.
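The core idea of a GPR surrogate (fit a Gaussian process to a handful of expensive evaluations, then predict new inputs with calibrated uncertainty) can be sketched on a toy problem. The kernel choice and the sine-shaped stand-in for a dispersion relation below are illustrative assumptions, not the thesis's GPR-dispersion model:

```python
# Toy sketch of a GPR surrogate for a dispersion-like curve.
# The sine "dispersion relation" and RBF kernel are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

k = np.linspace(0, np.pi, 12)[:, None]   # a few sampled wavenumbers
omega = np.sin(k).ravel()                # stand-in for expensive solver output

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gpr.fit(k, omega)

# Predict a dense curve, with an uncertainty estimate at every point.
k_test = np.linspace(0, np.pi, 100)[:, None]
mean, std = gpr.predict(k_test, return_std=True)
```

The `return_std=True` output is what gives the surrogate its per-prediction uncertainty estimate; gradients of the posterior mean are likewise available analytically for gradient-based design optimization.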
The thesis then pivots to inference methods for the inexpensive characterization of material systems via partial observation of elastodynamic behaviors. Tissue stiffness is a tremendously important biomarker for a long list of health conditions, but often needs to be evaluated in a medical clinic with expensive equipment and highly trained workers. At-home health monitoring is a major next-generation goal of healthcare, but the trajectories of current consumer-grade sensor technology and biomarker inference methods have not yet fully intersected. Inspired by a related work (Visual Vibration Tomography), Visual Surface Wave Tomography (VSWT) is proposed. VSWT observes partial information about the surface waves of layered elastodynamic systems (such as biological tissue) through monocular video to infer subsurface constitutive and geometrical information. Simulated experiments are presented to evaluate the accuracy, sensitivity, and limits of the method under ideal conditions. Real-world experimental results are presented using phantom materials that emulate biological tissue to demonstrate a practical proof of concept.

    How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning

    Metamaterials are composite materials with engineered geometrical micro- and meso-structures that can lead to uncommon physical properties, like negative Poisson's ratio or ultra-low shear resistance. Periodic metamaterials are composed of repeating unit-cells, and geometrical patterns within these unit-cells influence the propagation of elastic or acoustic waves and control dispersion. In this work, we develop a new interpretable, multi-resolution machine learning framework for finding patterns in the unit-cells of materials that reveal their dynamic properties. Specifically, we propose two new interpretable representations of metamaterials, called shape-frequency features and unit-cell templates. Machine learning models built using these feature classes can accurately predict dynamic material properties. These feature representations (particularly the unit-cell templates) have a useful property: they can operate on designs of higher resolutions. By learning key coarse-scale patterns that can be reliably transferred to a finer-resolution design space via the shape-frequency features or unit-cell templates, we can almost freely design the fine-resolution features of the unit-cell without changing the coarse-scale physics. Through this multi-resolution approach, we are able to design materials that possess target frequency ranges in which waves are allowed or disallowed to propagate (frequency bandgaps). Our approach yields major benefits: (1) unlike typical machine learning approaches to materials science, our models are interpretable, (2) our approach leverages multi-resolution properties, and (3) our approach provides design flexibility.
    Comment: Under review
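A minimal sketch of what a shape-frequency feature could compute, assuming it counts how often a small solid shape fits entirely inside the solid region of a binary unit-cell. The shape, unit-cell, and containment rule below are guesses for illustration, not the paper's exact definition:

```python
# Illustrative "shape-frequency" count over a binary unit-cell.
# cell/shape entries: 1 = solid material, 0 = void. All values are toy data.
import numpy as np

def shape_frequency(cell, shape):
    """Fraction of placements where all of `shape`'s solid pixels land on solid pixels of `cell`."""
    H, W = cell.shape
    h, w = shape.shape
    hits = 0
    total = (H - h + 1) * (W - w + 1)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            window = cell[i:i + h, j:j + w]
            if np.all(window[shape == 1] == 1):
                hits += 1
    return hits / total

cell = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]])
shape = np.ones((2, 2), dtype=int)   # a 2x2 solid block as the probe shape
freq = shape_frequency(cell, shape)  # 2 solid 2x2 blocks out of 9 placements
```

A bank of such counts over many probe shapes yields an interpretable feature vector: each entry answers "how often does this pattern occur?", which is directly inspectable by a human, unlike a learned latent embedding.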

    Visual Vibration Tomography: Estimating Interior Material Properties from Monocular Video

    An object's interior material properties, while invisible to the human eye, determine motion observed on its surface. We propose an approach that estimates heterogeneous material properties of an object from a monocular video of its surface vibrations. Specifically, we show how to estimate Young's modulus and density throughout a 3D object with known geometry. Knowledge of how these values change across the object is useful for simulating its motion and characterizing any defects. Traditional non-destructive testing approaches, which often require expensive instruments, generally estimate only homogenized material properties or simply identify the presence of defects. In contrast, our approach leverages monocular video to (1) identify image-space modes from an object's sub-pixel motion, and (2) directly infer spatially-varying Young's modulus and density values from the observed modes. We demonstrate our approach on both simulated and real videos
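Step (1), identifying modes from observed motion, can be illustrated with a synthetic toy: treat each pixel's intensity over time as a signal, transform to the frequency domain, and locate the dominant peak. Real pipelines use sub-pixel, phase-based motion extraction; the frame rate, mode frequency, and noise level below are all assumed for illustration:

```python
# Toy frequency-domain mode identification from per-pixel time series.
# A real system extracts sub-pixel motion first; here pixels oscillate directly.
import numpy as np

fs = 100.0                      # assumed video frame rate (frames/s)
t = np.arange(0, 2.0, 1 / fs)   # 2 seconds of "video"
f_mode = 7.0                    # assumed true vibration frequency (Hz)

rng = np.random.default_rng(1)
phases = rng.uniform(0, 2 * np.pi, (4, 4, 1))        # random phase per pixel
frames = (np.sin(2 * np.pi * f_mode * t + phases)    # 4x4 "pixels" over time
          + 0.1 * rng.standard_normal((4, 4, t.size)))

# Average per-pixel magnitude spectra, then pick the dominant non-DC bin.
spectrum = np.abs(np.fft.rfft(frames, axis=-1)).mean(axis=(0, 1))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]
```

The spatial pattern of amplitude and phase at each such peak is what constitutes an image-space mode; the inverse step then maps those modes back to spatially-varying stiffness and density.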

    On Aethalometer measurement uncertainties and an instrument correction factor for the Arctic

    Several types of filter-based instruments are used to estimate aerosol light absorption coefficients. Two significant results are presented based on Aethalometer measurements at six Arctic stations from 2012 to 2014. First, an alternative method of post-processing the Aethalometer data is presented, which reduces measurement noise and lowers the detection limit of the instrument more effectively than box-car averaging. The biggest benefit of this approach can be achieved if instrument drift is minimised. Moreover, by using an attenuation threshold criterion for data post-processing, the relative uncertainty from the electronic noise of the instrument is kept constant. This approach results in a time series with a variable collection time (Delta t) but with a constant relative uncertainty with regard to electronic noise in the instrument. An additional advantage of this method is that the detection limit of the instrument will be lowered at small aerosol concentrations at the expense of temporal resolution, whereas there is little to no loss in temporal resolution at high aerosol concentrations (>2.1-6.7 Mm^-1 as measured by the Aethalometers). At high aerosol concentrations, minimising the detection limit of the instrument is less critical. Additionally, utilising co-located filter-based absorption photometers, a correction factor is presented for the Arctic that can be used in Aethalometer corrections available in the literature. A correction factor of 3.45 was calculated for low-elevation Arctic stations. This correction factor harmonises Aethalometer attenuation coefficients with light absorption coefficients as measured by the co-located light absorption photometers. Using one correction factor for Arctic Aethalometers has the advantage that measurements between stations become more inter-comparable.
    Peer reviewed
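The attenuation-threshold post-processing described above can be sketched as: accumulate raw samples until the attenuation change crosses a threshold, then emit one averaged point, so windows are short at high concentrations and long at low ones, keeping the relative electronic-noise contribution roughly constant. The threshold value and sample data are illustrative assumptions:

```python
# Sketch of attenuation-threshold averaging with variable collection time.
# d_atn_min and the sample series below are illustrative, not instrument values.

def threshold_average(times, atn, d_atn_min=0.1):
    """Emit (attenuation rate, window length) once attenuation change >= d_atn_min."""
    windows = []
    start = 0
    for i in range(1, len(atn)):
        if abs(atn[i] - atn[start]) >= d_atn_min:
            dt = times[i] - times[start]          # variable collection time
            windows.append(((atn[i] - atn[start]) / dt, dt))
            start = i
    return windows

# High concentration: attenuation changes fast, so windows stay short.
fast = threshold_average([0, 1, 2, 3, 4], [0.0, 0.15, 0.30, 0.45, 0.60])
# Low concentration: slow change, so one long window is emitted instead.
slow = threshold_average([0, 1, 2, 3, 4], [0.0, 0.03, 0.06, 0.09, 0.12])
```

Because each emitted point spans the same attenuation change, the electronic-noise term contributes the same relative uncertainty to every point, which is the property the abstract highlights over fixed-width box-car averaging.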

    WW Production Cross Section and W Branching Fractions in e+e- Collisions at 189 GeV

    From a data sample of 183 pb^-1 recorded at a center-of-mass energy of sqrt(s) = 189 GeV with the OPAL detector at LEP, 3068 W-pair candidate events are selected. Assuming Standard Model W boson decay branching fractions, the W-pair production cross section is measured to be sigma(WW) = 16.30 +- 0.34(stat.) +- 0.18(syst.) pb. When combined with previous OPAL measurements, the W boson branching fraction to hadrons is determined to be 68.32 +- 0.61(stat.) +- 0.28(syst.) % assuming lepton universality. These results are consistent with Standard Model expectations.
    Comment: 22 pages, 5 figures, submitted to Phys. Lett.

    A measurement of the tau mass and the first CPT test with tau leptons

    We measure the mass of the tau lepton to be 1775.1 +- 1.6(stat.) +- 1.0(syst.) MeV using tau pairs from Z0 decays. To test CPT invariance we compare the masses of the positively and negatively charged tau leptons. The relative mass difference is found to be smaller than 3.0 x 10^-3 at the 90% confidence level.
    Comment: 10 pages, 4 figures, submitted to Phys. Lett.

    Measurement of the B0 Lifetime and Oscillation Frequency using B0->D*+l-v decays

    The lifetime and oscillation frequency of the B0 meson have been measured using B0->D*+l-v decays recorded on the Z0 peak with the OPAL detector at LEP. The D*+ -> D0pi+ decays were reconstructed using an inclusive technique and the production flavour of the B0 mesons was determined using a combination of tags from the rest of the event. The results t_B0 = 1.541 +- 0.028 +- 0.023 ps, Dm_d = 0.497 +- 0.024 +- 0.025 ps-1 were obtained, where in each case the first error is statistical and the second systematic.
    Comment: 17 pages, 4 figures, submitted to Phys. Lett.

    A Measurement of Rb using a Double Tagging Method

    The fraction of Z to bbbar events in hadronic Z decays has been measured by the OPAL experiment using the data collected at LEP between 1992 and 1995. The Z to bbbar decays were tagged using displaced secondary vertices, and high momentum electrons and muons. Systematic uncertainties were reduced by measuring the b-tagging efficiency using a double tagging technique. Efficiency correlations between opposite hemispheres of an event are small, and are well understood through comparisons between real and simulated data samples. A value of Rb = 0.2178 +- 0.0011 +- 0.0013 was obtained, where the first error is statistical and the second systematic. The uncertainty on Rc, the fraction of Z to ccbar events in hadronic Z decays, is not included in the errors. The dependence on Rc is Delta(Rb)/Rb = -0.056*Delta(Rc)/Rc, where Delta(Rc) is the deviation of Rc from the value 0.172 predicted by the Standard Model. The result for Rb agrees with the value of 0.2155 +- 0.0003 predicted by the Standard Model.
    Comment: 42 pages, LaTeX, 14 eps figures included, submitted to European Physical Journal
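The quoted Rc dependence is simple arithmetic and can be evaluated directly. The helper below just rearranges Delta(Rb)/Rb = -0.056 * Delta(Rc)/Rc into an absolute shift of Rb, using the central values stated in the abstract; the example Rc input is an arbitrary illustration:

```python
# Evaluate the stated Rc dependence of the Rb measurement.
# Central values are taken from the abstract; the shifted Rc is illustrative.
RB_MEASURED = 0.2178   # measured Rb central value
RC_SM = 0.172          # Standard Model Rc assumed in the analysis

def rb_shift(rc):
    """Absolute shift of Rb when Rc deviates from its Standard Model value."""
    delta_rc = rc - RC_SM
    return RB_MEASURED * (-0.056 * delta_rc / RC_SM)

no_shift = rb_shift(RC_SM)     # Rc at its SM value leaves Rb unchanged
example = rb_shift(0.182)      # a +0.010 shift in Rc pulls Rb slightly down
```

The small coefficient (-0.056) is why the Rc uncertainty is quoted separately rather than folded into the Rb errors: even a sizeable Rc deviation moves Rb by well under its statistical uncertainty of 0.0011.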

    First Measurement of Z/gamma* Production in Compton Scattering of Quasi-real Photons

    We report the first observation of Z/gamma* production in Compton scattering of quasi-real photons. This is a subprocess of the reaction e+e- to e+e-Z/gamma*, where one of the final state electrons is undetected. Approximately 55 pb-1 of data collected in the year 1997 at an e+e- centre-of-mass energy of 183 GeV with the OPAL detector at LEP have been analysed. The Z/gamma* from Compton scattering has been detected in the hadronic decay channel. Within well defined kinematic bounds, we measure the product of cross-section and Z/gamma* branching ratio to hadrons to be (0.9+-0.3+-0.1) pb for events with a hadronic mass larger than 60 GeV, dominated by (e)eZ production. In the hadronic mass region between 5 GeV and 60 GeV, dominated by (e)egamma* production, this product is found to be (4.1+-1.6+-0.6) pb. Our results agree with the predictions of two Monte Carlo event generators, grc4f and PYTHIA.
    Comment: 18 pages, LaTeX, 5 eps figures included, submitted to Physics Letters