
    In-situ crack and keyhole pore detection in laser directed energy deposition through acoustic signal and deep learning

    Cracks and keyhole pores are detrimental defects in alloys produced by laser directed energy deposition (LDED). The laser-material interaction sound may hold information about underlying complex physical events such as crack propagation and pore formation. However, due to the noisy environment and intricate signal content, acoustic-based monitoring in LDED has received little attention. This paper proposes a novel acoustic-based in-situ defect detection strategy for LDED. The key contribution of this study is an in-situ acoustic signal denoising, feature extraction, and sound classification pipeline that incorporates convolutional neural networks (CNN) for online defect prediction. Microscope images are used to identify the locations of cracks and keyhole pores within a part, and the defect locations are spatiotemporally registered with the acoustic signal. Various acoustic features corresponding to defect-free regions, cracks, and keyhole pores are extracted and analysed in time-domain, frequency-domain, and time-frequency representations. The CNN model is trained to predict defect occurrences using the Mel-Frequency Cepstral Coefficients (MFCCs) of the laser-material interaction sound, and is compared to various classic machine learning models trained on the denoised and raw acoustic datasets. The validation results show that the CNN model trained on the denoised dataset outperforms the others, with the highest overall accuracy (89%), keyhole pore prediction accuracy (93%), and AUC-ROC score (98%). Furthermore, the trained CNN model can be deployed in an in-house developed software platform for online quality monitoring. This is the first study to use acoustic signals with deep learning for in-situ defect detection in the LDED process.
    Comment: 36 pages, 16 figures, accepted at the journal Additive Manufacturing
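
    As a rough illustration of the MFCC-plus-CNN idea described above (not the paper's actual architecture or configuration), here is a minimal Python sketch, assuming fixed-length acoustic clips, librosa for MFCC extraction, PyTorch for the classifier, and three illustrative classes (defect-free, crack, keyhole pore):

        # Minimal sketch of an MFCC + CNN defect classifier (illustrative; not the paper's model).
        import numpy as np
        import librosa
        import torch
        import torch.nn as nn

        def mfcc_image(clip, sr=44100, n_mfcc=20):
            """Turn a 1-D acoustic clip into a 2-D MFCC 'image' for the CNN."""
            m = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=n_mfcc)
            return (m - m.mean()) / (m.std() + 1e-8)        # simple per-clip normalisation

        class DefectCNN(nn.Module):
            def __init__(self, n_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):                            # x: (batch, 1, n_mfcc, frames)
                return self.classifier(self.features(x).flatten(1))

        # Usage: batch of clips -> MFCC maps -> class scores
        clips = np.random.randn(8, 44100).astype(np.float32)   # placeholder audio
        maps = torch.tensor(np.stack([mfcc_image(c) for c in clips]), dtype=torch.float32).unsqueeze(1)
        logits = DefectCNN()(maps)                              # (8, 3) class scores

    In practice the clips would come from the denoised, spatiotemporally registered recordings, and the network would be trained with a standard cross-entropy loss against the microscope-derived defect labels.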

    Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics

    Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts. In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories, and hence its use is limited. In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R² in least squares regression. In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
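
    To make the exact-test idea concrete, here is a brute-force Python sketch (my own illustration, not the thesis's faster algorithm): enumerate every possible count vector and sum the multinomial probabilities of all outcomes whose test statistic is at least as extreme as the observed one. The exponential growth in the number of categories mentioned above is visible directly in the enumeration.

        from math import exp, lgamma, log

        def compositions(n, k):
            # All count vectors (x1, ..., xk) of non-negative integers summing to n.
            if k == 1:
                yield (n,)
                return
            for first in range(n + 1):
                for rest in compositions(n - first, k - 1):
                    yield (first,) + rest

        def log_multinomial_pmf(x, p):
            n = sum(x)
            out = lgamma(n + 1)
            for xi, pi in zip(x, p):
                out += -lgamma(xi + 1) + (xi * log(pi) if xi else 0.0)
            return out

        def llr_statistic(x, p):
            # Log-likelihood ratio statistic G = 2 * sum x_i * log(x_i / (n * p_i)).
            n = sum(x)
            return 2.0 * sum(xi * log(xi / (n * pi)) for xi, pi in zip(x, p) if xi > 0)

        def exact_pvalue(observed, p):
            # Exact p-value: total probability of outcomes at least as extreme as 'observed'.
            t_obs = llr_statistic(observed, p)
            return sum(exp(log_multinomial_pmf(x, p))
                       for x in compositions(sum(observed), len(observed))
                       if llr_statistic(x, p) >= t_obs - 1e-12)

        # Example: 10 observations over 3 categories with hypothesised probabilities (1/2, 1/4, 1/4).
        print(exact_pvalue((7, 1, 2), (0.5, 0.25, 0.25)))

    Swapping llr_statistic for Pearson's chi-square statistic gives the corresponding exact test; the thesis's algorithm improves on enumeration-style approaches, though its runtime also remains exponential in the number of categories.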

    Percolation and electrical conduction in random systems of curved linear objects on a plane: computer simulations along with a mean-field approach

    Using computer simulations, we have studied the percolation and the electrical conductance of two-dimensional, random percolating networks of curved, zero-width metallic nanowires. We mimicked the curved nanowires using circular arcs. The percolation threshold decreased as the aspect ratio of the arcs increased. Comparison with published data on the percolation threshold of symmetric quadratic Bézier curves suggests that, when the percolation of slightly curved wires is simulated, the particular choice of curve to mimic the shape of real-world wires is of little importance. Considering the electrical properties, we took into account both the nanowire resistance per unit length and the junction (nanowire/nanowire contact) resistance. Using a mean-field approximation (MFA), we derived the total electrical conductance of the nanowire-based networks as a function of their geometrical and physical parameters. The MFA predictions have been confirmed by our Monte Carlo numerical simulations. For our random, homogeneous, and isotropic systems of conductive curved wires, the electrical conductance decreased as the wire shape changed from a stick to a ring while the wire length remained fixed.
    Comment: 8 pages, 7 figures, 2 tables, 32 refs.; Supplemental Material: 9 pages, 2 figures, 2 refs.
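
    A minimal Monte Carlo sketch of the geometric part of such a simulation (illustrative only; the arc length, central angle, number of wires, and chord discretisation are arbitrary choices, not the paper's parameters): drop circular arcs at random in a unit square, detect wire/wire crossings, and use union-find to test for a cluster spanning the left and right edges.

        # Spanning (left-right) test for a random system of circular arcs in a unit square.
        import numpy as np

        def arc_polyline(rng, length=0.15, central_angle=np.pi / 2, n_chords=6):
            """One curved wire: a circular arc of fixed arc length, discretised into short chords."""
            radius = length / central_angle
            t0 = rng.uniform(0.0, 2.0 * np.pi)
            t = np.linspace(t0, t0 + central_angle, n_chords + 1)
            pts = radius * np.column_stack([np.cos(t), np.sin(t)])
            return pts - pts.mean(axis=0) + rng.uniform(0.0, 1.0, size=2)  # random centroid placement

        def _cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def segments_cross(p1, p2, p3, p4):
            d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
            d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
            return (d1 * d2 < 0) and (d3 * d4 < 0)

        def wires_touch(a, b):
            # Cheap bounding-box rejection before the exact segment tests.
            if (a.max(0) < b.min(0)).any() or (b.max(0) < a.min(0)).any():
                return False
            return any(segments_cross(a[i], a[i + 1], b[j], b[j + 1])
                       for i in range(len(a) - 1) for j in range(len(b) - 1))

        def spans(wires):
            """Union-find over wires plus two virtual electrodes at x = 0 and x = 1."""
            n = len(wires)
            parent = list(range(n + 2))
            LEFT, RIGHT = n, n + 1
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i
            def union(i, j):
                parent[find(i)] = find(j)
            for i, w in enumerate(wires):
                if w[:, 0].min() <= 0.0: union(i, LEFT)
                if w[:, 0].max() >= 1.0: union(i, RIGHT)
                for j in range(i):
                    if wires_touch(w, wires[j]):
                        union(i, j)
            return find(LEFT) == find(RIGHT)

        rng = np.random.default_rng(0)
        wires = [arc_polyline(rng) for _ in range(500)]
        print("spanning cluster:", spans(wires))

    Repeating this over many random configurations and number densities gives the spanning probability curve from which the percolation threshold is estimated; the wire and junction resistances only enter the subsequent conductance calculation.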

    Search for third generation vector-like leptons with the ATLAS detector

    The Standard Model of particle physics provides a concise description of the building blocks of our universe in terms of fundamental particles and their interactions. It is an extremely successful theory, providing a plethora of predictions that precisely match experimental observation. In 2012, the Higgs boson was observed at CERN; it was the last particle predicted by the Standard Model that had yet to be discovered. While this added further credibility to the theory, the Standard Model appears incomplete. Notably, it only accounts for 5% of the energy density of the universe (the rest being "dark matter" and "dark energy"), it cannot reconcile the gravitational force with quantum theory, it does not explain the origin of neutrino masses, and it cannot account for the matter/anti-matter asymmetry. The most plausible explanation is that the theory is an approximation and new physics remains to be found. Vector-like leptons are well motivated by a number of theories that seek to provide closure on the Standard Model. They are a simple addition to the Standard Model and can help to resolve a number of discrepancies without disturbing precisely measured observables. This thesis presents a search for vector-like leptons that preferentially couple to tau leptons. The search was performed using proton-proton collision data from the Large Hadron Collider collected by the ATLAS experiment from 2015 to 2018 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 139 inverse femtobarns. Final states of various lepton multiplicities were considered to isolate the vector-like lepton signal against Standard Model and instrumental backgrounds. The major backgrounds mimicking the signal are WZ, ZZ, and tt̄+Z production, as well as mis-identified leptons. A number of boosted decision trees were used to improve rejection power against background, and the signal was measured using a binned-likelihood estimator. No excess relative to the Standard Model was observed. Exclusion limits were placed on vector-like leptons in the mass range of 130 to 898 GeV.
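
    As a toy illustration of the boosted-decision-tree step (synthetic stand-in features, not the analysis's actual inputs or ATLAS data), a short Python sketch training a BDT to separate a signal-like sample from a background-like one and producing the score that would define signal-enriched bins:

        # Toy BDT signal/background discrimination (illustrative only; synthetic data).
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 5000
        # Pretend kinematic features, e.g. scalar sum of lepton pT, missing ET, tau multiplicity.
        bkg = np.column_stack([rng.exponential(150, n), rng.exponential(60, n), rng.poisson(1.0, n)])
        sig = np.column_stack([rng.exponential(300, n), rng.exponential(120, n), rng.poisson(1.8, n)])
        X = np.vstack([bkg, sig])
        y = np.concatenate([np.zeros(n), np.ones(n)])

        bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
        bdt.fit(X, y)
        score = bdt.predict_proba(X)[:, 1]        # BDT output used to define signal-enriched bins
        print("AUC:", roc_auc_score(y, score))

    In the real search, quantities built from the BDT scores feed a binned-likelihood fit over the analysis regions, from which the exclusion limits are derived.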

    Full stack development toward a trapped ion logical qubit

    Quantum error correction is a key step toward the construction of a large-scale quantum computer, as it prevents small infidelities in quantum gates from accumulating over the course of an algorithm. Detecting and correcting errors is achieved by using multiple physical qubits to form a smaller number of robust logical qubits. The physical implementation of a logical qubit requires multiple qubits on which high-fidelity gates can be performed. The project aims to realize a logical qubit based on ions confined on a microfabricated surface trap. Each physical qubit will be a microwave dressed-state qubit based on 171Yb+ ions. Gates are intended to be realized through RF and microwave radiation in combination with magnetic field gradients. The project vertically integrates the software stack down to the hardware compilation layers in order to deliver, in the near future, a fully functional small device demonstrator. This thesis presents novel results on multiple layers of a full-stack quantum computer model. On the hardware level, a robust quantum gate is studied and ion displacement over the X-junction geometry is demonstrated. The experimental organization is optimized through automation and compressed waveform data transmission. A new quantum assembly language dedicated purely to trapped-ion quantum computers is introduced. The demonstrator is aimed at testing implementations of quantum error correction codes while preparing for larger-scale iterations.
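
    To make the opening idea concrete, here is a classical toy model of the three-qubit bit-flip repetition code (my own illustration; it ignores phase errors and everything ion-trap specific): redundant physical bits, parity-based syndrome extraction, and correction let a logical bit survive any single flip.

        # Toy bit-flip repetition code (classical simulation; no phase errors, nothing ion-trap specific).
        import random

        def encode(logical_bit):
            return [logical_bit] * 3                     # |0>_L -> 000, |1>_L -> 111

        def noisy(qubits, p_flip):
            return [b ^ (random.random() < p_flip) for b in qubits]

        def syndrome(q):
            return (q[0] ^ q[1], q[1] ^ q[2])            # parities Z1Z2 and Z2Z3

        def correct(q):
            flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(q))   # which qubit to flip, if any
            if flip is not None:
                q[flip] ^= 1
            return q

        def decode(q):
            return int(sum(q) >= 2)                      # majority vote

        trials, p = 100_000, 0.05
        fails = sum(decode(correct(noisy(encode(0), p))) != 0 for _ in range(trials))
        print(f"logical error rate ~ {fails / trials:.4f} vs physical error rate {p}")

    With physical flip probability p, the logical error rate scales as roughly 3p², which is the basic reason redundancy plus syndrome measurement pays off once the physical gates are good enough.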

    Brain simulation as a cloud service: The Virtual Brain on EBRAINS

    The Virtual Brain (TVB) is now available as open-source services on the cloud research platform EBRAINS (ebrains.eu). It offers software for constructing, simulating and analysing brain network models, including the TVB simulator; magnetic resonance imaging (MRI) processing pipelines to extract structural and functional brain networks; combined simulation of large-scale brain networks with small-scale spiking networks; automatic conversion of user-specified model equations into fast simulation code; simulation-ready brain models of patients and healthy volunteers; Bayesian parameter optimization in epilepsy patient models; data and software for mouse brain simulation; and extensive educational material. TVB cloud services facilitate reproducible online collaboration and discovery of data assets, models, and software embedded in scalable and secure workflows, a precondition for research on large cohort data sets, better generalizability, and clinical translation.
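
    For readers unfamiliar with the term, a brain network model couples regional node dynamics through a structural connectome. A deliberately generic NumPy sketch of that idea (plain phase oscillators and a random placeholder connectome, not the TVB simulator or its API):

        # Generic brain-network-model sketch: each region is a phase oscillator,
        # coupled through a structural connectivity matrix (all values are placeholders).
        import numpy as np

        rng = np.random.default_rng(0)
        n_regions = 76                                   # placeholder region count
        W = rng.random((n_regions, n_regions))
        np.fill_diagonal(W, 0.0)                         # placeholder connectome weights
        omega = 2 * np.pi * rng.normal(10.0, 1.0, n_regions)   # natural frequencies (~10 Hz)

        def simulate(coupling=0.5, dt=1e-3, steps=5000):
            theta = rng.uniform(0, 2 * np.pi, n_regions)
            traj = np.empty((steps, n_regions))
            for k in range(steps):                       # Euler integration of the Kuramoto model
                dtheta = omega + coupling * (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
                theta = theta + dt * dtheta
                traj[k] = theta
            return traj

        phases = simulate()
        sync = np.abs(np.exp(1j * phases).mean(axis=1))  # order parameter: global synchrony over time
        print("mean synchrony:", sync.mean())

    TVB replaces the toy oscillator with neural mass models and the random matrix with subject-specific connectomes, such as those extracted by the MRI pipelines mentioned above.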

    Novel strategies for the modulation and investigation of memories in the hippocampus

    Disruptions of the memory systems in the brain are linked to the manifestation of many neuropsychiatric diseases such as Alzheimer’s disease, depression, and post-traumatic stress disorder. The limited efficacy of current treatments necessitates the development of more effective therapies. Neuromodulation has proven effective in a variety of neurological diseases and could be an attractive solution for memory disorders. However, the application of neuromodulation requires a more detailed understanding of the network dynamics associated with memory formation and recall. In this work, we applied a combination of optical and computational tools to develop a novel strategy for the modulation of memories, and expanded its application to the interrogation of the hippocampal circuitry underlying memory processing in mice. First, we developed a closed-loop optogenetic stimulation platform to activate neurons implicated in memory processing (engram neurons) with high temporal resolution. We applied this platform to modulate the activity of engram neurons and assess memory processing with respect to synchronous network activity. The results of our investigation support the proposal that encoding new information and recalling stored memories occur during distinct epochs of hippocampal network-wide oscillations. Having established the high efficacy of closed-loop modulation of engram neuron activity, we sought to combine it with two-photon imaging to enable high-spatial-resolution interrogation of hippocampal circuitry. We developed a behavioral apparatus for head-fixed engram modulation and the assessment of memory recall in immobile animals. Moreover, through the optimization of dual-color two-photon imaging, we improved the ability to monitor the activity of neurons in the subfields of the hippocampus with cellular specificity. The platform created here will be applied to investigate the effects of engram reactivation on downstream projection targets with high spatial and cell-subtype specificity. Following these lines of investigation will enhance our understanding of memory modulation and could lead to novel neuromodulation treatments for neurological disorders associated with memory malfunction.
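
    A closed-loop platform of this kind typically watches an ongoing signal and delivers light only during the targeted network state. A schematic Python sketch of one way that logic can look (the theta band, window length, target phase, and the trigger_stimulation hook are all illustrative placeholders, not the thesis's implementation):

        # Phase-locked closed-loop triggering on a streamed LFP signal (illustrative only).
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        FS = 1000.0                                        # sampling rate (Hz), assumed
        b, a = butter(3, [4 / (FS / 2), 10 / (FS / 2)], btype="band")   # theta band-pass

        def instantaneous_phase(window):
            """Current theta phase estimated from the most recent LFP window."""
            theta = filtfilt(b, a, window)
            return np.angle(hilbert(theta))[-1]

        def trigger_stimulation():
            print("stim pulse")                            # stand-in for the optogenetic stimulus command

        def closed_loop(lfp_stream, target_phase=np.pi, tolerance=0.2, window_len=512):
            buffer = np.zeros(window_len)
            for sample in lfp_stream:                      # pretend real-time acquisition, one sample at a time
                buffer = np.roll(buffer, -1)
                buffer[-1] = sample
                if abs(instantaneous_phase(buffer) - target_phase) < tolerance:
                    trigger_stimulation()

        # Usage with a synthetic 8 Hz oscillation plus noise:
        t = np.arange(0, 2.0, 1 / FS)
        closed_loop(np.sin(2 * np.pi * 8 * t) + 0.3 * np.random.randn(t.size))

    The point of closing the loop is that engram stimulation is time-locked to a specific oscillatory epoch rather than delivered open-loop, which is what allows encoding- and recall-associated epochs to be probed separately.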

    Elasto-plastic deformations within a material point framework on modern GPU architectures

    Plastic strain localization is an important process on Earth. It strongly influences the mechanical behaviour of natural processes, such as fault mechanics, earthquakes or orogeny. At a smaller scale, a landslide is a fantastic example of elasto-plastic deformations. Such behaviour spans from pre-failure mechanisms to post-failure propagation of the unstable material. To fully resolve the landslide mechanics, the selected numerical methods should be able to efficiently address a wide range of deformation magnitudes. Accurate and performant numerical modelling requires substantial computational resources. Mesh-free numerical methods such as the material point method (MPM) or smoothed-particle hydrodynamics (SPH) are particularly computationally expensive when compared with mesh-based methods, such as the finite element method (FEM) or the finite difference method (FDM). Still, mesh-free methods are particularly well suited to numerical problems involving large elasto-plastic deformations, but their computational efficiency must first be improved in order to tackle complex three-dimensional problems, i.e., landslides. As such, this research work attempts to alleviate the computational cost of the material point method by using the most recent graphics processing unit (GPU) architectures available. GPUs are many-core processors originally designed to refresh screen pixels (e.g., for computer games) independently, which allows them to deliver massive parallelism compared to central processing units (CPUs). To do so, this research work first investigates code prototyping in a high-level language, e.g., MATLAB. This makes it possible to implement vectorized algorithms and to benchmark numerical results of two-dimensional analyses against analytical solutions and/or experimental results in an affordable amount of time. Afterwards, a low-level language, CUDA C, is used to efficiently implement a GPU-based solver, ep2-3De v1.0, which can resolve three-dimensional problems in a reasonable amount of time by taking advantage of the massive parallelism of modern GPU architectures. In addition, a first attempt at multi-GPU parallel computing is made to further increase performance and to address the on-chip memory limitation. Finally, this GPU-based solver is used to investigate three-dimensional granular collapses, which are compared with experimental evidence obtained in the laboratory. This research work demonstrates that the material point method is well suited to resolve small to large elasto-plastic deformations. Moreover, the computational efficiency of the method can be dramatically increased using modern GPU architectures. These allow fast, performant and accurate three-dimensional modelling of landslides, provided that the on-chip memory limitation is alleviated with an appropriate parallel strategy.
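
    In the spirit of the high-level prototyping step described above, here is a minimal one-dimensional, purely elastic material point update in NumPy (a sketch of the particle-to-grid / grid-update / grid-to-particle cycle only; no plasticity, no GPU, and all parameters are illustrative):

        # Minimal 1-D elastic MPM loop (prototype-style sketch; linear shape functions, explicit update).
        import numpy as np

        L, n_cells = 1.0, 20
        dx = L / n_cells
        nodes = np.linspace(0.0, L, n_cells + 1)

        # Two particles per cell for a column occupying the left half of the domain, fixed at x = 0.
        xp = np.arange(0.25 * dx, 0.5 * L, 0.5 * dx)
        n_p = xp.size
        vp = np.zeros(n_p); sp = np.zeros(n_p)            # particle velocity and stress
        Vp = np.full(n_p, 0.5 * dx); mp = 1000.0 * Vp     # particle volume and mass (rho = 1000)
        E, g, dt = 1e6, -9.81, 1e-4                       # Young's modulus, gravity, time step

        def shape(xp):
            """Linear hat functions: weights and gradients of every node at every particle."""
            d = xp[:, None] - nodes[None, :]
            w = np.maximum(0.0, 1.0 - np.abs(d) / dx)
            dw = np.where(np.abs(d) < dx, -np.sign(d) / dx, 0.0)
            return w, dw

        for step in range(200):
            w, dw = shape(xp)                                           # (n_p, n_nodes)
            m_i = w.T @ mp                                              # P2G: grid mass
            mom_i = w.T @ (mp * vp)                                     # P2G: grid momentum
            f_i = -(dw.T @ (Vp * sp)) + w.T @ (mp * g)                  # internal + gravity forces
            active = m_i > 1e-12
            v_i = np.zeros_like(m_i)
            v_i[active] = (mom_i[active] + dt * f_i[active]) / m_i[active]
            v_i[0] = 0.0                                                # fixed support at x = 0
            vp = w @ v_i                                                # G2P (PIC velocity update)
            xp = xp + dt * (w @ v_i)                                    # move particles
            deps = dt * (dw @ v_i)                                      # strain increment per particle
            sp = sp + E * deps                                          # linear elastic stress update
            Vp = Vp * (1.0 + deps)                                      # volume update

        print("stress at the base particle:", sp[0])

    The structure of the loop is what maps well to GPUs: the particle-to-grid and grid-to-particle transfers are data-parallel over particles, and the grid update is data-parallel over nodes, which is the kind of parallelism a GPU implementation exploits at three-dimensional scale.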