
    Contextual normalization applied to aircraft gas turbine engine diagnosis

    Diagnosing faults in aircraft gas turbine engines is a complex problem. It involves several tasks, including rapid and accurate interpretation of patterns in engine sensor data. We have investigated contextual normalization for the development of a software tool to help engine repair technicians with the interpretation of sensor data. Contextual normalization is a new strategy for employing machine learning. It handles variation in data that is due to contextual factors, rather than the health of the engine. It does this by normalizing the data in a context-sensitive manner. This learning strategy was developed and tested using 242 observations of an aircraft gas turbine engine in a test cell, where each observation consists of roughly 12,000 numbers, gathered over a 12-second interval. There were eight classes of observations: seven deliberately implanted classes of faults and a healthy class. We compared two approaches to implementing our learning strategy: linear regression and instance-based learning. We have three main results. (1) For the given problem, instance-based learning works better than linear regression. (2) For this problem, contextual normalization works better than other common forms of normalization. (3) The algorithms described here can be the basis for a useful software tool for assisting technicians with the interpretation of sensor data.
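    A minimal sketch of the idea behind contextual normalization, using linear regression for the context models (one of the two implementations compared above); the function names, the residual-based spread model, and the use of scikit-learn are our own illustrative choices, not the paper's code:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def contextual_normalize(sensor, context, sensor_healthy, context_healthy):
    """Normalize sensor readings using the mean and spread expected for
    the observed context (e.g., ambient conditions), both estimated from
    healthy-engine data. Illustrative sketch, not the paper's code."""
    # Predict the context-conditional mean of the sensor from healthy data.
    mean_model = LinearRegression().fit(context_healthy, sensor_healthy)
    expected = mean_model.predict(context_healthy)
    # Model the context-conditional spread from the healthy residuals.
    resid = np.abs(sensor_healthy - expected)
    spread_model = LinearRegression().fit(context_healthy, resid)
    # Normalize new readings: deviation from the context-expected value,
    # scaled by the context-expected variation (floored for stability).
    mu = mean_model.predict(context)
    sigma = np.maximum(spread_model.predict(context), 1e-8)
    return (sensor - mu) / sigma
```

    A fault then shows up as a reading far from what the context predicts, rather than far from a global mean that mixes healthy variation across operating conditions.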

    Fast Matrix Factorization for Online Recommendation with Implicit Feedback

    This paper contributes improvements on both the effectiveness and efficiency of Matrix Factorization (MF) methods for implicit feedback. We highlight two critical issues of existing works. First, due to the large space of unobserved feedback, most existing works resort to assigning a uniform weight to the missing data to reduce computational complexity. However, such a uniform assumption is invalid in real-world settings. Second, most methods are also designed in an offline setting and fail to keep up with the dynamic nature of online data. We address the above two issues in learning MF models from implicit feedback. We first propose to weight the missing data based on item popularity, which is more effective and flexible than the uniform-weight assumption. However, such a non-uniform weighting poses an efficiency challenge in learning the model. To address this, we specifically design a new learning algorithm based on the element-wise Alternating Least Squares (eALS) technique, for efficiently optimizing an MF model with variably-weighted missing data. We exploit this efficiency to then seamlessly devise an incremental update strategy that instantly refreshes an MF model given new feedback. Through comprehensive experiments on two public datasets in both offline and online protocols, we show that our eALS method consistently outperforms state-of-the-art implicit MF methods. Our implementation is available at https://github.com/hexiangnan/sigir16-eals. Comment: 10 pages, 8 figures.
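    A sketch of the popularity-based weighting and of where it enters a weighted least-squares update; the constants c0 and alpha below are illustrative rather than the paper's tuned values, and the per-user solve forms the weight vector explicitly, which is exactly the cost the paper's eALS caching is designed to avoid:

```python
import numpy as np

def popularity_weights(interaction_counts, c0=512.0, alpha=0.45):
    """Confidence weights for missing entries: the more popular an item,
    the more an unobserved interaction is treated as a true negative.
    c0 and alpha are illustrative, not the paper's tuned values."""
    f = interaction_counts.astype(float) ** alpha
    return c0 * f / f.sum()

def user_update(r_u, V, c, w_pos=1.0, reg=0.01):
    """Naive weighted least-squares update for one user's latent vector.
    r_u: (n_items,) implicit feedback (1 observed, 0 missing);
    V: (n_items, k) item factors; c: (n_items,) missing-data weights."""
    w = np.where(r_u > 0, w_pos, c)              # non-uniform weights
    A = (V * w[:, None]).T @ V + reg * np.eye(V.shape[1])
    b = V.T @ (w * r_u)
    return np.linalg.solve(A, b)
```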

    Removal Energies and Final State Interaction in Lepton Nucleus Scattering

    We investigate the binding energy parameters that should be used in modeling electron and neutrino scattering from nucleons bound in a nucleus within the framework of the impulse approximation. We discuss the relation between binding energy, missing energy, removal energy ($\epsilon$), spectral functions and shell model energy levels, and extract updated removal energy parameters from $e'p$ spectral function data. We address the difference in parameters for scattering from bound protons and neutrons. We also use inclusive e-A data to extract an empirical parameter $U_{FSI}((\vec q_3+\vec k)^2)$ to account for the interaction of final state nucleons (FSI) with the optical potential of the nucleus. Similarly, we use $V_{eff}$ to account for the Coulomb potential of the nucleus. With three parameters $\epsilon$, $U_{FSI}((\vec q_3+\vec k)^2)$ and $V_{eff}$ we can describe the energy of final state electrons for all available electron QE scattering data. The use of the updated parameters in neutrino Monte Carlo generators reduces the systematic uncertainty in the combined removal energy (with FSI corrections) from $\pm 20$ MeV to $\pm 5$ MeV. Comment: 21 pages, 22 figures, 11 tables. Accepted for publication in Eur. Phys. J. C (2019); all fits to the optical potential redone with respect to $(\vec q_3+\vec k)^2$.
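    Schematically, and in our own notation rather than the paper's exact convention, the three parameters enter the quasielastic energy balance roughly as follows:

```latex
% Impulse-approximation energy balance, schematic only (our paraphrase;
% the paper fixes the exact convention and recoil terms). The energy
% transfer \nu supplies the struck nucleon's kinetic energy, its removal
% energy \epsilon, and the final-state optical-potential term:
\nu \;\approx\; \sqrt{(\vec q_3 + \vec k)^2 + M_N^2} - M_N
      \;+\; \epsilon \;+\; U_{FSI}\!\big((\vec q_3 + \vec k)^2\big)
% The electron energies are in addition shifted by the Coulomb potential
% of the nucleus, E_e \to E_e + V_{eff} at the interaction vertex.
```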

    Momentum Distribution in Nuclear Matter and Finite Nuclei

    A simple method is presented to evaluate the effects of short-range correlations on the momentum distribution of nucleons in nuclear matter within the framework of the Green's function approach. The method provides a very efficient representation of the single-particle Green's function for a correlated system. The reliability of this method is established by comparing its results to those obtained in more elaborate calculations. The sensitivity of the momentum distribution to the nucleon-nucleon interaction and the nuclear density is studied. The momentum distributions of nucleons in finite nuclei are derived from those in nuclear matter using a local-density approximation. These results are compared to those obtained directly for light nuclei like $^{16}$O. Comment: 17 pages REVTeX, 10 figures (ps files added).
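    The local-density step can be written compactly; the normalization convention below is ours, but the structure is the standard one (nuclear-matter momentum distributions evaluated at the local Fermi momentum fixed by the density profile):

```latex
% Local-density approximation (schematic). The finite-nucleus momentum
% distribution is assembled from nuclear-matter results n_NM evaluated at
% the Fermi momentum corresponding to the local density \rho(r):
n_A(k) \;=\; \frac{1}{A}\int d^3r\, \rho(r)\, n_{NM}\big(k;\,k_F(r)\big),
\qquad
k_F(r) \;=\; \left(\frac{3\pi^2}{2}\,\rho(r)\right)^{1/3}
```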

    Improving the Efficiency of Genomic Selection

    We investigate two approaches to increase the efficiency of phenotypic prediction from genome-wide markers, which is a key step for genomic selection (GS) in plant and animal breeding. The first approach is feature selection based on Markov blankets, which provide a theoretically sound framework for identifying non-informative markers. Fitting GS models using only the informative markers results in simpler models, which may allow cost savings from reduced genotyping. We show that this is accompanied by no loss, and possibly a small gain, in predictive power for four GS models: partial least squares (PLS), ridge regression, LASSO and elastic net. The second approach is the choice of kinship coefficients for genomic best linear unbiased prediction (GBLUP). We compare kinships based on different combinations of centring and scaling of marker genotypes, and a newly proposed kinship measure that adjusts for linkage disequilibrium (LD). We illustrate the use of both approaches and examine their performance using three real-world data sets from plant and animal genetics. We find that elastic net with feature selection and GBLUP using LD-adjusted kinships performed similarly well, and were the best-performing methods in our study. Comment: 17 pages, 5 figures.
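    A minimal sketch of one centring/scaling choice for the kinship matrix (VanRaden-style, one of the combinations compared above) and of the resulting GBLUP prediction; the function names are ours, and the variance ratio lam is fixed here for brevity, whereas in practice it would be estimated, e.g. by REML:

```python
import numpy as np

def vanraden_kinship(M):
    """Genomic kinship from an (n_individuals x n_markers) 0/1/2 genotype
    matrix: markers centred by allele frequency, scaled by total
    heterozygosity. One common centring/scaling combination."""
    p = M.mean(axis=0) / 2.0                 # allele frequencies
    Z = M - 2.0 * p                          # centre each marker
    denom = 2.0 * np.sum(p * (1.0 - p))      # scaling
    return Z @ Z.T / denom

def gblup_predict(K, y, lam=1.0):
    """Ridge-type GBLUP: solve (K + lam*I) a = y - mean and set g = K a,
    where lam plays the role of sigma_e^2 / sigma_g^2."""
    n = len(y)
    mu = y.mean()
    a = np.linalg.solve(K + lam * np.eye(n), y - mu)
    return mu + K @ a
```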

    Personalized Cinemagraphs using Semantic Understanding and Collaborative Learning

    Cinemagraphs are a compelling way to convey dynamic aspects of a scene. In these media, dynamic and still elements are juxtaposed to create an artistic and narrative experience. Creating a high-quality, aesthetically pleasing cinemagraph requires isolating objects in a semantically meaningful way and then selecting good start times and looping periods for those objects to minimize visual artifacts (such as tearing). To achieve this, we present a new technique that uses object recognition and semantic segmentation as part of an optimization method to automatically create cinemagraphs from videos that are both visually appealing and semantically meaningful. Given a scene with multiple objects, there are many cinemagraphs one could create. Our method evaluates these multiple candidates and presents the best one, as determined by a model trained to predict human preferences in a collaborative way. We demonstrate the effectiveness of our approach with multiple results and a user study. Comment: To appear in ICCV 2017; 17 pages total, including the supplementary material.
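    The loop-selection step can be illustrated with a stripped-down objective; the paper's full optimization also scores semantic plausibility and learned human preferences, so the sketch below (hypothetical names, mean-squared seam cost only) shows just the temporal-consistency core for a single segmented region:

```python
import numpy as np

def best_loop(frames, min_period=8):
    """Pick a start frame s and period p for one segmented region so the
    loop seam is least visible: minimize the mismatch between the first
    frame of the loop and the frame one past its end."""
    n = len(frames)
    best, best_cost = None, np.inf
    for s in range(n - min_period):
        for p in range(min_period, n - s):
            # Seam cost: jumping from frame s+p back to frame s should
            # be imperceptible, so penalize their pixel difference.
            cost = np.mean((frames[s] - frames[s + p]) ** 2)
            if cost < best_cost:
                best, best_cost = (s, p), cost
    return best
```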

    Selected Topics in High Energy Semi-Exclusive Electro-Nuclear Reactions

    We review the present status of the theory of high energy reactions with semi-exclusive nucleon electro-production from nuclear targets. We demonstrate how the increase of transferred energies in these reactions opens a completely new window for studying the microscopic nuclear structure at small distances. The simplifications in theoretical descriptions associated with the increase of the energies are discussed. The theoretical framework for the calculation of high energy nuclear reactions, based on effective Feynman diagram rules, is described in detail. The result of this approach is the generalized eikonal approximation (GEA), which reduces to the Glauber approximation when nucleon recoil is neglected. The method of GEA is demonstrated in the calculation of high energy electro-disintegration of the deuteron and A=3 targets. Subsequently we generalize the obtained formulae to A>3 nuclei. The relation of GEA to the Glauber theory is analyzed. Then, based on the GEA framework, we discuss some of the phenomena that can be studied in exclusive reactions: nuclear transparency and short-range correlations in nuclei. We illustrate how the light-cone dynamics of high-energy scattering emerge naturally in high energy electro-nuclear reactions. Comment: LaTeX file with 51 pages and 23 eps figures.
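    For orientation, the semiclassical Glauber estimate of nuclear transparency that GEA refines (standard textbook form, in our notation) reads:

```latex
% Semiclassical Glauber transparency (schematic, our notation). A nucleon
% struck at transverse position \vec b and depth z survives attenuation by
% the remaining nucleons with an effective cross section \sigma_{eff}:
T \;=\; \frac{1}{A}\int d^2b\, dz\; \rho(\vec b, z)\,
      \exp\!\left[-\sigma_{eff}\int_z^{\infty} dz'\, \rho(\vec b, z')\right]
% GEA recovers this form when nucleon recoil is neglected.
```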

    Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of the averaged Euler equations in two space dimensions. Comment: 13 pages, 2 figures.
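    In first-order form the construction can be stated in one line; the notation below is ours, following the standard formulation of optimal prediction:

```latex
% First-order optimal prediction, schematically. Split z = (\hat x, \hat y)
% into resolved and unresolved components of the full system dz/dt = R(z).
% The unresolved variables are replaced by their conditional expectation
% with respect to the invariant measure \mu, given the resolved ones:
\frac{d\hat x}{dt} \;=\; \mathbb{E}_{\mu}\!\left[ R_x(\hat x, y) \,\middle|\, \hat x \right]
% Higher-order OP replaces this conditional mean by a stochastic estimator,
% yielding random or stochastic differential equations for \hat x.
```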