
    Study on the radiative decays of h_c via the intermediate meson loops model

    Recently, the BESIII Collaboration reported two new decay processes, h_c(1P) → γη and γη′. Inspired by this measurement, we propose to study the radiative decays of h_c via intermediate charmed meson loops in an effective Lagrangian approach. Within the acceptable cutoff parameter range, the calculated branching ratios of h_c(1P) → γη and γη′ are of order 10^{-4} ∼ 10^{-3} and 10^{-3} ∼ 10^{-2}, respectively. The ratio R_{h_c} = B(h_c → γη)/B(h_c → γη′) reproduces the experimental measurements within the commonly accepted α range. This ratio provides us with some information on the η-η′ mixing, which may help us test SU(3)-flavor symmetries in QCD.
    Comment: 11 pages, 5 figures, accepted for publication in EPJ
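The observable the abstract above compares against experiment can be stated cleanly in LaTeX (this is just a restatement of the ratio given in the abstract, not an additional result):

```latex
R_{h_c} \;=\; \frac{\mathcal{B}\left(h_c \to \gamma \eta\right)}{\mathcal{B}\left(h_c \to \gamma \eta^{\prime}\right)}
```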

    Joint Visual Denoising and Classification using Deep Learning

    Visual restoration and recognition are traditionally addressed in a pipeline fashion, i.e. denoising followed by classification. Instead, observing correlations between the two tasks (for example, a clearer image leads to better categorization and vice versa), we propose a joint framework for visual restoration and recognition of handwritten images, inspired by advances in deep autoencoders and multi-modality learning. Our model is a 3-pathway deep architecture with a hidden-layer representation that is shared by multiple inputs and outputs, and each branch can be composed of a multi-layer deep model. Thus, visual restoration and classification can be unified through the shared representation via non-linear mapping, and the model parameters can be learned via backpropagation. On MNIST and USPS data corrupted with structured noise, the proposed framework performs at least 20% better in classification than separate pipelines, while also recovering clearer images. The noise model and reproducible source code are available at https://github.com/ganggit/jointmodel.
    Comment: 5 pages, 7 figures, ICIP 201
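To make the 3-pathway idea concrete, here is a minimal NumPy sketch of a shared-representation forward pass: a noisy input is mapped to one hidden code, which feeds both a reconstruction head and a classification head. All layer sizes, weight initializations, and variable names are illustrative assumptions, not taken from the paper; the authors' actual implementation is at the GitHub URL above.

```python
import numpy as np

# Hypothetical sketch of a joint denoising/classification model:
# one shared hidden code feeds two output pathways.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

D, H, K = 784, 128, 10                 # pixels, shared code size, classes (illustrative)
W_enc = rng.normal(0, 0.01, (H, D))    # noisy input -> shared code
W_dec = rng.normal(0, 0.01, (D, H))    # shared code -> restored image
W_cls = rng.normal(0, 0.01, (K, H))    # shared code -> class probabilities

x_noisy = rng.random(D)                # stand-in for a corrupted digit image

h = sigmoid(W_enc @ x_noisy)           # shared hidden representation
x_denoised = sigmoid(W_dec @ h)        # restoration pathway
p_class = softmax(W_cls @ h)           # recognition pathway

print(x_denoised.shape, p_class.shape)
```

In the paper's framework both heads would contribute to one loss so that backpropagation shapes the shared code for restoration and recognition jointly; this sketch shows only the inference-time data flow.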

    Word Recognition with Deep Conditional Random Fields

    Recognition of handwritten words continues to be an important problem in document analysis and recognition. Existing approaches extract hand-engineered features from word images, which can perform poorly on new data sets. Recently, deep learning has attracted great attention because of its ability to learn features from raw data, and deep models have yielded state-of-the-art results in classification tasks including character recognition and scene recognition. Word recognition, on the other hand, is a sequential problem in which we need to model the correlations between characters. In this paper, we propose using deep Conditional Random Fields (deep CRFs) for word recognition. Essentially, we combine CRFs with deep learning, so that deep features are learned and sequences are labeled in a unified framework. We pre-train the deep structure with stacked restricted Boltzmann machines (RBMs) for feature learning and optimize the entire network with an online learning algorithm. The proposed model was evaluated on two datasets and was seen to perform significantly better than competitive baseline models. The source code is available at https://github.com/ganggit/deepCRFs.
    Comment: 5 pages, published in ICIP 2016. arXiv admin note: substantial text overlap with arXiv:1412.339
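The sequential-labeling core of a linear-chain CRF can be sketched with Viterbi decoding: per-position label scores (in the paper, these would come from the deep network) are combined with label-transition scores to pick the best character sequence. The toy scores and two-letter alphabet below are made up for demonstration and have nothing to do with the paper's trained model.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Return the highest-scoring label sequence for a linear-chain CRF.
    emissions: (T, K) per-position label scores (e.g. from deep features)
    transitions: (K, K) score of moving from label i to label j
    """
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # candidate score of every (previous label, current label) pair
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 1 - 1, -1):
        if t > 0:
            path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy 3-character word over a 2-label alphabet: position 1's emission
# favours label 1, but the transition scores favour staying on a label,
# so the jointly best sequence stays on label 0 throughout.
em = np.array([[2.0, 0.0], [0.0, 1.5], [1.0, 0.2]])
tr = np.array([[0.5, -1.0], [-1.0, 0.5]])
print(viterbi(em, tr))
```

The point of the example is exactly what the abstract argues: the decoded label at each position depends on neighbouring characters through the transition term, not on the local emission alone.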

    Tracking a Line for Automatic Piloting System of Drone

    Nowadays, the FAA is discussing opening low-altitude airspace to registered flights, which would make delivery drones such as Amazon's possible; Amazon is already developing its own drone as a future delivery method. In this project, we implement an automatic piloting system for the AR.Drone that tracks a line with the on-board camera. We use the AR.Drone for our implementation of the line tracker because several SDKs based on its Wi-Fi network are available and it carries two HD-quality cameras. For the SDK, YA.Drone, from a university in Hamburg, Germany, is employed and combined with image-processing techniques so that the drone can track the line. Preliminary results show that the drone can successfully track various types of lines beneath it, such as straight lines, 90-degree turns, cranks, circles, and arbitrary curves. Using the automatic piloting system developed in this project, a drone can carry an item from one room to another, and eventually deliver items outdoors.
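The control idea behind such a line tracker can be sketched in a few lines: segment the line in a camera frame, take the horizontal offset of its centroid from the frame centre, and turn toward it. The function name, dead-zone value, and synthetic frame below are hypothetical illustrations; the project itself works on live AR.Drone video through the YA.Drone SDK, not on NumPy arrays.

```python
import numpy as np

def steer_command(mask, dead_zone=0.1):
    """mask: 2-D boolean array, True where the line was segmented.
    Returns a coarse steering decision from the horizontal offset of
    the line centroid relative to the frame centre."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return "hover"                          # line lost: hold position
    offset = xs.mean() / mask.shape[1] - 0.5    # in [-0.5, +0.5]
    if offset < -dead_zone:
        return "left"
    if offset > dead_zone:
        return "right"
    return "forward"

# Synthetic 120x160 frame with a vertical line right of centre.
frame = np.zeros((120, 160), dtype=bool)
frame[:, 100:110] = True
print(steer_command(frame))
```

Repeating this decision on every frame is what lets the drone follow straight segments, 90-degree turns, and curves alike: each frame only needs a small local correction.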

    Monitoring soil moisture dynamics and energy fluxes using geostationary satellite data
