
    Spectral-spatial classification of hyperspectral images: three tricks and a new supervised learning setting

    Spectral-spatial classification of hyperspectral images has been the subject of many studies in recent years. In the presence of only very few labeled pixels, this task becomes challenging. In this paper we address the following two research questions: 1) Can a simple neural network with just a single hidden layer achieve state-of-the-art performance in the presence of few labeled pixels? 2) How is the performance of hyperspectral image classification methods affected when using disjoint train and test sets? We give a positive answer to the first question by using three tricks within a very basic shallow Convolutional Neural Network (CNN) architecture: a tailored loss function, and smooth- and label-based data augmentation. The tailored loss function enforces that neighboring wavelengths have similar contributions to the features generated during training. A new label-based technique proposed here favors the selection of pixels from smaller classes, which is beneficial in the presence of very few labeled pixels and skewed class distributions. To address the second question, we introduce a new sampling procedure to generate disjoint train and test sets. The train set is used to obtain the CNN model, which is then applied to pixels in the test set to estimate their labels. We assess the efficacy of the simple neural network method on five publicly available hyperspectral images, on which it significantly outperforms the considered baselines. Notably, with just 1% of labeled pixels per class, our method achieves an accuracy ranging from 86.42% (challenging dataset) to 99.52% (easy dataset). Furthermore, we show that the simple neural network method improves over the other baselines in the new, more challenging supervised setting. Our analysis substantiates the highly beneficial effect of using the entire image (i.e., both train and test data) for constructing a model.
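    As a rough illustration of two of these ingredients, the sketch below shows one plausible way to implement the label-based selection that favors smaller classes and a spectral smoothness penalty on first-layer weights. The function names, the inverse-frequency weighting, and the squared-difference penalty are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def class_balanced_sample(labels, n_samples, rng=None):
    """Draw training pixels with probability inversely proportional to class
    size, so that pixels from smaller classes are favoured (illustrative
    sketch, not the paper's exact label-based augmentation)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    inv_freq = {c: 1.0 / n for c, n in zip(classes, counts)}
    weights = np.array([inv_freq[y] for y in labels])
    weights /= weights.sum()
    return rng.choice(len(labels), size=n_samples, replace=False, p=weights)

def spectral_smoothness_penalty(first_layer_weights):
    """Penalise differences between weights acting on neighbouring wavelengths,
    one way to encode the 'similar contribution' idea from the abstract."""
    diffs = np.diff(np.asarray(first_layer_weights), axis=-1)  # spectral axis last
    return float(np.sum(diffs ** 2))
```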

    Link Mining for Kernel-based Compound-Protein Interaction Predictions Using a Chemogenomics Approach

    Virtual screening (VS) is widely used during computational drug discovery to reduce costs. Chemogenomics-based virtual screening (CGBVS) can be used to predict new compound-protein interactions (CPIs) from known CPI network data using several methods, including machine learning and data mining. Although CGBVS facilitates highly efficient and accurate CPI prediction, it performs poorly when predicting interactions for new compounds whose CPIs are unknown. The pairwise kernel method (PKM) is a state-of-the-art CGBVS method and shows high accuracy for the prediction of new compounds. In this study, building on link mining, we improved the PKM by combining a link indicator kernel (LIK) with chemical similarity, and evaluated the accuracy of the resulting methods. The proposed method obtained an average area under the precision-recall curve (AUPR) of 0.562, higher than that achieved by the conventional Gaussian interaction profile (GIP) method (0.425), while the calculation time increased by only a few percent.
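    A minimal sketch of two of the kernel ingredients mentioned above is given below: a Kronecker-style pairwise kernel over compound-protein pairs and a Gaussian interaction profile kernel built from the known CPI adjacency matrix. The way the LIK is combined with chemical similarity in the paper is not reproduced here; the names and the product form are assumptions for illustration.

```python
import numpy as np

def pairwise_kernel(K_comp, K_prot, pairs_a, pairs_b):
    """Kronecker-style pairwise kernel between compound-protein pairs:
    K((c, p), (c', p')) = K_comp[c, c'] * K_prot[p, p'].
    pairs_* are sequences of (compound_index, protein_index) tuples."""
    K = np.empty((len(pairs_a), len(pairs_b)))
    for i, (ci, pi) in enumerate(pairs_a):
        for j, (cj, pj) in enumerate(pairs_b):
            K[i, j] = K_comp[ci, cj] * K_prot[pi, pj]
    return K

def gip_kernel(adjacency, gamma=1.0):
    """Gaussian interaction profile kernel computed from the known CPI
    network; each row of `adjacency` is an entity's interaction profile."""
    A = np.asarray(adjacency, dtype=float)
    sq_dists = np.sum((A[:, None, :] - A[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)
```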

    Applicative Bidirectional Programming with Lenses

    A bidirectional transformation is a pair of mappings between source and view data objects, one in each direction. When the view is modified, the source is updated accordingly with respect to some laws. One way to reduce the development and maintenance effort of bidirectional transformations is to have specialized languages in which the resulting programs are bidirectional by construction, giving rise to the paradigm of bidirectional programming. In this paper, we develop a framework for applicative-style and higher-order bidirectional programming, in which we can write bidirectional transformations as unidirectional programs in standard functional languages, opening up access to the bundle of language features previously only available to conventional unidirectional languages. Our framework essentially bridges two very different approaches to bidirectional programming, namely the lens framework and Voigtländer's semantic bidirectionalization, creating a new programming style that combines the benefits of both.
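    To make the get/put intuition concrete, here is a minimal lens sketch in Python (the paper itself works in a functional-language setting); the `Lens` class, the `fst` example, and the asserted round-trip laws are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

S = TypeVar("S")  # source type
V = TypeVar("V")  # view type

@dataclass
class Lens(Generic[S, V]):
    """A lens packages the two directions of a bidirectional transformation:
    `get` extracts a view from a source, `put` writes a modified view back."""
    get: Callable[[S], V]
    put: Callable[[S, V], S]

# Example: a lens focusing on the first component of a pair.
fst = Lens(get=lambda s: s[0], put=lambda s, v: (v, s[1]))

src = (1, "payload")
assert fst.put(src, fst.get(src)) == src   # GetPut: writing back the unchanged view is a no-op
assert fst.get(fst.put(src, 42)) == 42     # PutGet: the updated view is what we read back
```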

    Minimum output entropy of bosonic channels: a conjecture

    The von Neumann entropy at the output of a bosonic channel with thermal noise is analyzed. Coherent-state inputs are conjectured to minimize this output entropy. Physical and mathematical evidence in support of the conjecture is provided. A stronger conjecture, that output states resulting from coherent-state inputs majorize the output states from other inputs, is also discussed.
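    For orientation, the conjectured bound for the thermal-noise channel is commonly written as below, with transmissivity $\eta$ and mean thermal photon number $N$; this notation is assumed here rather than quoted from the abstract.

```latex
\min_{\hat\rho}\; S\!\bigl(\mathcal{E}_{\eta,N}(\hat\rho)\bigr) \;=\; g\bigl((1-\eta)\,N\bigr),
\qquad
g(x) \;=\; (x+1)\ln(x+1) \;-\; x\ln x ,
```

    with the minimum conjectured to be attained by coherent-state inputs.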

    Electrical Characterization of 1.8 MeV Proton-Bombarded ZnO

    We report on the electrical characterization of single-crystal ZnO and of Au Schottky contacts formed thereon, before and after bombardment with 1.8 MeV protons. From capacitance–voltage measurements, we found that ZnO is remarkably resistant to high-energy proton bombardment and that each incident proton removes about two orders of magnitude fewer carriers than in GaN. Deep level transient spectroscopy indicates a similar effect: the two electron traps detected are introduced at extremely low rates. One possible interpretation of these results is that the primary radiation-induced defects in ZnO may be unstable at room temperature and anneal out without leaving harmful defects responsible for carrier compensation.
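    The carrier-removal comparison is typically quantified through the removal rate extracted from capacitance–voltage profiling; the standard definition below is included only for orientation and is not quoted from the abstract.

```latex
R_c \;=\; \frac{n_0 - n(\Phi)}{\Phi},
```

    where $n_0$ is the free-carrier concentration before irradiation and $n(\Phi)$ the concentration after a proton fluence $\Phi$.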

    The transrectus sheath preperitoneal mesh repair for inguinal hernia: technique, rationale, and results of the first 50 cases

    INTRODUCTION: Laparoscopic and endoscopic hernia repair popularized the preperitoneal mesh position owing to promising results concerning less chronic pain. However, considerable proportions of severe adverse events, learning curves, and added costs have to be taken into account. Therefore, open preperitoneal mesh techniques may have more advantages. The open approach to the preperitoneal space (PPS) in transrectus sheath preperitoneal (TREPP) mesh repair is through the sheath of the rectus abdominis muscle. This technique provides an excellent view of the PPS and facilitates elective or acute hernia reduction and mesh positioning under direct vision. In concordance with the promising experiences with transinguinal preperitoneal inguinal hernia repair in the literature, we investigated the feasibility of TREPP. METHODS: We present a rationale and description of the surgical technique and the available level of evidence behind the technical considerations, together with a descriptive report of the clinical outcomes of our pilot case series of 50 patients undergoing TREPP mesh repair. RESULTS: A consecutive group of our first 50 patients was operated on with the TREPP technique. No technical problems were experienced during the development of this technique. No conversions to Lichtenstein repair were necessary. No recurrences and no chronic pain after a mean follow-up of 2 years were notable findings. CONCLUSION: This description of the technique shows that TREPP mesh repair might be a promising method because of the complete preperitoneal view, the short learning curve, and the stay-away-from-the-nerves principle. The rationale of the TREPP repair is discussed in detail.

    Monte Carlo Methods for Rough Free Energy Landscapes: Population Annealing and Parallel Tempering

    Parallel tempering and population annealing are both effective methods for simulating equilibrium systems with rough free energy landscapes. Parallel tempering, also known as replica exchange Monte Carlo, is a Markov chain Monte Carlo method, while population annealing is a sequential Monte Carlo method. Both methods overcome the exponential slowing associated with high free energy barriers. The convergence properties and efficiency of the two methods are compared. For large systems, population annealing initially converges to equilibrium more rapidly than parallel tempering for the same amount of computational work. However, parallel tempering converges exponentially in the computational work whereas population annealing converges only inversely, so that ultimately parallel tempering approaches equilibrium more rapidly than population annealing.
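    A compact sketch of the two update rules being compared is given below: the replica-exchange swap acceptance used in parallel tempering and the resampling weights used by population annealing when the temperature is lowered. Variable names and the single-sweep structure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tempering_swap(energies, betas, order):
    """One sweep of replica-exchange (parallel tempering) swap attempts between
    neighbouring temperatures; order[k] is the replica currently at betas[k]."""
    for k in range(len(betas) - 1):
        d_beta = betas[k + 1] - betas[k]
        d_E = energies[order[k + 1]] - energies[order[k]]
        # Acceptance probability min(1, exp(d_beta * d_E)), written overflow-free.
        if rng.random() < np.exp(min(0.0, d_beta * d_E)):
            order[k], order[k + 1] = order[k + 1], order[k]
    return order

def annealing_weights(energies, beta_old, beta_new):
    """Population annealing resampling weights for a temperature step
    beta_old -> beta_new:  w_i proportional to exp(-(beta_new - beta_old) * E_i)."""
    log_w = -(beta_new - beta_old) * np.asarray(energies, dtype=float)
    w = np.exp(log_w - log_w.max())   # stabilise before normalising
    return w / w.sum()
```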

    More unlabelled data or label more data? A study on semi-supervised laparoscopic image segmentation

    Improving a semi-supervised image segmentation task offers the options of adding more unlabelled images, labelling the unlabelled images, or combining both, as neither image acquisition nor expert labelling can be considered trivial in most clinical applications. With a laparoscopic liver image segmentation application, we investigate the performance impact of altering the quantities of labelled and unlabelled training data, using a semi-supervised segmentation algorithm based on the mean teacher learning paradigm. We first report a significantly higher segmentation accuracy compared with supervised learning. Interestingly, this comparison reveals that the training strategy adopted in the semi-supervised algorithm, in addition to the added unlabelled data, is also responsible for the observed improvement. We then compare different combinations of labelled and unlabelled data set sizes for training semi-supervised segmentation networks, to provide a quantitative example of the practically useful trade-off between the two data planning strategies in this surgical guidance application.
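    The two ingredients of the mean teacher paradigm referred to above are sketched below: an exponential-moving-average (EMA) teacher update and a consistency term added to the supervised loss. Parameter names and the squared-error consistency are assumptions for illustration, not the exact losses used in the paper.

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Mean teacher update: the teacher's weights track an exponential moving
    average of the student's weights after each training step."""
    return alpha * np.asarray(teacher_params) + (1.0 - alpha) * np.asarray(student_params)

def total_loss(supervised_loss, student_pred_unlab, teacher_pred_unlab, consistency_weight=1.0):
    """Supervised term on labelled pixels plus a consistency term penalising
    disagreement between student and teacher predictions on unlabelled images."""
    consistency = np.mean((np.asarray(student_pred_unlab) - np.asarray(teacher_pred_unlab)) ** 2)
    return supervised_loss + consistency_weight * consistency
```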