
    No Spare Parts: Sharing Part Detectors for Image Categorization

    This work aims at image categorization using a representation of distinctive parts. In contrast to existing part-based work, we argue that parts are naturally shared between image categories and should be modeled as such. We motivate our approach with a quantitative and qualitative analysis that backtracks where selected parts come from. Our analysis shows that, in addition to the category-specific parts that define a class, parts drawn from the background context and from other image categories improve categorization performance. Part selection should therefore not be done separately for each category, but shared and optimized over all categories. To incorporate part sharing between categories, we present an AdaBoost-based algorithm that jointly optimizes part sharing and selection, as well as fusion with the global image representation. We achieve results competitive with the state of the art on object, scene, and action categories, further improving over deep convolutional neural networks.
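    The joint-selection idea in this abstract can be sketched with toy code. The scoring criterion below (mean part response inside vs. outside each class, summed over all categories) is a hypothetical stand-in for the paper's AdaBoost objective; it only illustrates scoring candidate parts over all categories at once instead of per category:

```python
import numpy as np

def select_shared_parts(responses, labels, n_parts):
    """Greedy sketch of shared part selection: score each candidate part by
    its summed discriminativeness over ALL categories (here: absolute gap in
    mean response inside vs. outside each class) and keep the global best."""
    n_classes = labels.max() + 1
    gain = np.zeros(responses.shape[1])
    for c in range(n_classes):
        in_c = labels == c
        gain += np.abs(responses[in_c].mean(axis=0) - responses[~in_c].mean(axis=0))
    return np.argsort(gain)[::-1][:n_parts]   # indices of the shared parts

# Toy data: 4 images x 3 candidate parts, two categories.
# Parts 0 and 2 separate the classes; part 1 fires identically everywhere.
resp = np.array([[1.0, 0.5, 0.0],
                 [1.0, 0.5, 0.0],
                 [0.0, 0.5, 1.0],
                 [0.0, 0.5, 1.0]])
labels = np.array([0, 0, 1, 1])
shared = select_shared_parts(resp, labels, 2)   # keeps parts 0 and 2
```

    Note that part 2 is selected even though it is most active for category 1, mirroring the abstract's point that parts from other categories can help.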

    Detector adaptation by maximising agreement between independent data sources

    Traditional methods for creating classifiers have two main disadvantages. Firstly, it is time-consuming to acquire, or manually annotate, the training collection. Secondly, the data on which the classifier is trained may be over-generalised or too specific. This paper presents our investigation into overcoming both of these drawbacks simultaneously, through example applications in which two data sources train each other. This removes the need for both supervised annotation and feedback, and allows rapid adaptation of the classifier to different data. Two applications are presented: one using thermal infrared and visual imagery to robustly learn changing skin models, and another using changes in saturation and luminance to learn shadow appearance parameters.
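    A minimal sketch of the agreement-maximisation idea, with hypothetical inputs: one source's detections act as pseudo-labels, and the other source's decision threshold is tuned to maximise agreement with them, so neither source needs manual annotation:

```python
import numpy as np

def adapt_threshold(reference_mask, scores, grid):
    """One source's binary output (e.g. a thermal skin detector) supplies
    pseudo-labels; the second source's threshold over `scores` is chosen to
    maximise per-pixel agreement with those labels."""
    best_t, best_agree = grid[0], -1.0
    for t in grid:
        pred = scores >= t
        agree = float(np.mean(pred == reference_mask))
        if agree > best_agree:
            best_t, best_agree = t, agree
    return best_t, best_agree

ref = np.array([1, 1, 0, 0, 1, 0], dtype=bool)   # pseudo-labels from source A
sc  = np.array([0.9, 0.7, 0.2, 0.4, 0.8, 0.1])   # raw scores from source B
t, agreement = adapt_threshold(ref, sc, np.linspace(0.0, 1.0, 101))
```

    Any threshold between 0.4 and 0.7 yields perfect agreement here; in the paper's setting this adaptation would run continually as conditions change.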

    An Overview of Classifier Fusion Methods

    A number of classifier fusion methods have been developed recently, opening an alternative approach that can lead to improved classification performance. As there is little theory of information fusion itself, we are currently faced with different methods designed for different problems and producing different results. This paper gives an overview of classifier fusion methods and attempts to identify new trends that may dominate this area of research in the future. A taxonomy of fusion methods, which tries to bring some order into the existing “pudding of diversities”, is also provided.
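    Several of the fixed combination rules such a taxonomy typically covers (mean, product, and max of the classifiers' class scores) can be sketched as follows; the toy score matrices are illustrative only:

```python
import numpy as np

def fuse_scores(score_matrices, rule="mean"):
    """Combine per-classifier score matrices (each n_samples x n_classes)
    with a fixed fusion rule, then predict the top-scoring class."""
    stacked = np.stack(score_matrices)   # (n_classifiers, n_samples, n_classes)
    if rule == "mean":
        fused = stacked.mean(axis=0)
    elif rule == "product":
        fused = stacked.prod(axis=0)
    elif rule == "max":
        fused = stacked.max(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return fused.argmax(axis=1)          # predicted class per sample

# Two toy classifiers scoring 2 samples over 3 classes:
a = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
b = np.array([[0.4, 0.4, 0.2], [0.1, 0.3, 0.6]])
pred = fuse_scores([a, b], "mean")
```

    Trainable combiners (e.g. a second-level classifier over the score vectors) form the other main branch of the taxonomy and are not sketched here.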

    Secretory vesicles are preferentially targeted to areas of low molecular SNARE density

    Intercellular communication is commonly mediated by the regulated fusion, or exocytosis, of vesicles with the cell surface. SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) proteins are the catalytic core of the secretory machinery, driving vesicle and plasma membrane merger. Plasma membrane SNAREs (tSNAREs) are proposed to reside in dense clusters containing many molecules, thus providing a concentrated reservoir to promote membrane fusion. However, biophysical experiments suggest that a small number of SNAREs is sufficient to drive a single fusion event. Here we show, using molecular imaging, that the majority of tSNARE molecules are spatially separated from secretory vesicles. Furthermore, the motilities of individual tSNAREs are constrained in membrane micro-domains, maintaining a non-random molecular distribution and limiting the maximum number of molecules encountered by secretory vesicles. Together, our results provide a new model for the molecular mechanism of regulated exocytosis and demonstrate the exquisite organization of the plasma membrane at the level of individual molecular machines.

    Bayesian correlated clustering to integrate multiple datasets

    Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct – but often complementary – information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously, including the ability to model time series data explicitly using Gaussian processes. Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured via parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real S. cerevisiae datasets. In the two-dataset case, we show that MDI's performance is comparable to the present state of the art. We then move beyond the capabilities of current approaches and integrate gene expression, ChIP-chip and protein-protein interaction data to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques, as well as to non-integrative approaches, demonstrate that MDI is very competitive, while also providing information that would be difficult or impossible to extract using other methods.
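    The cross-dataset agreement that MDI's parameters capture can be illustrated with a simple label-free statistic. The function below computes the Rand index between two cluster allocations, a much cruder, non-Bayesian stand-in for the model's agreement parameters, using made-up toy allocations:

```python
from itertools import combinations

def coclustering_agreement(z1, z2):
    """Fraction of item pairs on which two clusterings agree about whether
    the pair shares a cluster (the Rand index). Cluster labels need not be
    aligned across datasets; only the grouping pattern matters."""
    pairs = list(combinations(range(len(z1)), 2))
    same = sum((z1[i] == z1[j]) == (z2[i] == z2[j]) for i, j in pairs)
    return same / len(pairs)

# Two toy allocations of six items from two hypothetical datasets:
expr = [0, 0, 0, 1, 1, 2]   # e.g. a gene-expression clustering
chip = [5, 5, 4, 4, 4, 3]   # e.g. a ChIP-chip clustering (different label set)
score = coclustering_agreement(expr, chip)   # high when structure is shared
```

    MDI goes much further than this pairwise summary: it learns the degree of agreement jointly with the clusterings themselves, rather than comparing two fixed partitions after the fact.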

    Novel inferences of ionisation & recombination for particle/power balance during detached discharges using deuterium Balmer line spectroscopy

    The physics of divertor detachment is determined by divertor power, particle and momentum balance. This work provides a novel analysis technique of the Balmer line series to obtain a full particle/power balance measurement of the divertor, supplying new information to understand what controls the divertor target ion flux during detachment. Atomic deuterium excitation emission is separated quantitatively from recombination emission using Balmer series line ratios. This enables the two components to be analysed individually, providing ionisation/recombination sources/sinks and hydrogenic power loss measurements. Probabilistic Monte Carlo techniques were employed to obtain full error propagation, ultimately resulting in probability density functions for each output variable. Both local and overall particle and power balance in the divertor are then obtained. These techniques and their assumptions have been verified by applying the analysis to synthetic diagnostic 'measurements' obtained from SOLPS simulation results for the same discharge. Power/particle balance measurements have been obtained during attached and detached conditions on the TCV tokamak.
    Comment: The analysis results of this paper were formerly in arXiv:1810.0496
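    The probabilistic error propagation described above can be illustrated in miniature: draw the measured line ratio from its uncertainty distribution, push every draw through the mapping to the derived quantity, and read statistics off the resulting probability density. The linear mapping below is a hypothetical placeholder for the real atomic-data calibration, which is nonlinear:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(ratio_mean, ratio_sigma, n_draws=100_000):
    """Monte Carlo error propagation sketch: sample the measured Balmer line
    ratio from a Gaussian uncertainty model, map each draw to the derived
    quantity, and summarise the resulting probability density function."""
    draws = rng.normal(ratio_mean, ratio_sigma, n_draws)
    derived = 2.0 * draws + 1.0   # placeholder for the real (nonlinear) mapping
    return derived.mean(), derived.std()

mean, sigma = propagate(0.3, 0.05)   # uncertainty carried through the mapping
```

    For a linear mapping the output spread is exactly the scaled input spread; the Monte Carlo approach earns its keep when the mapping is nonlinear and the output density becomes skewed, as in the real analysis.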

    Video Classification With CNNs: Using The Codec As A Spatio-Temporal Activity Sensor

    We investigate video classification via a two-stream convolutional neural network (CNN) design that directly ingests information extracted from compressed video bitstreams. Our approach begins with the observation that all modern video codecs divide the input frames into macroblocks (MBs). We demonstrate that selective access to MB motion vector (MV) information within compressed video bitstreams also allows for selective, motion-adaptive MB pixel decoding (a.k.a. MB texture decoding). This in turn allows spatio-temporal video activity regions to be derived at extremely high speed in comparison to conventional full-frame decoding followed by optical flow estimation. To evaluate the accuracy of a video classification framework based on such activity data, we independently train two CNN architectures on MB texture and MV correspondences and then fuse their scores to derive the final classification of each test video. Evaluation on two standard datasets shows that the proposed approach is competitive with the best two-stream video classification approaches found in the literature. At the same time: (i) a CPU-based realization of our MV extraction is over 977 times faster than GPU-based optical flow methods; (ii) selective decoding is up to 12 times faster than full-frame decoding; (iii) our proposed spatial and temporal CNNs perform inference at 5 to 49 times lower cloud computing cost than the fastest methods from the literature.
    Comment: Accepted in IEEE Transactions on Circuits and Systems for Video Technology. Extension of an ICIP 2017 conference paper.
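    The score-fusion step of the two-stream design can be sketched as a weighted late fusion of the two CNNs' softmax outputs. The mixing weight and the toy score vectors are illustrative assumptions, not values from the paper:

```python
import numpy as np

def late_fuse(spatial_scores, temporal_scores, w=0.5):
    """Late-fusion sketch for a two-stream classifier: weight and add the
    softmax outputs of the texture (spatial) and motion-vector (temporal)
    CNNs, then take the argmax as the video's predicted class."""
    fused = w * spatial_scores + (1.0 - w) * temporal_scores
    return int(np.argmax(fused))

spatial  = np.array([0.7, 0.2, 0.1])   # texture-stream softmax for one video
temporal = np.array([0.1, 0.6, 0.3])   # motion-vector-stream softmax
print(late_fuse(spatial, temporal, w=0.4))   # → 1 (temporal stream dominates)
```

    Sweeping w on a validation set is one simple way to pick the mixing weight when the two streams have different reliabilities.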