
    Self-adjusted active contours using multi-directional texture cues

    Parameterization is an open issue in active contour research, associated with a cumbersome and time-consuming process of empirical adjustment. This work introduces a novel framework for the self-adjustment of region-based active contours, based on multi-directional texture cues. These cues are mined by applying filtering transforms characterized by multi-resolution, anisotropy, localization and directionality. This process yields entropy-based image "heatmaps", which are used to weight the regularization and data fidelity terms that guide contour evolution. Experimental evaluation is performed on a large benchmark dataset as well as on textured images. The segmentation results demonstrate that the proposed framework is capable of accelerating contour convergence while maintaining a segmentation quality comparable to that obtained by empirically adjusted active contours.
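    A minimal sketch of the heatmap-weighting idea, assuming a grayscale input image. The paper's multi-directional filtering transforms are replaced here by a plain local-entropy map for brevity; file names and weighting choices are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from skimage import io, img_as_ubyte
from skimage.filters.rank import entropy
from skimage.morphology import disk

def entropy_heatmap(gray_u8, radius=9):
    """Local entropy map, normalized to [0, 1]."""
    ent = entropy(gray_u8, disk(radius))
    return (ent - ent.min()) / (ent.max() - ent.min() + 1e-12)

# Hypothetical input file; any grayscale texture image will do.
image = img_as_ubyte(io.imread("texture.png", as_gray=True))
heat = entropy_heatmap(image)

# Illustrative weighting: textured (high-entropy) regions emphasize data fidelity,
# smooth regions emphasize regularization during contour evolution.
w_data = heat
w_reg = 1.0 - heat
```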

    Towards key-frame extraction methods for 3D video: a review

    The increasing rate of creation and use of 3D video content leads to a pressing need for methods capable of lowering the cost of 3D video searching, browsing and indexing operations, with improved content selection performance. Video summarisation methods specifically tailored for 3D video content fulfil these requirements. This paper presents a review of the state of the art of a crucial component of 3D video summarisation algorithms: the key-frame extraction methods. The methods reviewed cover 3D video key-frame extraction as well as shot boundary detection methods specific to 3D video. The performance metrics used to evaluate the key-frame extraction methods, and the summaries derived from those key-frames, are presented and discussed. The applications of these methods are also presented and discussed, followed by a discussion of current research challenges in 3D video summarisation.
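    For orientation, a minimal sketch of one generic key-frame extraction baseline (frame-difference thresholding). This is an illustrative 2D baseline only; the 3D-specific methods surveyed by the review are not reproduced here, and the threshold value is an assumption.

```python
import cv2
import numpy as np

def keyframes_by_difference(video_path, threshold=0.25):
    """Return indices of frames whose mean absolute difference from the
    last selected key-frame exceeds the threshold."""
    cap = cv2.VideoCapture(video_path)
    keyframes, last_key, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        if last_key is None or np.mean(np.abs(gray - last_key)) > threshold:
            keyframes.append(idx)
            last_key = gray
        idx += 1
    cap.release()
    return keyframes
```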

    Computational characterization of thyroid tissue in the radon domain

    This paper investigates a novel computational approach to thyroid tissue characterization in ultrasound images. It is based on the hypothesis that tissues in thyroid ultrasound images may be differentiated by directionality patterns. These patterns may not always be distinguishable by the human eye because of the dominant image noise. The encoding of the directional patterns in the thyroid ultrasound images is realized by means of Radon Transform features. A representative set of ultrasound images, acquired from 66 patients, was constructed to perform experiments that test the validity of the initial hypothesis. Supervised classification experiments showed that the proposed approach is capable of discriminating normal and nodular thyroid tissues, whereas nodular tissues can be further characterized as of high or low malignancy risk.
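    A minimal sketch of Radon-transform-based directional features, assuming grayscale ultrasound patches. The per-angle statistic (projection variance) and the SVM classifier are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np
from skimage.transform import radon
from sklearn.svm import SVC

def radon_directional_features(patch, n_angles=36):
    """Variance of Radon projections per angle, capturing directional patterns."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(patch, theta=theta)   # shape: (projection_bins, n_angles)
    return sinogram.var(axis=0)

# Hypothetical training data: patches with labels (0 = normal, 1 = nodular).
# X = np.array([radon_directional_features(p) for p in patches])
# clf = SVC(kernel="rbf").fit(X, labels)
```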

    3D object partial matching using panoramic views

    In this paper, a methodology for 3D object partial matching and retrieval based on range image queries is presented. The proposed methodology addresses the retrieval of complete 3D objects based on artificially created range image queries which represent partial views. The core methodology relies upon Dense SIFT descriptors computed on panoramic views. Performance evaluation builds upon standard measures and a challenging 3D pottery dataset originating from the Hampson Archeological Museum collection. © 2013 Springer-Verlag
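    A minimal sketch of dense SIFT on a panoramic view image using OpenCV. The grid step, keypoint size and matching strategy are illustrative assumptions, not the paper's exact retrieval pipeline.

```python
import cv2
import numpy as np

def dense_sift(panorama_gray, step=8, size=8):
    """Compute SIFT descriptors on a regular grid of keypoints over a
    (uint8, grayscale) panoramic view image."""
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), float(size))
                 for y in range(0, panorama_gray.shape[0], step)
                 for x in range(0, panorama_gray.shape[1], step)]
    _, descriptors = sift.compute(panorama_gray, keypoints)
    return descriptors  # shape: (n_keypoints, 128)

# A partial range-image query could then be matched against the panoramic views
# of complete objects, e.g. by nearest-neighbour descriptor matching.
```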

    An LBP-Based Active Contour Algorithm for Unsupervised Texture Segmentation

    This paper presents a novel algorithm for unsupervised texture segmentation. The proposed algorithm incorporates the Local Binary Pattern operator into a segmentation framework based on the Active Contour Without Edges model. The experiments performed show that it can be used for fast segmentation of two-textured images, outperforming recent texture segmentation algorithms, with a segmentation quality that reaches 99% on average.
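    A minimal sketch combining an LBP texture map with the Chan-Vese (Active Contour Without Edges) model from scikit-image. Parameter values and the input file name are illustrative, and the paper's exact coupling of LBP into the energy functional is not reproduced here.

```python
import numpy as np
from skimage import io
from skimage.feature import local_binary_pattern
from skimage.segmentation import chan_vese

# Hypothetical two-textured grayscale input image.
image = io.imread("two_textures.png", as_gray=True)

# Uniform LBP map summarizing local texture, normalized to [0, 1].
lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
lbp = (lbp - lbp.min()) / (lbp.max() - lbp.min() + 1e-12)

# Region-based contour evolution on the texture representation instead of raw intensity.
mask = chan_vese(lbp, mu=0.25)
```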

    Image Analysis Framework for Infection Monitoring


    Hybrid representation of sensor data for the classification of driving behaviour

    Monitoring driving behaviour is important in controlling driving risk, fuel consumption, and CO2 emissions. Recent advances in machine learning, which include several variants of convolutional neural networks (CNNs) and recurrent neural networks (RNNs), such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks, could be valuable for the development of objective and efficient computational tools in this direction. The main idea in this work is to complement data-driven classification of driving behaviour with rules derived from domain knowledge. In this light, we present a hybrid representation approach, which employs NN-based time-series encoding and rule-guided event detection. Histograms derived from the output of these two components are concatenated, normalized, and used to train a standard support vector machine (SVM). For the NN-based component, CNN-based, LSTM-based, and GRU-based variants are investigated. The CNN-based variant uses image-like representations of sensor measurements, whereas the RNN-based variants (LSTM and GRU) directly process sensor measurements in the form of time-series. Experimental evaluation on three datasets leads to the conclusion that the proposed approach outperforms state-of-the-art camera-based approaches in distinguishing between normal and aggressive driving behaviour without using data derived from a camera. Moreover, it is demonstrated that both NN-guided time-series encoding and rule-guided event detection contribute to the overall classification accuracy. © 2021 by the authors. Licensee MDPI, Basel, Switzerland
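    A minimal sketch of the fusion step, assuming per-trip histograms from the NN-based time-series encoder and the rule-guided event detector are already available. The NN architectures themselves are not reproduced, and the normalization and SVM kernel are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def hybrid_feature(nn_histogram, rule_histogram):
    """Concatenate the two histograms and L1-normalize the result."""
    h = np.concatenate([nn_histogram, rule_histogram]).astype(np.float64)
    return h / (h.sum() + 1e-12)

# Hypothetical data: one (nn_hist, rule_hist) pair per driving trip, with labels
# 0 = normal, 1 = aggressive.
# X = np.array([hybrid_feature(nn_h, rule_h) for nn_h, rule_h in trips])
# clf = SVC(kernel="rbf").fit(X, labels)
```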