17 research outputs found

    A Framework of Vertebra Segmentation Using the Active Shape Model-Based Approach

    Get PDF
    We propose a medical image segmentation approach based on the Active Shape Model theory. We apply this method to cervical vertebra detection. The main advantage of this approach is the use of a statistical model built during a training stage, so the knowledge and interaction of the domain expert are incorporated into the approach. Our application allows the use of two different models: a global one (with several vertebrae) and a local one (with a single vertebra). Two modes of segmentation are also proposed: manual and semiautomatic. In the manual mode, only two points are selected by the user on a given image: the first point needs to be close to the lower anterior corner of the last vertebra and the second near the upper anterior corner of the first vertebra. These two points initialize the segmentation process. For the semiautomatic mode, we propose to use the Harris corner detector combined with three successive filters. The results obtained on a large set of X-ray images are very promising.
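
    The semiautomatic initialization described above could, for instance, be prototyped with OpenCV's Harris detector. The sketch below is a hedged illustration, not the authors' pipeline: the particular filters (Gaussian blur, histogram equalization, dilation of the response map) and the threshold are assumptions.

import cv2
import numpy as np

def candidate_corners(xray_path, quality=0.01):
    img = cv2.imread(xray_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)               # filter: denoise
    img = cv2.equalizeHist(img)                          # filter: enhance contrast
    response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
    response = cv2.dilate(response, None)                # filter: merge nearby responses
    ys, xs = np.where(response > quality * response.max())
    return list(zip(xs.tolist(), ys.tolist()))           # candidate (x, y) corner points

    The two retained candidates nearest the anterior vertebra corners would then seed the Active Shape Model search, mimicking the manual two-point selection.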

    Bounded Simplex-Structured Matrix Factorization: Algorithms, Identifiability and Applications

    Full text link
    In this paper, we propose a new low-rank matrix factorization model dubbed bounded simplex-structured matrix factorization (BSSMF). Given an input matrix X and a factorization rank r, BSSMF looks for a matrix W with r columns and a matrix H with r rows such that X ≈ WH, where the entries in each column of W are bounded, that is, they belong to given intervals, and the columns of H belong to the probability simplex, that is, H is column stochastic. BSSMF generalizes nonnegative matrix factorization (NMF) and simplex-structured matrix factorization (SSMF). BSSMF is particularly well suited when the entries of the input matrix X belong to a given interval; for example, when the rows of X represent images, or when X is a rating matrix such as in the Netflix and MovieLens datasets, where the entries of X belong to the interval [1,5]. The simplex-structured matrix H not only leads to an easily understandable decomposition providing a soft clustering of the columns of X, but also implies that the entries of each column of WH belong to the same intervals as the columns of W. In this paper, we first propose a fast algorithm for BSSMF, even in the presence of missing data in X. Then we provide identifiability conditions for BSSMF, that is, conditions under which BSSMF admits a unique decomposition, up to trivial ambiguities. Finally, we illustrate the effectiveness of BSSMF on two applications: extraction of features in a set of images, and the matrix completion problem for recommender systems.
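
    A minimal sketch of the BSSMF structure, assuming a plain alternating projected-gradient scheme rather than the fast algorithm proposed in the paper: the entries of W are clipped to a given interval [lo, hi] and each column of H is projected onto the probability simplex.

import numpy as np

def project_simplex(v):
    # Euclidean projection of a vector onto the probability simplex
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def project_simplex_columns(H):
    return np.apply_along_axis(project_simplex, 0, H)

def bssmf_sketch(X, r, lo=0.0, hi=1.0, n_iter=500, step=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.uniform(lo, hi, (m, r))
    H = project_simplex_columns(rng.random((r, n)))
    for _ in range(n_iter):
        R = W @ H - X                                        # residual
        W = np.clip(W - step * (R @ H.T), lo, hi)            # bounded entries of W
        H = project_simplex_columns(H - step * (W.T @ R))    # column-stochastic H
    return W, H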

    Real time web-based toolbox for computer vision

    Get PDF
    The last few years have been strongly marked by the presence of multimedia data (images and videos) in our everyday lives. These data are characterized by a fast pace of creation and sharing, since images and videos can come from different devices such as cameras, smartphones or drones. The latter are generally used to illustrate objects in different situations (airports, hospitals, public areas, sport games, etc.). As a result, image and video processing algorithms have gained increasing importance for several computer vision applications such as motion tracking, event detection and recognition, multimedia indexing and medical computer-aided diagnosis methods. In this paper, we propose a real-time cloud-based toolbox (platform) for computer vision applications. This platform integrates a toolbox of image and video processing algorithms that can be run in real time and in a secure way. The related libraries and hardware drivers are automatically integrated and configured in order to offer users access to the different algorithms without the need to download, install or configure software or hardware. Moreover, the platform offers access to the integrated applications to multiple users thanks to the use of Docker (Merkel, 2014) containers and images. Experiments were conducted with three kinds of algorithms: 1. an image processing toolbox; 2. a video processing toolbox; 3. 3D medical methods such as computer-aided diagnosis for scoliosis and osteoporosis. These experiments demonstrated the interest of our platform for sharing scientific contributions in the computer vision domain: researchers can easily develop and share their applications quickly and safely.
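
    As a rough illustration of the container-based design, the snippet below uses the Docker SDK for Python to run one processing step in an isolated container. The image name and its command line are hypothetical placeholders, not components of the platform.

import docker

client = docker.from_env()
logs = client.containers.run(
    image="vision-toolbox/edge-detect",          # hypothetical algorithm image
    command=["--input", "/data/frame.png"],      # hypothetical CLI of that image
    volumes={"/srv/uploads": {"bind": "/data", "mode": "ro"}},
    remove=True,                                 # discard the container when done
)
print(logs.decode())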

    Using matrix factorisation for the prediction of electrical quantities

    Get PDF
    The prediction task is attracting more and more attention in the power system community. Accurate predictions of electrical quantities up to a few hours ahead (e.g. renewable production, electrical load, etc.) are crucial, for instance, for distribution system operators to operate their network in the presence of a high share of renewables, or for energy producers to maximise their profits by optimising their portfolio management. In the literature, statistical approaches are usually proposed to predict electrical quantities. In the present study, the authors present a novel method based on matrix factorisation. The authors' approach is inspired by the literature on data mining and knowledge discovery and the methodologies involved in recommender systems. The idea is to transpose the problem of predicting ratings in a recommender system to the problem of forecasting electrical quantities in a power system. Preliminary results on a real wind speed dataset tend to show that the matrix factorisation model provides results similar to those of autoregressive integrated models in terms of accuracy (MAE and RMSE). The authors' approach is nevertheless highly scalable and can deal with noisy or missing data.
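
    The transposition from ratings to measurements can be pictured as follows. This is a hedged sketch assuming a days x hours wind-speed matrix and a basic gradient-descent factorisation, not the authors' model.

import numpy as np

def mf_forecast(M, observed, r=3, n_iter=2000, step=1e-3, reg=0.1, seed=0):
    # M: days x hours matrix of wind speeds; observed: boolean mask of known cells
    rng = np.random.default_rng(seed)
    d, h = M.shape
    W, H = rng.random((d, r)), rng.random((r, h))
    for _ in range(n_iter):
        E = np.where(observed, W @ H - M, 0.0)   # error on observed cells only
        W -= step * (E @ H.T + reg * W)
        H -= step * (W.T @ E + reg * H)
    return W @ H                                 # reconstruction; missing cells act as forecasts

    Accuracy would then be measured on the held-out cells with MAE and RMSE, as the abstract reports.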

    Adapted Collaborative Filtering Algorithms through Diversity and Novelty

    No full text
    In today’s society, the quantity of available data is exploding, and recommender systems are tools that enable processing those data. With the use of such tools, drawbacks in the quality of the recommended content, e.g. poor diversity and novelty, can arise. This paper presents two approaches that modify a classical user-based collaborative filtering process with the aim of improving diversity and/or novelty while maintaining a good level of recall. The first approach uses information about item popularity to alter the process: only unpopular items are taken into account in the neighborhood determination, leading to more novelty in the recommendation lists. The second approach, a reranking technique, allows deciding the level of similarity within the recommendation lists proposed to users; tuning this criterion impacts both diversity and novelty.
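
    A hedged sketch of the first approach, assuming a cosine similarity and a simple popularity threshold (both illustrative choices): the neighborhood of a user is computed only on items rated by few users.

import numpy as np

def neighbours_on_unpopular(R, user, k=10, pop_cut=20):
    # R: users x items rating matrix, 0 meaning "not rated"
    popularity = (R > 0).sum(axis=0)
    Rn = R[:, popularity < pop_cut]              # keep only unpopular items
    target = Rn[user]
    norms = np.linalg.norm(Rn, axis=1) * np.linalg.norm(target) + 1e-12
    sims = (Rn @ target) / norms                 # cosine similarity on niche items
    sims[user] = -np.inf                         # exclude the target user
    return np.argsort(sims)[::-1][:k]            # indices of the k most similar users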

    Towards interpretable machine learning models for diagnosis aid: A case study on attention deficit/hyperactivity disorder.

    No full text
    Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder that has heavy consequences on a child's wellbeing, especially on the academic, psychological and relational planes. The current evaluation of the disorder relies on clinical assessment and written tests, and a definitive diagnosis is usually made based on the DSM-V criteria. There is a lot of ongoing research on ADHD aiming to determine the neurophysiological basis of the disorder and to reach a more objective diagnosis. The advent of Machine Learning (ML) opens up promising prospects for the development of systems able to predict a diagnosis from phenotypic and neuroimaging data; this was the reason why the ADHD-200 contest was launched a few years ago. Based on the publicly available ADHD-200 collection, participants were challenged to predict ADHD with the best possible predictive accuracy. In the present work, we instead propose an ML methodology which primarily places importance on the explanatory power of a model. Such an approach is intended to achieve a fair trade-off between the performance and interpretability expected from medical diagnosis aid systems. We applied our methodology to a data sample extracted from the ADHD-200 collection, through the development of decision trees, which are valued for their readability. Our analysis indicates the relevance of the limbic system for the diagnosis of the disorder. Moreover, while providing explanations that make sense, the resulting decision tree performs favorably compared with recent results reported in the literature.
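
    The readability argument can be illustrated with a shallow scikit-learn decision tree exported as text rules; the data-loading step and feature names are placeholders, not the actual ADHD-200 pipeline.

from sklearn.tree import DecisionTreeClassifier, export_text

def readable_tree(X, y, feature_names, max_depth=3):
    # a shallow tree keeps the rule set short enough to be read and checked by a clinician
    clf = DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(X, y)
    rules = export_text(clf, feature_names=list(feature_names))
    return clf, rules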

    Three-dimensional spine model reconstruction using one-class SVM regularization

    No full text
    Statistical shape models have become essential for medical image registration and segmentation and are used in many biomedical applications. These models are often based on Gaussian distributions learned from a training set. We propose in this paper a shape model which does not rely on the estimation of a Gaussian distribution, but on similarities computed with a kernel function; our model takes advantage of the one-class support vector machine (OCSVM) to do so. In this context, we propose a method for reconstructing the spine of scoliotic patients using OCSVM regularization. Current state-of-the-art methods use conventional statistical shape models, and the reconstruction is commonly processed by minimizing a Mahalanobis distance. Nevertheless, when a shape differs significantly from the statistical model, the associated Mahalanobis distance often overstates the need for statistical regularization. We show that OCSVM regularization is more robust, less sensitive to weakly defined landmarks, and hardly influenced by the presence of outliers in the training data. The proposed OCSVM model applied to 3D spine reconstruction was evaluated on real patient data, and results showed that our approach allows precise reconstruction.
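
    The contrast between the two regularizers can be sketched as below, assuming flattened landmark vectors, an RBF kernel and nu=0.1 (all illustrative); the authors' reconstruction pipeline is more involved.

import numpy as np
from sklearn.svm import OneClassSVM

def fit_shape_priors(shapes):
    # shapes: n_shapes x n_coords matrix of flattened 3D landmark vectors
    mean = shapes.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(shapes, rowvar=False))
    ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(shapes)
    return mean, cov_inv, ocsvm

def mahalanobis_penalty(shape, mean, cov_inv):
    d = shape - mean
    return float(d @ cov_inv @ d)                # grows quadratically for atypical shapes

def ocsvm_penalty(shape, ocsvm):
    # decision_function is high for typical shapes; its negation penalises
    # implausible shapes without the quadratic blow-up of the Mahalanobis term
    return float(-ocsvm.decision_function(shape.reshape(1, -1))[0])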

    Recommender systems: the case of repeated interaction in matrix factorization

    No full text
    This work presents a new matrix factorization recommender system approach that takes repeated interaction into account. We analyze whether and how users' repeated interaction behavior, such as repeat purchases, can be integrated into a recommender system. We develop a method that takes advantage of this additional data dimension, which is studied in many other fields to derive useful conclusions. Furthermore, we empirically test our method on real-life retailer data and on the Last.fm dataset, and compare our algorithm with popular matrix factorization approaches. Results indicate that our method slightly outperforms the existing methods.
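
    One simple way to picture the use of repeated interactions, offered here only as a hedged sketch in the spirit of confidence-weighted factorization and not as the paper's method: weight each user-item cell by its interaction count.

import numpy as np

def weighted_mf(counts, r=10, alpha=0.5, n_iter=200, step=1e-3, reg=0.05, seed=0):
    # counts: users x items matrix of interaction counts (0 = never interacted)
    rng = np.random.default_rng(seed)
    n_u, n_i = counts.shape
    P = (counts > 0).astype(float)               # binary preference signal
    C = 1.0 + alpha * counts                     # confidence grows with repetitions
    W, H = rng.random((n_u, r)), rng.random((r, n_i))
    for _ in range(n_iter):
        E = C * (W @ H - P)                      # confidence-weighted error
        W -= step * (E @ H.T + reg * W)
        H -= step * (W.T @ E + reg * H)
    return W @ H                                 # scores used to rank items per user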