278 research outputs found

    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and in applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, by reviewing the literature and relating the existing works through both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing. Comment: 10 pages, 19 figures.
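    The collection-level reasoning described above can be illustrated with a minimal sketch: shapes are represented as descriptor vectors (purely hypothetical here), and a semantic label is transferred to a new shape by majority vote among its nearest neighbours in the collection. This is a toy under assumed data, not any specific method from the survey.

```python
import numpy as np

def knn_label_transfer(query_desc, collection_descs, collection_labels, k=3):
    """Transfer a semantic label to a query shape by majority vote
    among its k nearest neighbours in descriptor space."""
    dists = np.linalg.norm(collection_descs - query_desc, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [collection_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

    The same aggregate-then-transfer pattern underlies many of the data-driven segmentation and classification techniques the report reviews.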

    Fuzzy model predictive control. Complexity reduction by functional principal component analysis

    In model-based predictive control, the controller runs a real-time optimisation to obtain the best control action: an optimisation problem is solved to identify the control action that minimises a cost function defined over the process predictions. Owing to the computational load of the algorithms, predictive control subject to constraints is not suitable to run on every hardware platform. Predictive control techniques have been well known in the process industry for decades, and the application of advanced model-based control techniques is becoming increasingly attractive in other fields, such as building automation, smartphones and wireless sensor networks, whose hardware platforms have never been known for high computing power. The main purpose of this thesis is to establish a methodology for reducing the computational complexity of applying nonlinear model-based predictive control subject to constraints on hardware platforms with low computational power, allowing a realistic implementation based on industry standards. The methodology applies functional principal component analysis, which provides a mathematically elegant approach to reducing the complexity of rule-based systems, such as fuzzy and piecewise affine systems, and thereby the computational load of model-based predictive control, whether or not it is subject to constraints. Using fuzzy inference systems not only allows the modelling of nonlinear or complex systems but also provides a formal structure that enables the implementation of the aforementioned complexity-reduction technique.
In addition to its theoretical contributions, this thesis describes work carried out on real plants on which fuzzy modelling and control tasks were performed. One of the objectives of the research and development period has been experimentation with fuzzy systems, their simplification, and their application to industrial systems. The thesis provides a practical knowledge framework based on experience.
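    The complexity-reduction idea can be sketched numerically: sample each rule's output function on a common grid, then use the SVD (the computational core of functional principal component analysis) to approximate the whole rule bank by a few principal functions plus per-rule scores. This is a generic illustration under assumed data, not the thesis's actual formulation.

```python
import numpy as np

def reduce_rule_bank(F, n_components=2):
    """Approximate a bank of rule output functions (rows of F, sampled
    on a common grid) by their first principal components, as in
    functional PCA.  Returns the mean function, the principal
    functions, and the per-rule scores."""
    mean = F.mean(axis=0)
    Fc = F - mean
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
    comps = Vt[:n_components]      # principal functions
    scores = Fc @ comps.T          # low-dimensional coordinates per rule
    return mean, comps, scores

def reconstruct(mean, comps, scores):
    """Rebuild the (approximate) rule bank from the reduced model."""
    return mean + scores @ comps
```

    Evaluating a few principal functions plus small score vectors is what lets the reduced controller run on low-power hardware in place of the full rule base.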

    Trajectory based video analysis in multi-camera setups

    This thesis presents an automated framework for activity analysis in multi-camera setups. We start with the calibration of cameras, particularly those without overlapping views. An algorithm is presented that exploits trajectory observations in each view and works iteratively on camera pairs. First, outliers are identified and removed from the observations of each camera. Next, spatio-temporal information derived from the available trajectory is used to estimate unobserved trajectory segments in areas uncovered by the cameras. The unobserved trajectory estimates are used to estimate the relative position of each camera pair, whereas the exit-entrance direction of each object is used to estimate their relative orientation. The process continues and iteratively approximates the configuration of all cameras with respect to each other. Finally, we refine the initial configuration estimates with bundle adjustment, based on the observed and estimated trajectory segments. For cameras with overlapping views, state-of-the-art homography-based approaches are used for calibration. Next, we establish object correspondence across multiple views. Our algorithm consists of three steps, namely association, fusion and linkage. For association, local trajectory pairs corresponding to the same physical object are estimated using multiple spatio-temporal features on a common ground plane. To disambiguate spurious associations, we employ a hybrid approach that utilises the matching results on both the image plane and the ground plane. The trajectory segments after association are fused by adaptive averaging. Trajectory linkage then integrates segments and generates a single trajectory for each object across the entire observed area. Finally, for activity analysis, clustering is applied to complete trajectories.
Our clustering algorithm is based on four main steps, namely the extraction of a set of representative trajectory features, non-parametric clustering, cluster merging, and information fusion for the identification of normal and rare object motion patterns. First, we transform the trajectories into a set of feature spaces in which Mean Shift identifies the modes and the corresponding clusters. Furthermore, a merging procedure is devised to refine these results by combining similar adjacent clusters. The final common patterns are estimated by fusing the clustering results across all feature spaces. Clusters corresponding to recurring trajectories are considered normal, whereas sparse trajectories are associated with abnormal and rare events. The performance of the proposed framework is evaluated on standard data-sets and compared with state-of-the-art techniques. Experimental results show that the proposed framework outperforms state-of-the-art algorithms in terms of both accuracy and robustness.
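    The mode-seeking and rare-event steps can be sketched with a small flat-kernel mean-shift implementation; the thesis's actual feature spaces, bandwidths and fusion scheme are not reproduced, and the feature vectors below are hypothetical.

```python
import numpy as np

def mean_shift_modes(X, bandwidth=2.0, n_iter=30):
    """Repeatedly move each point to the mean of the original points
    within `bandwidth` of it, so it settles on a density mode."""
    Y = X.astype(float).copy()
    for _ in range(n_iter):
        for i, y in enumerate(Y):
            mask = np.linalg.norm(X - y, axis=1) <= bandwidth
            Y[i] = X[mask].mean(axis=0)
    return Y

def find_rare_patterns(X, bandwidth=2.0, min_support=3):
    """Label each trajectory-feature vector by the mode it converges
    to; clusters with fewer than `min_support` members are flagged
    as rare events."""
    modes = np.round(mean_shift_modes(X, bandwidth), 3)
    _, labels, counts = np.unique(modes, axis=0,
                                  return_inverse=True, return_counts=True)
    return labels, counts[labels] < min_support
```

    Densely populated modes stand in for the "normal" reoccurring patterns; an isolated trajectory converges to its own mode and is flagged as rare.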

    Semi-Automatic Segmentation of Normal Female Pelvic Floor Structures from Magnetic Resonance Images

    Stress urinary incontinence (SUI) and pelvic organ prolapse (POP) are important health issues affecting millions of American women. Investigation of the causes of SUI and POP requires a better understanding of the anatomy of the female pelvic floor. In addition, pre-surgical planning and individualized treatment plans require the development of patient-specific three-dimensional or virtual reality models. The biggest challenge in building those models is segmenting the pelvic floor structures from magnetic resonance images, because their complex shapes make manual segmentation labor-intensive and inaccurate. In this dissertation, a quick and reliable semi-automatic segmentation method based on a shape model is proposed. The model is built on statistical analysis of the shapes of the structures in a training set. A local feature map of the target image is obtained by applying a filtering pipeline, including contrast enhancement, noise reduction, smoothing, and edge extraction. With the shape model and feature map, automatic segmentation is performed by matching the model to the border of the structure using an optimization technique called evolution strategy. Segmentation performance is evaluated by calculating a similarity coefficient between the semi-automatic and manual segmentation results. Taguchi analysis is performed to investigate the significance of the segmentation parameters and to provide tuning trends for better performance. The proposed method was successfully tested on both two-dimensional and three-dimensional image segmentation, using the levator ani and obturator muscles as examples. Although the method is designed for segmentation of female pelvic floor structures, it can also be applied to other structures or organs without large shape variation.
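    The abstract does not name the similarity coefficient used for evaluation; a common choice for comparing a semi-automatic binary mask against a manual one is the Dice coefficient, sketched here as an assumption rather than the dissertation's stated metric.

```python
import numpy as np

def dice_coefficient(a, b):
    """Similarity between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|); 1.0 means identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0
```

    A value near 1.0 indicates close agreement with the manual ground truth; disjoint masks score 0.0.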

    Processing and analysis of chromatographic data

    Data pre-processing and analysis techniques are investigated for the analysis of one- and two-dimensional chromatographic data. Pre-processing, in particular alignment, is of paramount importance when employing multivariate chemometric methods, as these techniques highlight variance, or changes between samples at corresponding variables (i.e. retention times). Principal components analysis (PCA) was employed to evaluate the effectiveness of alignment. Two methods, correlation optimised warping and icoshift, were compared for the alignment of high performance liquid chromatography (HPLC) metabolite data. PCA was then employed as an exploratory technique to investigate the influence of phosphite on the secondary metabolites associated with Lupinus angustifolius roots inoculated with the pathogen Phytophthora cinnamomi. In a second application, HPLC with acidic potassium permanganate chemiluminescence detection was evaluated for the analysis of Australian wines from different geographic origins and vintages. Linear discriminant analysis and quadratic discriminant analysis were used to classify red and white wines according to geographic origin. In the analysis of wine vintage, partial least squares and principal components regression were compared for modelling sample composition against wine age. Finally, software was developed for quality control (QC) of flavours and fragrances using comprehensive two-dimensional gas chromatography (GC×GC). The software aims to automatically align and compare a sample chromatogram to a reference chromatogram. A simple method of partitioning the two-dimensional pattern space was employed to select reference control points. Corresponding control points in a sample chromatogram were identified using a triangle-pattern matching algorithm.
The reference and sample control points were then used to calculate the translation, scaling and rotation operations for an affine transform, which is applied to the complete sample peak list in order to align reference and sample peaks. Comparison of reference and sample chromatograms was achieved through the use of fuzzy logic. It is concluded that the pre-processing and chemometric methods investigated here are valuable tools for the analysis of chromatographic data. The developed GC×GC software was successfully employed to analyse real flavour samples for QC purposes.
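    The affine-alignment step can be sketched as a least-squares fit: given matched control points, solve for the affine map (translation, scaling, rotation/shear) and apply it to every peak in the sample list. The point values below are hypothetical, and the triangle-pattern matching step that produces the correspondences is omitted.

```python
import numpy as np

def fit_affine(ref_pts, sample_pts):
    """Least-squares affine map taking sample control points onto the
    reference control points: [x, y, 1] @ A -> aligned (x, y)."""
    X = np.hstack([sample_pts, np.ones((len(sample_pts), 1))])
    A, *_ = np.linalg.lstsq(X, ref_pts, rcond=None)
    return A  # shape (3, 2)

def align_peaks(peaks, A):
    """Apply the fitted affine transform to a full 2D peak list."""
    X = np.hstack([peaks, np.ones((len(peaks), 1))])
    return X @ A
```

    With at least three non-collinear control-point pairs the fit is determined; extra pairs are averaged in the least-squares sense, which damps small retention-time jitter.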

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computers' analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life: they improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. Indeed, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computational capability and efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for the development of novel approaches.

    Deep Time-Series Clustering: A Review

    We present a comprehensive, detailed review of time-series data analysis, with emphasis on deep time-series clustering (DTSC), together with a case study of movement behavior clustering that utilizes a deep clustering method. Specifically, we modified DCAE (deep convolutional autoencoder) architectures to suit time-series data at the time of our prior deep clustering work. Several works on deep clustering of time-series data have since been carried out; we review these as well, identify the state of the art, and present an outlook on this important field of DTSC from five important perspectives.
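    The DTSC pipeline reviewed above, encoding each series into a low-dimensional latent space and clustering there, can be sketched with a linear stand-in for the autoencoder (PCA via the SVD) and a small k-means loop; the DCAE architectures discussed in the review are not reproduced here.

```python
import numpy as np

def encode(series, n_latent=2):
    """Stand-in encoder: PCA projection of each row (one time series)
    into a small latent space.  A deep convolutional autoencoder
    plays this role in deep time-series clustering."""
    centred = series - series.mean(axis=0)
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ Vt[:n_latent].T

def kmeans(Z, k, n_iter=50):
    """Plain k-means in the latent space, initialised from the first
    k points (assumed to lie in distinct clusters)."""
    centres = Z[:k].copy()
    for _ in range(n_iter):
        labels = np.argmin(((Z[:, None, :] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([Z[labels == j].mean(axis=0) for j in range(k)])
    return labels
```

    Replacing the PCA encoder with a trained autoencoder, and optimising the clustering and reconstruction objectives jointly, is what distinguishes the deep methods the review surveys from this linear sketch.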

    Segmentation of images by color features: a survey

    Image segmentation is an important stage in object recognition. Many methods have been proposed in recent years for grayscale and color images. In this paper, we present a deep review of the state of the art in color image segmentation methods; we cover techniques based on edge detection, thresholding, histogram thresholding, regions, feature clustering and neural networks. Because color spaces play a key role in the methods reviewed, we also explain in detail the color spaces most commonly used to represent and process colors. In addition, we present some important applications that use the reviewed segmentation methods. Finally, a set of metrics frequently used to quantitatively evaluate the segmented images is presented.
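    Of the threshold-based families the survey lists, Otsu's histogram method is a standard representative; a compact sketch follows, on a grayscale channel for simplicity (the survey itself treats full color spaces).

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Pick the intensity threshold that maximises the between-class
    variance of the histogram (Otsu's method)."""
    hist, edges = np.histogram(gray, bins=nbins, range=(0, 256))
    p = hist / hist.sum()
    centres = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)           # weight of the "dark" class
    m = np.cumsum(p * centres)  # its unnormalised mean
    mT = m[-1]                  # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mT * w0 - m) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centres[np.argmax(sigma_b)]
```

    Pixels above the returned threshold form one class and the rest the other; for color images the same idea is typically applied per channel or in a transformed color space.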