
    A multi-objective approach for the segmentation issue

    Special Issue: Multi-objective metaheuristics for multi-disciplinary engineering applications. This work presents and formalizes an explicit multi-objective evolutionary approach to the segmentation problem under Piecewise Linear Representation, which consists of approximating a given digital curve by a set of linear models while minimizing both the representation error and the number of models required. Available techniques focus on minimizing the representation error of the approximation, with the cost of that approximation generally considered only for comparison purposes. The multi-objective nature of the problem is analysed, its treatment in available works is reviewed, and an a posteriori approach based on an evolutionary algorithm is presented. Three representative curves are included in the data set, and the proposed technique is compared to nine other techniques. The performance of the presented approach is evaluated from both single- and multi-objective perspectives. The statistical tests carried out show that the experimental results are, in general, significantly better than those of available approaches from both perspectives. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
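    A minimal sketch of the two competing objectives named above, representation error versus number of linear pieces, may make the trade-off concrete. The segmentation encoding (a sorted list of breakpoint indices whose consecutive pairs are joined by straight lines) and the squared-error measure are illustrative assumptions, not the paper's exact formulation, and the evolutionary search itself is omitted.

```python
import numpy as np

def segment_error(y, breakpoints):
    """Sum of squared errors when each stretch between consecutive breakpoints
    is replaced by the straight line joining its two endpoint samples.
    `breakpoints` are sorted indices into `y`, including 0 and len(y) - 1."""
    total = 0.0
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        x = np.arange(a, b + 1)
        # Straight line through the two endpoint samples of the segment.
        line = y[a] + (y[b] - y[a]) * (x - a) / max(b - a, 1)
        total += float(np.sum((y[x] - line) ** 2))
    return total

def objectives(y, breakpoints):
    """The two competing objectives: representation error and segment count."""
    return segment_error(y, breakpoints), len(breakpoints) - 1

# Example: two candidate segmentations of a piecewise-linear signal.
y = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0.2, 50)])
coarse = [0, 99]        # one segment: cheap but inaccurate
fine = [0, 49, 99]      # two segments: matches the true breakpoint
print(objectives(y, coarse), objectives(y, fine))
```

    An a posteriori multi-objective evolutionary algorithm of the kind described above would evolve a population of such breakpoint lists and retain the Pareto-optimal trade-offs between these two values.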

    ALGORITHMIC METHODS FOR SEGMENTATION OF TIME SERIES: AN OVERVIEW

    Adaptive and innovative application of classical data mining principles and techniques to time series analysis has resulted in the development of a concept known as time series data mining. Since time series are present in all areas of business and scientific research, the appeal of mining time series datasets should be seen not only in the context of the research challenges it poses to the scientific community, but also in terms of the usefulness of the research results as support for business decision-making. A fundamental component of the time series data mining process is time series segmentation. As a data mining research problem, segmentation is focused on the discovery of rules in the movements of observed phenomena in the form of interpretable, novel, and useful temporal patterns. This paper provides a comprehensive review, including elements of comparative analysis, of the most commonly used algorithms for the segmentation of time series.
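    As a concrete point of reference for the class of algorithms such an overview covers, the sketch below implements the classic sliding-window segmentation scheme: extend the current window while a least-squares straight line still fits it, then close the segment and start a new one. It is a generic textbook method assumed here for illustration, not taken from the paper, and the error threshold is arbitrary.

```python
import numpy as np

def sliding_window_segmentation(y, max_error):
    """Greedy sliding-window segmentation: keep extending the current window
    while a least-squares line fits it within `max_error` (sum of squared
    residuals); otherwise close the segment and start a new one at the
    previous end point. Returns a list of (start, end) index pairs."""
    segments, start, n = [], 0, len(y)
    while start < n - 1:
        end = start + 1
        while end < n:
            x = np.arange(start, end + 1)
            slope, intercept = np.polyfit(x, y[start:end + 1], 1)
            err = float(np.sum((y[start:end + 1] - (slope * x + intercept)) ** 2))
            if err > max_error:
                break
            end += 1
        segments.append((start, end - 1))
        start = end - 1  # adjacent segments share their boundary point
    return segments

# Toy example: a noisy triangle wave should split near its apex.
rng = np.random.default_rng(0)
y = np.concatenate([np.linspace(0, 1, 40), np.linspace(1, 0, 40)])
y = y + 0.01 * rng.standard_normal(len(y))
print(sliding_window_segmentation(y, max_error=0.05))
```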

    An investigation into the recurring patterns of forex time series data

    Countless theories have been developed by both researchers and financial analysts in an attempt to explain the fluctuation of forex prices. By obtaining an intimate understanding of the forex market, traders will hopefully be able to forecast and react to forex price oscillations on the fly and make profitable investments. In this paper, we investigate the underlying theory that repeating patterns exist within the time series data, which forms the basis of technical analysis. The assumption that certain patterns develop over time and that the forex market does not fluctuate in a purely random manner is used to establish that history repeats itself in forex trading. The patterns and repetitions unveiled within the forex historical data would be an important element of forex forecasting.

    Counting number of cells and cell segmentation using advection-diffusion equations

    We develop a method for counting the number of cells and extracting approximate cell centers in 2D and 3D images of the early stages of zebrafish embryogenesis. The approximate cell centers give us the starting points for the subjective surface based cell segmentation. We move all level sets of the nuclei and membrane images in the inner normal direction at a constant speed, with slight regularization of this flow by the (mean) curvature. Such a multi-scale evolutionary process is represented by a geometrical advection-diffusion equation which, at a certain scale, gives us the desired information on the number of cells. To solve the problems computationally, we use the flux-based finite volume level set method developed by Frolkovič and Mikula in [FM1] and the semi-implicit co-volume subjective surface method given in [CMSSg, MSSgCVS, MSSgchapter]. Computational experiments on test and real 2D and 3D embryogenesis images are presented and the results are discussed.
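    For orientation, one representative form of such a curvature-regularized advection equation for the level sets of an image intensity function u is sketched below. The exact model, signs and boundary conditions of the cited works may differ; this is only the standard shape of the equation, with δ the constant speed along the normal direction and μ a small curvature-regularization weight.

```latex
% Representative form only (an assumption, not necessarily the exact equation
% of the cited works): every level set of u moves with constant speed \delta
% along the normal direction (its sign selects inward or outward motion),
% regularized by a small multiple \mu of the mean curvature of the level sets.
\partial_t u \;=\; \delta\,\lvert \nabla u \rvert
\;+\; \mu\,\lvert \nabla u \rvert\,
\nabla\!\cdot\!\left( \frac{\nabla u}{\lvert \nabla u \rvert} \right)
```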

    Anomaly Detection Based on Sensor Data in Petroleum Industry Applications

    Anomaly detection is the problem of finding patterns in data that do not conform to an a priori expected behavior. It is related to the problem in which some samples are distant, in terms of a given metric, from the rest of the dataset; such anomalous samples are referred to as outliers. Anomaly detection has recently attracted the attention of the research community because of its relevance in real-world applications, such as intrusion detection, fraud detection, fault detection and system health monitoring, among many others. Anomalies themselves can have a positive or negative nature, depending on their context and interpretation. In either case, however, it is important for decision makers to be able to detect them in order to take appropriate actions. The petroleum industry is one of the application contexts where these problems are present. The correct detection of such unusual information empowers the decision maker to act on the system in order to correctly avoid, correct or react to the situations associated with it. In that application context, heavy extraction machines for pumping and generation operations, such as turbomachines, are intensively monitored by hundreds of sensors each, which send high-frequency measurements for damage prevention. In this paper, we propose a combination of yet another segmentation algorithm (YASA), a novel fast and high-quality segmentation algorithm, with a one-class support vector machine approach for efficient anomaly detection in turbomachines. The proposal is meant to address the aforementioned task and to cope with the lack of labeled training data. We perform a series of empirical studies comparing our approach to other methods on benchmark problems and on a real-life application related to oil platform turbomachinery anomaly detection. This work was partially funded by the Brazilian National Council for Scientific and Technological Development projects CNPq BJT 407851/2012-7 and CNPq PVE 314017/2013-5 and projects MINECO TEC 2012-37832-C02-01, CICYT TEC 2011-28626-C02-02.
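    The sketch below illustrates the general shape of such a pipeline: segment a sensor signal, describe each segment with a few statistics, and train a one-class SVM on normal operation only. The fixed-width windowing is a generic stand-in for a proper segmentation step such as YASA, and the feature choices and parameters are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def window_features(signal, width=200):
    """Split a 1-D sensor signal into fixed-width windows (a generic stand-in
    for a proper segmentation step) and describe each window by simple
    statistics: mean, standard deviation and peak-to-peak amplitude."""
    windows = [signal[i:i + width] for i in range(0, len(signal) - width + 1, width)]
    return np.array([[w.mean(), w.std(), w.max() - w.min()] for w in windows])

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, 20_000)                    # healthy-operation signal
faulty = np.concatenate([rng.normal(0.0, 1.0, 5_000),
                         rng.normal(3.0, 2.0, 1_000)])   # injected anomaly

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
detector.fit(window_features(normal))                    # train on normal data only

labels = detector.predict(window_features(faulty))       # +1 normal, -1 anomalous
print("anomalous windows:", int((labels == -1).sum()), "of", len(labels))
```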

    A Survey on Deep Learning-based Architectures for Semantic Segmentation on 2D images

    Semantic segmentation is the pixel-wise labelling of an image. Since the problem is defined at the pixel level, determining image-level class labels alone is not sufficient; localising them at the original image pixel resolution is necessary. Boosted by the extraordinary ability of convolutional neural networks (CNNs) to create semantic, high-level and hierarchical image features, a large number of deep learning-based 2D semantic segmentation approaches have been proposed within the last decade. In this survey, we focus mainly on recent scientific developments in semantic segmentation, specifically on deep learning-based methods using 2D images. We start with an analysis of the public image sets and leaderboards for 2D semantic segmentation, together with an overview of the techniques employed in performance evaluation. In examining the evolution of the field, we chronologically categorise the approaches into three main periods, namely the pre- and early deep learning era, the fully convolutional era, and the post-FCN era. We technically analyse the solutions put forward in terms of solving the fundamental problems of the field, such as fine-grained localisation and scale invariance. Before drawing our conclusions, we present a table of methods from all mentioned eras, with a brief summary of each approach explaining its contribution to the field. We conclude the survey by discussing the current challenges of the field and the extent to which they have been solved.
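    The fully convolutional idea at the centre of the survey's middle era can be summarised in a few lines: produce class scores per spatial location with convolutions only, then upsample the coarse score map back to the input resolution so every pixel receives a label. The toy PyTorch module below is an illustrative sketch with arbitrary layer sizes, not any architecture surveyed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Minimal fully convolutional network: convolutions produce a coarse map
    of per-pixel class scores, which is upsampled back to the input resolution
    so every pixel receives a label. Sizes are illustrative only."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # 1/4 resolution
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)  # 1x1 conv: class scores

    def forward(self, x):
        scores = self.classifier(self.features(x))
        # Bilinear upsampling restores the original spatial resolution.
        return F.interpolate(scores, size=x.shape[2:], mode="bilinear",
                             align_corners=False)

x = torch.randn(1, 3, 128, 128)
print(TinyFCN()(x).shape)   # torch.Size([1, 21, 128, 128])
```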

    Automated Feature Extraction from Large Cardiac Electrophysiological Data Sets, and a Population Dynamics Approach to the Distribution of Space Debris in Low-Earth Orbit

    We present two applications of mathematics to relevant real-world situations. In the first chapter, we discuss an automated method for the extraction of useful data from large file-size readings of cardiac data. We begin by describing the history of electrophysiology and the background of the work's setting, in which a new multi-electrode array-based application for the long-term recording of action potentials from electrogenic cells makes large-scale readings of relevant data possible, opening the way for exciting cardiac electrophysiology studies in health and disease. With hundreds of simultaneous electrode recordings acquired over a period of days, the main challenge becomes achieving reliable signal identification and quantification. In the context of this method of data collection, we set out to develop an algorithm capable of automatically extracting regions of high-quality action potentials from terabyte-size experimental results and of mapping the trains of action potentials into a low-dimensional feature space for analysis. We establish that our automatic segmentation algorithm finds regions of acceptable action potentials in large data sets of electrophysiological readings. We use spectral methods and support vector machines to classify our readings and to extract relevant features. We are able to show that action potentials from the same cell site can be recorded over days without detrimental effects on the cell membrane. The variability between measurements 24 h apart is comparable to the natural variability of the features at a single time point. This work contributes towards a non-invasive approach to cardiomyocyte functional maturation, as well as developmental, pathological and pharmacological studies. As the human-derived cardiac model tissue has the genetic makeup of its donor, a powerful tool for individual drug toxicity screening emerges.

    In the second chapter, we consider the population of objects, largely considered debris, in the region of outer space close to Earth. The presence of this debris in Earth's orbit poses a significant risk to human activity in outer space, and the debris population continues to grow due to ground launches, the loss of external parts from spacecraft, and uncontrollable collisions between objects. We examine the background of human space launches, the current methods of tracking objects, and the modelling work done to date. We propose a diffusion-collision model for the evolution of debris density in Low-Earth Orbit (LEO) and its dependence on ground-launch policy, arriving at a computationally feasible continuum-based model. We parametrize this model and test it against data from publicly available object catalogs to examine timescales for uncontrolled growth. Finally, we consider sensible launch policies and cleanup strategies and how they reduce the future risk of collisions with active satellites or spacecraft, and we consider extensions of the model that directly account for launch policy determination through the minimization of certain functionals.
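    For the second chapter, one plausible continuum form of a diffusion-collision model for the debris number density is sketched below as a point of orientation. It is an assumption for illustration only; the dissertation's actual model, terms and parametrization may differ.

```latex
% Illustrative assumption only; the dissertation's actual model may differ.
% n(h, t)        : debris number density at altitude h and time t
% D              : effective diffusion of debris across altitude shells
% \lambda(h, t)  : source term set by the ground-launch policy
% \beta n^2      : fragments generated by pairwise collisions, quadratic in n
%                  because a collision involves two objects
\frac{\partial n}{\partial t}
  \;=\; D\,\frac{\partial^2 n}{\partial h^2}
  \;+\; \lambda(h, t)
  \;+\; \beta\, n^{2}
```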