13 research outputs found

    A SOM-based Chan–Vese model for unsupervised image segmentation

    Get PDF
    Active Contour Models (ACMs) constitute an efficient energy-based image segmentation framework. They usually treat segmentation as an optimization problem, formulated in terms of a suitable functional constructed so that its minimum is achieved at a contour that closely approximates the actual object boundary. However, for existing ACMs, handling images that contain objects characterized by many different intensities still represents a challenge. In this paper, we propose a novel ACM that combines, in a global and unsupervised way, the advantages of the Self-Organizing Map (SOM) within the level set framework of a state-of-the-art unsupervised global ACM, the Chan–Vese (C–V) model. We term our proposed model the SOM-based Chan–Vese (SOMCV) active contour model. It works by explicitly integrating the global information coming from the weights (prototypes) of the neurons in a trained SOM to help decide whether to shrink or expand the current contour during the iterative optimization process. The proposed model can handle images that contain objects characterized by complex intensity distributions, and is at the same time robust to additive noise. Experimental results show the high accuracy of the segmentation results obtained by the SOMCV model on several synthetic and real images, compared to the Chan–Vese model and other image segmentation models.
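
    The abstract does not include the model's equations, but the core mechanism can be illustrated. The sketch below (not the authors' code) shows a Chan–Vese style level-set update in which the two region constants are snapped to the nearest prototypes of a trained SOM, so that global prototype information drives the shrink/expand decision; the names som_prototypes, dt, and mu are illustrative assumptions.

```python
# Minimal sketch of the SOMCV idea, assuming a 1-D array of trained SOM
# prototype intensities; not the authors' implementation.
import numpy as np

def somcv_step(phi, image, som_prototypes, dt=0.1, mu=0.2):
    """One explicit update of the level set phi on a 2-D grayscale image."""
    inside = phi > 0
    # Nearest SOM prototype to each region's mean intensity plays the role
    # of the Chan-Vese constants c1 (inside) and c2 (outside).
    c1 = som_prototypes[np.argmin(np.abs(som_prototypes - image[inside].mean()))]
    c2 = som_prototypes[np.argmin(np.abs(som_prototypes - image[~inside].mean()))]

    # Curvature term div(grad(phi)/|grad(phi)|) via central differences.
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    curvature = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

    # Region force: expand or shrink depending on which prototype better
    # explains the local intensity.
    force = -(image - c1) ** 2 + (image - c2) ** 2
    return phi + dt * (mu * curvature + force)
```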

    Active Contour Model driven by Globally Signed Region Pressure Force

    No full text
    One of the most popular and widely used global active contour models (ACMs) is the region-based ACM, which relies on the assumption of homogeneous intensity in the regions of interest. As a result, more often than not, the performance of this method is limited when images violate this assumption. Thus, handling images that contain foreground objects characterized by multiple intensity classes presents a challenge. In this paper, we propose a novel active contour model based on a new Signed Pressure Force (SPF) function, which we term the Globally Signed Region Pressure Force (GSRPF). It is designed to incorporate, in a global fashion, the skewness of the intensity distribution of the region of interest (ROI). It can accurately modulate the signs of the pressure force inside and outside the contour, it can handle images with multiple intensity classes in the foreground, it is robust to additive noise, and it offers high efficiency and rapid convergence. The proposed GSRPF is robust to contour initialization and is able to stop the curve evolution close to even ill-defined (weak) edges. Our model provides a parameter-free environment that requires minimal user intervention, and offers both local and global segmentation properties. Experimental results on several synthetic and real images demonstrate the high accuracy of the segmentation results in comparison to other methods adopted from the literature.
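
    As a rough illustration of an SPF-driven evolution, the sketch below normalises a signed pressure term and biases its sign threshold by the skewness of the ROI intensities. The abstract does not define GSRPF exactly, so the skew correction and its weighting here are assumptions, not the authors' formula.

```python
# Sketch of an SPF-style level-set step with a skewness-aware midpoint
# standing in for GSRPF; the gamma weighting is an illustrative assumption.
import numpy as np
from scipy.stats import skew

def gsrpf_like_step(phi, image, alpha=1.0, dt=0.5):
    inside = phi > 0
    c1, c2 = image[inside].mean(), image[~inside].mean()
    # Skewness of the ROI intensity distribution biases the sign threshold.
    gamma = skew(image[inside].ravel())
    midpoint = (c1 + c2) / 2.0 + gamma * image.std() / 10.0  # assumed weighting
    spf = image - midpoint
    spf = spf / (np.abs(spf).max() + 1e-8)       # normalise to [-1, 1]
    gy, gx = np.gradient(phi)
    return phi + dt * alpha * spf * np.sqrt(gx**2 + gy**2)
```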

    Scalable Algorithms for Missing Value Imputation

    No full text
    Statistical imputation techniques have been proposed mainly with the aim of predicting the missing values in incomplete data sets, as an essential step in any data analysis framework. K-means-based imputation, as a representative statistical imputation method, has produced satisfactory results, in terms of effectiveness and efficiency, on popular and freely available data sets (e.g., Bupa, Breast Cancer, Pima). The main idea of K-means-based methods is to impute a missing value by relying on the prototypes of the representative class and on the similarity of the data. However, such methods share the same limitations as K-means itself as a data mining technique. In this paper, motivated by these drawbacks, we introduce simple and efficient imputation methods based on K-means to deal with missing data from various classes of data sets. Our proposed methods give higher accuracy than standard K-means.
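
    A minimal sketch of the general K-means imputation idea described above (not the paper's specific variants): cluster the complete rows, assign each incomplete row to its nearest centroid using only its observed features, and fill the gaps from that centroid.

```python
# K-means-based imputation sketch; missing values are encoded as NaN.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_impute(X, n_clusters=3, random_state=0):
    X = X.astype(float)
    complete = ~np.isnan(X).any(axis=1)
    km = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=random_state).fit(X[complete])
    X_imp = X.copy()
    for i in np.where(~complete)[0]:
        obs = ~np.isnan(X[i])                        # observed features
        d = np.linalg.norm(km.cluster_centers_[:, obs] - X[i, obs], axis=1)
        proto = km.cluster_centers_[np.argmin(d)]    # nearest prototype
        X_imp[i, ~obs] = proto[~obs]                 # impute from prototype
    return X_imp
```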

    Virtual machine consolidation enhancement using hybrid regression algorithms

    No full text
    Cloud computing data centers are growing rapidly in both number and capacity to meet the increasing demands for highly responsive computing and massive storage. Such data centers consume enormous amounts of electrical energy, resulting in high operating costs and carbon dioxide emissions. The reason for this extremely high energy consumption is not just the quantity of computing resources and the power inefficiency of hardware, but rather the inefficient usage of these resources. VM consolidation relies on live migration of VMs, i.e., the capability of transferring a VM between physical servers with close to zero downtime. It is an effective way to improve the utilization of resources and increase energy efficiency in cloud data centers. VM consolidation consists of host overload/underload detection, VM selection, and VM placement. Most current VM consolidation approaches apply either heuristic-based techniques, such as static utilization thresholds and decision-making based on statistical analysis of historical data, or simply periodic adaptation of the VM allocation, and most of these algorithms rely on CPU utilization alone for host overload detection. In this paper we propose using hybrid factors to enhance VM consolidation. Specifically, we developed a multiple regression algorithm that uses CPU utilization, memory utilization, and bandwidth utilization for host overload detection. The proposed algorithm, Multiple Regression Host Overload Detection (MRHOD), significantly reduces energy consumption while ensuring a high level of adherence to Service Level Agreements (SLAs), since it gives a realistic indication of host utilization based on three parameters (CPU, memory, and bandwidth utilization) instead of only one (CPU utilization). Through simulations we show that our approach reduces power consumption by a factor of six compared to single-factor algorithms under a random workload. Using PlanetLab workload traces, we also show that MRHOD improves the ESV metric by about 24% compared to other single-factor regression algorithms (LR and LRR). We also developed the Hybrid Local Regression Host Overload Detection algorithm (HLRHOD), which is based on local regression using hybrid factors; it outperforms the single-factor algorithms.
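
    The abstract gives the inputs to MRHOD (CPU, memory, and bandwidth utilisation) but not the regression itself, so the sketch below is only an assumed form: a per-factor least-squares trend forecast, with a host flagged as overloaded when any inflated forecast saturates. The safety factor and threshold are placeholders.

```python
# Hypothetical three-factor overload test in the spirit of MRHOD;
# histories are recent utilisation samples in [0, 1].
import numpy as np

def predict_next(history):
    """Least-squares linear trend over time; returns the next-step forecast."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * len(history) + intercept

def is_host_overloaded(cpu_hist, mem_hist, bw_hist, safety=1.2, threshold=1.0):
    forecasts = [predict_next(h) for h in (cpu_hist, mem_hist, bw_hist)]
    # Host is overloaded if the inflated forecast of any factor saturates.
    return any(safety * f >= threshold for f in forecasts)
```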

    From Pixels to Deposits: Porphyry Mineralization With Multispectral Convolutional Neural Networks

    Get PDF
    Mineral exploration is essential to ensure a sustainable supply of raw materials for modern living and the transition to green energy. It involves a series of expensive operations that aim to identify areas with natural mineral concentrations in the Earth's crust. Rapid advances in artificial intelligence and remote sensing techniques can help to significantly reduce the cost of these operations. Here, we produce a robust, intelligent mineral exploration model that can fingerprint potential locations of porphyry deposits, which are the world's most important source of copper and molybdenum and a major source of gold, silver, and tin. We present a deep learning pipeline for assessing multispectral imagery from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) with the objective of identifying hydrothermal alterations. Our approach leverages a convolutional neural network (CNN) to analyze the high-resolution images, overcoming computational challenges through a patch-based strategy that uses an overlapping window to partition the images into fixed-size patches. Using manually labeled patches for image classification and identification of hydrothermal alteration areas, our results demonstrate the remarkable ability of the CNN to accurately detect hydrothermal alterations. The technique is adaptable to other ore deposit models and satellite imagery types, potentially revolutionizing satellite image interpretation and mineral exploration.
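
    The overlapping-window patch strategy is straightforward to sketch. The helper below slices a multispectral scene into fixed-size patches with a chosen stride, ready for CNN classification; the patch size and stride are illustrative assumptions, not values from the paper.

```python
# Overlapping-window patch extraction for a multispectral scene.
import numpy as np

def extract_patches(scene, patch=64, stride=32):
    """scene: (H, W, bands) array. Returns patches and their top-left coords."""
    H, W, _ = scene.shape
    patches, coords = [], []
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            patches.append(scene[r:r + patch, c:c + patch, :])
            coords.append((r, c))
    return np.stack(patches), coords
```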

    Economic Impact of Resource Optimisation in Cloud Environment Using Different Virtual Machine Allocation Policies

    No full text
    An exceptional amount of research has been carried out in the field of cloud and distributed systems to understand their performance and reliability. Simulators are becoming popular for designing and testing different types of quality of service (QoS) metrics, e.g., energy, virtualisation, and networking. A large amount of resources is wasted when servers sit idle, which has a negative impact on the finances of companies. A popular approach to overcoming this problem is turning servers ON and OFF; however, turning them back ON takes time, affecting QoS metrics such as energy consumption, latency, and cost. In this paper, we present different energy models and compare them with each other, based on workloads, for efficient server management. We introduce different types of energy-saving techniques (DVFS, IQRMC) which help improve service. Different energy models are used with the same configuration, and possible solutions are proposed for the big data centres operated globally by large companies such as Amazon, Giaki, Onlive, and Google.
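
    A concrete example of the kind of energy model being compared is the standard linear server power model used by energy-aware simulators such as CloudSim; the sketch below uses placeholder idle and peak wattages, not figures from the paper.

```python
# Standard linear power model: draw grows linearly with CPU utilisation.
def server_power(utilisation, p_idle=70.0, p_max=250.0):
    """Power draw (W) as a linear function of CPU utilisation in [0, 1]."""
    return p_idle + (p_max - p_idle) * utilisation

# Example: an almost-idle server still draws a large share of peak power.
print(server_power(0.1), server_power(0.9))   # 88.0 232.0
```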

    Image based plant phenotyping with incremental learning and active contours

    Get PDF
    Plant phenotyping investigates how a plant's genome, interacting with the environment, affects the observable traits of the plant (its phenome). It is becoming increasingly important in our quest towards efficient and sustainable agriculture. While sequencing the genome is becoming increasingly efficient, acquiring phenotype information has remained largely low-throughput. Current solutions for automated image-based plant phenotyping rely either on semi-automated or manual analysis of the imaging data, or on expensive and proprietary software that accompanies costly hardware infrastructure. While some attempts have been made to create software applications that enable the analysis of such images in an automated fashion, most solutions are tailored to particular acquisition scenarios and restrictions on experimental design. In this paper we propose and test a method for the segmentation and automated analysis of time-lapse plant images from phenotyping experiments in a general laboratory setting that can adapt to scene variability. The method involves minimal user interaction, necessary to establish the statistical experiments that may follow. At every time instance (i.e., for each digital photograph), it segments the plants in images that contain many specimens of the same species. For accurate plant segmentation we propose a vector-valued level set formulation that incorporates features of color intensity, local texture, and prior knowledge. Prior knowledge is incorporated using a plant appearance model, implemented with Gaussian mixture models, which incrementally utilizes information from previously segmented instances. The proposed approach is tested on Arabidopsis plant images acquired with a static camera capturing many subjects at the same time. Our validation with ground-truth segmentations and comparisons with state-of-the-art methods in the literature shows that the proposed method is able to handle images with complicated and changing backgrounds in an automated fashion. An accuracy of 96.7% (Dice similarity coefficient) was observed, which was higher than that of the other methods used for comparison. While the method was tested here on a single plant species, the fact that we do not employ shape-driven models and do not rely on fully supervised classification (trained on a large dataset) increases the ease of deploying the proposed solution for the study of different plant species in a variety of laboratory settings. Our solution will be accompanied by an easy-to-use graphical user interface and, to facilitate adoption, we will make the software available to the scientific community.
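
    The incremental appearance prior can be sketched compactly: fit a Gaussian mixture to pixel features collected from previously segmented plants and score new pixels by their likelihood under it, which can then feed the level-set region term. The feature choice (raw colour) and component count below are assumptions, not the paper's exact design.

```python
# GMM appearance prior sketch: likelihood of each pixel under a model
# fitted to colour samples from previously segmented plants.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_appearance_model(plant_pixels, n_components=3):
    """plant_pixels: (N, 3) array of colour features from past segmentations."""
    return GaussianMixture(n_components=n_components, random_state=0).fit(plant_pixels)

def plant_probability(gmm, image):
    """Per-pixel log-likelihood of belonging to the plant appearance model."""
    h, w, c = image.shape
    scores = gmm.score_samples(image.reshape(-1, c))
    return scores.reshape(h, w)   # usable as a prior in the region term
```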