22 research outputs found

    A new image thresholding method based on Gaussian mixture model


    Influence of skull inhomogeneities on EEG source localization

    We investigated the influence of using simplified models of the skull on electroencephalogram (EEG) source localization. An accurately segmented skull from computed tomography (CT) images, including spongy and compact bone as well as some air-filled cavities, was used as a reference model. The simplified models approximated the skull as a homogeneous compartment with (1) isotropic and (2) anisotropic conductivity. The results showed that these approximations can lead to errors of more than 2 cm in dipole estimation. We recommend modelling anisotropy, but with a different conductivity ratio for each region of the skull, according to the amount of spongy bone.

    Cross-entropy based image thresholding

    This paper presents a novel global thresholding algorithm for the binarization of documents and gray-scale images using Cross Entropy Clustering. First, a gray-level histogram is constructed and Gaussian densities are fitted to it. The thresholds are then determined as the cross-points of the Gaussian densities. The approach detects the number of components automatically; only an upper limit on the number of Gaussian densities is required.
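    The core of the cross-point step can be sketched as follows: equating two weighted Gaussian densities and taking logarithms yields a quadratic equation in the intensity x, whose root between the two means is the threshold. This is a minimal illustration of the idea, not the paper's full algorithm; parameter names and the equal-variance fallback are assumptions.

```python
import math

def gaussian_crosspoint(w1, m1, s1, w2, m2, s2):
    # Solve w1*N(x; m1, s1) == w2*N(x; m2, s2) for x.
    # Taking logs turns this into a*x^2 + b*x + c = 0.
    a = 1.0 / (2 * s2 ** 2) - 1.0 / (2 * s1 ** 2)
    b = m1 / s1 ** 2 - m2 / s2 ** 2
    c = (m2 ** 2 / (2 * s2 ** 2) - m1 ** 2 / (2 * s1 ** 2)
         + math.log((w1 * s2) / (w2 * s1)))
    if abs(a) < 1e-12:                  # equal variances: equation is linear
        return -c / b
    disc = math.sqrt(b ** 2 - 4 * a * c)
    roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    # Keep the root lying between the two means: that is the usable threshold.
    lo, hi = sorted((m1, m2))
    return next(r for r in roots if lo <= r <= hi)

# Two equally weighted classes with equal spread: threshold is midway.
t = gaussian_crosspoint(0.5, 50, 10, 0.5, 150, 10)
print(round(t, 1))   # 100.0
```

With unequal variances the quadratic has two real roots; only the one lying between the class means separates the two modes, which is why the helper filters the roots.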

    Estimating human interactions with electrical appliances for activity-based energy savings recommendations: poster abstract

    Since the power consumption of different electrical appliances in a household can be recorded by individual smart meters, it becomes possible to consider in more detail the interactions of the residents with those devices throughout the day. Appliance usages should not be considered as independent events, but rather as enablers for activities. In this work, we propose an automated method for determining when an electrical device is triggered, solely from its power trace. Knowing when an appliance is powered on is required for identifying recurrent patterns that can later be understood as activities. Leveraging activity knowledge over time will allow us to design personalized energy-efficiency measures. We envision the design of future ambient intelligence systems in which the smart home optimizes energy consumption with regard to the lifestyles of its residents.
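    A minimal sketch of detecting when a device is triggered from its power trace: a rising-edge detector that reports the samples where consumption crosses above a standby threshold. The 5 W threshold and the toy trace are illustrative assumptions, not the authors' method.

```python
def detect_power_on(trace, threshold=5.0):
    """Return indices where the power reading rises above `threshold`
    watts after being below it (a simple rising-edge detector).
    The 5 W standby threshold is an illustrative assumption."""
    events = []
    for i in range(1, len(trace)):
        if trace[i - 1] < threshold <= trace[i]:
            events.append(i)
    return events

# Standby noise, a kettle-like burst, standby, then a smaller appliance.
trace = [0.5, 0.4, 0.6, 1800.0, 1795.0, 0.5, 0.4, 60.0, 58.0, 0.3]
print(detect_power_on(trace))   # [3, 7]
```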

    THE PERFORMANCE OF VARIOUS THRESHOLDING ALGORITHMS FOR SEGMENTATION OF BIOMEDICAL IMAGE

    In biomedical image processing, segmentation is required for separating a suspicious organ from a medical radiograph. Among segmentation techniques, thresholding is widely used because of its intuitive properties, simplicity of implementation, and computational speed. Thresholding divides the intensities of the image into two groups, mapped to 0 or 255 for an 8-bit image. Biomedical images contain complex anatomy, which makes the segmentation task difficult. Various algorithms have been proposed to threshold an image; these algorithms take one or two properties of the image into consideration when computing the threshold. This paper compares the performance of various thresholding algorithms applied to a chest radiograph (X-ray image).
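    The 0/255 split described above is the simplest possible binarization; a sketch for a list-of-lists image (the threshold value is an illustrative assumption; the paper's compared algorithms differ only in how they *choose* this value):

```python
def binarize(image, t=127):
    """Map pixels above threshold t to 255 and the rest to 0 (8-bit)."""
    return [[255 if p > t else 0 for p in row] for row in image]

img = [[10, 200], [128, 90]]
print(binarize(img))   # [[0, 255], [255, 0]]
```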

    Colonyzer: automated quantification of micro-organism growth characteristics on solid agar

    Background: High-throughput screens comparing growth rates of arrays of distinct micro-organism cultures on solid agar are useful, rapid methods of quantifying genetic interactions. Growth rate is an informative phenotype which can be estimated by measuring cell densities at one or more times after inoculation. Precise estimates can be made by inoculating cultures onto agar and capturing cell density frequently by plate-scanning or photography, especially throughout the exponential growth phase, and summarising growth with a simple dynamic model (e.g. the logistic growth model). In order to parametrize such a model, a robust image analysis tool capable of capturing a wide range of cell densities from plate photographs is required.

    Results: Colonyzer is a collection of image analysis algorithms for automatic quantification of the size, granularity, colour and location of micro-organism cultures grown on solid agar. Colonyzer is uniquely sensitive to extremely low cell densities photographed after dilute liquid culture inoculation (spotting) due to image segmentation using a mixed Gaussian model for plate-wide thresholding based on pixel intensity. Colonyzer is robust to slight experimental imperfections and corrects for lighting gradients which would otherwise introduce spatial bias to cell density estimates, without the need for imaging dummy plates. Colonyzer is general enough to quantify cultures growing in any rectangular array format, either growing after pinning with a dense inoculum or growing with the irregular morphology characteristic of spotted cultures. Colonyzer was developed using the open source packages Python, RPy and the Python Imaging Library, and its source code and documentation are available on SourceForge under the GNU General Public License. Colonyzer is adaptable to suit specific requirements: e.g. automatic detection of cultures at irregular locations on streaked plates for robotic picking, or decreasing analysis time by disabling components such as lighting correction or colour measures.

    Conclusion: Colonyzer can automatically quantify culture growth from large batches of captured images of microbial cultures grown during genome-wide scans, over the wide range of cell densities observable after highly dilute liquid spot inoculation as well as after more concentrated pinning inoculation. Colonyzer is open-source, allowing users to assess it, adapt it to particular research requirements and contribute to its development.
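    The logistic growth model mentioned above, used to summarise cell-density time courses, can be written in closed form. A minimal sketch, with an assumed parametrization (inoculum density n0, growth rate r, carrying capacity k); this is a standard form of the model, not Colonyzer's exact code:

```python
import math

def logistic_growth(t, n0, r, k):
    """Closed-form logistic model: density at time t from inoculum
    density n0, growth rate r and carrying capacity k."""
    e = math.exp(r * t)
    return k * n0 * e / (k + n0 * (e - 1))

# Density starts at the inoculum level and saturates at the capacity.
print(round(logistic_growth(0.0, 0.01, 0.5, 1.0), 3))    # 0.01
print(round(logistic_growth(100.0, 0.01, 0.5, 1.0), 3))  # 1.0
```

Fitting (r, k) to densities extracted from plate photographs is what turns raw images into the growth-rate phenotype described in the Background.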

    Enhancement of Background Subtraction Techniques Using a Second Derivative in Gradient Direction Filter


    Automated Fragmentary Bone Matching

    Identification, reconstruction and matching of fragmentary bones are basic tasks required to accomplish quantification and analysis of fragmentary human remains derived from forensic contexts. Techniques for three-dimensional surface matching have received great attention in the computer vision literature, and various methods have been proposed for matching fragmentary meshes; however, many of these methods lack automation or speed, or suffer from high sensitivity to noise. In addition, reconstruction of fragmentary bones, together with identification against a reference model in an automatic scheme, has not been addressed. To address these issues, we used a multi-stage technique for fragment identification, matching and registration. The study introduces an automated technique for matching fragmentary human skeletal remains, aimed at improving forensic anthropology practice and policy. The proposed technique involves creating surface models of the fragmentary elements, which can be done using computed tomographic scans followed by segmentation. The models then go through a feature extraction step in which the surface roughness map of each model is measured using local shape analysis; adaptive thresholding is then used to extract model features. A multi-stage technique is then used to identify, match and register bone fragments to their corresponding template bone model. First, the extracted features are matched against different template bone models using the iterative closest point (ICP) algorithm over different positions and orientations. The best match score, in terms of minimum root-mean-square error, together with the corresponding position, orientation and resulting transformation, is then used to register the fragment model to the matching template bone model using the ICP algorithm.
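    The template-selection rule described above, picking the template with the minimum root-mean-square error after alignment, can be sketched as follows. Toy 1-D point lists stand in for 3-D mesh vertices, and the alignment itself (ICP) is assumed to have already been run; names and data here are illustrative.

```python
import math

def rmse(a, b):
    """Root-mean-square distance between two paired point lists."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def best_template(fragment, templates):
    """Pick the template whose (already ICP-registered) points give
    the lowest RMS error against the fragment."""
    return min(templates, key=lambda name: rmse(fragment, templates[name]))

fragment = [1.0, 2.0, 3.0]
templates = {"femur": [1.1, 2.0, 2.9], "tibia": [4.0, 5.0, 6.0]}
print(best_template(fragment, templates))   # femur
```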

    Multi-population methods in unconstrained continuous dynamic environments: The challenges

    The multi-population method has been widely used to solve unconstrained continuous dynamic optimization problems, with the aim of maintaining multiple populations on different peaks so as to locate and track multiple changing peaks simultaneously. However, to make this approach efficient, several crucial challenges need to be addressed, e.g., how to determine the moment to react to changes, how to adapt the number of populations to changing environments, and how to determine the search area of each population. Several further issues, e.g., communication between populations, overlapping search, the way multiple populations are created, detection of changes, and local search operators, should also be addressed. The lack of attention to these challenges within multi-population methods hinders the development of multi-population based algorithms in dynamic environments. In this paper, these challenges are comprehensively analyzed through a set of experimental studies, from the algorithm design point of view. Experiments based on a set of popular algorithms show that their performance on the moving peaks benchmark is significantly affected by these issues.
    Keywords: multi-population methods, dynamic optimization problems, evolutionary computation
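    One of the listed challenges, detection of changes, is often handled by re-evaluating a few stored sentinel solutions: if any re-evaluated fitness differs from its memorised value, the landscape has moved. This is a common detection scheme sketched under assumed names, not the paper's specific design.

```python
def environment_changed(sentinels, fitness, memo):
    """Detect an environment change by re-evaluating sentinel
    solutions against their memorised fitness values."""
    for s in sentinels:
        if fitness(s) != memo[s]:
            return True
    return False

memo = {(1, 2): 3}
print(environment_changed([(1, 2)], lambda p: sum(p), memo))      # False
print(environment_changed([(1, 2)], lambda p: sum(p) + 1, memo))  # True
```

In noisy problems an exact-equality test would misfire, so a tolerance on the fitness difference is typically used instead; the exact comparison here keeps the sketch minimal.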