
    Analysis of Software Binaries for Reengineering-Driven Product Line Architecture – An Industrial Case Study

    This paper describes a method for recovering software architectures from a set of similar (but unrelated) software products in binary form. One intention is to drive refactoring into software product lines by combining architecture recovery with runtime binary analysis and existing clustering methods. Using our runtime binary analysis, we create graphs that capture the dependencies between different software parts. These are clustered into smaller component graphs that group software parts with high interaction into larger entities. The component graphs serve as a basis for further software product line work. In this paper, we concentrate on the analysis part of the method and on the graph clustering. We apply the graph clustering method to a real application in the context of automation/robot configuration software tools. Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
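
    The clustering step described above can be illustrated with off-the-shelf graph tooling. The following is a minimal sketch, not the authors' implementation: it assumes the runtime binary analysis has already produced weighted dependency edges (the module names and interaction counts are invented), and networkx's modularity-based community detection stands in for the paper's clustering method.

        # Illustrative sketch only: group software parts with high interaction
        # into larger component entities, assuming dependency extraction is done.
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Hypothetical output of the runtime binary analysis:
        # (caller, callee, interaction count) triples.
        dependencies = [
            ("ui.dll", "config.dll", 120),
            ("config.dll", "robot_io.dll", 85),
            ("ui.dll", "render.dll", 10),
            ("render.dll", "math.dll", 200),
        ]

        G = nx.Graph()
        for src, dst, count in dependencies:
            G.add_edge(src, dst, weight=count)

        # Heavily interacting parts end up in the same component graph.
        for i, comp in enumerate(greedy_modularity_communities(G, weight="weight")):
            print(f"component {i}: {sorted(comp)}")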

    Can Clustering Improve Requirements Traceability? A TraceLab-enabled Study

    Software permeates every aspect of our modern lives. In many applications, such as software for airplane flight controls or nuclear power control systems, software failures can have catastrophic consequences. As we place so much trust in software, how can we know whether it is trustworthy? Through software assurance, we can attempt to quantify just that. Building complex, high-assurance software is no simple task. The difficult information landscape of a software engineering project can make verification and validation, the process by which the assurance of software is assessed, very difficult. In order to manage the inevitable information overload of complex software projects, we need software traceability: the ability to describe and follow the life of a requirement in both the forward and backward directions. The Center of Excellence for Software Traceability (CoEST) has created a compelling research agenda with the goal of ubiquitous traceability by 2035. As part of this goal, it has developed TraceLab, a visual experimental workbench built to support the design, implementation, and execution of traceability experiments. Through our collaboration with CoEST, we have made several contributions to TraceLab and its community. This work contributes to the goals of the traceability research community. The three key contributions are (a) a machine learning component package for TraceLab featuring six classifier algorithms, five clustering algorithms, and over 40 components in total for creating TraceLab experiments, built upon the WEKA machine learning package as well as methods implemented outside of WEKA; (b) the design of an automated tracing system that uses clustering to decompose the task of tracing into many smaller tracing subproblems; and (c) an implementation of several key components of this tracing system using TraceLab, along with its experimental evaluation.
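
    The clustering-based decomposition in contribution (b) can be sketched compactly outside of TraceLab. In the sketch below the requirement and artifact texts are invented, and TF-IDF vectors with k-means stand in for whichever of the packaged classifiers and clustering algorithms an actual experiment would select; it illustrates the idea of routing each requirement to a smaller tracing subproblem, not the thesis implementation.

        # Illustrative sketch (not the TraceLab/WEKA implementation): cluster the
        # target artifacts, then trace each requirement only within its cluster.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical artifacts; a real experiment would load a tracing dataset.
        requirements = [
            "the system shall log all operator commands",
            "control surfaces shall respond within 50 ms",
        ]
        targets = [
            "CommandLogger writes operator commands to the audit log",
            "AuditTrail persists audit log records",
            "ActuatorController computes control surface deflection",
            "ServoLoop enforces the 50 ms response deadline",
        ]

        vec = TfidfVectorizer()
        X = vec.fit_transform(targets)

        # Decompose the target space into smaller tracing subproblems.
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

        R = vec.transform(requirements)
        for i, req in enumerate(requirements):
            cluster = int(km.predict(R[i])[0])      # route to the closest subproblem
            members = np.flatnonzero(km.labels_ == cluster)
            scores = cosine_similarity(R[i], X[members]).ravel()
            for j in np.argsort(scores)[::-1]:
                print(f"{req!r} -> {targets[members[j]]!r} ({scores[j]:.2f})")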

    Morphological feature extraction for statistical learning with applications to solar image data

    Many areas of science are generating large volumes of digital image data. In order to take full advantage of the high-resolution and high-cadence images modern technology is producing, methods to automatically process and analyze large batches of such images are needed. This involves reducing complex images to simple representations, such as binary sketches or numerical summaries, that capture embedded scientific information. Using techniques derived from mathematical morphology, we demonstrate how to reduce solar images into simple 'sketch' representations and numerical summaries that can be used for statistical learning. We demonstrate our general techniques on two specific examples: classifying sunspot groups and recognizing coronal loop structures. Our methodology reproduces manual classifications at an overall rate of 90% on a set of 119 magnetogram and white-light images of sunspot groups. We also show that our methodology is competitive with other automated algorithms at producing coronal loop tracings, and we demonstrate robustness through noise simulations.
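
    The image-to-sketch reduction the abstract outlines can be illustrated with standard morphology operators. The sketch below is not the authors' pipeline: it uses scikit-image's bundled sample image, and the Otsu threshold and disk structuring element are arbitrary stand-ins for the paper's tuned operators.

        # Illustrative sketch: reduce a grayscale image to a binary "sketch"
        # plus simple numerical summaries via mathematical morphology.
        import numpy as np
        from skimage import data, filters, measure, morphology

        image = data.coins()  # stand-in for a magnetogram / white-light image

        # Threshold, then clean with a morphological opening (erosion followed
        # by dilation) to drop speckle while keeping coherent structures.
        binary = image > filters.threshold_otsu(image)
        binary = morphology.binary_opening(binary, morphology.disk(3))

        # Thin the regions to one-pixel-wide curves, as in loop tracing.
        sketch = morphology.skeletonize(binary)

        # Numerical summaries usable as features for statistical learning.
        regions = measure.regionprops(measure.label(binary))
        areas = [r.area for r in regions]
        print(len(regions), "regions; mean area:", np.mean(areas))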

    Deep and superficial amygdala nuclei projections revealed in vivo by probabilistic tractography

    Copyright © 2011 Society for Neuroscience and the authors. The Journal of Neuroscience uses a Creative Commons Attribution-NonCommercial-ShareAlike licence: http://creativecommons.org/licenses/by-nc-sa/4.0/

    Despite a homogeneous macroscopic appearance on magnetic resonance images, subregions of the amygdala express distinct functional profiles as well as corresponding differences in connectivity. In particular, histological analysis shows stronger connections to lateral orbitofrontal cortex for superficial (i.e., centromedial and cortical) than for deep (i.e., basolateral and other) amygdala nuclei, and stronger connections for deep than for superficial nuclei to polymodal areas in the temporal pole. Here, we use diffusion-weighted imaging with probabilistic tractography to investigate these connections in humans. We use a data-driven approach to segment the amygdala into two subregions using k-means clustering. The identified subregions are spatially contiguous, and their locations correspond to the deep and superficial nuclear groups. Quantification of the connection strength between these amygdala clusters and individual target regions corresponds to qualitative histological findings in non-human primates, indicating that such findings can be extrapolated to humans. We propose that connectivity profiles provide a potentially powerful approach for in vivo amygdala parcellation and can serve as a guide in studies that exploit functional and anatomical neuroimaging. Funding: the Wellcome Trust, a Max Planck Research Award, and the Swiss National Science Foundation.
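
    The data-driven segmentation step reduces to k-means with k = 2 over per-voxel connectivity profiles. In the minimal sketch below the connectivity matrix is randomly generated; in the study it would hold probabilistic-tractography streamline counts from each amygdala voxel to each target region.

        # Illustrative sketch: cluster voxels by connectivity profile.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import normalize

        rng = np.random.default_rng(0)
        n_voxels, n_targets = 500, 12
        profiles = rng.poisson(5.0, size=(n_voxels, n_targets)).astype(float)

        # Normalize each voxel's profile so clustering reflects the pattern
        # of connections rather than the overall streamline count.
        profiles = normalize(profiles, norm="l1")

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
        print("cluster sizes:", np.bincount(labels))  # candidate subregions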

    Intima-Media Thickness: Setting a Standard for a Completely Automated Method of Ultrasound Measurement

    The intima-media thickness (IMT) of the common carotid artery is a widely used clinical marker of severe cardiovascular diseases. IMT is usually measured manually on longitudinal B-mode ultrasound images. Many computer-based techniques for IMT measurement have been proposed to overcome the limits of manual segmentation; most of these, however, require a certain degree of user interaction. In this paper we describe a new completely automated layers extraction (CALEXia) technique for the segmentation and IMT measurement of the carotid wall in ultrasound images. CALEXia is based on an integrated approach consisting of feature extraction, line fitting, and classification that enables the automated tracing of the carotid adventitial walls. IMT is then measured by relying on a fuzzy K-means classifier. We tested CALEXia on a database of 200 images and compared its performance to that of a previously developed methodology based on signal analysis (CULEXsa). Three trained operators manually segmented the images, and the averaged profiles were considered the ground truth. The average errors of CALEXia for the lumen-intima (LI) and media-adventitia (MA) interface tracings were 1.46 ± 1.51 pixels (0.091 ± 0.093 mm) and 0.40 ± 0.87 pixels (0.025 ± 0.055 mm), respectively. The corresponding errors for CULEXsa were 0.55 ± 0.51 pixels (0.035 ± 0.032 mm) and 0.59 ± 0.46 pixels (0.037 ± 0.029 mm). The IMT measurement error was 0.87 ± 0.56 pixels (0.054 ± 0.035 mm) for CALEXia and 0.12 ± 0.14 pixels (0.01 ± 0.01 mm) for CULEXsa. Thus, CALEXia showed limited performance in segmenting the LI interface, but outperformed CULEXsa on the MA interface and in the number of images correctly processed (10 for CALEXia and 16 for CULEXsa). Since the two methods are based on complementary strategies, we anticipate fusing them for further improvement in IMT measurement.
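
    The fuzzy K-means classifier the abstract mentions can be illustrated with the standard fuzzy c-means update rules. The sketch below is a plain-numpy implementation applied to invented 1-D intensity samples; it shows the general technique only and is not CALEXia's implementation.

        # Minimal fuzzy c-means (standard Bezdek update rules) on 1-D data.
        import numpy as np

        def fuzzy_cmeans(x, c=3, m=2.0, iters=100, seed=0):
            """Cluster 1-D samples x into c fuzzy classes; returns (centers, memberships)."""
            rng = np.random.default_rng(seed)
            u = rng.random((x.size, c))
            u /= u.sum(axis=1, keepdims=True)             # memberships sum to 1
            for _ in range(iters):
                w = u ** m
                centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
                d = np.abs(x[:, None] - centers) + 1e-12  # sample-center distances
                ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
                u = 1.0 / ratio.sum(axis=2)               # membership update
            return centers, u

        # Hypothetical intensities along a line crossing the artery wall:
        # dark lumen, mid-gray intima-media band, bright adventitia.
        rng = np.random.default_rng(1)
        intensities = np.concatenate(
            [rng.normal(30, 5, 40), rng.normal(90, 8, 20), rng.normal(160, 10, 40)]
        )
        centers, u = fuzzy_cmeans(intensities, c=3)
        print("class centers:", np.sort(centers))
        print("first hard labels:", np.argmax(u, axis=1)[:10])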