
    A Product Line Systems Engineering Process for Variability Identification and Reduction

    Software Product Line Engineering has attracted attention over the last two decades due to its promising capability to reduce cost and time to market through the reuse of requirements and components. In practice, developing system-level product lines in a large-scale company is not an easy task, as there may be thousands of variants and multiple disciplines involved. The manual reuse of legacy system models during domain engineering to build reusable system libraries, and the configuration of variants to derive target products, can be infeasible. To tackle this challenge, a Product Line Systems Engineering process is proposed. Specifically, the process extends research on the System Orthogonal Variability Model to support hierarchical variability modeling with formal definitions; utilizes Systems Engineering concepts and legacy system models to build the hierarchy for the variability model and to identify essential relations between variants; and finally, analyzes the identified relations to reduce the number of variation points. The process, which is automated by computational algorithms, is demonstrated through an illustrative example on generalized Rolls-Royce aircraft engine control systems. To evaluate the effectiveness of the process in reducing variation points, it is further applied to case studies in different engineering domains at different levels of complexity. Subject to system model availability, reductions of 14% to 40% in the number of variation points are demonstrated in the case studies.
    Comment: 12 pages, 6 figures, 2 tables; submitted to the IEEE Systems Journal on 3rd June 201
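The abstract does not specify how relations between variants are analyzed; one plausible sketch (names and data shapes are assumptions, not the paper's method) is to merge variation points whose variant choices always co-occur across legacy product configurations, since such points carry no independent choice:

```python
def reduce_variation_points(configs):
    """configs: list of dicts mapping variation point -> chosen variant.

    Variation points whose variant choices follow the same pattern across
    every legacy configuration carry no independent choice; each such
    group is a candidate for merging into a single variation point.
    """
    def canonical(pattern):
        # relabel variants by first occurrence so ("A","B","A") matches
        # ("C1","C2","C1") as the same selection pattern
        ids = {}
        return tuple(ids.setdefault(v, len(ids)) for v in pattern)

    groups = {}
    for vp in configs[0]:
        sig = canonical(tuple(cfg[vp] for cfg in configs))
        groups.setdefault(sig, []).append(vp)
    return [g for g in groups.values() if len(g) > 1]

# hypothetical legacy configurations: fuel_pump and controller co-vary
configs = [
    {"fuel_pump": "A", "controller": "C1", "sensor": "S1"},
    {"fuel_pump": "B", "controller": "C2", "sensor": "S1"},
    {"fuel_pump": "A", "controller": "C1", "sensor": "S2"},
]
print(reduce_variation_points(configs))  # [['fuel_pump', 'controller']]
```

Merging each reported group into one variation point is one way a 14% to 40% reduction could arise in practice.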

    Automated quantification and evaluation of motion artifact on coronary CT angiography images

    Abstract Purpose This study developed and validated a Motion Artifact Quantification algorithm to automatically quantify the severity of motion artifacts on coronary computed tomography angiography (CCTA) images. The algorithm was then used to develop a Motion IQ Decision method to automatically identify whether a CCTA dataset is of sufficient diagnostic image quality or requires further correction. Method The developed Motion Artifact Quantification algorithm includes steps to identify the right coronary artery (RCA) regions of interest (ROIs), to segment the vessel and shading artifacts, and to calculate the motion artifact score (MAS) metric. The segmentation algorithms were verified against ground‐truth manual segmentations, and also by comparing and analyzing the MAS calculated from ground‐truth and algorithm‐generated segmentations. The Motion IQ Decision algorithm first identifies slices with unsatisfactory image quality using a MAS threshold. The algorithm then uses an artifact‐length threshold to determine whether the degraded vessel segment is large enough to render the dataset nondiagnostic. An observer study on 30 clinical CCTA datasets was performed to obtain ground‐truth decisions on whether the datasets were of sufficient image quality. A five‐fold cross‐validation was used to identify the thresholds and to evaluate the Motion IQ Decision algorithm. Results The automated segmentation algorithms in the Motion Artifact Quantification algorithm resulted in Dice coefficients of 0.84 for the segmented vessel regions and 0.75 for the segmented shading artifact regions. The MAS calculated using the automated algorithm was within 10% of the values obtained using ground‐truth segmentations. The MAS threshold and artifact‐length threshold were determined by ROC analysis to be 0.6 and 6.25 mm, respectively, in all folds.
The Motion IQ Decision algorithm demonstrated 100% sensitivity, 66.7% ± 27.9% specificity, and a total accuracy of 86.7% ± 12.5% for identifying datasets in which the RCA required correction. The Motion IQ Decision algorithm demonstrated 91.3% sensitivity, 71.4% specificity, and a total accuracy of 86.7% for identifying CCTA datasets that need correction for any of the three main vessels. Conclusion The Motion Artifact Quantification algorithm calculated accurate
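The two-threshold decision rule described above can be sketched as follows. The threshold values (0.6 for MAS, 6.25 mm for artifact length) come from the abstract, but the function shape, slice-spacing parameter, and contiguous-run interpretation of "artifact length" are assumptions:

```python
def needs_correction(mas_per_slice, slice_spacing_mm,
                     mas_threshold=0.6, length_threshold_mm=6.25):
    """Flag a vessel as needing correction when a contiguous run of
    slices with MAS above the threshold spans more than the length
    threshold (a sketch of the described decision, not the paper's code)."""
    run_mm = 0.0
    for mas in mas_per_slice:
        if mas > mas_threshold:
            run_mm += slice_spacing_mm
            if run_mm > length_threshold_mm:
                return True
        else:
            run_mm = 0.0  # degraded run interrupted; start over
    return False

# a 7-slice degraded run at 1.25 mm spacing spans 8.75 mm -> correction
print(needs_correction([0.2, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.1], 1.25))
```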

    Measuring Accuracy of Automated Parsing and Categorization Tools and Processes in Digital Investigations

    This work presents a method for measuring the accuracy of evidential artifact extraction and categorization tasks in digital forensic investigations. Instead of focusing on the measurement of accuracy and errors in the functions of digital forensic tools, this work proposes the application of information retrieval measurement techniques that allow the incorporation of errors introduced by tools and analysis processes. The method uses a 'gold standard': the collection of evidential objects determined by a digital investigator from suspect data with an unknown ground truth. This work proposes that the accuracy of tools and investigation processes can be evaluated against the derived gold standard using common precision and recall values. Two example case studies are presented showing the measurement of the accuracy of automated analysis tools as compared to an in-depth analysis by an expert. It is shown that such measurement allows investigators to determine changes in the accuracy of their processes over time, and whether such a change is caused by their tools or their knowledge.
    Comment: 17 pages, 2 appendices, 1 figure, 5th International Conference on Digital Forensics and Cyber Crime; Digital Forensics and Cyber Crime, pp. 147-169, 201
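The proposed evaluation reduces to standard precision and recall over sets of artifacts. A minimal sketch, with invented file names standing in for evidential objects:

```python
def precision_recall(tool_artifacts, gold_standard):
    """Precision and recall of a tool's extracted artifact set against
    the investigator-derived gold standard."""
    tool, gold = set(tool_artifacts), set(gold_standard)
    true_positives = len(tool & gold)
    precision = true_positives / len(tool) if tool else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# illustrative: the expert found four evidential objects, the tool three
gold = {"chat.log", "img001.jpg", "ledger.xls", "mail.pst"}
tool = {"chat.log", "img001.jpg", "temp.dat"}
p, r = precision_recall(tool, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.50
```

Tracking these two values across cases is what lets an investigator see whether accuracy drifts over time, as the abstract suggests.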

    TMS-evoked long-lasting artefacts: A new adaptive algorithm for EEG signal correction

    OBJECTIVE: During EEG, the discharge of TMS generates a long-lasting decay artefact (DA) that makes the analysis of TMS-evoked potentials (TEPs) difficult. Our aim was twofold: (1) to describe how the DA affects the recorded EEG and (2) to develop a new adaptive detrend algorithm (ADA) able to correct the DA. METHODS: We performed two experiments testing 50 healthy volunteers. In experiment 1, we tested the efficacy of ADA by comparing it with two commonly used independent component analysis (ICA) algorithms. In experiment 2, we further investigated the efficiency of ADA and the impact of the DA evoked by TMS over frontal, motor and parietal areas. RESULTS: Our results demonstrated that (1) the DA affected the EEG signal in the spatiotemporal domain; (2) ADA was able to completely remove the DA without affecting the TEP waveforms; and (3) ICA corrections produced significant changes in peak-to-peak TEP amplitude. CONCLUSIONS: ADA is a reliable solution for DA correction, especially considering that (1) it does not affect physiological responses; (2) it is completely data-driven; and (3) its effectiveness does not depend on the characteristics of the artefact or the number of recording electrodes. SIGNIFICANCE: We propose a new, reliable correction algorithm for long-lasting TMS-EEG artefacts.
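The abstract does not disclose ADA's internals; one common data-driven way to remove a decay artefact, shown here purely as an illustration and not as the authors' algorithm, is to fit an exponential a*exp(b*t) per channel in log space and subtract it:

```python
import math

def detrend_decay(signal, dt):
    """Least-squares fit of log(signal) = log(a) + b*t, then subtract
    the fitted exponential decay. Assumes a positive, decay-dominated
    signal segment (an illustrative sketch, not the published ADA)."""
    n = len(signal)
    t = [i * dt for i in range(n)]
    y = [math.log(s) for s in signal]
    mt, my = sum(t) / n, sum(y) / n
    b = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / \
        sum((ti - mt) ** 2 for ti in t)
    a = math.exp(my - b * mt)
    return [s - a * math.exp(b * ti) for s, ti in zip(signal, t)]

# a pure exponential decay is removed to within float precision
decay = [100.0 * math.exp(-3.0 * i * 0.001) for i in range(500)]
residual = detrend_decay(decay, 0.001)
print(max(abs(r) for r in residual) < 1e-6)
```

Because the fit is estimated from the data itself, this kind of approach shares ADA's advertised property of being data-driven and independent of electrode count.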

    Test Excavations at the Spanish Governor's Palace, San Antonio, Texas

    Test excavations were carried out in October 1996 by the Center for Archaeological Research of The University of Texas at San Antonio in front of the Spanish Governor's Palace in Military Plaza in downtown San Antonio. Although planned to retrieve information on the depth and present condition of the building's foundations, the excavations also recovered important information on earlier occupation of the site and on the construction methods used when the palace was built.

    Stability effects on results of diffusion tensor imaging analysis by reduction of the number of gradient directions due to motion artifacts: an application to presymptomatic Huntington's disease.

    In diffusion tensor imaging (DTI), an improvement in the signal-to-noise ratio (SNR) of the fractional anisotropy (FA) maps can be obtained when the number of recorded gradient directions (GD) is increased. Conversely, eliminating motion-corrupted or noisy GD leads to a more accurate characterization of the diffusion tensor. We previously suggested a slice-wise method for artifact detection in FA maps. The current study applies this approach to a cohort of 18 premanifest Huntington's disease (pHD) subjects and 23 controls. By 2-D voxelwise statistical comparison of original FA maps and FA maps with a reduced number of GD, the effect of eliminating GD affected by motion was demonstrated. We present an evaluation metric that tests whether the computed FA maps (with a reduced number of GD) still reflect a "true" FA map, as defined by simulations in the control sample. Furthermore, we investigated whether omitting data volumes affected by motion in the pHD cohort could lead to an increased SNR in the resulting FA maps. A high agreement between original FA maps (with all GD) and corrected FA maps (i.e. without GD corrupted by motion) was observed even when up to 13 GD were eliminated. Even for one data set in which 46 GD had to be eliminated, the results showed moderate agreement.
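The abstract's evaluation metric is not specified in detail; a hypothetical illustration of such an agreement check is to compare the original and reduced-GD FA maps voxel by voxel and report the fraction of voxels within a tolerance (the function name, flattened-map representation, and tolerance are all assumptions):

```python
def fa_agreement(fa_original, fa_reduced, tolerance=0.05):
    """Fraction of voxels whose FA changes by at most `tolerance`
    after recomputing the map from fewer gradient directions."""
    inside = sum(1 for a, b in zip(fa_original, fa_reduced)
                 if abs(a - b) <= tolerance)
    return inside / len(fa_original)

# toy flattened FA maps: one voxel deviates by more than 0.05
fa_full = [0.30, 0.55, 0.72, 0.41, 0.66]
fa_red  = [0.31, 0.53, 0.80, 0.42, 0.65]
print(fa_agreement(fa_full, fa_red))  # 0.8
```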

    Evaluation of Motion Artifact Metrics for Coronary CT Angiography

    Purpose This study quantified the performance of coronary artery motion artifact metrics relative to human observer ratings. Motion artifact metrics have been used as part of motion correction and best‐phase selection algorithms for Coronary Computed Tomography Angiography (CCTA). However, the lack of ground truth makes it difficult to validate how well the metrics quantify the level of motion artifact. This study investigated five motion artifact metrics, including two novel metrics, using a dynamic phantom, clinical CCTA images, and an observer study that provided ground‐truth motion artifact scores from a series of pairwise comparisons. Method Five motion artifact metrics were calculated for the coronary artery regions on both phantom and clinical CCTA images: positivity, entropy, normalized circularity, Fold Overlap Ratio (FOR), and Low‐Intensity Region Score (LIRS). CT images were acquired of a dynamic cardiac phantom that simulated cardiac motion and contained six iodine‐filled vessels of varying diameter and with regions of soft plaque and calcifications. Scans were repeated with different gantry start angles. Images were reconstructed at five phases of the motion cycle. Clinical images were acquired from 14 CCTA exams with patient heart rates ranging from 52 to 82 bpm. The vessel and shading artifacts were manually segmented by three readers and combined to create ground‐truth artifact regions. Motion artifact levels were also assessed by readers using a pairwise comparison method to establish a ground‐truth reader score. Kendall's Tau coefficients were calculated to evaluate the statistical agreement in ranking between the motion artifact metrics and reader scores. Linear regression between the reader scores and the metrics was also performed.
Results On phantom images, the Kendall's Tau coefficients of the five motion artifact metrics were 0.50 (normalized circularity), 0.35 (entropy), 0.82 (positivity), 0.77 (FOR), and 0.77 (LIRS), where higher Kendall's Tau signifies higher agreement. The FOR, LIRS, and transformed positivity (the fourth root of the positivity) were further evaluated in the study of clinical images. The Kendall's Tau coefficients of the selected metrics were 0.59 (FOR), 0.53 (LIRS), and 0.21 (transformed positivity). In the study of clinical data, a Motion Artifact Score, defined as the product of the FOR and LIRS metrics, further improved agreement with reader scores, with a Kendall's Tau coefficient of 0.65. Conclusion The metrics of FOR, LIRS, and the product of the two metrics provided the highest agreement in motion artifact ranking when compared to the readers, and the highest linear correlation to the reader scores. The validated motion artifact metrics may be useful for developing and evaluating methods to reduce motion in Coronary Computed Tomography Angiography (CCTA) images
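The agreement measure used throughout these results can be computed directly from pairwise orderings. A minimal Tau-a sketch (the metric and reader values below are invented for illustration):

```python
def kendall_tau(x, y):
    """Kendall's Tau-a: (concordant - discordant) / total pairs,
    comparing the ordering of metric values x with reader scores y."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ranked the same way by both
            elif s < 0:
                discordant += 1   # pair ranked oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# toy data: one of ten pairs is ranked oppositely -> tau = 0.8
metric = [0.1, 0.4, 0.35, 0.8, 0.9]
reader = [1, 2, 3, 4, 5]
print(kendall_tau(metric, reader))  # 0.8
```

Tau ranges from -1 (reversed ranking) to 1 (identical ranking), which is why values such as 0.82 for positivity on phantom images indicate strong agreement with the readers.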