
    Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric

    Compression artifact removal is a challenging problem because videos can be compressed at different qualities. In this paper, a least squares approach that is self-adaptive to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients, trained beforehand, is selected for optimally removing coding artifacts and reconstructing object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities, in comparison with several other deblocking techniques. The proposed method significantly outperforms the others both objectively and subjectively.
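    The quality-adaptive selection step described in this abstract can be sketched as follows. The blockiness metric and the filter-bank layout here are illustrative assumptions for exposition, not the paper's actual metric or trained coefficients.

    ```python
    import numpy as np

    def blockiness_score(img, block=8):
        """Illustrative no-reference blockiness metric: mean absolute
        luminance jump across 8x8 block boundaries, normalized by the
        mean horizontal gradient over the whole image."""
        img = img.astype(float)
        # column indices just before each vertical block boundary (7|8, 15|16, ...)
        cols = np.arange(block - 1, img.shape[1] - 1, block)
        boundary = np.abs(img[:, cols + 1] - img[:, cols]).mean()
        interior = np.abs(np.diff(img, axis=1)).mean() + 1e-9
        return boundary / interior

    def select_filter(score, banks):
        """Pick the pre-trained coefficient set whose quality range
        contains the measured blockiness score. `banks` is a list of
        (upper_bound, coefficients) pairs sorted by upper bound."""
        for upper, coeffs in banks:
            if score <= upper:
                return coeffs
        return banks[-1][1]
    ```

    A heavily blocked image scores well above a smooth one, so each input frame routes to coefficients trained for its quality band.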

    Improving Scrum User Stories and Product Backlog Using Work System Snapshots

    Lack of domain knowledge is often considered a reason for improper elicitation and specification of the requirements of a software system. The work system method helps analysts understand the business situation to be supported by the software system. This research investigates the effects of preparing a work system snapshot, a key artifact of the work system method, on the quality of initial requirements specifications represented within the Scrum methodology. Those specifications take the form of a product backlog, a set of user stories to be addressed. The findings from a controlled experiment conducted with 165 students in a software engineering course indicate that preparing a work system snapshot results in a significant reduction in invalid user stories and an increase in valid user stories in the product backlog.

    AR2, a novel automatic muscle artifact reduction software method for ictal EEG interpretation: Validation and comparison of performance with commercially available software.

    Objective: To develop a novel software method (AR2) for reducing muscle contamination of ictal scalp electroencephalogram (EEG), and to validate this method on the basis of its performance, in comparison with a commercially available software method (AR1), in accurately depicting seizure-onset location. Methods: A blinded investigation used 23 EEG recordings of seizures from 8 patients. Each recording was uninterpretable with digital filtering because of muscle artifact; each was processed using AR1 and AR2 and reviewed by 26 EEG specialists. EEG readers assessed seizure-onset time, lateralization, and region, and specified confidence for each determination. The two methods were validated on the basis of the number of readers able to render assignments, their confidence, the intra-class correlation (ICC), and agreement with other clinical findings. Results: Among the 23 seizures, two-thirds of the readers were able to delineate seizure-onset time in 10 of 23 using AR1, and in 15 of 23 using AR2.

    A combined noise reduction and partial volume estimation method for image quantitation


    Identification of Risks in the Course of Managing the Deep Sea Archeological Projects Using Marine Robotics

    An analysis is conducted of the basic risks that occur when managing deep-sea archeological research projects. It is proposed to consider the possible risks of such projects as a general set of risks containing subsets of identified and unidentified risks. Based on a generalization of existing experience of underwater archaeological research, and with regard to the peculiarities of its execution using TV-controlled unmanned underwater vehicles, the main risks of such operations are identified. A classification of risk factors is proposed that takes into account weather and hydrological conditions in the area of operations, the peculiarities of the underwater situation, the technological and technical provision of underwater archaeological research, possible obstacles from navigation in the explored area, errors in the geographical coordinates of the work performed, and the human factor. Additionally, environmental, organizational, and financial risks, of which the project team is aware, are defined as directly related to planning deep-sea archeological research projects. A generalized risk register of deep-sea archaeological research projects is developed as a theoretical foundation for designing risk management models and for their quantitative evaluation when planning the financial and temporal resources of such projects.
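    A generalized risk register of the kind this abstract describes can be represented as a small data structure. The category names below follow the classification in the abstract; the fields, probability-impact scoring, and ranking rule are illustrative assumptions, not the paper's model.

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class RiskCategory(Enum):
        # Categories follow the classification proposed in the abstract.
        WEATHER_HYDROLOGICAL = "weather and hydrological conditions"
        UNDERWATER_SITUATION = "underwater situation"
        TECHNICAL = "technological and technical provision"
        NAVIGATION = "navigation obstacles in the area"
        COORDINATES = "errors in geographical coordinates"
        HUMAN_FACTOR = "human factor"
        ENVIRONMENTAL = "environmental"
        ORGANIZATIONAL = "organizational"
        FINANCIAL = "financial"

    @dataclass
    class Risk:
        name: str
        category: RiskCategory
        probability: float       # estimated likelihood, 0..1
        impact: float            # relative cost/time impact, 0..1
        identified: bool = True  # the abstract splits risks into identified / unidentified

        @property
        def exposure(self):
            # Standard probability-impact product, used here for ranking.
            return self.probability * self.impact

    def rank_register(register):
        """Order risks by exposure so planning reserves target the worst first."""
        return sorted(register, key=lambda r: r.exposure, reverse=True)
    ```

    Ranking by exposure is one conventional way to feed such a register into quantitative planning of financial and temporal reserves.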

    Automated quantification and evaluation of motion artifact on coronary CT angiography images

    Purpose: This study developed and validated a Motion Artifact Quantification algorithm to automatically quantify the severity of motion artifacts on coronary computed tomography angiography (CCTA) images. The algorithm was then used to develop a Motion IQ Decision method to automatically identify whether a CCTA dataset is of sufficient diagnostic image quality or requires further correction. Method: The developed Motion Artifact Quantification algorithm includes steps to identify the right coronary artery (RCA) regions of interest (ROIs), to segment vessel and shading artifacts, and to calculate the motion artifact score (MAS) metric. The segmentation algorithms were verified against ground-truth manual segmentations, and additionally by comparing and analyzing the MAS calculated from ground-truth and algorithm-generated segmentations. The Motion IQ Decision algorithm first identifies slices with unsatisfactory image quality using a MAS threshold; it then uses an artifact-length threshold to determine whether the degraded vessel segment is large enough to render the dataset nondiagnostic. An observer study on 30 clinical CCTA datasets was performed to obtain ground-truth decisions on whether the datasets were of sufficient image quality, and a five-fold cross-validation was used to identify the thresholds and to evaluate the Motion IQ Decision algorithm. Results: The automated segmentation algorithms in the Motion Artifact Quantification algorithm resulted in Dice coefficients of 0.84 for the segmented vessel regions and 0.75 for the segmented shading-artifact regions. The MAS calculated using the automated algorithm was within 10% of the values obtained using ground-truth segmentations. The MAS threshold and artifact-length threshold were determined by ROC analysis to be 0.6 and 6.25 mm in all folds. The Motion IQ Decision algorithm demonstrated 100% sensitivity, 66.7% ± 27.9% specificity, and a total accuracy of 86.7% ± 12.5% for identifying datasets in which the RCA required correction, and 91.3% sensitivity, 71.4% specificity, and a total accuracy of 86.7% for identifying CCTA datasets that need correction for any of the three main vessels. Conclusion: The Motion Artifact Quantification algorithm calculated accurate MAS values.
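    The decision rule reported in this abstract (MAS threshold 0.6, artifact-length threshold 6.25 mm) can be sketched as follows. The per-slice input format and the slice-spacing parameter are assumptions for illustration, not details from the paper.

    ```python
    MAS_THRESHOLD = 0.6         # per-slice motion-artifact score cutoff (from the abstract)
    LENGTH_THRESHOLD_MM = 6.25  # minimum degraded-segment length (from the abstract)

    def needs_correction(mas_per_slice, slice_spacing_mm=0.5):
        """Return True if any contiguous run of slices whose MAS exceeds
        MAS_THRESHOLD spans more than LENGTH_THRESHOLD_MM of vessel.
        slice_spacing_mm is an assumed acquisition parameter."""
        run = 0
        for mas in mas_per_slice:
            if mas > MAS_THRESHOLD:
                run += 1
                if run * slice_spacing_mm > LENGTH_THRESHOLD_MM:
                    return True
            else:
                run = 0  # degraded segment interrupted; restart the run
        return False
    ```

    Requiring a contiguous run, rather than a total count of degraded slices, matches the abstract's notion of a degraded vessel *segment* long enough to be nondiagnostic.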

    Ontology-driven conceptual modeling: A systematic literature mapping and review

    Ontology-driven conceptual modeling (ODCM) is still a relatively new research domain in the field of information systems, and there is still much discussion on how research in ODCM should be performed and what its focus should be. Therefore, this article critically surveys the existing literature in order to assess the kind of research that has been performed over the years, analyze the nature of the research contributions, and establish the current state of the art by positioning, evaluating, and interpreting relevant research related to ODCM. To understand and identify gaps and research opportunities, our literature study comprises both a systematic mapping study and a systematic review study. The mapping study aims at structuring and classifying the area being investigated in order to give a general overview of the research performed in the field. A review study, on the other hand, is a more thorough and rigorous inquiry and provides recommendations based on the strength of the evidence found. Our results indicate several research gaps that should be addressed, and we further identify several research opportunities as possible areas for future research.

    Troubleshooting Arterial-Phase MR Images of Gadoxetate Disodium-Enhanced Liver.

    Gadoxetate disodium is a widely used magnetic resonance (MR) contrast agent for liver MR imaging, and it provides both dynamic and hepatobiliary phase images. However, acquiring optimal arterial phase images at liver MR using gadoxetate disodium is more challenging than using conventional extracellular MR contrast agent because of the small volume administered, the gadolinium content of the agent, and the common occurrence of transient severe motion. In this article, we identify the challenges in obtaining high-quality arterial-phase images of gadoxetate disodium-enhanced liver MR imaging and present strategies for optimizing arterial-phase imaging based on the thorough review of recent research in this field

    Enhancing Workflow with a Semantic Description of Scientific Intent

    Peer reviewed. Preprint