Improving Variability Analysis through Scenario-Based Incompatibility Detection
Software Product Line (SPL) development includes Variability Management (VM) as a core activity aimed at minimizing the inherent complexity of manipulating commonality and variability. In particular, the (automated) analysis of variability models refers to the activities, methods, and techniques involved in the definition, design, and instantiation of the variabilities modeled during SPL development. The steps of this analysis are defined as a variability analysis process (VA process), which focuses on helping variability model designers avoid anomalies and/or inconsistencies and on minimizing problems when products are implemented and derived. Previously, we proposed an approach for analyzing variability models through a well-defined VA process (named SeVaTax). This process includes a comprehensive set of scenarios that allows a designer to detect (and in some cases even correct) different incompatibilities. In this work, we extend SeVaTax by classifying the scenarios according to their dependencies and by assessing the use of these scenarios. The assessment comprises two experiments that evaluate accuracy and coverage: the former addresses the responses produced when variability models are analyzed, and the latter the completeness of our process with respect to other proposals. Findings show that a more extensive set of scenarios might improve on current practices in variability analysis.
Fil: Buccella, Agustina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Patagonia Confluencia; Argentina. Universidad Nacional del Comahue. Facultad de Informatica; Argentina
Fil: Pol'la, Matias Esteban. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Patagonia Confluencia; Argentina. Universidad Nacional del Comahue. Facultad de Informatica; Argentina
Fil: Cechich, Susana Alejandra. Universidad Nacional del Comahue. Facultad de Informatica; Argentina
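The kind of incompatibility such a VA process flags can be illustrated with a toy example (this is our own sketch, not SeVaTax itself): a brute-force detector for "dead" features, i.e., features that appear in no valid product because their constraints contradict each other. The feature names and constraints below are hypothetical.

```python
from itertools import product

# Toy feature model; names and constraints are hypothetical.
FEATURES = ["gps", "maps", "offline_maps"]

def is_valid(cfg):
    """Constraints of the toy model. The two rules on offline_maps
    contradict each other -- exactly the kind of anomaly a VA
    process should detect."""
    if cfg["offline_maps"] and not cfg["maps"]:
        return False  # offline_maps requires maps
    if cfg["offline_maps"] and cfg["maps"]:
        return False  # offline_maps excludes maps (faulty constraint)
    return True

def dead_features(features, valid):
    """A feature is dead if it is selected in no valid configuration."""
    alive = set()
    for bits in product([False, True], repeat=len(features)):
        cfg = dict(zip(features, bits))
        if valid(cfg):
            alive |= {f for f, on in cfg.items() if on}
    return [f for f in features if f not in alive]

print(dead_features(FEATURES, is_valid))  # → ['offline_maps']
```

Real analyzers encode such checks as satisfiability queries rather than enumerating configurations, but the detected anomaly is the same.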
Fully automatic segmentation and monitoring of choriocapillaris flow voids in OCTA images
Funded for open-access publication by Universidade da Coruña/CISUG.
[Abstract]: Optical coherence tomography angiography (OCTA) is a non-invasive ophthalmic imaging modality that is widely used in clinical practice. Recent technological advances in OCTA allow imaging of blood flow deeper than the retinal layers, at the level of the choriocapillaris (CC), where a granular image is obtained showing a pattern of bright areas, representing blood flow, and a pattern of small dark regions, called flow voids (FVs). Several clinical studies have reported a close correlation between abnormal FV distribution and multiple diseases, so quantifying changes in the FV distribution of the CC has become an area of interest for many clinicians. However, CC OCTA images present very complex features that make it difficult to compare FVs correctly while monitoring a patient. In this work, we propose fully automatic approaches for the segmentation and monitoring of FVs in CC OCTA images. The first is a baseline approach, in which a fully automatic segmentation methodology based on local contrast enhancement and global thresholding segments FVs and measures changes in their distribution in a straightforward manner. The second is a robust approach in which, prior to the use of our segmentation methodology, an unsupervised trained neural network performs a deformable registration that aligns inconsistencies between images acquired at different time instants. The proposed approaches were tested on CC OCTA images collected during a clinical study on the response to photodynamic therapy in patients affected by chronic central serous chorioretinopathy (CSC), demonstrating their clinical utility. The results showed that both approaches are accurate and robust, surpassing the state of the art and therefore improving the efficacy of FVs as a biomarker for monitoring patient treatments.
This gives our methods great potential for clinical use, with the possibility of extending them to other pathologies or treatments associated with this type of imaging.
Funding: This research was funded by Instituto de Salud Carlos III, Government of Spain, research project DTS18/00136; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, research project RTI2018-095894-B-I00; Ministerio de Ciencia e Innovación, Government of Spain, research projects PID2019-108435RB-I00, TED2021-131201B-I00 and PDC2022-133132-I00; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, postdoctoral grant ref. ED481B-2021-059 and Grupos de Referencia Competitiva grant ref. ED431C 2020/24; and Axencia Galega de Innovación (GAIN), Xunta de Galicia, grant ref. IN845D 2020/38. CITIC, as a Research Center accredited by the Galician University System, is funded by the Consellería de Cultura, Educación e Universidade, Xunta de Galicia, supported 80 % through ERDF Funds (ERDF Operational Programme Galicia 2014–2020) and the remaining 20 % by the Secretaría Xeral de Universidades, grant ref. ED431G 2019/01. Emilio López Varela acknowledges his support under the FPI Grant Program through project PID2019-108435RB-I00. Funding for open access charge: Universidade da Coruña/CISUG.
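The two stages of the baseline approach (local contrast enhancement followed by global thresholding) can be sketched in a few lines of numpy. This is a minimal illustration under our own assumptions, a box-window enhancement and an Otsu threshold, not the authors' exact methodology.

```python
import numpy as np

def box_mean(img, r):
    """Local mean over a (2r+1)x(2r+1) window via padded cumulative sums."""
    p = np.pad(img.astype(float), r, mode="reflect")
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / k ** 2

def otsu_threshold(values, bins=256):
    """Global threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = hist.cumsum()
    w1 = w0[-1] - w0
    m0 = (hist * mids).cumsum()
    mu0 = m0 / np.maximum(w0, 1)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    return mids[(w0 * w1 * (mu0 - mu1) ** 2).argmax()]

def segment_flow_voids(img, radius=8):
    """Flatten local illumination, then keep darker-than-threshold pixels as FVs."""
    enhanced = img / np.maximum(box_mean(img, radius), 1e-6)
    return enhanced < otsu_threshold(enhanced)
```

On a synthetic bright image containing a dark patch, the mask recovers the patch; monitoring would then compare FV masks (or their area fractions) across visits, with the robust variant first registering the images.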
Automatic Mapping of Student 3D Profiles in Software Metrics for Temporal Analysis of Programming Learning and Scoring Rubrics
The purpose of this chapter is to present an online system for a 3D representation of programming students' profiles based on software metrics that quantify the effort and quality of programming from the analysis of source code. In this representation, each student profile is a three-dimensional vector built from the set of programming solutions developed by the student and mapped onto 348 software metrics during a programming course. Applying this profile representation, we developed a system with the following functionalities: generation of student timelines to track the evolution of metrics over a sequence of programming solutions during a course, different visualizations of these variables, automatic selection of representative codes for composing rubrics with less evaluation effort, and selection of the metrics that most influence the scores attributed by teachers. The advantages of this system are that it enables analysis of where learning difficulties begin, monitoring of how a class evolves along a course, and dynamic composition of rubric representations to inform assessment criteria. The proposed system therefore presents itself as a relevant tool to assist teachers with decisions in the evaluative process, in fact allowing students to be assisted from the beginning to the end of a course.
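Mapping one submission onto a metric vector can be illustrated with Python's `ast` module. The chapter uses 348 metrics; the three computed here (statement count, function count, maximum nesting depth) are our own illustrative stand-ins, and the nesting rule is an assumption.

```python
import ast

# Statement types that increase nesting depth (an illustrative choice).
NESTING = (ast.If, ast.For, ast.While, ast.With, ast.Try, ast.FunctionDef)

def max_depth(node, depth=0):
    """Deepest nesting level reachable below `node`."""
    best = depth
    for child in ast.iter_child_nodes(node):
        best = max(best, max_depth(child, depth + isinstance(child, NESTING)))
    return best

def metric_vector(source):
    """Map one submission onto (statement count, function count, max nesting)."""
    tree = ast.parse(source)
    n_stmts = sum(isinstance(n, ast.stmt) for n in ast.walk(tree))
    n_funcs = sum(isinstance(n, ast.FunctionDef) for n in ast.walk(tree))
    return (n_stmts, n_funcs, max_depth(tree))

code = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
print(metric_vector(code))  # → (4, 1, 2)
```

A student timeline would then be the sequence of such vectors over the submissions of a course, which is what the visualizations and rubric-selection features consume.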
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), and major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a basis for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Knowledge-based Biomedical Data Science 2019
Knowledge-based biomedical data science (KBDS) involves the design and
implementation of computer systems that act as if they knew about biomedicine.
Such systems depend on formally represented knowledge in computer systems,
often in the form of knowledge graphs. Here we survey the progress in the last
year in systems that use formally represented knowledge to address data science
problems in both clinical and biological domains, as well as on approaches for
creating knowledge graphs. Major themes include the relationships between
knowledge graphs and machine learning, the use of natural language processing,
and the expansion of knowledge-based approaches to novel domains, such as
Traditional Chinese Medicine and biodiversity.
Comment: Manuscript 43 pages with 3 tables; supplemental material 43 pages with 3 tables
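The core idea of formally represented knowledge can be shown with a minimal in-memory knowledge graph: facts as subject-predicate-object triples plus a transitive query over `is_a` edges. The facts below are toy examples of ours, not drawn from any of the surveyed systems.

```python
# A tiny knowledge graph as a set of (subject, predicate, object) triples.
# The facts are illustrative and medically simplified.
TRIPLES = {
    ("aspirin", "is_a", "nsaid"),
    ("nsaid", "is_a", "anti_inflammatory"),
    ("anti_inflammatory", "is_a", "drug"),
    ("aspirin", "treats", "fever"),
}

def superclasses(entity):
    """All classes reachable from `entity` by following is_a edges."""
    seen, frontier = set(), {entity}
    while frontier:
        nxt = {o for s, p, o in TRIPLES if p == "is_a" and s in frontier}
        frontier = nxt - seen
        seen |= nxt
    return seen

print(sorted(superclasses("aspirin")))  # → ['anti_inflammatory', 'drug', 'nsaid']
```

Production systems store such triples in RDF stores and query them with SPARQL, and the survey's machine-learning theme concerns learning over exactly this kind of graph structure.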
The 1990 progress report and future plans
This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research through engineering development to fielded NASA applications, particularly those applications that are enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.
Vision-driven Autocharacterization of Perovskite Semiconductors
In materials research, the task of characterizing hundreds of different
materials traditionally requires equally many human hours spent measuring
samples one by one. We demonstrate that with the integration of computer vision
into this material research workflow, many of these tasks can be automated,
significantly accelerating the throughput of the workflow for scientists. We
present a framework that uses vision to address specific pain points in the
characterization of perovskite semiconductors, a group of materials with the
potential to form new types of solar cells. With this approach, we automate the
measurement and computation of chemical and optoelectronic properties of
perovskites. Our framework proposes the following four key contributions: (i) a
computer vision tool for scalable segmentation to arbitrarily many material
samples, (ii) a tool to extract the chemical composition of all material
samples, (iii) an algorithm capable of automatically computing band gap across
arbitrarily many unique samples using vision-segmented hyperspectral
reflectance data, and (iv) automating the stability measurement of multi-hour
perovskite degradation experiments with vision for spatially non-uniform
samples. We demonstrate the key contributions of the proposed framework on
eighty samples of unique composition from the formamidinium-methylammonium lead
tri-iodide perovskite system and validate the accuracy of each method using
human evaluation and X-ray diffraction.
Comment: Manuscript 8 pages; Supplemental 7 pages
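Contribution (iii), computing a band gap from reflectance data, is commonly done via a Kubelka-Munk transform followed by Tauc-plot extrapolation; the sketch below uses that standard recipe under our own assumptions (a direct-gap exponent and a fixed fit window on the absorption edge), not the paper's exact pipeline.

```python
import numpy as np

def band_gap_ev(energy, reflectance):
    """Estimate a direct band gap (eV) from a reflectance spectrum:
    Kubelka-Munk transform, Tauc plot, linear fit on the absorption
    edge, and extrapolation to the energy axis."""
    fr = (1 - reflectance) ** 2 / (2 * reflectance)   # Kubelka-Munk F(R)
    tauc = (fr * energy) ** 2                         # (F(R)*hv)^2 for a direct gap
    edge = (tauc > 0.2 * tauc.max()) & (tauc < 0.8 * tauc.max())
    slope, intercept = np.polyfit(energy[edge], tauc[edge], 1)
    return -intercept / slope                         # x-intercept of the fit

# Synthetic direct-gap spectrum constructed to have a 1.55 eV gap.
energy = np.linspace(1.2, 2.2, 101)
fr_true = np.sqrt(np.maximum(energy - 1.55, 0.0)) / energy
reflectance = (1 + fr_true) - np.sqrt((1 + fr_true) ** 2 - 1)
print(round(band_gap_ev(energy, reflectance), 3))  # → 1.55
```

In the paper's workflow, the vision-segmented hyperspectral data would supply one such reflectance spectrum per sample, making the per-sample gap computation fully automatic.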
Design of the software development and verification system (SWDVS) for shuttle NASA study task 35
An overview of the Software Development and Verification System (SWDVS) for the space shuttle is presented. The design considerations, goals, assumptions, and major features of the design are examined. A scenario is developed that shows three persons involved in flight software development using the SWDVS in response to a program change request. The SWDVS is described from the standpoint of different groups of people with different responsibilities in the shuttle program, to show the functional requirements that influenced the SWDVS design. The software elements of the SWDVS that satisfy the requirements of the different groups are identified.