
    A Review of Atrial Fibrillation Detection Methods as a Service

    Atrial Fibrillation (AF) is a common heart arrhythmia that often goes undetected, and even when it is detected, managing the condition can be challenging. In this paper, we review how the RR interval and Electrocardiogram (ECG) signals, incorporated into a monitoring system, can be used to track AF events. Were such an automated system to be implemented, it could help manage AF and thereby reduce patient morbidity and mortality. The main impetus behind developing such a service is that analyzing a greater volume of data can lead to better patient outcomes. Based on the literature review presented herein, we introduce methods that can detect AF efficiently and automatically from the RR interval and ECG signals. A cardiovascular disease monitoring service that incorporates one or more of these detection methods could extend event observation to all hours and thereby help establish any occurrence of AF. The development of an automated and efficient method that monitors AF in real time would likely become a key component in meeting public health goals for reducing fatalities caused by the disease. Yet, at present, significant technological and regulatory obstacles prevent the deployment of any proposed system. Establishing the scientific foundation for monitoring is therefore important to provide an effective service to patients and healthcare professionals.
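    To make the RR-interval route to AF detection concrete, the minimal sketch below flags a window of RR intervals whose normalized variability is unusually high. The RMSSD feature is a common choice in this literature, but the 0.1 threshold, the window handling, and the sample values are illustrative assumptions, not clinically validated settings.

        import numpy as np

        def rr_intervals_from_rpeaks(r_peak_times):
            """Successive differences of R-peak times (seconds) give RR intervals."""
            return np.diff(np.asarray(r_peak_times))

        def af_flag(rr, rmssd_threshold=0.1):
            """Flag a window of RR intervals as possibly AF-like.

            Uses a normalized-RMSSD heuristic; the threshold is an
            illustrative assumption, not a clinical cutoff.
            """
            rr = np.asarray(rr)
            rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
            return (rmssd / rr.mean()) > rmssd_threshold

        # An irregular rhythm yields a much higher normalized RMSSD.
        regular = [0.80, 0.82, 0.81, 0.79, 0.80, 0.81]
        irregular = [0.62, 1.05, 0.71, 0.95, 0.58, 1.10]
        print(af_flag(regular), af_flag(irregular))   # False True

    In a monitoring service, such a window-level flag would only be a first-stage screen, with flagged segments passed on to ECG-based confirmation.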

    Data Mining and Machine Learning for Software Engineering

    Software engineering is one of the research areas that can benefit most from data mining. Developers have long attempted to improve software quality by mining and analyzing software data. In every phase of the software development life cycle (SDLC), large amounts of data are produced, and design, security, or other software problems may occur. Analyzing software data in the early phases of development helps to handle these problems and leads to more accurate and timely delivery of software projects. Various data mining and machine learning studies have addressed software engineering tasks such as defect prediction and effort estimation. This study identifies the open issues in applying data mining and machine learning techniques to software engineering and presents related solutions and recommendations.
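    As an illustration of the defect-prediction task mentioned above, the sketch below trains a classifier on per-module code metrics. The feature names and the synthetic data are illustrative assumptions; real studies typically use labeled corpora such as the PROMISE repositories.

        # Minimal sketch of metric-based defect prediction with scikit-learn.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import f1_score

        rng = np.random.default_rng(0)
        # Each row: [lines_of_code, cyclomatic_complexity, churn]; label 1 = defective.
        X = rng.normal(size=(200, 3))
        y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
        print("F1:", f1_score(y_test, model.predict(X_test)))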

    Fairness Testing: Testing Software for Discrimination

    This paper defines software fairness and discrimination and develops a testing-based method for measuring if and how much software discriminates, focusing on causality in discriminatory behavior. Evidence of software discrimination has been found in modern software systems that recommend criminal sentences, grant access to financial products, and determine who is allowed to participate in promotions. Our approach, Themis, generates efficient test suites to measure discrimination. Given a schema describing valid system inputs, Themis generates discrimination tests automatically and does not require an oracle. We evaluate Themis on 20 software systems, 12 of which come from prior work with an explicit focus on avoiding discrimination. We find that (1) Themis is effective at discovering software discrimination, (2) state-of-the-art techniques for removing discrimination from algorithms fail in many situations, at times discriminating against as much as 98% of an input subdomain, (3) Themis optimizations are effective at producing efficient test suites for measuring discrimination, and (4) Themis is more efficient on systems that exhibit more discrimination. We thus demonstrate that fairness testing is a critical aspect of the software development cycle in domains with possible discrimination and provide initial tools for measuring software discrimination.
    Comment: Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou. 2017. Fairness Testing: Testing Software for Discrimination. In Proceedings of the 2017 11th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), Paderborn, Germany, September 4-8, 2017 (ESEC/FSE'17). https://doi.org/10.1145/3106237.3106277
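    The idea behind causal discrimination measurement can be sketched without the Themis tool itself: sample inputs from the schema, alter only the sensitive attribute, and count how often the output changes. The function names, the toy loan system, and the schema below are illustrative assumptions, not the paper's implementation.

        import random

        def causal_discrimination(system, schema, sensitive, trials=1000, seed=0):
            """Estimate the fraction of inputs whose output changes when only
            the sensitive attribute is altered (a causal-discrimination measure).
            `schema` maps each attribute name to its list of valid values.
            """
            rng = random.Random(seed)
            changed = 0
            for _ in range(trials):
                inp = {a: rng.choice(vals) for a, vals in schema.items()}
                alt = dict(inp)
                alt[sensitive] = rng.choice(
                    [v for v in schema[sensitive] if v != inp[sensitive]])
                if system(inp) != system(alt):
                    changed += 1
            return changed / trials

        # Toy system under test that (unfairly) consults `sex` in its decision.
        def loan(inp):
            return inp["income"] > 40 or (inp["sex"] == "M" and inp["income"] > 30)

        schema = {"income": list(range(0, 100, 10)), "sex": ["M", "F"]}
        print(causal_discrimination(loan, schema, "sex"))   # roughly 0.1

    Note that no oracle is needed: the test only compares the system's output against itself under a perturbed input.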

    Leveraging Evolutionary Changes for Software Process Quality

    Real-world software applications must constantly evolve to remain relevant. This evolution occurs when developing new applications or adapting existing ones to meet new requirements, make corrections, or incorporate future functionality. Traditional methods of software quality control involve software quality models and continuous code inspection tools, which focus on directly assessing the quality of the software. However, there is a strong correlation, and arguably causation, between the quality of the development process and the resulting software product. Therefore, improving the development process indirectly improves the software product, too. Achieving this requires effective learning from past processes, often embraced through post-mortem organizational learning. While qualitative evaluation of large artifacts is common, the smaller quantitative changes captured by application lifecycle management are often overlooked. In addition to software metrics, these smaller changes can reveal complex phenomena related to project culture and management, and leveraging them can help detect and address such complex issues. Software evolution was previously measured by the size of changes, but the lack of consensus on a reliable and versatile quantification method prevents its use as a dependable metric; different size classifications fail to reliably describe the nature of evolution. While application lifecycle management data is rich, identifying which artifacts can model detrimental managerial practices remains uncertain. Approaches such as simulation modeling, discrete event simulation, or Bayesian networks have only a limited ability to exploit continuous-time process models of such phenomena. Even worse, the accessibility of, and mechanistic insight into, such gray- or black-box models is typically very low. To address these challenges, we suggest leveraging objectively [...]
    Comment: Ph.D. thesis without appended papers, 102 pages

    A Survey on What Developers Think About Testing

    Software is infamous for its poor quality and the frequent occurrence of bugs. While there is no doubt that thorough testing is an appropriate answer to ensure sufficient quality, the generally poor state of software suggests that developers may not always engage as thoroughly with testing as they should. This observation aligns with the prevailing belief that developers simply do not like writing tests. To determine the truth of this belief, we conducted a comprehensive survey with 21 questions aimed at (1) assessing developers' current engagement with testing and (2) identifying the factors influencing their inclination toward testing; that is, whether they would actually like to test more but are inhibited by their work environment, or whether they would really prefer to test even less if given the choice. Drawing on 284 responses from professional software developers, we uncover reasons that positively and negatively impact developers' motivation to test. Notably, the reasons for motivation to write more tests encompass not only a general pursuit of software quality but also personal satisfaction. However, developers nevertheless perceive testing as mundane and tend to prioritize other tasks. One approach emerging from the responses for mitigating these negative factors is to provide better recognition for developers' testing efforts.

    A survey on software defect prediction using deep learning

    Defect prediction is one of the key challenges in software development and programming language research for improving software quality and reliability. The problem in this area is to identify defective source code with high accuracy. Developing a fault prediction model is a challenging problem, and many approaches have been proposed over the years. The recent breakthrough in machine learning technologies, especially the development of deep learning techniques, has led to many problems being solved by these methods. Our survey focuses on deep learning techniques for defect prediction. We analyse recent works on the topic, study methods for automatically learning semantic and structural features from code, discuss the open problems, and present recent trends in the field.
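    The family of models this survey covers can be illustrated with a minimal sketch: code is tokenized, tokens are embedded, and a pooled representation feeds a defect classifier. The vocabulary size, dimensions, and random data below are toy assumptions standing in for a real code corpus and for the deeper architectures (CNNs, RNNs, transformers) used in practice.

        import torch
        import torch.nn as nn

        class TokenDefectModel(nn.Module):
            """Embed code tokens, average-pool them, and emit a defect logit."""
            def __init__(self, vocab_size=1000, dim=32):
                super().__init__()
                self.emb = nn.Embedding(vocab_size, dim)
                self.head = nn.Linear(dim, 1)

            def forward(self, token_ids):                  # (batch, seq_len)
                pooled = self.emb(token_ids).mean(dim=1)   # average token embeddings
                return self.head(pooled).squeeze(-1)       # defect logit per sample

        model = TokenDefectModel()
        tokens = torch.randint(0, 1000, (8, 50))   # 8 code snippets, 50 tokens each
        labels = torch.randint(0, 2, (8,)).float()
        loss = nn.BCEWithLogitsLoss()(model(tokens), labels)
        loss.backward()                            # gradients for one training step
        print(float(loss))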

    Identification of diseases based on the use of inertial sensors: a systematic review

    Inertial sensors are commonly embedded in several devices, including smartphones and other dedicated devices. These sensors may be used for different purposes, including the recognition of diseases, and accelerometer-based recognition in particular may enable treatments that rely on less invasive and less painful techniques for patients. This paper presents a systematic review of the studies available in the literature on the automatic recognition of diseases with accelerometer sensors. The disease most reliably detectable with accelerometer sensors, addressed in 54% of the analyzed studies, is Parkinson's disease; the machine learning methods implemented for its recognition reported an accuracy of 94%. Other diseases are recognized in a smaller number of studies and will be the subject of further analysis in the future.
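    One widely used hand-crafted feature in this line of work is accelerometer power in the Parkinsonian rest-tremor band, commonly reported around 4-6 Hz. The sketch below is an illustrative assumption of how such a feature could be computed, with synthetic signals; it is not any reviewed study's pipeline.

        import numpy as np

        def band_power(signal, fs, lo=4.0, hi=6.0):
            """Power of `signal` in the [lo, hi] Hz band via the FFT."""
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            spectrum = np.abs(np.fft.rfft(signal)) ** 2
            return spectrum[(freqs >= lo) & (freqs <= hi)].sum() / len(signal)

        fs = 50.0                                  # 50 Hz accelerometer sampling rate
        t = np.arange(0, 10, 1 / fs)
        tremor = 0.5 * np.sin(2 * np.pi * 5.0 * t) # synthetic 5 Hz tremor component
        noise = 0.1 * np.random.default_rng(0).normal(size=t.size)
        print(band_power(tremor + noise, fs) > band_power(noise, fs))   # True

    Such band-power features would then feed a classifier of the kind whose reported accuracy the review summarizes.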

    Android source code vulnerability detection: a systematic literature review

    The use of mobile devices is rising daily in this technological era. A continuous and increasing number of mobile applications are constantly offered on mobile marketplaces to fulfil the needs of smartphone users. Many Android applications do not address the security aspects appropriately. This is often due to a lack of automated mechanisms to identify, test, and fix source code vulnerabilities at the early stages of design and development. Therefore, the need to fix such issues at the initial stages rather than providing updates and patches to the published applications is widely recognized. Researchers have proposed several methods to improve the security of applications by detecting source code vulnerabilities and malicious codes. This Systematic Literature Review (SLR) focuses on Android application analysis and source code vulnerability detection methods and tools by critically evaluating 118 carefully selected technical studies published between 2016 and 2022. It highlights the advantages, disadvantages, applicability of the proposed techniques and potential improvements of those studies. Both Machine Learning (ML) based methods and conventional methods related to vulnerability detection are discussed while focusing more on ML-based methods since many recent studies conducted experiments with ML. Therefore, this paper aims to enable researchers to acquire in-depth knowledge in secure mobile application development while minimizing the vulnerabilities by applying ML methods. Furthermore, researchers can use the discussions and findings of this SLR to identify potential future research and development directions
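    To give a flavor of the ML-based detection methods the SLR surveys, the sketch below trains a bag-of-tokens classifier to flag vulnerable-looking source lines. The snippets and labels are toy assumptions; the studies reviewed train on labeled vulnerability corpora and far richer code representations.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        snippets = [
            'query = "SELECT * FROM users WHERE id=" + userInput',  # concatenated SQL
            'stmt.setString(1, userInput)',                         # parameterized query
            'Runtime.getRuntime().exec(userInput)',                 # command injection
            'log.info("request handled")',                          # benign logging
        ]
        labels = [1, 0, 1, 0]   # 1 = vulnerable pattern

        vec = CountVectorizer(token_pattern=r"[A-Za-z_]+")
        X = vec.fit_transform(snippets)
        clf = LogisticRegression().fit(X, labels)
        print(clf.predict(vec.transform(['os.exec(userInput)'])))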

    Requirements Traceability: Recovering and Visualizing Traceability Links Between Requirements and Source Code of Object-oriented Software Systems

    Requirements traceability is an important activity for achieving effective requirements management in requirements engineering. Requirement-to-Code Traceability Links (RtC-TLs) capture the relations between requirements and source code artifacts. RtC-TLs can help engineers know which parts of the code implement a specific requirement. In addition, these links help engineers maintain a correct mental model of the software and decrease the risk of code quality degradation when requirements change over time, especially in large and complex software. However, manually recovering and maintaining these links places an additional burden on engineers and is an error-prone, tedious, and costly task. This paper introduces YamenTrace, an automatic approach and implementation for recovering and visualizing RtC-TLs in object-oriented software based on Latent Semantic Indexing (LSI) and Formal Concept Analysis (FCA). The originality of YamenTrace is that it exploits all code identifier names, comments, and relations in the link recovery process. YamenTrace uses LSI to find textual similarity across software code and requirements, while FCA is employed to cluster similar code and requirements together. Furthermore, YamenTrace visualizes the recovered links. To validate YamenTrace, it was applied to three case studies. The findings of this evaluation demonstrate the relevance and performance of the YamenTrace proposal, as most RtC-TLs were correctly recovered and visualized.
    Comment: 17 pages, 14 figures
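    The LSI step of such an approach can be sketched briefly: requirements and code documents (identifier names plus comments) are projected into a low-dimensional latent space, and candidate links are ranked by cosine similarity. The sample documents and the dimensionality below are illustrative assumptions, not YamenTrace itself.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        requirements = ["the user shall log in with a password",
                        "the system shall export reports as pdf"]
        code_docs = ["class LoginManager verify password credentials user session",
                     "class ReportExporter render pdf export report layout"]

        tfidf = TfidfVectorizer().fit(requirements + code_docs)
        lsi = TruncatedSVD(n_components=2, random_state=0)        # the LSI projection
        space = lsi.fit_transform(tfidf.transform(requirements + code_docs))

        sims = cosine_similarity(space[:2], space[2:])            # requirements x code
        print(sims.round(2))   # high entries suggest candidate trace links

    On top of such a similarity matrix, FCA-style clustering would then group requirements and code units that share latent topics before the links are visualized.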