347 research outputs found
Best software test & quality assurance practices in the project life-cycle. An approach to the creation of a process for improved test & quality assurance practices in the project life-cycle of an SME
The cost of software problems or errors is significant for global industry, affecting not only the producers of the software but also their customers and end users.
There is a cost associated with the lack of software quality both for companies that purchase a software product and for the companies that produce it. The task of improving quality on a limited cost base is a difficult one.
The foundation of this thesis lies in the difficult task of evaluating software from its inception through its development to its testing and subsequent release. The focus of this thesis is on the improvement of the testing & quality assurance task in an Irish SME with software quality problems but a limited budget.
Testing practices and quality assurance methods used during the software quality improvement process in the company are outlined in the thesis. Projects conducted in the company are used as the research material. Following the quality improvement process, a framework for improving software quality was produced and subsequently used and evaluated in another company.
Personalised and Adjustable Interval Type-2 Fuzzy-Based PPG Quality Assessment for the Edge
Most of today's wearable technology provides seamless cardiac activity monitoring. Specifically, the vast majority employ Photoplethysmography (PPG) sensors to acquire blood volume pulse information, which is further analysed to extract useful, physiologically related features. Nevertheless, PPG signal reliability presents different challenges that strongly affect such data processing, stemming mainly from PPG morphological wave distortion due to motion artefacts, which can lead to erroneous interpretation of the extracted cardiac-related features. On this basis, in this paper we propose a novel personalised and adjustable Interval Type-2 Fuzzy Logic System (IT2FLS) for assessing the quality of PPG signals. The proposed system employs a personalised approach to adapt the IT2FLS parameters to the unique characteristics of each individual's PPG signals. Additionally, the system provides adjustable levels of personalisation, allowing healthcare providers to tune the system to meet specific requirements for different applications. The proposed system obtained up to 93.72% average accuracy during validation. The presented system has the potential to enable ultra-low-complexity, real-time PPG quality assessment, improving the accuracy and reliability of PPG-based health monitoring systems at the edge.
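The interval type-2 idea can be sketched in a few lines. All membership parameters, the two rules, and the `fou` (footprint-of-uncertainty) personalisation knob below are illustrative assumptions, not the paper's actual IT2FLS design:

```python
# Minimal interval type-2 fuzzy sketch for PPG quality scoring.
# Inputs (SNR in dB, a normalised motion level) and all parameters
# are hypothetical stand-ins for the paper's real feature set.

def tri(x, a, b, c):
    """Triangular type-1 membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2(x, a, b, c, fou):
    """Interval membership: [lower, upper] bounds obtained by
    shrinking/growing the type-1 value by the footprint of uncertainty."""
    mu = tri(x, a, b, c)
    return (1.0 - fou) * mu, min(1.0, (1.0 + fou) * mu)

def ppg_quality(snr_db, motion, fou=0.2):
    """Two toy rules with min t-norm firing:
       R1: IF snr high AND motion low  THEN quality = 1
       R2: IF snr low  AND motion high THEN quality = 0
    Output is the midpoint of the interval weighted average
    (a deliberately simple type reduction)."""
    snr_hi = it2(snr_db, 5, 20, 35, fou)
    snr_lo = it2(snr_db, -10, 0, 10, fou)
    mot_lo = it2(motion, -0.5, 0.0, 0.5, fou)
    mot_hi = it2(motion, 0.3, 1.0, 1.7, fou)
    f1 = (min(snr_hi[0], mot_lo[0]), min(snr_hi[1], mot_lo[1]))
    f2 = (min(snr_lo[0], mot_hi[0]), min(snr_lo[1], mot_hi[1]))

    def wavg(w1, w2):  # rule consequents: R1 -> 1.0, R2 -> 0.0
        return 0.5 if w1 + w2 == 0 else w1 / (w1 + w2)

    return 0.5 * (wavg(f1[0], f2[0]) + wavg(f1[1], f2[1]))

print(ppg_quality(25, 0.1))   # clean signal: high quality score
print(ppg_quality(0, 1.2))    # motion-corrupted: low quality score
```

Widening `fou` per user is one simple way to expose an adjustable level of personalisation: a larger footprint of uncertainty makes the system more tolerant of individual signal variability.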
Disentangling Extraction and Reasoning in Multi-hop Spatial Reasoning
Spatial reasoning over text is challenging, as models must not only extract direct spatial information from the text but also reason over it to infer implicit spatial relations. Recent studies highlight the struggles even large language models encounter when performing spatial reasoning over text. In this paper, we explore the potential benefits of disentangling the processes of information extraction and reasoning to address this challenge. To do so, we design various models that disentangle extraction and reasoning (either symbolic or neural) and compare them with state-of-the-art (SOTA) baselines that have no explicit design for these parts. Our experimental results consistently demonstrate the efficacy of disentangling, showcasing its ability to enhance models' generalizability within realistic data domains.
Comment: Accepted in EMNLP-Findings 202
A Study of Quality Assurance and Testing in Software Development Life Cycle
The objective of this document is to specify a Software Quality Life Cycle (SQLC) to be used in the development of high-quality software. The goal is to create a streamlined, usable process that supports the SQLC so that activities related to software quality can be integrated into existing software development processes. In addition, it is important that these processes: do not inhibit the flow of work; do not inhibit the creativity of the people; do not fail immediately because of the time or resources required; and do not fail in the long run because the process or life cycle is unsupportable or inflexible.
This document will: outline the Software Quality Life Cycle (SQLC) and the steps in that life cycle; focus on a framework and guidelines, not step-by-step instructions; and define software quality and testing terms that may be unfamiliar or used inconsistently.
Benefits for the users of this document: a repeatable process, so users don't have to reinvent the wheel; a shorter learning curve for those new to software quality assurance; better communication and less confusion through the use of consistent terminology; and a higher degree of accuracy for project estimates.
SCOPE OF THIS DOCUMENT
Software Quality Life Cycle testing involves continuous testing of the system during the development process. At predetermined points, the results of the development process are inspected to determine the correctness of the implementation. These inspections identify defects at the earliest possible point. This document explains the Software Quality Life Cycle (SQLC) process and how it relates to the Software Development Life Cycle (SDLC).
This document will: encompass the full life cycle of quality assurance and testing of an application; include the main levels of testing (unit, system, user acceptance, and installation); provide an overview of all types of testing; focus on web and client/server applications; and address maintenance testing.
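As an illustration of the earliest of those inspection points, the unit level, a minimal test might look like the following. The function under test is a hypothetical example, not part of the SQLC document itself:

```python
# Unit-level testing: inspect one function in isolation, catching
# defects at the earliest possible point in the SQLC.
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid inputs early."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["sqlc-demo"], exit=False)
```

The same structure repeats at each later level (system, user acceptance, installation), only with a progressively larger unit under inspection.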
Fear Classification using Affective Computing with Physiological Information and Smart-Wearables
International Mention in the doctoral degree.
Among the 17 Sustainable Development Goals proposed within the 2030 Agenda and adopted by all United Nations member states, the fifth SDG is a call for action to effectively turn gender equality into a fundamental human right and an essential foundation for a better world. It includes the eradication of all types of violence against women. From a technological perspective, the range of available solutions intended to prevent this social problem is very limited. Moreover, most of these solutions are based on a panic-button approach, leaving aside the usage and integration of current state-of-the-art technologies, such as the Internet of Things (IoT), affective computing, cyber-physical systems, and smart sensors. Thus, the main purpose of this research is to provide new insight into the design and development of tools to prevent and combat Gender-based Violence risky situations, and even aggressions, from a technological perspective, without leaving aside the different sociological considerations directly related to the problem. To achieve this objective, we rely on the application of affective computing from a realistic point of view, i.e. targeting the generation of systems and tools capable of being implemented and used today or within an achievable time-frame. This pragmatic vision is channelled through: 1) an exhaustive study of the existing technological tools and mechanisms oriented to the fight against Gender-based Violence; 2) the proposal of a new smart-wearable system intended to deal with some of the currently encountered technological limitations; 3) a novel fear-related emotion classification approach to disentangle the relation between emotions and physiology; and 4) the definition and release of a new multi-modal dataset for emotion recognition in women.
Firstly, different fear classification systems using a reduced set of physiological signals are explored and designed. This is done by employing open datasets together with a combination of time-domain, frequency-domain, and non-linear techniques. The design process is shaped by trade-offs between physiological considerations and embedded capabilities, the latter being of paramount importance due to the edge-computing focus of this research. Two results are highlighted in this first task: a fear classification system that, using the DEAP dataset and only two physiological signals, achieved an average AUC of 81.60% and a Gmean of 81.55% in a subject-independent approach; and a fear classification system that, using the MAHNOB dataset and only three physiological signals in a Leave-One-Subject-Out configuration, achieved an average AUC of 86.00% and a Gmean of 73.78% in a subject-independent approach. A detailed comparison with other emotion recognition systems proposed in the literature is presented, which shows that the obtained metrics are in line with the state-of-the-art.
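The Gmean figures above refer to the geometric mean of sensitivity and specificity, a balance-aware metric for imbalanced fear/no-fear classification. A minimal sketch, using hypothetical confusion-matrix counts rather than the thesis's results:

```python
# Geometric mean of sensitivity and specificity: unlike accuracy,
# it collapses to 0 if either class is entirely misclassified.
from math import sqrt

def gmean(tp, fn, tn, fp):
    """Gmean from confusion-matrix counts (positives = fear class)."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sqrt(sensitivity * specificity)

# Illustrative counts only: 50 fear windows, 100 no-fear windows.
print(round(gmean(tp=45, fn=5, tn=80, fp=20) * 100, 2))  # 84.85
```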
Secondly, Bindi is presented. This is an end-to-end autonomous multimodal system leveraging affective IoT through auditory and physiological commercial off-the-shelf smart sensors, hierarchical multisensorial fusion, and a secured server architecture to combat Gender-based Violence by automatically detecting risky situations with a multimodal intelligence engine and then triggering a protection protocol. Specifically, this research focuses on the hardware and software design of one of the two edge-computing devices within Bindi: a bracelet integrating three physiological sensors, actuators, power-monitoring integrated circuits, and a System-on-Chip with wireless capabilities. Within this context, different embedded design space explorations are presented: embedded filtering evaluation, online physiological signal quality assessment, feature extraction, and power consumption analysis. The reported results in all these processes are successfully validated and, for some of them, compared against standard physiological measurement equipment. Amongst the different results regarding the embedded design and implementation of the Bindi bracelet, it should be highlighted that its low power consumption yields a battery life of approximately 40 hours when using a 500 mAh battery.
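The reported figures imply an average current draw of roughly 12.5 mA; a back-of-the-envelope check, using an ideal battery model that ignores converter losses and capacity derating:

```python
# Ideal battery-life estimate: capacity (mAh) / average current (mA).
# Simplified model; real devices lose some capacity to regulator
# efficiency, temperature, and battery ageing.

def battery_life_hours(capacity_mah, avg_current_ma):
    return capacity_mah / avg_current_ma

print(500 / 40)                        # implied average draw: 12.5 mA
print(battery_life_hours(500, 12.5))   # 40.0 hours
```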
Finally, the particularities of our use case and the scarcity of open multimodal datasets that deal with emotional immersive technology, a labelling methodology considering the gender perspective, a balanced stimuli distribution regarding the target emotions, and recovery processes based on the volunteers' physiological signals to quantify and isolate the emotional activation between stimuli, led us to the definition and elaboration of the Women and Emotion Multi-modal Affective Computing (WEMAC) dataset. This is a multimodal dataset in which 104 women who had never experienced Gender-based Violence performed different emotion-related stimuli visualisations in a laboratory environment. The previous binary fear classification systems were improved and applied to this novel multimodal dataset. For instance, the proposed multimodal fear recognition system using this dataset reports up to 60.20% accuracy and 67.59% F1-score. These values represent a competitive result in comparison with state-of-the-art work dealing with similar multi-modal use cases.
In general, this PhD thesis has opened a new research line within the research group in which it was developed. Moreover, this work has established a solid base from which to expand knowledge and continue research targeting the generation of both mechanisms to help vulnerable groups and socially oriented technology.
Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. President: David Atienza Alonso. Secretary: Susana Patón Álvarez. Member: Eduardo de la Torre Arnan
motilitAI: a machine learning framework for automatic prediction of human sperm motility
In this article, human semen samples from the Visem dataset are automatically assessed with machine learning methods for their quality with respect to sperm motility. Several regression models are trained to automatically predict the percentage (0–100) of progressive, non-progressive, and immotile spermatozoa. The videos are used for unsupervised tracking, followed by two different feature extraction methods: custom movement statistics and displacement features. We train multiple neural networks and support vector regression models on the extracted features. The best results are achieved using a linear Support Vector Regressor with an aggregated and quantized representation of the individual displacement features of each sperm cell. Compared to the best submission of the Medico Multimedia for Medicine challenge, which used the same dataset and splits, the mean absolute error (MAE) was reduced from 8.83 to 7.31. We provide the source code for our experiments on GitHub (code available at: https://github.com/EIHW/motilitAI).
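One common way to build such an aggregated, quantized representation is a fixed-length histogram over per-cell displacement magnitudes, so each video yields one vector regardless of how many cells were tracked. The bin edges and input values below are illustrative, not taken from the Visem data or the motilitAI code:

```python
# Bag-of-displacements sketch: bin per-cell displacement magnitudes
# into a normalised histogram, producing a fixed-length feature
# vector per video that a regressor (e.g. a linear SVR) could consume.
import bisect

BIN_EDGES = [0.5, 1.0, 2.0, 4.0, 8.0]  # pixels-per-frame thresholds

def quantise(displacements):
    """Normalised histogram of displacement magnitudes."""
    hist = [0] * (len(BIN_EDGES) + 1)
    if not displacements:
        return hist
    for d in displacements:
        hist[bisect.bisect_right(BIN_EDGES, d)] += 1
    total = len(displacements)
    return [count / total for count in hist]

# One tracked video: each number is one cell's mean displacement.
video_features = quantise([0.2, 0.7, 0.9, 3.5, 6.0, 9.1, 0.1, 1.5])
print(video_features)
```

Because the histogram is normalised, videos with different numbers of tracked cells become directly comparable, which is what makes a single regressor applicable across samples.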
A methodology for integrating legacy systems with the client/server environment
The research is conducted in the area of software methodologies, with emphasis on the integration of legacy systems with the client/server environment. The investigation starts by identifying the characteristics of legacy systems in order to determine the features and technical characteristics required of an integration methodology. A number of existing methodologies are evaluated with respect to their features and technical characteristics in order to derive a synthesis for a generic methodology. This evaluation yields the meta primitives of a generic methodology. The revised spiral model (Boehm, 1986; DuPlessis & Vander Wah, 1992) is customised to arrive at a software process model which provides a framework for the integration of legacy systems with the client/server environment. The integration methodology is based on this process model.
Computing, M.Sc. (Information Systems)
Efficient And Flexible Continuous Integration Infrastructure to Improve Software Development Process
Continuous Integration (CI) is a popular software-engineering methodology for collaboration between programmers. The key function of CI is to run build and test tasks automatically when a programmer wants to share his or her code or implement a feature. The primary objectives of CI are to prevent growing integration problems and to provide feedback with useful information for resolving these issues easily and quickly. Despite extensive academic research and popular services in the industry, such as TravisCI, CircleCI or Jenkins, there are practical limitations, which result from limited available resources, including budget and low computing power. Moreover, the diversity of modern computing environments, such as different operating systems, libraries, disk sizes, memory, and network speeds, increases both the costs of CI and the difficulty of finding bugs automatically in every case. This study proposes supplemental external and internal methods to overcome these obstacles. First, our approach enables programmers to configure different execution environments, such as memory and network bandwidth, during CI services. Then, we introduce an enhanced CI infrastructure that can efficiently schedule CI services based on resource-profiling techniques and our time-based scheduling algorithm, thereby reducing the overall CI time. Our case studies show that the proposed approaches can report resource usage information after completing a CI service as well as improve the performance of an existing CI infrastructure.
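One way to realise time-based scheduling over profiled job durations is the classic longest-processing-time-first heuristic: sort jobs by profiled time and always assign the next job to the least-loaded worker. This is a generic sketch of the idea, not the paper's actual algorithm, and the job names and durations are hypothetical:

```python
# LPT-style scheduling of CI jobs across workers using profiled
# durations, minimising (approximately) the overall wall-clock time.
import heapq

def schedule(jobs, n_workers):
    """jobs: {name: profiled_duration_s}. Returns (makespan, plan)."""
    heap = [(0.0, w) for w in range(n_workers)]   # (load, worker id)
    heapq.heapify(heap)
    plan = {w: [] for w in range(n_workers)}
    for name, dur in sorted(jobs.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)      # least-loaded worker
        plan[w].append(name)
        heapq.heappush(heap, (load + dur, w))
    makespan = max(load for load, _ in heap)
    return makespan, plan

jobs = {"build": 300, "unit": 120, "lint": 30, "e2e": 600, "docs": 45}
makespan, plan = schedule(jobs, 2)
print(makespan)  # 600: wall-clock seconds with two workers
```

With two workers the 1095 s of sequential work finishes in 600 s here, bounded by the single longest job; profiling is what supplies the durations the heuristic needs.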