    Can Clustering Improve Requirements Traceability? A Tracelab-enabled Study

    Software permeates every aspect of our modern lives. In many applications, such as the software for airplane flight controls or nuclear power control systems, software failures can have catastrophic consequences. As we place so much trust in software, how can we know whether it is trustworthy? Through software assurance, we can attempt to quantify just that. Building complex, high-assurance software is no simple task. The difficult information landscape of a software engineering project can make verification and validation, the process by which the assurance of software is assessed, very difficult. To manage the inevitable information overload of complex software projects, we need software traceability: the ability to describe and follow the life of a requirement in both the forward and backward directions. The Center of Excellence for Software Traceability (CoEST) has created a compelling research agenda with the goal of ubiquitous traceability by 2035. As part of this goal, it has developed TraceLab, a visual experimental workbench built to support the design, implementation, and execution of traceability experiments. Through our collaboration with CoEST, we have made several contributions to TraceLab and its community in support of the traceability research agenda. The three key contributions are (a) a machine learning component package for TraceLab featuring six classifier algorithms, five clustering algorithms, and over 40 components in total for creating TraceLab experiments, built upon the WEKA machine learning package and supplemented with methods implemented outside of WEKA; (b) the design of an automated tracing system that uses clustering to decompose the task of tracing into many smaller tracing subproblems; and (c) an implementation of several key components of this tracing system using TraceLab, together with its experimental evaluation.
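
    As a hedged illustration of contribution (b), the following minimal Python sketch clusters the target artifacts first and then traces each source artifact only against its best-matching cluster; the artifact texts, cluster count, and the TF-IDF/KMeans choices are illustrative assumptions, not the dissertation's actual components.

        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical target artifacts to be decomposed into subproblems.
        targets = [
            "user login authentication password check",
            "session token authentication timeout",
            "report generation pdf export",
            "report chart rendering export",
        ]
        sources = ["The system shall authenticate users by password."]

        vec = TfidfVectorizer(stop_words="english")
        T = vec.fit_transform(targets)

        # Step 1: cluster the target space into k smaller tracing subproblems
        # (k=2 and KMeans are assumptions; the package offers several algorithms).
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(T)

        # Step 2: trace each source artifact only within its closest cluster.
        for s_text, s_row in zip(sources, vec.transform(sources)):
            sims = cosine_similarity(s_row, T).ravel()
            members = [i for i in range(len(targets))
                       if labels[i] == labels[sims.argmax()]]
            for i in sorted(members, key=lambda i: -sims[i]):
                print(f"{s_text!r} -> {targets[i]!r}: {sims[i]:.2f}")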

    Snapshot hyperspectral imaging: near-infrared image replicating imaging spectrometer and achromatisation of Wollaston prisms

    Conventional hyperspectral imaging (HSI) techniques are time-sequential and rely on temporal scanning to capture hyperspectral images. This temporal constraint can limit the application of HSI to static scenes and platforms, where transient and dynamic events are not expected during data capture. The Near-Infrared Image Replicating Imaging Spectrometer (N-IRIS) sensor described in this thesis enables snapshot HSI in the short-wave infrared (SWIR) without the requirement for scanning, and it operates in polarised light without rejection. It operates in eight wavebands from 1.1 μm to 1.7 μm with a 2.0° diagonal field-of-view. N-IRIS produces spectral images directly, without the need for prior tomographic or image reconstruction. Additional benefits include compactness, robustness, static operation, lower processing overheads, and, with respect to other snapshot HSI sensors generally, a higher signal-to-noise ratio and higher optical throughput. This thesis covers the IRIS design process from theoretical concepts to quantitative modelling, culminating in the N-IRIS prototype designed for SWIR imaging. This effort formed the logical next step in advancing beyond peer efforts, which focussed upon the visible wavelengths. After acceptance testing to verify optical parameters, empirical laboratory trials were carried out. This testing focussed on discriminating between common materials within a controlled environment as a proof of concept. Significance tests provided an initial assessment of the capability of N-IRIS to distinguish materials relative to a conventional SWIR broadband sensor. Motivated by the design and assembly of a cost-effective visible IRIS, an innovative solution was developed for the problem of chromatic variation in the splitting angle (CVSA) of Wollaston prisms, which introduces spectral blurring of images. Analytical theory is presented and illustrated with an example N-IRIS application in which a sixfold reduction in dispersion is achieved for wavelengths in the region 400 nm to 1.7 μm, although the principle is applicable from ultraviolet to thermal-IR wavelengths. Experimental proof of concept is demonstrated, and the spectral smearing of an achromatised N-IRIS is shown to be reduced by an order of magnitude. These achromatised prisms can provide benefits in areas beyond hyperspectral imaging, such as microscopy, laser pulse control, and spectrometry.
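
    To make the CVSA problem concrete, a standard small-angle description (an illustrative sketch, not the thesis's full analysis) writes the split angle of a Wollaston prism with wedge angle $\theta_W$ as

        \[
          \varepsilon(\lambda) \approx 2\,\Delta n(\lambda)\,\tan\theta_W,
          \qquad \Delta n(\lambda) = n_e(\lambda) - n_o(\lambda),
        \]

    so dispersion of the birefringence $\Delta n(\lambda)$ makes the split angle, and hence the image replica positions, wavelength dependent. Achromatisation can then be understood as cascading prisms of two different birefringent materials with wedge angles chosen so that

        \[
          \frac{d}{d\lambda}\left[\Delta n_1(\lambda)\tan\theta_1
            - \Delta n_2(\lambda)\tan\theta_2\right] = 0
        \]

    at the design wavelength, cancelling the first-order variation while preserving a useful net split.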

    Toward an Effective Automated Tracing Process

    Traceability is defined as the ability to establish, record, and maintain dependency relations among the various software artifacts in a software system, in both the forward and backward directions, throughout the multiple phases of the project’s life cycle. The availability of traceability information has proven vital to several software engineering activities, such as program comprehension, impact analysis, feature location, software reuse, and verification and validation (V&V). Research on automated software traceability has advanced noticeably in the past few years. Various methodologies and tools have been proposed in the literature to provide automatic support for establishing and maintaining traceability information in software systems. This movement is motivated by the increasing attention traceability has been receiving as a critical element of any rigorous software development process. However, despite these major advances, traceability implementation and use are still not pervasive in industry. In particular, traceability tools are still far from achieving performance levels that are adequate for practical applications. Such low levels of accuracy require software engineers working with traceability tools to spend a considerable amount of their time verifying the generated traceability information, a process often described as tedious, exhausting, and error-prone. Motivated by these observations, and building upon a growing body of work in this area, in this dissertation we explore several research directions related to enhancing the performance of automated tracing tools and techniques. In particular, our work addresses several issues related to the various aspects of the IR-based automated tracing process, including trace link retrieval, performance enhancement, and the role of the human in the process. Our main objective is to achieve performance levels, in terms of accuracy, efficiency, and usability, that are adequate for practical applications, and ultimately to accomplish a successful technology transfer from research to industry.
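
    As a hedged sketch of the IR-based trace link retrieval step discussed above, the snippet below ranks candidate links with a TF-IDF vector space model and cosine similarity; the artifacts and the threshold are invented for illustration, and real tools substitute richer models plus human vetting.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical artifacts (not from the dissertation).
        requirements = {
            "R1": "The system shall encrypt user credentials before storage.",
            "R2": "The system shall log every failed login attempt.",
        }
        code_docs = {
            "CryptoUtil.java": "encrypt credentials AES cipher storage",
            "AuthLogger.java": "log failed login attempt audit",
        }

        # Build one TF-IDF space over both artifact sets.
        corpus = list(requirements.values()) + list(code_docs.values())
        matrix = TfidfVectorizer(stop_words="english").fit_transform(corpus)

        n_req = len(requirements)
        sims = cosine_similarity(matrix[:n_req], matrix[n_req:])

        THRESHOLD = 0.1  # assumed cut-off; tuned per project in practice
        for i, req in enumerate(requirements):
            for j, code in enumerate(code_docs):
                if sims[i, j] >= THRESHOLD:
                    print(f"{req} -> {code}: {sims[i, j]:.2f}")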

    Ventral hippocampal circuits for the state-dependent control of feeding behaviour

    The hippocampus is classically thought to support spatial cognition and episodic memory, but increasing evidence indicates that it is also important for non-spatial, motivated behaviour. Hunger is an internal motivational state that not only directly invigorates behaviour towards food but can also act as a contextual signal to support adaptive behaviour. Lesions to the hippocampus impair the internal sensing of hunger as a context, and hippocampal neurons express receptors for hunger-related hormones. However, it remains unclear whether the hippocampus is involved in sensing hunger and, if so, how hunger state sensing modulates hippocampal activity at the circuit and cellular levels to alter behaviour. Using in vivo Ca2+ imaging during naturalistic and operant-based feeding behaviour, pharmacogenetics, anatomical tracing, whole-cell electrophysiology, and molecular knockdown approaches, in this PhD I probed the functional role of ventral subiculum (vS) circuitry in hunger state sensing during feeding behaviour. The results implicate the vS in encoding the anticipation of food consumption. This encoding is both specific to vS projections to the nucleus accumbens (vSNAc) and dependent on the hunger state; hunger inhibits the activity of vSNAc neurons, and this inhibition relies on ghrelin receptor signalling in vSNAc neurons. Furthermore, altering the activity of vSNAc neurons shifts the probability of transitioning from food exploration to consumption. Finally, there is distinct input connectivity to individual vS projections, providing a potential neural basis for the heterogeneous functions of projection-specific vS neurons. Overall, this PhD extends the understanding of hippocampal function to encompass a non-spatial domain, the sensing of the hunger state, and clarifies the cellular- and circuit-level mechanisms involved in hunger state sensing. This work presents evidence for a neural mechanism by which hunger can act as a contextual signal and alter behaviour through defined output projections from the ventral hippocampus.

    CHARACTERIZATION OF ENGINEERED SURFACES

    In recent years there has been increasing interest in manufacturing products whose surface topography plays a functional role. These surfaces are called engineered surfaces and are used in a variety of industries, such as semiconductors, data storage, micro-optics, and MEMS. Engineered products are designed, manufactured, and inspected to meet a variety of specifications, such as size, position, geometry, and surface finish, to control the physical, chemical, optical, and electrical properties of the surface. As the manufacturing industry strives towards shrinking form factors, resulting in the miniaturization of surface features, the measurement of such micrometer- and nanometer-scale surfaces is becoming more challenging. Great strides have been made in instrumentation for capturing surface data, but the algorithms and procedures for determining the form, size, and orientation of surface features still lack the advancement needed to support the characterization requirements of R&D and high-volume manufacturing. This dissertation addresses the development of fast and intelligent surface scanning algorithms and methodologies for engineered surfaces to determine the form, size, and orientation of significant surface features. Object recognition techniques are used to identify the surface features, and CMM-type fitting algorithms are applied to calculate the dimensions of the features. Recipes can be created to automate the characterization and to process multiple features simultaneously. The developed methodologies are integrated into a surface analysis toolbox developed in the MATLAB environment, and the deployment of the application on the web is demonstrated.
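
    As a small, hedged example of the CMM-type fitting mentioned above, the sketch below fits a circle to measured feature points with the algebraic (Kasa) least-squares method; the points are invented, and production code would add outlier handling and uncertainty estimates.

        import numpy as np

        # Hypothetical points sampled on a circular surface feature.
        pts = np.array([[10.1, 0.2], [7.2, 7.0], [0.1, 10.0],
                        [-7.0, 7.1], [-10.2, -0.1], [0.0, -9.9]])

        # Kasa fit: x^2 + y^2 = a*x + b*y + c is linear in (a, b, c),
        # with centre = (a/2, b/2) and r = sqrt(c + cx^2 + cy^2).
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        cx, cy = a / 2, b / 2
        r = np.sqrt(c + cx**2 + cy**2)
        print(f"centre = ({cx:.3f}, {cy:.3f}), radius = {r:.3f}")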

    Recovering Trace Links Between Software Documentation And Code

    Introduction: Software development involves creating various artifacts at different levels of abstraction, and establishing relationships between them is essential. Traceability link recovery (TLR) automates this process, enhancing software quality by aiding tasks like maintenance and evolution. However, automating TLR is challenging due to semantic gaps resulting from the different levels of abstraction. While automated TLR approaches exist for requirements and code, architecture documentation lacks tailored solutions, hindering the preservation of architecture knowledge and design decisions. Methods: This paper presents our approach TransArC for TLR between architecture documentation and code, using component-based architecture models as intermediate artifacts to bridge the semantic gap. We create transitive trace links by combining the existing approach ArDoCo for linking architecture documentation to models with our novel approach ArCoTL for linking architecture models to code. Results: We evaluate our approaches on five open-source projects, comparing our results to baseline approaches. The model-to-code TLR approach achieves an average F1-score of 0.98, while the documentation-to-code TLR approach achieves a promising average F1-score of 0.82, significantly outperforming the baselines. Conclusion: Combining two specialized approaches via an intermediate artifact shows promise for bridging the semantic gap. In future research, we will explore further possibilities for such transitive approaches.
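
    The transitive step in TransArC can be pictured with a minimal sketch (link sets invented for illustration): documentation-to-model links from ArDoCo and model-to-code links from ArCoTL are joined on the shared model element.

        # Hypothetical trace links; the join mirrors the transitive composition.
        doc_to_model = {("Sec. 3.1", "LoadBalancer"), ("Sec. 3.2", "Cache")}
        model_to_code = {("LoadBalancer", "lb/Router.java"),
                         ("Cache", "cache/Store.java"),
                         ("Cache", "cache/Evictor.java")}

        doc_to_code = {(doc, code)
                       for doc, model in doc_to_model
                       for model_, code in model_to_code
                       if model == model_}
        print(sorted(doc_to_code))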

    Spatial embedding and wiring cost constrain the functional layout of the cortical network of rodents and primates

    Mammals show a wide range of brain sizes, reflecting adaptation to diverse habitats. Comparing interareal cortical networks across brains of different sizes and mammalian orders provides robust information on evolutionarily preserved features and species-specific processing modalities. However, these networks are spatially embedded, directed, and weighted, making comparisons challenging. Using tract-tracing data from macaque and mouse, we show the existence of a general organizational principle based on an exponential distance rule (EDR) and cortical geometry, enabling network comparisons within the same model framework. These comparisons reveal the existence of network invariants between mouse and macaque, exemplified in graph motif profiles and connection similarity indices, but also significant differences, such as fractionally fewer and much weaker long-distance connections in the macaque than in the mouse. The latter lends credence to the prediction that long-distance cortico-cortical connections could be very weak in the much-expanded human cortex, implying an increased susceptibility to disconnection syndromes such as Alzheimer disease and schizophrenia. Finally, our data from tracer experiments involving only gray matter connections in the primary visual areas of both species show that an EDR holds at local scales as well (within 1.5 mm), supporting the hypothesis that it is a universally valid property across all scales and, possibly, across the mammalian class.
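
    In its usual form (stated here as background, with symbols as commonly defined rather than quoted from the paper), the EDR says that the weight or probability of a connection decays exponentially with the wiring distance $d$ between areas:

        \[
          p(d) = c\, e^{-\lambda d},
        \]

    where $\lambda$ is a species-specific decay constant and $c$ a normalisation. Under this rule, a cortex that expands while $\lambda$ stays of the same order in absolute units will have proportionally weaker long-distance connections, which is the intuition behind the prediction for the human cortex above.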

    Towards an Intelligent System for Software Traceability Datasets Generation

    Software datasets and artifacts play a crucial role in advancing automated software traceability research. They can be used by researchers in different ways to develop or validate new automated approaches. Software artifacts, other than source code and issue tracking entities, can also provide a great deal of insight into a software system and facilitate knowledge sharing and information reuse. The diversity and quality of the datasets and artifacts within a research community have a significant impact on the accuracy, generalizability, and reproducibility of results, and consequently on the usefulness and practicality of the techniques under study. Collecting such datasets and assessing their quality are not trivial tasks and have been reported as an obstacle by many researchers in the domain of software engineering. In this dissertation, we report our empirical work aimed at automatically generating such datasets and assessing their quality. Our goal is to introduce an intelligent system that can help researchers in the domain of software traceability obtain high-quality “training sets”, “testing sets”, or appropriate “case studies” from open-source repositories, based on their needs. First, we present a first-of-its-kind study to review and assess the datasets that have been used in software traceability research over the last fifteen years, articulating the current status of these datasets, their characteristics, and their threats to validity. Second, this dissertation introduces a Traceability-Dataset Quality Assessment (T-DQA) framework to categorize software traceability datasets and assist researchers in selecting appropriate datasets for their research, based on different characteristics of the datasets and the context in which those datasets will be used. Third, we present the results of an empirical study, limited in scope, that generates datasets using three baseline approaches for the creation of training data: (i) Expert-Based; (ii) Automated Web-Mining, which generates training sets by automatically mining tactics’ APIs from technical programming websites; and (iii) Automated Big-Data Analysis, which mines ultra-large-scale code repositories to generate training sets. We compare the trace-link creation accuracy achieved using each of these three baseline approaches and discuss their associated costs and benefits. Additionally, in a separate study, we investigate the impact of training set size on the accuracy of recovering trace links. Fourth, we conduct a large-scale study to identify which types of software artifacts are produced by a wide variety of open-source projects at different levels of granularity, and we propose an automated approach based on machine learning techniques to identify the various types of software artifacts. Through a set of experiments, we report and compare the performance of these algorithms when applied to software artifacts. Finally, we conduct a study to understand how software traceability experts and practitioners evaluate the quality of their datasets, gathering experts’ opinions on all quality attributes and metrics proposed by T-DQA.
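
    As a hedged illustration of the artifact-type identification step, the sketch below trains a simple text classifier over labelled artifact snippets; the snippets, labels, and model choice are invented stand-ins for the mined training sets and the algorithms compared in the dissertation.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical labelled artifacts from open-source projects.
        texts = [
            "As a user I want to reset my password",
            "Fix NullPointerException in the session handler",
            "public void encrypt(byte[] data) { ... }",
            "Run mvn install to build the project",
        ]
        labels = ["requirement", "issue", "code", "documentation"]

        clf = make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
            LogisticRegression(max_iter=1000),
        )
        clf.fit(texts, labels)
        print(clf.predict(["Update the README with build instructions"]))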