Software engineering: Testing real-time embedded systems using timed automata based approaches
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Real-time Embedded Systems (RTESs) play an increasing role in controlling the societal infrastructures we use on a day-to-day basis. RTES behaviour depends not only on the interactions the system has with its surrounding environment, but also on the timing requirements imposed on it. As a result, ensuring that an RTES behaves correctly is non-trivial, especially since time adds a new dimension to the complexity of the testing process. This research addresses the problem of testing RTESs from Timed Automata (TA) specifications as follows. First, a new Priority-based Approach (PA) is introduced for testing RTESs modelled formally as UPPAAL timed automata (a TA variant). Test cases generated according to a proposed timed adequacy criterion (clock region coverage) are divided into three priority sets, namely boundary, out-boundary and in-boundary. The tester can decide which set is most appropriate for a given System Under Test (SUT) according to the system type and the time and budget allocated to the testing process. Second, PA is validated against four well-known TA-based timed testing approaches using Specification Mutation Analysis (SMA). To enable this validation, a set of timed and functional mutation operators based on TA is introduced, and SMA is run on three case studies. The effectiveness of the timed testing approaches is determined and contrasted using the mutation score, which shows that PA achieves a higher mutation adequacy score than the others. Third, to enhance the applicability of PA, a new testing tool (GeTeX) that deploys PA is introduced. In its current version, GeTeX supports Controller Area Network (CAN) applications and is validated against a prototype developed for that purpose.
Using GeTeX, PA is also empirically validated against several TA testing approaches on a complete industrial-strength test bed. The assessment is based on fault coverage, structural coverage, the length of the generated test cases and a proposed assessment factor, and its results confirm the superiority of PA over the other test approaches: the overall assessment factor shows that the structural and fault coverage scores of PA, relative to the length of its tests, are better than those of the others, demonstrating the applicability of PA. Finally, an Analytic Hierarchy Process (AHP) decision-making framework for PA is developed. The framework provides testers with a systematic approach by which they can prioritise the available PA test sets that best fulfil their testing requirements. It is based on data collected heuristically from the test bed and on data collected by interviewing testing experts, and is validated using two testing scenarios. The decision outcomes of the AHP framework correlate significantly with those of the testing experts, demonstrating the soundness and validity of the framework. This study is funded by Damascus University, Syria.
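The mutation-score comparison described above follows the standard definition from mutation analysis: the fraction of non-equivalent mutants a test set kills. A minimal sketch of that computation (the function name and the numbers are illustrative, not taken from the thesis):

```python
def mutation_score(killed, total, equivalent=0):
    """Fraction of non-equivalent specification mutants killed by a test set."""
    return killed / (total - equivalent)

# Suppose an approach kills 45 of 50 generated timed mutants,
# 2 of which turn out to be equivalent to the original specification:
score = mutation_score(killed=45, total=50, equivalent=2)
print(round(score, 4))  # 45 / 48 = 0.9375
```

A higher score means the test set distinguishes more faulty variants of the specification, which is the basis on which the approaches are contrasted.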
Evaluating performance for procurement: A structured method for assessing the usability of future speech interfaces
Procurement is a process by which organizations acquire equipment to enhance the effectiveness of their operations. Equipment will only enhance effectiveness if it is usable for its purpose in the work environment, i.e. if it enables tasks to be performed to the desired quality with acceptable costs to those who operate it. Procurement presents a requirement, then, for evaluations of the performance of human-machine work systems. This thesis is concerned with the provision of information to support procurers in performing such evaluations. The Ministry of Defence (an equipment procurer) has presented a particular requirement for a means of assessing the usability of speech interfaces in the establishment of the feasibility of computerized battlefield work systems. A structured method was developed to meet this requirement, the scope, notation and process of which sought to be explicit and proceduralized. The scope was specified in terms of a conceptualization of human-computer interaction: the method supported the development of representations of the task, device and user, which could be implemented as simulations and used in empirical evaluations of system performance. Notations for representations were proposed, and procedures enabling the use of the notations. The specification and implementation of the four sub-methods is described, and subsequent enhancement in the context of evaluations of speech interfaces for battlefield observation tasks. The complete method is presented. An evaluation of the method was finally performed with respect to the quality of the assessment output and costs to the assessor. The results suggested that the method facilitated systematic assessment, although some inadequacies were identified in the expression of diagnostic information which was recruited by the procedures, and in some of the procedures themselves. The research offers support for the use of structured human factors evaluation methods in procurement. 
Qualifications relate to the appropriate expression of knowledge of device-user interaction, and to the conflict between requirements for flexibility and low-level proceduralization.
Evolvable Smartphone-Based Point-of-Care Systems For In-Vitro Diagnostics
Recent developments in the life-science omics disciplines, together with advances in micro- and nanoscale technologies, offer unprecedented opportunities to tackle some of the major healthcare challenges of our time. Lab-on-Chip technologies coupled with smart devices, in particular, constitute key enablers for the decentralization of many in-vitro medical diagnostics applications to the point-of-care, supporting the advent of preventive and personalized medicine.
Although the technical feasibility and the potential of Lab-on-Chip/smart-device systems have been repeatedly demonstrated, direct-to-consumer applications remain scarce. This thesis addresses this limitation. System evolvability is a key enabler of the adoption and long-lasting success of next-generation point-of-care systems: it favours the integration of new technologies, streamlines the reengineering effort for system upgrades and limits the risk of premature system obsolescence. Among possible implementation strategies, platform-based design stands as a particularly suitable entry point. One necessary condition is for change-absorbing and change-enabling mechanisms to be incorporated into the platform architecture at initial design time. Important considerations arise as to where in Lab-on-Chip/smart-device platforms these mechanisms can be integrated, and how to implement them.
Our investigation revolves around the silicon-nanowire biological field-effect transistor, a promising biosensing technology for the detection of biological analytes at ultra-low concentrations. We discuss extensively the sensitivity and instrumentation requirements set by the technology before presenting the design and implementation of an evolvable smartphone-based platform capable of interfacing with lab-on-chips embedding such sensors. We elaborate on the implementation of various architectural patterns throughout the platform and show how these facilitated the evolution of the system towards one accommodating electrochemical sensing. Model-based development was undertaken throughout the engineering process, and a formal SysML system model fed our evolvability assessment process. We introduce, in particular, a model-based methodology enabling the evaluation of modular scalability: the ability of a system to scale the current value of one of its specifications by successively reengineering targeted system modules.
The research work presented in this thesis provides a roadmap for the development of evolvable point-of-care systems, including those targeting direct-to-consumer applications. It extends from the early identification of anticipated change to the assessment of a system's ability to accommodate these changes. Our research should thus interest industrial players eager not only to disrupt, but also to last, in a shifting socio-technical paradigm.
System architecture metrics: an evaluation
The research described in this dissertation is a study of the application of measurement, or metrics, to software engineering. This is not in itself a new idea; the concept of measuring software was first mooted close on twenty years ago. However, examination of what is a considerable body of metrics work reveals that incorporating measurement into software engineering is rather less straightforward than one might presuppose, and despite the advancing years there is still a lack of maturity.
The thesis commences with a dissection of three of the most popular metrics, namely Halstead's software science, McCabe's cyclomatic complexity and Henry and Kafura's information flow, all of which might be regarded as having achieved classic status. Despite their popularity these metrics are all flawed in at least three respects. First and foremost, in each case it is unclear exactly what is being measured: instead there is a preponderance of such metaphysical terms as complexity and quality. Second, each metric is theoretically doubtful in that it exhibits anomalous behaviour. Third, much of the claimed empirical support for each metric is spurious, arising from poor experimental design and inappropriate statistical analysis. It is argued that these problems are not a misfortune but the inevitable consequence of the ad hoc and unstructured approach of much metrics research: in particular the scant regard paid to the role of underlying models.
This research seeks to address these problems by proposing a systematic method for the development and evaluation of software metrics. The method is a goal-directed combination of formal modelling techniques and empirical evaluation. The method is applied to the problem of developing metrics to evaluate software designs, from the perspective of a software engineer wishing to minimise implementation difficulties, faults and future maintenance problems. It highlights a number of weaknesses within the original model. These are tackled in a second, more sophisticated model which is multidimensional, that is, it combines, in this case, two metrics. Both the theoretical and empirical analysis show this model to have utility in its ability to identify hard-to-implement and unreliable aspects of software designs. It is concluded that this method goes some way towards introducing a little more rigour into the development, evaluation and evolution of metrics for the software engineer.
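Of the three classic metrics dissected above, McCabe's cyclomatic complexity is the simplest to state: V(G) = E − N + 2P for a control-flow graph with E edges, N nodes and P connected components. A minimal sketch (the graph counts below are illustrative, not drawn from the dissertation):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

# A single if/else: entry, two branch nodes and an exit joined by 4 edges.
print(cyclomatic_complexity(edges=4, nodes=4))  # 2
```

The dissertation's criticism applies even to a formula this crisp: the number is well defined, but what property of the software it measures, and whether that property matters, is exactly the kind of question the proposed method asks before a metric is adopted.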
Bank capital: definition, adequacy and issue announcement effects
This dissertation focuses primarily on potential explanations for bank common stock abnormal returns, and their patterns, coincident with the announcement of bank capital issues. Potential influences considered include increased regulatory pressure, conflicting regulatory and market views of bank capital adequacy, and the relative predictability of security type. Where possible, the dissertation is set in both UK and US contexts. The dissertation has four principal research components. (1) A review of historical and contemporary bank capital regulation in the UK and US. Historical analysis indicates that the definition of capital, as determined by its functional properties, is dynamic, which qualifies the consistency of its measurement over time. The regulatory control of absolute levels of capital is seen to have influenced bank structural development, costs and risk. The regulatory control of relative bank capital (i.e. in terms of balance sheet structure) is found to have a long and controversial history in the US; it is the effective progenitor of the current methodology of bank capital measurement and assessment, such as the Basle Agreement, and contains a number of potentially costly deficiencies. (2) An examination of bank capital issue announcement effects in the UK. Following similar work in the US (e.g. Keeley, 1989), negative abnormal return effects are found associated with the announcements of UK ordinary share issues. Evidence also hints that an imposed increase in regulatory capital pressure (viz. the introduction of a minimum capital ratio regime) causes a reduction in issue announcement effects for ordinary share issues. (3) An assessment of the capital adequacy of UK and US banks from a market perspective and in terms of a number of definitions of capital, namely equity, regulatory primary capital (US), and the 1992 Basle Agreement capital. Conflicts between market and regulatory views of capital adequacy are observed in certain years for primary capital.
In terms of the capital structure relevance hypothesis, this suggests particular costs which may influence issue announcement effects. (4) Modelling the predictability of UK bank capital issue security type (viz. ordinary share and debt) and assessing the hypothesis that it is inversely related to the announcement abnormal returns.
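Announcement-effect studies of the kind described above conventionally measure an abnormal return as the actual return minus a market-model expectation. A minimal sketch of that calculation (the alpha/beta estimates and returns are hypothetical, not figures from the dissertation):

```python
def abnormal_return(stock_return, market_return, alpha, beta):
    """Actual return minus the market-model expected return (alpha + beta * Rm)."""
    return stock_return - (alpha + beta * market_return)

# A bank stock falls 1.5% on an equity-issue announcement day while the
# market rises 0.2%; with estimated alpha = 0.0 and beta = 1.1:
ar = abnormal_return(-0.015, 0.002, alpha=0.0, beta=1.1)
print(round(ar, 4))  # -0.0172
```

Averaging such daily abnormal returns across issue announcements, and comparing the averages across regimes or security types, is the standard route to the patterns the dissertation seeks to explain.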
Proceedings of Plenary Session: The LACIE Symposium
A technology assessment of the LACIE data processing and information systems was discussed during the Large Area Crop Inventory Experiment Symposium. Crop inventories of wheat yield in the United States as well as several other nations (such as the U.S.S.R., Canada, etc.) were discussed, along with the methodology involved in acquiring these data.
Theoretical analysis of the philosophy and practice of disciplined inquiry
Spring 2015. Includes bibliographical references. This dissertation theoretically examined the process of disciplined inquiry in the social sciences, from its philosophical foundations to its extensions into practice. Key to the conceptualization of disciplined inquiry were two regulative ideals: the commitment to the concepts that define the possibility of experience, and the commitment to processes for combining the concepts of experience. The paradigm theory of Lincoln, Lynham, and Guba (e.g., Lincoln & Lynham, 2011; Lincoln, Lynham, & Guba, 2011) provided a sophisticated explanation of the possibility of experience that inquirers can commit to when engaging in disciplined inquiries. Review of the literature revealed an inadequacy in the state of theoretical understanding of processes for combining the concepts of experience. To develop a theoretical agenda of research for disciplined inquiry, the literature on paradigm theory and theory building was analyzed. A historical analysis of paradigm theory revealed milestones in more than 40 years of inquiry focused on conceptualization of the theory. A reverse-engineering analysis theoretically examined paradigm theory and the milestones identified in the historical analysis for key features of the theoretical process. A revised conceptualization of disciplined inquiry was presented, and a theoretical agenda for developing the underlying theoretical framework for the processes of combining the concepts of experience was outlined.
Data bases and data base systems related to NASA's aerospace program. A bibliography with indexes
This bibliography lists 1778 reports, articles, and other documents introduced into the NASA scientific and technical information system, 1975 through 1980.