Efficient testing based on logical architecture
The rapid increase in the size and complexity of software-intensive systems makes it infeasible to test exhaustively at the low level of source code. Instead, testing should be executed at the high level of the system architecture, i.e., at a level where components or subsystems relate, interoperate, or interact collectively with the system environment. Testing at this level is system testing, covering hardware and software in union. Moreover, when integrating complex, distributed systems and providing support for conformance, interoperability and interoperation tests, we need an explicit test description. In this vision paper, we discuss (1) how to select tests from the logical architecture, especially based on the dependencies within the system, and (2) how to represent the selected tests in an explicit and readable manner, so that software systems can be cost-efficiently maintained and evolved over their entire life-cycle. In addition, we further study the relevance between different tests, based on which we can optimise the test suites for efficient testing, and propose optimal resource allocation strategies for cloud-based testing.
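Selecting tests from the dependencies within a logical architecture, as the abstract proposes, can be sketched as a graph reachability problem: a change to one component affects everything that transitively depends on it, so only tests covering those components need to run. This is a minimal illustrative sketch, not the paper's method; the component names, graph shape, and function names are all assumptions.

```python
from collections import deque

# Hypothetical sketch: given a component dependency graph from the logical
# architecture, select only the tests covering components that can be
# affected by a change (the changed component plus everything that
# depends on it, directly or transitively).

def affected_components(dependents, changed):
    """dependents maps a component to the components that depend on it."""
    seen = {changed}
    queue = deque([changed])
    while queue:
        comp = queue.popleft()
        for dep in dependents.get(comp, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

def select_tests(test_coverage, dependents, changed):
    """test_coverage maps test name -> set of components it exercises."""
    affected = affected_components(dependents, changed)
    return {t for t, comps in test_coverage.items() if comps & affected}

# Illustrative architecture: Sensor -> Controller -> Actuator.
dependents = {"Sensor": ["Controller"], "Controller": ["Actuator"]}
tests = {
    "t_sensor":   {"Sensor"},
    "t_control":  {"Controller"},
    "t_actuator": {"Actuator"},
}
# Changing Controller affects Controller and Actuator, so t_control and
# t_actuator are selected while t_sensor is skipped.
print(sorted(select_tests(tests, dependents, "Controller")))
```

The same reachability set can also drive the test-suite optimisation the abstract mentions: tests whose covered components never overlap with the affected set are candidates for deferral.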
Fault Insertions into Hardware-in-the-Loop Simulation
The Ohio State EcoCAR Mobility Challenge is an intercollegiate team that designs, builds, and tests a hybrid electric vehicle. One of the main goals of this team is to build a hybrid supervisory controls strategy that tests the potential failure mechanisms derived from fault analysis. Currently, automotive companies are focused on integrating model-based designs, enabling simulations for low-cost, rapid experimentation that assess a vehicle's performance. Model-based designs allow engineers to simulate specific tests within controlled environmental conditions. Through the use of model-based design, engineers can test vehicle and component faults inside a simulation model to assess how the vehicle behaves during various failures without incurring the cost of destructive testing.
This thesis, in partnership with the EcoCAR Mobility Challenge, aims to incorporate modern industrial fault diagnostics into a hardware-in-the-loop (HIL) simulation and analyze the performance of the model-based design. Fault Tree Analysis (FTA) and Failure Mode and Effect Analysis (FMEA) were used to develop the necessary requirements for the vehicle system. Different faults were to be tested for each major component, including, but not limited to, the energy storage system (ESS), rear electric motor, belted alternator starter, DC-DC converter, and the multiplexed vehicle electrical center. The ESS was the only component demonstrated as an example of integrating the fault insertion method. The research details how a standard method was constructed for developing and inserting faults in the HIL test environment. The process is used for testing and designing the control algorithm for a hybrid supervisory controller.
Academic Major: Mechanical Engineering
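The fault-insertion idea described above can be illustrated with a small sketch: a fault block sits between a simulated component signal and the controller under test and can override the healthy value with a failure mode. The signal names, fault modes, and class interface here are invented for illustration and are not the thesis's actual HIL interface.

```python
# Hypothetical sketch of fault insertion in a simulated signal path: the
# controller under test reads signals through a FaultInserter, which
# either passes the healthy value through or applies an injected fault.

class FaultInserter:
    def __init__(self):
        self.active = {}          # signal name -> fault function

    def insert(self, signal, fault):
        self.active[signal] = fault

    def clear(self, signal):
        self.active.pop(signal, None)

    def read(self, signal, healthy_value):
        fault = self.active.get(signal)
        return fault(healthy_value) if fault else healthy_value

fi = FaultInserter()
print(fi.read("ess_voltage", 350.0))      # healthy: 350.0

# Inject a stuck-at-zero fault on the (hypothetical) ESS voltage sensor.
fi.insert("ess_voltage", lambda v: 0.0)
print(fi.read("ess_voltage", 350.0))      # faulted: 0.0

# Inject an offset fault instead (e.g. a sensor bias of -50 V).
fi.insert("ess_voltage", lambda v: v - 50.0)
print(fi.read("ess_voltage", 350.0))      # faulted: 300.0
```

Keeping the fault logic outside the plant and controller models is what makes the method standard across components: each new fault is one more override function, with no change to the models themselves.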
Automation of the Continuous Integration (CI) - Continuous Delivery/Deployment (CD) Software Development
Continuous Integration (CI) is a practice in software development where developers periodically merge code changes into a central shared repository, after which automated builds and tests are executed. CI entails an automation component (the target of this project) and a cultural one, as developers have to learn to integrate code periodically. The main goal of CI is to reduce the time to feedback over the software integration process, allowing bugs to be located and fixed more easily and quickly, thus enhancing quality while reducing the time to validate and publish new software. Traditional software development, where teams of developers worked on the same project in isolation, often led to problems integrating the resulting code. Due to this isolation, the project was not deliverable until the integration of all its parts, which was tedious and generated errors. Continuous Integration emerged as a practice to solve the problems of the traditional methodology, with the aim of improving the quality of the code. This thesis sets out what Continuous Integration is and how it is achieved, the principles that make it as effective as possible and the processes that follow as a consequence, thus introducing the context of its objective: the creation of a system that automates the start-up and set-up of an environment in which the methodology of continuous integration can be applied.
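The automation component described above amounts to running a fixed pipeline of stages on every merge and reporting the first failure immediately, so feedback reaches the developer quickly. A minimal sketch of that control flow, with illustrative placeholder stage names rather than any real CI tool's configuration:

```python
# Minimal sketch of the automation half of CI: on each merge to the
# shared repository, run build, test, and packaging stages in order,
# stopping at the first failure so feedback is fast and specific.

def run_pipeline(stages):
    """stages: list of (name, callable returning True on success)."""
    for name, step in stages:
        if not step():
            return f"FAILED at {name}"
    return "SUCCESS"

pipeline = [
    ("build",      lambda: True),
    ("unit tests", lambda: True),
    ("package",    lambda: True),
]
print(run_pipeline(pipeline))     # prints "SUCCESS"

broken = [
    ("build",      lambda: True),
    ("unit tests", lambda: False),   # a failing test stops the pipeline
    ("package",    lambda: True),
]
print(run_pipeline(broken))       # prints "FAILED at unit tests"
```

Real CI servers add triggering on repository events, isolated environments, and result notification around this same stop-on-first-failure core.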
Stock Returns and Inflation: Some New Evidence
Using aggregate and industry-wise monthly UK data over a period of 44 years, we examine the long-run relationship between the stock return index (St) and the retail price index (Pt) in a VAR framework. Univariate tests confirm Pt as I(2); nevertheless, pairs of St and Pt are co-integrated and share a common I(1) trend. There is no evidence of a shared I(2) trend. We find evidence of shifts in the co-integrating ranks and parameters, and accounting for these shifts improved the estimates' precision. The long-run price elasticity of the return index is consistently above unity, a finding that stands in sharp contrast to the existing ones. Overall, our results suggest that tax-paying stock investors are fully insulated against inflation in the long run.
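The co-integration property at the heart of this abstract can be illustrated on synthetic data (not the paper's UK series): two non-stationary series that share one I(1) trend yield a stable long-run regression whose residuals stay bounded, and OLS recovers the long-run coefficient very precisely. The series, coefficient, and sample size below are assumptions for illustration only.

```python
import random

# Synthetic cointegrated pair: x_t is a random walk (the common I(1)
# trend) and y_t = 2 * x_t + stationary noise, so the true long-run
# coefficient is 2.
random.seed(0)
n = 2000
x = [0.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0, 1))
y = [2.0 * xi + random.gauss(0, 1) for xi in x]

# OLS slope on demeaned data; for cointegrated series this estimator
# converges to the true coefficient unusually fast (superconsistency).
mx = sum(x) / n
my = sum(y) / n
beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        / sum((xi - mx) ** 2 for xi in x))

# Residual variance stays bounded even though x and y wander without
# bound -- the informal signature of cointegration.
resid_var = sum((yi - my - beta * (xi - mx)) ** 2
                for xi, yi in zip(x, y)) / n

print(round(beta, 3))        # close to the true long-run coefficient 2
print(resid_var < 2.0)       # prints True: residuals stay bounded
```

A formal test would check the residuals for a unit root (Engle-Granger) or estimate the co-integrating rank directly (Johansen), which is what a VAR framework like the paper's supports.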
Investigation into an improved modular rule-based testing framework for business rules
Rule testing in scheduling applications is a complex and potentially costly business problem. This thesis reports the outcome of research undertaken to develop a system to describe and test scheduling rules against a set of scheduling data. The overall intention of the research was to reduce commercial scheduling costs by minimizing human domain expert interaction within the scheduling process.
This thesis reports the outcome of research initiated following a consultancy project to develop a system to test driver schedules against the legal driving rules in force in the UK and the EU. One of the greatest challenges faced was interpreting the driving rules and translating them into the chosen programming language. This part of the project took considerable effort to complete the programming, testing and debugging processes. A potential problem then arises if the Department of Transport or the European Union alter or change the driving rules. Considerable software development is likely to be required to support the new rule set.
The approach considered takes into account the need for a modular software component that can be used in not just transport scheduling systems which look at legal driving rules but may also be integrated into other systems that have the need to test temporal rules. The integration of the rule testing component into existing systems is key to making the proposed solution reusable.
The research outcome proposes an alternative approach to rule definition, similar to that of RuleML, but with the addition of rule metadata to provide the ability of describing rules of a temporal nature. The rules can be serialised and deserialised between XML (eXtensible Markup Language) and objects within an object oriented environment (in this case .NET with C#), to provide a means of transmission of the rules over a communication infrastructure. The rule objects can then be compiled into an executable software library, allowing the rules to be tested more rapidly than traditional interpreted rules. Additional support functionality is also defined to provide a means of effectively integrating the rule testing engine into existing applications.
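The round trip described above can be sketched compactly: a temporal rule carried as XML with its metadata, deserialised into an object, and then evaluated against schedule data. The element names, attributes, and rule semantics below are invented for illustration; the thesis's actual RuleML-like schema and its .NET/C# implementation will differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: a temporal rule serialised to/from XML and then
# tested against schedule data. Real temporal rules (e.g. EU driving
# rules) involve windows and rest periods; this single limit is only a
# stand-in for the serialise/deserialise/test cycle.

class TemporalRule:
    def __init__(self, name, max_hours):
        self.name = name
        self.max_hours = max_hours    # metadata: a temporal limit

    def to_xml(self):
        el = ET.Element("rule", name=self.name, maxHours=str(self.max_hours))
        return ET.tostring(el, encoding="unicode")

    @classmethod
    def from_xml(cls, text):
        el = ET.fromstring(text)
        return cls(el.get("name"), float(el.get("maxHours")))

    def test(self, driving_hours):
        """True if the schedule satisfies the rule."""
        return driving_hours <= self.max_hours

rule = TemporalRule("daily_driving_limit", 9.0)
xml_text = rule.to_xml()                    # serialised for transmission
restored = TemporalRule.from_xml(xml_text)  # rebuilt on the other side
print(restored.test(8.5))                   # prints True: within the limit
print(restored.test(10.0))                  # prints False: exceeds the limit
```

Expressing rules as data in this way is what makes the component modular: when the Department for Transport or the EU changes a rule, only the XML changes, not the engine.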
Following the construction of a rule testing engine designed to meet the given requirements, a series of tests were undertaken to determine the effectiveness of the proposed approach. This led to the implementation of improvements in the caching of constructed work plans to further improve performance. Tests were also carried out on the application of the proposed solution within alternative scheduling domains, and to analyse the differences in computational performance and memory usage across system architectures, software frameworks and operating systems, with the support of Mono.
Future work that is expected to follow on from this thesis will likely reside in investigations into the development of graphical design tools for the creation of the rules, improvements in the work plan construction algorithm, parallelisation of elements of the process to take better advantage of multi-core processors and off-loading of the rule testing process onto dedicated or generic computational processors
Orion Multi-Purpose Crew Vehicle Active Thermal Control and Environmental Control and Life Support Development Status
The Orion Multi-Purpose Crew Vehicle (MPCV) is the first crew transport vehicle to be developed by the National Aeronautics and Space Administration (NASA) in the last thirty years. Orion is currently being developed to transport the crew safely beyond Earth orbit. This year, work focused on building the Exploration Flight Test 1 (EFT1) vehicle to be launched in September of 2014. Development of the Orion Active Thermal Control System (ATCS) and Environmental Control and Life Support (ECLS) System focused on integrating the components into the EFT1 vehicle and preparing them for launch. Work has also started on preliminary design reviews for the manned vehicle. Additional development work is underway to keep the remaining components progressing towards implementation on the flight tests of EM1 in 2017 and of EM2 in 2020. This paper covers the Orion ECLS development from April 2013 to April 2014.
Inorganic nanozyme with combined self-oxygenation/degradable capabilities for sensitized cancer immunochemotherapy
Recently emerged cancer immunochemotherapy has provided enormous new possibilities to replace traditional chemotherapy in fighting tumors. However, treatment efficacy is hampered by tumor hypoxia-induced immunosuppression in the tumor microenvironment (TME). Herein, we fabricated a self-oxygenation/degradable inorganic nanozyme with a core-shell structure to relieve tumor hypoxia in cancer immunochemotherapy. By integrating biocompatible CaO2 as the oxygen-storing component, this strategy is more effective than earlier designed nanocarriers for delivering oxygen or H2O2, and thus provides remarkable oxygenation and a long-term capability to relieve hypoxia throughout the tumor tissue. Consequently, in vivo tests validate that the delivery system can successfully relieve hypoxia and reverse the immunosuppressive TME to favor antitumor immune responses, leading to enhanced chemoimmunotherapy with cytotoxic T lymphocyte-associated antigen 4 blockade. Overall, a facile, robust and effective strategy is proposed to improve tumor oxygenation using a self-decomposable and biocompatible inorganic nanozyme reactor, which will not only provide an innovative pathway to relieve intratumoral hypoxia, but also has potential applications in other oxygen-favored cancer therapies or oxygen deficiency-originated diseases.
Assessing the Causal Relationship between Euro-Area Money and Price in a Time-Varying Environment
The paper provides new evidence on the causal relationship between money and price for the euro area using quarterly data for the period 1980 to 2006, employing two alternative methods of estimation: the vector error correction (VEC) and time-varying coefficient (TVC) estimation techniques. The latter technique has the advantage over the former in that it can deal with possible specification biases and spurious relationships that may have arisen from structural changes. The empirical results from the VEC method reveal a bidirectional causal relationship between money and price. In contrast, the results from the TVC technique suggest that money acts as an exogenous process determining the price level.
Keywords: Causality; VEC; Time-Varying Coefficient Estimation; Euro Area
A Portfolio Balance Approach to Euro-Area Money Demand in a Time-Varying Environment
As part of its monetary policy strategy, the European Central Bank has formulated a reference value for M3 growth. A pre-requisite for the use of a reference value for M3 growth is the existence of a stable demand function for that aggregate. However, a large empirical literature has emerged showing that, beginning in 2001, essentially all euro-area M3 demand functions have exhibited instability. This paper considers euro-area money demand in the context of the portfolio-balance framework. Our basic premise is that there is a stable demand-for-money function but that the models that have been used until now to estimate euro-area money demand are not well specified because they do not include a measure of wealth. Using two empirical methodologies -- a co-integrated vector equilibrium correction (VEC) approach and a time-varying coefficient (TVC) approach -- we find that a demand-for-money function that includes wealth is stable. The upshot of our findings is that M3 behaviour continues to provide useful information about medium-term developments in inflation.
Keywords: Money demand; VEC; Time-Varying Coefficient Estimation; Euro Area