
    Quantum Computing

    Full text link
    Quantum mechanics---the theory describing the fundamental workings of nature---is famously counterintuitive: it predicts that a particle can be in two places at the same time, and that two remote particles can be inextricably and instantaneously linked. These predictions have been the topic of intense metaphysical debate ever since the theory's inception early last century. However, supreme predictive power combined with direct experimental observation of some of these unusual phenomena leaves little doubt as to its fundamental correctness. In fact, without quantum mechanics we could not explain the workings of a laser, nor indeed how a fridge magnet operates. Over the last several decades quantum information science has emerged to seek answers to the question: can we gain some advantage by storing, transmitting and processing information encoded in systems that exhibit these unique quantum properties? Today it is understood that the answer is yes. Many research groups around the world are working towards one of the most ambitious goals humankind has ever embarked upon: a quantum computer that promises to exponentially improve computational power for particular tasks. A number of physical systems, spanning much of modern physics, are being developed for this task---ranging from single particles of light to superconducting circuits---and it is not yet clear which, if any, will ultimately prove successful. Here we describe the latest developments for each of the leading approaches and explain what the major challenges are for the future. Comment: 26 pages, 7 figures, 291 references. Early draft of Nature 464, 45-53 (4 March 2010). The published version is more up-to-date and has several corrections, but is half the length with far fewer references.
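
    The two phenomena named above have compact mathematical statements; as a hedged illustration in standard textbook notation (not drawn from the article itself):

        % A single particle "in two places at once": an equal superposition
        % of two basis states.
        |\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr)

        % Two "inextricably linked" remote particles: a Bell state, where
        % measuring either qubit instantly fixes the outcome of the other.
        |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)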

    Assessing multi-version systems through fault injection

    Get PDF
    Multi-version design (MVD) has been proposed as a method for increasing the dependability of critical systems beyond current levels. However, a major obstacle to large-scale commercial usage of this approach is the lack of quantitative characterizations available. Fault injection is used to help address this problem. Fault injection is a phrase covering a variety of testing techniques that can be applied to both hardware and software, all of which involve the deliberate insertion of faults into an operational system to determine its response. This approach has the potential for yielding highly useful metrics with regard to MVD systems, as well as giving developers a greater insight into the behaviour of each channel within the system. In this research, an automatic fault injection system for multi-version systems called FITMVS is developed. A multi-version system is then tested using this tool, and the results analysed. It is concluded that this approach can yield several extremely useful metrics, such as metrics related to channel sensitivity, channel sensitivity to common-mode error, program scope sensitivity, program scope sensitivity to common-mode error, error frequency distribution and common-mode error frequency distribution. In addition to this, the analysis of the multi-version system tested indicates that the system has an extremely low probability of experiencing common-mode error, although several key points in channel code are identified as having higher sensitivity to faults than others.
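
    The abstract gives no FITMVS internals, but the general technique is easy to sketch. The following Python (all names hypothetical; a deliberately small stand-in, not FITMVS itself) injects a fault into one or two of three independently written channels and tallies how often majority voting masks the fault versus producing a common-mode error:

        import random

        # Three independently developed "channels" computing the same function.
        def channel_a(x):
            return x * x

        def channel_b(x):
            return x ** 2

        def channel_c(x):
            return sum(x for _ in range(x))

        def vote(results):
            """Majority vote; None means total disagreement (detected error)."""
            for r in results:
                if results.count(r) >= 2:
                    return r
            return None

        def inject_fault(fn):
            """Wrap a channel so its output is corrupted (a low-bit flip)."""
            return lambda x: fn(x) ^ 0x1

        def run_campaign(trials=1000):
            stats = {"masked": 0, "common_mode": 0, "detected": 0}
            for _ in range(trials):
                x = random.randint(0, 100)
                channels = [channel_a, channel_b, channel_c]
                # Inject into one channel (independent fault) or two (correlated).
                for i in random.sample(range(3), random.choice([1, 2])):
                    channels[i] = inject_fault(channels[i])
                golden = x * x                      # fault-free reference output
                voted = vote([ch(x) for ch in channels])
                if voted == golden:
                    stats["masked"] += 1            # voting tolerated the fault
                elif voted is None:
                    stats["detected"] += 1          # disagreement, no wrong output
                else:
                    stats["common_mode"] += 1       # identical wrong outputs won the vote
            print(stats)

        if __name__ == "__main__":
            run_campaign()

    Injecting the same fault into two channels lets them out-vote the healthy one, which is exactly the common-mode failure mode the thesis's metrics target.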

    Group communications and database replication: techniques, issues and performance

    Get PDF
    Databases are an important part of today's IT infrastructure: both companies and state institutions rely on database systems to store most of their important data. As we are more and more dependent on database systems, securing this key facility is now a priority. Because of this, research on fault-tolerant database systems is of increasing importance. One way to ensure the fault-tolerance of a system is by replicating it. Replication is a natural way to deal with failures: if one copy is not available, we use another one. However, implementing consistent replication is not easy. Database replication is hardly a new area of research: the first papers on the subject are more than twenty years old. Yet how to build an efficient, consistent replicated database is still an open research question. Recently, a new approach to solve this problem has been proposed. The idea is to rely on a communication infrastructure called group communications. This infrastructure offers some high-level primitives that can help in the design and the implementation of a replicated database. While promising, this approach to database replication is still in its infancy. This thesis focuses on group communication-based database replication and strives to give an overall understanding of this topic. This thesis has three major contributions. In the structural domain, it introduces a classification of replication techniques. In the qualitative domain, an analysis of fault-tolerance semantics is proposed. Finally, in the quantitative domain, a performance evaluation of group communication-based database replication is presented. The classification gives an overview of the different means to implement database replication. Techniques described in the literature are sorted using this classification. The classification highlights structural similarities of techniques originating from different communities (the database community and the distributed systems community). For each category of the classification, we also analyse the requirements imposed on the database component and the group communication primitives that are needed to enforce consistency. Group communication-based database replication implies building a system from two different components: a database system and a group communication system. Fault-tolerance is an end-to-end property: a system built from two components tends to be as fault-tolerant as its weakest component. The analysis of fault-tolerance semantics shows what fault-tolerance guarantee is ensured by group communication-based replication techniques. Additionally, a new fault-tolerance guarantee, group-safety, is proposed. Group-safety is better suited to group communication-based database replication. We also show that group-safe replication techniques can offer improved performance. Finally, the performance evaluation offers a quantitative view of group communication-based replication techniques. The performance of group communication techniques and classical database replication techniques is compared. The way these different techniques react to different loads is explored. Some optimisations of group communication techniques are also described and their performance benefits evaluated.
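
    The central mechanism is worth a minimal sketch. Assuming an idealized total-order (atomic) broadcast primitive, simulated here by a single in-process sequencer rather than a real group communication system, replicas that apply the same updates in the same order deterministically end up in identical states:

        class Replica:
            """A database replica driven purely by delivered updates."""
            def __init__(self, name):
                self.name = name
                self.state = {}

            def deliver(self, update):
                key, value = update
                self.state[key] = value   # deterministic apply => identical states

        class TotalOrderBroadcast:
            """Stand-in for a group communication atomic-broadcast primitive.

            A real implementation runs an agreement protocol among group
            members; here one sequencer list fixes a single global order.
            """
            def __init__(self, replicas):
                self.replicas = replicas
                self.log = []

            def broadcast(self, update):
                self.log.append(update)   # the agreed total order
                for r in self.replicas:
                    r.deliver(update)     # every replica sees the same order

        replicas = [Replica("r1"), Replica("r2"), Replica("r3")]
        tob = TotalOrderBroadcast(replicas)
        tob.broadcast(("x", 1))
        tob.broadcast(("x", 2))
        tob.broadcast(("y", 7))
        assert all(r.state == {"x": 2, "y": 7} for r in replicas)

    A real group communication stack replaces the sequencer list with a fault-tolerant agreement protocol, which is where the consistency and performance trade-offs studied in the thesis arise.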

    Pervasive computing reference architecture from a software engineering perspective (PervCompRA-SE)

    Get PDF
    Pervasive computing (PervComp) is one of the most challenging research topics nowadays. Its complexity exceeds that of the outdated mainframe and client-server computation models. Its systems are highly volatile, mobile, and resource-limited, and they stream a lot of data from different sensors. In spite of these challenges, the domain entails, by default, a lengthy list of desired quality features like context sensitivity, adaptable behavior, concurrency, service omnipresence, and invisibility. Fortunately, device manufacturers have improved the enabling technology, such as sensors, network bandwidth, and batteries, to pave the road for pervasive systems with high capabilities. On the other hand, this domain has gained an enormous amount of attention from researchers ever since it was first introduced in the early 1990s. Yet pervasive systems are still classified as visionary systems that are expected to be woven into people's daily lives. At present, PervComp systems still have no unified architecture, have a limited scope of context-sensitivity and adaptability, and many essential quality features are insufficiently addressed in PervComp architectures. The reference architecture (RA) proposed in this research, PervCompRA-SE, provides solutions for these problems through a comprehensive and innovative pair of business and technical architectural reference models. Both models were based on deep analytical activities and were evaluated using different qualitative and quantitative methods. In this thesis we surveyed a wide range of research projects in PervComp in various subdomain areas to specify our methodological approach and identify the quality features in the PervComp domain that are most commonly found in these areas. The thesis presents a novel approach that utilizes theories from sociology, psychology, and process engineering. It analyzes the business and architectural problems in two separate chapters covering the business reference architecture (BRA) and the technical reference architecture (TRA). The solutions for these problems are also introduced in the BRA and TRA chapters. We devised an associated comprehensive ontology with semantic meanings and measurement scales. Both the BRA and TRA were validated throughout the course of the research work and evaluated as a whole using traceability, benchmark, survey, and simulation methods. The thesis introduces a new reference architecture in the PervComp domain which was developed using a novel requirements engineering method. It also introduces a novel statistical method for tradeoff analysis and conflict resolution between the requirements. The adaptation of activity theory, human perception theory, and process re-engineering methods to develop the BRA and the TRA proved to be very successful. Our approach of reusing the ontological dictionary to monitor system performance was also innovative. Finally, the thesis's evaluation methods represent a role model for researchers on how to use both qualitative and quantitative methods to evaluate a reference architecture. Our results show that the requirements engineering process, along with the trade-off analysis, was very important in delivering PervCompRA-SE. We discovered that the invisibility feature, which was one of the envisioned quality features for PervComp, does not hold up, and that qualitative evaluation methods were just as important as quantitative evaluation methods in order to recognize the overall quality of the RA by machines as well as by human beings.
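
    The abstract does not specify the statistical trade-off method, so the following Python is a generic illustration only (all names and weights hypothetical) of resolving conflicts between quality features by comparing weighted priorities:

        # Hypothetical priority weights for conflicting quality features.
        FEATURES = {"invisibility": 0.8, "security": 1.0, "adaptability": 0.6}

        def resolve(conflicts, weights):
            """For each conflicting pair, keep the feature with higher weight."""
            decisions = {}
            for a, b in conflicts:
                decisions[(a, b)] = a if weights[a] >= weights[b] else b
            return decisions

        print(resolve([("invisibility", "security")], FEATURES))
        # {('invisibility', 'security'): 'security'}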

    Future manned systems advanced avionics study

    Get PDF
    COTS+ was defined in this study as commercial off-the-shelf (COTS) products, ruggedized and militarized components, and COTS technology. This study cites the benefits of integrating COTS+ in space, postulates a COTS+ integration methodology, and develops requirements and an architecture to achieve integration. Developmental needs and concerns were identified throughout the study; these needs and concerns, along with recommendations for addressing them, are presented for further action and study. The COTS+ concept appears workable in part or in totality. No COTS+ technology gaps were identified; however, radiation tolerance was cited as a concern, and the deferred maintenance issue resurfaced. Further study is recommended to explore COTS+ cost-effectiveness, maintenance philosophy, needs, concerns, and utility metrics. The generation of a development plan to further investigate and integrate COTS+ technology is recommended, as is a COTS+ transitional integration program. Sponsoring and establishing technology maturation programs and COTS+ engineering and standards committees are deemed necessary and are recommended for furthering COTS+ integration in space.

    Space Station Systems: a Bibliography with Indexes (Supplement 8)

    Get PDF
    This bibliography lists 950 reports, articles, and other documents introduced into the NASA scientific and technical information system between July 1, 1989 and December 31, 1989. Its purpose is to provide helpful information to researchers, designers and managers engaged in Space Station technology development and mission design. Coverage includes documents that define major systems and subsystems related to structures and dynamic control, electronics and power supplies, propulsion, and payload integration. In addition, orbital construction methods, servicing and support requirements, procedures and operations, and missions for the current and future Space Station are included.

    A Prescription for Partial Synchrony

    Get PDF
    Algorithms in message-passing distributed systems often require partial synchrony to tolerate crash failures. Informally, partial synchrony refers to systems where timing bounds on communication and computation may exist, but the knowledge of such bounds is limited. Traditionally, the foundation for the theory of partial synchrony has been real time: a time base measured by counting events external to the system, like the vibrations of Cesium atoms or piezoelectric crystals. Unfortunately, algorithms that are correct relative to many real-time based models of partial synchrony may not behave correctly in empirical distributed systems. For example, a set of popular theoretical models, which we call M_*, assume (eventual) upper bounds on message delay and relative process speeds, regardless of message size and absolute process speeds. Empirical systems with bounded channel capacity and bandwidth cannot realize such assumptions either natively or through algorithmic constructions. Consequently, empirical deployment of the many M_*-based algorithms risks anomalous behavior. As a result, we argue that real time is the wrong basis for such a theory. Instead, the appropriate foundation for partial synchrony is fairness: a time base measured by counting events internal to the system, like the steps executed by the processes. By way of example, we redefine M_* models with fairness-based bounds and provide algorithmic techniques to implement fairness-based M_* models on a significant subset of the empirical systems. The proposed techniques use failure detectors — system services that provide hints about process crashes — as intermediaries that preserve the fairness constraints native to empirical systems. In effect, algorithms that are correct in M_* models are now proved correct in such empirical systems as well. Demonstrating our results requires solving three open problems. (1) We propose the first unified mathematical framework based on Timed I/O Automata to specify empirical systems, partially synchronous systems, and algorithms that execute within the aforementioned systems. (2) We show that crash tolerance capabilities of popular distributed systems can be denominated exclusively through fairness constraints. (3) We specify exemplar system models that identify the set of weakest system models to implement popular failure detectors.
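
    The fairness idea admits a small sketch (hypothetical names; a single-process simulation, not the dissertation's Timed I/O Automata framework): instead of a wall-clock timeout, a process suspects a peer once a bounded number of its own steps elapse without a delivery from that peer, so the detector's time base consists of internal events:

        class StepCountingFailureDetector:
            """Suspects peers using internal step counts, not wall-clock time.

            The bound expresses fairness: at most `step_bound` of our own
            steps may elapse between two deliveries from a live peer.
            """
            def __init__(self, peers, step_bound):
                self.step_bound = step_bound
                self.steps_since_heard = {p: 0 for p in peers}
                self.suspected = set()

            def on_step(self):
                """Called once per local step: the internal time base."""
                for p in self.steps_since_heard:
                    self.steps_since_heard[p] += 1
                    if self.steps_since_heard[p] > self.step_bound:
                        self.suspected.add(p)

            def on_message(self, sender):
                """A delivery resets the counter and revises any false suspicion."""
                self.steps_since_heard[sender] = 0
                self.suspected.discard(sender)

        fd = StepCountingFailureDetector(peers=["p1", "p2"], step_bound=3)
        for _ in range(5):
            fd.on_step()
        fd.on_message("p1")
        assert fd.suspected == {"p2"}

    Revising a suspicion on a later delivery mirrors the hint-like, mistake-tolerant behavior that failure detector abstractions permit.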

    Formal methods and digital systems validation for airborne systems

    Get PDF
    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.
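
    As a hedged, toy illustration of the kind of analysis formal verification performs (not material from the handbook), exhaustive state-space exploration can prove an invariant over every reachable state of a small model, here mutual exclusion for a two-process lock protocol:

        # A toy model: each process cycles idle -> trying -> critical,
        # guarded by a lock; lock == -1 means the lock is free.
        def successor(pcs, i, pc, lock):
            out = list(pcs)
            out[i] = pc
            return (out[0], out[1], lock)

        def step(state):
            """Yield every successor of (pc1, pc2, lock)."""
            pc1, pc2, lock = state
            pcs = [pc1, pc2]
            for i in (0, 1):
                if pcs[i] == "idle":
                    yield successor(pcs, i, "trying", lock)
                elif pcs[i] == "trying" and lock == -1:
                    yield successor(pcs, i, "critical", i)   # acquire the lock
                elif pcs[i] == "critical":
                    yield successor(pcs, i, "idle", -1)      # release the lock

        def check():
            """Explore all reachable states, asserting mutual exclusion in each."""
            init = ("idle", "idle", -1)
            seen, frontier = {init}, [init]
            while frontier:
                s = frontier.pop()
                assert not (s[0] == "critical" and s[1] == "critical"), s
                for t in step(s):
                    if t not in seen:
                        seen.add(t)
                        frontier.append(t)
            print(f"mutual exclusion holds in all {len(seen)} reachable states")

        check()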

    Aeronautical engineering: A continuing bibliography with indexes (supplement 318)

    Get PDF
    This bibliography lists 217 reports, articles, and other documents introduced into the NASA scientific and technical information system in June 1995. Subject coverage includes: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.

    Navigating the IoT landscape: Unraveling forensics, security issues, applications, research challenges, and future

    Full text link
    Given the exponential expansion of the internet, the possibilities of security attacks and cybercrimes have increased accordingly. However, poorly implemented security mechanisms in Internet of Things (IoT) devices make them susceptible to cyberattacks, which can directly affect users. IoT forensics is thus needed for investigating and mitigating such attacks. While many works have examined IoT applications and challenges, only a few have focused on both the forensic and security issues in IoT. Therefore, this paper reviews forensic and security issues associated with IoT in different fields. Future prospects and challenges in IoT research and development are also highlighted. As demonstrated in the literature, most IoT devices are vulnerable to attacks due to a lack of standardized security measures. Unauthorized users could gain access, compromise data, and even take control of critical infrastructure. To fulfil the security-conscious needs of consumers, IoT can be used to develop a smart home system by designing a FLIP-based system that is highly scalable and adaptable. Utilizing a blockchain-based authentication mechanism with a multi-chain structure can provide additional security protection between different trust domains. Deep learning can be utilized to develop a network forensics framework with a high-performing system for detecting and tracking cyberattack incidents. Moreover, researchers should consider limiting the amount of data created and delivered when using big data to develop IoT-based smart systems. The findings of this review will stimulate academics to seek potential solutions for the identified issues, thereby advancing the IoT field. Comment: 77 pages, 5 figures, 5 tables.
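
    One recurring theme above, tamper-evident records across trust domains, rests on a simple primitive. As an illustrative sketch only (not any system surveyed in the paper), hash-chaining IoT event logs makes after-the-fact modification detectable during a forensic investigation:

        import hashlib
        import json

        def append_event(chain, event):
            """Append an event whose hash commits to the entire prior log."""
            prev_hash = chain[-1]["hash"] if chain else "0" * 64
            body = {"event": event, "prev": prev_hash}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            chain.append({**body, "hash": digest})

        def verify(chain):
            """Recompute every link; any tampered entry breaks the chain."""
            prev_hash = "0" * 64
            for entry in chain:
                body = {"event": entry["event"], "prev": entry["prev"]}
                digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
                if entry["prev"] != prev_hash or entry["hash"] != digest:
                    return False
                prev_hash = entry["hash"]
            return True

        log = []
        append_event(log, {"device": "door-sensor", "action": "open"})
        append_event(log, {"device": "camera", "action": "record"})
        assert verify(log)
        log[0]["event"]["action"] = "closed"   # simulated tampering
        assert not verify(log)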