Context-aware adaptation in DySCAS
DySCAS is a dynamically self-configuring middleware for automotive control systems. The addition of autonomic, context-aware dynamic configuration to automotive control systems brings a potential for a wide range of benefits in terms of robustness, flexibility, and upgradability. However, automotive systems represent a particularly challenging domain for the deployment of autonomics concepts, having a combination of real-time performance constraints, severe resource limitations, safety-critical aspects and cost pressures. For these reasons current systems are statically configured. This paper describes the dynamic run-time configuration aspects of DySCAS and focuses on the extent to which context-aware adaptation has been achieved in DySCAS, and the ways in which the various design and implementation challenges are met.
Analysis of Performance and Power Aspects of Hypervisors in Soft Real-Time Embedded Systems
The exponential growth of malware designed to attack soft real-time embedded systems has necessitated solutions to secure these systems. Hypervisors are one such solution, but the overhead they impose needs to be quantitatively understood. Experiments were conducted to quantify the overhead hypervisors impose on soft real-time embedded systems. A soft real-time computer vision algorithm was executed, with average and worst-case execution times measured as well as the average power consumption. These experiments were conducted with two hypervisors and a control configuration. The experiments showed that each hypervisor imposed differing amounts of overhead, with one achieving near-native performance and the other noticeably impacting the performance of the system.
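The measurement procedure the abstract describes, repeatedly timing a workload and recording average and worst-case latencies, can be sketched as follows. This is a simplified harness, not the paper's actual setup; the study timed a computer vision algorithm under two hypervisors and a native control, whereas the workload here is an arbitrary stand-in callable.

```python
import time

def measure(workload, runs=100):
    """Time a callable repeatedly, returning (average, worst-case) latency.

    A minimal sketch of the overhead-measurement methodology described
    above; in practice the same harness would be run natively and under
    each hypervisor, and the results compared.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    avg = sum(samples) / len(samples)
    worst = max(samples)  # worst-case observed execution time
    return avg, worst

# Example: time a small numeric loop as a stand-in for the vision algorithm.
avg, worst = measure(lambda: sum(i * i for i in range(10_000)))
```

Comparing `avg` and `worst` across configurations is what distinguishes average throughput overhead from the tail-latency effects that matter for soft real-time deadlines.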
A Novel Technique for Cancelable and Irrevocable Biometric Template Generation for Fingerprints
Cancelable biometric key generation is vital in biometric systems to protect users' sensitive information. A novel technique called the Reciprocated Magnitude and Complex Conjugate-Phase (RMCCP) transform is proposed; it comprises several components that together form the new method. It is tested on multiple aspects such as cancelability, irrevocability and security. The FVC database and real-time datasets are used to evaluate performance in terms of match score (via ROC curves), time complexity, and space complexity. The experimental results show that the proposed method performs better in all these aspects.
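The abstract does not give the details of the RMCCP transform, so the following is only an illustrative sketch of the general idea its name suggests: reciprocating spectral magnitudes and conjugating phases, combined with a user-specific key so the template can be revoked and re-issued by changing the key. All function and parameter names here are hypothetical.

```python
import numpy as np

def rmccp_like_transform(features, key):
    """Illustrative cancelable-template sketch (NOT the paper's actual
    RMCCP algorithm, whose details are not given in the abstract).

    Transforms a feature vector's spectrum by reciprocating magnitudes
    and conjugating phases, then applies a key-driven permutation so a
    compromised template can be canceled by choosing a new key."""
    spectrum = np.fft.fft(np.asarray(features, dtype=float))
    mag = np.abs(spectrum)
    mag_recip = np.where(mag > 1e-9, 1.0 / mag, 0.0)  # reciprocated magnitude
    phase = -np.angle(spectrum)                        # complex-conjugate phase
    transformed = mag_recip * np.exp(1j * phase)
    rng = np.random.default_rng(key)                   # user-specific key
    return transformed[rng.permutation(transformed.size)]

template = rmccp_like_transform([3.0, 1.0, 4.0, 1.0, 5.0, 9.0], key=42)
```

Because the key only permutes the transformed values, re-issuing with a new key changes the template while preserving its statistics; the one-way magnitude reciprocation is what would make the original features hard to recover.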
The Borexino detector at the Laboratori Nazionali del Gran Sasso
Borexino, a large volume detector for low energy neutrino spectroscopy, is currently running underground at the Laboratori Nazionali del Gran Sasso, Italy. The main goal of the experiment is the real-time measurement of sub-MeV solar neutrinos, and particularly of the monoenergetic (862 keV) Be7 electron capture neutrinos, via neutrino-electron scattering in an ultra-pure liquid scintillator. This paper is mostly devoted to the description of the detector structure, the photomultipliers, the electronics, and the trigger and calibration systems. The real performance of the detector, which always meets, and sometimes exceeds, design expectations, is also shown. Some important aspects of the Borexino project, i.e. the fluid handling plants, the purification techniques and the filling procedures, are not covered in this paper and are, or will be, published elsewhere (see Introduction and Bibliography).
Comment: 37 pages, 43 figures, to be submitted to NI
Fast and deep deformation approximations
Character rigs are procedural systems that compute the shape of an animated character for a given pose. They can be highly complex and must account for bulges, wrinkles, and other aspects of a character's appearance. When comparing film-quality character rigs with those designed for real-time applications, there is typically a substantial and readily apparent difference in the quality of the mesh deformations. Real-time rigs are limited by a computational budget and often trade realism for performance. Rigs for film do not have this same limitation, and character riggers can make the rig as complicated as necessary to achieve realistic deformations. However, increasing the rig complexity slows rig evaluation, and the animators working with it can become less efficient and may experience frustration. In this paper, we present a method to reduce the time required to compute mesh deformations for film-quality rigs, allowing better interactivity during animation authoring and use in real-time games and applications. Our approach learns the deformations from an existing rig by splitting the mesh deformation into linear and nonlinear portions. The linear deformations are computed directly from the transformations of the rig's underlying skeleton. We use deep learning methods to approximate the remaining nonlinear portion. In the examples we show from production rigs used to animate lead characters, our approach reduces the computational time spent on evaluating deformations by a factor of 5×-10×. This significant savings allows us to run the complex, film-quality rigs in real-time even when using a CPU-only implementation on a mobile device.
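The linear/nonlinear split described above can be sketched as follows: the linear part is ordinary linear blend skinning driven by the skeleton's bone transforms, and the nonlinear remainder is a learned per-vertex correction. The pose featurization and the `residual_net` placeholder here are assumptions; in the paper the correction is a trained deep network.

```python
import numpy as np

def lbs(rest_verts, weights, bone_mats):
    """Linear blend skinning: the linear part of the deformation,
    computed directly from the skeleton's bone transforms.
    rest_verts: (V, 3), weights: (V, B), bone_mats: (B, 3, 4)."""
    homo = np.concatenate([rest_verts, np.ones((len(rest_verts), 1))], axis=1)
    per_bone = np.einsum('bij,vj->vbi', bone_mats, homo)  # (V, B, 3)
    return np.einsum('vb,vbi->vi', weights, per_bone)     # blend by weights

def approx_deform(rest_verts, weights, bone_mats, residual_net):
    """Cheap linear term plus learned nonlinear correction, in the
    spirit of the method above. residual_net is any callable mapping
    pose features to per-vertex offsets (a placeholder here; the paper
    uses a trained deep network). Flattening the bone matrices as the
    pose featurization is an assumption for this sketch."""
    pose = bone_mats.reshape(-1)
    return lbs(rest_verts, weights, bone_mats) + residual_net(pose)

# With identity bones and a zero residual, the mesh stays at rest pose.
rest = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
deformed = approx_deform(rest, np.ones((2, 1)), np.eye(3, 4)[None], lambda pose: 0.0)
```

The speedup comes from the fact that evaluating `lbs` plus a fixed-size network forward pass is far cheaper than evaluating the full procedural rig.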
Nanophotonic reservoir computing with photonic crystal cavities to generate periodic patterns
Reservoir computing (RC) is a technique in machine learning inspired by neural systems. RC has been used successfully to solve complex problems such as signal classification and signal generation. These systems are mainly implemented in software, and thereby they are limited in speed and power efficiency. Several optical and optoelectronic implementations have been demonstrated, in which the system has signals with an amplitude and phase. These have been shown to enrich the dynamics of the system, which is beneficial for the performance. In this paper, we introduce a novel optical architecture based on nanophotonic crystal cavities. This allows us to integrate many neurons on one chip, which, compared with other photonic solutions, most closely resembles a classical neural network. Furthermore, the components are passive, which simplifies the design and reduces the power consumption. To assess the performance of this network, we train a photonic network to generate periodic patterns, using an alternative online learning rule called first-order reduced and corrected error. For this, we first train a classical hyperbolic tangent reservoir, but then we vary some of the properties to incorporate typical aspects of a photonics reservoir, such as the use of continuous-time versus discrete-time signals and the use of complex-valued versus real-valued signals. Then, the nanophotonic reservoir is simulated and we explore the role of relevant parameters such as the topology, the phases between the resonators, the number of nodes that are biased and the delay between the resonators. It is important that these parameters are chosen such that no strong self-oscillations occur. Finally, our results show that for a signal generation task a complex-valued, continuous-time nanophotonic reservoir outperforms a classical (i.e., discrete-time, real-valued) leaky hyperbolic tangent reservoir (normalized root-mean-square error, NRMSE = 0.030 versus 0.127).
Practical applications of probabilistic model checking to communication protocols
Probabilistic model checking is a formal verification technique for the analysis of systems that exhibit stochastic behaviour. It has been successfully employed in an extremely wide array of application domains including, for example, communication and multimedia protocols, security and power management. In this chapter we focus on the applicability of these techniques to the analysis of communication protocols. An analysis of the performance of such systems must successfully incorporate several crucial aspects, including concurrency between multiple components, real-time constraints and randomisation. Probabilistic model checking, in particular using probabilistic timed automata, is well suited to such an analysis. We provide an overview of this area, with emphasis on an industrially relevant case study: the IEEE 802.3 (CSMA/CD) protocol. We also discuss two contrasting approaches to the implementation of probabilistic model checking, namely those based on numerical computation and those based on discrete-event simulation. Using results from the two tools PRISM and APMC, we summarise the advantages, disadvantages and trade-offs associated with these techniques.
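The contrast between the two implementation approaches can be illustrated on the CSMA/CD case study itself. A discrete-event-simulation tool like APMC estimates a property's probability by sampling runs, as in the sketch below (a simplified model of two stations under truncated binary exponential backoff, not the chapter's full probabilistic timed automaton); a numerical tool like PRISM would instead compute the probability exactly from the model's state space.

```python
import random

def p_collision_persists(rounds=3, trials=100_000, seed=1):
    """Monte Carlo estimate of the probability that two CSMA/CD stations
    keep colliding for `rounds` consecutive backoff rounds. Simplified
    model: in round r each station picks a slot uniformly from a window
    of 2**r slots (doubling, capped), and colliding means equal picks."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        collided_every_round = True
        for r in range(1, rounds + 1):
            slots = 2 ** min(r, 10)  # backoff window doubles, capped at 2**10
            if rng.randrange(slots) != rng.randrange(slots):
                collided_every_round = False
                break
        if collided_every_round:
            hits += 1
    return hits / trials

est = p_collision_persists()
# Exact value for 3 rounds: (1/2) * (1/4) * (1/8) = 1/64 ≈ 0.0156
```

The trade-off summarised in the chapter shows up directly: the sampled estimate carries statistical error that shrinks only with more trials, while numerical computation gives exact answers but must cope with the full state space.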
Evaluating non-functional properties globally
Real-time systems are usually dependable systems which, besides timing constraints, have to meet other quality criteria in order to provide certain reliance on their operation. For this reason, a key issue in the development of this kind of system is trading off the different non-functional aspects involved, e.g., time, performance, safety, power, or memory. The success of the development process is often determined by how early we can make thorough assumptions and estimations and, hence, take careful decisions about non-functional aspects. Our approach to supporting this decision activity is based on treating non-functional properties and requirements uniformly, while still supporting specific evaluation and analysis.
Intermedial Ontologies: Strategies of Preparedness, Research and Design in Real Time Performance Capture
The paper introduces and inspects core elements relative to the 'live' in performances that utilise real time Motion Capture (MoCap) systems and cognate/reactive virtual environments, drawing on interdisciplinary research conducted by Matthew Delbridge (University of Tasmania) and the collaborative live MoCap workshops carried out in projects DREX and VIMMA (2009-12 and 2013-14, University of Tampere). It also discusses strategies to revise manners of direction and performing, practical work processes, questions of production design and educational aspects peculiar to technological staging. Through the analysis of a series of performative experiments involving 3D real time virtual reality systems, projection mapping and reactive surfaces, new ways of interacting in/with performance have been identified. This poses a unique challenge to traditional approaches to learning about staging, dramaturgy, acting, dance and performance design in the academy, all of which are altered in a fundamental manner when real time virtual reality is introduced as a core element of the performative experience. Meanwhile, various analyses, descriptions and theorisations of technological performance have framed up-to-date policies on how to approach these questions more systematically. These have given rise to more sophisticated notions of preparedness of performing arts professionals, students and researchers to confront the potentials of new technologies and the forms of creativity and art they enable. The deployment of real time Motion Capture systems and co-present virtual environments in an educational setting comprises a peculiar but informative case study through which the above can be explored.