Run-time efficient probabilistic model checking
Since the inception of discontinuous Galerkin (DG) methods for elliptic problems, there has been a question of whether DG methods can be made more computationally efficient than continuous Galerkin (CG) methods. Fewer degrees of freedom and good approximation properties for elliptic problems, together with a number of optimization techniques available within the CG framework, such as static condensation, made it challenging for DG methods to be competitive until recently. However, with the introduction of a DG method amenable to static condensation, the hybridizable discontinuous Galerkin (HDG) method, it has become possible to perform a realistic comparison of CG and HDG methods applied to elliptic problems. In this work, we build upon an earlier 2D comparative study, providing numerical results and a discussion of CG and HDG performance in three dimensions. The comparison covers steady-state elliptic and time-dependent parabolic problems, various element types, and serial and parallel performance. The postprocessing technique that yields superconvergence in the HDG case is also discussed. Depending on the direct linear system solver used and the type of problem in question (steady-state vs. time-dependent), the HDG method either outperforms or performs comparably to the CG method. The HDG method, however, falls behind when an iterative solver is used, which indicates the need for an effective preconditioning strategy for the method.
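For readers unfamiliar with static condensation, the idea can be sketched with a toy linear-algebra example (hypothetical, not taken from the study): element-interior unknowns are eliminated through a Schur complement so that the globally coupled system involves only boundary/trace unknowns, which is what makes CG and HDG amenable to this optimization.

```python
import numpy as np

# Hypothetical illustration of static condensation on a single element:
# interior unknowns (i) are eliminated so that the global system couples
# only boundary/trace unknowns (b).

rng = np.random.default_rng(0)
n_i, n_b = 6, 4                                  # interior/boundary unknown counts (made up)
A = rng.standard_normal((n_i + n_b, n_i + n_b))
K = A @ A.T + (n_i + n_b) * np.eye(n_i + n_b)    # synthetic SPD "stiffness" matrix
f = rng.standard_normal(n_i + n_b)

K_ii, K_ib = K[:n_i, :n_i], K[:n_i, n_i:]
K_bi, K_bb = K[n_i:, :n_i], K[n_i:, n_i:]
f_i, f_b = f[:n_i], f[n_i:]

# Schur complement: condensed system for the boundary unknowns only.
S = K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)
g = f_b - K_bi @ np.linalg.solve(K_ii, f_i)

u_b = np.linalg.solve(S, g)                      # small global solve
u_i = np.linalg.solve(K_ii, f_i - K_ib @ u_b)    # cheap local back-substitution

# The condensed solution matches the full solve.
u_full = np.linalg.solve(K, f)
assert np.allclose(np.concatenate([u_i, u_b]), u_full)
```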
A formal approach to adaptive software: continuous assurance of non-functional requirements
Modern software systems are increasingly required to adapt to changes in the environment in which they are embedded. Moreover, adaptation often needs to be performed automatically, through self-managed reactions enacted by the application at run time. Off-line, human-driven changes should be requested only if self-adaptation cannot be achieved successfully. To support this kind of autonomic behavior, software systems must be equipped with a rich run-time support that can monitor the relevant phenomena of the surrounding environment to detect changes, analyze the collected data to understand the possible consequences of changes, reason about the ability of the application to continue providing the required service, and finally react if adaptation is needed. This paper focuses on non-functional requirements, which constitute an essential component of the quality that modern software systems need to exhibit. Although the proposed approach is quite general, it is mainly exemplified in the context of service-oriented systems, where the quality of service (QoS) is regulated by contractual obligations between the application provider and its clients. We analyze the case where an application, exported as a service, is built as a composition of other services. Non-functional requirements, such as reliability and performance, heavily depend on the environment in which the application is embedded; changes in the environment may therefore adversely affect QoS satisfaction. We illustrate an approach and supporting tools that enable a holistic view of the design and run-time management of adaptive software systems. The approach is based on formal (probabilistic) models that are used at design time to reason about the dependability of the application in quantitative terms. The models continue to exist at run time to enable continuous verification and detection of changes that require adaptation.
Supporting self-adaptation via quantitative verification and sensitivity analysis at run time
Modern software-intensive systems often interact with an environment whose behavior changes over time, sometimes unpredictably. Such changes may jeopardize the systems' ability to meet the desired requirements. It is therefore desirable to design software so that it can self-adapt to changes with limited, or even no, human intervention. Self-adaptation can be achieved by bringing software models and model checking to run time, to support perpetual automatic reasoning about changes. Once a change is detected, the system itself can predict whether requirements violations may occur and enable appropriate counter-actions. However, existing mainstream model-checking techniques and tools were not conceived for run-time usage; hence they hardly meet the constraints imposed by on-the-fly analysis in terms of execution time and memory usage. This paper addresses this issue and focuses on perpetual satisfaction of non-functional requirements, such as reliability or energy consumption. Its main contribution is a mathematical framework for run-time efficient probabilistic model checking. Our approach statically generates a set of verification conditions that can be evaluated efficiently at run time as soon as changes occur. The approach also supports sensitivity analysis, which enables reasoning about the effects of changes and can drive effective adaptation strategies.
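As a rough, hypothetical illustration of this idea (the three-state retry model, the requirement threshold, and the use of SymPy are assumptions, not the paper's actual models or tooling), a reachability probability of a small parametric Markov chain can be derived symbolically at design time and then evaluated cheaply, together with its partial derivatives for sensitivity analysis, whenever monitors report new parameter values:

```python
import sympy as sp

# Hypothetical parametric DTMC: a service invocation succeeds with probability p,
# is retried from the start with probability r, and fails otherwise.
# Reachability of "success" satisfies  x = p + r * x,  so  x = p / (1 - r).
p, r, x = sp.symbols('p r x', positive=True)
P_success = sp.simplify(sp.solve(sp.Eq(x, p + r * x), x)[0])   # precomputed at design time

# Sensitivity of the property to each monitored parameter.
dP_dp = sp.diff(P_success, p)
dP_dr = sp.diff(P_success, r)

# Fast run-time evaluation: no model checking, just plugging in monitored values.
eval_P = sp.lambdify((p, r), P_success)
eval_dp = sp.lambdify((p, r), dP_dp)

monitored = {'p': 0.92, 'r': 0.05}              # values observed at run time (made up)
print(eval_P(monitored['p'], monitored['r']))   # ~0.968, checked against a requirement such as >= 0.95
print(eval_dp(monitored['p'], monitored['r']))  # sensitivity to the service success probability
```

Because the expensive symbolic derivation happens once, offline, the run-time check reduces to evaluating a closed-form expression, which is what makes this style of verification feasible under tight time and memory budgets.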
DSOL: a declarative approach to self-adaptive service orchestrations
Service-oriented computing (SOC) has simplified the way distributed applications can be built. Mainstream approaches, however, have failed to support dynamic, self-managed compositions that would empower even non-technical users to build their own orchestrations. Indeed, because of the changeable world in which they are embedded, service compositions must be able to adapt to changes that may happen at run time. Unfortunately, mainstream SOC languages, like BPEL and BPMN, make it quite hard to develop such self-adapting orchestrations. We claim that this is mostly due to the imperative programming paradigm they are based on. To overcome this limitation, we propose a radically different, strongly declarative approach to modeling service orchestrations, which is easier to use and results in more flexible, self-adapting orchestrations. An ad hoc engine, leveraging well-known planning techniques, interprets such models to support dynamic service orchestration at run time.
18F-FAC PET Visualizes Brain-Infiltrating Leukocytes in a Mouse Model of Multiple Sclerosis.
Brain-infiltrating leukocytes contribute to multiple sclerosis (MS) and autoimmune encephalomyelitis and likely play a role in traumatic brain injury, seizure, and stroke. Brain-infiltrating leukocytes are also primary targets for MS disease-modifying therapies. However, no method exists for noninvasively visualizing these cells in a living organism. 1-(2'-deoxy-2'-18F-fluoroarabinofuranosyl) cytosine (18F-FAC) is a PET radiotracer that measures deoxyribonucleoside salvage and accumulates preferentially in immune cells. We hypothesized that 18F-FAC PET could noninvasively image brain-infiltrating leukocytes. Methods: Healthy mice were imaged with 18F-FAC PET to quantify whether this radiotracer crosses the blood-brain barrier (BBB). Experimental autoimmune encephalomyelitis (EAE) is a mouse disease model with brain-infiltrating leukocytes. To determine whether 18F-FAC accumulates in brain-infiltrating leukocytes, EAE mice were analyzed with 18F-FAC PET, digital autoradiography, and immunohistochemistry, and deoxyribonucleoside salvage activity in brain-infiltrating leukocytes was analyzed ex vivo. Fingolimod-treated EAE mice were imaged with 18F-FAC PET to assess whether this approach can monitor the effect of an immunomodulatory drug on brain-infiltrating leukocytes. PET scans of individuals injected with 2-chloro-2'-deoxy-2'-18F-fluoro-9-β-d-arabinofuranosyl-adenine (18F-CFA), a PET radiotracer that measures deoxyribonucleoside salvage in humans, were analyzed to evaluate whether 18F-CFA crosses the human BBB. Results: 18F-FAC accumulates in the healthy mouse brain at levels similar to 18F-FAC in the blood (2.54 ± 0.2 and 3.04 ± 0.3 percentage injected dose per gram, respectively), indicating that 18F-FAC crosses the BBB. EAE mice accumulate 18F-FAC in the brain at 180% of the levels of control mice. Brain 18F-FAC accumulation localizes to periventricular regions with significant leukocyte infiltration, and deoxyribonucleoside salvage activity is present at similar levels in brain-infiltrating T cells and innate immune cells. These data suggest that 18F-FAC accumulates in brain-infiltrating leukocytes in this model. Fingolimod-treated EAE mice accumulate 18F-FAC in the brain at 37% lower levels than control-treated EAE mice, demonstrating that 18F-FAC PET can monitor therapeutic interventions in this mouse model. 18F-CFA accumulates in the human brain at 15% of blood levels (0.08 ± 0.01 and 0.54 ± 0.07 SUV, respectively), indicating that 18F-CFA does not cross the BBB in humans. Conclusion: 18F-FAC PET can visualize brain-infiltrating leukocytes in a mouse MS model and can monitor the response of these cells to an immunomodulatory drug. Translating this strategy into humans will require exploring additional radiotracers.
Development and Validation of a Spike Detection and Classification Algorithm Aimed at Implementation on Hardware Devices
Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-duration recordings and spike-sorting methods are needed. Therefore, on-line and real-time analysis, optimization of memory use, and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-based hierarchical classifier, evaluated on simulated and real signals. Finally, we propose a spike detection hardware design on FPGA, whose feasibility was verified in terms of the number of CLBs, memory occupation, and timing requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems.
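A minimal sketch of amplitude-threshold spike detection is shown below (illustrative only; the median-based noise estimate, the window length, and the refractory handling are assumptions, not necessarily what the authors implemented):

```python
import numpy as np

def detect_spikes(signal, fs, k=5.0, window_ms=2.0):
    """Amplitude-threshold spike detection (illustrative sketch only).

    The threshold is k times a robust noise estimate (median of the absolute
    signal divided by 0.6745); this specific rule is an assumption, not
    necessarily the one used in the paper.
    """
    noise_sigma = np.median(np.abs(signal)) / 0.6745
    threshold = k * noise_sigma
    half_win = int(window_ms * 1e-3 * fs / 2)

    crossings = np.flatnonzero(np.abs(signal) > threshold)
    spikes, last = [], -np.inf
    for idx in crossings:
        if idx - last < 2 * half_win:          # skip samples belonging to the same spike
            continue
        if half_win <= idx < len(signal) - half_win:
            spikes.append(signal[idx - half_win: idx + half_win])  # waveform cut-out
            last = idx
    return threshold, np.array(spikes)

# Synthetic test: Gaussian noise plus two injected spike-like deflections.
fs = 10_000
sig = np.random.default_rng(1).normal(0, 1, fs)
sig[3000], sig[7000] = 8.0, -9.0
thr, waveforms = detect_spikes(sig, fs)
print(thr, waveforms.shape)   # expect 2 detected waveforms
```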
MORPH: A Reference Architecture for Configuration and Behaviour Self-Adaptation
An architectural approach to self-adaptive systems involves runtime change of system configuration (i.e., the system's components, their bindings and operational parameters) and behaviour update (i.e., component orchestration). Thus, dynamic reconfiguration and discrete event control theory are at the heart of architectural adaptation. Although controlling configuration and behaviour at runtime has been discussed and applied to architectural adaptation, architectures for self-adaptive systems often compound these two aspects, reducing the potential for adaptability. In this paper we propose a reference architecture that allows for coordinated yet transparent and independent adaptation of system configuration and behaviour.
Asymmetric core combustion in neutron stars and a potential mechanism for gamma ray bursts
We study the transition of nuclear matter to strange quark matter (SQM) inside neutron stars (NSs). It is shown that the magnetic field expected to be present in NS interiors has a dramatic effect on the propagation of a laminar deflagration (widely studied so far), generating a strong acceleration of the flame in the polar direction. This results in a strong asymmetry in the geometry of the newly formed core of hot SQM, which resembles a cylinder oriented in the direction of the magnetic poles of the NS. This geometrical asymmetry gives rise to a bipolar emission of the thermal neutrino-antineutrino pairs produced in the process of SQM formation. The neutrino-antineutrino pairs annihilate into electron-positron pairs just above the polar caps of the NS, giving rise to a relativistic fireball, thus providing a suitable form of energy transport and conversion to gamma emission that may be associated with short gamma-ray bursts (GRBs).