Compressed absorbing boundary conditions via matrix probing
Absorbing layers are sometimes required to be impractically thick in order to offer an accurate approximation of an absorbing boundary condition for the Helmholtz equation in a heterogeneous medium. It is always possible to reduce an absorbing layer to an operator at the boundary by layer-stripping elimination of the exterior unknowns, but the linear algebra involved is costly. We propose to bypass the elimination procedure and directly fit the surface-to-surface operator in compressed form from a few exterior Helmholtz solves with random Dirichlet data. The result is a concise description of the absorbing boundary condition, with a complexity that grows slowly (often, logarithmically) in the frequency parameter.
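The fitting step can be pictured, in a generic and much-simplified form, as ordinary least squares over a small basis of candidate matrices, driven by a handful of random probes. The sketch below only illustrates this matrix-probing idea under assumed names (probe_operator, apply_D, basis); it is not the paper's basis construction or Helmholtz solver.

import numpy as np

def probe_operator(apply_D, basis, n, num_probes=3, seed=0):
    """Fit coefficients c so that D ~= sum_j c[j] * basis[j], using only a few
    applications of D to random vectors (standing in for exterior Helmholtz
    solves with random Dirichlet data)."""
    rng = np.random.default_rng(seed)
    rows, rhs = [], []
    for _ in range(num_probes):
        g = rng.standard_normal(n)                        # random boundary data
        rows.append(np.column_stack([B @ g for B in basis]))
        rhs.append(apply_D(g))                            # one (expensive) exterior solve
    A = np.vstack(rows)                                   # (num_probes * n, p) least-squares system
    b = np.concatenate(rhs)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c                                              # compressed description of the operator

With a well-chosen basis, the number of probes, and hence of exterior solves, stays small.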
Formal Safety and Security Assessment of an Avionic Architecture with Alloy
We propose an approach based on Alloy to formally model and assess a system architecture with respect to safety and security requirements. We illustrate this approach by considering as a case study an avionic system developed by Thales, which provides guidance to aircraft. We show how to define in Alloy a metamodel of avionic architectures with a focus on failure propagations. We then express the specific architecture of the case study in Alloy. Finally, we express and check properties that refer to the robustness of the architecture to failures and attacks.
Synthesis of a 1-boratabenzene-(2,3,4,5-tetramethylphosphole): towards a planar monophosphole
Novel boratabenzene–phosphole complexes have been prepared and structurally characterized. The electronic communication between the two heterocyclic rings linked by a P–B bond and the aromaticity of these systems were probed using crystallographic and density functional studies.
Budgeting Under-Specified Tasks for Weakly-Hard Real-Time Systems
In this paper, we present an extension of slack analysis for budgeting in the design of weakly-hard real-time systems. During design, it often happens that some parts of a task set are fully specified while other parameters, e.g., regarding recovery or monitoring tasks, will become available only much later. In such cases, slack analysis can help anticipate how these missing parameters can influence the behavior of the whole system, so that a resource budget can be allocated to them. It is, however, sufficient in many application contexts to budget these tasks so as to preserve weakly-hard rather than hard guarantees. We thus present an extension of slack analysis for deriving task budgets for systems with hard and weakly-hard requirements. This work is motivated by and validated on a realistic case study inspired by industrial practice.
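As a point of reference for the classical setting this work extends, a budget for a not-yet-specified task can be found with standard fixed-priority response-time analysis: grow the candidate WCET until some deadline would be missed. The sketch below is a simplified, hard-deadline illustration under an assumed task-set representation and names (response_time, hard_budget); the weakly-hard (m,k) budgets contributed by the paper are not reproduced here.

import math

def response_time(task, hp):
    """Fixed-priority response-time iteration (independent periodic tasks,
    constrained deadlines, no jitter or blocking)."""
    R = task["C"]
    while True:
        R_next = task["C"] + sum(math.ceil(R / t["T"]) * t["C"] for t in hp)
        if R_next > task["D"]:
            return math.inf                  # deadline would be missed
        if R_next == R:
            return R
        R = R_next

def hard_budget(tasks, prio, T_new, D_new, step=0.5):
    """Largest WCET a new task inserted at priority index `prio` can receive
    while every task (existing and new) still meets its hard deadline."""
    C_new = 0.0
    while True:
        candidate = tasks[:prio] + [{"C": C_new + step, "T": T_new, "D": D_new}] + tasks[prio:]
        ok = all(response_time(t, candidate[:i]) <= t["D"] for i, t in enumerate(candidate))
        if not ok:
            return C_new
        C_new += step

# Two fully specified tasks (highest priority first) and a lowest-priority slot
# reserved for a monitoring task with period and deadline 20:
tasks = [{"C": 1.0, "T": 5.0, "D": 5.0}, {"C": 2.0, "T": 10.0, "D": 10.0}]
print(hard_budget(tasks, prio=2, T_new=20.0, D_new=20.0))   # -> 12.0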
Mega-modeling of complex, distributed, heterogeneous CPS systems
Model-Driven Design (MDD) has proven to be a powerful technology to address the development of increasingly complex embedded systems. Beyond complexity itself, challenges come from the need to deal with parallelism and heterogeneity. System design must target different execution platforms with different OSs and HW resources, even bare-metal, support local and distributed systems, and integrate on top of these heterogeneous platforms multiple functional components coming from different sources (developed from scratch, legacy code and third-party code), with different behaviors operating under different models of computation and communication. Additionally, system optimization to improve performance, power consumption, cost, etc. requires analyzing huge numbers of possible design solutions. Addressing these challenges requires flexible design technologies able to support, from a single-source model, the architectural mapping to computing resources of different kinds and on different platforms. Traditional MDD methods and tools typically rely on fixed elements, which makes their integration under this variability difficult. For example, integrating legacy code with a third-party component in the same system is rarely straightforward; usually some re-coding is required to enable such interconnection. This paper proposes a UML/MARTE system modeling methodology able to address the challenges mentioned above by improving flexibility and scalability. This approach is illustrated and demonstrated on a flight management system. The model is flexible enough to be adapted to different architectural solutions with minimal effort by changing its underlying Model of Computation and Communication (MoCC). Being completely platform independent, the same model can be used to explore various solutions on different execution platforms. This work has been partially funded by the EU and the Spanish MICINN through the ECSEL MegaMart and Comp4Drones projects and the TEC2017-86722-C4-3-R PLATINO project.
Tracing Hardware Monitors in the GR712RC Multicore Platform: Challenges and Lessons Learnt from a Space Case Study
The demand for increased computing performance is driving industry in critical embedded systems (CES) domains, e.g. space, towards the use of multicore processors. Multicores, however, pose several challenges that must be addressed before their safe adoption in critical embedded domains. One of the prominent challenges is software timing analysis, a fundamental step in the verification and validation process. Monitoring and profiling solutions, traditionally used for debugging and optimization, are increasingly exploited for software timing analysis in multicores. In particular, hardware event monitors related to requests to shared hardware resources are a building block for assessing and restraining multicore interference. Modern timing analysis techniques build on event monitors to track and control the contention that tasks can generate on each other in a multicore platform. In this paper we look into the hardware profiling problem from an industrial perspective and address both methodological and practical problems when monitoring a multicore application. We assess the pros and cons of several profiling and tracing solutions, showing that several aspects need to be taken into account when choosing the appropriate mechanism to collect and extract profiling information from a multicore COTS platform. We address the profiling problem on a representative COTS platform for the aerospace domain and find that the availability of directly accessible hardware counters is not a given, and that it may be necessary to develop specific tools that capture the requirements of both the user and the timing analysis technique. We report challenges in developing an event monitor tracing tool that works for bare-metal and RTEMS configurations, and show the accuracy of the developed tool-set in profiling a real aerospace application. We also show how the profiling tools can be exploited, together with handcrafted benchmarks, to characterize the application behavior in terms of multicore timing interference. This work has been partially supported by a collaboration agreement between Thales Research and the Barcelona Supercomputing Center, and by the European Research Council (ERC) under the EU's Horizon 2020 research and innovation programme (grant agreement No. 772773). MINECO partially supported Jaume Abella under a Ramon y Cajal postdoctoral fellowship (RYC2013-14717).
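A purely illustrative sketch of the kind of post-processing such a tracing tool-set enables: turning periodic samples of cumulative hardware event counters into per-task counts of requests to a shared resource. The trace format, task names, and event name below are assumptions made for the example, not the GR712RC tool described in the paper.

from collections import defaultdict

def summarize_trace(samples):
    """samples: iterable of (timestamp, task, event, cumulative_count).
    Returns total counter increments per (task, event) over the trace."""
    last, totals = {}, defaultdict(int)
    for ts, task, event, count in sorted(samples):
        key = (task, event)
        if key in last:
            totals[key] += count - last[key]    # delta since the previous sample
        last[key] = count
    return dict(totals)

# Hypothetical two-task trace sampling a shared-bus request counter
trace = [
    (0.0, "control", "bus_requests", 0), (1.0, "control", "bus_requests", 420),
    (0.0, "payload", "bus_requests", 0), (1.0, "payload", "bus_requests", 9800),
]
print(summarize_trace(trace))   # per-task bus-request totals over the window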
Quantifying the Flexibility of Real-Time Systems
In this paper we define the flexibility of a system as its capability to schedule a new task. We present an approach to quantify the flexibility of a system. More importantly, we show that it is possible, under certain conditions, to identify the task that will directly induce the limitations on a possible software update. If performed at design time, such a result can be used to adjust the system design by giving more slack to the limiting task. We illustrate how these results apply to a simple system.
Development of a MSW gasification model for flexible integration into a MFA-LCA framework
This paper presents the development of a comprehensive gasification module designed to be integrated into an MFA-LCA framework. From the existing gasification models in the literature, the most appropriate modelling strategy is selected and implemented in the module. This module needs to be able to capture the influence of input parameters, such as gasification reactor type, oxidizing agent, feedstock composition and operating conditions, on the process outputs, including syngas yield, its composition and LHV, as well as tar and char contents. A typical gasification process is usually modelled in four steps: drying, pyrolysis, oxidation and reduction. Models representing each of these steps are presented in this paper. Since the type of gasification reactor is taken into account in the module, models for downdraft moving bed and bubbling fluidized bed reactors are also reviewed. The gasification module will be integrated into an MFA framework (VMR-Sys), which enables calculation of relevant gasifier feedstock parameters, such as moisture content, composition, properties and particle size distribution. Outputs from the module will also include elemental compositions obtained from VMR-Sys calculations. Finally, all outputs from the module will be used to build LCA inventory data.
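To make the intended interface concrete, here is a minimal structural sketch of what such a module could look like: feedstock parameters in, staged sub-models applied in sequence. The class and function names, fields, and the trivial drying mass balance are assumptions for illustration, not the VMR-Sys module or the reactor-specific sub-models reviewed in the paper.

from dataclasses import dataclass

@dataclass
class Feedstock:
    mass: float        # kg, as received
    moisture: float    # water mass fraction
    # dry elemental composition (C, H, O, ...) would be supplied by VMR-Sys

def drying(feed: Feedstock, residual_moisture: float = 0.0):
    """Simple drying mass balance: remove water down to `residual_moisture`.
    Returns the dried feedstock and the mass of water evaporated."""
    water_in = feed.mass * feed.moisture
    dry_mass = feed.mass - water_in
    water_out = dry_mass * residual_moisture / (1.0 - residual_moisture)
    dried = Feedstock(mass=dry_mass + water_out, moisture=residual_moisture)
    return dried, water_in - water_out

def gasify(feed: Feedstock):
    """Four-step skeleton: drying -> pyrolysis -> oxidation -> reduction.
    Only drying is implemented; the other stages are placeholders for the
    reactor-specific sub-models (downdraft moving bed, bubbling fluidized bed)."""
    dried, steam = drying(feed)
    # pyrolysis(dried), oxidation(...), reduction(...) would follow here,
    # returning syngas yield, composition, LHV, and tar/char contents.
    return {"dried_feed": dried, "evaporated_water_kg": steam}

print(gasify(Feedstock(mass=100.0, moisture=0.25)))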
Non-contrast CT markers of intracerebral hematoma expansion: a reliability study
Objectives: We evaluated whether clinicians agree in the detection of non-contrast CT markers of intracerebral hemorrhage (ICH) expansion.
Methods: From our local dataset, we randomly sampled 60 patients diagnosed with spontaneous ICH. Fifteen physicians and trainees (Stroke Neurology, Interventional and Diagnostic Neuroradiology) were trained to identify six density (Barras density, black hole, blend, hypodensity, fluid level, swirl) and three shape (Barras shape, island, satellite) expansion markers, using standardized definitions. Thirteen raters performed a second assessment. Inter- and intra-rater agreement were measured using Gwet's AC1, with a coefficient > 0.60 indicating substantial to almost perfect agreement.
Results: Almost perfect inter-rater agreement was observed for the swirl (0.85, 95% CI: 0.78-0.90) and fluid level (0.84, 95% CI: 0.76-0.90) markers, while the hypodensity (0.67, 95% CI: 0.56-0.76) and blend (0.62, 95% CI: 0.51-0.71) markers showed substantial agreement. Inter-rater agreement was otherwise moderate, and comparable between density and shape markers. Inter-rater agreement was lower for the three markers that require the rater to identify one specific axial slice (Barras density, Barras shape, island: 0.46, 95% CI: 0.40-0.52 versus others: 0.60, 95% CI: 0.56-0.63). Inter-observer agreement did not differ when stratified by raters' experience, hematoma location, volume or anticoagulation status. Intra-rater agreement was substantial to almost perfect for all but the black hole marker.
Conclusion: In a large sample of raters with different backgrounds and expertise levels, only four of nine non-contrast CT markers of ICH expansion showed substantial to almost perfect inter-rater agreement.
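For readers unfamiliar with the agreement statistic, the sketch below computes Gwet's AC1 in its simplest form, two raters and one binary marker (present/absent); the study itself used fifteen raters and the multi-rater formulation, and the ratings in the example are hypothetical.

def gwet_ac1(r1, r2):
    """Gwet's AC1 for two raters and one binary marker (1 = present, 0 = absent)."""
    n = len(r1)
    pa = sum(a == b for a, b in zip(r1, r2)) / n      # observed agreement
    pi = (sum(r1) + sum(r2)) / (2 * n)                # mean prevalence of "present"
    pe = 2 * pi * (1 - pi)                            # AC1 chance-agreement term
    return (pa - pe) / (1 - pe)

# Hypothetical ratings of one marker on 10 scans by two raters
rater1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
rater2 = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(gwet_ac1(rater1, rater2), 2))             # 0.82, above the 0.60 threshold used in the study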