    Analyzing Satellite Ocean Color Match-Up Protocols Using the Satellite Validation Navy Tool (SAVANT) at MOBY and Two AERONET-OC Sites

    The Satellite Validation Navy Tool (SAVANT) was developed by the Naval Research Laboratory to facilitate the assessment of the stability and accuracy of ocean color satellites, using numerous ground-truth (in situ) platforms around the globe, and to support methods for match-up protocols. The effects of varying spatial constraints with permissive and strict protocols on match-up uncertainty are evaluated, in an attempt to establish an optimal satellite ocean color calibration and validation (cal/val) match-up protocol. This allows users to evaluate the accuracy of ocean color sensors against specific ground-truth sites that provide continuous data. Various match-up constraints may be adjusted, allowing for varied evaluations of their effects on match-up data. The results include the following: (a) the difference between Aerosol Robotic Network Ocean Color (AERONET-OC) and Marine Optical Buoy (MOBY) evaluations; (b) the differences across the visible spectrum for various water types; (c) spatial differences and the size of the satellite area chosen for comparison; and (d) temporal differences in optically complex water. The match-up uncertainty analysis was performed using Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) data at the AERONET-OC sites and the MOBY site. It was found that the more permissive constraint sets allow for a higher number of match-ups and a more comprehensive representation of the conditions, while the restrictive constraints provide better statistical agreement between in situ and satellite sensors.
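
    The trade-off between permissive and strict constraint sets can be illustrated with a short sketch. The following is a hypothetical Python illustration, not SAVANT's actual implementation; the window sizes, thresholds, and function names are assumptions. It accepts a match-up only if the satellite overpass is close enough in time to the in situ measurement and the surrounding pixel box is sufficiently homogeneous:

        from dataclasses import dataclass
        from statistics import mean, stdev

        @dataclass
        class Protocol:
            max_time_diff_h: float  # max |satellite - in situ| time gap, hours
            box_size: int           # side of the satellite pixel box, e.g. 3 or 5
            max_cv: float           # max coefficient of variation over the box

        PERMISSIVE = Protocol(max_time_diff_h=8.0, box_size=5, max_cv=0.25)
        STRICT = Protocol(max_time_diff_h=3.0, box_size=3, max_cv=0.15)

        def match_up(box_pixels, time_diff_h, proto):
            """Return the box-mean value if this match-up passes, else None.

            box_pixels: valid pixel values already extracted for proto.box_size.
            """
            if time_diff_h > proto.max_time_diff_h:
                return None  # overpass too far in time from the in situ sample
            cv = stdev(box_pixels) / mean(box_pixels)  # spatial homogeneity test
            if cv > proto.max_cv:
                return None  # scene too heterogeneous around the site
            return mean(box_pixels)

    Loosening such thresholds admits more match-ups and a wider range of conditions at the cost of noisier comparisons; tightening them does the reverse, which is exactly the trade-off evaluated in the paper.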

    TURTLE-P: a UML profile for the formal validation of critical and distributed systems

    The Timed UML and RT-LOTOS Environment, or TURTLE for short, extends UML class and activity diagrams with composition and temporal operators. TURTLE is a real-time UML profile with a formal semantics expressed in RT-LOTOS, and it is supported by a formal validation toolkit. This paper introduces TURTLE-P, an extended profile no longer restricted to the abstract modeling of distributed systems. Indeed, TURTLE-P addresses the concrete description of communication architectures, including quality-of-service parameters (delay, jitter, etc.). This new profile enables the co-design of hardware and software components with extended UML component and deployment diagrams. Properties of these diagrams can be evaluated and/or validated thanks to the formal semantics given in RT-LOTOS. The application of TURTLE-P is illustrated with a telecommunication satellite system.
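
    As rough intuition for the quality-of-service parameters TURTLE-P attaches to communication architectures, the sketch below samples end-to-end delays as a base latency plus uniform jitter and counts deadline misses. All names and numbers are hypothetical, and TURTLE-P itself verifies such properties formally in RT-LOTOS rather than by sampling:

        import random

        def simulate_delays(base_delay_ms, jitter_ms, n=10_000, seed=0):
            """Sample end-to-end delays: fixed base latency plus uniform jitter."""
            rng = random.Random(seed)
            return [base_delay_ms + rng.uniform(-jitter_ms, jitter_ms)
                    for _ in range(n)]

        # Hypothetical satellite link: 250 ms base delay, +/- 20 ms jitter.
        delays = simulate_delays(base_delay_ms=250.0, jitter_ms=20.0)
        deadline_ms = 265.0
        misses = sum(d > deadline_ms for d in delays)
        print(f"deadline misses: {misses}/{len(delays)}")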

    Designing compliant business processes with obligations and permissions

    The sequence and timing constraints on the activities in business processes are an important aspect of business process compliance. To date, these constraints are most often implicitly transcribed into control-flow-based process models. This implicit representation of constraints, however, complicates the verification, validation and reuse in business process design. In this paper, we investigate the use of temporal deontic assignments on activities as a means to declaratively capture the control-flow semantics that reside in business regulations and business policies. In particular, we introduce PENELOPE, a language to express temporal rules about the obligations and permissions in a business interaction, and an algorithm to generate compliant sequence-flow-based process models that can be used in business process design.
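
    To make temporal deontic assignments concrete, here is a small sketch with an invented rule format (not PENELOPE's actual syntax): each obligation names a triggering activity, a required follow-up activity, and a relative deadline, and a timestamped trace is checked for violations:

        from dataclasses import dataclass

        @dataclass
        class Obligation:
            trigger: str   # activity that creates the obligation
            required: str  # activity that must then occur
            deadline: int  # time units allowed after the trigger

        # Hypothetical policy: after 'receive_order', an invoice is obliged
        # within 5 time units and shipment within 10.
        POLICY = [
            Obligation("receive_order", "send_invoice", 5),
            Obligation("receive_order", "ship_goods", 10),
        ]

        def violations(trace, policy):
            """trace: list of (time, activity) pairs. Return unmet obligations."""
            first = {}
            for t, act in trace:
                first.setdefault(act, t)  # time of first occurrence
            unmet = []
            for ob in policy:
                if ob.trigger not in first:
                    continue  # obligation was never triggered
                due = first[ob.trigger] + ob.deadline
                done = first.get(ob.required)
                if done is None or done > due:
                    unmet.append(ob)
            return unmet

        # The invoice is late and the goods are never shipped: both rules fire.
        print(violations([(0, "receive_order"), (7, "send_invoice")], POLICY))

    Generating a process model that is compliant by construction, as PENELOPE does, inverts this check: the obligations and permissions active at each point determine the allowed sequence flows.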

    Towards an HLA Run-time Infrastructure with Hard Real-time Capabilities

    Our work takes place in the context of the HLA standard and its application to real-time systems. The HLA standard is inadequate for taking into consideration the different constraints involved in real-time computer systems. Much work has been invested in providing real-time capabilities to Run Time Infrastructures (RTI) so that they can run real-time simulations. Most of these initiatives focus on major issues including QoS guarantees, knowledge of the Worst Case Transit Time (WCTT), and the scheduling services provided by the underlying operating systems. Even if our ultimate objective is to achieve real-time capabilities for distributed HLA federation executions, this paper describes preliminary work focusing on achieving hard real-time properties for HLA federations running on a single computer under the Linux operating system. Our paper proposes a novel global bottom-up approach for designing real-time Run Time Infrastructures and a formal model for the validation of uniprocessor and, subsequently, distributed real-time simulation with CERTI.
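
    As a flavor of what periodic execution with deadline monitoring involves on a single machine, here is an illustrative sketch (not CERTI code; the cycle length and function names are invented). On a general-purpose OS this only approximates periodic release; hard guarantees additionally require real-time scheduling and a bounded worst case for every step, which is what a bottom-up design must establish:

        import time

        PERIOD_S = 0.010  # hypothetical 10 ms federate cycle

        def federate_step():
            pass  # placeholder for attribute updates / time management calls

        def run(cycles=1000):
            next_release = time.monotonic()
            overruns = 0
            for _ in range(cycles):
                federate_step()
                next_release += PERIOD_S
                lag = time.monotonic() - next_release
                if lag > 0:
                    overruns += 1  # this cycle missed its deadline
                    next_release = time.monotonic()  # resynchronize schedule
                else:
                    time.sleep(-lag)  # wait for the next release point
            print(f"deadline overruns: {overruns}/{cycles}")

        run()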

    HLA high performance and real-time simulation studies with CERTI

    Our work takes place in the context of the HLA standard and its application to real-time systems. Indeed, the current HLA standard is inadequate for taking into consideration the different constraints involved in real-time computer systems. Much work has been invested in providing real-time capabilities to Run Time Infrastructures (RTI). This paper describes our approach focusing on achieving hard real-time properties for HLA federations, together with a complete state of the art of the related domain. Our paper also proposes a global bottom-up approach, from basic hardware and software requirements to experimental tests, for the validation of distributed real-time simulation with CERTI.
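
    A basic measurement underlying such studies is the distribution of message transit times, whose observed maximum gives an empirical (not analytical) bound. Below is a minimal sketch of timing round trips against a local echo server, with hypothetical host and port; the server itself is not shown, and this is not CERTI's benchmarking code:

        import socket
        import time

        def measure_rtt(host="127.0.0.1", port=9999, n=1000):
            """Time TCP round trips to an echo server; report the worst case."""
            rtts = []
            with socket.create_connection((host, port)) as s:
                for _ in range(n):
                    t0 = time.perf_counter()
                    s.sendall(b"ping")
                    s.recv(4)  # echo server assumed to return the 4 bytes
                    rtts.append(time.perf_counter() - t0)
            print(f"worst observed RTT: {max(rtts) * 1e3:.3f} ms")

    An observed maximum is only a lower bound on the Worst Case Transit Time; hard real-time guarantees require the WCTT to be established analytically, which is one of the difficulties these studies address.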

    Generalized Rank Pooling for Activity Recognition

    Most popular deep models for action recognition split video sequences into short sub-sequences consisting of a few frames; frame-based features are then pooled for recognizing the activity. Usually, this pooling step discards the temporal order of the frames, which could otherwise be used for better recognition. Towards this end, we propose a novel pooling method, generalized rank pooling (GRP), that takes as input features from the intermediate layers of a CNN trained on tiny sub-sequences, and produces as output the parameters of a subspace which (i) provides a low-rank approximation to the features and (ii) preserves their temporal order. We propose to use these parameters as a compact representation for the video sequence, which is then used in a classification setup. We formulate an objective for computing this subspace as a Riemannian optimization problem on the Grassmann manifold, and propose an efficient conjugate gradient scheme for solving it. Experiments on several activity recognition datasets show that our scheme leads to state-of-the-art performance. Comment: Accepted at IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 201
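
    As a simplified stand-in for the subspace representation (an unconstrained low-rank baseline with invented sizes, not the paper's Grassmannian conjugate-gradient solver): stack the per-frame CNN features of a clip, take the top-k singular vectors as an orthonormal basis, and use that basis as the clip descriptor. GRP additionally enforces the temporal-order constraint while optimizing on the Grassmann manifold:

        import numpy as np

        def subspace_descriptor(features, k=4):
            """features: (T, d) array of per-frame CNN features for one clip.

            Returns a (d, k) orthonormal basis whose span best approximates
            the frames in the least-squares sense (no order constraint here).
            """
            X = features - features.mean(axis=0)  # center the frame features
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            return vt[:k].T  # top-k right singular vectors as columns

        rng = np.random.default_rng(0)
        clip = rng.normal(size=(30, 512))  # 30 frames of 512-dim features
        U = subspace_descriptor(clip)      # (512, 4) clip descriptor
        print(U.shape, np.allclose(U.T @ U, np.eye(4)))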

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
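
    To give a concrete taste of the probabilistic side of quantitative verification, here is a minimal sketch (a toy model with invented numbers, not the algorithms of a production model checker): for a finite discrete-time Markov chain, the probability of eventually reaching a failure state satisfies a linear system that small models can solve directly:

        import numpy as np

        # Hypothetical 4-state chain: 0 = ok, 1 = degraded, 2 = failed,
        # 3 = safely shut down. States 2 and 3 are absorbing.
        P = np.array([
            [0.95, 0.04, 0.01, 0.00],
            [0.00, 0.70, 0.10, 0.20],
            [0.00, 0.00, 1.00, 0.00],
            [0.00, 0.00, 0.00, 1.00],
        ])
        transient = [0, 1]  # states not yet absorbed
        target = [2]        # the failure state

        # Reachability probabilities x solve (I - P_tt) x = P_tg . 1
        A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
        b = P[np.ix_(transient, target)].sum(axis=1)
        x = np.linalg.solve(A, b)
        print({s: round(p, 4) for s, p in zip(transient, x)})
        # {0: 0.4667, 1: 0.3333}: from 'ok', failure is eventually reached
        # with probability ~0.47 rather than a safe shutdown.

    A probabilistic model checker answers the same question from a property specification such as P=? [ F "failed" ], along with far richer time- and reward-bounded variants.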

    Towards the Model-Driven Engineering of Secure yet Safe Embedded Systems

    We introduce SysML-Sec, a SysML-based Model-Driven Engineering environment aimed at fostering the collaboration between system designers and security experts at all methodological stages of the development of an embedded system. A central issue in the design of an embedded system is the definition of the hardware/software partitioning of the architecture of the system, which should take place as early as possible. SysML-Sec aims to extend the relevance of this analysis through the integration of security requirements and threats. In particular, we propose an agile methodology whose aim is to assess early on the impact of the security requirements, and of the security mechanisms designed to satisfy them, on the safety of the system. Security concerns are captured in a component-centric manner through existing SysML diagrams with only minimal extensions. Once the captured requirements have been refined into security and cryptographic mechanisms, security properties can be formally verified over the design. To perform the latter, model transformation techniques are implemented in the SysML-Sec toolchain in order to derive a ProVerif specification from the SysML models. An automotive firmware flashing procedure serves as a guiding example throughout our presentation. Comment: In Proceedings GraMSec 2014, arXiv:1404.163
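
    As a toy illustration of the model-to-text step, one can imagine mapping each data item marked confidential in the SysML model to a private ProVerif name plus a secrecy query. This is a sketch under assumptions: the item names are invented for the flashing example, and the real SysML-Sec transformation also translates the behavioral diagrams, not just declarations:

        def to_proverif(channels, secrets):
            """Emit a skeletal ProVerif spec: declarations plus one
            secrecy query per confidential data item."""
            lines = [f"free {c}: channel." for c in channels]
            lines += [f"free {s}: bitstring [private]." for s in secrets]
            lines += [f"query attacker({s})." for s in secrets]
            lines.append("process 0")  # placeholder main process
            return "\n".join(lines)

        # Hypothetical items extracted from a model of the flashing procedure:
        print(to_proverif(channels=["can_bus"], secrets=["firmware_key"]))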