
    Edge-based Runtime Verification for the Internet of Things

    Complex distributed systems, such as those induced by Internet of Things (IoT) deployments, are expected to operate in compliance with their requirements. This can be checked by inspecting events flowing throughout the system, typically originating from end-devices and reflecting arbitrary actions, changes in state, or sensing. Such events reflect the behavior of the overall IoT system: they may indicate executions which satisfy or violate its requirements. This article presents a service-based software architecture and technical framework supporting runtime verification for widely deployed, volatile IoT systems. At the lowest level, the systems we consider consist of resource-constrained devices connected over wide area networks and generating events. In our approach, monitors are deployed on edge components, receiving events originating from end-devices or other edge nodes. Temporal logic properties expressing desired requirements are then evaluated on each edge monitor in a runtime fashion. The system exhibits decentralization since evaluation occurs locally on edge nodes, and verdicts possibly affecting the satisfaction of properties on other edge nodes are propagated accordingly. This reduces dependence on cloud infrastructures for IoT data collection and centralized processing. We illustrate how specification and runtime verification can be achieved in practice on a characteristic case study of smart parking. Finally, we demonstrate the feasibility of our design over a testbed instantiation, whereupon we evaluate the performance and capacity limits of different hardware classes under monitoring workloads of varying intensity using state-of-the-art LPWAN technology.
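    As a minimal illustration of the runtime-evaluation idea (the event names and the three-valued verdict domain are assumptions for this sketch, not the paper's actual specification language), a monitor for the response property "every request is eventually acknowledged" can be stepped over an event stream like this:

```python
from enum import Enum

class Verdict(Enum):
    TRUE = 1      # property irrevocably satisfied so far
    FALSE = 2     # property irrevocably violated
    UNKNOWN = 3   # no definitive verdict yet

class ResponseMonitor:
    """Runtime monitor for "every 'request' is eventually followed by an 'ack'"."""

    def __init__(self):
        self.pending = 0  # requests still awaiting acknowledgment

    def step(self, event):
        if event == "request":
            self.pending += 1
        elif event == "ack" and self.pending > 0:
            self.pending -= 1
        # A liveness property can only be violated at stream end, so the
        # intermediate verdict stays UNKNOWN while requests are pending.
        return Verdict.UNKNOWN if self.pending else Verdict.TRUE

    def end(self):
        # At end of trace, any outstanding request means a violation.
        return Verdict.FALSE if self.pending else Verdict.TRUE

m = ResponseMonitor()
verdicts = [m.step(e) for e in ["request", "ack", "request"]]
final = m.end()
```

    A deployment along the lines the abstract describes would run one such monitor per edge node and propagate non-UNKNOWN verdicts to the nodes whose properties depend on them.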

    Sabrina: Modeling and Visualization of Economy Data with Incremental Domain Knowledge

    Investment planning requires knowledge of the financial landscape on a large scale, both in terms of geo-spatial and industry sector distribution. There is plenty of data available, but it is scattered across heterogeneous sources (newspapers, open data, etc.), which makes it difficult for financial analysts to understand the big picture. In this paper, we present Sabrina, a financial data analysis and visualization approach that incorporates a pipeline for the generation of firm-to-firm financial transaction networks. The pipeline is capable of fusing the ground truth on individual firms in a region with (incremental) domain knowledge on general macroscopic aspects of the economy. Sabrina unites these heterogeneous data sources within a uniform visual interface that enables the visual analysis process. In a user study with three domain experts, we illustrate the usefulness of Sabrina, which eases their analysis process.

    Prediction of Solar Proton Event Fluence spectra from their Peak flux spectra

    Solar Proton Events (SPEs) are of great importance for the study of Space Weather and Heliophysics. These populations of protons are accelerated to high energies, ranging from a few MeV to hundreds of MeV, and, being ionizing radiation, can pose a significant hazard both to equipment on board spacecraft and to astronauts. The ongoing study of SPEs can help to understand their characteristics and the underlying physical mechanisms, and can aid the design of forecasting and nowcasting systems which provide warnings and predictions. In this work, we present a study on the relationships between the Peak flux and Fluence spectra of SPEs. This study builds upon existing work and provides further insights into the characteristics and relationships of SPE Peak flux and Fluence spectra. Moreover, it is shown how these relationships can be quantified in a sound manner and exploited in a simple methodology with which the Fluence spectrum of an SPE can be well predicted from its given Peak flux spectrum across two orders of magnitude of proton energies, from 5 MeV to 200 MeV. Finally, it is discussed how the methodology in this work can be easily applied to forecasting and nowcasting systems.
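    The abstract does not state the functional form of the quantified relationship; assuming, purely for illustration, a per-energy-channel power-law relation Fluence = a * Peak^b, fitting and prediction reduce to a straight-line fit in log-log space:

```python
import numpy as np

# Hypothetical per-channel relation fluence = a * peak**b, fitted as a
# line in log-log space. The data below is synthetic and noise-free; the
# paper's actual quantification may differ.
rng = np.random.default_rng(0)
peak = 10.0 ** rng.uniform(0, 3, size=50)   # peak fluxes, arbitrary units
fluence = 2.0e4 * peak ** 1.1               # synthetic "true" relation

b, log_a = np.polyfit(np.log10(peak), np.log10(fluence), 1)
a = 10.0 ** log_a

def predict_fluence(p):
    """Predict the fluence in this channel from a given peak flux."""
    return a * p ** b
```

    Repeating such a fit independently per proton energy channel would yield a predicted Fluence spectrum from a given Peak flux spectrum.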

    Static and dynamic balance deficiencies in chronic low back pain

    BACKGROUND: According to previously conducted studies, people with Low Back Pain (LBP) present with static balance deficiencies. OBJECTIVE: The aim of the present study was to compare static as well as dynamic balance ability between Chronic Low Back Pain (CLBP) and healthy subjects. METHODS: The CLBP group comprised 17 subjects and the control group 16 subjects, matched for age, BMI, and gender. The protocol applied compared balance ability when performing the Star Excursion Balance Test (SEBT) and the static 1-leg stance position. The innovation introduced in the protocol was that the participants performed not only the static 1-leg stance but also the dynamic SEBT on a force plate, which recorded the target sway (TS), i.e. the Center of Pressure (CoP) excursion. RESULTS: The CLBP group had significantly reduced performance in the SEBT, coupled with greater static and dynamic TS values. Age and especially BMI also had a significant effect on SEBT execution. The inclusion of SEBT and TS-derived scores in a stepwise logistic regression equation led to the correct classification of 85% of the subjects. CONCLUSIONS: Dynamic and static balance ability provide supplementary information for the identification of the presence of CLBP, with dynamic balance being more instrumental. © 2016 - IOS Press and the authors. All rights reserved.
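    The classification step can be sketched in miniature. Everything below is illustrative: the scores are synthetic, and plain batch gradient descent on a logistic model stands in for the study's stepwise procedure and its actual data:

```python
import numpy as np

# Synthetic standardized scores: SEBT reach (reduced in CLBP) and target
# sway (increased in CLBP); label 1 = CLBP, 0 = control.
rng = np.random.default_rng(1)
n = 100
sebt = np.concatenate([rng.normal(80, 5, n), rng.normal(95, 5, n)])
sway = np.concatenate([rng.normal(6, 1, n), rng.normal(4, 1, n)])
y = np.concatenate([np.ones(n), np.zeros(n)])

def zscore(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([np.ones(2 * n), zscore(sebt), zscore(sway)])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probability of CLBP
    w -= 0.1 * X.T @ (p - y) / len(y)   # gradient step on logistic loss

accuracy = np.mean(((X @ w) > 0) == y)
```

    The fitted weights carry the expected signs: negative for SEBT reach (lower reach points to CLBP) and positive for sway.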

    Efficient Architectures for Multigigabit CCSDS LDPC Encoders

    Quasi-cyclic low-density parity-check (QC-LDPC) codes have been adopted by the Consultative Committee for Space Data Systems (CCSDS) as the recommended standard for onboard channel coding in Near-Earth and Deep-Space communications. Encoder architectures proposed so far are not efficient for high-throughput hardware implementations targeting the specific CCSDS codes. In this article, we introduce a novel architecture for the multiplication of a dense quasi-cyclic (QC) matrix with a bit vector, which is the fundamental operation of QC-LDPC encoding. The architecture leverages the inherent parallelism of the QC structure by concurrently processing multiple bits, according to an optimized scheduling. Based on this architecture, we propose efficient encoders for CCSDS codes, according to all the applicable low-density parity-check (LDPC) code encoding methods. Moreover, in the special case of the code for Near-Earth communications, we also introduce a preprocessing algorithm to efficiently handle the challenges arising from the generator matrix's circulant size (511 bits). The proposed architectures have been implemented in various field-programmable gate array (FPGA) technologies and validated in a Zynq UltraScale+ multiprocessor system-on-chip (MPSoC), achieving a significant speedup compared with previous approaches, while at the same time keeping resource utilization low. © 1993-2012 IEEE
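    The fundamental operation named above can be sketched in software. Representing each circulant block by its first row (bit widths and names here are illustrative), the product of a circulant with a bit vector over GF(2) reduces to XOR-accumulating rotations of the vector, one per set bit of the first row; since the rotations are independent, hardware can process several of them in the same cycle, which is the parallelism the QC structure offers:

```python
def rotr(x, i, B):
    """Cyclically right-rotate a B-bit word x by i positions."""
    i %= B
    mask = (1 << B) - 1
    return ((x >> i) | (x << (B - i))) & mask

def circulant_mul(first_row, v, B):
    """Multiply a B x B circulant matrix over GF(2) by a bit vector v.

    Bit k of `first_row` holds entry C[0][k]; row i is the first row
    cyclically shifted by i. The product y satisfies
    y[i] = XOR over set bits k of first_row of v[(i + k) mod B],
    i.e. an XOR of rotated copies of v.
    """
    y = 0
    for k in range(B):
        if (first_row >> k) & 1:
            y ^= rotr(v, k, B)
    return y
```

    An encoder for a full QC matrix repeats this per circulant block and XORs the partial results into the parity vector.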

    High-Performance COTS FPGA SoC for Parallel Hyperspectral Image Compression with CCSDS-123.0-B-1

    Nowadays, hyperspectral imaging is recognized as a cornerstone remote sensing technology. Next-generation, high-speed airborne and space-borne imagers have increased resolution, resulting in an explosive growth in data volume and instrument data rate in the range of gigapixels per second. This competes with limited on-board resources and bandwidth, making hyperspectral image compression a mission-critical on-board processing task. At the same time, the 'new space' trend is emerging, where launch costs decrease and agile approaches are adopted, building smallsats using commercial-off-the-shelf (COTS) parts. In this contribution, we introduce a high-performance parallel implementation of the CCSDS-123.0-B-1 hyperspectral compression algorithm targeting SRAM field-programmable gate array (FPGA) technology. The architecture exploits image segmentation to provide robustness to data corruption and enables scalable throughput performance by leveraging segment-level parallelism. Furthermore, we exploit the capabilities of a COTS FPGA system-on-chip (SoC) device to optimize size, weight, power, and cost (SWaP-C). The architecture partitions a hyperspectral cube stored in a DRAM framebuffer into segments, compressing them in parallel using a flexible software scheduler hosted in the SoC CPU and several compressor accelerator cores in the FPGA fabric. A 5-core implementation demonstrated on a Zynq-7045 FPGA achieves a throughput performance of 1387 Msamples/s [22.2 Gb/s at 16 bits per pixel per band (bpppb)] and outperforms previous implementations in equivalent FPGA technology, allowing seamless integration with next-generation hyperspectral sensors. © 1993-2012 IEEE
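    The segment-level scheduling can be sketched in plain software. Both stand-ins below are illustrative assumptions: zlib plays the role of a CCSDS-123 compressor core, and a thread pool plays the role of the FPGA accelerator cores driven by the CPU-hosted scheduler:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def compress_cube(cube, n_segments, n_cores):
    """Partition a hyperspectral cube (bands x rows x cols) into
    independent row-wise segments and compress them in parallel on a
    pool of worker "cores", returning one bitstream per segment."""
    segments = np.array_split(cube, n_segments, axis=1)  # split along rows
    with ThreadPoolExecutor(max_workers=n_cores) as pool:
        return list(pool.map(lambda s: zlib.compress(s.tobytes()), segments))

cube = np.arange(4 * 16 * 8, dtype=np.uint16).reshape(4, 16, 8)
streams = compress_cube(cube, n_segments=5, n_cores=3)
```

    Because each segment's bitstream is self-contained, corruption of one stream leaves the other segments decodable, which is the robustness property the abstract attributes to segmentation.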

    A 3.3 Gbps CCSDS 123.0-B-1 Multispectral Hyperspectral Image Compression Hardware Accelerator on a Space-Grade SRAM FPGA

    The explosive growth of data volume from next-generation high-resolution and high-speed hyperspectral remote sensing systems will compete with the limited on-board storage resources and bandwidth available for the transmission of data to ground stations, making hyperspectral image compression a mission-critical and challenging on-board payload data processing task. The Consultative Committee for Space Data Systems (CCSDS) has issued recommended standard CCSDS-123.0-B-1 for lossless multispectral and hyperspectral image compression. In this paper, a hardware accelerator with very high data-rate performance is presented, implementing the CCSDS-123.0-B-1 algorithm as an IP core targeting a space-grade FPGA. For the first time, the introduced architecture, based on the principles of C-slow retiming, exploits the inherent task-level parallelism of the algorithm under BIP ordering and implements a reconfigurable fine-grained pipeline in critical feedback loops, achieving high throughput performance. The CCSDS-123.0-B-1 IP core achieves beyond the current state-of-the-art data-rate performance with a maximum throughput of 213 Msamples/s (3.3 Gbps @ 16 bits) using 11 percent of the LUTs and 27 percent of the BRAMs of the Virtex-5QV FPGA resources for a typical hyperspectral image, leveraging the full throughput of a single SpaceFibre lane. To the best of our knowledge, it is the fastest implementation of CCSDS-123.0-B-1 targeting a space-grade FPGA to date. © 2013 IEEE
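    The task-level parallelism mentioned above comes from the sample ordering itself: under band-interleaved-by-pixel (BIP) ordering, consecutive samples belong to different bands, so a per-band feedback loop sees its next same-band sample only `bands` positions later, leaving spare cycles a C-slowed pipeline can fill with the other bands' computations. A minimal sketch of the ordering:

```python
def bip_order(bands, rows, cols):
    """Yield (band, row, col) coordinates in band-interleaved-by-pixel
    (BIP) order: all bands of one pixel before moving to the next pixel."""
    for y in range(rows):
        for x in range(cols):
            for z in range(bands):
                yield (z, y, x)

order = list(bip_order(bands=3, rows=2, cols=2))
# Distance between two consecutive samples of the same band equals the
# number of bands -- the slack exploited by the C-slowed feedback loops.
gap = order.index((0, 0, 1)) - order.index((0, 0, 0))
```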

    Efficient field-programmable gate array implementation of CCSDS 121.0-B-2 lossless data compression algorithm for image compression

    The Consultative Committee for Space Data Systems (CCSDS) 121.0-B-2 lossless data compression standard defines a lossless adaptive source coding algorithm which is applicable to a wide range of imaging and nonimaging data. We introduce a field-programmable gate array (FPGA) implementation of CCSDS 121.0-B-2 as an intellectual property (IP) core with the following features: (a) it is enhanced with a two-dimensional (2-D) second-order predictor, making it more suitable for image compression, (b) it is enhanced with near-lossless compression functionality, (c) its parallel, pipelined architecture provides high data-rate performance with a maximum achievable throughput of 205 Msamples/s (3.2 Gbps at 16 bits) when targeting the Xilinx Virtex-5QV FPGA, and (d) it requires very low FPGA resources. When mission requirements impose lossless image compression, the CCSDS 121.0-B-2 IP core provides a very low implementation cost solution. According to the European Space Agency PROBA-3 Bridging Phase, the CCSDS 121.0-B-2 IP core will be implemented in a Microsemi RTAX2000 FPGA, hosted in the data processing unit of the Coronagraph Control Box of the Association of Spacecraft for Polarimetric and Imaging Investigation of the Corona of the Sun Coronagraph System Payload. To the best of our knowledge, it is the fastest FPGA implementation of CCSDS 121.0-B-2 to date, and the only one including a 2-D second-order predictor making it more suitable for image compression. © 2015 Society of Photo-Optical Instrumentation Engineers
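    The abstract does not specify the IP core's exact 2-D predictor; as an illustration only, a common second-order choice estimates each sample from its west (W), north (N) and north-west (NW) neighbours as W + N - NW, so that smooth images leave small residuals for the adaptive entropy coder to compress:

```python
import numpy as np

def residuals(img):
    """Prediction residuals of an illustrative 2-D second-order
    predictor: x_hat = W + N - NW in the interior, W on the first row,
    N on the first column (the actual predictor in the core may differ)."""
    img = img.astype(np.int64)
    pred = np.zeros_like(img)
    pred[0, 1:] = img[0, :-1]                              # first row: W
    pred[1:, 0] = img[:-1, 0]                              # first col: N
    pred[1:, 1:] = img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]
    return img - pred

ramp = np.add.outer(np.arange(4), np.arange(5)) * 3  # smooth test image
res = residuals(ramp)
```

    On a planar ramp like this one, the interior prediction is exact, so every interior residual is zero; only the first row and column carry the constant gradient.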

    Artificial intelligence unfolding for space radiation monitor data

    The reliable and accurate calculation of incident particle radiation fluxes from space radiation monitor measurements, i.e. count-rates, is of great interest and importance. Radiation monitors are relatively simple and easy to implement instruments found on board multiple spacecraft, and can thus provide information about the radiation environment in various regions of space, ranging from Low Earth orbit to missions at Lagrangian points and even interplanetary missions. However, the unfolding of fluxes from monitor count-rates, being an ill-posed inverse problem, is not trivial and is prone to serious errors due to the inherent difficulties present in such problems. In this work, we present a novel unfolding method which uses tools from the fields of Artificial Intelligence and Machine Learning to achieve good unfolding of monitor measurements. The unfolding method combines a Case Based Reasoning approach with a Genetic Algorithm, both of which are widely used. We benchmark the method on data from the European Space Agency's (ESA) Standard Radiation Environment Monitor (SREM) on board the INTEGRAL mission by calculating proton fluxes during Solar Energetic Particle Events and electron fluxes from measurements within the outer Radiation Belt. Extensive evaluation studies are made by comparing the unfolded proton fluxes with data from the SEPEM Reference Dataset v2.0 and the unfolded electron fluxes with data from the Van Allen Probes mission instruments, the Magnetic Electron Ion Spectrometer (MagEIS) and the Relativistic Electron Proton Telescope (REPT). © 2018 S. Aminalragia-Giamini et al., Published by EDP Sciences
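    A toy sketch of the unfolding idea follows. Everything in it is an illustrative assumption rather than the paper's pipeline: a made-up response matrix maps an incident spectrum to channel count-rates, the spectrum is parametrized as a power law, and a bare-bones genetic algorithm (no Case Based Reasoning component) searches for parameters whose forward-folded counts match the measurement:

```python
import numpy as np

rng = np.random.default_rng(2)
E = np.array([5.0, 10.0, 30.0, 100.0, 200.0])   # MeV energy grid
R = np.triu(np.ones((4, 5)) * 0.1)              # hypothetical 4-channel response

def forward(params):
    """Forward-fold a power-law spectrum A * E**-gamma into count-rates."""
    A, gamma = params
    return R @ (A * E ** -gamma)

measured = forward((1e4, 2.0))                  # synthetic "measurement"

def misfit(p):
    return np.sum((np.log(forward(p)) - np.log(measured)) ** 2)

def unfold(pop_size=60, generations=120):
    """Elitist GA: keep the best half, add mutated copies, repeat."""
    pop = np.column_stack([10 ** rng.uniform(2, 6, pop_size),
                           rng.uniform(0.5, 4.0, pop_size)])
    for _ in range(generations):
        err = np.array([misfit(p) for p in pop])
        best = pop[np.argsort(err)[:pop_size // 2]]       # selection
        noise = rng.normal(1.0, 0.05, best.shape)         # mutation
        pop = np.vstack([best, best * noise])
    return pop[np.argmin([misfit(p) for p in pop])]

A_hat, gamma_hat = unfold()
```

    Because the best individuals survive unmodified, the misfit is non-increasing across generations, and the recovered spectral index converges toward the value used to generate the synthetic measurement.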
