Modeling Interference for the Coexistence of 6G Networks and Passive Sensing Systems
Future wireless networks and sensing systems will benefit from access to
large chunks of spectrum above 100 GHz, to achieve terabit-per-second data
rates in 6th Generation (6G) cellular systems and improve accuracy and reach of
Earth exploration and sensing and radio astronomy applications. These are
extremely sensitive to interference from artificial signals, thus the spectrum
above 100 GHz features several bands which are protected from active
transmissions under current spectrum regulations. To provide more agile access
to the spectrum for both services, active and passive users will have to
coexist without harming passive sensing operations. In this paper, we provide
the first, fundamental analysis of Radio Frequency Interference (RFI) that
large-scale terrestrial deployments introduce in different satellite sensing
systems now orbiting the Earth. We develop a geometry-based analysis and extend
it into a data-driven model which accounts for realistic propagation, building
obstruction, and ground reflection, for network topologies with up to nodes in
more than km. We show that the presence of harmful RFI depends on
several factors, including network load, density and topology, satellite
orientation, and building density. The results and methodology provide the
foundation for the development of coexistence solutions and spectrum policy
towards 6G.
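The geometry-based part of such an analysis ultimately reduces to a link-budget check: does the power radiated by a terrestrial transmitter, attenuated over the slant path to the satellite, exceed the passive sensor's interference threshold? A minimal sketch of that computation is shown below, using the standard free-space path loss formula; the EIRP, altitude, carrier frequency, and receive gain values are purely illustrative assumptions, not figures from the paper.

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB, with distance in km and frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

def received_interference_dbm(eirp_dbm: float, distance_km: float,
                              freq_ghz: float, rx_gain_dbi: float) -> float:
    """Interference power reaching the passive sensor's receiver (dBm)."""
    return eirp_dbm + rx_gain_dbi - fspl_db(distance_km, freq_ghz)

# Illustrative numbers only: a terrestrial 6G node with 40 dBm EIRP,
# a sounder in low Earth orbit at ~500 km, 110 GHz carrier, 0 dBi RX gain.
p_rx = received_interference_dbm(40.0, 500.0, 110.0, 0.0)
# Compare p_rx against the sensor's protection threshold to flag harmful RFI.
```

A full analysis would replace the free-space term with the realistic propagation, building obstruction, and ground reflection effects the paper models, and aggregate over all active nodes in the deployment.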
Endomyocardial biopsy in acute myocarditis: For all patients.
Hybrid Point Cloud Semantic Compression for Automotive Sensors: A Performance Evaluation
In a fully autonomous driving framework, where vehicles operate without human intervention, information sharing plays a fundamental role. In this context, new network solutions have to be designed to handle the large volumes of data generated by the rich sensor suite of the cars in a reliable and efficient way. Among all the possible sensors, Light Detection and Ranging (LiDAR) can produce an accurate 3D point cloud representation of the surrounding environment, which in turn generates high data rates. For this reason, efficient point cloud compression is paramount to alleviate the burden of data transmission over bandwidth-constrained channels and to facilitate real-time communications. In this paper, we propose a pipeline to efficiently compress LiDAR observations in an automotive scenario. First, we leverage the capabilities of RangeNet++, a Deep Neural Network (DNN) used to semantically infer point labels, to reduce the channel load by selecting the most valuable environmental data to be disseminated. Second, we compress the selected points using Draco, a 3D compression algorithm which is able to obtain compression up to the quantization error. Our experiments, validated on the Semantic KITTI dataset, demonstrate that it is possible to compress and send the information at the frame rate of the LiDAR, thus achieving real-time performance.
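The two stages of this pipeline can be sketched in a few lines: filter points by predicted semantic label, then quantize coordinates so that reconstruction error is bounded by the quantization step. This is a simplified stand-in, not the actual RangeNet++/Draco implementation; the label IDs and the 1 cm quantization step are assumptions for illustration.

```python
import numpy as np

# Hypothetical label IDs worth transmitting (e.g. vehicles, pedestrians);
# the real RangeNet++/SemanticKITTI label map differs.
KEEP_LABELS = {1, 2}

def semantic_filter(points: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Keep only points whose predicted semantic label is of interest."""
    mask = np.isin(labels, list(KEEP_LABELS))
    return points[mask]

def quantize(points: np.ndarray, step: float = 0.01) -> np.ndarray:
    """Fixed-point quantization: error is bounded by step/2 (metres),
    mimicking Draco's quantization-bounded lossiness."""
    return np.round(points / step).astype(np.int32)

def dequantize(q: np.ndarray, step: float = 0.01) -> np.ndarray:
    return q.astype(np.float64) * step

pts = np.array([[1.234, 0.5, -2.0], [3.0, 3.0, 3.0], [0.1, 0.2, 0.3]])
lbl = np.array([1, 7, 2])
kept = semantic_filter(pts, lbl)      # drops the label-7 point
rec = dequantize(quantize(kept))      # reconstruction error <= step/2
```

The integer arrays produced by `quantize` are what an entropy coder such as Draco's would then compress for transmission.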
SELMA: SEmantic Large-Scale Multimodal Acquisitions in Variable Weather, Daytime and Viewpoints
Accurate scene understanding from multiple sensors mounted on cars is a key requirement for autonomous driving systems. Nowadays, this task is mainly performed through data-hungry deep learning techniques that need very large amounts of data to be trained. Due to the high cost of performing segmentation labeling, many synthetic datasets have been proposed. However, most of them miss the multi-sensor nature of the data, and do not capture the significant changes introduced by the variation of daytime and weather conditions. To fill these gaps, we introduce SELMA, a novel synthetic dataset for semantic segmentation that contains more than 30K unique waypoints acquired from 24 different sensors including RGB, depth, semantic cameras and LiDARs, in 27 different weather and daytime conditions, for a total of more than 20M samples. SELMA is based on CARLA, an open-source simulator for generating synthetic data in autonomous driving scenarios, that we modified to increase the variability and the diversity in the scenes and class sets, and to align it with other benchmark datasets. As shown by the experimental evaluation, SELMA allows the efficient training of standard and multi-modal deep learning architectures, and achieves remarkable results on real-world data. SELMA is free and publicly available, thus supporting open science and research.
Temporal Characterization and Prediction of VR Traffic: A Network Slicing Use Case
Over the past few years, the concept of Virtual Reality (VR) has attracted increasing interest thanks to its extensive industrial and commercial applications. Currently, the 3D models of the virtual scenes are generally stored in the VR visor itself, which operates as a standalone device. However, applications that entail multi-party interactions will likely require the scene to be processed by an external server and then streamed to the visors. The stringent Quality of Service (QoS) constraints imposed by VR's interactive nature then call for Network Slicing (NS) solutions, for which profiling the traffic generated by the VR application is crucial. To this end, we collected more than 4 hours of traces in a real setup and analyzed their temporal correlation, focusing on the CBR encoding mode, which should generate more predictable traffic streams. From the collected data, we then distilled two prediction models for future frame size, which can be instrumental in the design of dynamic resource allocation algorithms. Our results show that even the state-of-the-art H.264 CBR mode may have significant frame size fluctuations, impacting NS optimization. We then exploited the models to dynamically determine requirements in an NS scenario, providing the required QoS while minimizing resource usage.
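To make the idea of frame-size prediction for slice dimensioning concrete, the sketch below uses an exponentially weighted moving average as the predictor and provisions bandwidth with a fixed headroom over the prediction. Both the EWMA predictor and the 20% headroom are illustrative assumptions, not the models or policies from the paper.

```python
import numpy as np

def ewma_predict(frame_sizes, alpha=0.3):
    """One-step-ahead frame-size prediction via an exponentially
    weighted moving average (an illustrative stand-in predictor)."""
    pred = frame_sizes[0]
    preds = []
    for s in frame_sizes:
        preds.append(pred)               # prediction made before seeing s
        pred = alpha * s + (1 - alpha) * pred
    return np.array(preds), pred         # per-frame predictions, next-frame prediction

sizes = np.array([100.0, 120.0, 110.0, 130.0, 115.0])  # frame sizes, e.g. in kB
history_preds, next_pred = ewma_predict(sizes)

# Dimension the network slice with headroom over the predicted frame size:
slice_bw = 1.2 * next_pred
```

A dynamic allocation algorithm would re-run this at every frame, shrinking or growing the slice as the prediction evolves rather than reserving for the worst case.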
Point Cloud Compression for Efficient Data Broadcasting: A Performance Comparison
The worldwide commercialization of fifth generation (5G) wireless networks and the exciting possibilities offered by connected and autonomous vehicles (CAVs) are pushing toward the deployment of heterogeneous sensors for tracking dynamic objects in the automotive environment. Among them, Light Detection and Ranging (LiDAR) sensors are witnessing a surge in popularity as their application to vehicular networks seems particularly promising. LiDARs can indeed produce a three-dimensional (3D) mapping of the surrounding environment, which can be used for object detection, recognition, and topography. These data are encoded as a point cloud which, when transmitted, may pose significant challenges to the communication systems as it can easily congest the wireless channel. Along these lines, this paper investigates how to compress point clouds in a fast and efficient way. Both 2D- and 3D-oriented approaches are considered, and the performance of the corresponding techniques is analyzed in terms of (de)compression time, efficiency, and quality of the decompressed frame compared to the original. We demonstrate that, thanks to the matrix form in which LiDAR frames are saved, compression methods that are typically applied for 2D images give equivalent results, if not better, than those specifically designed for 3D point clouds.
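The key observation, that a LiDAR frame stored in matrix form can be fed to generic 2D/byte-stream codecs, can be illustrated with a toy range image and a general-purpose lossless compressor. The synthetic image and the use of `zlib` are assumptions for illustration; the paper evaluates actual image and point cloud codecs.

```python
import zlib
import numpy as np

# A LiDAR frame in matrix form: rows = laser channels, columns = azimuth
# steps. Here a synthetic, structured 64x1024 range image in millimetres.
rows, cols = np.meshgrid(np.arange(64), np.arange(1024), indexing="ij")
range_image = (2000 + 40 * rows
               + (500 * np.sin(cols / 40.0)).astype(int)).astype(np.uint16)

raw = range_image.tobytes()
compressed = zlib.compress(raw, level=6)   # generic 2D/byte-stream codec
ratio = len(raw) / len(compressed)

# Lossless round trip: the decompressed frame matches the original exactly.
restored = np.frombuffer(zlib.decompress(compressed),
                         dtype=np.uint16).reshape(64, 1024)
```

Because neighbouring cells of the range matrix are highly correlated, even a generic compressor exploits the 2D structure, which is the intuition behind the paper's finding that image-oriented methods are competitive with 3D-specific ones.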
Enabling simulation-based optimization through machine learning: A case study on antenna design
Complex phenomena are generally modeled with sophisticated simulators that, depending on their accuracy, can be very demanding in terms of computational resources and simulation time. Their time-consuming nature, together with a typically vast parameter space to be explored, makes simulation-based optimization often infeasible. In this work, we present a method that enables the optimization of complex systems through Machine Learning (ML) techniques. We show how well-known learning algorithms are able to reliably emulate a complex simulator with a modest dataset obtained from it. The trained emulator is then able to yield values close to the simulated ones in virtually no time. Therefore, it is possible to perform a global numerical optimization over the vast multi-dimensional parameter space, in a fraction of the time that would be required by a simple brute-force search. As a testbed for the proposed methodology, we used a network simulator for next-generation mmWave cellular systems. After simulating several antenna configurations and collecting the resulting network-level statistics, we fed them into our framework. Results show that, even with few data points, extrapolating a continuous model makes it possible to estimate the global optimum configuration almost instantaneously. The very same tool can then be used to achieve any further optimization goal on the same input parameters in negligible time.
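The emulate-then-optimize loop described above can be sketched in three steps: sample the expensive simulator sparsely, fit a cheap surrogate, and exhaustively search the surrogate. The toy simulator and the quadratic least-squares surrogate below are illustrative assumptions; the paper uses a real mmWave network simulator and ML regressors.

```python
import numpy as np

def expensive_simulator(x: float) -> float:
    """Stand-in for a slow simulator: a response surface with its
    global optimum at x = 2 (a made-up function for illustration)."""
    return -(x - 2.0) ** 2 + 3.0

# Step 1: collect a modest dataset from the simulator.
xs = np.linspace(0.0, 4.0, 9)
ys = np.array([expensive_simulator(x) for x in xs])

# Step 2: train a cheap emulator (quadratic least-squares fit here,
# standing in for the ML regressors used in the paper).
emulator = np.poly1d(np.polyfit(xs, ys, deg=2))

# Step 3: global search over a dense grid of the emulator,
# which evaluates in microseconds instead of full simulation runs.
grid = np.linspace(0.0, 4.0, 4001)
best_x = grid[np.argmax(emulator(grid))]
```

Once trained, the same emulator can serve any further optimization goal over the same parameters, which is exactly the reuse property the abstract highlights.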
P1234 Pocket complications after device-based therapy in patients with chronic heart failure
55 First-degree atrioventricular block on basal electrocardiogram predicts future arrhythmic events in patients with Brugada syndrome
Passive transfer of affinity-purified anti-heart autoantibodies (AHA) from sera of patients with myocarditis induces experimental myocarditis in mice.
BACKGROUND:
Human autoimmune myocarditis is characterized by an increased frequency of serum organ- and disease-specific anti-heart autoantibodies (AHA) in affected patients. To assess whether AHA are directly pathogenic, we passively transferred AHA from patients to normal Balb/c mice to induce an experimental myocarditis.
METHODS:
In keeping with a classical passive transfer experiment, sera from 5 AHA-positive myocarditis patients (3 male, mean age 30 ± 11 years, 3 with giant cell and 2 with lymphocytic myocarditis) were affinity purified and injected into 25 Balb/c mice. As controls, affinity-purified sera from 5 healthy donors were passively transferred to 25 Balb/c mice. A further 15 control mice were injected with phosphate-buffered saline, and 9 mice did not receive any injection. In all patients, cardiac-specific AHA of IgG class had been previously detected by an indirect immunofluorescence (IFL) technique on cryostat sections of O blood group human heart. The animals were sacrificed after 4 weeks, and the hearts were blindly examined for histological evidence of myocarditis by an expert cardiac pathologist.
RESULTS:
Myocarditis was present in 13/25 (52%) of the mice which received affinity-purified IgG from patients. The findings of severe, moderate, or mild myocarditis were more common in the mice which received affinity-purified IgG from patients (20%, 20%, and 12%) than in control animals (2%, p=0.01; 0%, p=0.003; and 0%, p=0.04, respectively).
CONCLUSIONS:
These findings provide new evidence for AHA-mediated pathogenicity in human myocarditis, according to the Rose-Witebsky criteria.
