26 research outputs found

    Design of an Integrated Flowmeter for Microfluidic Systems for Genetic Analysis

    Get PDF
    In recent years there has been a growing drive towards the development and fabrication of microsystems capable of measuring and controlling the flow of gases and liquids; indeed, it has been estimated that in 2000 microfluidic sensors accounted for 19% of the MEMS (micro-electro-mechanical systems) market. Thanks to their extremely small dimensions, these sensors have the great advantage of excellent spatial and temporal resolution, enabling the measurement of very small flows with velocities of the order of millimetres per second. For this reason they lend themselves to integration in microsystems for genetic analysis, where directing and controlling the movement of organic fluids and solutions is of fundamental importance. A microsystem able to perform genetic analyses almost autonomously could replace traditional instrumentation, combining small size and portability with high analysis speed and low cost, owing both to the minimal amounts of reagents and samples required and to the possibility of large-scale production using existing microelectronic technologies. This thesis falls within this field and describes the design of a flowmeter for use in microsystems for genetic analysis. Specifically, the device is a sensor for measuring the electroosmotic flow in a glass microcapillary for electrophoretic analysis. With minor modifications, however, the structure can be used in other applications to measure gas or liquid flows. The thesis is organised into five chapters. The first gives a general introduction to MEMS and a more detailed description of integrated flow sensors. The second chapter describes microsystems for genetic analysis and the importance of controlling electrokinetic phenomena in microchannels, and briefly describes the fabrication technologies used. The third chapter focuses on a theoretical analysis of electrokinetic phenomena in microfluidics, from the formation of the electric double layer on the inner walls of microchannels to the study of the streaming potential, electrophoresis and electroosmosis. This knowledge is essential for simulating the various electrokinetic phenomena and for studying their effect in combination with heat convection and conduction in microchannels. The last two chapters describe the design of the sensor using a commercial finite-element simulator, Femlab® 3.1. In particular, the fourth chapter reports simulations of different sensor configurations and the dependence of the sensitivity on the geometric parameters of the structure and on the materials used. The fifth chapter presents the final configuration adopted for the flowmeter, its dimensions and the materials used. Based on these data, further simulations are presented to predict the behaviour of the real device and to optimise it, yielding the final information for the layout design. In conclusion, the final layout is shown together with the corresponding masks for optical lithography, and the fabrication process of the device is described.
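    For reference, the electroosmotic flow this sensor is designed to measure is governed, in the thin-double-layer limit, by the classical Helmholtz-Smoluchowski relation (standard electrokinetic theory, not a result specific to this thesis):

```latex
% Helmholtz-Smoluchowski electroosmotic slip velocity (standard theory):
% \varepsilon = fluid permittivity, \zeta = wall zeta potential,
% \mu = dynamic viscosity, E = applied axial electric field.
u_{\mathrm{eo}} = -\frac{\varepsilon\,\zeta}{\mu}\,E
```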

    A framework to analyze noise factors of automotive perception sensors

    Get PDF
    Automated vehicles (AVs) are one of the breakthroughs of this century. The main argument to support their development is increased safety and the reduction of human and economic losses; however, to demonstrate that AVs are safer than human drivers, billions of miles of testing are required. Thus, realistic simulation and virtual testing of AV systems and sensors are crucial to accelerate technological readiness. In particular, perception sensor measurements are affected by uncertainties due to noise factors; these uncertainties need to be included in simulations. This letter presents a framework to exhaustively analyze and simulate the effect of combinations of noise factors on sensor data. We applied the framework to analyze one sensor, the light detection and ranging (LiDAR) sensor, but it can be easily adapted to study other sensors. Results demonstrate that single-noise-factor analysis gives an incomplete picture of measurement degradation, and that perception is dramatically hindered when multiple noise factors are combined. The proposed framework is a powerful tool to predict the degradation of AV sensor performance.
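    As a minimal sketch of the idea of combining noise factors on LiDAR data (function names and noise parameters below are illustrative, not taken from the letter), two factors, random point dropout and additive range noise, can be applied jointly to a point cloud:

```python
import numpy as np

def apply_combined_noise(points, dropout_prob=0.1, range_sigma=0.02, rng=None):
    """Apply a combination of noise factors to an (N, 3) LiDAR point cloud.

    Illustrative only: the letter's noise models are richer; here we combine
    random point dropout with additive range (distance) noise along each ray.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Noise factor 1: random dropout (e.g. absorption or low reflectivity).
    keep = rng.random(len(points)) > dropout_prob
    pts = points[keep]
    # Noise factor 2: perturb each return along its ray direction.
    ranges = np.linalg.norm(pts, axis=1, keepdims=True)
    directions = pts / np.maximum(ranges, 1e-9)
    noisy_ranges = ranges + rng.normal(0.0, range_sigma, ranges.shape)
    return directions * noisy_ranges

cloud = np.random.uniform(-50, 50, (10000, 3))  # synthetic stand-in cloud
degraded = apply_combined_noise(cloud)
print(cloud.shape, "->", degraded.shape)
```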

    Taming Transformers for Realistic Lidar Point Cloud Generation

    Full text link
    Diffusion Models (DMs) have achieved State-Of-The-Art (SOTA) results in the Lidar point cloud generation task, benefiting from their stable training and iterative refinement during sampling. However, DMs often fail to realistically model Lidar raydrop noise due to their inherent denoising process. To retain the strength of iterative sampling while enhancing the generation of raydrop noise, we introduce LidarGRIT, a generative model that uses auto-regressive transformers to iteratively sample the range images in the latent space rather than image space. Furthermore, LidarGRIT utilises VQ-VAE to separately decode range images and raydrop masks. Our results show that LidarGRIT achieves superior performance compared to SOTA models on the KITTI-360 and KITTI odometry datasets. Code available at: https://github.com/hamedhaghighi/LidarGRIT.
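    A minimal PyTorch-style sketch of the two-stage idea described above, assuming toy stand-in modules (TinyTokenSampler and TinyDecoder are hypothetical names, not the authors' code): an autoregressive sampler produces latent tokens, and a VQ-VAE-style decoder maps them to a range image plus a separate raydrop mask:

```python
import torch
import torch.nn as nn

class TinyTokenSampler(nn.Module):
    """Stand-in for the autoregressive transformer over latent tokens."""
    def __init__(self, vocab=512, seq_len=64, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)
        self.seq_len = seq_len

    @torch.no_grad()
    def sample(self):
        tokens = torch.zeros(1, 1, dtype=torch.long)  # start token
        for _ in range(self.seq_len):
            # Sample the next token conditioned on the last one (toy model).
            logits = self.head(self.embed(tokens[:, -1:]))
            nxt = torch.multinomial(logits.softmax(-1).squeeze(1), 1)
            tokens = torch.cat([tokens, nxt], dim=1)
        return tokens[:, 1:]

class TinyDecoder(nn.Module):
    """Stand-in for the VQ-VAE decoder with two separate output heads."""
    def __init__(self, vocab=512, dim=128, hw=(8, 8)):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.range_head = nn.Linear(dim, 1)  # per-token range value
        self.drop_head = nn.Linear(dim, 1)   # per-token raydrop logit
        self.hw = hw

    def forward(self, tokens):
        z = self.embed(tokens)
        range_image = self.range_head(z).view(1, *self.hw)
        raydrop_mask = (torch.sigmoid(self.drop_head(z)) > 0.5).view(1, *self.hw)
        return range_image, raydrop_mask

tokens = TinyTokenSampler().sample()
range_image, raydrop_mask = TinyDecoder()(tokens)
print(range_image.shape, raydrop_mask.shape)
```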

    A novel score-based LiDAR point cloud degradation analysis method

    Get PDF
    Assisted and automated driving systems critically depend on high-quality sensor data to build accurate situational awareness. A key aspect of maintaining this quality is the ability to quantify perception sensor degradation by detecting dissimilarities in sensor data. Amongst the various perception sensors, LiDAR technology has gained traction due to a significant reduction in its cost and the benefit of providing a detailed 3D understanding of the environment (the point cloud). However, measuring the dissimilarity between LiDAR point clouds, especially in the context of data degradation due to noise factors, has been underexplored in the literature. A comprehensive point cloud dissimilarity score metric is essential for detecting severe sensor degradation, which could lead to hazardous events due to the compromised performance of perception tasks. Additionally, such a score metric plays a central role in the use of virtual sensor models, where thorough validation of sensor models is required for accuracy and reliability. To address this gap, this paper introduces a novel framework that evaluates point cloud dissimilarity based on high-level geometries. In contrast to traditional methods such as the computationally expensive Hausdorff metric, which involves correspondence-search algorithms, our framework uses a tailored downsampling method to ensure efficiency; the point clouds are then condensed into shape signatures, enabling efficient comparison. In addition to controlled simulations, our framework demonstrated repeatability, robustness and consistency in highly noisy real-world scenarios, surpassing traditional methods.
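    The paper's shape-signature construction is not reproduced here, but a classic stand-in, the D2 shape distribution (a histogram of random pairwise distances), illustrates how a point cloud can be condensed into a signature and compared without correspondence search; all parameters below are illustrative:

```python
import numpy as np

def shape_signature(points, n_pairs=5000, bins=64, r_max=100.0, rng=None):
    """Condense an (N, 3) point cloud into a 1-D shape signature.

    Illustrative stand-in for the paper's signature: a normalised histogram
    of random pairwise distances (the classic D2 shape distribution).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, r_max), density=True)
    return hist

def dissimilarity_score(cloud_a, cloud_b):
    """L1 distance between signatures: 0 means identical signatures."""
    return np.abs(shape_signature(cloud_a) - shape_signature(cloud_b)).sum()

clean = np.random.uniform(-50, 50, (20000, 3))   # synthetic reference cloud
noisy = clean + np.random.normal(0, 1.0, clean.shape)
print(dissimilarity_score(clean, noisy))
```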

    Benchmarking the Robustness of Panoptic Segmentation for Automated Driving

    Full text link
    Precise situational awareness is required for the safe decision-making of assisted and automated driving (AAD) functions. Panoptic segmentation is a promising perception technique to identify and categorise objects, impending hazards, and driveable space at a pixel level. While segmentation quality is generally associated with the quality of the camera data, a comprehensive understanding and modelling of this relationship are paramount for AAD system designers. Motivated by this need, this work proposes a unifying pipeline to assess the robustness of panoptic segmentation models for AAD, correlating robustness with traditional image quality metrics. The first step of the proposed pipeline involves generating degraded camera data that reflects real-world noise factors; to this end, 19 noise factors have been identified and implemented, each with 3 severity levels. Among these factors, this work proposes novel models for unfavourable light and snow. After applying the degradation models, three state-of-the-art CNN- and vision transformer (ViT)-based panoptic segmentation networks are used to analyse robustness. The variations in segmentation performance are then correlated with 8 selected image quality metrics. This research reveals that: 1) certain noise factors, namely droplets on the lens and Gaussian noise, produce the highest impact on panoptic segmentation; 2) the ViT-based panoptic segmentation backbones show better robustness to the considered noise factors; and 3) some image quality metrics (i.e. LPIPS and CW-SSIM) correlate strongly with panoptic segmentation performance and can therefore be used as predictive metrics for network performance.
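    A hedged sketch of the first pipeline step, assuming a single noise factor (additive Gaussian noise, one of the 19) applied at three severity levels and scored with PSNR; the paper's full factor set and metrics (e.g. LPIPS, CW-SSIM) are not reproduced:

```python
import numpy as np

def degrade(image, sigma, rng):
    """Apply additive Gaussian noise to an image with values in [0, 1]."""
    noisy = image + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB for images in [0, 1]."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(1.0 / mse)

rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))  # stand-in camera frame
for level, sigma in enumerate([0.02, 0.08, 0.2], start=1):
    degraded = degrade(image, sigma, rng)
    print(f"severity {level}: PSNR = {psnr(image, degraded):.1f} dB")
```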

    Semantic-aware video compression for automotive cameras

    Get PDF
    Assisted and automated driving functions in vehicles exploit sensor data to build situational awareness; however, the data volume required by these functions may exceed the bandwidth of current wired vehicle communication networks. Consequently, sensor data reduction, and in particular automotive camera video compression, needs investigation. Conventional video compression schemes, such as H.264 and H.265, have been optimised mainly for human vision. In this paper, we propose a semantic-aware (SA) video compression (SAC) framework that separately and simultaneously compresses the region-of-interest (ROI) and the region-out-of-interest of automotive camera video frames before transmitting them to the processing unit(s), where the data are used for perception tasks such as object detection and semantic segmentation. Using our newly proposed technique, the ROI, which encapsulates most of the road stakeholders, retains higher quality through a lower compression ratio. The experimental results show that, under the same overall compression ratio, our proposed SAC scheme maintains similar or better image quality, measured according to both traditional metrics and our newly proposed semantic-aware metrics. The new metrics, namely SA-PSNR, SA-SSIM and iIoU, give more weight to ROI quality, which has an immediate impact on the planning and decisions of assisted and automated driving functions. Using our SA-X264 compression, SA-PSNR and SA-SSIM increase by 2.864 and 0.008 respectively compared to traditional H.264, with higher ROI quality and the same compression ratio. Finally, a segmentation-based perception algorithm has been used to compare reconstructed frames, demonstrating a 2.7% mIoU improvement when using the proposed SAC method versus traditional compression techniques.
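    The exact SA-PSNR definition is given in the paper; the sketch below assumes a simple plausible variant (an ROI-weighted average of per-region PSNR, with a hypothetical roi_weight parameter) purely to illustrate how a semantic-aware metric emphasises ROI quality:

```python
import numpy as np

def psnr(ref, test, eps=1e-12):
    """PSNR in dB for pixel values in [0, 1]; eps guards against mse = 0."""
    mse = np.mean((ref - test) ** 2) + eps
    return 10.0 * np.log10(1.0 / mse)

def sa_psnr(ref, test, roi_mask, roi_weight=0.8):
    """Assumed variant: weight ROI quality above background quality.

    roi_mask is a boolean (H, W) array marking road stakeholders etc.;
    roi_weight is a hypothetical parameter, not from the paper.
    """
    roi = psnr(ref[roi_mask], test[roi_mask])
    background = psnr(ref[~roi_mask], test[~roi_mask])
    return roi_weight * roi + (1.0 - roi_weight) * background

rng = np.random.default_rng(0)
frame = rng.random((120, 160, 3))
decoded = np.clip(frame + rng.normal(0, 0.05, frame.shape), 0, 1)
mask = np.zeros((120, 160), dtype=bool)
mask[40:90, 50:120] = True  # hypothetical ROI box
print(f"SA-PSNR = {sa_psnr(frame, decoded, mask):.2f} dB")
```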

    Optimising Faster R-CNN training to enable video camera compression for assisted and automated driving systems

    Get PDF
    Advanced driver assistance systems based on a single camera or a single RADAR are evolving into the current assisted and automated driving functions delivering SAE Level 2 and above capabilities. A suite of environmental perception sensors is required to achieve safe and reliable planning and navigation in future vehicles equipped with these capabilities. The sensor suite, based on several cameras, LiDARs, RADARs and ultrasonic sensors, needs to provide sufficient (and, depending on the level of driving automation, redundant) spatial and temporal coverage of the environment around the vehicle. However, the amount of data produced by the sensor suite can easily exceed a few tens of Gb/s, with a single 'average' automotive camera producing more than 3 Gb/s. It is therefore important to consider leveraging traditional video compression techniques, as well as investigating novel ones, to reduce the amount of video camera data to be transmitted to the vehicle processing unit(s). In this paper, we demonstrate that lossy compression schemes with high compression ratios (up to 1:1,000) can be applied safely to the camera video data stream when machine-learning-based object detection is used to consume the sensor data. We show that transfer learning can be used to re-train a deep neural network with H.264- and H.265-compliant compressed data, and that it allows the network performance to be optimised based on the compression level of the generated sensor data. Moreover, this form of transfer learning improves the neural network performance when evaluating uncompressed data, increasing its robustness to real-world variations of the data.
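    A short sketch of how compressed training variants might be generated with the standard ffmpeg tool before re-training (file names and CRF values below are placeholders, not the paper's settings; higher CRF means stronger lossy compression):

```python
import subprocess

def compress(src, dst, codec="libx264", crf=40):
    """Re-encode a video at a given constant rate factor (CRF) via ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", codec, "-crf", str(crf), dst],
        check=True,
    )

# Build one training variant per compression level (paths are examples);
# swap codec to "libx265" for H.265-compliant data.
for crf in (23, 35, 45):
    compress("train_raw.mp4", f"train_crf{crf}.mp4")
```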

    The effect of camera data degradation factors on panoptic segmentation for automated driving

    Get PDF
    Precise scene understanding based on perception sensor data is important for assisted and automated driving (AAD) functions, to enable accurate decision-making processes and safe navigation. Among the various perception tasks using camera images (e.g. object detection, semantic segmentation), panoptic segmentation shows promising scene understanding capability in terms of recognizing and classifying different types of objects, imminent obstacles, and drivable space at a pixel level. While current panoptic segmentation methods exhibit good potential for AAD perception under 'ideal' conditions, there are no systematic studies investigating the effects that various degradation factors can have on the quality of the data generated by automotive cameras. Therefore, in this paper, we consider 5 categories of camera data degradation models, namely light level, adverse weather, internal sensor noise, motion blur and compression artefacts. These 5 categories comprise 11 potential degradation models with different degradation levels. Based on these 11 models and multiple degradation levels, we synthesize an augmented version of Cityscapes, named Degraded-Cityscapes (D-Cityscapes). Moreover, for the environmental light level, we propose a new synthesis method based on generative adversarial learning and zero-reference deep curve estimation to simulate 3 degraded light levels: low light, night light with glare, and extreme light. To compare the effects of the implemented camera degradation factors, we run extensive tests using a panoptic segmentation network (EfficientPS), quantifying how the performance metrics vary when the data are degraded. Based on the evaluation results, we demonstrate that extreme snow, blur, and light are the most threatening conditions for panoptic segmentation in AAD, while EfficientPS can cope well with light fog, compression, and blur; these findings provide insights for future research directions.
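    As a hedged illustration of the light-level category only, a simple gain-and-gamma curve (a stand-in, not the paper's GAN/zero-reference curve estimation method) shows how degraded light levels can be parameterised by severity:

```python
import numpy as np

def low_light(image, gain=0.4, gamma=2.2):
    """Darken an image in [0, 1]: lower gain and higher gamma = darker."""
    return np.clip(gain * np.power(image, gamma), 0.0, 1.0)

# Hypothetical severity presets, loosely mirroring the paper's three levels.
severities = {"low light": (0.5, 1.8), "night": (0.25, 2.2), "extreme": (0.1, 2.8)}
frame = np.random.default_rng(0).random((256, 256, 3))
for name, (gain, gamma) in severities.items():
    dark = low_light(frame, gain, gamma)
    print(f"{name}: mean intensity {dark.mean():.3f}")
```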

    Accelerating stereo image simulation for automotive applications using neural stereo super resolution

    Get PDF
    Camera image simulation is integral to the virtual validation of autonomous vehicles and robots that use visual perception to understand their environment. It also has applications in creating image datasets for training learning-based vision models. As camera image simulation takes into account a wide variety of external and internal parameters, achieving a high-fidelity simulation is a computationally expensive process. Recently, several neural-network-based techniques have been proposed to reduce the computational complexity of image rendering, a critical element of the camera simulation pipeline. However, the existing methods are tailored for monocular camera images and are not optimised for stereo images, which are widely used in autonomous driving applications. To address this, we propose a technique based on Stereo Super Resolution (SSR) to speed up the simulation of stereo images. The proposed method first simulates stereo images at a lower resolution, then super-resolves them to their original resolution using our SSR model, ETSSR. We evaluated the performance of our technique using the CARLA driving simulator and created our own synthetic dataset for training ETSSR. The evaluations indicate that our approach can speed up stereo image simulation by a factor of up to 2.57 across various resolutions. Moreover, they show that ETSSR achieves on-par or superior performance compared to state-of-the-art models while using significantly fewer parameters and FLOPs. We have made our source code and dataset available at https://github.com/hamedhaghighi/ETSSR.
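    A minimal sketch of the accelerated pipeline, with bicubic upsampling standing in for the learned ETSSR model and a random tensor standing in for the CARLA render (both are placeholders):

```python
import torch
import torch.nn.functional as F

def simulate_low_res(height, width, scale=2):
    """Placeholder renderer: returns a random (2, 3, H/s, W/s) stereo pair."""
    return torch.rand(2, 3, height // scale, width // scale)

def super_resolve(stereo_pair, scale=2):
    # In the paper this is the learned SSR network (ETSSR); bicubic here.
    return F.interpolate(stereo_pair, scale_factor=scale, mode="bicubic",
                         align_corners=False)

low_res = simulate_low_res(720, 1280)   # cheap half-resolution simulation
full_res = super_resolve(low_res)       # restore target resolution
print(low_res.shape, "->", full_res.shape)
```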

    Methodology to investigate interference using off-the-shelf LiDARs

    Get PDF
    With the increase of assisted and automated functions provided on new vehicles, and with some automotive manufacturers starting to equip high-end vehicles with LiDARs, there is a need to consider and analyse the effects of LiDAR sensors on different vehicles interacting with each other in close proximity (e.g. in cities, on highways, at crossroads, etc.). This paper investigates interference between 360-degree scanning LiDARs, one of the common types of automotive LiDAR. One LiDAR was selected as the victim, and 5 different LiDARs were used one by one as offenders. The victim and offending LiDARs were placed in a controlled environment to reduce sources of noise, and several sets of measurements were carried out, each repeated at least four times. When the offending and victim LiDARs were turned on at the same time, some variations in the signals were observed; however, the statistical variation was too small to be attributed to interference. As a result, this work highlights that no obvious interference effect was witnessed between the selected off-the-shelf 360-degree LiDAR sensors. This lack of interference can be attributed to the working principle of this type of LiDAR, the low probability of directly interfering beams, and the focusing and filtering optical circuits that these LiDARs have by design. The presented results confirm that mechanical scanning LiDARs can be used safely for assisted and automated driving even in situations with multiple LiDARs.
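    A sketch of the statistical comparison described above, using synthetic placeholder measurements (not the paper's data) and a Welch t-test to check whether offender-on range readings differ significantly from the baseline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Repeated range measurements (m) of the victim LiDAR on a fixed target.
baseline = rng.normal(10.00, 0.020, 400)        # offender LiDAR off
with_offender = rng.normal(10.00, 0.021, 400)   # offender LiDAR on

# Welch t-test: does the offender shift the victim's range readings?
t_stat, p_value = stats.ttest_ind(baseline, with_offender, equal_var=False)
print(f"std off/on: {baseline.std():.4f}/{with_offender.std():.4f} m, "
      f"p = {p_value:.3f}")
# A large p-value means no statistically detectable interference effect.
```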