
    Digital image enhancement by brightness and contrast manipulation using Verilog hardware description language

    A foggy environment can cause digitally captured images to appear blurry, dim, or low in contrast, which degrades computer vision systems that rely on image information. Applications that need image information in real time, such as plate number recognition systems, call for a simple yet effective image enhancement algorithm implemented in hardware. Hardware implementations that improve low-exposure, hazy images are usually based on complex algorithms; the aim of this paper is therefore to propose a less complex enhancement algorithm for hardware implementation that is able to improve the quality of such images. The proposed method simply combines brightness and contrast manipulation to enhance the image. To assess its performance, a total of 100 vehicle registration number images were collected, enhanced, and evaluated, and the results were compared quantitatively and qualitatively against two other enhancement methods. The quantitative evaluation uses the peak signal-to-noise ratio (PSNR) and mean-square error (MSE) metrics, while a survey evaluates the output images qualitatively. Based on the quantitative results, our proposed method outperforms the other two enhancement methods.
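    The abstract does not spell out the exact transfer function; as a minimal sketch of the general idea, the linear gain/offset form below adjusts contrast and brightness (the `alpha` and `beta` values are assumptions), and the PSNR/MSE metrics follow their standard definitions.

```python
import numpy as np

def enhance(img, alpha=1.5, beta=30.0):
    """Linear brightness/contrast adjustment: out = alpha * img + beta.

    alpha (contrast gain) and beta (brightness offset) are illustrative
    values only; the paper's actual parameters are not given here.
    """
    out = alpha * img.astype(np.float64) + beta
    return np.clip(out, 0, 255).astype(np.uint8)

def mse(a, b):
    """Mean-square error between two images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```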

    The C-Band All-Sky Survey: Instrument design, status, and first-look data

    The C-Band All-Sky Survey (C-BASS) aims to produce sensitive, all-sky maps of diffuse Galactic emission at 5 GHz in total intensity and linear polarization. These maps will be used (with other surveys) to separate the several astrophysical components contributing to microwave emission, and in particular will allow an accurate map of synchrotron emission to be produced for the subtraction of foregrounds from measurements of the polarized Cosmic Microwave Background. We describe the design of the analog instrument, the optics of our 6.1 m dish at the Owens Valley Radio Observatory, the status of observations, and first-look data. (10 pages, 11 figures; published in Proceedings of SPIE Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy V (2010), Vol. 7741, 77411I.)

    Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations

    Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects have been deemed necessary to provide a broad understanding of the studies examined in the existing literature: the datasets that have been used, the challenges other researchers have faced, motivations, and recommendations for diminishing the obstacles reported in the literature. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search process covers three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), from 2008 to 2021; these databases are selected because they are sufficient in terms of coverage. After applying the inclusion and exclusion criteria, we include 152 articles in the final set. A total of 55 out of 152 articles focused on various studies that conducted image dehazing, and 13 out of 152 were review papers based on scenarios and general overviews. Finally, most of the included articles (84/152) centered on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique; it requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets covering different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We conducted an experimental comparison of various image dehazing algorithms using objective image quality assessment. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas. We believe that the results of this study can serve as a useful guideline for practitioners who are looking for a comprehensive view of image dehazing.
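    The review itself proposes no single algorithm; to make concrete what the surveyed methods do, here is a hedged sketch of one canonical approach, the dark channel prior of He et al. (the patch size, omega, and transmission floor are assumed values, not drawn from this review).

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a patch x patch minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Dark-channel-prior dehazing (He et al.); img is float RGB in [0, 1]."""
    dark = dark_channel(img, patch)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, floored at t0 to avoid amplifying noise.
    t = np.clip(1.0 - omega * dark_channel(img / A, patch), t0, 1.0)
    # Recover scene radiance J = (I - A) / t + A.
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```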

    Analysis of the performance of a polarized LiDAR imager in fog

    This paper focuses on exploring ways to improve the performance of LiDAR imagers in fog. One of the known weaknesses of LiDAR technology is its lack of tolerance to adverse environmental conditions, such as the presence of fog, which hampers the adoption of LiDAR in several markets. In this paper, a LiDAR unit is designed and constructed to apply temporal and polarimetric discrimination when detecting the number of signal photons received, with detailed control of their temporal and spatial distribution under co-polarized and cross-polarized configurations. The system is evaluated in different experiments in a macro-scale fog chamber under controlled fog conditions. Using the complete digitization of the acquired signals, we analyze the response of the medium to natural light and show that, owing to its characteristics, it can be filtered out directly. Moreover, we confirm that a polarization memory effect exists which, by using a polarimetric cross-configuration detector, allows improved object detection in point clouds. These results are useful for applications related to computer vision, in fields like autonomous vehicles or outdoor surveillance where many variable types of environmental conditions may be present. Funding: Agència de Gestió d'Ajuts Universitaris i de Recerca (2021FI_B2 00068, 2021FI_B2 00077); DSTL (DSTLX1000145661); Ministerio de Ciencia e Innovación (PDC2021-121038-I00, PID2020-119484RB-I00).
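    The paper's processing chain is not reproduced in this abstract; as a hedged sketch of the underlying idea, the degree of linear polarization (DoLP) computed from co- and cross-polarized returns can be used to favour polarization-preserving target returns over depolarized fog backscatter (the array names and threshold below are assumptions).

```python
import numpy as np

def dolp(i_co, i_cross, eps=1e-9):
    """Degree of linear polarization per pixel or range bin:
    DoLP = (I_co - I_cross) / (I_co + I_cross)."""
    return (i_co - i_cross) / (i_co + i_cross + eps)

def polarization_gate(i_co, i_cross, dolp_min=0.3):
    """Keep returns whose polarization is preserved (likely object, not fog).

    dolp_min is an assumed threshold; a real system would calibrate it
    against the measured fog density.
    """
    return np.where(dolp(i_co, i_cross) >= dolp_min, i_co, 0.0)
```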

    CNN2Gate: an implementation of convolutional neural networks inference on FPGAs with automated design space exploration

    Convolutional Neural Networks (CNNs) have a major impact on our society because of the numerous services they provide, including but not limited to image classification, video analysis, and speech recognition. Recently, the number of research works that use FPGAs to implement CNNs has been increasing rapidly, owing to the lower power consumption and easy reconfigurability offered by these platforms. The research effort put into topics such as architecture, synthesis, and optimization has raised new challenges in integrating suitable hardware solutions with high-level machine learning software libraries. This paper introduces an integrated framework (CNN2Gate) that supports compilation of a CNN model for an FPGA target. CNN2Gate is capable of parsing CNN models from several popular high-level machine learning libraries, such as Keras, PyTorch, and Caffe2. It extracts the computation flow of the layers, along with the weights and biases, and applies a "given" fixed-point quantization. It then writes this information in the proper format for the FPGA vendor's OpenCL synthesis tools, which are used to build and run the project on the FPGA. CNN2Gate automatically performs design-space exploration and fits the design onto different FPGAs with limited logic resources. This paper reports results of automatic synthesis and design-space exploration of AlexNet and VGG-16 on various Intel FPGA platforms.
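    CNN2Gate's own quantizer is not shown in this abstract; as a minimal sketch of what a "given" symmetric fixed-point quantization of weights and biases can look like, consider the following (the bit widths are assumptions, since the framework takes the format as an input).

```python
import numpy as np

def to_fixed_point(x, total_bits=8, frac_bits=6):
    """Quantize floats to signed fixed point with frac_bits fractional bits.

    total_bits and frac_bits are illustrative; saturating values outside
    the representable range mirrors typical hardware behaviour.
    """
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

# Round-trip example: quantize, then recover approximate float weights.
weights = (np.random.randn(64, 3, 3, 3) * 0.1).astype(np.float32)
q = to_fixed_point(weights)
approx = q.astype(np.float32) / (1 << 6)
```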

    High-performance hardware accelerators for image processing in space applications

    Mars is a hard place to reach. While there have been many notable success stories in getting probes to the Red Planet, the historical record is full of bad news. The success rate for actually landing on the Martian surface is even worse, roughly 30%. This low success rate is mainly due to the characteristics of the Martian environment. Strong winds frequently blow in the Martian atmosphere; this phenomenon typically perturbs a lander's descent trajectory, diverging it from the target one. Moreover, the Martian surface is not an easy place to land safely: it is pocked by many closely spaced craters and huge stones, and characterized by huge mountains and hills (e.g., Olympus Mons is 648 km in diameter and 27 km tall). For these reasons, a mission failure caused by landing in a large crater, on big stones, or on highly sloped terrain is highly probable. In recent years, all space agencies have increased their research efforts to enhance the success rate of Mars missions. In particular, the two hottest research topics are active debris removal and guided landing on Mars. The former aims at finding new methods to remove space debris by exploiting unmanned spacecraft. These must be able to autonomously detect a piece of debris, analyze it to extract its characteristics in terms of weight, speed, and dimensions, and eventually rendezvous with it. To perform these tasks, the spacecraft must have strong vision capabilities: it must be able to take pictures and process them with very complex image processing algorithms in order to detect, track, and analyze the debris. The latter aims at increasing the landing-point precision on Mars (i.e., shrinking the landing ellipse). Future space missions will increasingly adopt video-based navigation systems to assist the entry, descent and landing (EDL) phase of space modules (e.g., spacecraft), enhancing the precision of automatic EDL navigation systems. For instance, recent space exploration missions such as Spirit, Opportunity, and Curiosity used an EDL procedure aimed at following a fixed, precomputed descent trajectory to reach a precise landing point. This approach guarantees a maximum landing-point precision of 20 km. Comparing this figure with the characteristics of the Martian environment shows that the mission failure probability remains very high. A very challenging problem is to design an autonomously guided EDL system able to further reduce the landing ellipse while guaranteeing avoidance of dangerous areas of the Martian surface (e.g., large craters or big stones) that could lead to mission failure. The autonomous behaviour of the system is mandatory, since a manually driven approach is not feasible due to the distance between Earth and Mars: this distance varies from approximately 56 to 100 million km because of orbital eccentricity, so even with signal transmission at the speed of light the one-way delay would be around 3.1 minutes in the best case, making any remote-control round trip comparable to the overall duration of the EDL phase. In both applications, the algorithms must guarantee self-adaptability to the environmental conditions. Since the harsh conditions of Mars (and of space in general) are difficult to predict at design time, these algorithms must be able to automatically tune their internal parameters depending on the current conditions. Moreover, real-time performance is another key factor.
Since a software implementation of these computationally intensive tasks cannot reach the required performance, these algorithms must be accelerated in hardware. For these reasons, this thesis presents my research work on advanced image processing algorithms for space applications and the associated hardware accelerators. My research activity has focused on both the algorithms and their hardware implementations. Concerning the first aspect, I mainly focused my research effort on integrating self-adaptability features into existing algorithms. Concerning the second, I studied and validated a methodology to efficiently develop, verify, and validate hardware components aimed at accelerating video-based applications. This approach allowed me to develop and test high-performance hardware accelerators that strongly outperform the current state-of-the-art implementations. The thesis is organized in four main chapters. Chapter 2 starts with a brief introduction to the history of digital image processing. The main content of this chapter is the description of space missions in which digital image processing plays a key role. A major effort has been spent on the missions in which my research activity has a substantial impact; in particular, for these missions, this chapter deeply analyzes and evaluates the state-of-the-art approaches and algorithms. Chapter 3 analyzes and compares the two technologies used to implement high-performance hardware accelerators, i.e., Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs). This information helps the reader understand the main reasons behind the decision of space agencies to exploit FPGAs instead of ASICs for high-performance hardware accelerators in space missions, even though FPGAs are more sensitive to Single Event Upsets (SEUs, i.e., transient errors induced in hardware components by alpha particles and solar radiation in space). Moreover, this chapter describes in depth the three available space-grade FPGA technologies (i.e., one-time programmable, flash-based, and SRAM-based) and the main fault-mitigation techniques against SEUs that are mandatory for employing space-grade FPGAs in actual missions. Chapter 4 describes one of the main contributions of my research work: a library of high-performance hardware accelerators for image processing in space applications. The basic idea behind this library is to offer designers a set of validated hardware components able to strongly speed up the basic image processing operations commonly used in an image processing chain. In other words, these components can be directly used as elementary building blocks to easily create a complex image processing system, without wasting time in the debug and validation phases. This library groups the proposed hardware accelerators into IP-core families. The components contained in the same family share the same provided functionality and input/output interface. This harmonization of the I/O interface makes it possible to substitute components of the same family inside a complex image processing system without requiring modifications to the system's communication infrastructure. In addition to the analysis of the internal architecture of the proposed components, another important aspect of this chapter is the methodology used to develop, verify, and validate the proposed high-performance image processing hardware accelerators.
This methodology involves the use of different programming and hardware description languages in order to support the designer from algorithm modelling up to hardware implementation and validation. Chapter 5 presents the proposed complex image processing systems. In particular, it exploits a set of actual case studies, associated with the most recent space agency needs, to show how the hardware accelerator components can be assembled to build a complex image processing system. In addition to the hardware accelerators contained in the library, the described complex systems embed innovative ad hoc hardware components and software routines able to provide high-performance and self-adaptable image processing functionalities. To prove the benefits of the proposed methodology, each case study concludes with a comparison against current state-of-the-art implementations, highlighting the benefits in terms of performance and self-adaptability to the environmental conditions.
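    The abstract names the library's IP cores only as accelerators of "basic image processing operations"; as a purely hypothetical illustration of such a building block, here is a software golden model of a 3x3 Sobel gradient-magnitude filter, the kind of reference the described develop/verify/validate methodology would compare a hardware accelerator against.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.int32)
SOBEL_Y = SOBEL_X.T  # vertical-gradient kernel

def sobel_magnitude(gray):
    """Reference (golden) model of a 3x3 Sobel gradient-magnitude core."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = gray[y - 1:y + 2, x - 1:x + 2].astype(np.int32)
            gx = int(np.sum(win * SOBEL_X))
            gy = int(np.sum(win * SOBEL_Y))
            out[y, x] = np.hypot(gx, gy)
    return out
```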

    Implementation of Super Resolution Techniques in Geospatial Satellite Imagery

    Technological advancements and the growing accessibility of high-resolution satellite images provide the potential for more precise land cover classification and pattern analysis, which could significantly improve the detection and quantification of land cover change for conservation. "Super-resolution imaging" denotes a group of methods that use generative modelling to increase the resolution of an imaging system. Super-resolution imaging, which falls under the category of sophisticated computer vision and image processing, has a variety of practical uses, including astronomical imaging, surveillance and security, medical imaging, and satellite imaging. Because deep learning algorithms for super-resolution first appeared in computer vision, they were mostly developed on RGB images with 8-bit colour depth, where the camera and the subject are separated by only a few meters. However, little evaluation of these methods has been done on geospatial satellite imagery.
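    The abstract does not fix a particular architecture; as a minimal sketch of the deep-learning super-resolution family it refers to, here is an SRCNN-style three-layer network in PyTorch (the 9-1-5 kernel sizes and 64/32 channel widths follow the original SRCNN; its application to satellite imagery is this paper's subject, not something reproduced here).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """SRCNN-style network; refines a bicubically upscaled image."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Usage: bicubic upscaling first, then refinement by the network.
lr = torch.rand(1, 3, 64, 64)                       # dummy low-res patch
up = F.interpolate(lr, scale_factor=2, mode="bicubic", align_corners=False)
sr = SRCNN()(up)
```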

    SW-VHDL Co-Verification Environment Using Open Source Tools

    The verification of complex digital designs often involves the use of expensive simulators. This paper proposes an approach to verify a specific family of complex hardware/software systems, whose hardware part, running on an FPGA, communicates with a software counterpart executed on an external processor, such as a user/operator application running on an external PC. The hardware is described in VHDL, and the software may be written in any computer language that can be interpreted or compiled into a (Linux) executable file. The presented approach uses open source tools, avoiding expensive license costs and usage restrictions. Funding: Unión Europea (68722).
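    The paper's specific toolchain is not detailed in this abstract; as one hedged illustration of open-source HDL/software co-verification, a Python testbench in the cocotb style can drive a VHDL design simulated with an open-source simulator such as GHDL (the DUT and its ports `clk`, `a`, `b`, and `sum` are hypothetical).

```python
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def adder_basic_test(dut):
    """Drive a hypothetical registered VHDL adder and check its output."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    dut.a.value = 3
    dut.b.value = 4
    await RisingEdge(dut.clk)   # inputs sampled on this edge
    await RisingEdge(dut.clk)   # registered result valid after this edge
    assert dut.sum.value == 7, f"sum = {dut.sum.value}, expected 7"
```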

    Highlights Analysis System (HAnS) for low dynamic range to high dynamic range conversion of cinematic low dynamic range content

    We propose a novel and efficient algorithm for the detection of specular reflections and light sources (highlights) in cinematic content. Detecting highlights is important for reconstructing them properly when converting low dynamic range (LDR) content to high dynamic range (HDR). Highlights are often difficult to distinguish from bright diffuse surfaces, because their brightness is reduced in conventional LDR content production. Moreover, cinematic LDR content is subject to the artistic use of effects that change the apparent brightness of certain image regions (e.g. limited depth of field, grading, complex multi-light setups). To ensure the robustness of highlight detection to these effects, the proposed algorithm goes beyond considering only absolute brightness and considers five different features: the size of the highlight relative to the size of the surrounding image structures, the relative contrast in the surroundings of the highlight, and its absolute brightness expressed through the luminance (luma feature), through the saturation in the color space (maxRGB feature), and through the saturation toward white (minRGB feature). We evaluate the algorithm on two different image datasets. The first is a publicly available LDR image dataset without cinematic content, which allows comparison to the broader state of the art. Additionally, for the evaluation on cinematic content, we create an image dataset consisting of manually annotated cinematic frames and real-world images. To demonstrate the proposed highlight detection algorithm in a complete LDR-to-HDR conversion pipeline, we additionally propose a simple inverse-tone-mapping algorithm. The experimental analysis shows that the proposed approach outperforms conventional highlight detection algorithms on both image datasets, achieves high-quality reconstruction of HDR content, and is suited for use in LDR-to-HDR conversion.
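    The feature fusion and thresholds are not given in this abstract; as a hedged sketch, the three brightness-related features can be computed per pixel as below (the Rec. 709 luma weights and the candidate thresholds are assumptions), with the relative-size and relative-contrast features then evaluated on the resulting candidate regions.

```python
import numpy as np

def highlight_features(img):
    """Per-pixel luma, maxRGB and minRGB features; img is float RGB in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights (assumed)
    max_rgb = img.max(axis=2)  # saturation in the colour space
    min_rgb = img.min(axis=2)  # saturation toward white
    return luma, max_rgb, min_rgb

def highlight_candidates(img, t_luma=0.9, t_min=0.8):
    """Binary mask of candidate highlights; thresholds are illustrative."""
    luma, _, min_rgb = highlight_features(img)
    return (luma > t_luma) & (min_rgb > t_min)
```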