Algorithms for 3D data estimation from single-pixel ToF sensors and stereo vision systems
Depth map estimation from stereo devices and time-of-flight range cameras has been a challenging issue in computer vision. Distance estimation from single-pixel histograms of time-of-flight sensors is exploited in numerous fields. Despite several drawbacks, such as degradation caused by strong ambient light, scattering, and multi-path effects, most prediction algorithms can be applied to resolve these problems effectively. As these two tasks are handled in connection with each other, supervised approaches are considered, since they provide more robust results. These results are used to train the model to improve three-dimensional geometry information and to cope with major difficulties such as complicated patterns and objects. The approaches are evaluated for accuracy with the help of quantitative metrics, and their performance is improved accordingly.
This thesis focuses on the analysis of time-of-flight and stereo vision systems for depth map estimation and single-pixel distance prediction. State-of-the-art algorithms are compared and implemented with additional strategies integrated to minimize the error rate. Histograms obtained from a time-of-flight sensor simulation are exploited as a dataset for single-pixel distance prediction, and the NYU dataset is then used for depth map estimation.
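The single-pixel distance prediction described above ultimately reduces to recovering a time of flight from a histogram of photon returns. As a minimal, hedged illustration (the thesis's actual estimators are learned models; the bin width and the simple peak-picking rule below are assumptions), a coarse distance estimate can be read off the strongest histogram bin:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def distance_from_histogram(hist, bin_width_s):
    """Coarse distance estimate from a single-pixel ToF histogram.

    hist: 1D array of photon counts per time bin.
    bin_width_s: temporal width of each bin in seconds (assumed value).
    Returns the round-trip-corrected distance in metres.
    """
    peak_bin = int(np.argmax(hist))          # strongest return (naive peak picking)
    time_of_flight = peak_bin * bin_width_s  # coarse time-of-flight estimate
    return C * time_of_flight / 2.0          # halve: the light travels out and back

# Example: a synthetic histogram with a return in bin 40 of 100 ps bins
hist = np.zeros(128)
hist[40] = 250
print(distance_from_histogram(hist, 100e-12))  # ~0.6 m
```

A learned predictor would replace the argmax with a model trained on simulated histograms, which is the role the thesis assigns to supervised approaches.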
Signal processing algorithms for enhanced image fusion performance and assessment
The dissertation presents several signal processing algorithms for image fusion in noisy multimodal
conditions. It introduces a novel image fusion method which performs well for image
sets heavily corrupted by noise. As opposed to current image fusion schemes, the method has
no requirements for a priori knowledge of the noise component. The image is decomposed with
Chebyshev polynomials (CP) being used as basis functions to perform fusion at feature level. The
properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic
and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment
methods shows favourable performance of the proposed scheme compared to previous efforts
on image fusion, notably in heavily corrupted images.
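As a rough illustration of feature-level fusion with a Chebyshev polynomial basis (the dissertation's exact decomposition and fusion rule are not reproduced here; the block-wise processing, polynomial order and energy-based selection rule below are assumptions), a sketch in Python might look like:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_coeffs(block, deg=(4, 4)):
    """Approximate an image block with a low-order 2D Chebyshev expansion.

    Returns the coefficient matrix; coefficient energy is one possible
    saliency cue for feature-level fusion.
    """
    h, w = block.shape
    # Map pixel coordinates onto [-1, 1], the natural Chebyshev domain.
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    V = C.chebvander2d(x.ravel(), y.ravel(), deg)           # design matrix
    coeffs, *_ = np.linalg.lstsq(V, block.ravel(), rcond=None)
    return coeffs.reshape(deg[0] + 1, deg[1] + 1)

def fuse_blocks(block_a, block_b, deg=(4, 4)):
    """Keep, per block, the input whose Chebyshev coefficients carry more energy."""
    ca, cb = cheb_coeffs(block_a, deg), cheb_coeffs(block_b, deg)
    return block_a if np.sum(ca ** 2) >= np.sum(cb ** 2) else block_b
```

Because the expansion is truncated at a low order, the fitted surface is smooth, which is the property that makes a CP representation tolerant of heavy noise.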
The approach is further improved by incorporating the advantages of CP with a state-of-the-art
fusion technique named independent component analysis (ICA), for joint-fusion processing
based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to
eliminating high-frequency information from the images involved, thereby limiting image sharpness.
Fusion using ICA, on the other hand, performs well in transferring edges and other salient features
of the input images into the composite output. The combination of both methods, coupled with
several mathematical morphological operations in an algorithm fusion framework, is considered a
viable solution. Again, according to the quantitative metrics the results of our proposed approach
are very encouraging as far as joint fusion and denoising are concerned.
Another focus of this dissertation is on a novel metric for image fusion evaluation that is based
on texture. The conservation of background textural details is considered important in many fusion
applications as they help define the image depth and structure, which may prove crucial in
many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process.
This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order
statistical features for the derivation of an image textural measure, which is then used to
replace the edge-based calculations in an objective-based fusion metric. Performance evaluation
on established fusion methods verifies that the proposed metric is viable, especially for multimodal
scenarios.
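To make the GLCM-based evaluation idea concrete, the sketch below computes second-order statistics with scikit-image and scores how closely a fused image preserves the texture of its sources. This is an illustrative proxy only; the dissertation's actual metric substitutes a textural measure for the edge terms inside an objective fusion metric, and the chosen features, pixel offsets and scoring rule here are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(img, levels=64):
    """Second-order GLCM statistics (contrast, homogeneity, energy) of an 8-bit image."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(np.uint8)  # quantise grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy")])

def texture_preservation(fused, src_a, src_b):
    """Crude score: how close the fused image's texture statistics stay to the
    nearer of the two source images (higher is better)."""
    f, a, b = texture_features(fused), texture_features(src_a), texture_features(src_b)
    err = min(np.linalg.norm(f - a), np.linalg.norm(f - b))
    return 1.0 / (1.0 + err)
```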
Point-cloud based 3D object detection and classification methods for self-driving applications: A survey and taxonomy
Autonomous vehicles are becoming central to the future of mobility, supported by advances in deep learning techniques. The performance of a self-driving system is highly dependent on the quality of the perception task. Developments in sensor technologies have led to an increased availability of 3D scanners such as LiDAR, allowing for a more accurate representation of the vehicle's surroundings and leading to safer systems. The rapid development and consequent rise of research studies around self-driving systems since the early 2010s have resulted in a tremendous increase in the number and novelty of object detection methods. After a first wave of works that essentially tried to extend known techniques from object detection in images, more recently there has been notable development of newer works better adapted to LiDAR data. This paper addresses the existing literature on object detection using LiDAR data within the scope of self-driving and provides a systematic way of analysing it. Unlike general object detection surveys, we focus on point-cloud data, which presents specific challenges, notably its high-dimensional and sparse nature. This work introduces a common object detection pipeline and taxonomy to facilitate a thorough comparison between different techniques and, departing from it, critically examines the representation of data (critical for complexity reduction), feature extraction and, finally, the object detection models. A comparison between the performance results of the different models is included, alongside some future research challenges. This work is supported by European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project n. 037902; Funding Reference: POCI-01-0247-FEDER-037902].
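One of the representation choices such surveys highlight for complexity reduction is converting the raw, sparse point cloud into a regular structure such as a voxel grid before feature extraction. A minimal NumPy sketch of voxel downsampling (the voxel size is an assumed, illustrative value) is:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.2):
    """Reduce a LiDAR point cloud by keeping one centroid per occupied voxel.

    points: (N, 3) array of x, y, z coordinates in metres.
    voxel_size: edge length of the cubic voxels (hypothetical value).
    """
    idx = np.floor(points / voxel_size).astype(np.int64)          # voxel index per point
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.reshape(-1)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)                         # sum points per voxel
    return centroids / counts[:, None]                            # centroid per voxel

# Example: 100k random points reduced to a much smaller, regular representation
cloud = np.random.uniform(-50, 50, size=(100_000, 3))
print(voxel_downsample(cloud).shape)
```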
Cross-Layer Platform for Dynamic, Energy-Efficient Optical Networks
The design of the next-generation Internet infrastructure is driven by the need to sustain the massive growth in bandwidth demands. Novel, energy-efficient, optical networking technologies and architectures are required to effectively meet the stringent performance requirements with low cost and ultrahigh energy efficiencies. In this thesis, a cross-layer communications platform is proposed to enable greater intelligence and functionality on the physical layer. Providing the optical layer with advanced networking capabilities will facilitate the dynamic management and optimization of optical switching based on performance monitoring measurements and higher-layer attributes. The cross-layer platform aims to create a new framework for networks to incorporate packet-scale measurement subsystems and techniques for monitoring the health of the optical channel. This will allow for quality-of-service- and energy-aware routing schemes, as well as an enhanced awareness of the optical data signals. This thesis first presents the design and development of an optical packet switching fabric. Leveraging a networking test-bed environment to validate networking hypotheses, advanced switching functionalities are demonstrated, including support for quality-of-service-based routing and packet multicasting. The investigated cross-layering is based on emerging optical technologies, enabling packet protection techniques and packet-rate switching fabric reconfiguration. Coupled with fast performance monitoring, the platform will achieve significant performance gains within the endeavor of all-optical switching. Allowing for a more intelligent, programmable optical layer aims to support greater flexibility with respect to bandwidth allocation and potentially a significant reduction in the network's energy consumption. The ultimate deliverable of this work is a high-performance, cross-layer-enabled optical network node. The experimental demonstration of an initial prototype creates a dynamic network element with distributed control plane management, featuring fast packet-rate optical switching capabilities and embedded physical-layer performance monitoring modules. The cross-layer box enables an intelligent traffic delivery system that can dynamically manipulate optical switching on a packet-granular scale. With the goal of achieving advanced multi-layer routing and control algorithms, the network node requires an intelligent co-optimization across all the layers. The proposed cross-layer design should drive optical technologies and architectures in an innovative way, in order to fill the gap between the design of basic photonic devices and the networking protocols that use them. The performance of the entire network -- from the optical components, to the routing algorithms and user applications -- should be optimized in concert. This contribution to the area of cross-layer network design creates an adaptable optical pipe that is extremely flexible and intelligently aware of both the physical optical signals and higher-layer requirements. The impact of this work will be seen in the realization of dynamic, energy-efficient optical communication links in future networking infrastructures.
A 2D DWT architecture suitable for the Embedded Zerotree Wavelet Algorithm
Digital imaging has had an enormous impact on industrial applications such as the Internet and video-phone systems, and demand for such applications continues to grow enormously. In particular, the number of Internet application users is growing at a near-exponential rate. The sharp increase in applications using digital images has placed much emphasis on the fields of image coding, storage, processing and communications. New techniques are continuously developed with the main aim of increasing efficiency, and image coding in particular is a field of great commercial interest. A digital image requires a large amount of data to represent it, which causes many problems when storing, transmitting or processing the image. Reducing the amount of data needed to represent an image is therefore the main objective of image coding, and various techniques have been and continue to be developed to increase its efficiency. The JPEG image coding standard has enjoyed widespread acceptance, and the industry continues to explore its various implementation issues. However, recent research indicates that multiresolution-based image coding is a far superior alternative.
A recent development in the field of image coding is the use of the Embedded Zerotree Wavelet (EZW) as the technique to achieve image compression. One of the aims of this thesis is to explain how this technique is superior to other current coding standards. It will be seen that an essential part of this method of image coding is the use of multiresolution analysis, a subband system whereby the subbands are logarithmically spaced in frequency and represent an octave-band decomposition. The block structure that implements this function is termed the two-dimensional Discrete Wavelet Transform (2D DWT). The 2D DWT can be realized by several architectures, and these are analysed in order to choose the architecture best suited to the EZW coder. Finally, this architecture is implemented and verified using the Synopsys Behavioural Compiler, and recommendations are made based on experimental findings.
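For context, the octave-band decomposition that the EZW coder traverses can be produced in a few lines with PyWavelets; this is only an illustrative software sketch, not the hardware architecture studied in the thesis, and the choice of wavelet and level count is an assumption.

```python
import numpy as np
import pywt

def dwt2_octave(img, levels=3, wavelet="haar"):
    """Multi-level 2D DWT: each level splits the remaining low-pass band into
    LL, LH, HL and HH subbands, yielding the octave-band pyramid EZW scans."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=levels)
    ll, detail_bands = coeffs[0], coeffs[1:]
    return ll, detail_bands

img = np.random.rand(256, 256)
ll, bands = dwt2_octave(img)
print(ll.shape)                       # (32, 32) coarsest approximation
print([b[0].shape for b in bands])    # detail bands double in size per level
```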
Recent Advances in Machine Learning Applied to Ultrasound Imaging
Machine learning (ML) methods are pervading an increasing number of fields of application because of their capacity to effectively solve a wide variety of challenging problems. The employment of ML techniques in ultrasound imaging applications started several years ago, but scientific interest in this area has increased exponentially in the last few years. The present work reviews the most recent (2019 onwards) implementations of machine learning techniques for two of the most popular ultrasound imaging fields: medical diagnostics and non-destructive evaluation. The former, which covers the major part of the review, was analyzed by classifying studies according to the human organ investigated and the methodology (e.g., detection, segmentation, and/or classification) adopted, while for the latter, some solutions to the detection/classification of material defects or particular patterns are reported. Finally, the main merits of machine learning that emerged from the study analysis are summarized and discussed. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
Embedded dynamic programming networks for networks-on-chip
Relentless technology downscaling and recent technological advancements
in three-dimensional integrated circuits (3D-ICs) provide a promising
prospect to realize heterogeneous system-on-chip (SoC) and homogeneous
chip multiprocessor (CMP) designs based on the networks-on-chip
(NoC) paradigm with augmented scalability, modularity and
performance. In many cases in such systems, scheduling and managing
communication resources are the major design and implementation
challenges instead of the computing resources. Past research
efforts were mainly focused on complex design-time or simple heuristic
run-time approaches to deal with the on-chip network resource
management with only local or partial information about the network.
This could yield poor communication resource utilization and diminish
the benefits of the emerging technologies and design methods.
Thus, the provision for efficient run-time resource management in
large-scale on-chip systems becomes critical. This thesis proposes a
design methodology for a novel run-time resource management infrastructure
that can be realized efficiently using a distributed architecture,
which closely couples with the distributed NoC infrastructure. The
proposed infrastructure exploits the global information and status
of the network to optimize and manage the on-chip communication
resources at run-time.
There are four major contributions in this thesis. First, it presents a
novel deadlock detection method that utilizes run-time transitive closure
(TC) computation to discover the existence of deadlock-equivalence
sets, which imply loops of requests in NoCs. This detection scheme,
TC-network, guarantees the discovery of all true deadlocks without
false alarms in contrast to state-of-the-art approximation and heuristic
approaches. Second, it investigates the advantages of implementing
future on-chip systems using three-dimensional (3D) integration and
presents the design, fabrication and testing results of a TC-network
implemented in a fully stacked three-layer 3D architecture using a
through-silicon via (TSV) complementary metal-oxide semiconductor
(CMOS) technology. Testing results demonstrate the effectiveness
of such a TC-network for deadlock detection with minimal computational
delay in a large-scale network. Third, it introduces an adaptive
strategy to effectively diffuse heat throughout the three-dimensional
network-on-chip (3D-NoC) geometry. This strategy employs a dynamic
programming technique to select and optimize the direction of data
manoeuvre in the NoC. It leads to a tool, which is based on the accurate
HotSpot thermal model and a SystemC cycle-accurate model, to simulate
the thermal system and evaluate the proposed approach. Fourth, it
presents a new dynamic programming-based run-time thermal management
(DPRTM) system, including reactive and proactive schemes, to
effectively diffuse heat throughout NoC-based CMPs by routing packets
through the coolest paths when the temperature does not exceed the
chip’s thermal limit. When the thermal limit is exceeded, throttling is
employed to mitigate heat in the chip and DPRTM changes its course
to avoid throttled paths and to minimize the impact of throttling on
chip performance.
This thesis enables a new avenue to explore a novel run-time resource
management infrastructure for NoCs, in which new methodologies
and concepts are proposed to enhance the on-chip networks for
future large-scale 3D integration. This work was supported by the Iraqi Ministry of Higher Education and Scientific Research (MOHESR).
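As a software analogue of the TC-network idea (the thesis realizes this as distributed hardware embedded in the NoC; the wait-for-graph encoding below is an assumption made for illustration), deadlock detection via transitive closure of a channel wait-for graph can be sketched as follows:

```python
import numpy as np

def has_deadlock(wait_for):
    """Detect request loops via the transitive closure of a wait-for graph.

    wait_for: (n, n) boolean matrix; wait_for[i, j] is True when the agent
    holding channel i is waiting for channel j.
    Returns True if the closure contains a cycle (some channel reaches itself),
    i.e. a set of requests equivalent to a deadlock.
    """
    tc = wait_for.copy()
    n = tc.shape[0]
    for k in range(n):                         # Warshall's algorithm
        tc |= np.outer(tc[:, k], tc[k, :])     # i reaches j through k
    return bool(np.any(np.diag(tc)))

# Three channels forming a request loop 0 -> 1 -> 2 -> 0
g = np.zeros((3, 3), dtype=bool)
g[0, 1] = g[1, 2] = g[2, 0] = True
print(has_deadlock(g))   # True
```

Because the closure is computed over the whole graph, every genuine cycle is found and acyclic request patterns are never flagged, which mirrors the "all true deadlocks, no false alarms" property claimed for the TC-network.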
NASA Space Engineering Research Center Symposium on VLSI Design
The NASA Space Engineering Research Center (SERC) is proud to offer, at its second symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories and the electronics industry. These featured speakers share insights into next generation advances that will serve as a basis for future VLSI design. Questions of reliability in the space environment along with new directions in CAD and design are addressed by the featured speakers