
    Statistical Learning in Chip (SLIC) (Invited Paper)

    Despite best efforts, integrated systems are "born" (manufactured) with a unique "personality" that stems from our inability to precisely fabricate their underlying circuits and to create software a priori for controlling the resulting uncertainty. Sophisticated test methods could be used to identify the best-performing systems, but this would result in unacceptably low yields and correspondingly high costs. The system's personality is further shaped by its environment (e.g., temperature, noise, and supply voltage) and its usage (i.e., the frequency and type of applications executed), and since both can fluctuate over time, so can the system's personality. Systems also "grow old" and degrade due to various wear-out mechanisms (e.g., negative-bias temperature instability) and, unexpectedly, due to various early-life failure sources. These "nature and nurture" influences make it extremely difficult to design a system that will operate optimally for all possible personalities. To address this challenge, we propose statistical learning in-chip (SLIC), a holistic approach to integrated-system design based on continuously learning key personality traits on-line and self-evolving the system to a state that optimizes performance hierarchically across the circuit, platform, and application levels. SLIC will not only optimize integrated-system performance but also reduce costs through yield enhancement, since systems that would previously have been deemed to have weak personalities (unreliable, faulty, etc.) can now be recovered through the use of SLIC.

    LEGaTO: first steps towards energy-efficient toolset for heterogeneous computing

    LEGaTO is a three-year EU H2020 project which started in December 2017. The project will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs, and dataflow engines. The aim is to attain energy savings of one order of magnitude from the edge to the converged cloud/HPC.

    Performance evaluation of feature sets for carried object detection in still images

    Human activity recognition has attracted considerable interest, and the ability to accurately detect carried objects on people directly supports it. This thesis evaluates four different feature sets for carried object detection. To detect carried objects, image chips are extracted from a video by tracking moving objects with an off-the-shelf tracker. Pixels with similar colors are grouped together using a superpixel segmentation algorithm. Features are calculated for every superpixel, encoding its location in the track chip, the shape of the superpixel, the pose of the person in the track chip, and the appearance of the superpixel. ROC curves are used to analyze the detection of a superpixel as a carried object using these features individually or in combination. These ROC curves show that the shape features, as calculated here, carry very little information. The location features, though simple to calculate, carry significant usable information. Detection based on the pose of the person in the track chip and on the appearance of the superpixel depends largely on the data used in their calculation: pose detections are more likely to be correct when there are no occlusions, while appearance features work better with high-resolution input images.
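    As a rough illustration of this pipeline, the sketch below segments a track chip into superpixels, computes a simple location feature per superpixel, and scores it with an ROC curve. The library choices (scikit-image, scikit-learn), the feature definition, and the placeholder data are assumptions for illustration, not the thesis's actual code.

```python
# Hypothetical sketch of the per-superpixel feature/ROC evaluation pipeline.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops
from sklearn.metrics import roc_curve, auc

def location_features(labels):
    """Normalized centroid of each superpixel within the track chip."""
    h, w = labels.shape
    return np.array([[r.centroid[0] / h, r.centroid[1] / w]
                     for r in regionprops(labels)])

# chip: an HxWx3 track chip extracted by the tracker (random placeholder here)
chip = np.random.rand(128, 64, 3)
labels = slic(chip, n_segments=100, start_label=1)  # superpixel segmentation

X = location_features(labels)
# y: per-superpixel carried-object ground truth (random placeholder labels)
y = np.random.randint(0, 2, size=len(X))

# Score each superpixel with one feature (vertical position) and summarize
# its usefulness for carried-object detection with an ROC curve.
fpr, tpr, _ = roc_curve(y, X[:, 0])
print("AUC of the location feature:", auc(fpr, tpr))
```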

    Understanding High Resolution Aerial Imagery Using Computer Vision Techniques

    Computer vision can make important contributions to the analysis of remote sensing satellite or aerial imagery. However, the resolution of early satellite imagery was not sufficient to provide useful spatial features. The situation is changing with the advent of very-high-spatial-resolution (VHR) imaging sensors, which make it possible to apply computer vision techniques to the analysis of man-made structures. Meanwhile, the development of multi-view imaging techniques allows the generation of accurate point clouds as ancillary knowledge. This dissertation develops computer vision and machine learning algorithms for high-resolution aerial imagery analysis in the context of three application problems: debris detection, building detection, and roof condition assessment. High-resolution aerial imagery and point clouds were provided by Pictometry International for this study. Debris detection after natural disasters such as tornadoes, hurricanes, or tsunamis is needed for effective debris removal and allocation of limited resources. Significant advances in aerial image acquisition have greatly expanded the possibilities for rapid and automated detection of debris. In this dissertation, a robust debris detection algorithm is proposed: large-scale aerial images are partitioned into homogeneous regions by interactive segmentation, and debris areas are identified based on extracted texture features. Robust building detection is another important part of high-resolution aerial imagery understanding. This dissertation develops a 3D scene classification algorithm for building detection using point clouds derived from multi-view imagery. Point clouds are divided into point clusters using Euclidean clustering, and individual point clusters are classified based on extracted spectral and 3D structural features. The inspection of roof condition is an important step in damage claim processing in the insurance industry, and automated roof condition assessment from remotely sensed images is proposed in this dissertation. Initially, texture classification and a bag-of-words model were applied to assess roof condition using features derived from the whole rooftop. However, considering the complexity of residential rooftops, a more sophisticated method is proposed that divides the task into two stages: 1) roof segmentation, followed by 2) classification of the segmented roof regions. Deep learning techniques are investigated for both stages: a deep-learned feature is proposed and applied in a region-merging segmentation algorithm, and a fine-tuned deep network is adopted for roof segment classification, achieving higher accuracy than traditional methods using hand-crafted features. Contributions of this study include the development of algorithms for debris detection using 2D images and building detection using 3D point clouds. For roof condition assessment, solutions are explored in two directions: features derived from the whole rooftop and features extracted from each roof segment. Roof segmentation followed by segment classification was found to be the more promising method, and the corresponding processing workflow was developed and tested. More unsupervised feature extraction techniques using deep learning can be explored in future work.
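    As an indicative sketch of the Euclidean clustering step described above, the snippet below groups a point cloud into clusters and flags tall clusters as building candidates. DBSCAN is used as a common stand-in for PCL-style Euclidean clustering; the input file name, distance threshold, and height test are illustrative assumptions, not the dissertation's parameters.

```python
# Hypothetical sketch: Euclidean clustering of a point cloud for building
# detection, using DBSCAN as the clustering mechanism.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.loadtxt("scene.xyz")  # assumed Nx3 array of x, y, z coordinates

# Group points whose neighbours lie within 1.5 m of each other.
labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(points)

for k in sorted(set(labels) - {-1}):  # label -1 marks noise points
    cluster = points[labels == k]
    # One simple 3D structural feature: the vertical extent of the cluster.
    # Spectral features (e.g., colour per point) would be appended similarly.
    height = cluster[:, 2].max() - cluster[:, 2].min()
    if height > 3.0:  # crude building-candidate threshold
        print(f"cluster {k}: {len(cluster)} points, height {height:.1f} m")
```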

    Search for Second-Generation Leptoquarks in Proton-Antiproton Collisions

    This document describes the search for second-generation leptoquarks (LQ_2) in around 114 pb^-1 of proton-antiproton collisions, recorded with the D0 detector at the Tevatron between September 2002 and June 2003 at a centre-of-mass energy of sqrt{s} = 1.96 TeV. The predictions of the Standard Model and of models including scalar leptoquark production are compared to the data for various kinematic distributions. Since no excess of data over the Standard Model prediction has been observed, a lower limit on the leptoquark mass of M(LQ_2)_{beta=1} > 200 GeV/c^2 has been calculated at 95% confidence level (C.L.), assuming a branching fraction of beta = BF(LQ_2 --> mu j) = 100% into a charged lepton and a quark. The corresponding limit for beta = 1/2 is M(LQ_2)_{beta=1/2} > 152 GeV/c^2. Finally, the results were combined with those from the search in the same channel at D0 Run I. This combination yields exclusion limits of 222 GeV/c^2 (177 GeV/c^2) for beta = 1 (1/2) at 95% C.L., which for beta = 1 is the best exclusion limit for scalar second-generation leptoquarks from a single experiment to date.
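    The beta dependence of these limits follows from simple counting: in the dimuon-plus-dijet channel, both pair-produced leptoquarks must decay to mu j, so the signal rate scales with the square of the branching fraction. The relation below is standard bookkeeping for such searches, stated here for clarity rather than quoted from the thesis:

\sigma_{\mu\mu jj} = \sigma_{\mathrm{pair}}(M_{\mathrm{LQ}_2}) \cdot \beta^{2},
\qquad \beta = \mathrm{BF}(\mathrm{LQ}_2 \to \mu j).

    At beta = 1/2 only a quarter of leptoquark pairs yields the mu mu jj signature, which is why the combined limit weakens from 222 GeV/c^2 to 177 GeV/c^2.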

    Application of artificial intelligence techniques to probeless fault diagnosis of printed circuit boards

    This thesis describes investigations which led to the development of a failure-diagnosis expert system for printed circuit boards which exploits functional test data. The boards considered are highly complex mixed-signal (analogue and digital) systems. The data are output from the automatic test equipment used to test every board after manufacture. The use of a conventional machine learning technique produced only limited success due to the very large search space of failure reports, which also ruled out some conventional knowledge-based approaches. In addition, there was a requirement to track changes in printed circuit board design and manufacture, which ruled out further techniques. Our investigations led to the development of a system which tracks changes by learning in a more restricted search space derived from the original space of reports. The system performs a diagnosis by matching a failure report against information about previously seen reports; both exact and inexact matching were investigated, and the matching rules used are heuristic. The system also uses basic circuit connectivity information in conjunction with the matching procedure to improve diagnostic performance, especially in cases where matching fails to identify a unique component.
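    A minimal sketch of the exact/inexact matching idea is shown below, representing each failure report as a set of failing test identifiers and using Jaccard similarity as the inexact measure. The similarity function, threshold, and history entries are illustrative assumptions; the thesis's actual heuristic matching rules are not reproduced here.

```python
# Hypothetical sketch: diagnose a new failure report by finding the most
# similar previously seen report (exact match first, then inexact).

def jaccard(a, b):
    """Set overlap in [0, 1]; 1.0 means an exact match."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Failing-test sets seen before, mapped to the component found faulty
# (entirely made-up example data).
history = {
    frozenset({"T12", "T47", "T93"}): "U7 (op-amp)",
    frozenset({"T12", "T47"}): "U7 (op-amp)",
    frozenset({"T05"}): "R22 (pull-up resistor)",
}

def diagnose(report, threshold=0.6):
    best = max(history, key=lambda seen: jaccard(report, seen))
    score = jaccard(report, best)
    if score == 1.0:
        return history[best], "exact match"
    if score >= threshold:
        return history[best], f"inexact match (similarity {score:.2f})"
    # Matching failed to single out a component; the real system would now
    # bring in circuit connectivity information.
    return None, "no match"

print(diagnose(frozenset({"T12", "T47", "T91"})))
```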

    Efficient-VRNet: An Exquisite Fusion Network for Riverway Panoptic Perception based on Asymmetric Fair Fusion of Vision and 4D mmWave Radar

    Panoptic perception is essential for autonomous navigation of unmanned surface vehicles (USVs). Current panoptic perception schemes are mainly vision-only: object detection and semantic segmentation are performed simultaneously from camera sensors. The fusion of camera and radar sensors is regarded as a promising substitute for pure vision methods, but almost all existing work focuses on object detection only. How to fuse the features of vision and radar so as to improve both detection and segmentation therefore remains a challenge. In this paper, we focus on riverway panoptic perception from USVs, a largely unexplored field compared with road panoptic perception. We propose Efficient-VRNet, a model based on Contextual Clustering (CoC) and the asymmetric fusion of vision and 4D mmWave radar, which treats the vision and radar modalities fairly. Efficient-VRNet can simultaneously perform detection and segmentation of riverway objects as well as drivable-area segmentation. Furthermore, we adopt an uncertainty-based panoptic perception training strategy to train Efficient-VRNet. In our experiments, Efficient-VRNet achieves better performance on our collected dataset than uni-modal models, especially in adverse weather and poorly lit environments. Our code and models are available at https://github.com/GuanRunwei/Efficient-VRNet.
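    The abstract does not spell out the uncertainty-based training strategy; a common realization for multi-task perception networks is homoscedastic-uncertainty task weighting in the style of Kendall et al. (2018). The PyTorch sketch below shows that scheme under the assumption that something similar is meant; the three losses stand in for object detection, object segmentation, and drivable-area segmentation.

```python
# Hypothetical sketch: uncertainty-weighted multi-task loss in the spirit of
# Kendall et al. (2018); the paper's exact strategy may differ.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, n_tasks=3):
        super().__init__()
        # One learned log-variance per task, trained jointly with the network.
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses):
        total = 0.0
        for loss, log_var in zip(task_losses, self.log_vars):
            # Precision-weighted task loss plus a regularizer that prevents
            # the learned variance from growing without bound.
            total = total + torch.exp(-log_var) * loss + log_var
        return total

criterion = UncertaintyWeightedLoss(n_tasks=3)
det, seg, area = (torch.rand(()) for _ in range(3))  # placeholder task losses
print(criterion([det, seg, area]))
```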

    Modeling and model-aware signal processing methods for enhancement of optical systems

    Theoretical and numerical modeling of optical systems is increasingly being utilized in a wide range of areas in physics and engineering for characterizing and improving existing systems or developing new methods. This dissertation focuses on determining and improving the performance of imaging and non-imaging optical systems through modeling and through developing model-aware enhancement methods. We evaluate performance, demonstrate enhancements in terms of resolution and light-collection efficiency, and improve the capabilities of the systems through changes to the system design and through post-processing techniques. We consider application areas in integrated circuit (IC) imaging for fault analysis and malicious circuitry detection, and in free-form lens design for creating prescribed illumination patterns. The first part of this dissertation focuses on sub-surface imaging of ICs for fault analysis using a solid immersion lens (SIL) microscope. We first derive the Green's function of the microscope and use it to determine its resolution limits for bulk silicon and silicon-on-insulator (SOI) chips. We then propose an optimization framework for designing super-resolving apodization masks that utilizes the developed model, and we demonstrate the trade-offs in designing such masks. Finally, we derive the full electromagnetic model of the SIL microscope, which models the image of an arbitrary sub-surface structure. With the rapidly shrinking dimensions of ICs, we are increasingly limited in resolving features and identifying potential modifications despite the resolution improvements provided by state-of-the-art microscopy techniques and the enhancement methods described here. In the second part of this dissertation, we shift our focus away from improving resolution and consider an optical framework that does not require high-resolution imaging to detect malicious circuitry. We develop a classification-based high-throughput gate identification method that utilizes the physical model of the optical system, and we then propose a lower-throughput system, based on higher-resolution imaging, to increase detection accuracy and supplement the former method. Finally, we consider the problem of free-form lens design for forming prescribed illumination patterns as a non-imaging application. Common methods for designing free-form lenses that form such patterns treat the input light source as a point source; however, using extended light sources with such lenses leads to significant blurring of the resulting pattern. We propose a deconvolution-based framework that utilizes the lens geometry to model the blurring effects and eliminate this degradation, resulting in sharper patterns.
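    The deconvolution idea can be illustrated compactly: if the extended source blurs the designed pattern approximately as a convolution with a kernel predicted from the lens geometry, the target pattern can be pre-compensated by deconvolution before the lens is designed. The sketch below applies Wiener deconvolution with a placeholder Gaussian kernel; the kernel, regularization constant, and target pattern are assumptions for illustration, not the dissertation's model.

```python
# Hypothetical sketch: pre-compensating a prescribed illumination pattern by
# Wiener deconvolution with a blur kernel standing in for the extended-source
# blur predicted by the lens geometry.
import numpy as np

def wiener_deconvolve(target, kernel, eps=1e-2):
    """Regularized frequency-domain deconvolution of `target` by `kernel`."""
    K = np.fft.fft2(kernel, s=target.shape)  # zero-padded kernel spectrum
    T = np.fft.fft2(target)
    X = T * np.conj(K) / (np.abs(K) ** 2 + eps)
    # The kernel is not centred, so the result is translated by the kernel
    # offset; a sketch-level detail we ignore here.
    return np.real(np.fft.ifft2(X))

# Placeholder prescribed pattern: a bright square on a dark background.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0

# Placeholder Gaussian kernel for the extended-source blur.
y, x = np.mgrid[-3:4, -3:4]
kernel = np.exp(-(x**2 + y**2) / 4.0)
kernel /= kernel.sum()

compensated = wiener_deconvolve(target, kernel)
print(compensated.shape)  # feed `compensated` to the point-source lens design
```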