12 research outputs found

    Efficient image duplicate detection based on image analysis

    Get PDF
    This thesis is about the detection of duplicated images. More precisely, the developed system is able to discriminate possibly modified copies of original images from other, unrelated images. The proposed method is referred to as content-based since it relies only on content analysis techniques rather than on image tagging as done in watermarking. The proposed content-based duplicate detection system classifies a test image by associating it with a label that corresponds to one of the known original images. The classification is performed in four steps. In the first step, the test image is described using global statistics about its content. In the second step, the most likely original images are efficiently selected using a spatial indexing technique called the R-tree. The third step consists of using binary detectors to estimate the probability that the test image is a duplicate of each original image selected in the second step. Indeed, each original image known to the system is associated with an adapted binary detector, based on a support vector classifier, that estimates the probability that a test image is one of its duplicates. Finally, the fourth and last step consists of choosing the most probable original by picking the one with the highest estimated probability. Comparative experiments have shown that the proposed content-based image duplicate detector greatly outperforms detectors that use the same image description but rely on simpler distance functions rather than a classification algorithm. Additional experiments were carried out to compare the proposed system with existing state-of-the-art methods. Accordingly, it also outperforms the perceptual distance function method, which uses similar statistics to describe the image. While the proposed method is slightly outperformed by the key-points method, it is five to ten times less complex in terms of computational requirements. Finally, note that the nature of this thesis is essentially exploratory, since it is one of the first attempts to apply machine learning techniques to the relatively recent field of content-based image duplicate detection.
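
    The following is a minimal sketch of the four-step pipeline described in this abstract, assuming a toy global-statistics descriptor, the `rtree` package (libspatialindex bindings) for the spatial index, and scikit-learn's SVC as the per-original binary detector; names such as `describe_image`, the descriptor dimension, and the classifier configuration are illustrative assumptions, not details from the thesis.

```python
# Sketch of the four-step duplicate-detection pipeline described above.
# Assumptions (not from the thesis): scikit-learn SVC as the binary detector,
# the `rtree` package for spatial indexing, and a toy global-statistics
# descriptor. Names like `describe_image` are illustrative.
import numpy as np
from rtree import index
from sklearn.svm import SVC

DIM = 4  # illustrative descriptor dimension, kept small for the R-tree

def describe_image(img):
    """Step 1: global statistics about the image content (toy descriptor)."""
    img = np.asarray(img, dtype=float)
    return np.array([img.mean(), img.std(), np.median(img), img.max() - img.min()])

class DuplicateDetector:
    def __init__(self):
        prop = index.Property()
        prop.dimension = DIM
        self.rtree = index.Index(properties=prop)  # step 2: spatial index
        self.detectors = {}                        # step 3: one detector per original

    def add_original(self, oid, replica_images, unrelated_images):
        """Register an original via descriptors of its replicas and of unrelated images."""
        X_pos = np.array([describe_image(r) for r in replica_images])
        X_neg = np.array([describe_image(u) for u in unrelated_images])
        # Bounding box of the replica descriptors, stored in the R-tree (mins then maxs).
        lo, hi = X_pos.min(axis=0), X_pos.max(axis=0)
        self.rtree.insert(oid, tuple(lo) + tuple(hi))
        # Binary detector: replica of this original vs. anything else.
        y = [1] * len(X_pos) + [0] * len(X_neg)
        self.detectors[oid] = SVC(probability=True).fit(np.vstack([X_pos, X_neg]), y)

    def classify(self, img):
        x = describe_image(img)                                           # step 1
        candidates = list(self.rtree.intersection(tuple(x) + tuple(x)))   # step 2
        if not candidates:
            return None  # unrelated to every known original
        probs = {oid: self.detectors[oid].predict_proba([x])[0, 1]
                 for oid in candidates}                                   # step 3
        return max(probs, key=probs.get)                                  # step 4
```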

    Partial Discharge alert system in medium voltage switchgear

    Get PDF
    Partial discharge (PD) is a well-known indicator of insulation problems in high-voltage equipment. We report on experience collected during the development of a new online PD detection and alert system for the installed base of air-insulated switchgear (AIS). The approach taken to integrate the sensor with minimal retrofit effort and operational disruption is described. Results from a test setup including a line-up of panels and different reference PD sources are presented, in comparison to a commercial PD system. The effect of cables connected to the switchgear is investigated by testing the system with additional capacitive load and by simulating a typical geometry. We also address the question of how to design an alert system to be used in connection with continuous data acquisition.
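
    The abstract does not specify the alert logic itself, so the following is only a hypothetical illustration of a rolling-window alert on continuously acquired PD amplitudes; the window length, units and thresholds are invented for the example and are not from the paper.

```python
# Hypothetical illustration only: a simple rolling-window alert on continuously
# acquired partial-discharge amplitudes. Window length and thresholds are
# invented for the example; the paper does not specify its alert logic.
from collections import deque

class PDAlert:
    def __init__(self, window=600, warn_level=50.0, alarm_level=200.0):
        self.samples = deque(maxlen=window)   # e.g. last 10 minutes at 1 Hz
        self.warn_level = warn_level          # assumed warning threshold (pC)
        self.alarm_level = alarm_level        # assumed alarm threshold (pC)

    def update(self, pd_amplitude_pc):
        """Feed one PD amplitude reading (in pC) and return the current alert state."""
        self.samples.append(pd_amplitude_pc)
        avg = sum(self.samples) / len(self.samples)
        if avg >= self.alarm_level:
            return "ALARM"
        if avg >= self.warn_level:
            return "WARNING"
        return "OK"
```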

    PhD thesis: Efficient image duplicate detection based on image analysis

    No full text
    This thesis is about the detection of duplicated images. More precisely, the developed system is able to discriminate possibly modified copies of original images from other, unrelated images. The proposed method is referred to as content-based since it relies only on content analysis techniques rather than on image tagging as done in watermarking. The proposed content-based duplicate detection system classifies a test image by associating it with a label that corresponds to one of the known original images. The classification is performed in four steps. In the first step, the test image is described using global statistics about its content. In the second step, the most likely original images are efficiently selected using a spatial indexing technique called the R-tree. The third step consists of using binary detectors to estimate the probability that the test image is a duplicate of each original image selected in the second step. Indeed, each original image known to the system is associated with an adapted binary detector, based on a support vector classifier, that estimates the probability that a test image is one of its duplicates. Finally, the fourth and last step consists of choosing the most probable original by picking the one with the highest estimated probability. Comparative experiments have shown that the proposed content-based image duplicate detector greatly outperforms detectors that use the same image description but rely on simpler distance functions rather than a classification algorithm. Additional experiments were carried out to compare the proposed system with existing state-of-the-art methods. Accordingly, it also outperforms the perceptual distance function method, which uses similar statistics to describe the image. While the proposed method is slightly outperformed by the key-points method, it is five to ten times less complex in terms of computational requirements. Finally, note that the nature of this thesis is essentially exploratory, since it is one of the first attempts to apply machine learning techniques to the relatively recent field of content-based image duplicate detection.
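
    Since this record duplicates the thesis abstract above, the sketch here focuses on step 3 alone: one binary detector per original image that outputs the probability that a test descriptor belongs to a replica of that original. scikit-learn's SVC with probability calibration is an assumption for illustration, not necessarily the classifier configuration used in the thesis.

```python
# Complementary sketch of step 3 only: a per-original replica-vs-rest detector.
# scikit-learn's SVC with Platt-scaled probabilities is an assumption, not
# necessarily the support vector classifier configuration used in the thesis.
import numpy as np
from sklearn.svm import SVC

def train_binary_detector(replica_descriptors, unrelated_descriptors):
    """Fit a replica-vs-rest classifier for a single original image."""
    X = np.vstack([replica_descriptors, unrelated_descriptors])
    y = np.array([1] * len(replica_descriptors) + [0] * len(unrelated_descriptors))
    return SVC(kernel="rbf", probability=True).fit(X, y)

def replica_probability(detector, descriptor):
    """Probability that `descriptor` comes from a replica of the detector's original."""
    return detector.predict_proba(np.asarray(descriptor).reshape(1, -1))[0, 1]
```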

    Hierarchical Indexing using R-trees for Replica Detection

    Get PDF
    Replica detection is a prerequisite for the discovery of copyright infringement and detection of illicit content. For this purpose, content-based systems can be an efficient alternative to watermarking. Rather than imperceptibly embedding a signal, content-based systems rely on content similarity concepts. Certain content-based systems use adaptive classifiers to detect replicas. In such systems, a suspected content is tested against every original, which can become computationally prohibitive as the number of original contents grows. In this paper, we propose an image detection approach which hierarchically estimates the partition of the image space where the replicas (of an original) lie by means of R-trees. Experimental results show that the proposed system achieves high performance. For instance, a fraction of 0.99975 of the test images are filtered by the system when the test images are unrelated to any of the originals, while only a fraction of 0.02 of the test images are rejected when the test image is a replica of one of the originals.
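
    The two figures quoted above (0.99975 filtered, 0.02 rejected) can be measured as in the sketch below, assuming a `candidate_originals(descriptor)` function backed by an R-tree index such as the one in the pipeline sketch earlier in this listing; the function name and inputs are illustrative assumptions.

```python
# Sketch of how the filtering and rejection fractions quoted in the abstract
# can be measured. `candidate_originals` is assumed to return the set of
# original ids whose R-tree region contains the descriptor (empty if none).
def filter_rates(candidate_originals, unrelated_descriptors, replica_pairs):
    """replica_pairs: list of (descriptor, original_id) for known replicas."""
    # Fraction of unrelated test images filtered out (no candidate original).
    filtered = sum(1 for d in unrelated_descriptors if not candidate_originals(d))
    filtered_fraction = filtered / len(unrelated_descriptors)
    # Fraction of true replicas wrongly rejected (their original not retrieved).
    rejected = sum(1 for d, oid in replica_pairs if oid not in candidate_originals(d))
    rejected_fraction = rejected / len(replica_pairs)
    return filtered_fraction, rejected_fraction
```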

    SuperFGD prototype time resolution studies

    No full text
    The SuperFGD detector will be a novel and important upgrade to the ND280 near detector for both the T2K and Hyper-Kamiokande projects. The main goal of the ND280 upgrade is to reduce systematic uncertainties associated with neutrino flux and cross-section modeling for future studies of neutrino oscillations using the T2K and Hyper-Kamiokande experiments. The upgraded ND280 detector will be able to perform a full exclusive reconstruction of the final state from neutrino-nucleus interactions, including measurements of low-momentum protons and pions and, for the first time, event-by-event measurements of neutron kinematics. Precisely understanding the time resolution is critical for the neutron energy measurements and hence an important factor in reducing the systematic uncertainties. In this paper, we present the results of time resolution measurements made with the SuperFGD prototype, which consists of 9216 plastic scintillator cubes (cube size of 1 cm³) read out with 1728 wavelength-shifting (WLS) fibers along the three orthogonal directions. We used data from a muon beam exposure at CERN. A time resolution of 0.97 ns was obtained for one readout channel after implementing the time calibration with a correction for time-walk effects. The time resolution improves with increasing energy deposited in a scintillator cube, reaching 0.87 ns for large pulses. Averaging two readout channels for one scintillator cube further improves the time resolution to 0.68 ns, implying that signals in different channels are not synchronous. In addition, the contribution from the time sampling interval of 2.5 ns is averaged as well. Most importantly, averaging time values from N channels improves the time resolution by ∼1/√N. For example, averaging the time from 2 scintillator cubes with 2 fibers each improves the time resolution to 0.47 ns, which is much better than the intrinsic electronics time resolution of 0.72 ns in one channel due to the 2.5 ns sampling window. This indicates that a very good time resolution should be achievable for neutrons, since neutron recoils typically interact with several scintillator cubes and in addition produce larger signal amplitudes than muons. Measurements performed with a laser and a wide-bandwidth oscilloscope, in which the contribution from the electronics time sampling window was removed, demonstrated that the time resolution obtained with the muon beam is not far from the theoretical limit. The intrinsic time resolution of a scintillator cube and one WLS fiber is about 0.67 ns for signals of 56 photoelectrons, which is typical for minimum ionizing particles.
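
    A small numerical check of the ∼1/√N scaling quoted above, using the single-channel figures stated in the abstract; treating the averaged resolutions as simply σ₁/√N (and the 0.72 ns electronics term as 2.5 ns/√12) is a simplification of the full analysis.

```python
# Numerical check of the ~1/sqrt(N) scaling quoted in the abstract. The 0.97 ns
# single-channel resolution and 2.5 ns sampling interval come from the text;
# treating the averaged values as sigma_1/sqrt(N) is a simplification.
import math

sigma_1 = 0.97                       # ns, one readout channel after calibration
sampling = 2.5                       # ns, electronics time sampling interval
print(sampling / math.sqrt(12))      # ~0.72 ns, quantisation limit of one channel

for n in (2, 4):
    print(n, sigma_1 / math.sqrt(n))
# n=2 -> roughly 0.69 ns (abstract: 0.68 ns for two fibers on one cube)
# n=4 -> roughly 0.49 ns (abstract: 0.47 ns for two cubes with two fibers each)
```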

    Varia

    No full text

    Assessment of recent process analytical technology (PAT) trends : a multiauthor review

    No full text
    This multiauthor review article aims to bring readers up to date with some of the current trends in the field of process analytical technology (PAT) by summarizing each aspect of the subject (sensor development, PAT-based process monitoring and control methods) and presenting applications both in industrial laboratories and in manufacturing, e.g. at GSK, AstraZeneca and Roche. Furthermore, the paper discusses the PAT paradigm from the regulatory science perspective. Given the multidisciplinary nature of PAT, such an endeavour would be almost impossible for a single author, so the concept of a multiauthor review was born. Each section of the multiauthor review has been written by a single expert or group of experts with the aim of reporting on their own research results. This paper also serves as a comprehensive source of information on PAT topics for the novice reader.