
    Selecting the number of clusters, clustering models, and algorithms. A unifying approach based on the quadratic discriminant score

    Cluster analysis requires many decisions: the clustering method and the implied reference model, the number of clusters and, often, several hyper-parameters and algorithm tunings. In practice, one produces several partitions, and a final one is chosen based on validation or selection criteria. There exists an abundance of validation methods that, implicitly or explicitly, assume a certain clustering notion. Moreover, they are often restricted to operating on partitions obtained from a specific method. In this paper, we focus on groups that can be well separated by quadratic or linear boundaries. The reference cluster concept is defined through the quadratic discriminant score function and parameters describing clusters' size, center and scatter. We develop two cluster-quality criteria called quadratic scores. We show that these criteria are consistent with groups generated from a general class of elliptically symmetric distributions. The quest for this type of group is common in applications. The connection with likelihood theory for mixture models and model-based clustering is investigated. Based on bootstrap resampling of the quadratic scores, we propose a selection rule that allows choosing among many clustering solutions. The proposed method has the distinctive advantage that it can compare partitions that cannot be compared with other state-of-the-art methods. Extensive numerical experiments and the analysis of real data show that, even if some competing methods turn out to be superior in some setups, the proposed methodology achieves a better overall performance. Comment: Supplemental materials are included at the end of the paper.
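    As a rough illustration of the cluster concept above (a sketch, not the authors' exact criterion), the snippet below scores a hard partition by summing, over all points, the quadratic discriminant score log pi_k - (1/2) log|S_k| - (1/2)(x - mu_k)' S_k^{-1} (x - mu_k) under each point's own cluster parameters. The function name and the use of plain sample estimates for weight, centre and scatter are illustrative assumptions; the paper's bootstrap-based selection rule is omitted.

```python
import numpy as np

def hard_quadratic_score(X, labels):
    """Sum of quadratic discriminant scores of each point under the
    estimated weight, centre and scatter of its own cluster.
    Assumes every cluster has enough points for an invertible covariance."""
    total = 0.0
    for k in np.unique(labels):
        Xk = X[labels == k]
        pi_k = len(Xk) / len(X)                        # cluster weight (size)
        mu_k = Xk.mean(axis=0)                         # cluster centre
        S_k = np.cov(Xk, rowvar=False)                 # cluster scatter
        S_inv = np.linalg.inv(S_k)
        _, logdet = np.linalg.slogdet(S_k)
        d = Xk - mu_k
        maha = np.einsum('ij,jk,ik->i', d, S_inv, d)   # squared Mahalanobis distances
        total += np.sum(np.log(pi_k) - 0.5 * logdet - 0.5 * maha)
    return total
```

    Comparing such scores across candidate partitions, ideally over bootstrap resamples of the data, mirrors the kind of selection rule described in the abstract.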

    Three dimensional information estimation and tracking for moving objects detection using two cameras framework

    Calibration, matching and tracking are major concerns in obtaining 3D information consisting of depth, direction and velocity. In finding depth, camera parameters and matched points are two necessary inputs. Depth, direction and matched points can be obtained accurately if the cameras are well calibrated using traditional manual calibration. However, most traditional manual calibration methods are inconvenient to use because markers or the real-world size of an object must be provided or known. Self-calibration can overcome this limitation of traditional calibration, but not for depth and matched points. Other approaches have attempted to match corresponding objects using 2D visual information without calibration, but they suffer from low matching accuracy under large perspective distortion. This research focuses on obtaining 3D information using a self-calibrated tracking system, in which matching and tracking are done under the self-calibrated condition. Three contributions are introduced in this research to achieve the objectives. Firstly, orientation correction is introduced to obtain better relationship matrices for matching during tracking. Secondly, once the relationship matrices are available, another post-processing method, status-based matching, is introduced to improve the object matching result; the proposed matching algorithm achieves a matching rate of almost 90%. Depth is estimated after the status-based matching. Thirdly, tracking is performed based on x-y coordinates and the estimated depth under the self-calibrated condition. Results show that the proposed self-calibrated tracking system successfully differentiates the locations of objects even under occlusion in the field of view, and is able to determine the direction and the velocity of multiple moving objects.
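    As a hedged sketch of the depth-estimation step only (the thesis's self-calibration, orientation correction and status-based matching are not reproduced here), the snippet below triangulates already-matched image points from two views with OpenCV and reads off each point's depth. The function name and the assumption of known 3x4 projection matrices are illustrative.

```python
import numpy as np
import cv2

def depths_from_matches(P1, P2, pts1, pts2):
    """Triangulate matched points from two calibrated views and return the
    depth (Z coordinate) of each point in the first camera's frame.

    P1, P2 : 3x4 projection matrices of the two cameras
    pts1, pts2 : Nx2 arrays of matched pixel coordinates (same ordering)
    """
    # OpenCV expects 2xN arrays of corresponding points
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))
    X = (X_h[:3] / X_h[3]).T           # homogeneous -> Euclidean, Nx3
    return X[:, 2]                     # depth along the first camera's optical axis
```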

    New authentication applications in the protection of caller ID and banknote

    In the era of computers and the Internet, where almost everything is interconnected, authentication plays a crucial role in safeguarding online and offline data. As authentication systems face continuous testing from advanced attacking techniques and tools, the need for evolving authentication technology becomes imperative. In this thesis, we study attacks on authentication systems and propose countermeasures. Considering the various techniques examined, the thesis is divided into two parts. The first part introduces the caller ID verification (CIV) protocol to address caller ID spoofing in telecommunication systems. This kind of attack is usually followed by fraud, which not only inflicts financial losses on victims but also reduces public trust in the telephone system. We propose CIV to authenticate the caller ID based on a challenge-response process. We show that spoofing can be leveraged, in conjunction with dual-tone multi-frequency (DTMF) signalling, to efficiently implement the challenge-response process, i.e., using spoofing to fight against spoofing. We conduct extensive experiments showing that our solution can work reliably across legacy and new telephony systems, including landline, cellular and Internet protocol (IP) networks, without the cooperation of telecom providers. In the second part, we present polymer substrate fingerprinting (PSF) as a method to combat the counterfeiting of banknotes in the financial domain. Our technique is built on the observation that the opacity coating leaves uneven thickness in the polymer substrate, resulting in random translucent patterns when a polymer banknote is back-lit by a light source. With extensive experiments, we show that our method can reliably authenticate banknotes and is robust against the rough daily handling of banknotes. Furthermore, we show that the extracted fingerprints are scalable enough to identify every polymer note circulated globally. Our method ensures that even when counterfeiters have procured the same printing equipment and ink as used by a legitimate government, counterfeiting banknotes remains infeasible.
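    The toy sketch below illustrates only the compare-by-translucency idea behind PSF, not the thesis's actual feature extraction, enrolment or decision threshold: a back-lit patch is binarised against its median brightness and a note is accepted when the fractional Hamming distance to an enrolled fingerprint is small. The helper names and the 0.25 threshold are assumptions.

```python
import numpy as np

def extract_fingerprint(backlit_patch):
    """Toy fingerprint: binarise a back-lit translucency patch (2D array of
    pixel intensities) against its median brightness and flatten to bits."""
    return (backlit_patch > np.median(backlit_patch)).astype(np.uint8).ravel()

def same_note(fp_enrolled, fp_query, threshold=0.25):
    """Fractional Hamming distance test: re-captures of the same note should
    disagree on few bits, while different notes disagree on roughly half."""
    dist = float(np.mean(fp_enrolled != fp_query))
    return dist < threshold, dist
```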

    Practical free-space quantum key distribution

    Within the last two decades, the world has seen an exponential increase in the quantity of data traffic exchanged electronically. Currently, the widespread use of classical encryption technology provides tolerable levels of security for data in day-to-day life. However, with one somewhat impractical exception, these technologies are based on mathematical complexity and have never been proven to be secure. Significant advances in mathematics or new computer architectures could render these technologies obsolete on a very short timescale. By contrast, Quantum Key Distribution (or Quantum Cryptography, as it is sometimes called) offers a theoretically secure method of cryptographic key generation and exchange which is guaranteed by physical laws. Moreover, the technique is capable of eavesdropper detection during the key exchange process. Much research and development work has been undertaken, but most of it has concentrated on the use of optical fibres as the transmission medium for the quantum channel. This thesis discusses the requirements, theoretical basis and practical development of a compact, free-space-transmission quantum key distribution system from inception to system tests. Experiments conducted over several distances are outlined which verify the feasibility of quantum key distribution operating continuously over ranges from metres to intercity distances and, finally, to global reach via the use of satellites.
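    The abstract does not name the protocol, so the following is a generic BB84-style simulation (an assumption) of the sifting and eavesdropper-detection logic mentioned above: random preparation and measurement bases, public basis comparison, and estimation of the quantum bit error rate (QBER), which an intercept-resend attacker pushes towards 25%.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000
eavesdrop = True   # toggle an intercept-resend attacker

# Alice chooses random bits and random preparation bases (0 = rectilinear, 1 = diagonal)
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Optional intercept-resend Eve: she measures in random bases and resends,
# corrupting the state whenever her basis differs from Alice's
if eavesdrop:
    eve_bases = rng.integers(0, 2, n)
    channel_bits = np.where(eve_bases == alice_bases, alice_bits, rng.integers(0, 2, n))
    channel_bases = eve_bases
else:
    channel_bits, channel_bases = alice_bits, alice_bases

# Bob measures in his own random bases; he recovers the transmitted bit only
# when his basis matches the basis the photon was (re)prepared in
bob_bases = rng.integers(0, 2, n)
bob_bits = np.where(bob_bases == channel_bases, channel_bits, rng.integers(0, 2, n))

# Sifting: Alice and Bob publicly compare bases and keep matching positions
keep = alice_bases == bob_bases
sifted_alice, sifted_bob = alice_bits[keep], bob_bits[keep]

# Parameter estimation: compare a random sample of the sifted key to estimate
# the QBER; intercept-resend pushes it to ~25%, revealing the eavesdropper
sample = rng.random(keep.sum()) < 0.1
qber = np.mean(sifted_alice[sample] != sifted_bob[sample])
print(f"sifted key length: {keep.sum()}, estimated QBER: {qber:.3f}")
```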

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, such as hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, the availability of a reference book on mediasync becomes necessary. This book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges of ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute these experiences.

    A Systematic Review of Urban Navigation Systems for Visually Impaired People

    Blind and visually impaired people (BVIP) face a range of practical difficulties when undertaking outdoor journeys as pedestrians. Over the past decade, a variety of assistive devices have been researched and developed to help BVIP navigate more safely and independently. In addition, research in overlapping domains is addressing the problem of automatic environment interpretation using computer vision and machine learning, particularly deep learning, approaches. Our aim in this article is to present a comprehensive review of research directly in, or relevant to, assistive outdoor navigation for BVIP. We break down the navigation area into a series of navigation phases and tasks. We then use this structure for our systematic review of research, analysing articles, methods, datasets and current limitations by task. We also provide an overview of commercial and non-commercial navigation applications targeted at BVIP. Our review contributes to the body of knowledge by providing a comprehensive, structured analysis of work in the domain, including the state of the art, and guidance on future directions. It will support both researchers and other stakeholders in the domain in establishing an informed view of research progress.

    Ultra-high-resolution optical imaging for silicon integrated-circuit inspection

    This thesis concerns the development of novel resolution-enhancing optical techniques for the purposes of non-destructive sub-surface semiconductor integrated-circuit (IC) inspection. This was achieved by utilising solid immersion lens (SIL) technology, polarisation-dependent imaging, pupil-function engineering and optical coherence tomography (OCT). A SIL-enhanced two-photon optical beam induced current (TOBIC) microscope was constructed for the acquisition of ultra-high-resolution two- and three-dimensional images of a silicon flip-chip using a 1.55 μm mode-locked Er:fibre laser. This technology provided diffraction-limited lateral and axial resolutions of 166 nm and 100 nm, respectively, an order of magnitude improvement over previous TOBIC imaging work. The ultra-high numerical aperture (NA) provided by SIL imaging in silicon (NA = 3.5) was used to show, for the first time, the presence of polarisation-dependent vectorial-field effects in an image. These effects were modelled using vector diffraction theory to confirm the increasing ellipticity of the focal-plane energy density distribution as the NA of the system approaches unity. An unprecedented resolution performance ranging from 240 nm to ~100 nm was obtained, depending on the state of polarisation used. The resolution-enhancing effects of pupil-function engineering were investigated and implemented in a nonlinear polarisation-dependent SIL-enhanced laser microscope to demonstrate a minimum resolution performance of 70 nm in a silicon flip-chip. The performance of the annular apertures used in this work was modelled using vectorial diffraction theory to interpret the experimentally obtained images. The development of an ultra-high-resolution, high-dynamic-range OCT system is reported, which utilised a broadband supercontinuum source and a balanced-detection scheme in a time-domain Michelson interferometer to achieve an axial resolution of 2.5 μm (in air). The examination of silicon ICs demonstrated both a unique substrate-profiling and a novel inspection technology for circuit navigation and characterisation. In addition, the application of OCT to the investigation of artwork samples and contemporary banknotes is demonstrated for the purposes of art conservation and counterfeit prevention.
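    As a rough back-of-the-envelope check, not the thesis's vectorial-field calculation, the snippet below evaluates the Abbe diffraction limit lambda / (2 NA) for a 1.55 μm beam at the SIL-enhanced NA of 3.5 and applies the approximate sqrt(2) sharpening from two-photon excitation; the exact prefactor depends on the pupil function and polarisation state, which is precisely what the vectorial modelling in the thesis addresses.

```python
import math

wavelength_nm = 1550.0   # Er:fibre laser wavelength
na = 3.5                 # numerical aperture reached with the silicon SIL

# One-photon diffraction limit (Abbe criterion): lambda / (2 * NA)
abbe_nm = wavelength_nm / (2 * na)

# Two-photon excitation scales with intensity squared, which narrows the
# effective point-spread function by roughly sqrt(2)
two_photon_nm = abbe_nm / math.sqrt(2)

print(f"Abbe limit: {abbe_nm:.0f} nm, two-photon estimate: {two_photon_nm:.0f} nm")
# ~221 nm and ~157 nm: the same order as the 166 nm lateral resolution reported above
```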

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
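    As a minimal, generic example of the quantity that gives the Special Issue its name (not tied to any particular article in the volume), the sketch below computes the Shannon entropy of an 8-bit grayscale image's intensity histogram.

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy, in bits per pixel, of an 8-bit grayscale image:
    H = -sum_i p_i * log2(p_i) over the non-empty histogram bins."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))
```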

    Architecture design of video processing systems on a chip


    Artificial Intelligence Technology

    This open access book aims to give our readers a basic outline of today’s research and technology developments in artificial intelligence (AI), help them gain a general understanding of this trend, and familiarize them with the current research hotspots, as well as some of the fundamental and common theories and methodologies that are widely accepted in AI research and application. This book is written in comprehensible and plain language, featuring clearly explained theories and concepts and extensive analysis and examples. Some traditional findings are skipped in the narration on the premise of providing a relatively comprehensive introduction to the evolution of artificial intelligence technology. The book provides a detailed elaboration of the basic concepts of AI and machine learning, as well as other relevant topics, including deep learning, deep learning frameworks, the Huawei MindSpore AI development framework, the Huawei Atlas computing platform, the Huawei AI open platform for smart terminals, and the Huawei CLOUD Enterprise Intelligence application platform. As the world’s leading provider of ICT (information and communication technology) infrastructure and smart terminals, Huawei offers products ranging from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.