
    Enhanced target detection in CCTV network system using colour constancy

    The focus of this research is to study how targets can be detected more faithfully in a multi-camera CCTV network system using spectral features for the detection. The objective of the work is to develop colour constancy (CC) methodology that keeps the spectral signature of the scene in a constant, stable state irrespective of variable illumination and camera calibration issues. Unlike previous work in the field of target detection, two versions of CC algorithms capable of maintaining colour constancy for every image pixel in the scene have been developed during the course of this work: 1) Enhanced Luminance Reflectance CC (ELRCC), which consists of a pixel-wise sigmoid function for adaptive dynamic range compression; 2) the Enhanced Target Detection and Recognition Colour Constancy (ETDCC) algorithm, which employs a bidirectional pixel-wise non-linear transfer function (PWNLTF), centre-surround luminance enhancement and a Grey Edge white balancing routine. The effectiveness of target detection for all developed CC algorithms has been validated using the multi-camera ‘Imagery Library for Intelligent Detection Systems’ (iLIDS), ‘Performance Evaluation of Tracking and Surveillance’ (PETS) and ‘Ground Truth Colour Chart’ (GTCC) datasets. It is shown that the developed CC algorithms enhance target detection efficiency by over 175% compared with detection without CC enhancement. The contribution of this research comprises one journal paper published in Optical Engineering together with 3 conference papers on the subject of the research.
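
    A minimal sketch (not the authors' implementation) of two building blocks named in the abstract: Grey Edge white balancing and a pixel-wise sigmoid compression of luminance. The function names and parameter choices (Minkowski norm p, sigmoid gain) are illustrative assumptions.

```python
import numpy as np

def grey_edge_white_balance(img, p=6):
    """Estimate the illuminant from the Minkowski p-norm of image derivatives
    (the Grey Edge hypothesis) and divide it out per channel.
    img: float array in [0, 1] with shape (H, W, 3)."""
    grad_y, grad_x = np.gradient(img, axis=(0, 1))
    deriv = np.sqrt(grad_x ** 2 + grad_y ** 2)
    illum = np.power(np.mean(np.power(deriv, p), axis=(0, 1)), 1.0 / p)
    illum /= np.linalg.norm(illum) + 1e-12          # unit-length illuminant estimate
    balanced = img / (illum * np.sqrt(3) + 1e-12)   # a neutral illuminant leaves the image unchanged
    return np.clip(balanced, 0.0, 1.0)

def sigmoid_range_compression(luminance, gain=10.0):
    """Pixel-wise sigmoid mapping standing in for adaptive dynamic range
    compression; the midpoint adapts to the mean luminance of the frame."""
    midpoint = luminance.mean()
    return 1.0 / (1.0 + np.exp(-gain * (luminance - midpoint)))

# Example on random data standing in for a CCTV frame
frame = np.random.rand(480, 640, 3)
frame_wb = grey_edge_white_balance(frame)
lum_compressed = sigmoid_range_compression(frame_wb.mean(axis=2))
```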

    Lighting and Optical Tools for Image Forensics

    We present new forensic tools that are capable of detecting traces of tampering in digital images without the use of watermarks or specialized hardware. These tools operate under the assumption that images contain natural properties imparted by a variety of sources, including the world, the lens, and the sensor. These properties may be disturbed by digital tampering, and by measuring them we can expose the forgery. In this context, we present the following forensic tools: (1) illuminant direction, (2) specularity, (3) lighting environment, and (4) chromatic aberration. The common theme of these tools is that they exploit lighting or optical properties of images. Although no single tool is applicable to every image, together they add to a growing set of image forensic tools that will complicate the process of making a convincing forgery.
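
    A minimal, hypothetical sketch of the intuition behind the chromatic aberration tool: lateral aberration displaces the red and blue channels relative to green in a globally consistent (roughly radial) pattern, so block-wise channel shifts that disagree with that pattern can hint at splicing. The block size and the exhaustive shift search below are assumptions, not the paper's estimator.

```python
import numpy as np

def block_shift(ref, mov, max_shift=3):
    """Return the integer (dy, dx) that best aligns `mov` to `ref` inside a block,
    found by exhaustive search over small shifts (sum of squared differences)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def channel_shift_map(img, block=64):
    """Estimate a per-block shift of the red channel relative to green."""
    h, w, _ = img.shape
    shifts = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            g = img[y:y + block, x:x + block, 1]
            r = img[y:y + block, x:x + block, 0]
            shifts[(y, x)] = block_shift(g, r)
    return shifts

# Blocks whose estimated shift deviates strongly from the dominant radial trend
# would be candidates for closer forensic inspection.
img = np.random.rand(256, 256, 3)
print(channel_shift_map(img))
```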

    Multispectral image analysis in laparoscopy – A machine learning approach to live perfusion monitoring

    Modern visceral surgery is often performed through small incisions. Compared to open surgery, these minimally invasive interventions result in smaller scars, fewer complications and a quicker recovery. While this is to the patient's benefit, it has the drawback of limiting the physician's perception largely to visual feedback through a camera mounted on a rod lens: the laparoscope. Conventional laparoscopes are limited by “imitating” the human eye. Multispectral cameras remove this arbitrary restriction of recording only red, green and blue colors. Instead, they capture many specific bands of light. Although these could help characterize important indications such as ischemia and early-stage adenoma, the lack of powerful digital image processing prevents realizing the technique's full potential. The primary objective of this thesis was to pioneer fluent functional multispectral imaging (MSI) in laparoscopy. The main technical obstacles were: (1) the lack of image analysis concepts that provide both high accuracy and speed; (2) multispectral image recording is slow, typically ranging from seconds to minutes; (3) obtaining a quantitative ground truth for the measurements is hard or even impossible. To overcome these hurdles and enable functional laparoscopy, for the first time in this field physical models are combined with powerful machine learning techniques. The physical model is employed to create highly accurate simulations, which in turn teach the algorithm to rapidly relate multispectral pixels to underlying functional changes. To reduce the domain shift introduced by learning from simulations, a novel transfer learning approach automatically adapts generic simulations to match almost arbitrary recordings of visceral tissue. In combination with the only available video-rate capable multispectral sensor, the method pioneers fluent perfusion monitoring with MSI. This system was carefully tested in a multistage process, involving in silico quantitative evaluations, tissue phantoms and a porcine study. Clinical applicability was ensured through in-patient recordings in the context of partial nephrectomy; in these, the novel system characterized ischemia live during the intervention. Verified against a fluorescence reference, the results indicate that fluent, non-invasive ischemia detection and monitoring are now possible. In conclusion, this thesis presents the first multispectral laparoscope capable of video-rate functional analysis. The system was successfully evaluated in in-patient trials, and future work should be directed towards evaluation of the system in a larger study. Due to the broad applicability and the large potential clinical benefit of the presented functional estimation approach, I am confident that descendants of this system will become an integral part of the next-generation OR.
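
    A minimal sketch of the learn-from-simulations idea described above: a regressor is trained on (simulated spectrum, functional parameter) pairs and then applied to every pixel of a multispectral frame. The toy "simulator" below is only a placeholder for the physical tissue model; the band count, parameter ranges and the choice of a random forest are assumptions, not the thesis' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

N_BANDS = 8
rng = np.random.default_rng(0)

def simulate_spectra(n):
    """Toy forward model: an oxygenation value in [0, 1] tilts the band reflectances."""
    oxy = rng.uniform(0.0, 1.0, size=n)
    base = rng.uniform(0.2, 0.8, size=(n, N_BANDS))
    slope = np.linspace(-0.3, 0.3, N_BANDS)                   # oxygenation tilts the spectrum
    spectra = base + oxy[:, None] * slope[None, :]
    spectra += rng.normal(scale=0.01, size=spectra.shape)     # sensor noise
    return spectra.astype(np.float32), oxy.astype(np.float32)

# Train on simulated data
X_train, y_train = simulate_spectra(20_000)
model = RandomForestRegressor(n_estimators=50, n_jobs=-1).fit(X_train, y_train)

# Apply per pixel to a (H, W, bands) multispectral frame
frame = rng.uniform(0.0, 1.0, size=(64, 64, N_BANDS)).astype(np.float32)
oxy_map = model.predict(frame.reshape(-1, N_BANDS)).reshape(64, 64)
print(oxy_map.shape)  # (64, 64) functional map, e.g. estimated oxygenation
```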

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities that relate to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to give practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.

    A Multicamera System for Gesture Tracking With Three Dimensional Hand Pose Estimation

    The goal of any visual tracking system is to successfully detect and then follow an object of interest through a sequence of images. The difficulty of tracking an object depends on its dynamics, its motion and its characteristics, as well as on the environment. For example, tracking an articulated, self-occluding object such as a signing hand has proven to be a very difficult problem. The focus of this work is on tracking and pose estimation with applications to hand gesture interpretation. An approach that attempts to integrate the simplicity of a region tracker with single-hand 3D pose estimation methods is presented. Additionally, this work delves into the pose estimation problem. This is accomplished both by analyzing hand templates composed of their morphological skeleton and by addressing the skeleton's inherent instability. Ligature points along the skeleton are flagged in order to determine their effect on skeletal instabilities. Tested on real data, the analysis finds that flagging ligature points increases the match strength of high-similarity image-template pairs by about 6%. The effectiveness of this approach is further demonstrated in a real-time multicamera hand tracking system that tracks hand gestures through three-dimensional space and estimates the three-dimensional pose of the hand.
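
    A minimal sketch, not the thesis' implementation, of the skeleton-based template idea: a binary hand silhouette is reduced to its morphological skeleton, and skeleton pixels with three or more skeleton neighbours are flagged as candidate branch/ligature points. The toy silhouette and the neighbour-count criterion are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

# Toy binary silhouette standing in for a segmented hand template
mask = np.zeros((128, 128), dtype=bool)
mask[20:110, 50:80] = True   # "palm"
mask[20:60, 80:95] = True    # "thumb"

skeleton = skeletonize(mask)

# Count 8-connected skeleton neighbours of every skeleton pixel
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])
neighbours = convolve(skeleton.astype(np.uint8), kernel, mode='constant')
ligature_candidates = skeleton & (neighbours >= 3)

print(int(skeleton.sum()), "skeleton pixels,",
      int(ligature_candidates.sum()), "flagged branch/ligature candidates")
```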

    Communication of Digital Material Appearance Based on Human Perception

    In daily life, we encounter digital materials and interact with them in numerous situations, for instance when we play computer games, watch a movie, see a billboard in the metro station or buy new clothes online. While some of these virtual materials are given by computational models that describe the appearance of a particular surface based on its material and the illumination conditions, others are presented as simple digital photographs of real materials, as is usually the case for material samples from online retail stores. The use of computer-generated materials entails significant advantages over plain images, as they allow realistic experiences in virtual scenarios, cooperative product design, advertising in the prototype phase, or the exhibition of furniture and wearables in specific environments. However, even though exceptional material reproduction quality has been achieved in the domain of computer graphics, current technology is still far from highly accurate photo-realistic virtual material reproductions across the wide range of existing categories and, for this reason, many material catalogs still use pictures or even physical material samples to illustrate their collections. An important reason for this gap between digital and real material appearance is that the connections between physical material characteristics and the visual quality perceived by humans are far from well understood. Our investigations intend to shed some light in this direction. Concretely, we explore the ability of state-of-the-art digital material models to communicate physical and subjective material qualities, observing that part of the tactile/haptic information (e.g. thickness, hardness) is missing due to the geometric abstractions intrinsic to the model. Consequently, in order to account for the information lost during the digitization process, we investigate the interplay between different sensing modalities (vision and hearing) and discover that particular sound cues, in combination with visual information, facilitate the estimation of such tactile material qualities. One of the shortcomings when studying material appearance is the lack of perceptually derived metrics able to answer questions like "are materials A and B more similar than C and D?", which arise in many computer graphics applications. In the absence of such metrics, our studies compare different appearance models in terms of how capable they are of conveying a collection of meaningful perceptual qualities. To address this problem, we introduce a methodology to compute the perceived pairwise similarity between textures from material samples that makes use of patch-based texture synthesis algorithms and is inspired by the notion of Just-Noticeable Differences. Our technique overcomes some of the issues posed by previous texture similarity collection methods and produces meaningful distances between samples.
    In summary, with the contents presented in this thesis we delve deeply into how humans perceive digital and real materials through different senses, acquire a better understanding of texture similarity by developing a perceptually based metric, and provide a groundwork for further investigations in the perception of digital materials.
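
    A minimal sketch of a patch-based texture distance in the spirit of the method described above (it is not the thesis' perceptual metric): each texture is cut into small patches, and the distance is the average, in both directions, of every patch's distance to its nearest neighbour in the other texture. The patch size and the symmetric nearest-neighbour formulation are assumptions.

```python
import numpy as np

def extract_patches(tex, size=8, stride=8):
    """Cut a greyscale texture into flattened, non-overlapping patches."""
    h, w = tex.shape[:2]
    patches = [tex[y:y + size, x:x + size].ravel()
               for y in range(0, h - size + 1, stride)
               for x in range(0, w - size + 1, stride)]
    return np.asarray(patches, dtype=np.float32)

def directed_distance(a, b):
    """Mean distance from each patch in `a` to its nearest patch in `b`."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise patch distances
    return d.min(axis=1).mean()

def texture_distance(tex_a, tex_b, size=8):
    pa, pb = extract_patches(tex_a, size), extract_patches(tex_b, size)
    return 0.5 * (directed_distance(pa, pb) + directed_distance(pb, pa))

rng = np.random.default_rng(1)
tex1, tex2 = rng.random((64, 64)), rng.random((64, 64))
print(texture_distance(tex1, tex2))
```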