20 research outputs found

    Mobile Wound Assessment and 3D Modeling from a Single Image

    Get PDF
    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously hard-to-reach patients. We have designed a complete mobile wound assessment platform to address the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example if the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even from a single wound image.
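The projective texture mapping step described above amounts to projecting each 3D anatomy-model vertex through a pinhole camera model to look up its colour in the wound photograph. The following is a minimal illustrative sketch, not the authors' implementation; the camera intrinsics `K` and pose `R`, `t` are assumed to be known (e.g. from calibration or pose estimation):

```python
import numpy as np

def projective_texture_coords(vertices, K, R, t, width, height):
    """Project 3D mesh vertices through a pinhole camera model to obtain
    normalized texture (UV) coordinates into the wound image.
    vertices: (N, 3) model-space points; K: 3x3 intrinsics; R, t: pose."""
    cam = vertices @ R.T + t           # transform points into camera space
    pix = cam @ K.T                    # homogeneous pixel coordinates
    pix = pix[:, :2] / pix[:, 2:3]     # perspective divide
    return pix / np.array([width, height])  # normalize to [0, 1] UV space
```

In a full pipeline, vertices facing away from the camera or projecting outside the [0, 1] UV range would be culled before texturing.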

    Automatic face recognition using stereo images

    Get PDF
    Face recognition is an important pattern recognition problem, in the study of both natural and artificial learning problems. Compared to other biometrics, it is non-intrusive, non-invasive and requires no participation from the subjects. As a result, it has many applications varying from human-computer interaction to access control and law enforcement to crowd surveillance. In typical optical image based face recognition systems, the systematic variability arising from representing the three-dimensional (3D) shape of a face by a two-dimensional (2D) illumination intensity matrix is treated as random variability. Multiple examples of the face displaying varying pose and expressions are captured in different imaging conditions. The imaging environment, pose and expressions are strictly controlled and the images undergo rigorous normalisation and pre-processing. This may be implemented in a partially or a fully automated system. Although these systems report high classification accuracies (>90%), they lack versatility and tend to fail when deployed outside laboratory conditions. Recently, more sophisticated 3D face recognition systems harnessing the depth information have emerged. These systems usually employ specialist equipment such as laser scanners and structured light projectors. Although more accurate than 2D optical image based recognition, these systems are equally difficult to implement in a non-co-operative environment. Existing face recognition systems, both 2D and 3D, detract from the main advantages of face recognition and fail to fully exploit its non-intrusive capacity. This is either because they rely too much on subject co-operation, which is not always available, or because they cannot cope with noisy data. The main objective of this work was to investigate the role of depth information in face recognition in a noisy environment.
A stereo-based system, inspired by human binocular vision, was devised using a pair of manually calibrated off-the-shelf digital cameras in a stereo setup to compute depth information. Depth values extracted from 2D intensity images using stereoscopy are extremely noisy, and as a result this approach to face recognition is rare. This was confirmed by the results of our experimental work. Noise in the set of correspondences, camera calibration and triangulation led to inaccurate depth reconstruction, which in turn led to poor classifier accuracy for both 3D surface matching and 2D depth maps. Recognition experiments are performed on the Sheffield Dataset, consisting of 692 images of 22 individuals with varying pose, illumination and expressions.
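For a rectified stereo pair like the one described, depth follows from disparity by triangulation, and the same formula shows why the recovered depth is so noisy: a fixed disparity error is amplified into an ever larger depth error as range grows. A minimal sketch with hypothetical parameter values, not the system's actual code:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Classic triangulation for a rectified stereo pair: Z = f * B / d,
    where f is the focal length in pixels, B the camera baseline in metres,
    and d the disparity in pixels. Zero disparity maps to infinite depth."""
    disparity = np.asarray(disparity, dtype=float)
    with np.errstate(divide="ignore"):   # tolerate d = 0 (point at infinity)
        return focal_px * baseline_m / disparity
```

With f = 800 px and a 10 cm baseline, a disparity of 8 px gives a depth of 10 m; a one-pixel disparity error there shifts the estimate by over a metre, which is consistent with the noisy reconstructions reported above.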

    Template based shape processing

    Get PDF
    As computers can only represent and process discrete data, information gathered from the real world always has to be sampled. While it is nowadays possible to sample many signals accurately and thus generate high-quality reconstructions (for example of images and audio data), accurately and densely sampling 3D geometry is still a challenge. The signal samples may be corrupted by noise and outliers, and contain large holes due to occlusions. These issues become even more pronounced when also considering the temporal domain. Because of this, developing methods for accurate reconstruction of shapes from a sparse set of discrete data is an important aspect of the computer graphics processing pipeline. In this thesis we propose novel approaches to including semantic knowledge in reconstruction processes using template based shape processing. We formulate shape reconstruction as a deformable template fitting process, where we try to fit a given template model to the sampled data. This approach allows us to present novel solutions to several fundamental problems in the area of shape reconstruction. We address static problems like constrained texture mapping and semantically meaningful hole-filling in surface reconstruction from 3D scans, temporal problems such as mesh based performance capture, and finally dynamic problems like the estimation of physically based material parameters of animated templates.
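The template-fitting formulation can be illustrated in its simplest, rigid special case. The sketch below is a standard Kabsch/Procrustes least-squares alignment and assumes known point correspondences; the deformable fitting developed in the thesis generalizes far beyond this, so treat it only as a toy instance of the idea:

```python
import numpy as np

def rigid_template_fit(template, samples):
    """Least-squares rigid alignment (Kabsch algorithm) of a template point
    set to corresponding data samples: finds R, t minimizing
    sum_i || R @ template[i] + t - samples[i] ||^2."""
    ct, cs = template.mean(0), samples.mean(0)
    H = (template - ct).T @ (samples - cs)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection in the optimal map.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cs - R @ ct
    return R, t
```

Deformable template fitting replaces the single rigid transform with per-region or per-vertex deformations plus regularization, and alternates correspondence estimation with alignment.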

    Remote vision based multi gesture interaction in natural indoor environments

    Get PDF
    Computer vision as a sensor for interaction with technical systems has attracted increasing interest in the past few years. In many of the proposed case studies and applications, the user's current pose or motion is observed by cameras attached to a computer, and the computer's reaction is displayed to the user, who changes the pose accordingly in order to reach a desired goal of interaction. The focus of this thesis is on two major difficulties of computer vision-based, or perceptual, human-computer interaction: distinguishing gestures from arbitrary postures or motions, and coping with troubles caused by natural environments. Furthermore, we address the question of decoupling the computer vision-based interface from the application in order to achieve independence between both, analogously to today's application-independent graphical user interfaces.
The main contributions are: a so-called “interaction space architecture”, which decouples the computer vision interface from the application by using a sequence of interaction spaces mapped onto each other; a concept of “multi-type gesture interaction”, which combines several gestures with spatial and temporal constraints in order to increase the reliability of gesture recognition; two concepts for optical calibration of the interaction space, which reduce the effort of integrating the cameras as sensors into the environment of interaction; a solution to the problem of combining pointing gestures with static hand gestures, by using static cameras for global views and computer-controlled active cameras for locally adapted views; and a combination of several methods for coping with unreliable image segmentation results caused by the varying illumination typical of natural environments: error detection and contour correction from image sequences and multiple views, situation-dependent signal processing, and automatic parameter control. The concepts are demonstrated with a system for computer vision-based interaction with a back-projection wall, which has been implemented and evaluated.

    Linear and Exact Extended Formulations

    Get PDF
    Matematicko-fyzikální fakulta (Faculty of Mathematics and Physics)

    Modeling and Simulation in Engineering

    Get PDF
    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the design process of products in various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to choose the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation), but without altering the precision of the results.

    Automatic registration of multi-modal airborne imagery

    Get PDF
    This dissertation presents a novel technique based on Maximization of Mutual Information (MMI) and multi-resolution analysis to design an algorithm for automatic registration of multi-sensor images captured by various airborne cameras. In contrast to conventional methods that extract and employ feature points, MMI-based algorithms utilize the mutual information between two given images to compute the registration parameters. These, in turn, are then used to perform multi-sensor registration of remote sensing images. The results indicate that the proposed algorithms are very effective in registering infrared images taken at three different wavelengths with a high-resolution visual image of a given scene. The MMI technique has proven to be very robust with images acquired with the Wild Airborne Sensor Program (WASP) multi-sensor instrument. This dissertation also shows how wavelet based techniques can be used in a multi-resolution analysis framework to significantly increase computational efficiency for images captured at different resolutions. The fundamental result of this thesis is the technique of using features in the images to enhance the robustness, accuracy and speed of MMI registration. This is done by using features to focus MMI on places that are rich in information. The new algorithm integrates smoothly with MMI and avoids any need for feature matching; the applications of such extensions are then studied. The first extension is the registration of cartographic maps with image data, which is very important for map updating and change detection. This is a difficult problem because map features such as roads and buildings may be mis-located, and features extracted from images may not correspond to map features. Nonetheless, it is possible to obtain a general global registration of maps and images by applying statistical techniques to map and image features.
To solve the map-to-image registration problem, this research extends the MMI technique with a focus-of-attention mechanism that forces MMI to utilize correspondences that have a high probability of being information-rich. The gradient-based and exhaustive parameter search methods are also compared. Both qualitative and quantitative analyses are used to assess the registration accuracy. Another difficult application is the fusion of LIDAR elevation or intensity data with imagery. Such applications are even more challenging when automated registration algorithms are needed. To improve registration robustness, a salient area extraction algorithm is developed to overcome the distortion in airborne and satellite images from different sensors. This extension combines the SIFT and Harris feature detection algorithms with MMI and a Harris corner label map to address difficult multi-modal registration problems through a combination of selection and focus-of-attention mechanisms together with mutual information. This two-step approach overcomes the above problems and provides a good initialization for the final step of the registration process. Experimental results are provided that demonstrate a variety of mapping applications, including multi-modal IR imagery, map-and-image registration, and image-and-LIDAR registration.
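The mutual-information similarity measure at the heart of MMI registration can be estimated from a joint intensity histogram of the two images. The sketch below is illustrative only (the bin count and the plain histogram estimator are assumptions, not the dissertation's exact implementation); a registration loop would search over transform parameters, resampling one image each time, to maximize this value:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information I(A; B) between two equally sized images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint probability table
    px = pxy.sum(axis=1, keepdims=True)    # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)    # marginal of image B
    nz = pxy > 0                           # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Because the measure depends only on the statistical dependence of intensities, not on their absolute values, it tolerates the very different radiometry of IR, visual, map, and LIDAR data, which is why it suits the multi-modal problems above.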

    Sixth Biennial Report: August 2001 - May 2003

    No full text

    "Die Freude an der Gestalt" : méthodes, figures et pratiques de la géométrie au début du dix-neuvième siècle

    Get PDF
    The standard history of nineteenth-century geometry begins with Jean Victor Poncelet's contributions, which then spread to Germany alongside an opposition between Julius Plücker, an analytic geometer, and Jakob Steiner, a synthetic geometer. Our questions centre on how geometers distinguished methods, when opposition arose, in what ways geometry disseminated from Poncelet to Plücker and Steiner, and whether this geometry was "modern" as claimed. We first examine Poncelet's argument that within pure geometry the figure was never lost from view, while it could be obscured by the calculations of algebra. Our case study reveals visual attention within constructive problem solving, regardless of method. Further, geometers manipulated and represented figures through textual descriptions and coordinate equations. We also consider the debates involved as a medium for communicating geometry, in which Poncelet and Gergonne in particular developed strategies for introducing new geometry to a conservative audience. We then turn to Plücker and Steiner. Through comparing their common research, we find that Plücker practiced a "pure analytic geometry" that avoided calculation, while Steiner admired "synthetic geometry" because of its organic unity. These qualities contradict usual descriptions of analytic geometry as computational or synthetic geometry as ad hoc. Finally, we study contemporary French books on geometry and show that their methodological divide was grounded in student prerequisites, where "modern" implied the use of algebra. By contrast, research publications exhibited evolving forms of geometry that evaded dichotomous categorization.
The standard history of projective geometry emphasizes the nineteenth-century opposition between analytic and synthetic methods. We ask how nineteenth-century geometers actually did, or did not, distinguish between their methods, and to what extent this geometry was "modern" as its practitioners, and later their historians, claimed. Poncelet insisted on the central role of the figure, which in his view could be obscured by the calculations of algebra. We study his argument in action in construction problems solved by several different authors, such as the construction of a second-order curve having third-order contact with a given plane curve, of which five solutions appeared between 1817 and 1826. We show that visual attention is at the heart of such problem solving, independently of the method followed, that it is not reserved to figures, and that the debates were also a means of signalling new areas of research to an emerging audience. We then examine the reception of the new techniques and the use of figures in the work of two mathematicians usually described as opposed, one an algebraist, Plücker, and the other a defender of the synthetic approach, Steiner. Finally, we examine the claims of modernity in French geometry textbooks published during the first third of the nineteenth century. Gergonne, Plücker and Steiner alike developed forms of geometry that did not in fact conform to a dichotomous characterization, but responded in specific ways to the mathematical practices and modes of interaction of their time.