103 research outputs found

    Pedestrian detection in far-infrared daytime images using a hierarchical codebook of SURF

    One of the main challenges in intelligent vehicles concerns pedestrian detection for driving assistance. Recent experiments have shown that state-of-the-art descriptors provide better performance for pedestrian classification on the far-infrared (FIR) spectrum than on the visible one, even in daytime conditions. In this paper, we propose a pedestrian detector with an on-board FIR camera. Our main contribution is the exploitation of the specific characteristics of FIR images to design a fast, scale-invariant and robust pedestrian detector. Our system consists of three modules, each based on speeded-up robust feature (SURF) matching. The first module generates regions of interest (ROI): in FIR images pedestrian shapes may vary over large scales, but heads usually appear as light regions, so ROI are detected with a high recall rate by a hierarchical codebook of SURF features located in head regions. The second module performs pedestrian full-body classification with an SVM, which improves precision at low computational cost. In the third module, we combine the mean shift algorithm with inter-frame scale-invariant SURF feature tracking to enhance the robustness of our system. The experimental evaluation shows that our system outperforms, in the FIR domain, the state-of-the-art Haar-like AdaBoost cascade, histogram of oriented gradients (HOG)/linear SVM (linSVM) and MultiFtr pedestrian detectors, trained on FIR images.
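The mean shift step in the third module is a generic mode-seeking procedure. The sketch below is a minimal NumPy illustration of the core update on synthetic 2-D points; it is not the authors' tracker, and the SURF feature extraction it would operate on in the real system is omitted.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50, tol=1e-5):
    """Shift each point toward its local density mode using a Gaussian kernel."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        # pairwise squared distances from current mode estimates to the data
        d2 = ((modes[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))          # kernel weights
        new = (w[:, :, None] * points[None, :, :]).sum(1) / w.sum(1, keepdims=True)
        if np.abs(new - modes).max() < tol:
            modes = new
            break
        modes = new
    return modes

# two well-separated synthetic clusters collapse onto two modes
pts = np.vstack([np.random.RandomState(0).randn(20, 2) * 0.1,
                 np.random.RandomState(1).randn(20, 2) * 0.1 + 5.0])
modes = mean_shift(pts, bandwidth=0.5)
```

In the detector this update would run on feature locations across frames rather than on synthetic points; the bandwidth controls how far apart detections can be while still merging into one mode.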

    Dimensionality reduction and sparse representations in computer vision

    The proliferation of camera-equipped devices, such as netbooks, smartphones and game stations, has led to a significant increase in the production of visual content. This visual information could be used for understanding the environment and offering a natural interface between users and their surroundings. However, the massive amounts of data and the high computational cost associated with them encumber the transfer of sophisticated vision algorithms to real-life systems, especially ones that exhibit resource limitations such as restrictions in available memory, processing power and bandwidth. One approach to tackling these issues is to generate compact and descriptive representations of image data by exploiting inherent redundancies. We propose the investigation of dimensionality reduction and sparse representations in order to accomplish this task. In dimensionality reduction, the aim is to reduce the dimensions of the space where image data reside in order to allow resource-constrained systems to handle them and, ideally, provide a more insightful description. This goal is achieved by exploiting the inherent redundancies that many classes of images, such as faces under different illumination conditions and objects from different viewpoints, exhibit. We explore the description of natural images by low-dimensional non-linear models called image manifolds and investigate the performance of computer vision tasks such as recognition and classification using these low-dimensional models. In addition to dimensionality reduction, we study a novel approach to representing images as a sparse linear combination of dictionary examples. We investigate how sparse image representations can be used for a variety of tasks, including low-level image modeling and higher-level semantic information extraction.
Using tools from dimensionality reduction and sparse representation, we propose the application of these methods in three hierarchical image layers, namely low-level features, mid-level structures and high-level attributes. Low-level features are image descriptors that can be extracted directly from the raw image pixels and include pixel intensities, histograms, and gradients. In the first part of this work, we explore how various techniques in dimensionality reduction, ranging from traditional image compression to the recently proposed Random Projections method, affect the performance of computer vision algorithms such as face detection and face recognition. In addition, we discuss a method that is able to increase the spatial resolution of a single image, without using any training examples, according to the sparse representations framework. In the second part, we explore mid-level structures, including image manifolds and sparse models, which are produced by abstracting information from low-level features and offer compact modeling of high-dimensional data. We propose novel techniques for generating more descriptive image representations and investigate their application in face recognition and object tracking. In the third part of this work, we propose the investigation of a novel framework for representing the semantic contents of images. This framework employs high-level semantic attributes that aim to bridge the gap between the visual information of an image and its textual description by utilizing low-level features and mid-level structures. This innovative paradigm offers revolutionary possibilities, including recognizing the category of an object from purely textual information without providing any explicit visual example.
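The Random Projections method mentioned above reduces dimensionality with a data-independent random matrix. A minimal NumPy sketch (dimensions chosen arbitrarily for illustration, not taken from the thesis) shows how pairwise distances approximately survive the projection, per the Johnson-Lindenstrauss lemma:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 1000, 128            # points, ambient dim, reduced dim (hypothetical sizes)
X = rng.standard_normal((n, d))

# Random projection: a dense Gaussian matrix scaled by 1/sqrt(k)
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R                            # reduced representation, shape (n, k)

# Pairwise distances are approximately preserved
i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
ratio = proj / orig                  # close to 1.0 for moderate k
```

Because R is independent of the data, the projection is cheap to generate and apply on a resource-constrained device, which is precisely its appeal in this setting.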

    From light rays to 3D models


    A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained GeoObservational Sensor Systems

    As geographical observational data capture, storage and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, there are still many challenges and open research questions. For example, data quality, provenance and heterogeneity remain an issue due to the complexity of geo-spatial data and information representation. Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures, and so they often do not fully capture the true meaning of the associated datasets. Furthermore, the data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way. The health domain experiences similar challenges in representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health), two categories or levels of domain concepts exist: those that remain stable over a long period of time, and those that are prone to change as the domain knowledge evolves and new discoveries are made. Health informaticians have developed a sophisticated two-level modelling systems design approach for electronic health documentation over many years and, with the use of archetypes, have shown how data, information, and knowledge interoperability among heterogeneous systems can be achieved.
This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches. A detailed review of state-of-the-art SDIs, geo-spatial standards and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium’s (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. An evaluation of the method was undertaken using two specific use-case scenarios. Information modelling was performed using the two-level modelling method to show how existing historical ocean observing datasets can be expressed semantically and harmonized using two-level modelling. Also, the flexibility of the approach was investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system. This work has demonstrated that two-level modelling can be translated to the geospatial domain and then further developed to be used within a constrained technological sensor system, using traditional wireless sensor networks, semantic web technologies and Internet of Things based technologies. Domain-specific evaluation results show that two-level modelling presents a viable approach to achieving semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city-based air quality observing scenarios. This has been demonstrated through the re-purposing of selected, existing geospatial data models and standards.
However, it was found that re-using existing standards requires careful ontological analysis per domain concept, and so caution is recommended in assuming the wider applicability of the approach. While the benefits of adopting a two-level information modelling approach to geospatial information modelling are potentially great, it was found that translation to a new domain is complex. The complexity of the approach was found to be a barrier to adoption, especially in commercial projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods and fundamental geo-archetypes has been developed. However, during this work it was not possible to form the required rich community of supporters to fully validate the geo-archetypes. Therefore, the findings of this work are not exhaustive, and the archetype models produced are only indicative. The findings of this work can be used as the basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domains. Ultimately, the outcomes of this work are to recommend further development and evaluation of the approach, building on the positive results thus far and the base software artefacts developed to support the approach.
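As a rough illustration of the two-level idea (a stable reference model plus evolving archetype constraints), the following Python sketch uses hypothetical field names, units and ranges; it is not taken from the framework or the O&M re-profiling described above.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Stable reference-model level: generic structure shared by all domains."""
    observed_property: str
    value: float
    unit: str

# Archetype level: domain constraints that can evolve without touching the
# reference model. Property names, units and ranges are hypothetical.
SEA_TEMP_ARCHETYPE = {
    "observed_property": {"sea_water_temperature"},
    "unit": {"degC"},
    "value_range": (-2.0, 40.0),
}

def validate(obs, archetype):
    """Check a reference-model instance against an archetype's constraints."""
    lo, hi = archetype["value_range"]
    return (obs.observed_property in archetype["observed_property"]
            and obs.unit in archetype["unit"]
            and lo <= obs.value <= hi)

ok = validate(Observation("sea_water_temperature", 12.5, "degC"), SEA_TEMP_ARCHETYPE)
bad = validate(Observation("sea_water_temperature", 55.0, "degC"), SEA_TEMP_ARCHETYPE)
```

The point of the split is that new domain knowledge (a new species of measurement, a tightened plausible range) changes only the archetype dictionary, while every system that understands `Observation` keeps interoperating.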

    LIDAR based semi-automatic pattern recognition within an archaeological landscape

    LIDAR data offer a novel approach to locating and monitoring cultural heritage in the landscape, especially in areas that are difficult to reach, such as forests, rough terrain or very remote regions. In the conventional approach, the manual localization and mapping of archaeological information in a cultural landscape is a very time-consuming task of cultural heritage management. To improve and complement the detection and management of cultural heritage, computer-aided methods can offer new solutions, and can even enable the identification of details that are not recognizable to the human eye in visual inspection. From an archaeological perspective, this dissertation is motivated by the evaluation of LIDAR terrain models containing archaeological features as digital "LIDAR landscapes", using automated and semi-automated methods to identify further archaeological patterns of ground monuments. It draws on algorithmic approaches from pattern recognition and computer vision that are as simple and as freely available (open source) as possible, for segmenting and classifying the LIDAR landscapes in order to detect archaeological monuments over large areas. The dissertation provides a comprehensive overview of the archaeological use and potential of LIDAR data and, on the basis of qualitative and quantitative approaches, defines the state of development of semi-automated detection of archaeological structures within archaeological prospection and remote sensing. It further discusses best-practice examples and the associated current state of research, and illustrates the quality of the detection of ground monuments through semi-automated segmentation and classification of visualized LIDAR data.
Finally, it identifies the field for further applications, in which the author's own algorithmic template matching methods enable large-scale investigations of cultural heritage. In conclusion, it compares analogue and computer-aided pattern recognition of ground monuments, and discusses the further potential of LIDAR-based pattern recognition in archaeological cultural landscapes.
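Template matching of the kind applied to visualized LIDAR terrain is commonly implemented as normalized cross-correlation. The sketch below is a generic NumPy illustration on a synthetic mound-shaped target, not the dissertation's own procedure.

```python
import numpy as np

def ncc_match(image, template):
    """Slide a template over an image and return the normalized
    cross-correlation score at each valid offset (scores in [-1, 1])."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    H, W = image.shape
    scores = np.zeros((H - th + 1, W - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            pn = np.sqrt((p ** 2).sum())
            scores[y, x] = (p * t).sum() / (pn * tn + 1e-9)
    return scores

# Synthetic "terrain": a flat field with one mound-like bump as the target.
img = np.zeros((30, 30))
bump = np.add.outer(np.hanning(5), np.hanning(5))
img[10:15, 18:23] += bump
scores = ncc_match(img, bump)
peak = np.unravel_index(scores.argmax(), scores.shape)   # location of best match
```

Mean subtraction and normalization make the score insensitive to local brightness and contrast, which matters for hillshade or other LIDAR visualizations where illumination varies across the scene.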

    Principal Component Analysis

    This book is aimed at raising awareness among researchers, scientists and engineers of the benefits of Principal Component Analysis (PCA) in data analysis. In this book, the reader will find applications of PCA in fields such as image processing, biometrics, face recognition and speech processing. It also covers the core concepts and the state-of-the-art methods in data analysis and feature extraction.
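As a minimal illustration of the book's subject, PCA can be computed from the singular value decomposition of centered data. The NumPy sketch below projects correlated 3-D points onto their first principal component; the example data are synthetic.

```python
import numpy as np

def pca(X, k):
    """Project data onto its top-k principal components via SVD."""
    mu = X.mean(axis=0)
    Xc = X - mu                        # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                # principal directions, shape (k, d)
    scores = Xc @ components.T         # low-dimensional coordinates, shape (n, k)
    explained = (S[:k] ** 2) / (S ** 2).sum()   # fraction of variance kept
    return scores, components, explained

# Correlated 3-D data that is essentially 1-D: one component dominates.
rng = np.random.default_rng(0)
t = rng.standard_normal(500)
X = np.column_stack([t, 2 * t, -t]) + 0.01 * rng.standard_normal((500, 3))
scores, comps, explained = pca(X, 1)
```

For face recognition and similar applications described in the book, the same decomposition is applied to vectorized images, and the explained-variance ratio guides how many components to keep.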

    Social media mining as an opportunistic citizen science model in ecological monitoring: a case study using invasive alien species in forest ecosystems.

    Major environmental, social and economic changes threatening the resilience of ecosystems world-wide, and new demands on a broad range of forest ecosystem services, present new challenges for forest management and monitoring. New risks and threats such as invasive alien species imply fundamental challenges for traditional forest management strategies, which have been based on assumptions of permanent ecosystem stability. Adaptive management and monitoring are called for to detect new threats and changes as early as possible, but this requires large-scale monitoring, and monitoring resources remain a limiting factor. Accordingly, forest practitioners and scientists have begun to turn to public support in the form of “citizen science” to react flexibly to specific challenges and gather critical information. The emergence of ubiquitous mobile and internet technologies provides a new digital source of information in the form of so-called social media that essentially turns users of these media into environmental sensors and provides an immense volume of publicly accessible, ambient environmental information. Mining social media content, such as Facebook, Twitter, wikis or blogs, has been shown to make critical contributions to epidemic disease monitoring, emergency management and earthquake detection. Applications in the ecological domain remain anecdotal, and a methodical exploration of this domain is lacking. Using the example of the micro-blogging service Twitter and invasive alien species in forest ecosystems, this study provides a methodical exploration and assessment of social media for forest monitoring.
Social media mining is approached as an opportunistic citizen science model, and the data, activities and contributors are analyzed in comparison to deliberate ecological citizen science monitoring. The results show that Twitter is a valuable source of information on invasive alien species and that social media in general could supplement traditional monitoring data. Twitter proves to be a rich source of primary biodiversity observations, including those of the selected invasive species. In addition, it is shown that Twitter content provides distinctive thematic profiles that relate closely to key characteristics of the explored invasive alien species and provide valuable insights for invasive species management. Furthermore, the study shows that while there are underutilized opportunities for citizen science in forest monitoring, the contributors of biodiversity observations on Twitter show a more than casual interest in this subject and represent a large pool of potential contributors to deliberate citizen science monitoring efforts. In summary, social online media are a valuable source of ecological monitoring information in general and deserve intensified exploration to arrive at operational systems supporting real-time risk assessments.
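The first filtering stage of such social media mining, spotting candidate species mentions in tweet text, can be sketched as simple keyword matching. The species list and tweets below are hypothetical illustrations; the study's actual pipeline is not described at this level of detail in the abstract.

```python
import re

# Hypothetical watch list of invasive species (common name -> scientific name).
SPECIES = {
    "asian longhorned beetle": "Anoplophora glabripennis",
    "emerald ash borer": "Agrilus planipennis",
}

def candidate_observations(tweets):
    """Return (tweet, scientific name) pairs for tweets whose text
    mentions a watched species, ignoring case and extra whitespace."""
    hits = []
    for tweet in tweets:
        norm = re.sub(r"\s+", " ", tweet.lower())
        for common, scientific in SPECIES.items():
            if common in norm or scientific.lower() in norm:
                hits.append((tweet, scientific))
    return hits

tweets = [
    "Spotted an Emerald Ash Borer on our street today!",
    "Nice walk in the forest this morning.",
]
hits = candidate_observations(tweets)
```

A production system would need to go further (geolocation, deduplication, distinguishing genuine sightings from news chatter), which is exactly where the thematic profiling described above comes in.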

    Cell-Free Enabled Bioproduction and Biological Discovery

    As our understanding of the microbial world has progressed, so too has the backlog of information and open questions generated by the thousands of uncharacterized proteins and metabolites with potential applications as biofuels, therapeutics, and biomaterials. To address this problem, new tools need to be developed in order to rapidly test and take advantage of uncharacterized proteins and metabolites. Cell-free systems have developed into a high-throughput and scalable tool for synthetic biology and metabolic engineering, with applications across multiple disciplines. The work presented in this dissertation leverages cell-free systems as a conduit for the exploration of protein function and metabolite production using two complementary approaches. The first elucidates interaction networks associated with secondary metabolite production using a computationally assisted pathway description pipeline that employs bioinformatic searches of genome databases, structural modeling, and ligand-docking simulations to predict the gene products most likely to be involved in a metabolic pathway. In vitro reconstructions of the pathway are then modularly assembled and chemically verified in Escherichia coli lysates in order to differentiate between active and inactive pathways. The second takes a systems and synthetic biology approach to engineering Escherichia coli extracts capable of directing flux towards specific metabolites. Using growth- and genome-engineering-based methods, we produced cell-free proteomes capable of creating unconventional metabolic states with minimal impact on the cell in vivo. As a result of this work, we have significantly expanded our ability to use cell extracts outside of their native context to solve metabolic engineering problems and provide engineers with new tools to rapidly explore the functions of proteins and test novel metabolic pathways.
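The modular assembly of candidate pathway reconstructions described above is, computationally, a combinatorial enumeration: one candidate gene product per pathway step, every combination tested in lysate. The Python sketch below illustrates the bookkeeping with hypothetical step and gene names; it is not the dissertation's pipeline.

```python
from itertools import product

# Hypothetical candidate homologs predicted for each step of a pathway
# (e.g. by bioinformatic search and docking-based ranking).
candidates = {
    "step1_kinase": ["geneA1", "geneA2"],
    "step2_reductase": ["geneB1"],
    "step3_transferase": ["geneC1", "geneC2", "geneC3"],
}

# Enumerate every modular reconstruction: one candidate per step.
reconstructions = [dict(zip(candidates, combo))
                   for combo in product(*candidates.values())]
```

Each resulting dictionary names one complete reconstruction to assemble and chemically verify; active combinations then identify which predicted gene products actually carry the pathway.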