
    Contextual and Human Factors in Information Fusion

    Proceedings of: NATO Advanced Research Workshop on Human Systems Integration to Enhance Maritime Domain Awareness for Port/Harbour Security Systems, Opatija (Croatia), December 8-12, 2008. Context and human factors may be essential to improving the measurement process of each sensor, and the particular context of each sensor could be used to obtain a global definition of context in multisensor environments. Reality may be captured by the human sensory domain on the basis of machine stimuli, generating feedback that the machine can use at its different processing levels, adapting its algorithms and methods accordingly. Reciprocally, human perception of the environment can also be modelled by context in the machine. In the proposed model, both the machine and the human take sensory information from the environment and process it cooperatively until a decision or semantic synthesis is produced. In this work, we present a model for context representation and reasoning to be exploited by fusion systems. First, the structure and representation of contextual information must be determined before it can be exploited by a specific application. Under complex circumstances, the use of context information and human interaction can help to improve a tracking system's performance (for instance, video-based tracking systems may fail when dealing with object interaction, occlusions, crossings, etc.).
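
    As an illustration of the kind of context exploitation described above, the following minimal Python sketch shows one way a contextual cue (an occlusion or object crossing, reported by context reasoning or by a human operator) could widen a tracker's validation gate so that plausible measurements are not discarded while uncertainty is high. The function names, context flags and scaling factors are hypothetical and not taken from the paper.

    import numpy as np

    # Illustrative sketch (not from the paper): contextual cues widen the
    # validation gate used for measurement-to-track association.
    def mahalanobis_sq(z, z_pred, S):
        """Squared Mahalanobis distance between measurement z and prediction
        z_pred under innovation covariance S (1-D arrays, square matrix)."""
        d = z - z_pred
        return float(d @ np.linalg.inv(S) @ d)

    def context_gate(z, z_pred, S, base_threshold=9.21, occluded=False, crossing=False):
        """Accept a measurement if it falls inside a context-adjusted gate.
        base_threshold is roughly the chi-square 99% quantile for a 2-D
        measurement; the flags and scaling factors are assumptions for this sketch."""
        threshold = base_threshold
        if occluded:
            threshold *= 2.0   # tolerate larger innovations while the target is occluded
        if crossing:
            threshold *= 1.5   # crossing objects also raise association uncertainty
        return mahalanobis_sq(z, z_pred, S) <= threshold

    # Example with made-up numbers
    z, z_pred, S = np.array([10.4, 5.1]), np.array([10.0, 5.0]), 0.5 * np.eye(2)
    print(context_gate(z, z_pred, S, occluded=True))   # True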

    5th International Symposium on Ambient Intelligence

    Ambient Intelligence (AmI) is a recent paradigm emerging from Artificial Intelligence (AI), where computers are used as proactive tools that assist people with their day-to-day activities, making everyone's life more comfortable. Another main concern of AmI originates from the human-computer interaction domain and focuses on offering ways to interact with systems more naturally, by means of user-friendly interfaces. This field is evolving quickly, as witnessed by the emerging natural-language and gesture-based types of interaction. The inclusion of computational power and communication technologies in everyday objects is growing, and their embedding into our environments should be as invisible as possible. For AmI to be successful, human interaction with computing power and embedded systems in the surroundings should be smooth and happen without people actually noticing it. The only awareness people should have arises from AmI itself: more safety, comfort and wellbeing, emerging in a natural and inherent way. ISAmI is the International Symposium on Ambient Intelligence, which aims to bring together researchers from the various disciplines that constitute the scientific field of Ambient Intelligence to present and discuss the latest results, new ideas, projects and lessons learned, namely in terms of software and applications.

    Role of Pre-processing in Textual Data Fusion: Learn From the Croydon Tram Tragedy

    Tram and train derailments caused by human error call into question heavy investment in advanced control rooms and information-gathering systems. The 2016 Croydon disaster is recent evidence of the limits of such acquired systems in mitigating human shortcomings under disrupted circumstances. One intriguing remedy is to fuse continuous online textual data obtained from tram passengers and to apply that information to early warning and risk discovery. This possibility draws attention to textual data as a resource for data fusion. The focal subject of this paper is the role of pre-processing steps in low-level data fusion, which have been identified as a way to avoid wasting time and effort during information retrieval. Trends in online text-data pre-processing are reviewed, resulting in an outline proposal that admits passengers' reactions gathered through social media channels. The research outcome shows, through a case study, how data fusion could act as an impetus for the railway industry to participate actively in data exploration and information investigation.
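
    To make the pre-processing discussion concrete, here is a minimal Python sketch of the kind of low-level cleaning that could be applied to passenger posts before fusion: URL, mention and hashtag removal, normalisation, tokenisation and stop-word filtering. The rules and the tiny stop-word list are illustrative assumptions, not the authors' actual pipeline.

    import re

    # Illustrative pre-processing sketch for social-media text prior to fusion.
    STOP_WORDS = {"the", "a", "an", "is", "at", "on", "and", "to", "of", "in"}

    def preprocess(post):
        """Normalise a raw social-media post into clean tokens."""
        text = post.lower()
        text = re.sub(r"https?://\S+", " ", text)   # drop URLs
        text = re.sub(r"[@#]\w+", " ", text)        # drop mentions and hashtags
        text = re.sub(r"[^a-z\s]", " ", text)       # keep letters only
        tokens = text.split()
        return [t for t in tokens if t not in STOP_WORDS and len(t) > 2]

    print(preprocess("Tram shaking badly near #Croydon!! http://t.co/xyz @TfL"))
    # -> ['tram', 'shaking', 'badly', 'near']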

    Action recognition in visual sensor networks: a data fusion perspective

    Visual Sensor Networks have emerged as a new technology to bring computer vision algorithms to the real world. However, they impose restrictions on the computational resources and bandwidth available to solve target problems. This thesis is concerned with the definition of new, efficient algorithms to perform Human Action Recognition with Visual Sensor Networks. Human Action Recognition systems apply sequence modelling methods to integrate the temporal sensor measurements available. Among sequence modelling methods, the Hidden Conditional Random Field has shown great performance in sequence classification tasks, outperforming many other methods. However, no parameter estimation procedure with feature and model selection properties had been proposed for it. This thesis fills this gap by proposing a new objective function to optimise during training. The L2 regulariser employed in the standard objective function is replaced by an overlapping group-L1 regulariser that produces feature and model selection effects at the optimum. A gradient-based search strategy is proposed to find the optimal parameters of the objective function. Experimental evidence shows that Hidden Conditional Random Fields whose parameters are estimated with the proposed method have higher predictive accuracy than those estimated with the standard method, with a smaller inference cost. This thesis also deals with the problem of human action recognition from multiple cameras, with the focus on reducing the amount of network bandwidth required. A multiple-view dimensionality reduction framework is developed to obtain similar low-dimensional representations for the motion descriptors extracted from multiple cameras. An alternative is also proposed that predicts the action class locally at each camera from the motion descriptors extracted from that view and integrates the individual action decisions into a global decision on the action performed. The reported experiments show that the proposed framework has predictive performance similar to state-of-the-art 3D methods, but with lower computational complexity and lower bandwidth requirements.
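
    To illustrate the regularisation idea, the sketch below (Python, illustrative only) contrasts a group-L1 penalty with the block soft-thresholding update that drives entire parameter groups to zero, which is what yields the feature and model selection effect. The group layout, weights and update rule are assumptions for this sketch; the simple proximal step shown applies to non-overlapping groups, whereas the thesis uses an overlapping variant.

    import numpy as np

    # Illustrative group-L1 regularisation sketch (not the thesis implementation).
    def group_l1_penalty(w, groups, lam=1.0):
        """lam * sum over groups of the Euclidean norm of that parameter block.
        Whole groups are driven to zero together at the optimum."""
        return lam * sum(np.linalg.norm(w[g]) for g in groups)

    def prox_group_l1(w, groups, step, lam=1.0):
        """Block soft-thresholding (proximal operator) for non-overlapping groups."""
        w = w.copy()
        for g in groups:
            norm = np.linalg.norm(w[g])
            shrink = max(0.0, 1.0 - step * lam / (norm + 1e-12))
            w[g] = shrink * w[g]
        return w

    # Example: two parameter groups, the second one nearly inactive
    w = np.array([0.8, -0.5, 0.01, -0.02])
    groups = [np.array([0, 1]), np.array([2, 3])]
    print(group_l1_penalty(w, groups, lam=0.1))
    print(prox_group_l1(w, groups, step=1.0, lam=0.1))   # second group shrinks to zero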

    An evolutionary approach to optimising neural network predictors for passive sonar target tracking

    Object tracking is important in autonomous robotics, military applications, financial time-series forecasting, and mobile systems. In order to track correctly through clutter, algorithms which predict the next value in a time series are essential. The competence of standard machine learning techniques to create bearing prediction estimates was examined. The results show that the classification-based algorithms produce more accurate estimates than the state-of-the-art statistical models. Artificial Neural Networks (ANNs) and K-Nearest Neighbour were used, demonstrating that this technique is not specific to a single classifier. [Continues.]
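
    A minimal sketch of the bearing-prediction framing, assuming a sliding-window formulation and a k-nearest-neighbour regressor from scikit-learn; the window length, synthetic bearing track and model settings are illustrative and not taken from the study.

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    def make_windows(series, window=5):
        """Turn a bearing time series into (past window, next value) training pairs."""
        X, y = [], []
        for i in range(len(series) - window):
            X.append(series[i:i + window])
            y.append(series[i + window])
        return np.array(X), np.array(y)

    # Synthetic, slowly drifting bearing track in degrees, with measurement noise
    rng = np.random.default_rng(0)
    bearings = 90.0 + 0.2 * np.arange(200) + rng.normal(0.0, 0.5, 200)

    X, y = make_windows(bearings, window=5)
    model = KNeighborsRegressor(n_neighbors=3).fit(X[:150], y[:150])
    pred = model.predict(X[150:])
    print("mean absolute error (deg):", np.abs(pred - y[150:]).mean())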

    Architectures for embedded multimodal sensor data fusion systems in the robotics and airport traffic surveillance domains

    Smaller autonomous robots and embedded sensor data fusion systems often suffer from limited computational and hardware resources. Many real-time algorithms for multimodal sensor data fusion cannot be executed on such systems, at least not in real time and sometimes not at all, because of the computational and energy resources they require, a consequence of the architecture of the computational hardware used in these systems. Alternative hardware architectures for generic tracking algorithms could provide a solution to overcome some of these limitations. For tracking and self-localization, sequential Bayesian filters, and in particular particle filters, have been shown to handle a range of tracking problems that could not be solved with other algorithms. However, particle filters have some serious disadvantages when executed on the serial computational architectures used in most systems, while their potential increase in performance is huge because many of the computational steps can be carried out concurrently. A generic hardware solution for particle filters can relieve the central processing unit of the computational load associated with the tracking task.

    The general topic of this research is hardware-software architectures for multimodal sensor data fusion in embedded systems, in particular for tracking, with the goal of developing a high-performance computational architecture for embedded applications in the robotics and airport traffic surveillance domains. The primary concerns of the research are therefore the integration of domain-specific concept support into hardware architectures for low-level multimodal sensor data fusion, in particular embedded systems for tracking with Bayesian filters, and a distributed hardware-software tracking system for airport traffic surveillance and control.

    Runway incursions are occurrences at an aerodrome involving the incorrect presence of an aircraft, vehicle, or person on the protected area of a surface designated for the landing and take-off of aircraft. The growing traffic volume has kept runway incursions on the NTSB's 'Most Wanted' list of safety improvements for over a decade, and recent incidents show that the problem persists. The technological responses deployed in significant numbers are ASDE-X and A-SMGCS. Although these systems are a significant improvement and reduce the frequency of runway incursions, some incursion scenarios are not optimally covered by them, detection of incursion events is not as fast as desired, and they are too expensive for all but the biggest airports. Local, short-range sensors could provide the affordable surveillance accuracy necessary for runway incursion prevention. In this context, the following objectives shall be reached: 1) show the feasibility of runway incursion prevention systems based on localized surveillance; 2) develop a design for a local runway incursion alerting system; 3) realize a prototype of the system design using the developed tracking hardware.
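
    For reference, a minimal bootstrap particle filter for a 1-D tracking problem (Python, with illustrative assumptions for the motion model, noise levels and particle count). It shows why the algorithm maps well onto concurrent hardware: the predict and weighting steps act on every particle independently, and only the resampling step needs coordination across particles.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 1000                                  # number of particles
    particles = rng.normal(0.0, 1.0, N)       # initial 1-D position hypotheses
    weights = np.full(N, 1.0 / N)

    def step(particles, weights, measurement, process_std=0.5, meas_std=1.0):
        # Predict: propagate every particle through the motion model (independent per particle)
        particles = particles + rng.normal(0.0, process_std, particles.size)
        # Update: weight every particle by its measurement likelihood (independent per particle)
        weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
        weights /= weights.sum()
        # Resample (systematic): the one step that needs coordination across particles
        positions = (np.arange(particles.size) + rng.random()) / particles.size
        idx = np.searchsorted(np.cumsum(weights), positions)
        return particles[idx], np.full(particles.size, 1.0 / particles.size)

    for true_pos in [0.2, 0.5, 0.9, 1.4]:
        z = true_pos + rng.normal(0.0, 1.0)
        particles, weights = step(particles, weights, z)
        print(f"measurement {z:+.2f} -> estimate {np.average(particles, weights=weights):+.2f}")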

    Proceedings of the 2010 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    At the annual Joint Workshop of the Fraunhofer IOSB and the Karlsruhe Institute of Technology (KIT), Vision and Fusion Laboratory, the students of both institutions present their latest research findings on image processing, visual inspection, pattern recognition, tracking, SLAM, information fusion, non-myopic planning, world modeling, security in surveillance, interoperability, and human-computer interaction. This book is a collection of 16 reviewed technical reports from the 2010 Joint Workshop.