90 research outputs found

    Unsupervised online activity discovery using temporal behaviour assumption

    We present a novel unsupervised approach, UnADevs, for discovering activity clusters corresponding to periodic and stationary activities in streaming sensor data. Such activities usually last for some time, which our method exploits: it includes mechanisms to regulate sensitivity to brief outliers and can discover multiple clusters overlapping in time to better deal with deviations from nominal behaviour. The method was evaluated on two activity datasets containing a large number of activities (14 and 33, respectively) against online agglomerative clustering and DBSCAN. In a multi-criteria evaluation, our approach achieved significantly better performance on the majority of the measures, with the advantages that: (i) it does not require the number of clusters to be specified beforehand (it is open-ended); (ii) it is online and can find clusters in real time; (iii) it has constant time complexity; and (iv) it is memory efficient, as it does not keep the data samples in memory. Overall, it discovered 616 of the total 717 activities. Because it discovers clusters of activities in real time, it is ideal to work alongside an active learning system.
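    The abstract describes an open-ended, constant-memory online clusterer; the sketch below illustrates that general idea under simple assumptions (Euclidean feature vectors, a fixed joining radius, and a minimum-duration rule to suppress brief outliers). It is not the published UnADevs algorithm; the class name and parameters are illustrative only.

```python
import numpy as np

class OnlineActivityClusterer:
    """Open-ended online clustering sketch: keeps only centroids and counters,
    so memory does not grow with the stream and each update is O(#clusters)."""

    def __init__(self, radius=1.0, min_duration=30):
        self.radius = radius               # max distance for a sample to join a cluster
        self.min_duration = min_duration   # samples a cluster must persist before it is reported
        self.centroids = []                # running means of each discovered cluster
        self.counts = []                   # samples absorbed by each cluster
        self.active = None                 # index of the cluster currently being extended
        self.streak = 0                    # consecutive samples assigned to the active cluster

    def update(self, x):
        x = np.asarray(x, dtype=float)
        # Find the nearest existing centroid, if any.
        if self.centroids:
            d = [np.linalg.norm(x - c) for c in self.centroids]
            j = int(np.argmin(d))
        else:
            d, j = [np.inf], 0

        if not self.centroids or d[j] > self.radius:
            # Open a new candidate cluster for a previously unseen activity.
            self.centroids.append(x.copy())
            self.counts.append(1)
            j = len(self.centroids) - 1
        else:
            # Incremental mean update: no raw samples are stored.
            self.counts[j] += 1
            self.centroids[j] += (x - self.centroids[j]) / self.counts[j]

        # Tolerate brief outliers: only a sustained run of assignments
        # to the same cluster counts as an ongoing activity.
        self.streak = self.streak + 1 if j == self.active else 1
        self.active = j
        if self.streak == self.min_duration:
            return j   # cluster j has lasted long enough to be reported as an activity
        return None
```

    Each reported cluster index could then be handed to an active learning component for labelling, as the abstract suggests.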

    Contributions to improve the technologies supporting unmanned aircraft operations

    International Mention in the doctoral degree. Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The perception of the environment is measured by sensors that have errors, and the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows the limits of deterministic models to be extended to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of the new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system, which provides the engines' propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller comprises a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the controller's ecosystem, so the study of each of them is essential. Among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered and estimated in the absence of observations. State Space Models (SSM) are developed from a set of hypotheses for modeling the world, among them that the models must be linear and Markovian and that their error must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state; there are other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer vision-based precision landing system is studied; then, estimation and filtering problems are addressed from the deep learning approach; finally, classification with deep learning over trajectories is studied. The first case of the collection studies the consequences of error propagation in a machine vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where the error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data. This work covers the two main fields in deep learning, regression and classification, where the error is considered as a probability function of class membership. I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated with the project TEC2017-88048-C2-2-R, which provided me the opportunity to carry out all my PhD activities, including completing an international research internship. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Chair: Antonio Berlanga de Jesús. Secretary: Daniel Arias Medina. Examiner: Alejandro Martínez Cav
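    The abstract leans on the classical role of the Kalman filter: fusing measurements to minimise the error covariance and keeping an estimate alive when observations are missing. The minimal linear-Gaussian filter below illustrates that mechanism; the constant-velocity matrices are placeholders, not models from the thesis.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z=None):
    """One predict/update cycle of a linear-Gaussian Kalman filter.
    If z is None (no observation), only the prediction is applied,
    which is how the filter keeps estimating states between measurements."""
    # Predict: propagate the state and error covariance through the linear model.
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        # Update: fuse the measurement, minimizing the posterior error covariance.
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Placeholder constant-velocity model (illustrative, not from the thesis).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
H = np.array([[1.0, 0.0]])                     # only position is observed
Q = 0.01 * np.eye(2)                           # process noise covariance
R = np.array([[0.5]])                          # measurement noise covariance
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, F, Q, H, R, z=np.array([1.2]))   # with a measurement
x, P = kalman_step(x, P, F, Q, H, R)                      # missing observation: predict only
```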

    Proceedings of the 2009 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    The joint workshop of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, and the Vision and Fusion Laboratory (Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)) has been organized annually since 2005 with the aim of reporting on the latest research and development findings of the doctoral students of both institutions. This book provides a collection of 16 technical reports on the research results presented at the 2009 workshop.

    Energy-efficient Continuous Context Sensing on Mobile Phones

    With the ever increasing adoption of smartphones worldwide, researchers have found the perfect sensor platform to perform context-based research and to prepare context-based services for deployment to end-users. However, continuous context sensing imposes a considerable challenge in balancing the energy consumption of the sensors, the accuracy of the recognized context and its latency. After outlining the common characteristics of continuous sensing systems, we present a detailed overview of the state of the art, from sensor sub-systems to context inference algorithms. Then, we present the three main contributions of this thesis. The first approach is based on the use of local communications to exchange sensing information with neighboring devices. As proximity, location and environmental information can be obtained from nearby smartphones, we design a protocol for synchronizing the exchanges and fairly distributing the sensing tasks. We show both theoretically and experimentally the reduction in energy needed when the devices can collaborate. The second approach focuses on the way mobile sensors are scheduled, optimizing for both accuracy and energy needs. We formulate the optimal sensing problem as a decision problem and propose a two-tier framework for approximating its solution. The first tier is responsible for segmenting the sensor measurement time series by fitting various models. The second tier estimates the optimal sampling, selecting the measurements that contribute the most to the model accuracy. We provide near-optimal heuristics for both tiers and evaluate their performance using environmental sensor data. In the third approach we propose an online algorithm that identifies repeated patterns in time series and produces a compressed symbolic stream: the first symbolic transformation is based on clustering of the raw sensor data, while subsequent iterations encode repetitive sequences of symbols into new symbols. We also define a metric to evaluate symbolization methods with regard to their capacity to preserve the system's states, and we show that the symbolic output can be used directly for various data mining tasks, such as classification or forecasting, with little impact on accuracy but a great reduction in complexity and running time. In addition, we present an example application, assessing the user's exposure to air pollutants, which demonstrates the many opportunities to enhance contextual information when fusing sensor data from different sources. On one side, we gather fine-grained air quality information from mobile sensor deployments and aggregate it with an interpolation model; on the other, we continuously capture the user's context, including location, activity and surrounding air quality. We also present the various models used for fusing all this information in order to produce the exposure estimate.
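    The third contribution, turning a sensor stream into a compressed symbolic stream, can be sketched in two stages: map raw readings to symbols, then repeatedly replace the most frequent adjacent pair of symbols with a new symbol. The sketch below uses simple quantisation bins in place of the clustering step and a Re-Pair-style pair substitution; the function names and bin values are illustrative assumptions, not the thesis's algorithm.

```python
from collections import Counter

def symbolize(values, bins):
    """First pass (sketch): map raw sensor readings to symbols by simple
    quantization; the thesis clusters the raw data, this is just a stand-in."""
    symbols = []
    for v in values:
        for s, (lo, hi) in enumerate(bins):
            if lo <= v < hi:
                symbols.append(s)
                break
    return symbols

def compress_pairs(symbols, min_count=2, next_symbol=100):
    """Second pass (sketch): encode the most frequent adjacent pair of symbols
    as a new symbol; repeating this yields a compressed symbolic stream."""
    pairs = Counter(zip(symbols, symbols[1:]))
    if not pairs:
        return symbols, None
    (a, b), n = pairs.most_common(1)[0]
    if n < min_count:
        return symbols, None
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
            out.append(next_symbol)   # the new symbol stands for the repeated pair (a, b)
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out, (next_symbol, (a, b))

# Toy usage with hypothetical temperature bins.
bins = [(-50, 10), (10, 20), (20, 100)]
stream = symbolize([5, 12, 25, 26, 12, 25, 27, 11], bins)
compressed, rule = compress_pairs(stream)
```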

    Migrating to a real-time distributed parallel simulator architecture

    The South African National Defence Force (SANDF) currently requires a system-of-systems simulation capability for supporting the different phases of a Ground Based Air Defence System (GBADS) acquisition program. A non-distributed, fast-as-possible simulator and its architectural predecessors, developed by the Council for Scientific and Industrial Research (CSIR), were able to provide the required capability during the concept and definition phases of the acquisition life cycle. The non-distributed simulator implements a 100 Hz logical time Discrete Time System Specification (DTSS) in support of the existing models. However, real-time simulation execution has become a prioritised requirement to support the development phase of the acquisition life cycle. This dissertation is about the ongoing migration of the non-distributed simulator to a practical simulation architecture that supports the real-time requirement. The simulator simulates a synthetic environment inhabited by interacting GBAD systems and hostile airborne targets. The non-distributed simulator was parallelised across multiple Commodity Off the Shelf (COTS) PC nodes connected by a commercial Gigabit Ethernet infrastructure. Since model reuse was important for cost effectiveness, it was decided to reuse all the existing models by retaining their 100 Hz logical time DTSSs. The large-scale and event-based High Level Architecture (HLA), an IEEE standard for large-scale distributed simulation interoperability, had been identified as the most suitable distribution and parallelisation technology. However, two categories of risks in directly migrating to the HLA were identified. The choice was made, with motivations, to mitigate the identified risks by developing a specialised custom distributed architecture. In this dissertation, the custom discrete-time, distributed, peer-to-peer, message-passing architecture built by the author in support of the parallelised simulator requirements is described and analysed, and empirical studies of its performance and flexibility are reported. The architecture is shown to be a suitable and cost-effective distributed simulator architecture for supporting a speed-up of three to four times through parallelisation of the 100 Hz logical time DTSS. This distributed architecture is currently in use and working as expected, but results in a parallelisation speed-up ceiling irrespective of the number of distributed processors. In addition, a hybrid discrete-time/discrete-event modelling approach and simulator is proposed that lowers the distributed communication and time synchronisation overhead, to improve on the scalability of the discrete-time simulator, while still economically reusing the existing models. The proposed hybrid architecture was implemented and its real-time performance analysed. The hybrid architecture is found to support a parallelisation speed-up that is not bounded, but linearly related to the number of distributed processors, up to at least the 11 processing nodes available for experimentation. Dissertation (MSc), University of Pretoria, 2009. Computer Science. Unrestricted.
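    The distributed simulator described above advances all peers in lock-step at a 100 Hz logical time rate, exchanging state messages between steps. The toy sketch below illustrates that discrete-time, peer-to-peer stepping pattern, with in-process queues standing in for the Gigabit Ethernet message passing; the node names and dynamics are invented for illustration and are not the dissertation's models.

```python
from collections import deque

STEP_HZ = 100                 # 100 Hz logical time step, as in the abstract
DT = 1.0 / STEP_HZ

class SimNode:
    """One peer in a discrete-time, message-passing simulator sketch:
    each node advances its local model by one fixed logical-time step,
    then broadcasts its state to its peers before the next step."""

    def __init__(self, name, state=0.0):
        self.name = name
        self.state = state
        self.inbox = deque()                    # peer messages received for this step

    def step(self):
        # Consume the peer states published during the previous step.
        while self.inbox:
            _sender, value = self.inbox.popleft()
            self.state += 0.1 * value * DT      # toy coupling, not a real GBADS model
        self.state += DT                        # toy local dynamics
        return (self.name, self.state)          # message to broadcast to peers

def run(nodes, steps):
    # All peers advance in lock-step, one 10 ms logical tick at a time.
    for _ in range(steps):
        outgoing = [node.step() for node in nodes]
        for node in nodes:                      # peer-to-peer broadcast of this tick's states
            for msg in outgoing:
                if msg[0] != node.name:
                    node.inbox.append(msg)

run([SimNode("sensor"), SimNode("launcher"), SimNode("target")], steps=500)
```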

    A Distributed Intelligent Sensing Approach for Environmental Monitoring Applications

    Scientific reports from around the world present us with the undeniable fact that the global ecosystem is undergoing severe change. As this shift accelerates, it is ever more critical that we are able to quantify the local effects of such changes and, further, their implications, from our daily life to the biological processes that put food on our tables. In this thesis, we study the application of sensor network technology to the observation and estimation of highly local phenomena, specifically at a scale between ten and several hundred square meters. Embedding knowledge about the observed process directly into the sensor nodes' behavior via dedicated resource management or control algorithms allows us to deploy dense networks with low power requirements. Ecological systems are notoriously complex. In our work we must thus be highly experimental; it is our highest goal to construct an approach to environmental monitoring that is not only realistic, but practical for real-world use. Our approach is centered on a commercially available sensor network product, aided by an off-the-shelf quadrotor with minimal customization. We validate our approach through a series of experiments spanning simulation all the way to reality, in deployments lasting from days to several months. We motivate the need for local data via two case studies examining physical phenomena. First, employing novel modalities, we study the eclosion of a common agricultural pest. We present our efforts to acquire data that is more local than commonly employed methods, culminating in a six-month deployment in a Swiss apple orchard. Next, we apply an environmental fluid dynamics model to enable the estimation of sensible heat flux using an inexpensive sensor. We integrate the sensor with a wireless sensor network and validate its capabilities in a short-term deployment. Acquiring meaningful data on a local scale requires that we advance the state of the art in multiple aspects. Static sensor networks present a classical tension between resolution, autonomy, and accuracy. We explore the performance of algorithms aimed at providing all three, showing explicitly what is required to implement these approaches for real-world applications in an autonomous deployment under uncontrolled conditions. Eventually, spatial resolution is limited by network density. Such limits may be overcome by the use of mobile sensors. We explore the use of an off-the-shelf quadrotor, equipped with environmental sensors, as an additional element in a system of heterogeneous sensing nodes. Through a series of indoor and outdoor experiments, we quantify the contribution of such a mobile sensor and various strategies for planning its path.

    Real time tracking using nature-inspired algorithms

    This thesis investigates the core difficulties in the tracking field of computer vision. The aim is to develop a suitable tuning-free optimisation strategy so that real-time tracking can be achieved. Population- and multi-solution-based approaches are applied first to analyse convergence behaviours on evolutionary test cases, with the aim of identifying the core misconceptions in the way the search characteristics of particles are defined in the literature. A general perception in the scientific community is that particle-based methods are not suitable for real-time applications. This thesis improves the convergence properties of particles through a novel scale-free correlation approach. By altering the fundamental definition of a particle and by avoiding the nostalgic operations, the tracking was expedited to a rate of 250 FPS. There is a reasonable amount of similarity between the tracking landscapes and the ones generated by three-dimensional evolutionary test cases. Several experimental studies are conducted that compare the performance of the novel optimisation to that observed with the swarming methods. It is therefore concluded that the modified particle behaviour outclassed the traditional approaches by large margins in almost every test scenario.
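    The thesis's scale-free correlation update is not reproduced here, but the kind of swarm search it modifies can be sketched as follows: a population of particles searches each video frame for the position that maximises an appearance score, with the personal-best ("nostalgic") term of canonical PSO removed so that particles are drawn only to the current global best. The objective, coefficients and parameter values below are illustrative assumptions.

```python
import numpy as np

def track_frame(score, x0, n_particles=30, iters=20, spread=10.0, seed=0):
    """Swarm search sketch for the best-matching target position in one frame.
    Unlike canonical PSO there is no personal-best ("nostalgic") term: particles
    are attracted only to the current global best, one way to cut per-iteration
    cost for real-time tracking (not the thesis's scale-free correlation update)."""
    rng = np.random.default_rng(seed)
    pts = x0 + rng.normal(0.0, spread, size=(n_particles, 2))   # init around last known position
    vel = np.zeros_like(pts)
    best = pts[np.argmax([score(p) for p in pts])].copy()
    for _ in range(iters):
        r = rng.random((n_particles, 1))
        vel = 0.7 * vel + 1.5 * r * (best - pts)   # attraction to the global best only
        pts += vel
        scores = np.array([score(p) for p in pts])
        j = int(np.argmax(scores))
        if scores[j] > score(best):
            best = pts[j].copy()
    return best

# Toy appearance score: a peak at an assumed target location (120, 80).
target = np.array([120.0, 80.0])
score = lambda p: -np.linalg.norm(p - target)
print(track_frame(score, x0=np.array([100.0, 100.0])))
```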

    Design and computational aspects of compliant tensegrity robots
