158 research outputs found

    Seamless Positioning and Navigation in Urban Environment

    Get PDF
    The abstract is in the attachment.

    Pixel-Level Deep Multi-Dimensional Embeddings for Homogeneous Multiple Object Tracking

    Get PDF
    The goal of Multiple Object Tracking (MOT) is to locate multiple objects and keep track of their individual identities and trajectories given a sequence of (video) frames. A popular approach to MOT is tracking by detection, which consists of two processing components: detection (identification of objects of interest in individual frames) and data association (connecting data from multiple frames). This work addresses the detection component by introducing a method based on semantic instance segmentation, i.e., assigning labels to all visible pixels such that they are unique among different instances. Modern tracking methods are often built around Convolutional Neural Networks (CNNs) and additional, explicitly-defined post-processing steps. This work introduces two detection methods that incorporate multi-dimensional embeddings. We train deep CNNs to produce easily-clusterable embeddings for semantic instance segmentation and to enable object detection through pose estimation. The use of embeddings allows the method to identify per-pixel instance membership for both tasks. Our method specifically targets applications that require long-term tracking of homogeneous targets using a stationary camera. Furthermore, the method was developed and evaluated on a livestock tracking application, which presents exceptional challenges that generalized tracking methods are not equipped to solve, largely because contemporary datasets for multiple object tracking lack properties that are specific to livestock environments: a high degree of visual similarity between targets, complex physical interactions, long-term inter-object occlusions, and a fixed-cardinality set of targets. For these reasons, our method is developed and tested with the livestock application in mind; specifically, group-housed pigs are evaluated in this work. On a publicly available dataset, our method reliably detects pigs in a group-housed environment with 99% precision and 95% recall using pose estimation, and achieves 80% accuracy when using semantic instance segmentation at a 50% IoU threshold. Results demonstrate the method's ability to achieve consistent identification and tracking of group-housed livestock, even in cases where the targets are occluded and despite the fact that they lack uniquely identifying features. The pixel-level embeddings used by the proposed method are thoroughly evaluated in order to demonstrate their properties and behaviors when applied to real data. Adviser: Lance C. Pérez.
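    A minimal sketch of the clustering step implied by the abstract may help: per-pixel embeddings for foreground pixels are grouped so that each cluster becomes one object instance. The network, variable names, and the choice of DBSCAN with these parameters are illustrative assumptions, not the dissertation's actual method or code.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def embeddings_to_instances(embeddings, foreground, eps=0.5, min_samples=50):
        """Cluster foreground pixel embeddings; return an H x W map of instance ids (-1 = background/noise)."""
        h, w, d = embeddings.shape                     # embeddings: H x W x D array from a trained CNN (assumed)
        labels = np.full((h, w), -1, dtype=np.int32)
        fg = np.argwhere(foreground)                   # (N, 2) row/col indices of foreground pixels
        if len(fg) == 0:
            return labels
        vectors = embeddings[fg[:, 0], fg[:, 1]]       # (N, D) embedding vectors to cluster
        clusters = DBSCAN(eps=eps, min_samples=min_samples).fit(vectors)
        labels[fg[:, 0], fg[:, 1]] = clusters.labels_  # each cluster id marks one object instance
        return labels

    In a tracking-by-detection pipeline such as the one described above, each resulting cluster would serve as a single detection passed on to the data-association stage.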

    Measuring Behavior 2018 Conference Proceedings

    Get PDF
    These proceedings contain the papers presented at Measuring Behavior 2018, the 11th International Conference on Methods and Techniques in Behavioral Research. The conference was organised by Manchester Metropolitan University, in collaboration with Noldus Information Technology, and was held during June 5th – 8th, 2018 in Manchester, UK. Building on the format that has emerged from previous meetings, we hosted a fascinating program about a wide variety of methodological aspects of the behavioral sciences. Scientific presentations were scheduled into seven general oral sessions and fifteen symposia, which covered a topical spread from rodent to human behavior. We had fourteen demonstrations, in which academics and companies demonstrated their latest prototypes. The scientific program also contained three workshops, one tutorial and a number of scientific discussion sessions. We also had scientific tours of our facilities at Manchester Metropolitan University and the nearby British Cycling Velodrome. We hope these proceedings cater to many of your interests, and we look forward to seeing and hearing more of your contributions.

    Implementation of Sensors and Artificial Intelligence for Environmental Hazards Assessment in Urban, Agriculture and Forestry Systems

    Get PDF
    The implementation of artificial intelligence (AI), together with robotics, sensors, sensor networks, the Internet of Things (IoT), and machine/deep learning modeling, has reached the forefront of research activities, moving towards the goal of increasing efficiency in a multitude of applications and purposes related to environmental sciences. The development and deployment of AI tools requires specific considerations, approaches, and methodologies for their effective and accurate application. This Special Issue focused on applications of AI to environmental systems related to hazard assessment in urban, agricultural, and forestry areas.

    Gesture Object Interfaces to enable a world of multiple projections

    Get PDF
    Thesis (Ph.D.) -- Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. [209]-226).
    Tangible Media as an area has not explored how the tangible handle is more than a marker or place-holder for digital data. Tangible Media can do more. It has the power to materialize and redefine our conception of space and content during the creative process. It can vary from an abstract token that represents a movie to an anthropomorphic plush that reflects the behavior of a sibling during play. My work begins by extending tangible concepts of representation and token-based interactions into movie editing and play scenarios. Through several design iterations and research studies, I establish tangible technologies to drive visual and oral perspectives along with finalized creative works, all during a child's play and exploration. I define the framework, Gesture Object Interfaces, expanding on the fields of Tangible User Interaction and Gesture Recognition. Gesture is a mechanism that can reinforce or create the anthropomorphism of an object. It can give the object life. A Gesture Object is an object in hand while doing anthropomorphized gestures. Gesture Object Interfaces engender new visual and narrative perspectives as part of automatic film assembly during children's play. I generated a suite of automatic film assembly tools accessible to diverse users. The tools that I designed allow for capture, editing and performing to be completely indistinguishable from one another. Gestures integrated with objects become a coherent interface on top of natural play. I built a distributed, modular camera environment and gesture interaction to control that environment. The goal of these new technologies is to motivate children to take new visual and narrative perspectives. In this dissertation I present four tangible platforms that I created as alternatives to the usual fragmented and sequential capturing, editing and performing of narratives available to users of current storytelling tools. I developed Play it by Eye, Frame it by Hand, a new generation of narrative tools that shift the frame of reference from the eye to the hand, from the viewpoint (where the eye is) to the standpoint (where the hand is). In Play it by Eye, Frame it by Hand environments, children discover atypical perspectives through the lens of everyday objects. When using Picture This!, children imagine how an object would appear relative to the viewpoint of the toy. They iterate between trying and correcting in a world of multiple perspectives. The results are entirely new genres of child-created films, where children finally capture the cherished visual idioms of action and drama. I report my design process over the course of four tangible research projects that I evaluate during qualitative observations with over one hundred 4- to 14-year-old users. Based on these research findings, I propose a class of moviemaking tools that transform the way users interpret the world visually, and through storytelling.
    by Catherine Nicole Vaucelle, Ph.D.

    Augmented Reality

    Get PDF
    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and it complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, although it is not free from human factors and other restrictions. AR applications also demand less time and effort, since there is no need to construct an entire virtual scene and environment. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security and surveillance. The second section deals with AR in medical and biological applications and in the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    How and Why to Read and Create Children's Digital Books

    Get PDF
    How and Why to Read and Create Children's Digital Books outlines effective ways of using digital books in early years and primary classrooms, and specifies the educational potential of using digital books and apps in physical spaces and virtual communities. With a particular focus on apps and personalised reading, Natalia Kucirkova combines theory and practice to argue that personalised reading is only truly personalised when it is created or co-created by reading communities. The book is divided into two parts: Part I suggests criteria to evaluate the educational quality of digital books and practical strategies for their use in the classroom. Specific attention is paid to the ways in which digital books can support individual children’s strengths and difficulties, digital literacies, language and communication skills. Part II explores digital books created by children, their caregivers, teachers and librarians, and Kucirkova also offers insights into how smart toys, tangibles and augmented/virtual reality tools can enrich children’s reading for pleasure.

    Measuring spatial and temporal features of physical interaction dynamics in the workplace

    Get PDF
    Human behavior unfolding through organisational life is a topic tackled from different disciplines, with emphasis on different aspects and with an overwhelming reliance on humans as observation instruments. Advances in pervasive technologies allow, for the first time, location and time information about behavior to be captured and recorded in real time, accurately, continuously and for multiparty events. This thesis examines the question: can these technologies provide insights into human behavior that current methods cannot? The way people use the buildings they work in, and relate and physically interact with others through time, is information that designers and managers use to create better buildings and better organisations. Current methods' depiction of these issues - fairly static, discrete and short term, mostly dyadic - pales in comparison with the potential offered by location and time technologies. Or does it? In an organisation where fifty-one workers each carried a tag sending location and time information to one such system for six weeks, two parallel studies were conducted: one using current manual and other methods, and the other using the automated method developed in this thesis, both aiming to understand spatial and temporal characteristics of interpersonal behavior in the workplace. The new method is based on the concepts and measures of personal space and interaction distance, which are used to define the mathematical boundaries of the behaviors under study: interaction and solo events. Outcome information from both methods is used to test hypotheses on aspects of the spatial and temporal nature of knowledge work affected by interpersonal dynamics. This thesis demonstrates that the data obtained through the technology can be converted into rich information on aspects of workplace interaction dynamics, offering unprecedented insights for designers and managers to produce better buildings and better organisations.
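    As a rough illustration of using interaction distance to define the mathematical boundary of an interaction event, the sketch below flags a dyadic interaction whenever two location tags stay within a distance threshold for a minimum duration. The threshold, duration, and function names are assumptions made for illustration and are not the values or code used in the thesis.

    from itertools import combinations
    from math import hypot

    INTERACTION_DISTANCE_M = 1.5   # assumed upper bound on interaction distance
    MIN_DURATION_S = 60            # assumed minimum length of an interaction event

    def interaction_events(tracks, sample_period_s=1.0):
        """tracks: {person_id: [(x, y), ...]} sampled on a common clock.
        Returns a list of (person_a, person_b, start_index, end_index) events."""
        events = []
        for a, b in combinations(sorted(tracks), 2):
            close = [hypot(xa - xb, ya - yb) <= INTERACTION_DISTANCE_M
                     for (xa, ya), (xb, yb) in zip(tracks[a], tracks[b])]
            start = None
            for i, flag in enumerate(close + [False]):   # trailing False flushes a run that ends at the last sample
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if (i - start) * sample_period_s >= MIN_DURATION_S:
                        events.append((a, b, start, i - 1))
                    start = None
        return events

    Under the same assumptions, samples in which a worker is not within interaction distance of anyone would be labelled as part of a solo event.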
