56 research outputs found

    Tribal ecopoesis in the Eastern United States

    This study examines the place-making and cultural invention of newly recognized federal Indian tribes (NRTs) in the Eastern United States. This new place-making is an effect of a modernist technical-rational bureaucracy and, paradoxically, of a countercultural and self-inventive turn away from the sterility of that technocracy. Contemporary federal policy encourages and enables Americans of Indian descent to organize themselves as tribes and to spatialize new identities as reservation Indians. Once recognized as federal tribes, however, NRTs are bound by the options for cultural revitalization that accompany the federalization of their identities. That is, this revitalization follows the cultural lines of Indian exceptionalism, a romanticized set of generalizations about Indian history that largely obscures who NRTs actually are. Indian exceptionalism shapes the place-based group identity emerging on NRT reservations. This new identity is an Eastern version of the Western identity forged in the bureaucratic reservation system. The Western reservation-based identity makes a poor model for NRTs, who have no historical experience as federal tribes. NRT histories reveal people with unique cultural qualities, but this uniqueness is not expressed on the landscapes of new reservations.

    Automatic detection of falls and fainting

    Healthcare environments have always been considered an important scenario in which to apply new technologies to improve residents' and employees' conditions, solve problems and facilitate the performance of tasks. In particular, sensors based on user movement interaction make it possible to handle critical situations that must be addressed immediately, such as falls and fainting spells in residential care homes, since ensuring that every resident is visually monitored by at least one employee at all times is quite complicated. In this paper, we present a ubiquitous and context-aware system focused on geriatrics and residential care homes, although it could be applied to any other healthcare centre. The system has been designed to automatically detect falls and fainting spells, alerting the most appropriate employees to address the emergency. To that end, the system relies on movement interaction through a set of Kinect devices that identify the position of a person. These devices pose several development problems that the authors had to deal with, including camera location, the detection of head movements, and the detection of people in a horizontal position. The proposed system monitors each resident's posture through a notification and warning procedure: when an anomalous situation is detected, the system analyses the resident's posture and, if necessary, the most suitable employee is warned to react urgently. Ubiquity and context-awareness are essential features, since the proposed system has to know where every employee is and what they are doing at any time. Finally, we present the outcomes of a usability evaluation of the system based on ISO 9126-4. We would like to acknowledge the project CICYT TIN2011-27767-C02-01 from the Spanish Ministerio de Ciencia e Innovación and the Regional Government Junta de Comunidades de Castilla-La Mancha (projects PPII10-0300-4174 and PII2C09-0185-1030) for partially funding this work.
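    As a rough illustration of the kind of posture analysis such a system performs, the sketch below flags a person lying down from Kinect-style skeleton joints and raises an alert only when the posture persists. It is a minimal sketch: the joint names, thresholds and persistence window are hypothetical, not the rules used in the paper.

```python
# Illustrative fall-detection heuristic over skeleton joints (hypothetical
# joint names and thresholds; not the paper's actual rules).
# Each joints dict maps a joint name to (x, y, z) camera-space metres, y up.

def is_horizontal(joints, max_head_height=0.5, max_vertical_spread=0.4):
    """Flag a lying-down posture: head near the floor, and head-to-feet
    vertical spread far smaller than in an upright pose."""
    head_y = joints["head"][1]
    feet_y = min(joints["foot_left"][1], joints["foot_right"][1])
    return head_y < max_head_height and (head_y - feet_y) < max_vertical_spread

def detect_fall(pose_history, window=30):
    """Alert only if the horizontal posture persists over a window of frames,
    filtering out transient detections such as bending down."""
    recent = pose_history[-window:]
    return len(recent) == window and all(is_horizontal(p) for p in recent)
```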

    Learning to understand the world in 3D

    3D computer vision is a research topic attracting ever-increasing attention thanks to the increasingly widespread availability of off-the-shelf depth sensors and large-scale 3D datasets. The main purpose of 3D computer vision is to understand the geometry of objects in order to interact with them. Recently, the success of deep neural networks for processing images has fostered a data-driven approach to solving 3D vision problems. Inspired by the potential of this field, in this thesis we will address two main problems: (a) how to leverage machine/deep learning techniques to build a robust and effective pipeline for establishing correspondences between surfaces, and (b) how to obtain a reliable 3D reconstruction of an object from RGB images sparsely acquired from different points of view by means of deep neural networks. At the heart of many 3D computer vision applications lies surface matching, an effective paradigm aimed at finding correspondences between points belonging to different shapes. To this end, it is essential to first identify the characteristic points of an object and then create an adequate representation of them. We will refer to these two steps as keypoint detection and keypoint description, respectively. As the first contribution (a) of this Ph.D. thesis, we will propose data-driven solutions to tackle the problems of keypoint detection and description. As a further direction of research, we investigate the problem of 3D object reconstruction from RGB data only (b). While in the past this problem was addressed by SLAM and Structure from Motion (SfM) techniques, this has changed radically in recent years thanks to the dawn of deep learning. Following this trend, we will introduce a novel approach that combines traditional computer vision techniques with deep learning to perform viewpoint-variant 3D object reconstruction from non-overlapping RGB views.
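    For context on the two steps just named, the following sketch runs a classical, hand-crafted baseline for keypoint detection (ISS) and description (FPFH) with Open3D; the thesis proposes learned, data-driven replacements for exactly these stages. The file name and search radii are placeholders.

```python
import open3d as o3d

# Classical surface-matching baseline: ISS keypoint detection followed by
# FPFH keypoint description (file path and radii are placeholders).
pcd = o3d.io.read_point_cloud("model.ply")
pcd.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Step 1: keypoint detection (Intrinsic Shape Signatures).
keypoints = o3d.geometry.keypoint.compute_iss_keypoints(pcd)

# Step 2: keypoint description (Fast Point Feature Histograms, 33-D).
keypoints.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
fpfh = o3d.pipelines.registration.compute_fpfh_feature(
    keypoints, o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=100))

# Correspondences between two shapes can then be found by nearest-neighbour
# search in the 33-D descriptor space.
print(fpfh.data.shape)  # (33, n_keypoints)
```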

    Reconstruction and recognition of confusable models using three-dimensional perception

    Perception is one of the key topics in robotics research. It concerns the processing of external sensor data and its interpretation. The need for fully autonomous robots makes it crucial to help them perform tasks more reliably, flexibly and efficiently. As these platforms obtain more refined manipulation capabilities, they also require expressive and comprehensive environment models: for manipulation and affordance purposes, their models have to include every object present in the world, together with its location, pose, shape and other aspects. The aim of this dissertation is to provide solutions to several of the challenges that arise in the object-grasping problem, with the goal of improving the autonomy of the mobile manipulator robot MANFRED-2. Through the analysis and interpretation of 3D perception, this thesis first covers the localization of supporting planes in the scene. Since the environment contains many other things apart from the planar surface, the problem in cluttered scenarios is solved by means of Differential Evolution, a particle-based evolutionary algorithm that evolves over time towards the solution yielding the lowest value of the cost function. Since the final purpose of this thesis is to provide valuable information for grasping applications, a complete model reconstructor has been developed. The proposed method offers many features, such as robustness against abrupt rotations, multi-dimensional optimization, feature extensibility, compatibility with other scan-matching techniques, management of uncertain information, and an initialization process to reduce convergence time. It has been designed using an evolutionary scan-matching optimizer that takes into account surface features of the object, its global form, and texture and color information. The last challenge tackled concerns the recognition problem. In order to provide the robot with useful information about the environment, a meta-classifier that efficiently discriminates among the observed objects has been implemented. It is capable of distinguishing between confusable objects, such as mugs or dishes with similar shapes but different sizes or colors. The contributions presented in this thesis have been fully implemented and empirically evaluated on the platform. A continuous grasping pipeline has been developed, covering everything from perception to grasp planning and including visual object recognition for confusable objects. For this purpose, an indoor environment with several objects on a table is set up near the robot. Items are recognized against a database and, if one is chosen, the robot calculates how to grasp it, taking into account the kinematic restrictions associated with its anthropomorphic hand and the 3D model of the particular object.
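    As a minimal sketch of how Differential Evolution can localize a supporting plane in a cluttered cloud, the example below evolves plane parameters to maximize the inlier count using scipy; the cost function, parameterization and bounds are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative supporting-plane search with Differential Evolution.
# A plane is parameterized as (theta, phi, d): unit normal in spherical
# coordinates plus signed distance to the origin.

points = np.random.rand(5000, 3)  # placeholder for a real point cloud

def plane_cost(params, pts, inlier_dist=0.01):
    theta, phi, d = params
    normal = np.array([np.sin(theta) * np.cos(phi),
                       np.sin(theta) * np.sin(phi),
                       np.cos(theta)])
    dist = np.abs(pts @ normal - d)
    # Negative inlier count: DE minimizes, so more inliers = lower cost.
    return -np.count_nonzero(dist < inlier_dist)

result = differential_evolution(
    plane_cost, bounds=[(0, np.pi), (0, 2 * np.pi), (-2.0, 2.0)],
    args=(points,), seed=0, maxiter=100, popsize=20)
print("plane parameters:", result.x, "inliers:", -result.fun)
```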

    Scene understanding by robotic interactive perception

    This thesis presents a novel and generic visual architecture for scene understanding by robotic interactive perception, fully integrated into autonomous systems performing object perception and manipulation tasks. The proposed visual architecture uses interaction with the scene in order to improve scene understanding substantially over non-interactive models. Specifically, this thesis presents two experimental validations of an autonomous system interacting with the scene: firstly, an autonomous gaze control model is investigated, where the vision sensor directs its gaze to satisfy a scene exploration task; secondly, autonomous interactive perception is investigated, where objects in the scene are repositioned by robotic manipulation. The proposed visual architecture for scene understanding involving perception and manipulation tasks has four components: 1) a reliable vision system; 2) camera and hand-eye calibration to integrate the vision system into an autonomous robot's kinematic frame chain; 3) a visual model performing perception tasks and providing the knowledge required for interaction with the scene; and finally, 4) a manipulation model which, using knowledge received from the perception model, chooses an appropriate action (from a set of simple actions) to satisfy a manipulation task. This thesis presents contributions for each of the aforementioned components. Firstly, a portable active binocular robot vision architecture that integrates a number of visual behaviours is presented. This active vision architecture has the ability to verge, localise, recognise and simultaneously identify multiple target object instances. The portability and functional accuracy of the proposed vision architecture are demonstrated by carrying out both qualitative and comparative analyses using different robot hardware configurations, feature extraction techniques and scene perspectives. Secondly, a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot is described. For this purpose, the forward kinematic model of the active robot head is derived and the methodology for calibrating and integrating the robot head is described in detail. A rigid calibration methodology has been implemented to provide a closed-form hand-to-eye calibration chain, and this has been extended with a mechanism that allows the camera external parameters to be updated dynamically for optimal 3D reconstruction, to meet the requirements of robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that the robot head achieves an overall accuracy of less than 0.3 millimetres when recovering the 3D structure of a scene. In addition, a comparative study between current RGB-D cameras and our active stereo head within two dual-arm robotic test-beds is reported, demonstrating the accuracy and portability of our proposed methodology. Thirdly, this thesis proposes a visual perception model for the task of category-wise object sorting, based on Gaussian Process (GP) classification, that is capable of recognising object categories from point cloud data. In this approach, Fast Point Feature Histogram (FPFH) features are extracted from point clouds to describe the local 3D shape of objects, and a Bag-of-Words coding method is used to obtain an object-level vocabulary representation.
    Multi-class Gaussian Process classification is employed to provide a probability estimate of the identity of the object, and it serves the key role of modelling perception confidence in the interactive perception cycle. The interaction stage is responsible for invoking the appropriate action skills as required to confirm the identity of an observed object with high confidence as a result of executing multiple perception-action cycles. The recognition accuracy of the proposed perception model has been validated on simulated input data using both Support Vector Machine (SVM) and GP based multi-class classifiers. The results obtained during this investigation demonstrate that, by using a GP-based classifier, it is possible to obtain true-positive classification rates of up to 80%. Experimental validation of the above semi-autonomous object sorting system shows that the proposed GP based interactive sorting approach outperforms random sorting by up to 30% when applied to scenes comprising configurations of household objects. Finally, a fully autonomous visual architecture is presented that accommodates the manipulation skills an autonomous system needs to interact with the scene through object manipulation. This architecture consists mainly of two stages: 1) a perception stage, which is a modified version of the aforementioned visual interaction model, and 2) an interaction stage, which performs a set of ad-hoc actions relying on the information received from the perception stage. More specifically, the interaction stage reasons over the information (class label and associated probabilistic confidence score) received from the perception stage to choose one of two actions: 1) if an object class has been identified with high confidence, the object is removed from the scene and placed in the designated basket/bin for that class; 2) if an object class has been identified with lower probabilistic confidence then, inspired by the human behaviour of inspecting doubtful objects, an action is chosen to investigate that object further and confirm its identity by capturing more images from different views in isolation. The perception stage then processes these views, so multiple perception-action/interaction cycles take place. From an application perspective, the task of autonomous category-based object sorting is performed and the experimental design for the task is described in detail.
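    A minimal sketch of the perception stage described above, assuming Bag-of-Words histograms over FPFH features have already been computed; the placeholder data, vocabulary size and confidence threshold are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Illustrative multi-class GP classifier over Bag-of-Words histograms of
# FPFH features (random placeholder data; vocabulary size of 100 assumed).
rng = np.random.default_rng(0)
X_train = rng.random((200, 100))        # one BoW histogram per object view
y_train = rng.integers(0, 5, size=200)  # five object categories

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X_train, y_train)

# The class probabilities serve as the perception-confidence signal driving
# the interaction stage: sort on high confidence, inspect further otherwise.
x_new = rng.random((1, 100))
proba = gpc.predict_proba(x_new)[0]
label, confidence = int(np.argmax(proba)), float(np.max(proba))
CONFIDENCE_THRESHOLD = 0.8  # hypothetical threshold
action = "sort_into_bin" if confidence >= CONFIDENCE_THRESHOLD else "inspect"
print(label, confidence, action)
```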

    Optimizing Deployment and Maintenance of Indoor Localization Systems

    Pervasive computing envisions seamless and distraction-free support for tasks by means of context-aware applications. Context can be defined as information that characterizes the situation of an entity, such as a person or object, that is relevant to the behaviour of an application. A context-aware application is one that can adapt its functionality based on changes in the context of the user or entity. Location is an important piece of context, because a lot of information can be inferred about the situation of an entity just by knowing where it is; this makes location very useful for many context-aware applications. In outdoor scenarios, the Global Positioning System (GPS) is used for acquiring location information. However, GPS signals are relatively weak and do not penetrate buildings well, rendering them less than suitable for location estimation in indoor environments. Yet people spend most of their time indoors, so location systems that work in these scenarios are necessary. In the last two decades, there has been a great deal of research into and development of indoor localization systems. A wide range of technologies has been applied in these systems, from vision-based and sound-based systems to Radio Frequency (RF) signal based systems. In a typical indoor localization system deployment, an indoor environment is set up with different signal sources and the distribution of the signals in the environment is then recorded in a process known as calibration. The signal distribution, also known as a radio map, is later employed to estimate the location of users by matching their signal observations to the radio map. However, not all signal technologies and approaches provide the right balance of accuracy, precision and cost for most real-world deployment scenarios. Of the different RF signal technologies, WLAN and Bluetooth based indoor localization systems are the most common, due to the ubiquity of the signal deployments for communication purposes and the accessibility of compatible mobile computing devices to the users of the system. Many indoor localization systems have been developed under laboratory conditions or with only small-scale, controlled indoor areas taken into account. This poses a challenge when transposing these systems to real-world indoor environments, which can be rather large and dynamic, thereby significantly raising the cost and effort of the deployment and limiting its practicality. Furthermore, because indoor environments are rarely static, changes such as moving furniture or altering the building layout can adversely impact the performance of the deployed localization system. The system then needs to be recalibrated to the new environmental conditions in order to achieve and maintain optimal localization performance. If this happens regularly, it can significantly increase the cost and effort of maintaining the indoor localization system over time. In order to address these issues, this dissertation develops methods for more efficient deployment and maintenance of indoor localization systems. A localization system deployment consists of three main phases: setup and calibration, localization, and maintenance.
    The main contributions of this dissertation are proposed optimizations to the different stages of the localization system deployment lifecycle. First, the focus is on optimizing the setup and calibration of fingerprinting-based indoor localization systems. A new method for dense and efficient calibration of indoor environments is proposed, requiring minimal effort and consequently reduced cost. During calibration, the signal distribution in the indoor environment is distorted by the presence of the person doing the calibration, which leads to a radio map that is not an accurate representation of the environment. Therefore, a model for WLAN signal attenuation by the human body is proposed in this dissertation. The model captures the pattern of change in the signal due to the presence of the human body in the signal path. By applying the model, we can compensate for the attenuation caused by the person and thereby generate a more accurate map of the signal distribution in the environment; a more precise signal distribution leads to better precision during location estimation. Second, some optimizations to the localization phase are presented. The dense fingerprints of the environment created during the setup phase are used to generate location estimates by matching the captured signal distribution with the pre-recorded distribution in the environment. However, the location estimates can be further refined given additional context information. This approach makes use of sensor fusion and ambient intelligence to improve the accuracy of the location estimates. The ambient intelligence can be obtained from smart environments, such as smart homes or offices, which trigger events that can be applied to location estimation. These optimizations are especially useful for indoor tracking applications where continuous location estimation and accurate, high-frequency location updates are critical. Lastly, two methods for autonomous recalibration of localization systems are presented as optimizations to the maintenance phase of the deployment. One approach uses the localization system infrastructure to monitor the signal characteristic distribution in the environment; the results of this monitoring are used by the system to recalibrate the signal distribution map as needed. The second approach evaluates the Received Signal Strength Indicator (RSSI) of the signals as measured by the devices using the localization system. An algorithm for detecting signal displacements and changes in the distribution is proposed, as well as an approach for subsequently applying the measurements to update the radio map. By constantly self-evaluating and recalibrating, the system can be maintained over time by limiting the degradation of its localization performance, and it is demonstrated that the proposed approach achieves results comparable to those obtained by manual calibration. The above optimizations to the different stages of the localization deployment lifecycle reduce the effort and cost of running the system while increasing its accuracy and reliability, and they can be applied individually or together depending on the scenario and the localization system considered.
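    As a rough sketch of the fingerprinting match at the core of such systems, the example below estimates position by k-nearest neighbours in RSSI space against a pre-recorded radio map; the map values and coordinates are invented for illustration and do not reflect the dissertation's specific algorithms.

```python
import numpy as np

# Generic WLAN fingerprinting by k-nearest neighbours in RSSI space.
# radio_map: one RSSI vector (dBm, one entry per access point) per
# calibration position; values here are illustrative placeholders.
radio_map = {
    (0.0, 0.0): [-45, -60, -72],
    (5.0, 0.0): [-52, -48, -70],
    (0.0, 5.0): [-61, -55, -50],
    (5.0, 5.0): [-66, -49, -47],
}

def locate(observed_rssi, k=3):
    """Estimate position as the mean of the k calibration points whose
    stored fingerprints are closest to the observed RSSI vector."""
    positions = np.array(list(radio_map.keys()))
    fingerprints = np.array(list(radio_map.values()))
    dists = np.linalg.norm(fingerprints - np.asarray(observed_rssi), axis=1)
    nearest = np.argsort(dists)[:k]
    return positions[nearest].mean(axis=0)

print(locate([-50, -50, -65]))  # hypothetical live measurement
```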

    Object recognition for an autonomous wheelchair equipped with a RGB-D camera

    This thesis has been carried out within a project at the AR Lab (Autonomous Robot Laboratory) of Shanghai Jiao Tong University and the IAS-Lab (Intelligent Autonomous Systems Lab) of the University of Padua. The project aims to create a system to recognize and localize multiple object classes for an autonomous wheelchair called JiaoLong and, more generally, for a mobile robot. The main objective of the thesis was the creation of an object recognition and localization system for indoor environments using an RGB-D sensor. The approach we followed is based on recognizing the object with a 2D algorithm and using 3D information to identify its location and size. This helps to obtain robust performance in the recognition step and accurate estimates in the localization step, so that the robot can change its behavior according to the class and location of the object in the room. The thesis is mainly based on two aspects: • the creation of a 2D module to recognize and detect the object in an RGB image; • the creation of a 3D module to filter the point cloud and estimate the pose and size of the object. We used the Bag of Features algorithm to perform the recognition of objects and a variation of the Constellation Method algorithm for the detection; 3D data are processed with several filtering algorithms leading to a 3D analysis of the object, and the intrinsic information of the point cloud is then used for pose and size estimation. We also analyze the performance of the algorithm and propose improvements aimed at increasing the overall performance of the system, as well as research directions that this project could lead to.
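    A minimal sketch of what the 3D module might look like with Open3D, assuming the point cloud has already been cropped to a 2D detection; the file name and filter parameters are placeholders, not the thesis's implementation.

```python
import open3d as o3d

# Illustrative 3D module: filter the point cloud behind a 2D detection,
# then estimate the object's size and position (parameters are placeholders).
pcd = o3d.io.read_point_cloud("detection_crop.pcd")

# Downsample and remove sparse outliers left by the depth sensor.
pcd = pcd.voxel_down_sample(voxel_size=0.005)
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Size and position from the filtered cloud's axis-aligned bounding box.
box = pcd.get_axis_aligned_bounding_box()
size = box.get_extent()    # width, height, depth in metres
center = box.get_center()  # object position in the camera frame
print("size:", size, "center:", center)
```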

    On the Recognition of Emotion from Physiological Data

    This work encompasses several objectives, but is primarily concerned with an experiment in which 33 participants were shown 32 slides in order to create 'weakly induced emotions'. Recordings of the participants' physiological state were taken, as well as self-reports of their emotional state. We then used an assortment of classifiers to predict emotional state from the recorded physiological signals, a process known as Physiological Pattern Recognition (PPR). We investigated techniques for recording, processing and extracting features from six different physiological signals: electrocardiogram (ECG), blood volume pulse (BVP), galvanic skin response (GSR), electromyography (EMG) of the corrugator muscle, skin temperature of the finger, and respiratory rate. Improvements to the state of PPR emotion detection were made by allowing 9 different weakly induced emotional states to be detected at nearly 65% accuracy, an improvement in the number of states readily detectable. The work presents many investigations into numerical feature extraction from physiological signals and dedicates a chapter to collating and trialling facial electromyography techniques. We also created a hardware device to collect participants' self-reported emotional states, which led to several improvements in experimental procedure.
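    As a rough sketch of the PPR pipeline described above, the example below extracts simple time-domain features per signal window and trains a standard classifier; the feature set, classifier choice and placeholder data are illustrative assumptions, not the thesis's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative PPR pipeline: statistical features per signal window, then a
# standard classifier (not the thesis's exact features or classifiers).

def window_features(signal):
    """Common time-domain features used in physiological pattern recognition:
    mean, std, min, max, and mean absolute first difference."""
    diff = np.diff(signal)
    return np.array([signal.mean(), signal.std(),
                     signal.min(), signal.max(),
                     np.abs(diff).mean()])

rng = np.random.default_rng(0)
# Placeholder data: 300 windows x 6 signals (ECG, BVP, GSR, EMG, temp, resp).
windows = rng.random((300, 6, 512))
labels = rng.integers(0, 9, size=300)  # nine weakly induced emotional states

X = np.array([np.concatenate([window_features(sig) for sig in w])
              for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```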

    Toward Effective Physical Human-Robot Interaction

    In recent years, robotics technology has matured significantly, producing robots that are able to operate in unstructured environments such as domestic settings, offices, hospitals and other human-inhabited locations. In this context, the interaction and cooperation between humans and robots has become an important and challenging aspect of robot development. Among the various kinds of possible interactions, in this Ph.D. thesis I am particularly interested in physical human-robot interaction (pHRI). In order to study how a robot can successfully engage in physical interaction with people, and which factors are crucial during this kind of interaction, I investigated how humans and robots can hand over objects to each other. To study this specific interactive task, I developed two robotic prototypes and conducted human-robot user studies. Although various aspects of human-robot handovers have been deeply investigated in the state of the art, during my studies I focused on three issues that have rarely been investigated so far: human presence and motion analysis during the interaction, in order to infer non-verbal communication cues and to synchronize the robot's actions with the human's motion; the development and evaluation of human-aware proactive robot behaviors that enable robots to behave actively in the proximity of the human body, in order to negotiate the handover location and perform the transfer of the object; and consideration of objects' grasp affordances during the handover, in order to make the interaction more comfortable for the human.

    Integrated visual perception architecture for robotic clothes perception and manipulation

    This thesis proposes a generic visual perception architecture for robotic clothes perception and manipulation. The proposed architecture is fully integrated with a stereo vision system and a dual-arm robot, and is able to perform a number of autonomous laundering tasks. Clothes perception and manipulation is a novel research topic in robotics that has experienced rapid development in recent years. Compared to the task of perceiving and manipulating rigid objects, clothes perception and manipulation poses a greater challenge for two reasons: firstly, deformable clothing requires precise (high-acuity) visual perception and dexterous manipulation; secondly, as clothing approximates a non-rigid 2-manifold in 3-space that can adopt a quasi-infinite configuration space, the potential variability in the appearance of clothing items makes them difficult for a machine to understand, identify uniquely and interact with. From an applications perspective, and as part of the EU CloPeMa project, the integrated visual perception architecture refines a pre-existing clothing manipulation pipeline by completing pre-wash clothes (category) sorting (using single-shot or interactive perception for garment categorisation and manipulation) and post-wash dual-arm flattening. To the best of the author's knowledge, the autonomous clothing perception and manipulation solutions investigated in this thesis were first proposed and reported by the author. All of the robot demonstrations reported in this work follow a perception-manipulation methodology in which visual and tactile feedback (in the form of surface wrinkledness captured by the high-accuracy depth sensor, i.e. the CloPeMa stereo head, or the predictive confidence modelled by Gaussian Processes) serves as the halting criterion in the flattening and sorting tasks, respectively. From a scientific perspective, the proposed visual perception architecture addresses the above challenges by parsing and grouping 3D clothing configurations hierarchically from low-level curvatures, through mid-level surface shape representations (providing topological descriptions and 3D texture representations), to high-level semantic structures and statistical descriptions. A range of visual features such as Shape Index, Surface Topologies Analysis and Local Binary Patterns have been adapted within this work to parse clothing surfaces and textures, and several novel features have been devised, including B-Spline Patches with Locality-Constrained Linear coding and Topology Spatial Distance, to describe and quantify generic landmarks (wrinkles and folds). The essence of the proposed architecture is 3D generic surface parsing and interpretation, which is critical to underpinning a number of laundering tasks and has the potential to be extended to other rigid and non-rigid object perception and manipulation tasks.
    The experimental results presented in this thesis demonstrate that: firstly, the proposed grasping approach achieves 84.7% accuracy on average; secondly, the proposed flattening approach is able to flatten towels, t-shirts and pants (shorts) within 9 iterations on average; thirdly, the proposed clothes recognition pipeline can recognise clothes categories from highly wrinkled configurations and advances the state of the art by 36% in terms of classification accuracy, achieving an 83.2% true-positive classification rate when discriminating between five categories of clothes; finally, the Gaussian Process based interactive perception approach exhibits a substantial improvement over single-shot perception. Accordingly, this thesis has advanced the state of the art of robot clothes perception and manipulation.
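    As a small worked example of the Shape Index feature mentioned above (in Koenderink and van Doorn's formulation), the sketch below maps per-point principal curvatures to the [-1, 1] shape spectrum; it assumes curvatures k1 >= k2 have been estimated upstream, and the sample values are illustrative.

```python
import numpy as np

# Shape Index from principal curvatures k1 >= k2 (Koenderink & van Doorn):
#   S = (2 / pi) * arctan((k1 + k2) / (k1 - k2))
def shape_index(k1, k2):
    """Map curvatures to [-1, 1]: -1 = cup, 0 = saddle, +1 = cap.
    Planar points (k1 == k2 == 0) are undefined and yield NaN."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return (2.0 / np.pi) * np.arctan((k1 + k2) / (k1 - k2))

# Example: a spherical cap, a ridge-like wrinkle, and a saddle point.
k1 = np.array([1.0, 1.0, 0.5])
k2 = np.array([1.0, 0.0, -0.5])
print(shape_index(k1, k2))  # [1.0, 0.5, 0.0]
```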