
    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is therefore very important to obtain a computational understanding of how humans form a symbol system and acquire semiotic skills through autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important both for understanding human social interactions and for developing robots that can communicate smoothly with human users over the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.
    Comment: submitted to Advanced Robotics

    Data Informed Health Simulation Modeling

    Combining reliable data with dynamic models can enhance the understanding of health-related phenomena. Smartphone sensor data characterizing discrete states are often suitable for analysis with machine learning classifiers. For dynamic models with continuous states, high-velocity data also serve an important role in model parameterization and calibration. Particle filtering (PF), combined with dynamic models, can support accurate recurrent estimation of continuous system state. This thesis explored these and related ideas through several case studies. The first employed multivariate hidden Markov models (HMMs) to identify smoking intervals, using time series of smartphone-based sensor data. Findings demonstrated that multivariate HMMs can achieve notable accuracy in classifying smoking state, with performance strongly elevated by appropriate data conditioning. Reflecting the advantages of dynamic simulation models, the thesis contributed two applications of articulated dynamic models: an agent-based model (ABM) of smoking and e-cigarette use, and a hybrid multi-scale model of diabetes in pregnancy (DIP). The ABM of smoking and e-cigarette use, informed by cross-sectional data, supports investigations of smoking behavior change in light of the influence of social networks and e-cigarette use. The DIP model was informed by both longitudinal and cross-sectional data, and is notable for its use of interwoven ABM, system dynamics (SD), and discrete event simulation elements to explore the interaction of risk factors, the coupled dynamics of glycemia regulation, and intervention tradeoffs to address the growing incidence of DIP in the Australian Capital Territory. The final study applied PF with an SD model of mosquito development to estimate the underlying Culex mosquito population from various direct observations, including time series of weather-related factors and mosquito trap counts.
    The results demonstrate the effectiveness of PF in regrounding the states and evolving model parameters based on incoming observations. Using PF in the context of automated model calibration allows optimization of parameter values to markedly reduce model discrepancy. Collectively, the thesis demonstrates how the characteristics and availability of data can influence model structure and scope, how dynamic model structure directly affects the ways that data can be used, and how advanced analysis methods for calibration and filtering can enhance model accuracy and versatility.
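    The recurrent estimation step described above can be illustrated with a minimal bootstrap particle filter on a toy one-dimensional population model. This is a hypothetical sketch, not the thesis's SD mosquito model: the growth rate, noise scales, and Gaussian observation model are assumed values chosen for the example.

```python
import math
import random

def particle_filter(observations, n_particles=1000):
    """Bootstrap particle filter for a toy 1-D population model.

    The latent state x evolves as x' = 1.05 * x + process noise; each
    observation y is a noisy reading of x (a stand-in for trap counts).
    """
    # Initialize particles from a broad uniform prior over the state.
    particles = [random.uniform(0.0, 100.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the dynamic model.
        particles = [max(0.0, 1.05 * x + random.gauss(0.0, 2.0))
                     for x in particles]
        # Weight: Gaussian likelihood of the observation given each particle.
        weights = [math.exp(-((y - x) ** 2) / (2.0 * 10.0 ** 2))
                   for x in particles]
        total = sum(weights) or 1e-300
        weights = [w / total for w in weights]
        # Estimate: the weighted mean "regrounds" the state to the data.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Resample: draw a new particle set in proportion to the weights.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates
```

    The same predict-weight-resample loop underlies automated calibration: treating uncertain model parameters as part of the particle state lets incoming observations shift them alongside the system state.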

    Knowledge-based vision and simple visual machines

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature, and indeed the existence, of representations posited to be used within natural vision systems (i.e., animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach to the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong.

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the 'experimenter', and Mary, the 'computational modeller'. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.

    Interoperability and machine-to-machine translation model with mappings to machine learning tasks

    Modern large-scale automation systems integrate thousands to hundreds of thousands of physical sensors and actuators. Demands for more flexible reconfiguration of production systems and optimization across different information models, standards, and legacy systems challenge current system interoperability concepts. Automatic semantic translation across information models and standards is an increasingly important problem that needs to be addressed to fulfill these demands in a cost-efficient manner under constraints of human capacity and resources in relation to timing requirements and system complexity. Here we define a translator-based operational interoperability model for interacting cyber-physical systems in mathematical terms, which includes system identification and ontology-based translation as special cases. We present alternative mathematical definitions of the translator learning task, and mappings to similar machine learning tasks and solutions based on recent developments in machine learning. Possibilities of learning translators between artefacts without a common physical context, for example in simulations of digital twins and across layers of the automation pyramid, are briefly discussed.
    Comment: 7 pages, 2 figures, 1 table, 1 listing. Submitted to the IEEE International Conference on Industrial Informatics 2019, INDIN'19
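    As a hypothetical illustration of framing translation as a machine learning task, the sketch below learns a translator between two information models from paired observations via least-squares regression. The Fahrenheit/Celsius pairing and the function names are assumptions made for the example, not the paper's formal definitions.

```python
# Hypothetical illustration: recover a translator between two information
# models from paired observations, framed as supervised regression.
# Here "model A" reports temperature in degrees Fahrenheit and "model B"
# in degrees Celsius; the mapping is learned from data, not hand-coded.
def fit_linear_translator(pairs):
    """Least-squares fit of y = a * x + b from (x, y) example pairs."""
    n = float(len(pairs))
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

# Paired readings of the same physical quantity under the two models.
pairs = [(32.0, 0.0), (212.0, 100.0), (98.6, 37.0)]
translate = fit_linear_translator(pairs)
```

    Real artefact translation would operate over structured information models rather than scalars, but the principle is the same: a translator is estimated from correspondences instead of being engineered by hand.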

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. However, this remains a significant challenge, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nonetheless garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification with an unordered list of goal predicates, and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction.
    Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
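    Specifying a task as an unordered collection of goal predicates, as in feature (2) above, can be sketched as a simple set-containment check against the scene state. The predicate encoding and block names below are hypothetical, invented for illustration rather than taken from the interface's actual representation.

```python
# Hypothetical sketch: a task specified as an unordered set of goal
# predicates, checked against a Blocks World scene state.
def goals_satisfied(goal_predicates, world_state):
    """The task is complete when every goal predicate holds in the state."""
    return goal_predicates <= world_state

# Current scene: red block on the table, blue block stacked on the red one.
world = {("on", "red_block", "table"), ("on", "blue_block", "red_block")}
# Goal specification: a set of predicates, with no ordering implied.
goals = {("on", "blue_block", "red_block")}
```

    Because the goals are unordered, the planner, rather than the user, decides the sequence of motions that makes every predicate true.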

    Representation recovers information

    Early agreement within cognitive science on the topic of representation has now given way to a combination of positions. Some question the significance of representation in cognition. Others continue to argue in favor, but the case has not been demonstrated in any formal way. The present paper sets out a framework in which the value of representation-use can be mathematically measured, albeit in a broadly sensory context rather than a specifically cognitive one. Key to the approach is the use of Bayesian networks for modeling the distal dimension of sensory processes. More relevant to cognitive science is the theoretical result obtained, which is that a certain type of representational architecture is *necessary* for achievement of sensory efficiency. While exhibiting few of the characteristics of traditional, symbolic encoding, this architecture corresponds quite closely to the forms of embedded representation now being explored in some embedded/embodied approaches. It becomes meaningful to view that type of representation-use as a form of information recovery. A formal basis then exists for viewing representation not so much as the substrate of reasoning and thought, but rather as a general medium for efficient, interpretive processing.
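    The Bayesian-network idea can be sketched minimally as a two-node network in which a distal state causes a proximal sensory reading, and information about the distal state is recovered by Bayesian inversion. The states and probabilities below are invented for illustration; they are not the paper's model.

```python
# Hypothetical two-node Bayesian network: a distal state D causes a
# proximal sensory reading S; information about D is recovered from S.
def posterior_distal(prior, sensor_model, reading):
    """P(D | S = reading), computed by enumeration over distal states."""
    joint = {d: prior[d] * sensor_model[d][reading] for d in prior}
    z = sum(joint.values())
    return {d: p / z for d, p in joint.items()}

prior = {"object_present": 0.2, "object_absent": 0.8}   # P(D)
sensor_model = {                                        # P(S | D)
    "object_present": {"edge_detected": 0.9, "no_edge": 0.1},
    "object_absent": {"edge_detected": 0.2, "no_edge": 0.8},
}
post = posterior_distal(prior, sensor_model, "edge_detected")
```

    In this reading, the posterior over the distal state is the "recovered information": the proximal signal alone underdetermines the world, and inverting the generative model restores what the signal carries about its distal cause.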

    Centralized learning and planning: for cognitive robots operating in human domains
