
    Annotated Bibliography: Anticipation


    Deep reinforcement learning for soft, flexible robots: brief review with impending challenges

    The growing interest in the innate softness of robotic structures, combined with the extensive developments in embodied intelligence, has given rise to a relatively new yet rewarding sphere of technology: intelligent soft robotics. Fusing deep reinforcement learning algorithms with soft, bio-inspired structures points towards fully self-sufficient agents capable of learning from observations collected in their environment. For soft robotic structures possessing countless degrees of freedom, it is often impractical to formulate the mathematical models needed to train a deep reinforcement learning (DRL) agent, and deploying current imitation learning algorithms on soft robotic systems has provided competent results. This review article gives an overview of such algorithms, along with instances of their application to real-world scenarios yielding frontier results, and briefly describes the emerging branches of DRL research in soft robotics.
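
    To make the agent–environment loop underlying such DRL approaches concrete, the sketch below trains a linear softmax policy with a plain REINFORCE update on a toy stand-in for a soft-actuator environment; the environment, dimensions, and reward are hypothetical illustrations rather than anything taken from the review.

```python
import numpy as np

# Minimal REINFORCE-style sketch: a linear softmax policy learns from
# observations collected in a toy stand-in for a soft-robot environment.
# The environment, dimensions, and reward are hypothetical illustrations.

rng = np.random.default_rng(0)
OBS_DIM, N_ACTIONS = 4, 3                # e.g. bend sensors -> discrete pressure commands
theta = np.zeros((OBS_DIM, N_ACTIONS))   # policy parameters


def step(state, action):
    """Toy dynamics: reward is higher when the action tracks the first sensor."""
    reward = 1.0 - abs(action / (N_ACTIONS - 1) - state[0])
    next_state = rng.random(OBS_DIM)
    return next_state, reward


def policy(state):
    """Softmax over a linear scoring of the observation."""
    logits = state @ theta
    p = np.exp(logits - logits.max())
    return p / p.sum()


for episode in range(200):
    state, grads, rewards = rng.random(OBS_DIM), [], []
    for t in range(20):                              # one rollout
        probs = policy(state)
        action = rng.choice(N_ACTIONS, p=probs)
        one_hot = np.eye(N_ACTIONS)[action]
        grads.append(np.outer(state, one_hot - probs))   # grad of log pi(a|s)
        state, r = step(state, action)
        rewards.append(r)
    returns = np.cumsum(rewards[::-1])[::-1]         # reward-to-go
    for g, G in zip(grads, returns):
        theta += 0.01 * g * G                        # policy-gradient ascent
```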

    Spatial relational learning and foraging in cotton-top tamarins

    Spatial relational learning can be defined as the use of the spatial (geometric) relationship between two or more cues (landmarks) in order to locate additional points in space (O'Keefe and Nadel, 1979). An internal spatial representation enables an animal to compute novel locations and travel routes from familiar landmarks and routes (Dyer, 1993). A spatial representation is an internal construct mediating between perceived stimuli in the environment and the behaviour of the animal (Tolman, 1948). In this type of spatial representation the information encoded must be isomorphic with the physical environment, such that the geometric relations of distance, angle and direction are maintained or can be computed from the stored information (Gallistel, 1990). A series of spatial and foraging task experiments was conducted to investigate the use of spatial relational learning as a spatial strategy available to cotton-top tamarins (Saguinus oedipus oedipus). The apparatus was an 8x8 matrix of holes set in an upright wooden board, allowing the manipulation of visual cues and hidden food items such that the spatial configuration of cues and food could be transformed (translated or rotated) with respect to the perimeter of the board. The definitive test of spatial relational learning was whether the monkeys relied upon the spatial relationship between the visual cues to locate the position of the hidden food items. In a control experiment testing for differential use of perceptual information, the results showed that, given the choice, tamarins relied on visual over olfactory cues in a foraging task. Callitrichids typically depend on olfactory communication in socio-sexual contexts, so it was unusual that olfaction did not also play a significant role in foraging. In the first spatial learning experiment, the tamarins were found to rely on the three visually presented cues to locate the eleven hidden food items; however, their performance was not very accurate. In the next experiment the task was simplified so that the types of spatial strategies the monkeys were using to solve the foraging task could be clearly identified: only two visual cues were presented, one on either end of a line of four hidden food items. Once the monkeys were trained to these cues, the cues and food were translated and/or rotated on the board. Data from the beginning and middle of each testing session were used in the final analysis: a previous analysis had found that the monkeys initially searched the baited holes at the beginning of a testing session and thereafter predominantly searched unbaited holes. This suggests that they followed a win-stay/lose-shift foraging strategy, a finding supported by other studies of tamarins in captivity (Menzel and Juno, 1982) and in the wild (Garber, 1989). The results also showed that the monkeys searched predominantly between the cues and not outside or around them, indicating that they located the hidden food by using the spatial relationship between the visual cues. This provides evidence for the use of spatial relational learning as a foraging strategy by cotton-top tamarins and for the existence of complex internal spatial representations. Further studies are suggested to test captive monkeys' spatial relational capabilities and their foraging strategies. In addition, comparative and field studies are outlined that would provide information regarding New World monkeys' spatial learning abilities, neurophysiological organisation and the evolution of complex computational processes.
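
    The relational rule probed in the two-cue experiment can be made concrete with a short sketch: if the four food sites lie on the line between the two cues, an animal that encodes the cue–cue relationship can recover them after the whole configuration is translated or rotated. The coordinates below are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

# Sketch of the relational rule tested in the experiment: the four food sites
# lie on the line between the two visual cues, so an animal encoding the
# cue-cue relationship can recover them after the whole configuration is
# translated or rotated on the board. Coordinates here are hypothetical.


def food_sites(cue_a, cue_b, n_items=4):
    """Interpolate n_items equally spaced positions strictly between two cues."""
    cue_a, cue_b = np.asarray(cue_a, float), np.asarray(cue_b, float)
    fractions = np.linspace(0, 1, n_items + 2)[1:-1]   # exclude the cues themselves
    return cue_a + np.outer(fractions, cue_b - cue_a)


# Trained configuration ...
print(food_sites((1, 3), (6, 3)))

# ... and the same relational rule after a 90-degree rotation plus translation
# of the cue pair: the predicted sites move with the cues, not with the board.
print(food_sites((3, 1), (3, 6)))
```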

    Software agents & human behavior

    People make important decisions in emergencies, and these decisions often involve high stakes in terms of lives and property. The Bhopal disaster (1984), the Piper Alpha disaster (1988), the Montara blowout (2009), and the Deepwater Horizon explosion (2010) are a few examples among many industrial incidents. In these incidents, those in charge made critical decisions under various mental stressors such as time pressure, fatigue, and panic. This thesis presents an application of naturalistic decision-making (NDM), a recent decision-making theory inspired by experts making decisions in real emergencies. The study develops an intelligent agent model that can be programmed to make human-like decisions in emergencies. The agent model has three major components: (1) a spatial learning module, which the agent uses to learn escape routes, i.e. the routes designated in a facility for emergency evacuation; (2) a situation recognition module, which is used to recognize or distinguish among evolving emergency situations; and (3) a decision-support module, which exploits the modules in (1) and (2) and implements an NDM-based decision logic for producing human-like decisions in emergencies. The spatial learning module comprises a generalized stochastic Petri net-based model of spatial learning. The model classifies routes into five classes based on landmarks, which are objects with salient spatial features; these classes address how difficult a landmark turns out to be when an agent observes it for the first time during a route traversal. An extension to the spatial learning model is also proposed, investigating how successive route traversals may affect retention of a route in the agent's memory. The situation awareness module uses a Markov logic network (MLN) to define different offshore emergency situations through first-order logic (FOL) rules. The purpose of this module is to give the agent the necessary experience of dealing with emergencies; its potential lies in the fact that different training samples can be used to produce agents with different experience or capability to deal with an emergency situation. To demonstrate this, two agents were developed and trained using two different sets of empirical observations. The two agents were found to differ in recognizing the prepare-to-abandon-platform (PAPA) alarm, and to be similar in recognizing an emergency from other cues. Finally, the decision-support module is proposed as a union of the spatial learning module, the situation awareness module, and the NDM-based decision logic. The decision logic is inspired by Klein's (1998) recognition-primed decision-making (RPDM) model, and the agent's decision-making attitudes under the RPDM are represented in the form of belief, desire, and intention (BDI). The decision logic involves recognition of situations based on experience (as proposed in the situation recognition module) and recognition of situations based on classification, where ontological classification guides the agent in cases where its experience of a situation is inadequate. At the planning stage, the decision logic exploits the agent's spatial knowledge (as proposed in the spatial learning module) about the layout of the environment to adjust the course of actions relevant to a decision already made as a by-product of situation recognition. The proposed agent model has the potential to improve the fidelity of virtual training environments by adding agents that exhibit human-like intelligence in performing tasks related to emergency evacuation. Moreover, the basis provided here, in the form of an agent representing human fallibility, should not be overlooked in fields such as human reliability analysis.
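
    A minimal sketch of the decision logic described above: recognise a situation from experienced cue patterns, fall back to a coarse classification when experience is inadequate, and then plan an escape route from stored spatial knowledge. The cue names, classes, and routes are hypothetical stand-ins, not the thesis's Markov logic network or Petri net implementation.

```python
# Sketch of an RPD-style decision logic: recognition from experience first,
# ontology-style classification as a fallback, then route planning from
# spatial knowledge. All names, patterns, and routes are hypothetical.

EXPERIENCE = {               # learned associations: observed cue patterns -> situation
    frozenset({"PAPA_alarm"}): "abandon_platform",
    frozenset({"smoke", "heat"}): "fire",
}

FALLBACK_CLASSES = {         # coarse fallback when experience is inadequate
    "alarm": "muster",
    "smoke": "fire",
}

ROUTES = {                   # spatial knowledge: situation -> preferred escape route
    "fire": ["corridor_B", "stairs_2", "lifeboat_station"],
    "abandon_platform": ["stairs_1", "lifeboat_station"],
    "muster": ["corridor_A", "muster_point"],
}


def decide(cues):
    """Return (situation, plan) for a set of observed cue strings."""
    observed = frozenset(cues)
    # 1. Recognition primed by experience: a known cue pattern is present.
    for pattern, situation in EXPERIENCE.items():
        if pattern <= observed:
            return situation, ROUTES[situation]
    # 2. Otherwise classify from any single recognizable cue.
    for cue in observed:
        for key, situation in FALLBACK_CLASSES.items():
            if key in cue:
                return situation, ROUTES[situation]
    return "unknown", ["stay_put", "await_instruction"]


print(decide({"smoke", "heat"}))      # experience-based recognition
print(decide({"general_alarm"}))      # fallback classification via the "alarm" cue
```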

    Influence of habitat ecology on spatial learning by the threespine stickleback


    Visual navigation in ants

    Navigating efficiently in the outside world requires many cognitive abilities like extracting, memorising, and processing information. The remarkable navigational abilities of insects are an existence proof of how small brains can produce exquisitely efficient, robust behaviour in complex environments. During their foraging trips, insects, like ants or bees, are known to rely on both path integration and learnt visual cues to recapitulate a route or reach familiar places like the nest. The strategy of path integration is well understood, but much less is known about how insects acquire and use visual information.
Field studies give good descriptions of visually guided routes, but our understanding of the underlying mechanisms comes mainly from simplified laboratory conditions using artificial, geometrically simple landmarks. My thesis proposes an integrative approach that combines (1) field and lab experiments on two visually guided ant species (Melophorus bagoti and Gigantiops destructor) and (2) an analysis of panoramic pictures recorded along the animal's route. The use of panoramic pictures allows an objective quantification of the visual information available to the animal. Results from both species, in the lab and the field, converged, showing that ants do not segregate their visual world into objects, such as landmarks or discrete features, as a human observer might assume. Instead, efficient navigation seems to arise from the use of cues spread widely across the ants' panoramic visual field, encompassing both proximal and distal objects together. Such relatively unprocessed panoramic views, even at low resolution, provide remarkably unambiguous spatial information in natural environments. Using such a simple but efficient panoramic visual input, rather than focusing on isolated landmarks, seems an appropriate strategy to cope with the complexity of natural scenes and the poor resolution of insects' eyes. Also, panoramic pictures can serve as a basis for running analytical models of navigation. The predictions of these models can be directly compared with the actual behaviour of real ants, allowing the iterative tuning and testing of different hypotheses. This integrative approach led me to the conclusion that ants do not rely on a single navigational technique, but might switch between strategies according to whether they are on or off their familiar terrain. For example, ants can robustly recapitulate a familiar route by simply aligning their body so that the current view best matches their memory. However, this strategy becomes ineffective when they are displaced away from the familiar route. In such a case, ants appear to head instead towards the regions where the skyline appears lower than the height recorded in their memory, which generally leads them closer to a familiar location. How ants choose between strategies at a given time might be based simply on the degree of familiarity of the panoramic scene currently perceived. Finally, this thesis raises questions about the nature of ant memories. Past studies proposed that ants memorise a succession of discrete 2D 'snapshots' of their surroundings. In contrast, results obtained here show that knowledge from the end of a foraging route (15 m) impacts strongly on the behaviour at the beginning of the route, suggesting that the visual knowledge of a whole foraging route may be compacted into a single holistic memory. Accordingly, repetitive training on the exact same route clearly affects the ants' behaviour, suggesting that the memorised information is processed and not 'obtained at once'. While navigating along their familiar route, the ants' visual system is continually stimulated by a slowly evolving scene, and learning a general pattern of stimulation rather than storing independent but very similar snapshots appears a reasonable hypothesis to explain navigation on a natural scale; such learning works remarkably well with neural networks. Nonetheless, the precise nature of ants' visual memories, and how elaborate they are, remain wide open questions.
Overall, my thesis tackles the nature of ants' perception and memory as well as how both are processed together to produce an appropriate navigational response. These results are discussed in the light of comparative cognition. Both vertebrates and insects have solved the same problem of navigating efficiently in the world. In light of Darwin's theory of evolution, there is no a priori reason to think that there is a clear division between the cognitive mechanisms of different species. The actual gap between insect and vertebrate cognitive sciences may result more from different approaches than from real differences. Research on insect navigation has been approached with a bottom-up philosophy, one that examines how simple mechanisms can produce seemingly complex behaviour. Such parsimonious solutions, like the ones explored in the present thesis, can provide useful baseline hypotheses for navigation in other, larger-brained animals, and thus contribute to a truly comparative cognition.
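
    The two view-based strategies described in the abstract can be sketched with a one-dimensional toy panorama (one skyline height per degree of azimuth): align the body so the current view best matches the stored view, and, when no rotation gives a familiar match, head towards the direction where the skyline sits lowest relative to memory. The views, threshold, and resolution below are illustrative assumptions, not the thesis's image-processing pipeline.

```python
import numpy as np

# Toy sketch of the two view-based strategies: (1) rotational image matching
# against a stored panoramic view, and (2) heading towards the lowest skyline
# relative to memory when the scene is unfamiliar. Views and threshold are
# hypothetical illustrations.


def best_alignment(current, memory):
    """Rotate the current view and return (best heading in degrees, mismatch)."""
    diffs = [np.abs(np.roll(current, -h) - memory).mean() for h in range(len(current))]
    h = int(np.argmin(diffs))
    return h, diffs[h]


def choose_heading(current, memory, familiarity_threshold=0.2):
    heading, mismatch = best_alignment(current, memory)
    if mismatch <= familiarity_threshold:
        return heading                      # familiar terrain: align with memory
    # Off the familiar route: head where the skyline is lowest relative to
    # memory, which tends to lead back towards familiar ground.
    return int(np.argmin(current - memory))


rng = np.random.default_rng(1)
memory = rng.random(360)
on_route = np.roll(memory, 40) + rng.normal(0, 0.02, 360)   # familiar view, rotated 40 deg
print(choose_heading(on_route, memory))                     # ~40
print(choose_heading(rng.random(360) + 0.5, memory))        # unfamiliar: lowest-skyline heading
```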

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot in selecting the next direction to go; (iii) mapping, involving the construction of a spatial representation from the sensory information perceived; (iv) localization, as the strategy to estimate the robot's position within the spatial map; (v) path planning, as the strategy to find a path, optimal or not, towards a goal location; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized within the 7 categories described next.
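
    As a minimal illustration of the path-planning activity listed above, the sketch below runs a breadth-first search over a small hypothetical occupancy grid; the book's chapters cover far richer planners, so this only shows the basic idea of searching a spatial representation for a route to a goal.

```python
from collections import deque

# Breadth-first search on a small hypothetical occupancy grid
# (0 = free, 1 = obstacle), purely as an illustration of path planning.

GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]


def plan(start, goal):
    """Return a list of grid cells from start to goal, or None if unreachable."""
    rows, cols = len(GRID), len(GRID[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None


print(plan((0, 0), (4, 4)))
```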

    xxAI - Beyond Explainable AI

    This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNN), have achieved better predictive performance, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable for humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.
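
    As a small illustration of the kind of post-hoc, model-agnostic explanation discussed above, the sketch below computes permutation feature importance on synthetic data; it is a generic technique used here as an assumed example, not a method drawn from the volume itself.

```python
import numpy as np

# Permutation feature importance on synthetic data: how much does the model's
# error increase when a feature's link to the target is broken by shuffling?

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)  # feature 2 is irrelevant

# "Model": ordinary least squares fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)


def predict(data):
    return data @ w


baseline_error = np.mean((predict(X) - y) ** 2)

for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature-target link
    increase = np.mean((predict(X_perm) - y) ** 2) - baseline_error
    print(f"feature {j}: error increase {increase:.3f}")
```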