
    Developmental Bootstrapping of AIs

    Although some current AIs surpass human abilities in closed artificial worlds such as board games, their abilities in the real world are limited. They make strange mistakes and do not notice them. They cannot be instructed easily, fail to use common sense, and lack curiosity. They do not make good collaborators. Mainstream approaches for creating AIs are the traditional, manually constructed symbolic AI approach and generative and deep learning approaches, including large language models (LLMs). These systems are not well suited for creating robust and trustworthy AIs. Although it is outside of the mainstream, the developmental bootstrapping approach has more potential. In developmental bootstrapping, AIs develop competences like human children do. They start with innate competences. They interact with the environment and learn from their interactions. They incrementally extend their innate competences with self-developed competences. They interact with and learn from people and establish perceptual, cognitive, and common grounding. They acquire the competences they need through bootstrapping. However, developmental robotics has not yet produced AIs with robust adult-level competences. Projects have typically stopped at the Toddler Barrier, corresponding to human development at about two years of age, before speech is fluent. They also do not bridge the Reading Barrier: skillfully and skeptically drawing on the socially developed information resources that power current LLMs. The next competences in human cognitive development involve intrinsic motivation, imitation learning, imagination, coordination, and communication. This position paper lays out the logic, prospects, gaps, and challenges for extending the practice of developmental bootstrapping to acquire further competences and create robust, resilient, and human-compatible AIs. Comment: 102 pages, 29 figures.
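
    The bootstrapping loop described above (start from innate competences, learn from interaction, then compose what has been learned into new, self-developed competences) can be sketched in a few lines. The sketch below is purely illustrative: the class names, the "practice the weakest skill" policy, and the numeric skill scores are assumptions of this example, not the architecture proposed in the paper.

```python
# Minimal, hypothetical sketch of a developmental-bootstrapping loop:
# an agent starts with innate competences, learns from interaction, and
# composes strong competences into new, self-developed ones.
from dataclasses import dataclass, field


@dataclass
class Competence:
    name: str
    skill: float = 0.0  # crude proxy for proficiency


@dataclass
class Agent:
    competences: list = field(default_factory=lambda: [
        Competence("grasp", 0.2), Competence("gaze", 0.3)])  # innate competences

    def act(self, observation):
        # Placeholder intrinsic motivation: practice the weakest competence.
        return min(self.competences, key=lambda c: c.skill).name

    def learn(self, observation, action, feedback):
        # Strengthen the competence that was exercised, based on feedback.
        for c in self.competences:
            if c.name == action:
                c.skill += 0.1 * feedback

    def extend(self):
        # Bootstrap: once existing competences are strong enough, compose
        # them into a new, self-developed competence (added only once).
        names = {c.name for c in self.competences}
        if "reach-and-grasp" not in names and all(c.skill > 0.5 for c in self.competences):
            self.competences.append(Competence("reach-and-grasp", 0.1))


def run(agent, env_feedback, steps=10):
    for t in range(steps):
        observation = t                      # placeholder observation
        action = agent.act(observation)
        agent.learn(observation, action, env_feedback(observation, action))
        agent.extend()


if __name__ == "__main__":
    agent = Agent()
    run(agent, env_feedback=lambda obs, act: 1.0)
    print([(c.name, round(c.skill, 2)) for c in agent.competences])
```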

    Advanced materials research for a green future


    4D Printing Shape Memory Polymers for Biomedical Applications

    The development of 3D printing techniques using shape-memory polymers (SMPs) has created the potential to produce dynamic, three-dimensional structures rapidly and to customize them for specific and complex architectures. These qualities have made 3D printing a popular fabrication method for future SMP parts and devices. While much is known about the effects of printing parameters on 3D-printed SMPs, there remains a gap in the understanding of how these parameters affect fundamental shape-memory properties. Understanding the shape-memory behavior of SMPs post-printing can reveal potential advantages or weaknesses of using these materials in biomedical applications. Furthermore, understanding how these materials perform can lead to new advancements in platforms for cell culture, personalized medicine, and medical devices. The primary goal of this dissertation was to evaluate a cytocompatible SMP and to develop techniques for 3D printing predictable substrates for biomedical applications. This was accomplished through two major aims: 1) printing and performing materials characterization of cytocompatible SMP dogbones, and 2) studying and applying programming via printing (PvP) in different geometric constructs. The first part of this thesis covered the preparation of cytocompatible SMP filament and the fundamental materials characterization. The second portion addressed the development and implementation of PvP. Chapter 2 described the process for selecting the appropriate material and developing a protocol for producing a printer-compatible filament used in the fundamental and PvP studies later in the thesis. It was determined that a commercially available SMP (SMP MM4520) would best fit the needs of the remaining experiments. A custom-made melt-spinner was chosen to produce filament from the SMP pellets. Next, a study was carried out to evaluate the shape-memory behavior of the SMP (Chapter 3). While several studies have reported the effects that certain printing-process parameters have on mechanical properties or part quality, the effects of printing parameters on the shape-memory abilities of printed SMP structures are not well understood. To determine the extent to which the 3D printing process affects the fundamental shape-memory properties of a printed SMP structure, we systematically varied temperature, multiplier, and fiber orientation, that is, the direction of the individual fibers that make up the sample, and studied the effect on the fixing and recovery ratios of shape-memory dogbone samples. It was found that fiber orientation significantly impacted the fixing ratio, while temperature and multiplier had little effect. No significant effects on the recovery ratio were seen from any of the parameters. However, as fiber orientation went from 0° to 90°, the variability of the recovery ratios increased. These results indicate that fiber orientation is a dominant factor in the resulting shape-memory capacities, specifically the fixity, of a 3D-printed SMP, and they suggest that the printing parameters affect how reliably the polymer recovers its original shape. A technique for trapping strains in the SMP during printing was developed (Chapter 4) for fabricating ready-to-trigger objects immediately after printing. Trapped strains were measured in 1D, 2D, and 3D samples with varied temperature, multiplier, and fiber orientation. Different geometries were observed post-triggering and simulated, and an in vitro application was presented in Chapter 5.
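
    For readers unfamiliar with the metrics mentioned above, fixing and recovery ratios are commonly computed from the programmed strain, the strain retained after unloading, and the residual strain after recovery. The sketch below shows these standard definitions; the strain values are hypothetical and are not data from this dissertation.

```python
# Standard definitions of shape-memory fixing and recovery ratios.
# The strain values used below are hypothetical and are not data
# from this dissertation.

def fixing_ratio(strain_programmed: float, strain_fixed: float) -> float:
    """R_f: fraction of the programmed strain retained after unloading, in %."""
    return 100.0 * strain_fixed / strain_programmed


def recovery_ratio(strain_programmed: float, strain_residual: float) -> float:
    """R_r: fraction of the programmed strain recovered after triggering, in %."""
    return 100.0 * (strain_programmed - strain_residual) / strain_programmed


if __name__ == "__main__":
    eps_m, eps_u, eps_p = 0.20, 0.19, 0.01   # programmed, fixed, residual strain
    print(f"R_f = {fixing_ratio(eps_m, eps_u):.1f}%")     # 95.0%
    print(f"R_r = {recovery_ratio(eps_m, eps_p):.1f}%")   # 95.0%
```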

    Opinions and Outlooks on Morphological Computation

    Morphological Computation is based on the observation that biological systems seem to carry out relevant computations with their morphology (physical body) in order to interact successfully with their environments. This can be observed in a whole range of systems and at many different scales. It has been studied in animals (for example, while running, the functionality of coping with impact and slight unevenness in the ground is "delivered" by the shape of the legs and the damped elasticity of the muscle-tendon system) and in plants, and it has also been observed at the cellular and even the molecular level, as seen, for example, in spontaneous self-assembly. The concept of morphological computation has served as an inspirational resource for building bio-inspired robots, designing novel approaches to support systems in health care, and implementing computation with natural systems, and it has also found its way into art and architecture. As a consequence, the field is highly interdisciplinary, which is nicely reflected in the wide range of authors featured in this e-book. We have contributions from robotics, mechanical engineering, health, architecture, biology, philosophy, and others.
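
    The running example above (impact and slight ground unevenness handled by leg shape and damped muscle-tendon elasticity) can be illustrated with a toy spring-damper leg that absorbs a ground disturbance without any controller. The model and all parameter values below are generic assumptions for illustration, not a model from any chapter of the e-book.

```python
# Toy illustration of morphological computation: a damped spring-mass "leg"
# passively absorbs a sudden 2 cm ground-height change with no controller,
# i.e. part of the control problem is "computed" by the morphology itself.
# All parameter values are arbitrary; this is not a model from the e-book.

def simulate(steps=2000, dt=1e-3, m=1.0, k=400.0, c=20.0, rest_len=0.5):
    y, v = rest_len, 0.0      # hip height [m] and vertical velocity [m/s]
    g = 9.81
    ground = 0.0
    heights = []
    for i in range(steps):
        if i == 500:
            ground = 0.02      # sudden unevenness in the ground
        compression = max(0.0, rest_len - (y - ground))
        # Passive leg force: spring plus damper, never pulling downward.
        f_leg = max(0.0, k * compression - c * v) if compression > 0 else 0.0
        a = (f_leg - m * g) / m
        v += a * dt            # semi-implicit Euler integration
        y += v * dt
        heights.append(y)
    return heights


if __name__ == "__main__":
    hs = simulate()
    print(f"height before disturbance: {hs[499]:.3f} m, after settling: {hs[-1]:.3f} m")
```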

    Current and Future Challenges in Knowledge Representation and Reasoning

    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022, a Dagstuhl Perspectives Workshop on Knowledge Representation and Reasoning was held. The goal of the workshop was to describe the state of the art in the field, including its relation to other areas and its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and its challenges, along with key priorities for the next decade.

    Artificial Intelligence and Ambient Intelligence

    This book includes a series of scientific papers published in the Special Issue on Artificial Intelligence and Ambient Intelligence of the MDPI journal Electronics. The book starts with an opinion paper, “Relations between Electronics, Artificial Intelligence and Information Society through Information Society Rules”, which presents the relations between the information society, electronics, and artificial intelligence mainly through twenty-four information society (IS) laws. After that, the book continues with a series of technical papers that present applications of Artificial Intelligence and Ambient Intelligence in a variety of fields, including affective computing, privacy and security in smart environments, and robotics. More specifically, the first part presents the use of Artificial Intelligence (AI) methods in combination with wearable devices (e.g., smartphones and wristbands) for recognizing human psychological states (e.g., emotions and cognitive load). The second part presents the use of AI methods in combination with laser sensors or Wi-Fi signals for improving security in smart buildings by identifying and counting visitors. The last part presents the use of AI methods in robotics for improving robots’ abilities in object gripping, manipulation, and perception. The language of the book is rather technical; the intended audience is therefore scientists and researchers with at least some basic knowledge of computer science.
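
    As an illustration of the kind of pipeline described in the first part (AI methods applied to wearable-device signals to recognize psychological states), the sketch below trains a classifier on simple heart-rate and skin-conductance features. The features, labels, and data are synthetic placeholders and are not taken from any of the included papers.

```python
# Illustrative sketch: simple features from wearable signals fed to a
# classifier that predicts a psychological state. The features, labels,
# and data are synthetic placeholders, not from the included papers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features: [mean heart rate, heart-rate variability, skin conductance]
n = 300
X = rng.normal(size=(n, 3))
# Toy labeling rule: elevated heart rate plus skin conductance -> high load (1).
y = ((X[:, 0] + X[:, 2]) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```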

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, touch suffers from a vicious cycle: immature sensor technology keeps industry demand low, which in turn leaves even less incentive to make the sensors that exist in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment; contact is avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise, and good reasons are steering the manipulation and locomotion communities’ attention towards deliberate physical interaction with the environment prior to, during, and after a task. We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision, such as darkness, smoky or dusty rooms in search and rescue, underwater scenes, transparent and reflective objects, and retrieving items from inside a bag. Even in normal lighting conditions, during a manipulation task the target object and the fingers are usually occluded from view by the gripper. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. Then, we investigate making the tactile modality, which is local and slow by nature, more efficient for the task by using active exploration to predict the most cost-effective moves. To combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
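
    One simple way to combine the local, physical evidence of touch with the fast, global evidence of vision is late fusion: concatenate a visual feature vector with a tactile feature vector and learn a grasp-outcome classifier on top. The sketch below illustrates this generic idea only; the feature dimensions, model, and data are assumptions and do not reproduce the thesis's method.

```python
# Generic late-fusion sketch: concatenate visual and tactile feature vectors
# and learn a grasp-outcome classifier on top. Feature dimensions, model, and
# data are assumptions for illustration, not the thesis's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n, d_vis, d_tac = 500, 8, 4
vision_feats = rng.normal(size=(n, d_vis))   # e.g. from a vision-based grasp planner
tactile_feats = rng.normal(size=(n, d_tac))  # e.g. contact/pressure statistics

# Toy ground truth: success depends on evidence from both modalities.
labels = ((vision_feats[:, 0] + tactile_feats[:, 0]) > 0).astype(int)

fused = np.concatenate([vision_feats, tactile_feats], axis=1)  # late fusion
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(f"training accuracy on the synthetic set: {clf.score(fused, labels):.2f}")
```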

    Study on the design of DIY social robots
