
    SPOTS: Stable Placement of Objects with Reasoning in Semi-Autonomous Teleoperation Systems

    Full text link
    Pick-and-place is one of the fundamental tasks in robotics research. However, attention has mostly focused on the "pick" task, leaving the "place" task relatively unexplored. In this paper, we address the problem of placing objects in the context of a teleoperation framework. In particular, we focus on two aspects of the place task: the stability robustness and the contextual reasonableness of object placements. Our proposed method combines simulation-driven physical stability verification via real-to-sim with the semantic reasoning capability of large language models. In other words, given place context information (e.g., user preferences, the object to place, and current scene information), our method outputs a probability distribution over the possible placement candidates that accounts for both the robustness and the reasonableness of the placement. The method is extensively evaluated in two simulation environments and one real-world environment, and we show that it greatly increases both the physical plausibility and the contextual soundness of placements while respecting user preferences. Comment: 7 pages
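    For orientation, a minimal sketch of the scoring idea described above: a simulation-derived stability score and an LLM-derived reasonableness score are combined into a probability distribution over placement candidates. The function name, weights, and example scores below are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch: combine per-candidate stability and semantic scores
# into a placement distribution via a weighted softmax.
import math

def placement_distribution(candidates, stability, reasonableness, alpha=1.0, beta=1.0):
    """Softmax over a weighted sum of stability and reasonableness scores."""
    logits = [alpha * stability[c] + beta * reasonableness[c] for c in candidates]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # subtract max for numerical stability
    z = sum(exps)
    return {c: e / z for c, e in zip(candidates, exps)}

# Toy example: three placement locations, scores in [0, 1].
candidates = ["left_shelf", "right_shelf", "table_edge"]
stability = {"left_shelf": 0.9, "right_shelf": 0.8, "table_edge": 0.3}      # e.g. from sim rollouts
reasonableness = {"left_shelf": 0.7, "right_shelf": 0.4, "table_edge": 0.6}  # e.g. from an LLM prompt
print(placement_distribution(candidates, stability, reasonableness))
```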

    CLARA: Classifying and Disambiguating User Commands for Reliable Interactive Robotic Agents

    Full text link
    In this paper, we focus on inferring whether a given user command is clear, ambiguous, or infeasible in the context of interactive robotic agents that use large language models (LLMs). To tackle this problem, we first present an uncertainty estimation method for LLMs to classify whether the command is certain (i.e., clear) or not (i.e., ambiguous or infeasible). Once a command is classified as uncertain, we further distinguish between ambiguous and infeasible commands by leveraging LLMs with situationally aware context in a zero-shot manner. For ambiguous commands, we disambiguate the command by interacting with users via question generation with LLMs. We believe that proper recognition of the given commands can reduce malfunctions and undesired actions of the robot, enhancing the reliability of interactive robotic agents. We present a dataset for robotic situational awareness, consisting of pairs of high-level commands, scene descriptions, and labels of command type (i.e., clear, ambiguous, or infeasible). We validate the proposed method on the collected dataset and in a pick-and-place tabletop simulation. Finally, we demonstrate the proposed approach in real-world human-robot interaction experiments, i.e., handover scenarios.
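    A minimal sketch of the uncertainty-estimation idea, assuming agreement among sampled LLM interpretations is used as the certainty signal; the sampling stub, threshold, and fallback logic are hypothetical stand-ins rather than the paper's method.

```python
# Hypothetical sketch: sample several LLM interpretations of a command and
# use their agreement to decide between clear, ambiguous, and infeasible.
from collections import Counter

def sample_llm_interpretations(command, scene, n=5):
    # Stand-in for n stochastic LLM decodings conditioned on command + scene.
    return ["pick red cup", "pick red cup", "pick red cup", "pick mug", "infeasible"]

def classify_command(command, scene, agreement_threshold=0.8):
    samples = sample_llm_interpretations(command, scene)
    top, count = Counter(samples).most_common(1)[0]
    agreement = count / len(samples)
    if agreement >= agreement_threshold:          # low uncertainty -> certain
        return "infeasible" if top == "infeasible" else "clear"
    # High uncertainty: a further zero-shot query would separate ambiguous
    # from infeasible; here we simply fall back to asking the user a question.
    return "ambiguous"

print(classify_command("bring me the cup", scene="red cup and mug on table"))
```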

    C-ITS Environment Modeling and Attack Modeling

    Full text link
    As technology advances, cities are evolving into smart cities that process large amounts of data, while the elements within urban areas grow increasingly complex and diverse. Among the core systems of a smart city is the Cooperative Intelligent Transport System (C-ITS). C-ITS is a system in which vehicles provide real-time information to drivers about surrounding traffic conditions, sudden stops, falling objects, and other accident risks through roadside base stations. It consists of road infrastructure, C-ITS centers, and vehicle terminals. However, because smart cities integrate many elements through networks and electronic control, they are susceptible to cybersecurity issues. Cybersecurity problems in C-ITS carry a significant risk of causing safety issues. This technical document models the C-ITS environment and the services it provides in order to identify the attack surface where security incidents could occur in a smart city environment. Based on the identified attack surface, the document then constructs attack scenarios and their respective stages. The document describes the concept of C-ITS, followed by the C-ITS environment model, service model, and attack scenario model that we define. Comment: in Korean, 14 figures, 15 pages
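    Purely as an illustration of the "attack scenarios and their respective stages" mentioned above, the sketch below represents a scenario as an ordered list of stages against C-ITS components; the class names and the example stages are hypothetical, not taken from the document.

```python
# Hypothetical representation of an attack scenario as staged actions
# against C-ITS components (roadside units, C-ITS center, vehicle terminals).
from dataclasses import dataclass, field

@dataclass
class AttackStage:
    target: str        # e.g. roadside unit, C-ITS center, vehicle terminal
    technique: str     # action taken at this stage
    outcome: str       # effect on the service

@dataclass
class AttackScenario:
    name: str
    stages: list = field(default_factory=list)

scenario = AttackScenario(
    name="Falsified hazard warning",        # invented example
    stages=[
        AttackStage("roadside unit", "inject forged V2X messages", "false alerts reach vehicles"),
        AttackStage("vehicle terminal", "display fake sudden-stop warning", "drivers brake unnecessarily"),
    ],
)
print(scenario.name, "->", len(scenario.stages), "stages")
```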

    AI-based Attack Graph Generation

    Full text link
    With the advancement of IoT technology, many electronic devices are interconnected through networks, communicating with each other and performing specific roles. However, as numerous devices join networks, the threat of cyberattacks also escalates. Preventing and detecting cyber threats is crucial, and one method of preventing such threats involves attack graphs. Attack graphs are widely used to assess security threats within networks. However, a drawback emerges as the network scales: generating attack graphs becomes time-consuming. To overcome this limitation, artificial intelligence models can be employed. AI models can create attack graphs within a short period, approximating optimal outcomes. The AI models designed for attack graph generation consist of encoders and decoders and are trained using reinforcement learning algorithms. After training the models, we confirmed their learning effectiveness by observing changes in loss and reward values. Additionally, we compared attack graphs generated by the AI model with those created through conventional methods. Comment: in Korean, 8 figures, 14 pages
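    To make the term concrete, the sketch below enumerates attack paths in a tiny hand-made attack graph; exhaustive enumeration like this is what becomes too slow on large networks and what the abstract's encoder-decoder model, trained with reinforcement learning, is meant to approximate. The topology is invented for illustration.

```python
# Toy attack graph: nodes are attacker states (host:privilege),
# edges are exploits that move the attacker between states.
graph = {
    "internet": ["web_server:user"],
    "web_server:user": ["web_server:root", "db_server:user"],
    "web_server:root": ["db_server:user"],
    "db_server:user": ["db_server:root"],
    "db_server:root": [],
}

def attack_paths(graph, node, goal, path=()):
    """Depth-first enumeration of all exploit chains from node to goal."""
    path = path + (node,)
    if node == goal:
        yield path
        return
    for nxt in graph[node]:
        if nxt not in path:        # avoid revisiting states (no cycles)
            yield from attack_paths(graph, nxt, goal, path)

for p in attack_paths(graph, "internet", "db_server:root"):
    print(" -> ".join(p))
```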

    Some identities of degenerate higher-order Daehee polynomials based on λ-umbral calculus

    Get PDF
    The degenerate versions of special polynomials and numbers, initiated by Carlitz, have regained the attention of some mathematicians; they are obtained by replacing the usual exponential function in the generating functions of special polynomials with the degenerate exponential function. To study the relations between degenerate special polynomials, λ-umbral calculus, an analogue of umbral calculus, is intensively applied to obtain formulas expressing one λ-Sheffer polynomial in terms of other λ-Sheffer polynomials. In this paper, we study the connection between the degenerate higher-order Daehee polynomials and other degenerate types of special polynomials. We present explicit formulas for representations of the polynomials using λ-umbral calculus and confirm the presented formulas, for example, between the degenerate higher-order Daehee polynomials and the degenerate Bernoulli polynomials. Additionally, we investigate the pattern of the root distribution of these polynomials.
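    For readers unfamiliar with the notation, the following are standard definitions from the degenerate special-polynomials literature (not reproduced from the paper itself): the degenerate exponential function that replaces e^{xt}, and Carlitz's degenerate Bernoulli polynomials.

```latex
% Degenerate exponential function and its classical limit:
\[
  e_\lambda^{x}(t) = (1+\lambda t)^{x/\lambda}, \qquad
  e_\lambda(t) = e_\lambda^{1}(t), \qquad
  \lim_{\lambda \to 0} e_\lambda^{x}(t) = e^{xt}.
\]
% Carlitz's degenerate Bernoulli polynomials:
\[
  \frac{t}{e_\lambda(t)-1}\, e_\lambda^{x}(t)
    = \sum_{n=0}^{\infty} \beta_{n,\lambda}(x)\, \frac{t^{n}}{n!}.
\]
```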

    Similarity Graph-Based Camera Tracking for Effective 3D Geometry Reconstruction with Mobile RGB-D Camera

    No full text
    In this paper, we present a novel approach for reconstructing 3D geometry from a stream of images captured by a consumer-grade mobile RGB-D sensor. In contrast to previous real-time online approaches that process each incoming image in acquisition order, we show that applying a carefully selected order to (possibly a subset of) the frames for pose estimation enables robust 3D reconstruction while automatically filtering out error-prone images. Our algorithm first organizes the input frames into a weighted graph called the similarity graph. A maximum spanning tree is then found in the graph, and its traversal determines the frames and their processing order. The basic algorithm is then extended by locally repairing the original spanning tree and, where possible, merging disconnected tree components, which further enhances the 3D reconstruction. The ability of our method to generate a less error-prone stream from an input RGB-D stream can also be combined with more sophisticated state-of-the-art techniques, further increasing their effectiveness in 3D reconstruction.
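    A compact sketch of the ordering idea, assuming pairwise frame similarities are already available: build the similarity graph, extract a maximum spanning tree with Prim's algorithm, and hand frames to pose estimation in tree-traversal order. The similarity values below are made-up placeholders for whatever frame-to-frame overlap measure the system actually uses.

```python
# Maximum spanning tree over a small similarity graph, returning the order
# in which (parent, child) frame pairs would be passed to pose estimation.
import heapq

# similarity[i][j]: how well frame j can be registered against frame i
similarity = {
    0: {1: 0.9, 2: 0.2, 3: 0.4},
    1: {0: 0.9, 2: 0.7, 3: 0.1},
    2: {0: 0.2, 1: 0.7, 3: 0.8},
    3: {0: 0.4, 1: 0.1, 2: 0.8},
}

def max_spanning_tree_order(sim, root=0):
    """Prim's algorithm on negated weights."""
    visited, order = {root}, []
    heap = [(-w, root, j) for j, w in sim[root].items()]
    heapq.heapify(heap)
    while heap:
        neg_w, parent, child = heapq.heappop(heap)
        if child in visited:
            continue
        visited.add(child)
        order.append((parent, child))
        for j, w in sim[child].items():
            if j not in visited:
                heapq.heappush(heap, (-w, child, j))
    return order

print(max_spanning_tree_order(similarity))  # [(0, 1), (1, 2), (2, 3)]
```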

    Semi-Autonomous Teleoperation via Learning Non-Prehensile Manipulation Skills

    Full text link
    In this paper, we present a semi-autonomous teleoperation framework for a pick-and-place task using an RGB-D sensor. In particular, we assume that the target object is located in a cluttered environment, where prehensile grasping and non-prehensile manipulation are combined for efficient teleoperation. Trajectory-based reinforcement learning is used to learn non-prehensile manipulation that rearranges objects to enable direct grasping. From the depth image of the cluttered environment and the location of the goal object, the learned policy can provide multiple non-prehensile manipulation options to the human operator. We carefully design a reward function for the rearranging task, and the policy is trained in a simulated environment. The trained policy is then transferred to the real world and evaluated in a number of real-world experiments with varying numbers of objects, where we show that the proposed method outperforms manual keyboard control in terms of time to grasp.
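    The abstract mentions a carefully designed reward for the rearranging task; the sketch below is only a guess at the general shape such a reward could take, with invented terms and weights, and is not the paper's actual reward function.

```python
# Hypothetical rearrangement reward: bonus when the goal becomes directly
# graspable, small credit for clearing clutter, penalties for long pushes
# and collisions. All terms and weights are illustrative assumptions.
def rearrange_reward(goal_graspable, clutter_cleared, push_length, collision):
    reward = 0.0
    reward += 10.0 if goal_graspable else 0.0   # success bonus
    reward += 1.0 * clutter_cleared             # objects moved out of the grasp corridor
    reward -= 0.1 * push_length                 # prefer short, efficient pushes
    reward -= 5.0 if collision else 0.0         # discourage knocking objects over
    return reward

print(rearrange_reward(goal_graspable=True, clutter_cleared=2, push_length=0.3, collision=False))
```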

    Human behavior analysis by means of multimodal context mining

    Get PDF
    There is sufficient evidence of the impact that negative lifestyle choices have on people’s health and wellness. Changing unhealthy behaviours requires raising people’s self-awareness and providing healthcare experts with a thorough and continuous description of the user’s conduct. Several monitoring techniques have been proposed in the past to track users’ behaviour; however, these approaches are either subjective and prone to misreporting, such as questionnaires, or focus only on a specific component of context, such as activity counters. This work presents an innovative multimodal context mining framework to inspect and infer human behaviour in a more holistic fashion. The proposed approach extends beyond the state of the art in that it does not explore a single type of context but combines diverse levels of context in an integral manner. Specifically, low-level contexts, including activities, emotions, and locations, are identified from heterogeneous sensory data through machine learning techniques. Low-level contexts are then combined using ontological mechanisms to derive a more abstract representation of the user’s context, here referred to as high-level context. An initial implementation of the proposed framework supporting real-time context identification is also presented. The developed system is evaluated in various realistic scenarios using a novel open multimodal context dataset and data on the go, demonstrating prominent context-aware capabilities at both the low and high levels.
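    As a toy stand-in for the framework's high-level context derivation (the paper uses ontological reasoning over low-level contexts), the sketch below maps activity, emotion, and location triples to a high-level label with a simple rule table; the rules and labels are a hypothetical simplification.

```python
# Hypothetical mapping from low-level contexts (activity, emotion, location)
# to a high-level context label, standing in for ontological reasoning.
def high_level_context(activity, emotion, location):
    rules = {
        ("running", "neutral", "park"): "exercising outdoors",
        ("sitting", "bored", "office"): "sedentary work",
        ("walking", "happy", "restaurant"): "social meal",
    }
    return rules.get((activity, emotion, location), "unclassified")

print(high_level_context("sitting", "bored", "office"))  # sedentary work
```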