    Using an HSV-based approach for detecting and grasping an object by the industrial manipulator system

    In the era of industrialization, robots are gradually replacing workers in some production stages, and the incorporation of image-processing techniques into robot control is an irreversible trend. In recent years, vision-based techniques have achieved significant milestones. However, most of these techniques require complex setups, specialized cameras, and skilled operators, and impose a heavy computational burden. This paper presents an efficient vision-based solution for object detection and grasping in indoor environments. The framework of the system, encompassing geometrical constraints, robot control theory, and the hardware platform, is described. The proposed method, covering calibration through visual estimation, is detailed for handling the detection and grasping task. The efficiency, feasibility, and applicability of our approach are evident from the results of both theoretical simulations and experiments.
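
    The paper does not publish its implementation, but the HSV-thresholding detection step its title names is a standard technique. A minimal OpenCV sketch follows; the color bounds and the camera index are illustrative assumptions, not values from the paper:

import cv2
import numpy as np

# Illustrative HSV bounds for a reddish target; a real system calibrates
# these per object and lighting condition.
LOWER_HSV = np.array([0, 120, 70])
UPPER_HSV = np.array([10, 255, 255])

def detect_object(frame_bgr):
    """Return the pixel centroid of the largest color-matched blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # Remove small speckles before extracting contours.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
ok, frame = cap.read()
if ok:
    print(detect_object(frame))
cap.release()

    The detected pixel centroid would then be mapped into the robot's base frame through the calibration step the abstract mentions before a grasp is planned.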

    Using tactile sensors and IMUs with an unsupervised feature-learning approach for robotic grasp stability assessment

    In a world where industry needs ever more complex automated machines, robot grasping is still far from human capabilities. Despite recent innovations in computer vision and grasp planning, it remains a challenge for a robot to pick up never-before-seen objects. Researchers are trying to combine vision with tactile sensing to improve the performance of modern intelligent machines. In this thesis, we present a novel way to improve robotic grasping using tactile sensors and an unsupervised feature-learning algorithm. Using a test bench and sensors at the Control and Robotics (CoRo) laboratory of the ÉTS, we developed and tested a series of classifiers to predict the outcome of a robotic grasp. Our method improves upon the results obtained with hand-crafted features. We collected data from 100 different everyday objects, executing 10 grasping attempts per object, for a total of 1,000 grasping attempts. The best system we developed recognized grasp failures 84.23% of the time.
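
    The thesis pairs unsupervised feature learning on raw tactile/IMU signals with supervised classifiers of grasp outcome. The following scikit-learn sketch shows the general shape of such a pipeline; the synthetic data, the PCA feature learner (standing in for the thesis's learned features), and all dimensions are assumptions for illustration:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-in for real tactile/IMU recordings: 1000 grasp attempts,
# each flattened into a 256-dimensional raw sensor window.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))
y = rng.integers(0, 2, size=1000)  # 1 = stable grasp, 0 = failure

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Unsupervised feature extraction followed by a supervised outcome classifier.
model = make_pipeline(PCA(n_components=32), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")

    Swapping PCA for a sparse autoencoder or dictionary learner would bring the sketch closer to the unsupervised feature-learning approach the thesis describes.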

    Program and Proceedings: The Nebraska Academy of Sciences 1880-2023. 142nd Anniversary Year. One Hundred Thirty-Third Annual Meeting, April 21, 2023. Hybrid Meeting: Nebraska Wesleyan University & Online, Lincoln, Nebraska

    AERONAUTICS & SPACE SCIENCE. Chairpersons: Dr. Scott Tarry & Michaela Lucas
    HUMANS PAST AND PRESENT. Chairpersons: Phil R. Geib & Allegra Ward
    APPLIED SCIENCE & TECHNOLOGY. Chairperson: Mary Ettel
    BIOLOGY. Chairpersons: Lauren Gillespie, Steve Heinisch, and Paul Davis
    BIOMEDICAL SCIENCES. Chairpersons: Annemarie Shibata, Kimberly Carlson, Joseph Dolence, Alexis Hobbs, James Fletcher, Paul Denton
    CHEMISTRY. Chairperson: Nathanael Fackler
    EARTH SCIENCES. Chairpersons: Irina Filina, Jon Schueth, Ross Dixon, Michael Leite
    ENVIRONMENTAL SCIENCE. Chairperson: Mark Hammer
    PHYSICS. Chairperson: Dr. Adam Davis
    SCIENCE EDUCATION. Chairperson: Christine Gustafson
    2023 Maiben Lecturer: Jason Bartz
    2023 FRIEND OF SCIENCE AWARD: Ray Ward and Jim Lewi

    Multi-Robot Systems: Challenges, Trends and Applications

    This book is a printed edition of the Special Issue entitled “Multi-Robot Systems: Challenges, Trends, and Applications” that was published in Applied Sciences. The Special Issue collected seventeen high-quality papers that discuss the main challenges of multi-robot systems, present trends for addressing these issues, and report various relevant applications. Among the topics addressed by these papers are robot swarms, mission planning, robot teaming, machine learning, immersive technologies, search and rescue, and social robotics.

    Real-time 3D hand reconstruction in challenging scenes from a single color or depth camera

    Hands are one of the main enabling factors for performing complex tasks, and humans naturally use them to interact with their environment. Reconstructing and digitizing 3D hand motion opens up many possibilities for important applications. Hand gestures can be used directly for human-computer interaction, which is especially relevant for controlling augmented or virtual reality (AR/VR) devices, where immersion is of utmost importance. In addition, 3D hand motion capture is a precondition for automatic sign-language translation, activity recognition, and teaching robots. Different approaches to 3D hand motion capture have been actively researched in the past. While accurate, gloves and markers are intrusive and uncomfortable to wear, so markerless, camera-based hand reconstruction is desirable. Multi-camera setups provide rich input, but they are hard to calibrate and lack the flexibility for mobile use cases. Most recent methods therefore use a single color or depth camera, which makes the problem harder because the input is more ambiguous. For interaction purposes, users need continuous control and immediate feedback, which means the algorithms have to run in real time and be robust in uncontrolled scenes. These requirements, achieving 3D hand reconstruction in real time from a single camera in general scenes, make the problem significantly more challenging. While recent research has shown promising results, current state-of-the-art methods still have strong limitations. Most approaches track the motion of only a single hand in isolation and do not take background clutter or interactions with arbitrary objects or the other hand into account. The few methods that can handle more general and natural scenarios run far from real time or use complex multi-camera setups, which makes them unusable for many of the aforementioned applications. This thesis pushes the state of the art for real-time 3D hand tracking and reconstruction in general scenes from a single RGB or depth camera. The presented approaches explore novel combinations of generative hand models, which have been used successfully in the computer vision and graphics communities for decades, with powerful cutting-edge machine learning techniques that have recently emerged with the advent of deep learning. In particular, this thesis proposes a novel method for hand tracking in the presence of strong occlusions and clutter, the first method for full global 3D hand tracking from in-the-wild RGB video, and a method for simultaneous pose and dense shape reconstruction of two interacting hands that, for the first time, combines a set of desirable properties previously unseen in the literature.
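
    The thesis combines generative hand models with learned predictors; the generative component is typically fit by minimizing the discrepancy between model-predicted and observed keypoints. The toy SciPy sketch below illustrates that model-fitting idea on a three-joint "finger"; the model, its parameters, and the synthetic observation are purely illustrative and not the thesis's actual formulation:

import numpy as np
from scipy.optimize import least_squares

def model_keypoints(theta):
    """Toy generative model: keypoints as a function of pose parameters
    (global translation tx, ty plus one articulation angle)."""
    tx, ty, bend = theta
    base = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # a 3-joint "finger"
    rot = np.array([[np.cos(bend), -np.sin(bend)],
                    [np.sin(bend),  np.cos(bend)]])
    base[2] = base[1] + rot @ (base[2] - base[1])  # bend the distal segment
    return base + np.array([tx, ty])

def residuals(theta, observed):
    # Difference between model prediction and observed keypoints.
    return (model_keypoints(theta) - observed).ravel()

# Synthetic "observation" generated from a known pose, then recovered.
observed = model_keypoints(np.array([0.5, -0.3, 0.4]))
fit = least_squares(residuals, x0=np.zeros(3), args=(observed,))
print("recovered pose parameters:", fit.x)

    Real systems fit dozens of pose and shape parameters of a full hand model (e.g., MANO) against keypoints predicted by a neural network, under the real-time constraints discussed above.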

    Serious Games and Mixed Reality Applications for Healthcare

    Virtual reality (VR) and augmented reality (AR) have long histories in the healthcare sector, offering the opportunity to develop a wide range of tools and applications aimed at improving the quality of care and the efficiency of services for professionals and patients alike. The best-known examples of VR-AR applications in the healthcare domain include surgical planning and medical training by means of simulation technologies. Techniques used in surgical simulation have also been applied to cognitive and motor rehabilitation, pain management, and patient and professional education. Serious games are games whose primary goal is not entertainment but a serious purpose, ranging from the acquisition of knowledge to interactive training. These games are attracting growing attention in healthcare because of their many benefits: motivation, interactivity, adaptation to user competence level, flexibility in time, repeatability, and continuous feedback. Recently, healthcare has also become one of the biggest adopters of mixed reality (MR), which merges real and virtual content to generate novel environments in which physical and digital objects not only coexist but can also interact in real time, encompassing both VR and AR applications. This Special Issue gathers original scientific contributions exploring opportunities and addressing challenges in both the theoretical and applied aspects of VR-AR and MR applications in healthcare.

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual representations and analyses. However, these are not always accessible to people with disabilities or to older people with low vision, cognitive impairments, or literacy issues. Because of trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues such as graphs and text, and these difficulties may hinder critical data understanding. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but the limitations of existing data representations must be understood first. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace, or reinforce visual cues. In this paper, we outline the challenges in existing data representations and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, and heart rate. With innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.
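
    As one concrete illustration of the auditory cues the paper argues for, a health reading such as heart rate can be sonified by mapping values onto pitch. The sketch below writes a short WAV file with Python's standard library and NumPy; the mapping range and the sample readings are hypothetical choices, not taken from the paper:

import numpy as np
import wave

SAMPLE_RATE = 44100

def tone(freq_hz, seconds=0.3):
    """Generate a 16-bit sine tone at the given frequency."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    return (0.5 * np.sin(2 * np.pi * freq_hz * t) * 32767).astype(np.int16)

def heart_rate_to_pitch(bpm, lo=40, hi=180, f_lo=220.0, f_hi=880.0):
    """Map a heart-rate reading linearly onto a pitch range (illustrative)."""
    frac = np.clip((bpm - lo) / (hi - lo), 0.0, 1.0)
    return f_lo + frac * (f_hi - f_lo)

readings = [62, 75, 110, 140]  # hypothetical heart-rate samples (bpm)
audio = np.concatenate([tone(heart_rate_to_pitch(bpm)) for bpm in readings])

with wave.open("hr_sonification.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(audio.tobytes())

    A haptic channel could reuse the same mapping, driving vibration intensity instead of pitch.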