
    DroidStorm: Development of a Bluetooth based mobile application for autonomous systems

    An Android application for controlling LEGO MINDSTORMS NXT robots over Bluetooth, capable of creating a collaborative robotic system in which two robots work together to achieve a given objective. Tormo Franco, TI. (2011). DroidStorm: Development of a Bluetooth based mobile application for autonomous systems. http://hdl.handle.net/10251/15822
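The core of such an app is sending NXT "direct command" frames over an RFCOMM/SPP Bluetooth socket. A minimal sketch of one such frame, with the field layout following the published NXT Bluetooth direct-command format (the port and power values are illustrative, and the Android socket plumbing is omitted):

```python
import struct

def set_output_state(port: int, power: int) -> bytes:
    """Build a SETOUTPUTSTATE direct command (no reply requested)."""
    payload = struct.pack(
        "<BBBbBBbBI",
        0x80,   # command type: direct command, no response
        0x04,   # SETOUTPUTSTATE
        port,   # output port 0-2
        power,  # motor power -100..100
        0x01,   # mode: MOTORON
        0x00,   # regulation mode: idle
        0,      # turn ratio
        0x20,   # run state: RUNNING
        0,      # tacho limit: 0 = run forever
    )
    # Bluetooth frames carry a little-endian 16-bit payload-length prefix.
    return struct.pack("<H", len(payload)) + payload
```

On Android, the resulting bytes would be written to a `BluetoothSocket` opened with the standard SPP UUID.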

    Workshop sensing a changing world: proceedings of the workshop, November 19-21, 2008


    Teaching humanoid robotics by means of human teleoperation through RGB-D sensors

    This paper presents a graduate course project on humanoid robotics offered by the University of Padova. The target is to safely lift an object by teleoperating a small humanoid. Students have to map human limbs onto robot joints, guarantee the robot's stability during the motion, and teleoperate the robot to perform the correct movement. We introduce the following innovative aspects with respect to classical robotics classes: i) the use of humanoid robots as teaching tools; ii) the simplification of the stable locomotion problem by exploiting the potential of teleoperation; iii) the adoption of a Project-Based Learning constructivist approach as the teaching methodology. The learning objectives of both the course and the project are introduced and compared with the students' background. The design and constraints students have to deal with are reported, together with the amount of time they and their instructors dedicated to solving the tasks. A set of evaluation results is provided to validate the authors' approach, including the students' personal feedback. A discussion of possible future improvements is reported, hoping to encourage the further spread of educational robotics in schools at all levels.
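One step students face is mapping skeleton keypoints from an RGB-D sensor onto humanoid joint angles. A hedged sketch of that mapping for one shoulder, where the frame convention (x forward, z up) and the two-angle decomposition are illustrative assumptions, not the course's actual kinematic model:

```python
import math

def shoulder_angles(shoulder, elbow):
    """Return (pitch, roll) in radians from two 3-D skeleton keypoints."""
    vx = elbow[0] - shoulder[0]
    vy = elbow[1] - shoulder[1]
    vz = elbow[2] - shoulder[2]
    pitch = math.atan2(-vz, vx)                 # raise the arm forward/up
    roll = math.atan2(vy, math.hypot(vx, vz))   # lift the arm sideways
    return pitch, roll
```

With the arm pointing straight forward both angles are zero; pointing straight down gives a pitch of pi/2 and zero roll.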

    Programming Robots for Activities of Everyday Life

    Text-based programming remains a challenge to novice programmers in all programming domains, including robotics. The use of robots is gaining considerable traction in several domains, since robots can assist humans in repetitive and hazardous tasks. In the near future, robots will be used in tasks of everyday life in homes, hotels, airports, museums, etc. However, robotic missions have so far been either predefined or programmed using low-level APIs, making mission specification task-specific and error-prone. To harness the full potential of robots, it must be possible to define missions for specific application domains as needed. The specification of missions for robotic applications should be performed via easy-to-use, accessible means while remaining accurate and unambiguous. Simplicity and flexibility in programming such robots are important, since end-users come from diverse domains, not necessarily with sufficient programming knowledge.

    The main objective of this licentiate thesis is to empirically understand the state of the art in languages and tools used by end-users for specifying robot missions. The findings will form the basis for interventions in developing future languages for end-user robot programming. During the empirical study, DSLs for robot mission specification were analyzed through published literature, their websites, user manuals, and sample missions, and by using the languages to specify missions for supported robots. After extracting data from 30 environments, 133 features were identified. A feature matrix mapping the features to the environments was developed, together with a feature model for robotic mission specification DSLs.

    Our results show that most end-user facing environments exist in the education domain, for teaching novice programmers and STEM subjects. Most of the visual languages are developed using the Blockly and Scratch libraries. End-user domain abstraction needs more work, since most of the visual environments abstract robotics and programming-language concepts but not end-user concepts. In future work, it is important to focus on the development of reusable libraries for end-user concepts, and to explore how end-user facing environments can be adapted for novice programmers to learn general programming skills and robot programming in low-resource settings in developing countries, like Uganda.
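The feature-matrix construction the thesis describes can be sketched as follows; the environment and feature names here are made up for illustration, while the study's real matrix covers 30 environments and 133 features:

```python
# Map each mission-specification environment to the features it supports.
environments = {
    "BlocklyBot": {"visual_blocks", "loops", "events"},
    "ScratchRover": {"visual_blocks", "events", "broadcast"},
    "TextMission": {"loops", "functions"},
}

# Column order: the union of all observed features, sorted for stability.
features = sorted(set().union(*environments.values()))

# Boolean matrix: one row per environment, one column per feature.
matrix = {
    env: [feat in feats for feat in features]
    for env, feats in environments.items()
}
```

From such a matrix, feature counts per environment and per-feature coverage fall out directly, which is what a feature model summarizes.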

    Assessment of Human Performance in Industry 5.0 Research Via Eye-Tracking and Cognitive Biases

    Manufacturing assembly combines previously made components or subassemblies into a final finished product. The assembly process can be manual, hybrid, or fully automated. Human operators involved in assembly use their judgment to perform the process, collaborating with other work agents such as assembly machines, robots, smart technologies, and computer interfaces. The most recent industrial revolution, Industry 5.0, exploits human expertise in collaboration with efficient and accurate machines. Manufacturing facilities that feature Industry 5.0 work settings bring higher expectations, higher accuracy requirements, sustainability solutions, mass customization of products, more human involvement, and digital technologies in smart workstations. Given these features, the cognitive load exerted on human workers in this environment is continuously increasing, leading to the use of cognitive heuristics. Cognitive biases are receiving more attention in the cognitive ergonomics field as a way to understand the operational behavior of workers. Manufacturing facilities can integrate cognitive assistance systems that work in parallel with physical and sensorial assistance systems. Cognitive assistance systems contribute to better work conditions for workers and better overall system performance. This research explores the impact of human thinking style and of using a cognitive assistance system on workers' cognitive load, bias-related human performance, and user satisfaction. It presents the design and experimental implementation of a research framework based on a well-established three-layer model for implementing Industry 5.0 in manufacturing. The framework was designed to apply dual-system theory and cognitive assistance in Assembly 5.0 (A5.0). Two experiments are presented to show the effectiveness of the proposed framework. A cognitive assistance system was designed and compared to a benchmark system from the LEGO® company. Subjective and objective measures were used to assess thinking style, cognitive load, bias-related human performance, and user satisfaction in Assembly 5.0. As Industry 5.0 entails higher expectations, higher accuracy, smart workstations, and higher complexity, cognitive assistance systems can reduce cognitive load while maintaining work efficiency and user satisfaction. This work is therefore important for industry, encouraging the expanded use of cognitive ergonomic tools for the benefit of A5.0 workers.
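A common subjective cognitive-load instrument in this kind of study is the NASA-TLX (assumed here as an example; the abstract does not name the instrument used): six subscale ratings combined with weights obtained from 15 pairwise comparisons.

```python
def nasa_tlx(ratings, weights):
    """Weighted NASA-TLX workload score.

    ratings: dict subscale -> rating on a 0..100 scale
    weights: dict subscale -> times chosen in the 15 pairwise comparisons
    """
    assert sum(weights.values()) == 15, "weights must come from 15 comparisons"
    return sum(ratings[s] * weights[s] for s in ratings) / 15
```

If every subscale is rated 50, the weighted score is 50 regardless of how the 15 comparison choices are distributed.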

    EUD-MARS: End-User Development of Model-Driven Adaptive Robotics Software Systems

    Empowering end-users to program robots is becoming more significant. Introducing software engineering principles into end-user programming could improve the quality of the developed software applications. For example, model-driven development improves technology independence, and adaptive systems act upon changes in their context of use. However, end-users need to apply such principles in a non-daunting manner and without incurring a steep learning curve. This paper presents EUD-MARS, which aims to provide end-users with a simple approach for developing model-driven adaptive robotics software. End-users include people like hobbyists and students who are not professional programmers but are interested in programming robots. EUD-MARS supports robots that are available to end-users, such as hobby drones and educational humanoids. It offers one tool for software developers and another for end-users. We evaluated EUD-MARS from three perspectives. First, we used EUD-MARS to program different types of robots and assessed its visual programming language against existing design principles. Second, we asked software developers to use EUD-MARS to configure robots and obtained their feedback on strengths and points for improvement. Third, we observed how end-users explain and develop EUD-MARS programs, and obtained their feedback mainly on understandability, ease of programming, and desirability. These evaluations yielded positive indications for EUD-MARS.
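The model-driven idea behind this approach can be sketched as a technology-independent mission model plus a per-robot adapter that turns it into concrete commands. Everything below (the model fields, action names, and command strings) is an illustrative assumption, not EUD-MARS's actual API:

```python
# An end-user mission as a robot-independent model.
mission_model = [
    {"action": "takeoff"},
    {"action": "move", "direction": "forward", "meters": 2},
    {"action": "land"},
]

# One adapter per robot type maps model actions to that robot's commands,
# which is what keeps the mission model technology-independent.
drone_adapter = {
    "takeoff": lambda step: "DRONE:TAKEOFF",
    "move": lambda step: f"DRONE:MOVE {step['direction']} {step['meters']}m",
    "land": lambda step: "DRONE:LAND",
}

def compile_mission(model, adapter):
    """Translate a mission model into robot-specific command strings."""
    return [adapter[step["action"]](step) for step in model]
```

Swapping in a humanoid adapter would reuse the same mission model unchanged, which is the adaptivity-and-independence point the paper makes.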

    Immersive Telerobotic Modular Framework using stereoscopic HMDs

    Telepresence is the term for the set of technologies that make people feel or appear as if they were present in a location where they are not physically. Immersive telepresence is the next step: the objective is to make the operator feel immersed in a remote location, engaging as many senses as possible and using new technologies such as stereoscopic vision, panoramic vision, 3D audio, and Head Mounted Displays (HMDs). Telerobotics is a subfield of telepresence that merges it with robotics, giving the operator the ability to control a robot remotely. There is a gap in the current state-of-the-art solutions, since telerobotics has not, in general, benefited from recent developments in control and human-computer interface technology. Besides the lack of studies investing in immersive solutions such as stereoscopic vision, immersive telerobotics can also include more intuitive control capabilities, such as haptic controls or movement- and gesture-based controls, which feel more natural and translate more naturally into the system.

    In this document we propose an alternative to common teleoperation methods, such as those found in search and rescue (SAR) robots. Our main focus is to test the impact that immersive characteristics like stereoscopic vision and HMDs can bring to telepresence robots and telerobotics systems. Since this is a new and growing field, we are also developing a modular framework that can be extended with different robots, in order to provide researchers with an extensible platform for testing different case studies.

    We claim that with immersive solutions the operator of a telerobotics system will have a more intuitive perception of the remote environment and will be less prone to errors induced by a wrong perception of, and interaction with, the robot's teleoperation system. Depth perception and situational awareness are significantly improved when using this immersive solution, and performance, both in task operation time and in the successful identification of objects of interest, is also enhanced.

    We have developed a low-cost, immersive, telerobotic modular platform that can be extended with hardware-based Android applications on the robot side. This solution makes it possible to use the same platform in any type of case study by extending it with different robots. In addition to the modular and extensible framework, the project features three main modules of interaction:

    - a module that supports a head mounted display with head tracking in the operator environment;
    - a stream of stereoscopic vision through Android, with software synchronization;
    - a module that enables the operator to control the robot with positional tracking.

    On the hardware side, not only has the mobile area (e.g. smartphones, tablets, Arduino) expanded greatly in recent years, but we have also seen the rise of low-cost immersive technologies such as the Oculus Rift DK2, Google Cardboard, and Leap Motion. These cost-effective hardware solutions, combined with the advances in video and audio streaming provided by WebRTC technologies, driven mostly by Google, make a real-time software solution feasible. There is currently a lack of real-time software methods for stereoscopy, but the arrival of WebRTC technologies can be a game changer. We take advantage of this recent evolution in hardware and software to keep the platform economical and low-cost, while at the same time raising the bar in terms of performance and technical specifications for this kind of platform.
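The head-tracking module's core mapping can be sketched as clamping the HMD's yaw and pitch into the range of a pan-tilt camera mount on the robot side. The symmetric +/-90 degree servo limits here are an assumption for illustration:

```python
def head_to_pantilt(yaw_deg, pitch_deg, limit=90.0):
    """Clamp HMD yaw/pitch (degrees) into the pan/tilt servo range."""
    clamp = lambda v: max(-limit, min(limit, v))
    return clamp(yaw_deg), clamp(pitch_deg)
```

In a real loop, the clamped pair would be sent each frame over the same channel as the WebRTC signalling, so the remote camera follows the operator's head.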