
    Mastery level, attitude and interest of Kolej Kemahiran Tinggi MARA students towards the English language subject

    Get PDF
    This study was conducted to identify the mastery level, attitude and interest of Kolej Kemahiran Tinggi Mara Sri Gading students towards English. The study is descriptive in form, better known as a survey method. A total of 325 Diploma in Construction Technology students from Kolej Kemahiran Tinggi Mara in the Batu Pahat district were selected as the sample. Data obtained through a questionnaire instrument were analysed to obtain means, standard deviations, and Pearson correlation coefficients to examine relationships in the findings, while frequencies and percentages were used to measure student mastery. The findings show that the students' mastery of English is at a moderate level, and that the main factor influencing that mastery is interest, followed by attitude. The Pearson correlation results also show a significant relationship between attitude and English mastery, and between interest and English mastery: the more positive the students' attitude towards and interest in the teaching and learning of English, the higher their achievement. The results of this study are expected to help students improve their mastery of English by cultivating a positive attitude and strengthening their interest in the language, and to serve as a guide for the parties involved in future research.

    Encoderless position estimation and error correction techniques for miniature mobile robots

    Get PDF
    This paper presents an encoderless position estimation technique for miniature-sized mobile robots. Odometry techniques, which rely on hardware components, are commonly used for calculating the geometric location of mobile robots; the robot must therefore be equipped with an appropriate sensor to measure its motion. However, due to the hardware limitations of some robots, employing extra hardware is impossible. Moreover, in swarm robotics research, which uses a large number of mobile robots, equipping every robot with motion sensors can be costly. In this study, the trajectory of the robot is divided into several small displacements over short spans of time, so the position of the robot is calculated within each short period using the speed equations of the robot's wheels. In addition, an error correction function is proposed that estimates motion errors using a current monitoring technique. The experiments illustrate the feasibility of the proposed position estimation and error correction techniques for miniature-sized mobile robots without requiring an additional sensor.
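    The displacement-integration idea above can be sketched as plain dead reckoning for a differential-drive robot: over each short span of time, the pose is advanced using only the wheel speed equations, with no encoder feedback. A minimal illustrative sketch follows (function and parameter names are ours, not the paper's; the paper's current-monitoring error correction is omitted).

        import math

        def integrate_pose(x, y, theta, v_left, v_right, wheel_base, dt):
            """Advance a differential-drive pose over one short span dt,
            using only the wheel speed equations (no encoder feedback)."""
            v = (v_left + v_right) / 2.0             # linear speed of the robot centre
            omega = (v_right - v_left) / wheel_base  # angular speed
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += omega * dt
            return x, y, theta

        # Sum many small displacements over short spans of time.
        x, y, theta = 0.0, 0.0, 0.0
        for _ in range(100):                         # 100 steps of 10 ms
            x, y, theta = integrate_pose(x, y, theta,
                                         v_left=0.10, v_right=0.12,
                                         wheel_base=0.05, dt=0.01)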

    Independent Motion Detection with Event-driven Cameras

    Full text link
    Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast and low-power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, cameras mounted on a moving robot are typically non-stationary, and the same tracking problem becomes confounded by background clutter events due to the robot's ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ~90% and show that the method is robust to changes in the speed of both the head and the target.
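    The core test the paper describes, comparing measured corner velocities with those predicted from the robot's own joint velocities, can be sketched as a simple residual threshold. The predictor, weights, and threshold below are illustrative assumptions, not the paper's learned model.

        import numpy as np

        def is_independently_moving(corner_velocity, joint_velocities,
                                    predict_ego_flow, threshold=0.5):
            """Flag a tracked corner as independently moving when its measured
            image velocity deviates from the velocity predicted from the
            robot's own joint motion. predict_ego_flow stands in for the
            statistics learned offline with no moving objects present."""
            predicted = predict_ego_flow(joint_velocities)       # expected (vx, vy)
            residual = np.linalg.norm(np.asarray(corner_velocity) - predicted)
            return residual > threshold

        # Toy usage with an assumed linear ego-motion model.
        W = np.array([[0.8, 0.1], [0.0, 0.9]])                   # illustrative weights
        model = lambda q: W @ np.asarray(q)
        print(is_independently_moving([2.0, 0.1], [0.5, 0.2], model))  # True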

    Hybrid Position and Orientation Tracking for a Passive Rehabilitation Table-Top Robot

    Get PDF
    This paper presents a real-time hybrid 2D position and orientation tracking system developed for an upper limb rehabilitation robot. Designed to work on a table-top, the robot is to enable home-based upper-limb rehabilitative exercise for stroke patients. Estimates of the robot's position are computed by fusing data from two tracking systems, each utilizing a different sensor type: laser optical sensors and a webcam. Two laser optical sensors are mounted on the underside of the robot and track its relative motion with respect to the surface on which it is placed. The webcam is positioned directly above the workspace, mounted on a fixed stand, and tracks the robot's position with respect to a fixed coordinate system. The optical sensors sample the position data at a higher frequency than the webcam, and a position and orientation fusion scheme is proposed to fuse the data from the two tracking systems. The proposed fusion scheme is validated through an experimental set-up in which the rehabilitation robot is moved by a humanoid robotic arm replicating previously recorded movements of a stroke patient. The results show that the presented hybrid tracking system can track position and orientation with greater accuracy than the webcam or optical sensors alone, and confirm that the developed system is capable of tracking recovery trends during rehabilitation therapy.
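    One way to picture the multi-rate fusion described above is a complementary-filter style update: high-rate relative displacements from the optical sensors are accumulated, and each low-rate webcam fix pulls the estimate back toward the absolute pose. This is a minimal sketch under that assumption, not the paper's actual fusion scheme; the gain value is illustrative.

        import numpy as np

        class HybridTracker:
            """Complementary-filter sketch: high-rate relative displacements
            from the optical sensors, low-rate absolute fixes from the webcam.
            The gain value is an illustrative assumption."""

            def __init__(self, gain=0.3):
                self.pose = np.zeros(3)              # x, y, heading
                self.gain = gain

            def predict(self, delta):
                """Optical-sensor step: accumulate relative displacement."""
                self.pose += np.asarray(delta, dtype=float)

            def correct(self, webcam_pose):
                """Webcam step: pull the drifting estimate toward the
                absolute, lower-frequency fix."""
                error = np.asarray(webcam_pose, dtype=float) - self.pose
                error[2] = (error[2] + np.pi) % (2 * np.pi) - np.pi  # wrap heading
                self.pose += self.gain * error

        tracker = HybridTracker()
        for _ in range(10):                          # ten fast optical updates
            tracker.predict([0.001, 0.0005, 0.002])
        tracker.correct([0.012, 0.004, 0.021])       # one slow webcam fix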

    Visually-guided walking reference modification for humanoid robots

    Get PDF
    Humanoid robots are expected to assist humans in the future. As for any mobile robot, autonomy is an invaluable feature for a humanoid interacting with its environment. Autonomy, along with components from artificial intelligence, requires information from sensors. Vision sensors are widely accepted as the richest source of information about the surroundings of a robot. Visual information can be exploited in tasks ranging from object recognition, localization and manipulation to scene interpretation, gesture identification and self-localization. Any autonomous action of a humanoid trying to accomplish a high-level goal requires the robot to move between arbitrary waypoints and inevitably relies on its self-localization abilities. Because disturbances accumulate over the path, this can only be achieved by gathering feedback information from the environment. This thesis proposes a path planning and correction method for bipedal walkers based on visual odometry. A stereo camera pair is used to find distinguishable 3D scene points and track them over time, in order to estimate the six degrees-of-freedom position and orientation of the robot. The algorithm is developed and assessed on a benchmarking stereo video sequence taken from a wheeled robot, and then tested in experiments with the humanoid robot SURALP (Sabanci University Robotic ReseArch Laboratory Platform).
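    A single step of the stereo visual odometry described above can be sketched with OpenCV: 3D scene points triangulated from an earlier stereo frame are matched to 2D features in the current image, and the six degrees-of-freedom pose is recovered by PnP with RANSAC. This is a generic sketch of that step, not the thesis implementation.

        import numpy as np
        import cv2

        def estimate_pose(points_3d, points_2d, K):
            """Recover the camera pose from 3D scene points (triangulated from
            an earlier stereo frame) matched to their 2D projections in the
            current image, via PnP with RANSAC."""
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                np.asarray(points_3d, dtype=np.float32),
                np.asarray(points_2d, dtype=np.float32),
                K.astype(np.float64), None)
            if not ok:
                raise RuntimeError("pose estimation failed")
            R, _ = cv2.Rodrigues(rvec)               # rotation vector -> 3x3 matrix
            return R, tvec

        # Synthetic check: project points with an identity pose, then recover it.
        K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
        pts3d = np.random.rand(20, 3) * 2 + [0.0, 0.0, 4.0]
        proj = (K @ pts3d.T).T
        pts2d = proj[:, :2] / proj[:, 2:]
        R, t = estimate_pose(pts3d, pts2d, K)        # R ~ identity, t ~ zero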

    A Comprehensive Review on Autonomous Navigation

    Full text link
    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital for keeping track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for this survey is twofold. First, the autonomous navigation field evolves quickly, so writing survey papers regularly is crucial to keep the research community aware of the current status of the field. Second, deep learning methods have revolutionized many fields, including autonomous navigation; it is therefore necessary to give an appropriate treatment of the role of deep learning in autonomous navigation, which this paper covers. Future work and research gaps are discussed as well.

    Development of an autonomous mobile robot with planning and location in a structured environment

    Get PDF
    Double-degree Master's with UTFPR - Universidade Tecnológica Federal do Paraná. With the advance of technology, mobile robots have been increasingly applied in industry, performing repetitive work with high performance and in environments that pose risks to human health. The present work plans and develops a mobile robot platform for the micromouse competition. The micromouse is a small autonomous mobile robot that, when placed in an unknown labyrinth, is able to map it, search for the best path between the starting point and the goal, and travel that path in the shortest possible time. To accomplish these tasks, the robot must be able to self-localize, map the maze as it traverses it, and plan paths based on the map obtained. The developed self-localization method is based on odometry, the laser sensors present on the robot, and prior knowledge of the start point and the configuration of the environment. Several methodologies for locomotion in an unknown environment and for route planning are analyzed in order to obtain the combination with the best performance. To verify the results, the work is developed in a real environment, in 3D simulation, and with hardware-in-the-loop capability. Labyrinths from previous competitions are used as the basis for comparing methodologies and validating results. Finally, the algorithm capable of fulfilling all the requirements of the micromouse competition is presented together with the results of its evaluation run.
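    Planning the best path between the start point and the goal in a micromouse maze is classically done with a flood-fill (breadth-first) pass over the cells. The sketch below illustrates that widely used idea, not necessarily the exact planner the thesis selects; the wall encoding is an assumption for illustration.

        from collections import deque

        def flood_fill(walls, goal, size=16):
            """Breadth-first 'flood fill': label each cell of a size x size
            maze with its step distance to the goal; the robot then always
            moves to a neighbour with a smaller label. walls[(cell, nb)] is
            True when a wall separates the two cells (assumed encoding)."""
            dist = {goal: 0}
            queue = deque([goal])
            while queue:
                cell = queue.popleft()
                x, y = cell
                for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nb[0] < size and 0 <= nb[1] < size
                            and nb not in dist
                            and not walls.get((cell, nb), False)):
                        dist[nb] = dist[cell] + 1
                        queue.append(nb)
            return dist

        distances = flood_fill(walls={}, goal=(7, 7))   # wall-free maze example
        print(distances[(0, 0)])                        # 14 steps to the goal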

    An Approach for Multi-Robot Opportunistic Coexistence in Shared Space

    Get PDF
    This thesis considers a situation in which multiple robots operate in the same environment towards the achievement of different tasks. In this situation, not only the tasks but also the robots themselves are likely to be heterogeneous, i.e., different from each other in their morphology, dynamics, sensors, capabilities, etc. As an example, think about a "smart hotel": small wheeled robots are likely to be devoted to cleaning floors, whereas a humanoid robot may be devoted to social interaction, e.g., welcoming guests and providing relevant information to them upon request. Under these conditions, robots are required not only to co-exist, but also to coordinate their activity if we want them to exhibit a coherent and effective behavior: this may range from mutual avoidance to prevent collisions, to more explicit coordinated behavior, e.g., task assignment or cooperative localization. The issues above have been deeply investigated in the literature. Among the topics that may play a crucial role in designing a successful system, this thesis focuses on the following ones: (i) An integrated approach for path following and obstacle avoidance is applied to unicycle-type robots, by extending an existing algorithm [1], initially developed for the single-robot case, to the multi-robot domain. The approach is based on the definition of the path to be followed as a curve f(x,y) in space, while obstacles are modeled as Gaussian functions that modify the original function, generating a resulting safe path. The attractiveness of this methodology, which makes it look very simple, is that it neither requires the computation of a projection of the robot position onto the path, nor needs a moving virtual target to be tracked. The performance of the proposed approach is analyzed by means of a series of experiments performed in dynamic environments with unicycle-type robots, determining the robot position both through odometry and in a motion-capture environment. (ii) We investigate the problem of multi-robot cooperative localization in dynamic environments. Specifically, we propose an approach where wheeled robots are localized using the monocular camera embedded in the head of a Pepper humanoid robot, with the aim of minimizing deviations from their paths and avoiding each other during navigation tasks. Position estimation requires obtaining a linear relationship between points in the image and points in the world frame: to this end, an Inverse Perspective Mapping (IPM) approach has been adopted to transform the acquired image into a bird's-eye view of the environment. The scenario is made more complex by the fact that Pepper's head moves dynamically while tracking the wheeled robots, which requires a different IPM transformation matrix to be considered whenever the attitude (pitch and yaw) of the camera changes. Finally, the IPM position estimate returned by Pepper is merged with the estimate returned by the odometry of the wheeled robots through an Extended Kalman Filter. Experiments are shown with multiple robots moving along different paths in a shared space, avoiding each other without onboard sensors, i.e., relying only on mutual positioning information. Software implementing the theoretical models described above has been developed in ROS and validated in real experiments with two types of robots, namely: (i) a unicycle wheeled Roomba robot (commercially available all over the world), and (ii) the Pepper humanoid robot (commercially available in Japan and as a B2B model in Europe).
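    The Gaussian obstacle model in part (i) can be illustrated directly: the nominal path is the zero level set of a function f(x, y), and each obstacle adds a Gaussian bump that deforms that level set into a safe path. A minimal sketch, with illustrative amplitude and width values (the thesis and [1] define the actual tuning):

        import numpy as np

        def safe_path_value(x, y, obstacles, amplitude=1.0, sigma=0.3):
            """Evaluate the obstacle-modified path function. The nominal path
            is the zero level set of f(x, y) (here the straight line y = 0);
            each obstacle adds a Gaussian bump that locally deforms that level
            set, producing a safe path around it."""
            f = y                                    # nominal path: the x-axis
            for ox, oy in obstacles:
                f += amplitude * np.exp(-((x - ox) ** 2 + (y - oy) ** 2)
                                        / (2 * sigma ** 2))
            return f  # the controller steers so as to keep this value at zero

        # The deformed path no longer passes through an obstacle on the line.
        print(safe_path_value(1.0, 0.0, obstacles=[(1.0, 0.0)]))   # 1.0, not 0.0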

    Contemporary Robotics

    Get PDF
    This book is a collection of 18 chapters written by internationally recognized experts and well-known professionals of the field. The chapters contribute to diverse facets of contemporary robotics and autonomous systems, and the volume is organized in four thematic parts according to its main subjects, covering recent advances in contemporary robotics. The first part of the book is devoted to theoretical issues. This includes the development of algorithms for automatic trajectory generation using a redundancy resolution scheme, intelligent algorithms for robotic grasping, a modelling approach for reactive mode handling in flexible manufacturing, and the design of an advanced controller for robot manipulators. The second part of the book deals with different aspects of robot calibration and sensing. This includes geometric and threshold calibration of a multiple robotic line-vision system, robot-based inline 2D/3D quality monitoring using imaging and laser triangulation, and a study on prospective polymer composite materials for flexible tactile sensors. The third part addresses issues of mobile robots and multi-agent systems, including SLAM of mobile robots based on fusion of odometry and visual data, configuration of a localization system by a team of mobile robots, development of a generic real-time motion controller for differential mobile robots, control of fuel cells of mobile robots, modelling of omni-directional wheeled robots, building of a hunter-hybrid tracking environment, as well as the design of cooperative control in a distributed population-based multi-agent approach. The fourth part presents recent approaches and results in humanoid and bio-inspired robotics. It deals with the design of adaptive control of anthropomorphic biped gait, dynamics-based simulation of humanoid robot walking, a controller for the perceptual motor control dynamics of humans, and a biomimetic approach to controlling a mechatronic structure using smart materials.

    Vision based localization: from humanoid robots to visually impaired people

    Get PDF
    3D applications have become an increasingly popular topic in robotics, computer vision and augmented reality. By means of cameras and computer vision techniques, it is possible to obtain accurate 3D models of large-scale environments such as cities. In addition, cameras are low-cost, non-intrusive sensors compared to alternatives such as laser scanners, and they offer rich information about the environment. One application of great interest is vision-based localization in a prior 3D map. Robots need to perform tasks in the environment autonomously, and for this purpose it is very important to know precisely the location of the robot in the map. In the same way, providing accurate information about the location and spatial orientation of the user in a large-scale environment can benefit those who suffer from visual impairment. Safe and autonomous navigation in unknown or known environments can be a great challenge for those who are blind or visually impaired. Most commercial solutions for visually impaired localization and navigation assistance are based on the satellite Global Positioning System (GPS). However, these solutions are not suitable enough for the visually impaired community in urban environments: the errors are on the order of several meters, and there are other problems such as GPS signal loss or line-of-sight restrictions. In addition, GPS does not work if an insufficient number of satellites are directly visible, and therefore cannot be used indoors. Thus, it is important to do further research on new, more robust and accurate localization systems. In this thesis we propose several algorithms to obtain accurate real-time vision-based localization from a prior 3D map. For that purpose, it is necessary to compute a 3D map of the environment beforehand; for computing that map, we employ well-known techniques such as Simultaneous Localization and Mapping (SLAM) or Structure from Motion (SfM). In this thesis, we implement a visual SLAM system using a stereo camera as the only sensor, which allows us to obtain accurate 3D reconstructions of the environment. The proposed SLAM system is also capable of detecting moving objects, especially at close range to the camera (up to approximately 5 meters), thanks to a moving objects detection module. This is possible thanks to a dense scene flow representation of the environment, which yields the 3D motion of the world points. The moving objects detection module proves very effective in highly crowded and dynamic environments with a huge number of dynamic objects such as pedestrians: by means of this module we avoid adding erroneous 3D points into the SLAM process, yielding much better and more consistent 3D reconstruction results. To the best of our knowledge, this is the first time that dense scene flow and the derived detection of moving objects have been applied in the context of visual SLAM for challenging crowded and dynamic environments such as the ones presented in this thesis. In SLAM and vision-based localization approaches, 3D map points are usually described by means of appearance descriptors, which allow the data association between 3D map elements and perceived 2D image features. In this thesis we have investigated a novel family of appearance descriptors known as Gauge-Speeded Up Robust Features (G-SURF). These descriptors are based on the use of gauge coordinates: every pixel in the image is fixed separately in its own local coordinate frame, defined by the local structure itself and consisting of the gradient vector and its perpendicular direction. We have carried out an extensive experimental evaluation on applications such as image matching, visual object categorization and 3D SfM, showing the usefulness and improved results of G-SURF descriptors against other state-of-the-art descriptors such as the Scale Invariant Feature Transform (SIFT) or SURF. In vision-based localization applications, one of the most computationally expensive steps is the data association between a large map of 3D points and perceived 2D features in the image. Traditional approaches often rely on purely appearance information for solving the data association step; these algorithms can have a high computational demand, and in environments with highly repetitive textures, such as cities, the data association can lead to erroneous results due to the ambiguities introduced by visually similar features. In this thesis we have developed an algorithm for predicting the visibility of 3D points by means of a memory-based learning approach from a prior 3D reconstruction. Thanks to this learning approach, we can speed up the data association step by predicting the visible 3D points given a prior camera pose. We have implemented and evaluated visual SLAM and vision-based localization algorithms for two applications of great interest: humanoid robots and visually impaired people. Regarding humanoid robots, a monocular vision-based localization algorithm with visibility prediction has been evaluated under different scenarios and types of sequences: square and circular trajectories, sequences with moving objects, changes in lighting, etc. The localization and mapping error has been compared against a precise motion capture system, yielding errors on the order of a few centimeters. Furthermore, we compared our vision-based localization system with the Parallel Tracking and Mapping (PTAM) approach, obtaining much better results with our localization algorithm. With respect to the vision-based localization approach for the visually impaired, we have evaluated the system in indoor and cluttered office-like environments. In addition, we have evaluated the visual SLAM algorithm with moving objects detection in tests with real visually impaired users in very dynamic environments such as inside the Atocha railway station (Madrid, Spain) and in the city center of Alcalá de Henares (Madrid, Spain). The obtained results highlight the potential benefits of our approach for the localization of the visually impaired in large and cluttered environments.
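    The gauge-coordinate construction behind G-SURF can be illustrated in a few lines: at every pixel, the unit gradient direction w and its perpendicular v form the local frame in which derivatives become rotation-invariant. A plain finite-difference sketch (the actual descriptors use Gaussian scale-space derivatives):

        import numpy as np

        def gauge_coordinates(image):
            """Compute the local gauge frame at every pixel: w points along
            the image gradient, v is its perpendicular. Derivatives expressed
            in this frame are invariant to rotation, the property G-SURF
            builds on. Plain finite differences, no Gaussian smoothing."""
            Ly, Lx = np.gradient(image.astype(float))
            norm = np.sqrt(Lx ** 2 + Ly ** 2) + 1e-12         # avoid divide-by-zero
            w = np.stack([Lx / norm, Ly / norm], axis=-1)     # gradient direction
            v = np.stack([-w[..., 1], w[..., 0]], axis=-1)    # perpendicular
            return v, w

        img = np.tile(np.arange(8.0), (8, 1))    # horizontal intensity ramp
        v, w = gauge_coordinates(img)
        print(w[4, 4])                           # ~[1, 0]: gradient along x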