
    Mobile Robot Navigation System Vision Based Through Indoor Corridors

    Nowadays, industry has been moving toward the fourth industrial revolution, but the surveillance industry still relies on humans for patrol. This puts the industry at risk due to human nature and instinct. Using a mobile robot assisted by a vision sensor to patrol can bring this industry to a new level; however, indoor corridor navigation is a major challenge for this approach. The objective of this project is to develop a navigation system using a vision sensor and navigate the mobile robot in an indoor corridor environment. To perform this operation, a control system that communicates over WLAN is developed to guide the movement of the mobile robot. In addition, a corridor-following system is needed that uses the vision sensor with the Sobel edge detection method and the Hough transform to obtain the vanishing point, helping the robot travel safely along the corridor. Both systems are executed in MATLAB and linked to the mobile robot through the WLAN connection. The system analyses the corridor condition based on different features and decides the direction in which to drive the mobile robot. Images captured by the mobile robot are streamed to MATLAB in real time, and feedback is returned within a short time.
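    The abstract names the pipeline but includes no code; the sketch below is a minimal illustration of the Sobel-plus-Hough vanishing-point idea in Python with OpenCV (the original work runs in MATLAB), with edge thresholds, Hough parameters, and the intersection-averaging step chosen as plausible assumptions rather than the authors' settings.

```python
# Hedged sketch: vanishing-point estimation for corridor following.
# Thresholds and the mean-of-intersections step are assumptions.
import cv2
import numpy as np

def _intersect(s1, s2):
    """Intersection of the infinite lines through two segments, or None."""
    x1, y1, x2, y2 = map(float, s1)
    x3, y3, x4, y4 = map(float, s2)
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-6:
        return None  # (nearly) parallel lines
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return (px, py)

def estimate_vanishing_point(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Sobel gradients -> edge magnitude (stand-in for the Sobel step)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = cv2.convertScaleAbs(np.hypot(gx, gy))
    edges = cv2.threshold(mag, 60, 255, cv2.THRESH_BINARY)[1]
    # Probabilistic Hough transform recovers straight corridor lines
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    # Approximate the vanishing point as the mean of pairwise intersections
    segs = [l[0] for l in lines]
    pts = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            p = _intersect(segs[i], segs[j])
            if p is not None:
                pts.append(p)
    return np.mean(pts, axis=0) if pts else None
```

    The horizontal offset of the estimated vanishing point from the image centre could then be mapped to the steering command sent to the robot over the WLAN link.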

    IVVI 2.0: An intelligent vehicle based on computational perception

    This paper presents IVVI 2.0, a smart research platform to foster intelligent systems in vehicles. Computational perception in intelligent transportation systems applications has advantages, such as the huge amount of data available from the vehicle environment, so computer vision systems and laser scanners are the main devices that accomplish this task. Both have been integrated in our intelligent vehicle to develop cutting-edge applications that cope with perception difficulties, data-processing algorithms, expert knowledge, and decision-making. The long-term in-vehicle applications presented in this paper overcome the most significant and fundamental technical limitations, such as robustness in the face of changing environmental conditions. Our intelligent vehicle operates outdoors among pedestrians and other vehicles, and handles illumination variation, i.e. shadows, low-lighting conditions, and night vision, among others. Our applications therefore ensure suitable robustness and safety under a large variety of lighting conditions and complex perception tasks. Some of these complex tasks are overcome with the help of additional devices, such as inertial measurement units or differential global positioning systems, or with perception architectures that accomplish sensor fusion in an efficient and safe manner. Both the extra devices and the architectures enhance the accuracy of computational perception beyond the properties of each device separately. This work was supported by the Spanish Government through the CICYT projects (GRANT TRA2010 20225 C03 01) and (GRANT TRA 2011 29454 C03 02).
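    The abstract does not publish the platform's fusion pipeline, but one standard building block of camera/laser fusion is projecting scanner points into the camera image through extrinsic and intrinsic calibration; the sketch below shows only that generic step, and every calibration value in it is an illustrative placeholder, not IVVI 2.0 data.

```python
# Hedged sketch: projecting laser-scanner points into a camera image,
# a common first step in camera/lidar fusion. All calibration values
# below are placeholder assumptions.
import numpy as np

def project_to_image(points_lidar, R, t, K):
    """points_lidar: (N,3) points in the scanner frame.
    R (3x3), t (3,): assumed extrinsics scanner->camera.
    K (3x3): assumed camera intrinsics. Returns (M,2) pixel coords."""
    pts_cam = points_lidar @ R.T + t      # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep points in front of the camera
    uvw = pts_cam @ K.T                   # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]       # normalise by depth

# Usage with placeholder calibration (values are assumptions):
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, -0.2, 0.1])
pixels = project_to_image(np.random.rand(100, 3) * 10.0, R, t, K)
```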

    3D Sensor Placement and Embedded Processing for People Detection in an Industrial Environment

    Papers I, II and III are extracted from the dissertation and uploaded as separate documents to meet post-publication requirements for self-archiving of IEEE conference papers. At a time when autonomy is being introduced in more and more areas, computer vision plays a very important role. In an industrial environment, the ability to create a real-time virtual version of a volume of interest opens up a broad range of possibilities, including safety-related systems such as vision-based anti-collision and personnel tracking. In an offshore environment, where such systems are not common, the task is challenging due to rough weather and environmental conditions, but introducing such safety systems could potentially be lifesaving, as personnel work close to heavy, huge, and often poorly instrumented moving machinery and equipment. This thesis presents research on important topics related to enabling computer vision systems in industrial and offshore environments, including a review of the most important technologies and methods. A prototype 3D sensor package is developed, consisting of different sensors and a powerful embedded computer. Together with a novel, highly scalable point cloud compression and sensor fusion scheme, this makes it possible to create a real-time 3D map of an industrial area. The question of where to place the sensor packages in an environment where occlusions are present is also investigated. The result is a set of algorithms for automatic sensor placement optimisation, where the goal is to place sensors so as to maximise the covered volume of interest with as few occluded zones as possible. The method also includes redundancy constraints, whereby important sub-volumes can be required to be viewed by more than one sensor. Lastly, a people detection scheme is developed that takes as input a merged point cloud from six different sensor packages. Using a combination of point cloud clustering, flattening, and convolutional neural networks, the system successfully detects multiple people in an outdoor industrial environment, providing real-time 3D positions. The sensor packages and methods are tested and verified at the Industrial Robotics Lab at the University of Agder, and the people detection method is also tested in a relevant outdoor industrial testing facility. The experiments and results are presented in the papers attached to this thesis.
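    The abstract summarises, rather than specifies, the clustering-flattening-CNN chain; the sketch below is one minimal way such a chain could look, assuming DBSCAN clustering, histogram-based flattening, and a stand-in classify_person function for the trained CNN. Clustering parameters, image size, and the projection axes are all assumptions.

```python
# Hedged sketch of a cluster -> flatten -> classify people-detection chain.
# classify_person() is a hypothetical wrapper around the trained CNN.
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_clusters(cloud, eps=0.3, min_points=40):
    """cloud: (N,3) merged point cloud in metres. Yields cluster arrays."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(cloud)
    for lbl in set(labels) - {-1}:        # -1 marks DBSCAN noise
        yield cloud[labels == lbl]

def flatten(cluster, res=32):
    """Rasterise a cluster onto a vertical plane (x lateral, y height)
    as a res x res occupancy image; the exact plane is an assumption."""
    img, _, _ = np.histogram2d(cluster[:, 0], cluster[:, 1], bins=res)
    return img / max(img.max(), 1.0)      # normalise occupancy counts

def detect_people(cloud, classify_person):
    """classify_person: assumed CNN, image -> probability of 'person'."""
    detections = []
    for cluster in candidate_clusters(cloud):
        if classify_person(flatten(cluster)) > 0.5:
            detections.append(cluster.mean(axis=0))   # 3D position estimate
    return detections
```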

    A Highly Accurate And Reliable Data Fusion Framework For Guiding The Visually Impaired

    The world has approximately 285 million visually impaired (VI) people according to a report by the World Health Organization: 39 million are estimated to be blind, and 246 million are estimated to have impaired vision. An important factor that motivated this research is the fact that 90% of VI people live in developing countries. Several systems have been designed to improve the quality of life of VI people and support their mobility. Unfortunately, none of these systems provides a complete solution for VI people, and the systems are very expensive. Therefore, this work presents an intelligent framework that includes several types of sensors embedded in a wearable device to support the visually impaired (VI) community. The proposed work integrates sensor-based and computer-vision-based techniques in order to introduce an efficient and economical visual device. The designed algorithm is divided into two components: obstacle detection and collision avoidance. The system has been implemented and tested in real-time scenarios. A video dataset of 30 videos, averaging 700 frames per video, was fed to the system for testing. The proposed sequence of techniques used for the real-time detection component achieved a 96.53% accuracy rate, based on a wide detection view that used two camera modules and a detection range of approximately 9 meters. A 98% accuracy rate was obtained on a larger dataset. However, the main contribution of this work is the proposed novel collision avoidance approach, which is based on image depth and fuzzy control rules. Using an x-y coordinate system, we mapped the input frames, where each frame was divided into three areas vertically and, further, the bottom third of the frame's height horizontally, in order to specify the urgency of any obstacle present in that frame. In addition, we were able to provide precise information, using fuzzy logic, to help the VI user avoid front obstacles. The strength of this proposed approach is that it aids VI users in avoiding 100% of all detected objects. Once the device is initialized, the VI user can confidently enter unfamiliar surroundings. The implemented device can therefore be described as accurate, reliable, friendly, light, and economically accessible; it facilitates the mobility of VI people and does not require any prior knowledge of the surrounding environment. Finally, our proposed approach was compared with the most efficient existing techniques and proved to outperform them.
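    The abstract describes the zoning but not the fuzzy rules themselves; the sketch below illustrates only the zoning step, splitting the bottom third of a depth frame into left, centre, and right regions, with crisp distance thresholds standing in for the paper's fuzzy membership functions.

```python
# Hedged sketch of the frame zoning for collision avoidance. The crisp
# threshold 'near' is an assumption standing in for the fuzzy rules.
import numpy as np

def avoidance_cue(depth, near=2.0):
    """depth: (H,W) depth image in metres. Returns a guidance string."""
    h, _ = depth.shape
    band = depth[2 * h // 3:, :]                 # bottom third of the frame
    left, centre, right = np.array_split(band, 3, axis=1)
    blocked = [np.nanmin(z) < near for z in (left, centre, right)]
    if not blocked[1]:
        return "go straight"
    if not blocked[0]:
        return "veer left"
    if not blocked[2]:
        return "veer right"
    return "stop"
```

    In the actual system these distances would be fuzzified into membership grades (e.g. near/medium/far) and combined by the fuzzy control rules, rather than thresholded as above.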

    Tracking of persons with camera-fusion technology

    The idea of a robot tracking and following a person is not new. Different combinations of laser range finders and camera pairings have been used for research on this subject. In recent years, stereoscopic systems have been developed to compensate for the shortcomings of laser range finders and ultrasonic sensor arrays in terms of 3D recognition. When Microsoft began distributing the Microsoft Kinect in 2010, it released a comparatively cheap system that combines depth measurement and a color view in one device. Though the system was intended as a new remote-control system for games, tackling the market launch of motion-sensing wireless controllers such as the Nintendo Wiimote and the Sony PlayStation 3 Move, some developers saw more in this technology. It did not take long until the first hacks for the Microsoft Kinect were published after the initial release, and more and more people started to create their own software, ranging from shadow puppets [TW10] to remote-controlling home cinema systems [Nar11]. Microsoft and PrimeSense soon recognized the potential and released free drivers and SDKs for using the camera device with PCs. With PrimeSense publishing the drivers as open source, many possible uses emerged for the Microsoft Kinect, and some companies used this event to enter the market of camera-fusion technology. The system most comparable to the Microsoft Kinect is the Xtion Pro Live by Asus. These devices, which merge depth measurement and color view with computation on a device-internal system, reveal new possibilities for tracking persons and even enable them to give the robot commands using gestures. This paper inquires to what extent the Microsoft Kinect or the Asus Xtion Pro Live can be used as a substitute for stereoscopic camera/laser range finder systems in the context of a tracking and control device for human-robot interaction scenarios with person-following applications for service robots.
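    As a rough illustration of what a person-following controller on Kinect-style RGB-D data might look like, the sketch below converts a tracked person's bounding box and the depth image into linear and angular velocity commands; the tracker, the gains, and the set-point distance are all assumptions, not the paper's method.

```python
# Hedged sketch of a depth-based person-following step. Gains and
# TARGET_DISTANCE are placeholder assumptions; the bounding box is
# assumed to come from an external person tracker.
import numpy as np

TARGET_DISTANCE = 1.5          # metres to keep between robot and person
K_LINEAR, K_ANGULAR = 0.6, 0.002

def follow_step(depth, person_bbox):
    """depth: (H,W) depth image in metres; person_bbox: (x, y, w, h)."""
    x, y, w, h = person_bbox
    roi = depth[y:y + h, x:x + w]
    distance = np.nanmedian(roi)               # robust range to the person
    offset = (x + w / 2) - depth.shape[1] / 2  # pixel offset from centre
    linear = K_LINEAR * (distance - TARGET_DISTANCE)
    angular = -K_ANGULAR * offset              # turn toward the person
    return linear, angular                     # velocity commands for the base
```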