945 research outputs found

    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and better communication among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimal cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, as well as a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
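    The abstract does not detail how the skeletal data from several Kinect sensors are combined, so the sketch below only illustrates the general idea of merging per-sensor skeletons into one world frame. The per-sensor extrinsic matrices, joint naming, and confidence-weighted averaging are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: fusing skeletal joints reported by several Kinect sensors
# into one world-frame skeleton. Assumes a 4x4 extrinsic matrix per sensor
# (e.g. obtained from a calibration step); all names are illustrative.
import numpy as np

def to_world(extrinsic, joint_xyz):
    """Transform one joint position from a sensor frame to the world frame."""
    p = np.append(joint_xyz, 1.0)            # homogeneous coordinates
    return (extrinsic @ p)[:3]

def fuse_skeletons(observations):
    """observations: list of (extrinsic, {joint_name: (xyz, confidence)})."""
    fused = {}
    for extrinsic, joints in observations:
        for name, (xyz, conf) in joints.items():
            fused.setdefault(name, []).append(
                (to_world(extrinsic, np.asarray(xyz)), conf))
    # Confidence-weighted average of each joint across all sensors that saw it.
    return {name: np.average([p for p, _ in obs], axis=0,
                             weights=[c for _, c in obs])
            for name, obs in fused.items()}
```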

    Audience monitor: an open source tool for tracking audience mobility in front of pervasive displays

    Understanding an audience's behavior is an important aspect of evaluating display installations. In particular, it is important to understand how people move around in the vicinity of displays, including viewer transitions from noticing a display, through approach, to final use of the display. Despite the importance of measuring viewer mobility patterns, there are still relatively few low-cost tools that can be used with research display deployments to capture detailed spatial and temporal behavior of an audience. In this paper, we present an approach to audience monitoring that uses an off-the-shelf depth sensor and open-source computer vision algorithms to monitor the space in front of a digital display, tracking the presence and movements of both passers-by and display users. We believe that our approach can help display researchers evaluate their public display deployments and improve the level of quantitative data underpinning our field.
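    As an illustration of the kind of pipeline described (an off-the-shelf depth sensor plus open-source computer vision), the sketch below detects people in front of a display from depth frames using OpenCV background subtraction and contour filtering. The thresholds, the assumed 4.5 m depth range, and the person-size heuristic are assumptions, not the tool's actual implementation.

```python
# Illustrative sketch (not the paper's tool): detecting people in front of a
# display from depth frames using OpenCV background subtraction and contours.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def detect_people(depth_frame_mm, min_area=5000):
    """depth_frame_mm: 2D uint16 depth image in millimetres (e.g. from a Kinect)."""
    # Normalise to 8-bit (assumed 0-4.5 m range) so the background model can run.
    depth_8u = cv2.convertScaleAbs(depth_frame_mm, alpha=255.0 / 4500.0)
    mask = subtractor.apply(depth_8u)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to plausibly be a person; return bounding boxes.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```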

    Towards a cloud‑based automated surveillance system using wireless technologies

    Cloud Computing can bring multiple benefits to Smart Cities. It permits the easy creation of centralized knowledge bases, allowing multiple embedded systems (such as sensor or control devices) to share a collaborative intelligence. In addition, thanks to the cloud's vast computing power, complex tasks can be run from low-spec devices simply by offloading computation, with the added advantage of saving energy. In this work, the cloud's capabilities are exploited to implement and test a cloud-based surveillance system. Using a shared, 3D symbolic world model, different devices have complete knowledge of all the elements, people and intruders in a certain open area or inside a building. The implementation of a volumetric, 3D, object-oriented, cloud-based world model (including semantic information) is, as far as we know, novel. Very simple devices (Orange Pi boards) can send RGBD streams (from Kinect cameras) to the cloud, where all the processing is distributed and carried out thanks to the cloud's inherent scalability. A proof-of-concept experiment is conducted in a testing lab with multiple cameras connected to the cloud over 802.11ac wireless technology. Our results show that this kind of surveillance system is already feasible, and trends indicate that it can be improved in the short term to produce a high-performance surveillance system using low-spec devices. In addition, this proof of concept suggests that many interesting opportunities and challenges arise, for example, when mobile watch robots and fixed cameras act as a team to carry out complex collaborative surveillance strategies.
    Funding: Ministerio de Economía y Competitividad TEC2016-77785-P; Junta de Andalucía P12-TIC-130.
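    The following is a minimal sketch of the device side only: a low-spec client pushing compressed RGBD frames to a cloud endpoint over HTTP. The endpoint URL, payload layout, and transport choice are illustrative assumptions; the paper's actual world-model protocol is not described in this abstract.

```python
# Minimal sketch of the device side: an Orange Pi-class client pushing
# compressed RGB-D frames to a cloud endpoint. The URL, payload layout and
# transport are assumptions, not the paper's actual protocol.
import time
import requests
import cv2

CLOUD_ENDPOINT = "https://example.org/surveillance/frames"   # hypothetical

def push_frame(camera_id, rgb, depth_mm):
    """Send one RGB-D frame; rgb is HxWx3 uint8, depth_mm is HxW uint16."""
    ok_rgb, rgb_jpg = cv2.imencode(".jpg", rgb)
    ok_d, depth_png = cv2.imencode(".png", depth_mm)          # lossless for depth
    if not (ok_rgb and ok_d):
        return
    requests.post(
        CLOUD_ENDPOINT,
        files={"rgb": rgb_jpg.tobytes(), "depth": depth_png.tobytes()},
        data={"camera_id": camera_id, "timestamp": time.time()},
        timeout=5,
    )
```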

    Controller Synthesis of Multi-Axial Robotic System Used for Wearable Devices

    Wearable devices are commonly used in different fields to help improve movement performance for different groups of users. The long-term goal of this study is to develop a low-cost assistive robotic device that allows patients to perform rehabilitation activities independently and reproduces natural movement to help stroke patients and elderly adults in their daily activities while moving their arms. In the past few decades, various types of wearable robotic devices have been developed to assist different physical movements. Among the different types of actuators, the twisted-string actuation system has the advantages of light weight, low cost, and great portability. In this study, a dual twisted-string actuator is used to drive the joints of the prototype assistive robotic device. To compensate for the asynchronous movement caused by nonlinear factors, a hybrid controller combining fuzzy logic rules with a linear PID control algorithm was adopted to handle both tracking and synchronization of the two actuators. In order to validate the performance of the proposed controllers, the robotic device was driven by an xPC Target machine with additional embedded controllers for different data acquisition tasks. The controllers were fine-tuned to eliminate the inaccuracy in tracking and synchronization caused by disturbances and asynchronous movements of the two actuators. As a result, the synthesized controller provides high precision when tracking simple actual human movements.
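    The abstract does not give the controller equations, so the sketch below only illustrates the general structure of such a hybrid scheme: two PID tracking loops plus a cross-coupling synchronization term whose strength is chosen by a crude fuzzy-style rule base. All gains, rule thresholds, and the plant interface are assumptions, not the authors' design.

```python
# Illustrative sketch of the control idea: each actuator runs a PID tracking
# loop, and a simple fuzzy-style rule scales a cross-coupling term computed
# from the synchronisation error (difference between the two actuator positions).

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def fuzzy_sync_gain(sync_error):
    """Crude rule base: the larger the mismatch, the stronger the correction."""
    e = abs(sync_error)
    if e < 0.01:   # "small" mismatch
        return 0.2
    if e < 0.05:   # "medium" mismatch
        return 1.0
    return 3.0     # "large" mismatch

def dual_actuator_step(reference, pos_a, pos_b, pid_a, pid_b, dt):
    """One control step: tracking command plus a term pulling the actuators together."""
    sync_error = pos_a - pos_b
    k_sync = fuzzy_sync_gain(sync_error)
    u_a = pid_a.update(reference - pos_a, dt) - k_sync * sync_error
    u_b = pid_b.update(reference - pos_b, dt) + k_sync * sync_error
    return u_a, u_b
```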

    Robot Composite Learning and the Nunchaku Flipping Challenge

    Advanced motor skills are essential for robots to physically coexist with humans. Much research on robot dynamics and control has achieved success on hyper robot motor capabilities, but mostly through heavily case-specific engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous manner, robot learning from human demonstration (LfD) has achieved great progress, but still has limitations in handling dynamic skills and compound actions. In this paper, we present a composite learning scheme which goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation. The method tackles advanced motor skills that require dynamic, time-critical maneuvers, complex contact control, and the handling of objects that are partly soft and partly rigid. We also introduce the "nunchaku flipping challenge", an extreme test that places hard requirements on all three of these aspects. Continuing from our previous presentations, this paper introduces the latest update of the composite learning scheme and reports the physical success of the nunchaku flipping challenge.

    The development of a robotic test bed with applications in Q-learning

    In this work, we show the design, development, and testing of an autonomous ground vehicle for experiments in learning and intelligent transportation research. We then implement the Q-Learning algorithm to teach the robot to navigate towards a light source. The vehicle platform is based on the Tamiya TXT-1 chassis, which is outfitted with an onboard computer for processing high-level functions, a microcontroller for controlling the low-level tasks, and an array of sensors for collecting information about its surroundings. The TXT-1 robot is a unique research testbed that encourages the use of a modular design, low-cost COTS hardware, and open-source software. The TXT-1 is designed using different modules or blocks that are separated based on functionality. The different functional blocks of the TXT-1 are the motors, power, low-level controller, high-level controller, and sensors. This modular design is important when considering upgrading or maintaining the robot. The research platform uses an Apple Mac Mini as its on-board computer for handling high-level navigation tasks like processing sensor data and computing navigation trajectories. ROS, the Robot Operating System, is used on the computer as a development environment to easily implement algorithms and validate them on the robot. A ROS driver was created so that the TXT-1 low-level functions can be sensed and commanded. The TXT-1 low-level controller is designed using an ARM7 processor development board with FreeRTOS, OpenOCD, and the CodeSourcery development tools. The RTOS is used to provide a stable, real-time platform that can be used for many future generations of TXT-1 robots. A communication protocol is created so that the high- and low-level processors can communicate. A power distribution system is designed and built to deliver power to all of the systems efficiently and reliably while using a single battery type. Velocity controllers are developed and implemented on the low-level controller. These control the linear and angular velocities using the wheel encoders in a PID feedback loop. The angular velocity controller uses gain scheduling to overcome the system's nonlinearity. The controllers are then tested for adequate velocity response and tracking. The robot is then tested by using the Q-Learning algorithm to teach it to navigate towards a light source. The Q-Learning algorithm is first described in detail, and then the problem is formulated and the algorithm is tested in the Stage simulation environment with ROS. The same ROS code is then used on the TXT-1 to implement the algorithm in hardware. Because of delays encountered in the system, the Q-Learning algorithm is modified to use the sensed action to update the Q-Table, which gives promising results. As a result of this research, a novel autonomous ground vehicle was built and the Q-Learning source-finding problem was implemented.
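    As a concrete reference for the learning component, the sketch below shows a standard tabular Q-Learning update for a light-seeking task, including the modification mentioned above of updating the Q-Table with the sensed action that was actually executed rather than the commanded one. The state discretization, action set, reward, and hyperparameters are illustrative assumptions, not the thesis implementation.

```python
# Minimal tabular Q-Learning sketch for a light-seeking task. Note that the
# update uses the *sensed* action actually executed by the robot, mirroring
# the modification described in the abstract. All constants are assumptions.
import random
from collections import defaultdict

ACTIONS = ["forward", "turn_left", "turn_right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)                # (state, action) -> estimated return

def choose_action(state):
    """Epsilon-greedy action selection over the discretised light-sensor state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)                     # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])      # exploit

def update(state, sensed_action, reward, next_state):
    """Q-Learning update keyed on the action the robot actually performed."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    td_target = reward + GAMMA * best_next
    Q[(state, sensed_action)] += ALPHA * (td_target - Q[(state, sensed_action)])
```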