8,086 research outputs found

    Space Applications of Automation, Robotics and Machine Intelligence Systems (ARAMIS), phase 2. Volume 1: Telepresence technology base development

    The field of telepresence is defined, and overviews are provided of the capabilities that are now available and of those that will be required to support a NASA telepresence effort. The report includes an investigation of NASA's plans and goals with regard to telepresence, an extensive literature search for materials relating to relevant technologies, a description of these technologies and their state of the art, and projections for advances in these technologies over the next decade. Several space projects are examined in detail to determine what capabilities a telepresence system requires in order to accomplish various tasks, such as servicing and assembly. The key operational and technological areas are identified, conclusions and recommendations for further research are made, and an example development program leading to an operational telepresence servicer is presented.

    Beta: Bioprinting engineering technology for academia

    Higher STEM education is a field of growing potential, but too many middle school and high school students are not testing proficiently in STEM subjects. The BETA team worked to improve biology classroom engagement by developing technologies for high school biology experiments. The BETA project team expanded the functionality of an existing product line to improve the student and teacher user experience and enable the execution of more interesting experiments. The project's first goal was to create a modular incubating box, the BETA Box, for the high school classroom. The BETA Box was designed with a variety of sensors to provide custom temperature and lighting environments for each experiment, and was completed with a clear interface to control the settings and an automatic image-capture system. The team also conducted a feasibility study on auto-calibration and dual extrusion for SE3D's existing 3D bioprinter. The findings of this study led to the incorporation of a force sensor for auto-calibration and provided evidence supporting the feasibility of dual extrusion, although further work is needed. These additions to the current SE3D educational product line will increase effectiveness in the classroom and allow the target audience, high school students, to engage more fully in STEM education activities.
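The abstract describes custom temperature environments maintained by the BETA Box's sensors. A common way to hold an incubator near a setpoint is hysteresis (bang-bang) control; the sketch below is illustrative only, assuming made-up class and threshold names rather than anything from the BETA project itself.

```python
# Minimal sketch of a hysteresis (bang-bang) temperature controller of the
# kind an incubating box might use. Class name, setpoint, and dead band are
# hypothetical example values, not taken from the BETA project.

class HysteresisController:
    """Turn a heater on below (setpoint - band) and off above (setpoint + band)."""

    def __init__(self, setpoint_c, band_c=0.5):
        self.setpoint_c = setpoint_c
        self.band_c = band_c
        self.heater_on = False

    def update(self, temp_c):
        # Switch only when the temperature leaves the dead band, so the
        # heater relay does not chatter around the setpoint.
        if temp_c < self.setpoint_c - self.band_c:
            self.heater_on = True
        elif temp_c > self.setpoint_c + self.band_c:
            self.heater_on = False
        return self.heater_on


ctrl = HysteresisController(setpoint_c=37.0, band_c=0.5)
print(ctrl.update(35.0))  # well below setpoint -> True
print(ctrl.update(37.2))  # inside dead band -> stays True
print(ctrl.update(37.8))  # above band -> False
```

The dead band is the design choice that keeps the heater from rapidly toggling when the temperature hovers at the setpoint.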

    Remote systems development

    Potential space missions of the nineties and the next century require that we look at the broad category of remote systems as an important means to achieve cost-effective operations, exploration, and colonization objectives. This paper addresses such missions, using remote systems technology as the basis for identifying the capabilities that must be provided. The relationship of the space-based tasks to similar tasks required for terrestrial applications is discussed. The development status of the required technology is assessed, and the major issues that must be addressed to meet future requirements are identified. These include the proper mix of humans and machines, from pure teleoperation to full autonomy; the degree of worksite compatibility for a robotic system; and the required design parameters, such as degrees of freedom. Methods for resolution are discussed, including analysis, graphical simulation, and the use of laboratory test beds. Grumman experience in applying these techniques to a variety of design issues is presented, utilizing the Telerobotics Development Laboratory, which includes a 17-DOF robot system, a variety of sensing elements, Deneb/IRIS graphics workstations, and control stations. The use of task/worksite mockups, remote-system development test beds, and graphical analysis is discussed, with examples of typical results such as estimates of task times, task feasibility, and resulting recommendations for design changes. The relationship of this experience and lessons learned to the future development of remote systems is also discussed.

    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Evaluation of Physical Finger Input Properties for Precise Target Selection

    The multitouch tabletop display provides a collaborative workspace for multiple users around a table. Users can perform direct and natural multitouch interaction to select target elements using their bare fingers. However, the physical size of the fingertip varies from one person to another, which generally introduces the fat-finger problem. Consequently, small target elements are selected imprecisely during direct multitouch input. In this respect, an attempt is made to evaluate the physical finger input properties, i.e. contact area and shape, in the context of imprecise selection.
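The two properties the abstract names, contact area and shape, can be computed from a binarised touch frame. The sketch below is a hypothetical illustration under assumed inputs (a 0/1 grid of touched cells and a made-up cell pitch); it is not the paper's actual measurement pipeline.

```python
# Illustrative sketch: derive contact area, centroid, and a crude shape
# descriptor from a binarised touch frame. The grid and 1 mm cell pitch are
# made-up example values, not taken from the paper.

def contact_properties(frame, cell_mm=1.0):
    """Return (area_mm2, centroid_rc, aspect_ratio) of the touched cells."""
    cells = [(r, c) for r, row in enumerate(frame)
             for c, v in enumerate(row) if v]
    area = len(cells) * cell_mm ** 2
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    centroid = (sum(rows) / len(cells), sum(cols) / len(cells))
    # Crude shape descriptor: bounding-box aspect ratio (1.0 = square blob).
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    aspect = width / height
    return area, centroid, aspect

# A tall 3x2 contact blob inside a 3x4 frame.
frame = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0]]
print(contact_properties(frame))
```

A fingertip pressed at an angle typically produces an elongated blob, which is why a shape descriptor matters alongside raw area for precise selection.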

    SpeCam: sensing surface color and material with the front-facing camera of mobile device

    SpeCam is a lightweight surface color and material sensing approach for mobile devices that uses only the front-facing camera and the display as a multi-spectral light source. We leverage the natural use of mobile devices (placing them face-down) to detect the material underneath and therefore infer the location or placement of the device. SpeCam can then be used to support discreet micro-interactions that avoid the numerous distractions users face daily with today's mobile devices. Our two-part study shows that SpeCam can i) recognize colors in the HSB space that are 10 degrees apart near the 3 dominant colors and 4 degrees apart otherwise, and ii) recognize 30 types of surface materials with 99% accuracy. These findings are further supported by a spectroscopy study. Finally, we suggest a series of applications based on simple mobile micro-interactions suitable for using the phone when placed face-down.
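Matching a measured hue against reference hues in the HSB space requires circular distance, since hue wraps at 360 degrees. The sketch below is a minimal illustration of that idea under assumptions of my own (the reference palette and the red/green/blue "dominant" hues at 0/120/240 degrees are hypothetical); SpeCam's actual classifier is not reproduced here.

```python
# Hedged sketch: nearest-reference hue matching with circular distance on the
# HSB hue wheel. The reference palette below is a made-up example.

def hue_distance(a, b):
    """Shortest angular distance between two hues, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def nearest_hue(measured, references):
    """Return the reference hue closest to the measured one."""
    return min(references, key=lambda ref: hue_distance(measured, ref))

refs = [0, 10, 120, 130, 240, 250]  # hypothetical reference hues (degrees)
print(nearest_hue(355, refs))  # wraps around the wheel -> 0
print(hue_distance(350, 10))   # -> 20, not 340
```

The wrap-around case is the reason plain `abs(a - b)` is insufficient: hue 355 is only 5 degrees from red at 0.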

    Automatic testing of organic strain gauge tactile sensors.

    Human-Robot Interaction is a developing field of science that is poised to augment everything we do in life. Skin sensors that can detect touch, temperature, distance, and other physical interaction parameters at the human-robot interface are very important to enhancing the collaboration between humans and machines. As such, these sensors must be efficiently tested and characterized to give accurate feedback from the sensor to the robot. The objective of this work is to create a diversified software testing suite that removes as much human intervention as possible. The tests and methodology discussed here provide multiple realistic scenarios that the sensors undergo during repeated experiments. This capability allows for easily repeatable tests without interference from the test engineer, increasing productivity and efficiency. The foundation of this work has two main pieces: force feedback control to drive the test actuator, and computer vision functionality to guide alignment of the test actuator and sensors arranged in a 2D array. The software running automated tests was also made compatible with the testbench hardware via LabVIEW programs. The program uses set coordinates to complete a raster scan of the SkinCell that locates individual sensors. Tests are then applied at each sensor using a force controller. The force feedback control system uses a Proportional-Integral-Derivative (PID) controller that reads force measurements from a load cell to correct itself or follow a desired trajectory. The motion of the force actuator was compared to the projected trajectory to test for accuracy and time delay. The proposed motor control allows a dynamic force to stimulate the sensors, giving a more realistic test than a static force. A top-facing camera was introduced to capture the starting position of a SkinCell before testing. Then, computer vision algorithms were proposed to extract the location of the cell and individual sensors before generating a coordinate plane. This allows the engineer to skip manual alignment of the sensors, saving time and providing more accurate destinations. Finally, the testbench was applied to numerous sensors developed by the research team at the Louisville Automation and Robotics Research Institute (LARRI) for testing and data analysis. Force loads are applied to the individual sensors while recording the response. Afterwards, postprocessing of the data was conducted to compare responses within the SkinCell as well as to other sensors manufactured using different methods.
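The force-feedback loop described above, a PID controller correcting an actuator command from load-cell readings, can be sketched in a few lines. The gains, time step, and the toy first-order actuator model below are assumptions for illustration, not the testbench's actual parameters.

```python
# Minimal PID force-control sketch in the spirit of the testbench: the
# controller turns a force error (setpoint minus load-cell reading) into an
# actuator command. Gains and the plant model are illustrative only.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)

# Toy first-order "plant": the applied force relaxes toward the command.
force = 0.0
for _ in range(2000):
    command = pid.step(setpoint=5.0, measured=force)
    force += (command - force) * 0.01  # crude actuator dynamics

print(round(force, 2))  # settles near the 5.0 N setpoint
```

The integral term is what removes the steady-state error a proportional-only controller would leave; the derivative term damps the response, which matters when the goal is to follow a dynamic force trajectory rather than hold a static load.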

    Development of an automated robot vision component handling system

    Thesis (M. Tech. (Engineering: Electrical)) -- Central University of Technology, Free State, 2013

    In industry, automation is used to optimize production, improve product quality and increase profitability. By properly implementing automation systems, the risk of injury to workers can be minimized. Robots are used in many low-level tasks to perform repetitive, undesirable or dangerous work. Robots can perform a task with higher precision and accuracy, lowering errors and the waste of material. Machine Vision makes use of cameras, lighting and software to do visual inspections that a human would normally do. Machine Vision is useful in applications where repeatability, high speed and accuracy are important. This study concentrates on the development of a dedicated robot vision system to automatically place components exiting from a conveyor system onto Automatic Guided Vehicles (AGVs). A personal computer (PC) controls the automated system. Software modules were developed to do image processing for the Machine Vision system, as well as software to control a Cartesian robot. These modules were integrated to work in a real-time system. The vision system is used to determine each part's position and orientation. The orientation data are used to rotate a gripper, and the position data are used by the Cartesian robot to position the gripper over the part. Hardware for the control of the gripper, pneumatics and safety systems was developed. The automated system's hardware was integrated through different communication protocols, namely DeviceNet (Cartesian robot), RS-232 (gripper) and FireWire (camera).
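Recovering a part's position and orientation from a binary image, as the vision system above does, is classically done with image moments: the centroid gives position and the principal axis gives orientation. The sketch below is a textbook moments method offered as an illustration; it is not claimed to be the thesis's actual algorithm.

```python
# Hedged sketch: part position (centroid) and orientation (principal-axis
# angle) from a binary image via image moments. This is the standard
# textbook formulation, not the thesis's specific implementation.

import math

def position_and_orientation(img):
    """Return ((cx, cy), theta) for a binary image given as a list of rows."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                m00 += 1
                m10 += x
                m01 += y
    cx, cy = m10 / m00, m01 / m00
    # Central second moments determine the principal axis.
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                mu20 += (x - cx) ** 2
                mu02 += (y - cy) ** 2
                mu11 += (x - cx) * (y - cy)
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta

# A horizontal 1x5 bar: orientation should be ~0 rad (aligned with x-axis).
bar = [[0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0]]
(cx, cy), theta = position_and_orientation(bar)
print((cx, cy), round(theta, 3))  # centroid (2.0, 1.0), angle 0.0
```

In a real cell, `theta` would drive the gripper rotation and `(cx, cy)` would be mapped through a camera calibration into the Cartesian robot's coordinate frame.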

    Proceedings of the 4th field robot event 2006, Stuttgart/Hohenheim, Germany, 23-24th June 2006

    A very extensive report of the 4th Field Robot Event, held on 23 and 24 June 2006 in Stuttgart/Hohenheim.

    The Emerging Internet of Things Marketplace From an Industrial Perspective: A Survey

    The Internet of Things (IoT) is a dynamic global information network consisting of internet-connected objects, such as radio-frequency identification (RFID) tags, sensors, actuators, as well as other instruments and smart appliances that are becoming an integral component of the future internet. Over the last decade, we have seen a large number of IoT solutions developed by start-ups, small and medium enterprises, large corporations, academic research institutes (such as universities), and private and public research organisations making their way into the market. In this paper, we survey over one hundred IoT smart solutions in the marketplace and examine them closely in order to identify the technologies used, functionalities, and applications. More importantly, we identify the trends, opportunities and open challenges in industry-based IoT solutions. Based on the application domain, we classify and discuss these solutions under five different categories: smart wearable, smart home, smart city, smart environment, and smart enterprise. This survey is intended to serve as a guideline and conceptual framework for future research in the IoT and to motivate and inspire further developments. It also provides a systematic exploration of existing research and suggests a number of potentially significant research directions.

    Comment: IEEE Transactions on Emerging Topics in Computing 201