OPEB: Open Physical Environment Benchmark for Artificial Intelligence
Artificial Intelligence methods for solving continuous-control tasks have made
significant progress in recent years. However, these algorithms have important
limitations and still need significant improvement before they can be used in
industry and real-world applications, so the area remains under active
research. To involve a large number of research groups, standard
benchmarks are needed to evaluate and compare proposed algorithms. In this
paper, we propose a physical environment benchmark framework to facilitate
collaborative research in this area by enabling different research groups to
integrate their designed benchmarks in a unified cloud-based repository and
also share their actual implemented benchmarks via the cloud. We demonstrate
the proposed framework using an actual implementation of the classical
mountain-car example and present the results obtained using a Reinforcement
Learning algorithm.
Comment: Accepted at the 3rd IEEE International Forum on Research and Technologies
for Society and Industry 201
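To illustrate the kind of experiment such a benchmark targets, the following minimal sketch trains tabular Q-learning on the classic mountain-car dynamics. This is not the paper's actual implementation; the discretization, hyperparameters, and reward scheme are assumptions chosen for brevity.

```python
import math
import random

def step(pos, vel, action):
    """Classic mountain-car dynamics; action is 0 (left), 1 (idle), 2 (right)."""
    vel += 0.001 * (action - 1) - 0.0025 * math.cos(3 * pos)
    vel = max(-0.07, min(0.07, vel))
    pos = max(-1.2, min(0.6, pos + vel))
    if pos == -1.2:          # inelastic wall at the left boundary
        vel = max(0.0, vel)
    return pos, vel

def discretize(pos, vel, bins=20):
    """Map the continuous state to a (bins x bins) grid cell."""
    pi = int((pos + 1.2) / 1.8 * (bins - 1))
    vi = int((vel + 0.07) / 0.14 * (bins - 1))
    return pi, vi

def train(episodes=200, bins=20, alpha=0.1, gamma=0.99, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration; reward is -1 per step."""
    rng = random.Random(seed)
    Q = [[[0.0] * 3 for _ in range(bins)] for _ in range(bins)]
    for _ in range(episodes):
        pos, vel = rng.uniform(-0.6, -0.4), 0.0
        for _ in range(200):
            pi, vi = discretize(pos, vel, bins)
            if rng.random() < eps:
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda k: Q[pi][vi][k])
            pos, vel = step(pos, vel, a)
            done = pos >= 0.5
            npi, nvi = discretize(pos, vel, bins)
            target = -1.0 + (0.0 if done else gamma * max(Q[npi][nvi]))
            Q[pi][vi][a] += alpha * (target - Q[pi][vi][a])
            if done:
                break
    return Q
```

A shared physical benchmark would replace `step` with readings from the actual hardware while keeping the learning loop unchanged.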
Interactive Visual Histories for Vector Graphics
Presentation and graphics software enables users to experiment with variations of illustrations. They can revisit recent editing operations using the ubiquitous undo command, but they are limited to sequential exploration. We propose a new interaction metaphor and visualization for operation history. While editing, a user can access a history mode in which actions are denoted by graphical depictions appearing on top of the document. Our work is inspired by the visual language of film storyboards and assembly instructions. Our storyboard provides an interactive visual history, summarizing the editing of a document or a selected object. Each view is composed of action depictions representing the user's editing actions and enables the user to consider the operation history in context rather than in a disconnected list view. This metaphor provides instant access to any past action, and we demonstrate that it is an intuitive interface to a selective undo mechanism.
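One simple way to realize the selective-undo mechanism described above is to keep the history as a list of attribute-setting operations and rebuild the document by replaying every operation except the one being undone. This is a hypothetical sketch, not the authors' implementation; the `Op` and `History` names and the replay scheme are assumptions.

```python
import copy
from dataclasses import dataclass

@dataclass
class Op:
    obj: str      # id of the edited graphic object
    attr: str     # attribute the action changed, e.g. "fill"
    new: object   # value the action set

class History:
    """Operation history supporting selective undo by replay."""

    def __init__(self, base):
        self.base = copy.deepcopy(base)  # document snapshot before any edits
        self.ops = []                    # chronological list of actions

    def apply(self, op):
        self.ops.append(op)

    def selective_undo(self, index):
        # Remove one past action anywhere in the history;
        # later, independent actions are preserved.
        del self.ops[index]

    def state(self):
        # Rebuild the document by replaying the surviving operations.
        doc = copy.deepcopy(self.base)
        for op in self.ops:
            doc[op.obj][op.attr] = op.new
        return doc
```

Replaying from a snapshot sidesteps the need for inverse operations, at the cost of replay time proportional to history length.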
Application of design principles for assembly instructions – evaluation of practitioner use
Production complexity causes assembly errors because the demands on operators are high, and there is a need to improve assembly instructions. Design Principles for Information Presentation (DFIP) is a method developed to support such improvement, and its application was evaluated in three case studies involving 152 practitioners. Results indicate that using DFIP helps simplify information presentation so that complexity can be reduced, and that step 4 is the easiest to understand. In addition, the implementation of the assembly instructions gave positive results.
Optimization of assembly instructions for a low-cost housing solution
Bamboo huts have been proposed as a low-cost housing
solution in places like India, the Far East and South
America. Successful building is strongly linked to the
end-user’s ability to interpret and execute their assembly
instructions correctly. This article reports a case study in
which the planning of the structure of the instructions
was carried out to decrease complexity and increase
effectiveness so that the assembly could be interpreted
and executed correctly by participants. A diagnostic
test was conducted to assess the suitability of the
instructions. The results provided insight into the way
in which end-users dealt with ambiguity and intrinsic
cognitive load, and into their preferences for
sub-assemblies, action, colored diagrams and
self-auditing steps.
Multi-3D-Models Registration-Based Augmented Reality (AR) Instructions for Assembly
This paper introduces a novel, markerless, step-by-step, in-situ 3D Augmented
Reality (AR) instruction method and its application - BRICKxAR (Multi 3D
Models/M3D) - for small parts assembly. BRICKxAR (M3D) realistically visualizes
rendered 3D assembly parts at the assembly location of the physical assembly
model (Figure 1). The user controls the assembly process through a user
interface. BRICKxAR (M3D) utilizes deep learning-trained 3D model-based
registration. Object recognition and tracking become challenging as the
assembly model updates at each step. Additionally, not every part in a 3D
assembly may be visible to the camera during the assembly. BRICKxAR (M3D)
combines multiple assembly phases with a step count to address these
challenges: using fewer phases simplifies the complex assembly process,
while the step count facilitates accurate object recognition and precise
visualization of each step. A testing and heuristic evaluation of the BRICKxAR
(M3D) prototype and qualitative analysis were conducted with users and experts
in visualization and human-computer interaction. Providing robust 3D AR
instructions and allowing the handling of the assembly model, BRICKxAR (M3D)
has the potential to be used at different scales, ranging from manufacturing
assembly to construction.
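The phase-plus-step-count bookkeeping described above might be organized as follows. This is an illustrative sketch only; the phase boundaries and the lookup scheme are assumptions, not BRICKxAR (M3D)'s actual data structures.

```python
def phase_for_step(step, phase_ends):
    """Map a 1-based assembly step to its phase index, given the
    cumulative last step of each phase (assumed grouping scheme)."""
    for phase, end in enumerate(phase_ends):
        if step <= end:
            return phase
    raise ValueError("step beyond final phase")

# Example: a 12-step build grouped into three phases, so only three
# registration models are needed instead of one per step, while the
# step count still selects the exact parts to visualize.
PHASE_ENDS = [4, 8, 12]
```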
Guide to build YOLO, a creativity-stimulating robot for children
YOLO is a non-anthropomorphic social robot designed to stimulate creativity in
children. This robot was envisioned to be used by children during free-play where they use the
robot as a character for the stories they create. During play, YOLO makes use of creativity
techniques that promote the creation of new story-lines. Therefore, the robot serves as a tool that
has the potential to stimulate creativity in children during the interaction. Particularly, YOLO
can stimulate divergent and convergent thinking for story creations. Additionally, YOLO can
have different personalities, providing it with socially intelligent and engaging behaviors. This
work makes YOLO's hardware openly available as open source and open access. The design of the robot was
guided by psychological theories and models on creativity, design research including user-centered
design practices with children, and informed by experts working in the field of creativity. Specifically, we relied on established theories of personality to inform the social behavior of the robot, and on theories of creativity to design creativity-stimulating behaviors. Our design decisions were then based on design fieldwork with children. The end product is a robot that communicates using non-verbal expressive modalities (lights and movements), equipped with sensors that detect the playful behaviors of children. YOLO has the potential to be used as a research tool for academic studies, and as a toy for the community to engage in personal fabrication. The overall benefit of this proposed hardware is that it is open-source, less expensive than existing alternatives, and one that children can build by themselves under expert supervision.