Task support system by displaying instructional video onto AR workspace
This paper presents an instructional support system based on augmented reality (AR). The system helps a user work intuitively by overlaying visual information on the workspace, much as a navigation system does. In conventional AR systems, the content overlaid onto real space is created with 3D computer graphics and, in most cases, is newly authored for each application. However, many existing 2D videos already show how to take apart or build electric appliances and PCs, how to cook, and so on. Our system therefore employs such existing 2D videos as instructional videos. By transforming an instructional video according to the user's view and overlaying it onto the user's view space, the proposed system provides the user with intuitive visual guidance. To avoid visual confusion between the displayed instructional video and the user's view, we add various visual effects to the video, such as transparency and contour enhancement. By dividing the instructional video into sections corresponding to the operations required to complete a task, we ensure that the user can interactively move to the next step in the video after completing an operation, and can therefore carry on with the task at his/her own pace. In a usability test, users evaluated the instructional video in our system through two tasks: a building-blocks task and an origami task. We found that a user's visibility improves when the instructional video is transformed according to his/her view. Furthermore, from the evaluation of the visual effects, we classify these effects according to task and obtain guidelines for using our system as instructional support for various other tasks.
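The sectioned, user-paced playback and the transparency effect described in this abstract can be sketched as follows. This is a minimal illustration under assumptions of our own, not the authors' implementation; all names and parameters are invented for the sketch:

```python
import numpy as np

def blend_overlay(view, instr, alpha=0.4):
    """Alpha-blend an instructional frame onto the user's view.
    Both arguments are float arrays in [0, 1] with the same shape;
    `alpha` stands in for the transparency effect the paper describes."""
    return (1.0 - alpha) * view + alpha * instr

class SectionedVideo:
    """Hypothetical sketch of sectioned playback: the video is split at
    operation boundaries, and the user advances at their own pace."""
    def __init__(self, section_starts, total_frames):
        # section_starts: frame index where each section begins
        self.bounds = list(section_starts) + [total_frames]
        self.step = 0
    def frames_for_current_step(self):
        # frames to loop while the user performs the current operation
        return range(self.bounds[self.step], self.bounds[self.step + 1])
    def operation_completed(self):
        # user signals completion; stay on the last section at the end
        if self.step < len(self.bounds) - 2:
            self.step += 1
```

A caller would loop `frames_for_current_step()` until the user triggers `operation_completed()`, then move on, matching the interactive pacing described above.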
Scalable and Extensible Augmented Reality with Applications in Civil Infrastructure Systems.
In Civil Infrastructure System (CIS) applications, the requirement of blending synthetic and physical objects distinguishes Augmented Reality (AR) from other visualization technologies in three aspects: 1) it reinforces the connections between people and objects, and promotes engineers' appreciation of their working context; 2) it allows engineers to perform field tasks with awareness of both the physical and synthetic environment; 3) it offsets the significant cost of 3D model engineering by including the real-world background.
The research has successfully overcome several long-standing technical obstacles in AR, investigating approaches to the fundamental challenges that have prevented the technology from being usefully deployed in CIS applications: aligning virtual objects with the real environment continuously across time and space; blending virtual entities with their real background faithfully to create a sustained illusion of co-existence; and integrating these methods into a scalable and extensible AR computing framework that is openly accessible to the teaching and research community and can be readily reused and extended by other researchers and engineers.
The research findings have been evaluated in several challenging CIS applications where the potential for significant economic and social impact is high. Validation test beds implemented include an AR visual excavator-utility collision avoidance system that enables spotters to "see" buried utilities hidden under the ground surface, thus helping prevent accidental utility strikes; an AR post-disaster reconnaissance framework that enables building inspectors to rapidly evaluate and quantify structural damage sustained by buildings in seismic events such as earthquakes or blasts; and a tabletop collaborative AR visualization framework that allows multiple users to observe and interact with visual simulations of engineering processes.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/96145/1/dsuyang_1.pd
Dynamic Coverage Control and Estimation in Collaborative Networks of Human-Aerial/Space Co-Robots
In this dissertation, the author presents a set of control, estimation, and decision making strategies to enable small unmanned aircraft systems and free-flying space robots to act as intelligent mobile wireless sensor networks. These agents are primarily tasked with gathering information from their environments in order to increase the situational awareness of both the network as well as human collaborators. This information is gathered through an abstract sensing model, a forward facing anisotropic spherical sector, which can be generalized to various sensing models through adjustment of its tuning parameters.
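As one hedged reading of the sensing model described above, membership in a forward-facing spherical sector can be tested with a range and a half-angle. The function and parameter names below are illustrative stand-ins for the dissertation's tuning parameters, not taken from it:

```python
import math

def in_sensing_sector(pos, heading, point, r_max, half_angle):
    """Return True if `point` lies inside a forward-facing spherical
    sector anchored at `pos`: within range `r_max` and within
    `half_angle` radians of the `heading` direction."""
    d = [p - q for p, q in zip(point, pos)]
    r = math.sqrt(sum(c * c for c in d))
    if r > r_max:
        return False          # beyond sensing range
    if r == 0.0:
        return True           # the sensor's own location counts as sensed
    h_norm = math.sqrt(sum(c * c for c in heading))
    # cosine of the angle between the offset and the heading
    cos_ang = sum(a * b for a, b in zip(d, heading)) / (r * h_norm)
    return cos_ang >= math.cos(half_angle)
```

Shrinking `half_angle` toward zero approximates a narrow beam; widening it toward pi recovers a full sphere, which is one way such a model "generalizes through its tuning parameters."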
First, a hybrid control strategy is derived whereby a team of unmanned aerial vehicles can dynamically cover (i.e., sweep their sensing footprints through all points of a domain over time) a designated airspace. These vehicles are assumed to have finite power resources; therefore, an agent deployment and scheduling protocol is proposed that allows for agents to return periodically to a charging station while covering the environment. Rules are also prescribed with respect to energy-aware domain partitioning and agent waypoint selection so as to distribute the coverage load across the network with increased priority on those agents whose remaining power supply is larger. This work is extended to consider the coverage of 2D manifolds embedded in 3D space that are subject to collision by stochastic intruders. Formal guarantees are provided with respect to collision avoidance, timely convergence upon charging stations, and timely interception of intruders by friendly agents. This chapter concludes with a case study in which a human acts as a dynamic coverage supervisor, i.e., they use hand gestures so as to direct the selection of regions which ought to be surveyed by the robot.
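Dynamic coverage in the sense used above (sweeping a sensing footprint through all points of a domain over time) can be illustrated with toy bookkeeping on a grid: each visited position raises the coverage level of nearby cells until every cell reaches a goal level. This is a simplified sketch with invented parameters, not the dissertation's hybrid controller:

```python
import numpy as np

def sweep_coverage(path, shape, radius, goal=1.0, rate=0.25):
    """Accumulate coverage over a 2D grid as an agent traverses `path`
    (a sequence of (x, y) cell positions). Cells within `radius` of the
    agent gain `rate` per step, saturating at `goal`. The domain is
    dynamically covered once every cell reaches `goal`."""
    cov = np.zeros(shape)
    ys, xs = np.indices(shape)
    for (px, py) in path:
        # circular sensing footprint around the agent's position
        mask = (xs - px) ** 2 + (ys - py) ** 2 <= radius ** 2
        cov[mask] = np.minimum(goal, cov[mask] + rate)
    return cov
```

An energy-aware scheduler like the one described would additionally interrupt `path` with returns to a charging station and repartition the remaining uncovered cells among agents.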
Second, the concept of situational awareness is extended to networks consisting of humans working in close proximity with aerial or space robots. In this work, the robot acts as an assistant to a human attempting to complete a set of interdependent and spatially separated multitasking objectives. The human wears an augmented reality display and the robot must learn the human's task locations online and broadcast camera views of these tasks to the human. The locations of tasks are learned using a parallel implementation of expectation maximization of Gaussian mixture models. The selection of tasks from this learned set is executed by a Markov Decision Process which is trained using Q-learning by the human. This method for robot task selection is compared against a supervised method in IRB approved (HUM00145810) experimental trials with 24 human subjects.
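The dissertation trains its task-selection Markov Decision Process with Q-learning from human feedback. A generic tabular Q-learning update (standard textbook form, not the author's code; the reward `r` stands in for the human's feedback signal) looks like:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step on a dict-of-dicts table:
    Q[state][action] -> estimated value. `alpha` is the learning rate,
    `gamma` the discount factor; both values here are illustrative."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    # move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
```

In the setting described, a state might encode which task the human is attending to, and actions would be candidate tasks for the robot to broadcast camera views of.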
This dissertation concludes by discussing an additional case study, by the author, in Bayesian-inferred path planning. In addition, open problems in dynamic coverage and human-robot interaction are discussed so as to present an avenue for future work.
PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/155147/1/wbentz_1.pd
Supporting Multi-User Interaction in Co-Located and Remote Augmented Reality by Improving Reference Performance and Decreasing Physical Interference
One of the most fundamental components of our daily lives is social interaction, ranging from simple activities, such as purchasing a donut in a bakery on the way to work, to complex ones, such as instructing a remote colleague how to repair a broken automobile. While we interact with others, various challenges may arise, such as miscommunication or physical interference. In a bakery, a clerk may misunderstand the donut at which a customer was pointing due to the uncertainty of their finger direction. In a repair task, a technician may remove the wrong bolt and accidentally hit another user while replacing broken parts due to unclear instructions and lack of attention while communicating with a remote advisor.
This dissertation explores techniques for supporting multi-user 3D interaction in augmented reality in a way that addresses these challenges. Augmented Reality (AR) refers to interactively overlaying geometrically registered virtual media on the real world. In particular, we address how an AR system can use overlaid graphics to assist users in referencing local objects accurately and remote objects efficiently, and to prevent co-located users from physically interfering with each other. My thesis is that our techniques can provide more accurate referencing for co-located users, more efficient referencing for remote users, and less interference among users.
First, we present and evaluate an AR referencing technique for shared environments that is designed to improve the accuracy with which one user (the indicator) can point out a real physical object to another user (the recipient). Our technique is intended for use in otherwise unmodeled environments in which objects in the environment, and the hand of the indicator, are interactively observed by a depth camera, and both users wear tracked see-through displays. This technique allows the indicator to bring a copy of a portion of the physical environment closer and indicate a selection in the copy. At the same time, the recipient gets to see the indicator's live interaction represented virtually in another copy that is brought closer to the recipient, and is also shown the mapping between their copy and the actual portion of the physical environment. A formal user study confirms that our technique performs significantly more accurately than comparison techniques in situations in which the participating users have sufficiently different views of the scene.
Second, we extend the idea of using a copy (virtual replica) of a physical object to help a remote expert assist a local user in performing a task in the local user's environment. We develop an approach that uses Virtual Reality (VR) or AR for the remote expert, and AR for the local user. It allows the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those objects and to indicate actions on them. The expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains. We compared our approach with another 3D approach that also uses virtual replicas, in which the remote expert identifies corresponding pairs of points to align on a pair of objects, and with a 2D approach in which the expert uses a 2D tablet-based drawing system similar to sketching systems developed in prior work on remote assistance. The study shows the 3D demonstration approach to be faster than the others.
Third, we present an interference avoidance technique (Redirected Motion) intended to lessen the chance of physical interference among users with tracked hand-held displays, while minimizing their awareness that the technique is being applied. This interaction technique warps virtual space by shifting the virtual location of a user's hand-held display. We conducted a formal user study to evaluate Redirected Motion against other approaches that either modify what a user sees or hears, or restrict the interaction capabilities users have. Our study was performed using a game we developed, in which two players moved their hand-held displays rapidly in the space around a shared gameboard. Our analysis showed that Redirected Motion effectively and imperceptibly kept players further apart physically than the other techniques.
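The space-warping idea behind Redirected Motion can be sketched as a proximity-dependent offset applied to the rendered position of a hand-held display. The sign, linear gain, and parameter names below are our own illustrative assumptions, not the dissertation's actual mapping:

```python
import math

def redirected_position(true_pos, other_pos, min_gap=0.5, gain=0.3):
    """Return the virtual (rendered) position for a display whose real
    position is `true_pos`. When the two users' devices come closer
    than `min_gap` (meters), the rendered position is shifted along
    the separation vector, nudging users to steer their real hands
    apart; otherwise the real position is passed through unchanged."""
    d = [a - b for a, b in zip(true_pos, other_pos)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist == 0.0 or dist >= min_gap:
        return list(true_pos)        # no warp needed
    shift = gain * (min_gap - dist)  # grows as the gap shrinks
    return [a + shift * c / dist for a, c in zip(true_pos, d)]
```

Keeping `gain` small is one way to pursue the imperceptibility goal described above: the warp stays below what users readily notice while still separating them.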
These interaction techniques were implemented using an extensible programming framework we developed for supporting a broad range of multi-user immersive AR applications. This framework, Goblin XNA, integrates a 3D scene graph with support for 6DOF tracking, rigid body physics simulation, networking, shaders, particle systems, and 2D user interface primitives.
In summary, we showed that our referencing approaches can enhance multi-user AR by improving accuracy for co-located users and increasing efficiency for remote users. In addition, we demonstrated that our interference-avoidance approach can lessen the chance of unwanted physical interference between co-located users, without their being aware of its use.