
    Development of a Physics-Aware Dead Reckoning Mechanism for Distributed Interactive Applications

    Distributed Interactive Applications (DIAs) are a class of software that allow geographically remote users to interact within a shared virtual environment. Many DIAs seek to present a rich and realistic virtual world to users, both on a visual and behavioural level. A relatively recent addition to virtual environments (both distributed and single user) to achieve the latter has been the simulation of realistic physical phenomena between objects in the environment. However, the application of physics simulation to virtual environments in DIAs currently lags that of single user environments. This is primarily due to the unavailability of entity state update mechanisms which can maintain consistency in such physics-rich environments. The difference is particularly evident in applications built on a peer-to-peer architecture, as a lack of a single authority presents additional challenges in synchronising the state of shared objects while also presenting a responsive simulation. This thesis proposes a novel state maintenance mechanism for physics-rich environments in peer-to-peer DIAs composed of two parts: a dynamic authority scheme for shared objects, and a physics-aware dead reckoning model with an adaptive error threshold. The first part is intended to place a bound on the overall inconsistency present in shared objects, while the second is implemented to minimise the instantaneous inconsistency during users’ interactions with shared objects. A testbed application is also described, which is used to validate the performance of the proposed mechanism. The state maintenance mechanism is implemented for a single type of physics-aware application, and demonstrates a marked improvement in consistency for that application. However, several flexible terms are described in its implementation, as well as their potential relevance to alternative applications.
Finally, it should be noted that the physics-aware dead reckoning model does not depend on the authority scheme, and can therefore be employed with alternative authority schemes.
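The two ingredients described above can be sketched in a few lines. The second-order extrapolation is the standard dead reckoning model; the speed-scaled threshold (`base_threshold`, `k`) is a hypothetical stand-in for the thesis's adaptive error threshold, not its actual formulation:

```python
import math

def dead_reckon(pos, vel, acc, dt):
    """Second-order dead reckoning: extrapolate position from the
    last transmitted state (position, velocity, acceleration)."""
    return tuple(p + v * dt + 0.5 * a * dt * dt
                 for p, v, a in zip(pos, vel, acc))

def needs_update(true_pos, predicted_pos, speed,
                 base_threshold=0.5, k=0.1):
    """Send a state update when the error between the true and
    extrapolated positions exceeds a threshold that tightens as the
    object moves faster (illustrative adaptive rule)."""
    error = math.dist(true_pos, predicted_pos)
    threshold = base_threshold / (1.0 + k * speed)
    return error > threshold
```

Each peer would run `dead_reckon` for remote objects between updates, and `needs_update` for objects it currently holds authority over.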

    Real-Time Virtual Humans

    The last few years have seen great maturation in the computation speed and control methods needed to portray 3D virtual humans suitable for real interactive applications. We first describe the state of the art, then focus on the particular approach taken at the University of Pennsylvania with the Jack system. Various aspects of real-time virtual humans are considered, such as appearance and motion, interactive control, autonomous action, gesture, attention, locomotion, and multiple individuals. The underlying architecture consists of a sense-control-act structure that permits reactive behaviors to be locally adaptive to the environment, and a PaT-Net parallel finite-state machine controller that can be used to drive virtual humans through complex tasks. We then argue for a deep connection between language and animation and describe current efforts in linking them through two systems: the Jack Presenter and the JackMOO extension to LambdaMOO. Finally, we outline a Parameterized Action Representation for mediating between language instructions and animated actions.
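The idea of a parallel finite-state-machine controller driving behaviour can be illustrated with a toy sketch. The class and transition table below are invented for illustration; they are not the actual PaT-Net machinery of the Jack system:

```python
class ParallelFSM:
    """Several independent state machines tick in parallel; each maps
    (state, percept) -> (next_state, action), and unmatched percepts
    leave a machine in its current state. A crude analogue of
    parallel behaviour nets."""
    def __init__(self, nets):
        # nets: list of (start_state, transition_table) pairs
        self.nets = [{"state": s, "table": t} for s, t in nets]

    def step(self, percept):
        actions = []
        for net in self.nets:
            nxt, action = net["table"].get(
                (net["state"], percept), (net["state"], None))
            net["state"] = nxt
            if action is not None:
                actions.append(action)
        return actions
```

Because each net advances independently on every tick, one machine can handle gaze while another handles locomotion, which is the kind of local reactivity the sense-control-act structure is after.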

    Development of Immersive and Interactive Virtual Reality Environment for Two-Player Table Tennis

    Although the history of Virtual Reality (VR) is only about half a century old, all kinds of technologies in the VR field are developing rapidly. VR is a computer-generated simulation that replaces or augments the real world by various media. In a VR environment, participants have a perception of “presence”, which can be described by the sense of immersion and intuitive interaction. One of the major VR applications is in the field of sports, in which a life-like sports environment is simulated, and the body actions of players can be tracked and represented by using VR tracking and visualisation technology. In the entertainment field, exergaming that merges video games with physical exercise activities by employing tracking or even 3D display technology can be considered as a small scale VR. For the research presented in this thesis, a novel realistic real-time table tennis game combining immersive, interactive and competitive features is developed. The implemented system integrates the InterSense tracking system, SwissRanger 3D camera and a three-wall rear projection stereoscopic screen. The InterSense tracking system is based on ultrasonic and inertia sensing techniques which provide fast and accurate 6-DOF (i.e. six degrees of freedom) tracking information of four trackers. Two trackers are placed on the two players’ heads to provide the players’ viewing positions. The other two trackers are held by players as the racquets. The SwissRanger 3D camera is mounted on top of the screen to capture the player’

    Dynamic Speed and Separation Monitoring with On-Robot Ranging Sensor Arrays for Human and Industrial Robot Collaboration

    This research presents a flexible and dynamic implementation of a Speed and Separation Monitoring (SSM) safety measure that optimizes the productivity of a task while ensuring human safety during Human-Robot Collaboration (HRC). Unlike the standard static/fixed demarcated 2D safety zones based on 2D scanning LiDARs, this research presents a dynamic sensor setup that changes the safety zones based on the robot pose and motion. The focus of this research is the implementation of a dynamic SSM safety configuration using Time-of-Flight (ToF) laser-ranging sensor arrays placed around the centers of the links of a robot arm. It investigates the viability of on-robot exteroceptive sensors for implementing SSM as a safety measure. Here the implementation of varying dynamic SSM safety configurations based on approaches of measuring human-robot separation distance and relative speeds using the sensor modalities of ToF sensor arrays, a motion-capture system, and a 2D LiDAR is shown. This study presents a comparative analysis of the dynamic SSM safety configurations in terms of safety, performance, and productivity. A system of systems (cyber-physical system) architecture for conducting and analyzing the HRC experiments was proposed and implemented. The robots, objects, and human operators sharing the workspace are represented virtually as part of the system by using a digital-twin setup. This system was capable of controlling the robot motion, monitoring human physiological response, and tracking the progress of the collaborative task. This research conducted experiments with human subjects performing a task while sharing the robot workspace under the proposed dynamic SSM safety configurations. The experiment results showed a preference for the use of ToF sensors and motion capture rather than the 2D LiDAR currently used in the industry. The human subjects felt safe and comfortable using the proposed dynamic SSM safety configuration with ToF sensor arrays.
The results for a standard pick and place task showed up to a 40% increase in productivity in comparison to a 2D LiDAR.
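The separation-distance side of SSM can be made concrete with the protective-distance idea from ISO/TS 15066, shown here in a simplified form. The parameter values and the flat margin `c` are illustrative, not the study's calibration:

```python
def protective_separation(v_h, v_r, t_react, t_stop, s_stop, c=0.1):
    """Simplified protective separation distance (metres): human
    travel while the robot reacts and brakes, robot travel during
    its reaction time, robot stopping distance, plus a margin."""
    return (v_h * (t_react + t_stop)  # human approach until robot halts
            + v_r * t_react           # robot motion before braking
            + s_stop                  # robot stopping distance
            + c)                      # sensing/intrusion margin

def may_continue(measured_separation, v_h, v_r, t_react, t_stop, s_stop):
    """Robot keeps its current speed only while the measured
    human-robot distance exceeds the protective distance."""
    return measured_separation >= protective_separation(
        v_h, v_r, t_react, t_stop, s_stop)
```

With on-robot ToF arrays, `measured_separation` would come from the nearest range reading around each link, so the effective safety zone follows the robot pose instead of a fixed 2D floor plan.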

    Follow the Leader: The Role of Local and Global Visual Information for Keeping Distance in Interpersonal Coordination

    Despite many years of research into human movement, how humans deal with information in dynamical situations is still subject to debate. The current research programme examined an individual’s ability to coordinate their actions with others in invasion sports (‘interact-ability’) using the ecological dynamics theoretical framework to address this general aim. In essence, interact-ability describes how humans make sense of their sensory world in order to tie in (some form of) action output to serve goal-directed behaviour. A series of experimental studies examined the law-like relationship between agent and environment (cf., perception-action) when one has to coordinate with others. A virtual reality task was chosen that involved maintaining distance in locomotion: follow the leader. The first study examined whether optical expansion by itself would enable distance keeping in a follow-the-leader locomotion task in the forward-backward direction. In one condition, participants coordinated their locomotion with an expanding and compressing sphere, whilst in another with a fully animated avatar (i.e., with moving limbs). The coordination of the follower with the leader was analysed using response times (RT) and the point-estimate relative phase (φ) to quantify the temporal synchrony. Additionally, the spatial accuracy was estimated by testing to what extent the rate of change in visual angle was nulled over time. Findings showed decreased temporal synchrony, but no significant decrease in spatial accuracy, when no limb movement was present in the leader stimulus (i.e., sphere compared to avatar). Additionally, it appeared that regulating distance based on global motion information was affected by a direction-based visual angle bias. The second study then investigated if information for action could be situated along a spectrum from local (i.e., segmental) to global (i.e., expansion-compression) visual information sources.
It was also examined how the perception-action coupling was mediated by key task constraints (i.e., regularity and viewpoint). Extending the analysis procedures of the first study, the virtual distance between follower and leader was estimated to further quantify the spatial accuracy. It was put forward that followers may benefit most from flexibly switching between information sources governed by task constraints. Participants followed more irregular leaders better when local motion information was available. Although information for action may not be easily situated along a linear spectrum, various relative benefits were put forward for each component of the proposed spectrum. In the third and final study, these follower-leader dynamics were examined in a lateral follow-the-leader task. It was shown that a point-light display provided information to tighten the temporal synchrony. However, as the spatial accuracy was not significantly affected by the information presented, it may be that early responses were as often facilitating as detrimental for keeping distance. Overall, this body of work may contribute towards understanding how action and perception are linked in dynamic interpersonal situations. Local sources of information were shown to contribute to a tightened temporal synchronisation, and global sources of information were consistently shown to provide pertinent distance-related information. The main contribution is substantiated by showing how agents in a whole-body interaction task can flexibly use different sources of information and do so as a function of task constraints.
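The point-estimate relative phase used to quantify temporal synchrony can be computed roughly as below; the nearest-peak matching and the sign convention (positive = follower lags) are simplifying assumptions, not necessarily the analysis pipeline used in the studies:

```python
def point_estimate_relative_phase(leader_peaks, follower_peaks):
    """For each leader movement reversal, express the follower's
    nearest reversal as a fraction of the ongoing leader cycle,
    in degrees (0 = perfect synchrony)."""
    phases = []
    for i in range(len(leader_peaks) - 1):
        period = leader_peaks[i + 1] - leader_peaks[i]
        nearest = min(follower_peaks,
                      key=lambda t: abs(t - leader_peaks[i]))
        phases.append(360.0 * (nearest - leader_peaks[i]) / period)
    return phases
```

For example, a follower who reverses a quarter-cycle after a one-second-period leader yields a relative phase of 90 degrees at each reversal.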

    Automated cinematography for games.

    This thesis deals with the issue of automated cinematography for games. In 3D videogames, the system must continuously provide the player with a view of the virtual world and its characters. The difficulty is that contrary to the cinema the actors are unpredictable. In particular the player continuously modifies the virtual environment by moving objects or by interacting with the other non-playable characters. The latter, because of their more and more sophisticated artificial intelligence, can have behaviours that were not predicted by the developers themselves (such as the complex behaviours that emerge from the combination of basic behaviours). Some games have solved the problem by predefining the possible positions of the camera during the game development while some others give control of the camera system to the player, so that he can find by himself the best possible view. However, I aim at finding an intermediate solution, where the camera system would automatically generate both engaging and usable views. The camera system should be able to adapt to every situation of the virtual world without user intervention, and should allow the player to interact with his surroundings in the most efficient way. Such a camera system could be of interest for the game industry. Currently, in many games, the camera movements, positions, etc. are set using scripts manually written by the developers. Having a fully automated system could potentially save hours of work. This system could also be used for the 3D virtual worlds or “3D chats” on the Internet. For example, the avatars – the characters played by the users – could be “filmed” in a different way depending on the mood of the users. I aim to develop techniques which can be generalised to these and other areas of application. Existing approaches to automated cinematography will be reviewed – focusing on the constraint-based and idiom-based ones – in order to highlight the strengths and limitations of each.
A solution to the problems found will be proposed in the form of a camera system implemented using Adobe Director. It will be based on “rules” derived from existing cinematographic knowledge. One of my aims will also be to show that using generic rules can give results close to the idiom-based approaches with the convenience of being able to adapt to any type of scene.
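As a flavour of what a generic cinematographic rule looks like as code, here is one possible rule, an external over-the-shoulder framing, written as a plain function. The geometry and parameters are invented for illustration; the thesis's actual rules (implemented in Adobe Director) will differ:

```python
import math

def over_shoulder_camera(subject, target, distance=3.0, height=1.7):
    """Place the camera behind the subject on the subject-target
    line so both remain in frame (a classic external framing).
    subject/target are (x, y) positions on the ground plane."""
    dx, dy = target[0] - subject[0], target[1] - subject[1]
    norm = math.hypot(dx, dy) or 1.0  # avoid division by zero
    return (subject[0] - distance * dx / norm,
            subject[1] - distance * dy / norm,
            height)
```

A rule-based system would choose among many such placement functions and then test constraints such as occlusion before committing to a shot.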

    A goalkeeper’s performance in stopping free kicks reduces when the defensive wall blocks their initial view of the ball

    Free kicks are an important goal scoring opportunity in football. It is an unwritten rule that the goalkeeper places a wall of defending players with the aim of making scoring harder for the attacking team. However, the defensive wall can occlude the movements of the kicker, as well as the initial part of the ball trajectory. Research on one-handed catching suggests that a ball coming into view later will likely delay movement initiation and possibly affect performance. Here, we used virtual reality to investigate the effect of the visual occlusion of the initial ball trajectory by the wall on the performance of naïve participants and skilled goalkeepers. We showed that movements were initiated significantly later when the wall was present, but not by the same amount as the duration of occlusion (~200ms, versus a movement delay of ~70-90ms); movements were thus initiated sooner after the ball came into view, based on less accumulated information. For both naïve participants and skilled goalkeepers this delayed initiation significantly affected performance (i.e., 3.6cm and 1.5cm larger spatial hand error, respectively, not differing significantly between the groups). These performance reductions were significantly larger for shorter flight times, reaching increased spatial errors of 4.5cm and 2.8cm for both groups, respectively. Further analyses showed that the wall-induced performance reduction did not differ significantly between free kicks with and without sideward curve. The wall influenced early movement biases, but only for free kicks with curve in the same direction as the required movement; these biases were away from the final ball position, thus hampering performance. Our results cannot suggest an all-out removal of the wall (this study only considered one potential downside), but should motivate goalkeepers to continuously evaluate whether placing a wall is their best option.
This seems most pertinent when facing expert free kick takers for whom the wall does not act as a block (i.e., whose kicks consistently scale the wall).

    Interception of vertically approaching objects: temporal recruitment of the internal model of gravity and contribution of optical information

    Introduction: Recent views posit that precise control of interceptive timing can be achieved by combining on-line processing of visual information with predictions based on prior experience. Indeed, for interception of free-falling objects under gravity's effects, experimental evidence shows that time-to-contact predictions can be derived from an internal gravity representation in the vestibular cortex. However, whether the internal gravity model is fully engaged at the target motion outset or reinforced by visual motion processing at later stages of motion is not yet clear. Moreover, there is no conclusive evidence about the relative contribution of internalized gravity and optical information in determining time-to-contact estimates. Methods: We sought to gain insight on this issue by asking 32 participants to intercept free-falling objects approaching directly from above in virtual reality. Object motion had durations between 800 and 1100 ms and could be either congruent with gravity (1 g accelerated motion) or not (constant velocity or -1 g decelerated motion). We analyzed the accuracy and precision of the interceptive responses, and fitted them to Bayesian regression models, which included predictors related to the recruitment of a priori gravity information at different times during the target motion, as well as predictors based on available optical information. Results: Consistent with the use of internalized gravity information, interception accuracy and precision were significantly higher with 1 g motion. Moreover, Bayesian regression indicated that interceptive responses were predicted very closely by assuming engagement of the gravity prior 450 ms after motion onset, and that adding a predictor related to on-line processing of optical information improved the model's predictive power only slightly.
Discussion: Thus, engagement of a priori gravity information depended critically on the processing of the first 450 ms of visual motion information, exerting a predominant influence on interceptive timing compared to continuously available optical information. Finally, these results may support a parallel processing scheme for the control of interceptive timing.
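The contrast between a gravity-based prediction and a purely optical first-order estimate can be written down directly; this closed-form sketch is only the kinematics, not the study's Bayesian regression models:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ttc_gravity(height, v0=0.0, g=G):
    """Time to contact for a target accelerating under gravity:
    solve height = v0*t + 0.5*g*t**2 for t."""
    return (-v0 + math.sqrt(v0 * v0 + 2.0 * g * height)) / g

def ttc_constant_velocity(height, v):
    """First-order estimate that ignores acceleration, as a purely
    optical (tau-like) strategy would."""
    return height / v
```

For a 1 g target, the gravity-based prediction arrives earlier than the constant-velocity estimate computed from the current speed, which is one way an internal gravity model can sharpen interceptive timing.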
