FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation
FlightGoggles is a photorealistic sensor simulator for perception-driven
robotic vehicles. The key contributions of FlightGoggles are twofold. First,
FlightGoggles provides photorealistic exteroceptive sensor simulation using
graphics assets generated with photogrammetry. Second, it provides the ability
to combine (i) synthetic exteroceptive measurements generated in silico in real
time and (ii) vehicle dynamics and proprioceptive measurements generated in
motion by vehicle(s) in a motion-capture facility. FlightGoggles is capable of
simulating a virtual-reality environment around autonomous vehicle(s). While a
vehicle is in flight in the FlightGoggles virtual reality environment,
exteroceptive sensors are rendered synthetically in real time while all complex
extrinsic dynamics are generated organically through the natural interactions
of the vehicle. The FlightGoggles framework allows researchers to
accelerate development by circumventing the need to estimate complex and
hard-to-model interactions such as aerodynamics, motor mechanics, battery
electrochemistry, and behavior of other agents. The ability to perform
vehicle-in-the-loop experiments with photorealistic exteroceptive sensor
simulation facilitates novel research directions involving, e.g., fast and
agile autonomous flight in obstacle-rich environments, safe human interaction,
and flexible sensor selection. FlightGoggles has been utilized as the main
testbed for selecting the nine teams that advanced in the AlphaPilot autonomous drone
racing challenge. We survey approaches and results from the top AlphaPilot
teams, which may be of independent interest.

Comment: Initial version appeared at IROS 2019. Supplementary material can be
found at https://flightgoggles.mit.edu. Revision includes description of new
FlightGoggles features, such as a photogrammetric model of the MIT Stata
Center, new rendering settings, and a Python API.
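The vehicle-in-the-loop idea described above, real dynamics coming from a vehicle in a motion-capture arena while exteroceptive sensors are rendered in silico from its tracked pose, can be sketched roughly as follows. All names (`Pose`, `render_camera`, `vehicle_in_the_loop_step`) are illustrative stand-ins invented for this sketch, not the actual FlightGoggles API:

```python
# Hypothetical sketch of a vehicle-in-the-loop cycle: the real vehicle
# supplies dynamics and proprioception; only the exteroceptive measurement
# (here, a camera frame) is generated synthetically from the tracked pose.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float    # position in the arena (metres)
    yaw: float  # heading (radians)

def render_camera(pose: Pose) -> dict:
    """Stand-in for photorealistic rendering of the virtual scene at `pose`."""
    return {"pose": pose,
            "image": f"frame@({pose.x:.1f},{pose.y:.1f},{pose.z:.1f})"}

def vehicle_in_the_loop_step(mocap_pose: Pose) -> dict:
    # 1. The pose arrives from the motion-capture system each cycle.
    # 2. A synthetic sensor measurement is rendered at that pose; in a real
    #    system it would be fed back to the vehicle's perception stack.
    return render_camera(mocap_pose)

frame = vehicle_in_the_loop_step(Pose(1.0, 2.0, 1.5, 0.0))
```

The point of the structure is that aerodynamics, motor mechanics, and battery behavior never need to be modeled: they are supplied "for free" by the physical vehicle, while the environment around it is entirely virtual.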
Psychologically based Virtual-Suspect for Interrogative Interview Training
In this paper, we present a Virtual-Suspect system which can be used to train
inexperienced law enforcement personnel in interrogation strategies. The system
supports different scenario configurations based on historical data. The
responses presented by the Virtual-Suspect are selected based on the
psychological state of the suspect, which can be configured as well.
Furthermore, each interrogator's statement affects the Virtual-Suspect's
current psychological state, which may lead the interrogation in different
directions. In addition, the model takes into account the context in which the
statements are made. Experiments with 24 subjects demonstrate that the
Virtual-Suspect's behavior is similar to that of a human who plays the role of
the suspect.
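As a rough illustration of the kind of model the abstract describes (not the paper's actual formulation), one can treat the suspect's psychological state as a single cooperativeness score that each interrogator statement shifts, with the response class then selected from the current state. The statement types, effect sizes, and thresholds below are all invented for the sketch:

```python
# Illustrative state-driven response selection: each interrogator statement
# updates the suspect's cooperativeness (0..10), and the reply class is
# chosen from the resulting state. All numbers here are hypothetical.

STATEMENT_EFFECTS = {   # hypothetical effect of each statement type
    "empathetic": +2,
    "neutral": 0,
    "accusatory": -3,
}

def update_state(cooperativeness: int, statement_type: str) -> int:
    shifted = cooperativeness + STATEMENT_EFFECTS[statement_type]
    return max(0, min(10, shifted))   # clamp to the 0..10 scale

def select_response(cooperativeness: int) -> str:
    if cooperativeness > 6:
        return "admission"
    if cooperativeness > 3:
        return "evasion"
    return "denial"

state = 5  # configurable initial psychological state
for stmt in ["accusatory", "empathetic", "empathetic"]:
    state = update_state(state, stmt)
response = select_response(state)
```

A real system would additionally condition both the update and the selection on conversational context, as the abstract notes, rather than on the scalar state alone.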
A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally we provide an exhaustive comparison of the most relevant approaches in the field.
Virtual Meeting Rooms: From Observation to Simulation
Virtual meeting rooms are used to simulate real meeting behavior: they can show how people behave during conversations, how they gesture, move their heads and bodies, and direct their gaze. They are used for visualising models of meeting behavior, and they can be used for the evaluation of these models. They can also show the effects of controlling certain parameters on behavior, and support experiments that examine the effect on communication when various channels of information (speech, gaze, gesture, posture) are switched off or manipulated in other ways. The paper presents the various stages in the development of a virtual meeting room and illustrates its uses by presenting results of experiments on whether human judges can infer conversational roles in a virtual meeting situation when they only see the head movements of participants in the meeting.
Realising intelligent virtual design
This paper presents a vision and focus for the CAD Centre research: the Intelligent Design Assistant (IDA). The vision is based upon the assumption that the human and computer can operate symbiotically, with the computer providing support for the human within the design process. Recently however the focus has been towards the development of integrated design platforms that provide general support irrespective of the domain, to a number of distributed collaborative designers. This is illustrated within the successfully completed Virtual Reality Ship (VRS) virtual platform, and the challenges are discussed further within the NECTISE, SAFEDOR and VIRTUE projects
Feeling crowded yet?: Crowd simulations for VR
With advances in virtual reality technology and its multiple applications, the need for believable, immersive virtual environments is increasing. Even though current computer graphics methods allow us to develop highly realistic virtual worlds, the main element failing to enhance presence is autonomous groups of human inhabitants. A great number of crowd simulation techniques have emerged in the last decade, but critical details in the crowd's movements and appearance do not meet the standards necessary to convince VR participants that they are present in a real crowd. In this paper, we review recent advances in the creation of immersive virtual crowds and discuss areas that require further work to turn these simulations into more fully immersive and believable experiences.