
    Synthetic vision and emotion calculation in intelligent virtual human modeling

    Virtual human techniques can already provide vivid and believable human behaviour in a growing range of scenarios. Virtual humans are expected to replace real humans in hazardous situations to undertake tests and feed back valuable information. This paper introduces a virtual human with a novel collision-based synthetic vision, a short-term memory model, and the capability to perform emotion calculation and decision making. A virtual character based on this model can 'see' what is in his field of view (FOV) and remember those objects. A group of affective computing equations is then introduced, and these equations are implemented in a proposed emotion calculation process to generate emotions for intelligent virtual humans.
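The abstract describes a set of affective computing equations driving an emotion calculation process. As a rough illustration of how such an update step might be structured, here is a minimal Python sketch in which emotion intensities decay over time and are excited by appraised objects in the field of view or in short-term memory; the dictionaries, decay factors, and weights are illustrative assumptions, not the paper's actual equations.

```python
# Hypothetical sketch of an emotion-update step of the kind the abstract
# describes: each emotion intensity decays over time and is excited by
# appraised stimuli the virtual human currently sees or remembers.
# All names and numbers are illustrative assumptions, not the paper's model.

DECAY = {"fear": 0.90, "joy": 0.95}          # per-step exponential decay
APPRAISAL = {                                 # stimulus -> emotion weights
    "fire":   {"fear": 0.8},
    "friend": {"joy": 0.6},
}

def update_emotions(emotions, perceived, memory):
    """One tick of the emotion-calculation loop.

    emotions  : dict emotion -> intensity in [0, 1]
    perceived : objects currently in the field of view
    memory    : objects recalled from short-term memory (weaker effect)
    """
    new = {e: v * DECAY[e] for e, v in emotions.items()}     # passive decay
    for obj, weight in [(o, 1.0) for o in perceived] + \
                       [(o, 0.3) for o in memory]:           # memory is weaker
        for emotion, gain in APPRAISAL.get(obj, {}).items():
            new[emotion] = min(1.0, new[emotion] + weight * gain)
    return new

state = {"fear": 0.1, "joy": 0.2}
state = update_emotions(state, perceived=["fire"], memory=["friend"])
print(state)   # fear rises sharply; joy rises slightly after decaying
```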

    A vision driven wayfinding simulation system based on the architectural features perceived in the office environment

    Human wayfinding in the built environment has been investigated extensively over the last 50 years. One major aspect of the outcome is the decision made at egresses based on the information perceived during wayfinding. Information acquired about the environment can be categorized into several types: verbal (information obtained from the reception, staff members, etc.), graphic (maps of the environment, signage showing a location or pointing to one, etc.), architectural (entrances, stairs, corridors, etc.), and spatial (spatial relationships of objects in the environment). Early analyses of indoor wayfinding suggested that signage and colour codes could provide landmarks, but adding these cues after construction can be futile. This suggests that architectural information has a significant influence on an individual's decision making. In most research, however, the function of architectural information has been underestimated; it was often treated merely as a constraint of the architectural space.

    The presented research aims at developing an agent-based system that can find a given destination in a virtual office building using artificial vision and cognition based on the architectural features of the built environment. During wayfinding, the agent's egress choices follow a model estimated from experimental data with real humans.

    Before running the experiments with human subjects, pre-experiments were conducted to investigate the conditions for vision research using standard LCD monitors. The thresholds obtained in the pre-experiments, for lighting in the virtual environment and for the testing environment, served as input to the design of the subsequent experiments. In the first experiment, subjects were asked to choose between two egresses in a sequence of isolated convex rooms. The architectural features of these rooms and their egresses were varied systematically: the room features were size and colour, while the egress features were colour, distance, angle and width. From the collected data a preference function was estimated for egress choice given the architectural features. In the second experiment the assignment was to find a destination in a virtual building and then return to the start. Subjects executed three different assignments with different locations for the destination and the start, and every assignment was repeated two to three times in succession. Each subject's routes were recorded. From these routes, the search strategies used for wayfinding were determined: Orientation Based, Architectural Features Based, Boundary Based, Random Choice, Minimum Rooms and Shortest Distance. A preference function was estimated for the next-room choice, based on the architectural features of the current room and the given familiarity with the environment.

    The implemented agent uses a simplified version of the virtual building model, which includes only the architectural elements and features relevant for vision-driven navigation, i.e. the type of egress, egress colour, egress width, room colour and room size. Room colour is converted into three levels of grey. The agent's wayfinding behaviour is validated against the hit ratio, the average visit frequency of each room, and the average total number of rooms visited.
The agent-based simulation system developed is not only an interpretation of the empirical findings of the research; it is also applicable for testing and evaluation in architectural design. After certain transformations, a CAD model of an office environment can be presented to the simulation system as input. By setting a wayfinding task, the agent can be employed to predict how individuals would behave in this office environment in reality. This helps architects improve their designs with regard to wayfinding efficiency and space utilization.
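As an illustration of the kind of preference function described above, the sketch below scores visible egresses on the features varied in the experiments (distance, angle, width, plus a brightness stand-in for colour) and converts the scores into choice probabilities with a multinomial logit. The feature coding and coefficients are invented for the example; the thesis estimates its own function from experimental data.

```python
import math

# Illustrative preference function over visible egresses: a linear utility
# on egress features followed by a softmax (multinomial logit). The
# coefficients below are invented, not the values estimated in the thesis.

BETA = {"distance": -0.4, "angle": -0.2, "width": 0.6, "brightness": 0.3}

def utility(egress):
    """Linear utility of one egress given its feature values."""
    return sum(BETA[f] * egress[f] for f in BETA)

def choice_probabilities(egresses):
    """Softmax over egress utilities (multinomial logit choice model)."""
    us = [utility(e) for e in egresses]
    m = max(us)                                  # shift for stability
    exps = [math.exp(u - m) for u in us]
    z = sum(exps)
    return [x / z for x in exps]

left  = {"distance": 3.0, "angle": 0.5, "width": 1.2, "brightness": 0.8}
right = {"distance": 5.0, "angle": 0.1, "width": 0.9, "brightness": 0.4}
print(choice_probabilities([left, right]))       # e.g. ~[0.67, 0.33]
```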

    Wayfinding in Complex Multi-storey Buildings: A vision-simulation-augmented wayfinding protocol study

    Wayfinding in complex multi-storey buildings often brings newcomers, and even some frequent visitors, uncertainty and stress. However, little is understood about wayfinding in 3D structures that involve inter-storey and inter-building travel. This paper presents a vision-simulation-augmented wayfinding protocol method for the study of such 3D structures, applying it to investigate pedestrians' wayfinding behaviour in general-purpose complex multi-storey buildings. Taking Passini's studies as a starting point, an exploratory quasi-experiment was developed and then conducted in a daily wayfinding context, adopting the wayfinding protocol method augmented with real-time vision simulation. The purpose is to identify people's natural wayfinding strategies in natural settings, for both frequent visitors and newcomers. It is envisioned that the findings of the study can inspire design solutions that support pedestrians' wayfinding in 3D indoor spaces. Using the new method and a new analytic framework, several findings were identified that differ from other wayfinding literature: (1) people seem to directly "make sense" of wayfinding settings; (2) people can translate recurring actions into unconscious operational behaviours; and (3) physical rotation and constrained views, rather than vertical travel itself, appear to be the main problems for the wayfinding process. Keywords: Wayfinding Protocol; Real-time Vision Simulation; 3D Indoor Space; Activity Theory; Structure of Wayfinding Process

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.

    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes descriptions of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
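The vehicle-in-the-loop pattern described above can be sketched as a simple loop: the motion-capture system supplies the real vehicle's pose, the renderer produces synthetic exteroceptive measurements for that pose, and the autonomy stack consumes them. All classes below are hypothetical stand-ins, not the actual FlightGoggles API (see https://flightgoggles.mit.edu).

```python
import time

# Minimal sketch of vehicle-in-the-loop simulation: real dynamics are
# measured by motion capture (in motio) while exteroception is rendered
# synthetically (in silico). Every class here is a hypothetical stand-in.

class MotionCapture:                 # stand-in for the mocap interface
    def latest_pose(self):
        return {"position": (0.0, 0.0, 1.0), "orientation": (1, 0, 0, 0)}

class PhotorealisticRenderer:        # stand-in for the simulator's renderer
    def render(self, pose):
        return {"rgb": b"...", "depth": b"..."}   # synthetic camera frames

class Autopilot:                     # stand-in for perception and control
    def step(self, sensors):
        return {"thrust": 0.5, "rates": (0.0, 0.0, 0.0)}

mocap, renderer, autopilot = MotionCapture(), PhotorealisticRenderer(), Autopilot()

for _ in range(3):                   # one iteration per rendered frame
    pose = mocap.latest_pose()       # real vehicle dynamics, measured
    sensors = renderer.render(pose)  # synthetic exteroceptive measurements
    command = autopilot.step(sensors)
    # here the command would be sent to the real vehicle's flight controller
    time.sleep(1 / 60)               # nominal 60 Hz loop
```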

    Enabling Self-aware Smart Buildings by Augmented Reality

    Conventional HVAC control systems are usually incognizant of the physical structures and materials of buildings. These systems merely follow pre-set HVAC control logic based on abstract building thermal-response models, which are rough approximations of the true physical models and ignore dynamic spatial variations in built environments. To enable more accurate and responsive HVAC control, this paper introduces the notion of "self-aware" smart buildings, in which buildings are able to explicitly construct physical models of themselves (e.g., incorporating building structures and materials, and thermal-flow dynamics). The question is how to enable self-aware buildings that automatically acquire dynamic knowledge of themselves. This paper presents a novel approach using augmented reality. The extensive user-environment interactions in augmented reality not only provide intuitive user interfaces for building systems, but can also capture the physical structures, and possibly materials, of buildings accurately enough to enable real-time building simulation and control. This paper presents a building system prototype incorporating augmented reality and discusses its applications.

    Comment: This paper appears in ACM International Conference on Future Energy Systems (e-Energy), 201
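To make the idea concrete, the following hedged sketch shows one way a building might use self-acquired structure and material estimates: a single-zone lumped RC thermal model stepped forward in time to predict the response to HVAC input. The single-zone model and all parameter values are illustrative assumptions, not the paper's prototype.

```python
# Hedged sketch, not the paper's system: a single-zone RC thermal model
# whose R and C would come from AR-captured geometry and material estimates.

def rc_step(T_in, T_out, q_hvac, R=0.05, C=2.0e6, dt=60.0):
    """One explicit-Euler step of a single-zone RC model.

    T_in   : indoor temperature [degC]
    T_out  : outdoor temperature [degC]
    q_hvac : HVAC heat input [W] (negative for cooling)
    R      : envelope thermal resistance [K/W], from captured materials
    C      : zone heat capacity [J/K], from captured geometry
    dt     : time step [s]
    """
    dT = ((T_out - T_in) / R + q_hvac) * dt / C
    return T_in + dT

# Simulate one hour of heating toward a 21 degC setpoint (bang-bang control).
T = 17.0
for _ in range(60):
    q = 3000.0 if T < 21.0 else 0.0
    T = rc_step(T, T_out=5.0, q_hvac=q)
print(round(T, 2))   # settles near the setpoint
```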