    Analysing the visual dynamics of spatial morphology

    Recently there has been a revival of interest in visibility analysis of architectural configurations. The new analyses rely heavily on computing power and statistical analysis, two factors which, according to the postpositivist school of geography, should immediately cause us to be wary. The danger, they would suggest, lies in applying a reductionist formal mathematical description in order to 'explain' multilayered sociospatial phenomena. The author presents an attempt to rationalise how visibility analysis can be used to explore architecture in this multilayered context by considering the dynamics that lead to the visual experience. In particular, it is recommended that we assess the visual process of inhabitation, rather than assess visibility in vacuo. In order to investigate the possibilities and limitations of the methodology, an urban environment is analysed by means of an agent-based model of visual actors within the configuration. The results obtained from the model are compared with actual pedestrian movement and other analytic measurements of the area: the agents correlate well both with human movement patterns and with configurational relationships as analysed by space-syntax methods. The application of both methods in combination improves on the correlation with observed movement of either alone, which in turn implies that an understanding of both the process of inhabitation and the principles of configuration may play a crucial role in determining the social usage of space.
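
    The combination claim can be made concrete with ordinary least squares: regress observed pedestrian gate counts on the agent counts and a space-syntax measure jointly, and compare the fit with each single-predictor model. The following Python sketch uses placeholder arrays; the variable names and numbers are illustrative assumptions, not data from the paper.

        import numpy as np

        def r_squared(X, y):
            """R^2 of an ordinary least-squares fit of y on the columns of X
            plus an intercept."""
            A = np.column_stack([np.ones(len(y)), X])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            return 1.0 - resid.var() / y.var()

        # Placeholder gate counts: in a real study these would come from
        # observation, the agent simulation, and axial/VGA analysis.
        agents = np.array([12., 30., 7., 44., 25., 18., 39., 9.])
        syntax = np.array([1.1, 2.0, 0.8, 2.6, 1.7, 1.3, 2.4, 0.9])
        observed = np.array([14., 33., 6., 47., 22., 20., 41., 11.])

        print("agents only:", r_squared(agents[:, None], observed))
        print("syntax only:", r_squared(syntax[:, None], observed))
        print("combined:   ", r_squared(np.column_stack([agents, syntax]), observed))

    The joint model's R^2 is never below either single-predictor fit, which is the sense in which combining the two measures can only sharpen the correlation with observed movement.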

    Encoding natural movement as an agent-based system: an investigation into human pedestrian behaviour in the built environment

    Gibson's ecological theory of perception has received considerable attention within the psychology literature, as well as in computer vision and robotics. However, few have applied Gibson's approach to agent-based models of human movement, because the ecological theory requires that individuals have a vision-based mental model of the world, and for large numbers of agents this becomes extremely expensive computationally. Thus, within current pedestrian models, path evaluation is based on calibration from observed data or on sophisticated but deterministic route-choice mechanisms; there is little open-ended behavioural modelling of human-movement patterns. One solution which allows individuals rapid concurrent access to the visual information within an environment is an 'exosomatic visual architecture', where the connections between mutually visible locations within a configuration are prestored in a lookup table. Here we demonstrate that, with the aid of an exosomatic visual architecture, it is possible to develop behavioural models in which movement rules originating from Gibson's principle of affordance are utilised. We apply large numbers of agents programmed with these rules to a built-environment example and show that, by varying parameters such as destination selection, field of view, and steps taken between decision points, it is possible to generate aggregate movement levels very similar to those found in an actual building context.
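
    A minimal sketch of the lookup-table idea, assuming a 2-D grid of open cells: visibility is computed once, offline, and shared by all agents, so no agent needs its own vision model. The helper names, the affordance-style random choice, and the 170-degree default field of view are illustrative assumptions, not the paper's implementation.

        import math
        import random

        def precompute_visibility(open_cells, line_of_sight):
            """Exosomatic lookup table: each open cell -> set of cells it sees."""
            return {a: {b for b in open_cells if b != a and line_of_sight(a, b)}
                    for a in open_cells}

        def in_fov(pos, heading_deg, target, fov_deg):
            """True if target lies within fov_deg degrees of the agent's heading."""
            angle = math.degrees(math.atan2(target[1] - pos[1], target[0] - pos[0]))
            diff = (angle - heading_deg + 180.0) % 360.0 - 180.0
            return abs(diff) <= fov_deg / 2.0

        def step(pos, heading_deg, vis_table, fov_deg=170.0, rng=random):
            """Affordance-style rule: turn toward a randomly chosen visible cell
            inside the field of view; assumes every cell sees at least one other."""
            ahead = [c for c in vis_table[pos] if in_fov(pos, heading_deg, c, fov_deg)]
            target = rng.choice(ahead or list(vis_table[pos]))
            new_heading = math.degrees(math.atan2(target[1] - pos[1],
                                                  target[0] - pos[0]))
            return target, new_heading

    Varying fov_deg and how many grid steps an agent walks before calling step again corresponds to the field-of-view and steps-between-decision-points parameters mentioned in the abstract.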

    Controlling Concurrent Change - A Multiview Approach Toward Updatable Vehicle Automation Systems

    The development of SAE Level 3+ vehicles (SAE, 2014) poses new challenges not only for functional development, but also for design and development processes. Such systems consist of a growing number of interconnected functional, hardware, and software components, making safety design increasingly difficult. Coping with emergent behavior at the vehicle level requires thorough systems engineering that enables traceability between the different design viewpoints, and ensuring this traceability is key to the efficient validation and verification of such systems. Formal models can in turn assist in keeping track of how the different viewpoints relate to each other and how the interplay of components affects the overall system behavior. Based on experience from the project Controlling Concurrent Change, this paper presents an approach to model-based integration and verification of a cause-effect chain for a component-based vehicle automation system. It reasons over a cross-layer model of the resulting system, which covers the necessary aspects of a design in individual architectural views, e.g. safety and timing. In the synthesis stage of integration, our approach is capable of inserting enforcement mechanisms into the design to ensure adherence to the model. We present a use-case description for an environment perception system, starting with a functional architecture, which is the basis for componentization of the cause-effect chain. By tying the vehicle architecture to the cross-layer integration model, we are able to map the reasoning done during verification to vehicle behavior.
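
    One way to picture the cross-layer idea is a toy integration check over a cause-effect chain, where each component carries annotations from two architectural views (timing and safety) and the integrator verifies an end-to-end budget. Everything below, from the component names to the millisecond figures, is a hypothetical illustration rather than the project's actual model.

        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            wcet_ms: float   # timing view: worst-case execution time
            asil: str        # safety view: integrity level, ordered A < B < C < D

        def verify_chain(chain, e2e_budget_ms, required_asil):
            """Check the cause-effect chain against both views at once."""
            latency = sum(c.wcet_ms for c in chain)
            # Lexicographic comparison works because ASIL levels order A < B < C < D.
            violations = [c.name for c in chain if c.asil < required_asil]
            return latency <= e2e_budget_ms, latency, violations

        chain = [Component("camera_driver", 5.0, "B"),
                 Component("object_detector", 40.0, "B"),
                 Component("situation_assessment", 15.0, "D"),
                 Component("trajectory_planner", 20.0, "D")]

        ok, latency, bad = verify_chain(chain, e2e_budget_ms=100.0, required_asil="B")
        print(f"timing ok: {ok} ({latency} ms); integrity violations: {bad}")

    In this picture, an enforcement mechanism would be a runtime monitor inserted wherever a check like verify_chain cannot discharge a budget statically.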

    Learning Generalized Reactive Policies using Deep Neural Networks

    We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new, problem instances. We show that a deep neural network can be used to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach significantly reduces the dependence of learning on handcrafted domain knowledge or feature selection. Instead, the GRP is trained from scratch using a set of successful execution traces. We show that our approach can also be used to automatically learn a heuristic function that can be used in directed search algorithms. We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and show that our approach facilitates learning complex decision-making policies and powerful heuristic functions with minimal human input. Videos of our results are available at goo.gl/Hpy4e3.
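
    In outline, a GRP of this kind is a classifier over actions trained by imitation on (state, expert-action) pairs drawn from successful execution traces. The PyTorch sketch below assumes a flat vector encoding of the problem instance and state and a small two-layer network; the sizes and architecture are illustrative guesses, not the paper's model.

        import torch
        import torch.nn as nn

        class GRP(nn.Module):
            """Maps an encoded (problem instance, state) vector to action logits."""
            def __init__(self, obs_dim, n_actions, hidden=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(obs_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, n_actions))

            def forward(self, obs):
                return self.net(obs)

        def train_step(policy, optimizer, obs_batch, action_batch):
            """One imitation step on observations and expert actions from traces."""
            loss = nn.functional.cross_entropy(policy(obs_batch), action_batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()

        policy = GRP(obs_dim=64, n_actions=5)
        optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    The same network body, given a scalar output head and regressed against cost-to-go along the traces, would yield a learned heuristic of the kind the abstract mentions for directed search.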