
    Literate modelling: capturing business knowledge with the UML

    At British Airways, we have found during several large OO projects documented using the UML that non-technical end-users, managers and business domain experts find it difficult to understand UML visual models. This leads to problems in requirements capture and review. To solve this problem, we have developed the technique of Literate Modelling. Literate Models are UML diagrams embedded in texts that explain the models. In this way, end-users, managers and domain experts gain a useful understanding of the models, whilst object-oriented analysts see precisely how the models define business requirements and imperatives. We discuss some early experiences with Literate Modelling at British Airways, where it was used extensively in their Enterprise Object Modelling initiative. We explain why Literate Modelling is viewed as one of the critical success factors for this significant project. Finally, we propose that Literate Modelling may be a valuable extension to many other object-oriented and non-object-oriented visual modelling languages.

    Automated Verification of Design Patterns with LePUS3

    Specification and visual modelling languages are expected to combine strong abstraction mechanisms with rigour, scalability, and parsimony. LePUS3 is a visual, object-oriented design description language axiomatized in a decidable subset of first-order predicate logic. We demonstrate how LePUS3 is used to formally specify a structural design pattern and prove ('verify') whether any Java 1.4 program satisfies that specification. We also show how LePUS3 specifications (charts) are composed and how they are verified fully automatically in the Two-Tier Programming Toolkit.
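
    As a loose, hypothetical analogy only (LePUS3 itself specifies Java programs in first-order logic, and none of the names below are LePUS3 syntax), this Python sketch checks one structural condition of a factory-style pattern against live classes via reflection: every creator defines a create() method whose result is a Product.

        # Toy structural check, loosely analogous to verifying a design-pattern
        # specification against a codebase. Not LePUS3; all names are invented.
        class Product: pass
        class WidgetProduct(Product): pass

        class WidgetCreator:
            def create(self) -> Product:
                return WidgetProduct()

        def satisfies_factory_spec(creators, product_type) -> bool:
            """Forall c in creators: c defines create() and create() yields a product_type."""
            return all(
                callable(getattr(c, "create", None))
                and isinstance(c().create(), product_type)
                for c in creators
            )

        print(satisfies_factory_spec([WidgetCreator], Product))  # True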

    A computer vision model for visual-object-based attention and eye movements

    This paper presents a new computational framework for modelling visual-object-based attention and attention-driven eye movements within an integrated system, in a biologically inspired approach. Attention operates at multiple levels of visual selection, by space, feature, object and group, depending on the nature of targets and visual tasks. Attentional shifts and gaze shifts are built upon common processing circuits and control mechanisms but are also separated by their different functional roles, working together to fulfil flexible visual selection tasks in complicated visual environments. The framework integrates the important aspects of human visual attention and eye movements, resulting in sophisticated performance in complicated natural scenes. The proposed approach aims to provide a useful visual selection system for computer vision, especially for use in cluttered natural visual environments. National Natural Science Foundation of China.
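
    The paper's framework is not reproduced here; the sketch below, with an invented random saliency map and parameters, only illustrates the generic select-and-shift loop that attention models commonly build on: pick the most salient location, then suppress it (inhibition of return) so attention moves on.

        import numpy as np

        # Minimal attention-shift loop: winner-take-all selection followed by
        # inhibition of return. The saliency map is random stand-in data.
        rng = np.random.default_rng(0)
        saliency = rng.random((32, 32))

        def next_fixations(sal, n_shifts=5, radius=4):
            sal = sal.copy()
            fixations = []
            yy, xx = np.mgrid[0:sal.shape[0], 0:sal.shape[1]]
            for _ in range(n_shifts):
                y, x = np.unravel_index(np.argmax(sal), sal.shape)
                fixations.append((y, x))
                # Inhibition of return: zero a disc around the attended spot.
                sal[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 0.0
            return fixations

        print(next_fixations(saliency))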

    Multisensory causal inference in the brain

    At any given moment, our brain processes multiple inputs from its different sensory modalities (vision, hearing, touch, etc.). In deciphering this array of sensory information, the brain has to solve two problems: (1) which of the inputs originate from the same object and should be integrated, and (2) for the sensations originating from the same object, how best to integrate them. Recent behavioural studies suggest that the human brain solves these problems using optimal probabilistic inference, known as Bayesian causal inference. However, how and where the underlying computations are carried out in the brain has remained unknown. By combining neuroimaging-based decoding techniques and computational modelling of behavioural data, a new study now sheds light on how multisensory causal inference maps onto specific brain areas. The results suggest that the complexity of neural computations increases along the visual hierarchy, and link specific components of the causal inference process with specific visual and parietal regions.
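
    As a concrete illustration of the inference described above, the sketch below computes the posterior probability that a visual and an auditory cue share a common cause under Gaussian likelihoods, then forms a model-averaged location estimate, in the spirit of Kording et al. (2007); all numbers (cue locations, noise levels, prior) are invented.

        import math

        # Bayesian causal inference for two cues. Invented example numbers:
        # visual cue at 0 deg, auditory cue at 6 deg, source prior N(0, var_p).
        x_v, x_a = 0.0, 6.0
        var_v, var_a, var_p = 2.0**2, 8.0**2, 15.0**2
        p_common = 0.5                                  # prior P(C = 1)

        def norm_pdf(x, mu, var):
            return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

        # Likelihood of both cues given one shared source, integrated out.
        denom = var_v * var_a + var_v * var_p + var_a * var_p
        like_c1 = math.exp(-0.5 * ((x_v - x_a) ** 2 * var_p
                                   + x_v ** 2 * var_a + x_a ** 2 * var_v) / denom) \
                  / (2 * math.pi * math.sqrt(denom))
        # Likelihood given two independent sources: product of marginals.
        like_c2 = norm_pdf(x_v, 0.0, var_v + var_p) * norm_pdf(x_a, 0.0, var_a + var_p)

        post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

        # Reliability-weighted estimates, then model averaging for the visual report.
        s_fused = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
        s_vis = (x_v / var_v) / (1 / var_v + 1 / var_p)
        print(post_c1, post_c1 * s_fused + (1 - post_c1) * s_vis)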

    A thread-tag based semantics for sequence diagrams

    The sequence diagram is one of the most popular behaviour modelling languages, offering an intuitive and visual way of describing the expected behaviour of object-oriented software. Much research has investigated ways of providing a formal semantics for sequence diagrams. However, the proposed semantics may not properly interpret sequence diagrams when lifelines do not correspond to threads of control. In this paper, we address this problem and propose thread-tag based sequence diagrams as a solution. A formal, partially ordered multiset based semantics for thread-tag based sequence diagrams is proposed.
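
    To make the pomset idea concrete, the hypothetical sketch below derives a partial order over message events from thread tags: events carrying the same thread tag are totally ordered by program position, and each send precedes its matching receive. The event list and field layout are invented, not the paper's notation.

        from itertools import combinations

        # Events: (event_id, thread_tag, kind, message). Same thread tag =>
        # total order by position; each send also precedes its receive.
        events = [
            ("e1", "t1", "send", "req"),
            ("e2", "t2", "recv", "req"),
            ("e3", "t2", "send", "ack"),
            ("e4", "t1", "recv", "ack"),
        ]

        order = set()
        for (i, a), (j, b) in combinations(enumerate(events), 2):
            if a[1] == b[1]:                     # same thread tag: program order
                order.add((a[0], b[0]))
            if a[2] == "send" and b[2] == "recv" and a[3] == b[3]:
                order.add((a[0], b[0]))          # causality: send before receive

        # Transitive closure yields the pomset's partial order.
        changed = True
        while changed:
            changed = False
            for (x, y) in list(order):
                for (y2, z) in list(order):
                    if y == y2 and (x, z) not in order:
                        order.add((x, z)); changed = True
        print(sorted(order))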

    Smoke and Shadows: Rendering and Light Interaction of Smoke in Real-Time Rendered Virtual Environments

    Realism in computer graphics depends upon digitally representing what we see in the world with careful attention to detail, which usually requires a high degree of complexity in modelling the scene. The inevitable trade-off between realism and performance means that new techniques aiming to improve the visual fidelity of a scene must do so without compromising real-time rendering performance. We describe and discuss a simple method for realistically casting shadows from an opaque solid object through a GPU (graphics processing unit) based particle system representing natural phenomena such as smoke.
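
    A minimal CPU-side sketch of the underlying idea, assuming invented particle data: light reaching a surface point is attenuated by the accumulated opacity of the smoke particles lying between that point and the light. In a real renderer this would run in a GPU shader; here it is plain numpy for clarity.

        import numpy as np

        # Each smoke particle: position (x, y, z), shared radius and opacity.
        # The light shines along -z, so a point is shadowed by particles above it.
        rng = np.random.default_rng(1)
        particles = rng.random((200, 3)) * [4.0, 4.0, 2.0]   # slab of smoke
        radius, alpha = 0.3, 0.08

        def transmittance(point, light_dir=np.array([0.0, 0.0, -1.0])):
            """Fraction of light surviving the particle medium to reach `point`."""
            to_light = -light_dir / np.linalg.norm(light_dir)
            rel = particles - point
            t = rel @ to_light                   # distance along the shadow ray
            perp = rel - np.outer(t, to_light)   # offset from the ray axis
            hit = (t > 0) & (np.linalg.norm(perp, axis=1) < radius)
            return (1.0 - alpha) ** hit.sum()    # multiplicative attenuation

        print(transmittance(np.array([2.0, 2.0, 0.0])))  # shadow on the ground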

    AMK: An Interface For Object-oriented Newtonian Particle Mechanics

    This article describes an object-oriented environment with an associated user interface, AMK, for modelling simple Newtonian particle mechanics. It is intended for educational use and provides a framework that generalises the modelling methodology. Physical objects are treated as logical objects, and mathematical models are formulated by linking them. The implementation runs within the Windows environment using Mathematica and Visual Basic. Modelling is done by constructing objects and linking them to produce new objects; the aim is to produce an equation-of-motion object. The interface enforces a modelling cycle of constructing and linking objects and accessing their methods. It constructs Mathematica input automatically from information supplied by the user and communicates with Mathematica. The combination of a generalised environment and interface produces correct answers when modelling many specific physical systems.
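
    AMK itself sits on Mathematica and Visual Basic; as a loose Python analogy with invented class names, the sketch below links a particle object to force objects and has an equation-of-motion object assemble m*x'' = sum of forces symbolically.

        import sympy as sp

        # Loose analogy to AMK's modelling cycle: build objects, link them,
        # and ask a derived object for the equation of motion.
        t = sp.symbols("t")

        class Particle:
            def __init__(self, mass, coord="x"):
                self.m = sp.symbols(mass, positive=True)
                self.x = sp.Function(coord)(t)

        class Gravity:
            def force(self, p): return -p.m * sp.symbols("g", positive=True)

        class Spring:
            def force(self, p): return -sp.symbols("k", positive=True) * p.x

        class EquationOfMotion:
            def __init__(self, particle, forces):
                self.p, self.forces = particle, forces
            def build(self):
                total = sum(f.force(self.p) for f in self.forces)
                return sp.Eq(self.p.m * self.p.x.diff(t, 2), total)

        p = Particle("m")
        print(EquationOfMotion(p, [Gravity(), Spring()]).build())
        # Eq(m*Derivative(x(t), (t, 2)), -g*m - k*x(t))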

    Synchronized Oscillations During Cooperative Feature Linking in a Cortical Model of Visual Perception

    A neural network model of synchronized oscillator activity in visual cortex is presented in order to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus. In these experiments, synchronization of oscillatory firing responses to moving bar stimuli occurred not only for nearby neurons, but also between neurons separated by several cortical columns (several mm of cortex) when these neurons shared some receptive field preferences specific to the stimuli. These results were obtained not only for single bar stimuli but also across two disconnected, but collinear, bars moving in the same direction. Our model and computer simulations reproduce these synchrony results for both single and double bar stimuli. For the double bar case, synchronous oscillations are induced in the region between the bars, but no oscillations are induced in the regions beyond the stimuli. These results were achieved with cellular units that exhibit limit cycle oscillations for a robust range of input values, but which approach an equilibrium state when undriven. Single and double bar synchronization of these oscillators was achieved by different, but formally related, models of preattentive visual boundary segmentation and attentive visual object recognition, as well as by nearest-neighbor and randomly coupled models. In preattentive visual segmentation, synchronous oscillations may reflect the binding of local feature detectors into a globally coherent grouping. In object recognition, synchronous oscillations may occur during an attentive resonant state that triggers new learning. These modelling results support earlier theoretical predictions of synchronous visual cortical oscillations and demonstrate the robustness of the mechanisms capable of generating synchrony. Air Force Office of Scientific Research (90-0175); Army Research Office (DAAL-03-88-K0088); Defense Advanced Research Projects Agency (90-0083); National Aeronautics and Space Administration (NGT-50497).
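
    The paper's cortical circuit is not reproduced here; the sketch below only demonstrates the generic phenomenon with Kuramoto phase oscillators (a different, much simpler model) in which stimulus-driven units couple to driven neighbours, so the units under each "bar" phase-lock while undriven units stay quiescent. All parameters are invented.

        import numpy as np

        # Chain of Kuramoto phase oscillators: only stimulus-driven units
        # oscillate and couple to driven neighbours.
        n, dt, steps, K = 40, 0.01, 4000, 4.0
        rng = np.random.default_rng(2)
        driven = np.zeros(n, bool)
        driven[5:15] = driven[25:35] = True             # two collinear bar stimuli
        omega = np.where(driven, 2 * np.pi * 5.0, 0.0)  # ~5 Hz when driven
        theta = rng.uniform(0, 2 * np.pi, n)

        for _ in range(steps):
            coupling = np.zeros(n)
            for i in range(n):
                for j in (i - 1, i + 1):
                    if 0 <= j < n and driven[i] and driven[j]:
                        coupling[i] += K * np.sin(theta[j] - theta[i])
            theta = theta + dt * (omega + coupling)

        # Phase coherence within each driven group (1.0 = perfect synchrony).
        for grp in (slice(5, 15), slice(25, 35)):
            print(abs(np.exp(1j * theta[grp]).mean()))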

    AAN: Attributes-Aware Network for Temporal Action Detection

    Long-term video understanding remains constrained by the difficulty of efficiently extracting object semantics and modelling their relationships for downstream tasks. Although CLIP visual features exhibit discriminative properties for various vision tasks, particularly object encoding, they are suboptimal for long-term video understanding. To address this issue, we present the Attributes-Aware Network (AAN), which consists of two key components: an Attributes Extractor and a Graph Reasoning block. These components facilitate the extraction of object-centric attributes and the modelling of their relationships within the video. By leveraging CLIP features, AAN outperforms state-of-the-art approaches on two popular action detection datasets: Charades and Toyota Smarthome Untrimmed.
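
    A minimal PyTorch sketch, with invented shapes and no claim to the authors' architecture, of the general pattern the abstract describes: project per-object features (e.g., CLIP embeddings) into node embeddings, build a soft similarity adjacency, and apply one graph-reasoning layer before pooling for a detection head.

        import torch
        import torch.nn.functional as F

        # Generic attribute-graph reasoning sketch (not the AAN architecture).
        # Pretend `feats` are CLIP-style embeddings for N detected objects.
        torch.manual_seed(0)
        N, d = 8, 512
        feats = torch.randn(N, d)              # stand-in for CLIP features

        proj = torch.nn.Linear(d, 256)         # attribute projection
        nodes = F.relu(proj(feats))            # node embeddings

        # Soft adjacency from pairwise similarity of node embeddings.
        adj = F.softmax(nodes @ nodes.t() / nodes.shape[1] ** 0.5, dim=-1)

        gcn = torch.nn.Linear(256, 256)        # one graph-reasoning layer
        nodes = F.relu(gcn(adj @ nodes))       # aggregate neighbour messages

        clip_level = nodes.mean(dim=0)         # pool for a downstream head
        print(clip_level.shape)                # torch.Size([256])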