19,928 research outputs found

    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion using estimates of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms to the front camera of the mobile device. However, this demands high computational resources and therefore commonly degrades the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands of user perspective rendering by applying lightweight optical flow tracking and an estimate of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, to head-tracked user perspective rendering, and to fixed-point-of-view user perspective rendering.
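    The motion-gating idea lends itself to a short illustration. The sketch below is not the paper's implementation; it is one plausible reading, using OpenCV, with all names and thresholds as assumptions. It tracks sparse front-camera features with pyramidal Lucas-Kanade optical flow and only hands control to a costly face tracker once the estimated inter-frame motion is small:

```python
# Minimal sketch (not the paper's method): gate an expensive face tracker
# behind cheap sparse optical flow on the front camera. Names, the camera
# index and the threshold are illustrative assumptions.
import cv2
import numpy as np

LK_PARAMS = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

def flow_displacement(prev_gray, gray, prev_pts):
    """Median feature displacement between two frames, in pixels."""
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None, **LK_PARAMS)
    good = status.ravel() == 1
    if not good.any():
        return 0.0, None
    disp = np.linalg.norm(pts[good] - prev_pts[good], axis=-1)
    return float(np.median(disp)), pts[good].reshape(-1, 1, 2)

cap = cv2.VideoCapture(0)      # assumed to be the front-facing camera
prev_gray, prev_pts = None, None
MOTION_THRESHOLD = 2.0         # px/frame; tuning value is a guess

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_pts is None or len(prev_pts) < 20:
        # (Re)seed cheap features to track instead of running face tracking.
        prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                           qualityLevel=0.01, minDistance=8)
        prev_gray = gray
        continue
    motion, prev_pts = flow_displacement(prev_gray, gray, prev_pts)
    prev_gray = gray
    if motion < MOTION_THRESHOLD:
        pass  # head roughly static: start the costly face/head tracker here
```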

    Enhanced Tracking Aerial Image by Applying Frame Extraction Technique

    An image registration method is introduced that registers images taken from different views of a 3-D scene in the presence of occlusion. The proposed method withstands considerable occlusion and homogeneous regions in the images; its only requirements are that the ground be locally flat and that sufficient ground cover be visible in the frames being registered. An image fusion technique is used to compensate for blurred frames. Whereas earlier systems could fail at object recognition and therefore could not report the relevant area, path, and location, the proposed system uses object recognition to identify them, and it handles motion imagery, static images, video, and CCTV footage. Occlusion can still corrupt individual results, but the registration techniques employed mitigate these failures. The method is applicable to investigative work, such as tracking smuggling or other illegally performed operations, where several tracking techniques are combined to return correct object-tracking results. Since a fixed ground camera cannot provide sufficiently clear imagery for this task, drones and aircraft are used to capture long-distance and multi-view images.
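    Because the abstract's only geometric requirement is locally flat ground, one standard way to realise such registration is a single homography per frame pair. The OpenCV sketch below is an illustration under that assumption, not the paper's exact pipeline; RANSAC's outlier rejection is what provides some robustness to occluded or moving regions:

```python
# Sketch only: register two grayscale aerial frames by estimating a single
# homography (valid under the locally-flat-ground assumption) with ORB
# features and RANSAC. Not the paper's actual method.
import cv2
import numpy as np

def register_frames(img_a, img_b, min_matches=10):
    """Estimate a homography warping img_a onto img_b; None on failure."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards matches on occluders/moving objects as outliers.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```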

    SafeWeb: A Middleware for Securing Ruby-Based Web Applications

    Web applications in many domains such as healthcare and finance must process sensitive data, while complying with legal policies regarding the release of different classes of data to different parties. Currently, software bugs may lead to irreversible disclosure of confidential data in multi-tier web applications. An open challenge is how developers can guarantee that these web applications only ever release sensitive data to authorised users without costly, recurring security audits. Our solution is to provide a trusted middleware that acts as a “safety net” for event-based enterprise web applications by preventing harmful data disclosure before it happens. We describe the design and implementation of SafeWeb, a Ruby-based middleware that associates data with security labels and transparently tracks their propagation at different granularities across a multi-tier web architecture with storage and complex event processing. For efficiency, maintainability and ease of use, SafeWeb exploits the dynamic features of the Ruby programming language to achieve label propagation and data flow enforcement. We evaluate SafeWeb by reporting our experience of implementing a web-based cancer treatment application and deploying it as part of the UK National Health Service (NHS).
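    SafeWeb itself is Ruby middleware, but the underlying label-propagation idea can be illustrated in a few lines. The Python sketch below is an analogy only; the `Labeled` wrapper and `release` gate are hypothetical names, not SafeWeb's API:

```python
# Illustrative sketch of information-flow labels in the SafeWeb style.
# All names are hypothetical; SafeWeb's real mechanism is Ruby-based.
class Labeled:
    """Wraps a value with a set of security labels that taints derived data."""
    def __init__(self, value, labels):
        self.value, self.labels = value, frozenset(labels)

    def map(self, fn):
        # Any value derived from labeled data inherits its labels.
        return Labeled(fn(self.value), self.labels)

    def combine(self, other, fn):
        # Combining two labeled values unions their labels.
        return Labeled(fn(self.value, other.value), self.labels | other.labels)

def release(labeled, clearance):
    """Middleware 'safety net': refuse disclosure unless the caller's
    clearance covers every label on the data."""
    if not labeled.labels <= clearance:
        raise PermissionError(f"undisclosable labels: {labeled.labels - clearance}")
    return labeled.value

# Hypothetical usage:
record = Labeled("tumour stage II", {"patient:42", "clinical"})
summary = record.map(str.upper)                      # labels propagate
print(release(summary, {"patient:42", "clinical"}))  # authorised -> OK
# release(summary, {"clinical"})                     # would raise PermissionError
```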

    Linear Regression and Unsupervised Learning for Tracking and Embodied Robot Control

    Computer vision problems, such as tracking and robot navigation, tend to be solved using models of the objects of interest to the problem. These models are often either hard-coded or learned in a supervised manner. In either case, an engineer is required to identify the visual information that is important to the task, which is both time-consuming and problematic. Issues with these engineered systems relate to the ungrounded nature of the knowledge imparted by the engineer, where the systems have no meaning attached to the representations. This leads to systems that are brittle and prone to failure when expected to act in environments not envisaged by the engineer.

    The work presented in this thesis removes the need for hard-coded or engineered models of either visual information representations or behaviour. This is achieved by developing novel approaches for learning from example, in both input (percept) and output (action) spaces. This approach leads to the development of novel feature tracking algorithms and methods for robot control.

    Applying this approach to feature tracking, unsupervised learning is employed, in real time, to build appearance models of the target that represent the input space structure, and this structure is exploited to partition banks of computationally efficient, linear-regression-based target displacement estimators. This thesis presents the first application of regression-based methods to the problem of simultaneously modelling and tracking a target object. The computationally efficient Linear Predictor (LP) tracker is investigated, along with methods for combining and weighting flocks of LPs. The tracking algorithms developed operate with accuracy comparable to other state-of-the-art online approaches and with a significant gain in computational efficiency. This is achieved as a result of two specific contributions. First, novel online approaches for the unsupervised learning of modes of target appearance that identify aspects of the target are introduced. Second, a general tracking framework is developed within which the identified aspects of the target are adaptively associated with subsets of a bank of LP trackers. This results in the partitioning of LPs and the online creation of aspect-specific LP flocks that facilitate tracking through significant appearance changes.

    Applying the approach to the percept-action domain, unsupervised learning is employed to discover the structure of the action space, and this structure is used in the formation of meaningful perceptual categories and to facilitate the use of localised input-output (percept-action) mappings. This approach provides a realisation of an embodied and embedded agent that organises its perceptual space, and hence its cognitive process, based on interactions with its environment. Central to the proposed approach is the technique of clustering an input-output exemplar set based on output similarity, and using the resultant input exemplar groupings to characterise a perceptual category. All input exemplars that are coupled to a certain class of outputs form a category: the category of a given affordance, action or function. In this sense the formed perceptual categories have meaning and are grounded in the embodiment of the agent. The approach is shown to identify the relative importance of perceptual features and is able to solve percept-action tasks, defined only by demonstration, in previously unseen situations.
    Within this percept-action learning framework, two alternative approaches are developed. The first approach employs hierarchical output space clustering of point-to-point mappings to achieve search efficiency and input and output space generalisation, as well as a mechanism for identifying the important variance and invariance in the input space. The exemplar hierarchy provides, in a single structure, a mechanism for classifying previously unseen inputs and generating appropriate outputs. The second approach integrates the regression mappings used in the feature tracking domain with the action space clustering and imitation learning techniques developed in the percept-action domain. These components are utilised within a novel percept-action data mining methodology that is able to discover the visual entities that are important to a specific problem and to map from these entities onto the action space. Applied to the robot control task, this approach allows for real-time generation of continuous action signals, without the use of any supervision or definition of representations or rules of behaviour.
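    The Linear Predictor at the core of the tracking contributions can be sketched compactly. The Python/NumPy illustration below fits the map by synthetic perturbation, which is one standard way to train such a predictor; the thesis' exact formulation may differ, and all names are illustrative:

```python
# Sketch of the Linear Predictor (LP) idea: learn a linear map from image
# intensity differences at sample points to a 2-D target displacement.
import numpy as np

rng = np.random.default_rng(0)

def sample_intensities(image, points):
    """Intensities at integer (x, y) sample positions, clipped to the image."""
    r = np.clip(points[:, 1].astype(int), 0, image.shape[0] - 1)
    c = np.clip(points[:, 0].astype(int), 0, image.shape[1] - 1)
    return image[r, c].astype(np.float64)

def train_lp(image, centre, support, n_train=200, max_shift=5.0):
    """Fit A so that displacement ~= A @ (I(shifted) - I(reference))."""
    ref = sample_intensities(image, centre + support)
    shifts = rng.uniform(-max_shift, max_shift, size=(n_train, 2))
    diffs = np.stack([sample_intensities(image, centre + s + support) - ref
                      for s in shifts])                 # (n_train, k)
    # Least squares: shifts ~= diffs @ A.T, so A has shape (2, k).
    sol, *_ = np.linalg.lstsq(diffs, shifts, rcond=None)
    return sol.T, ref

def predict_displacement(A, ref, image, centre, support):
    """One tracking update: a single cheap matrix-vector product per frame."""
    diff = sample_intensities(image, centre + support) - ref
    return A @ diff   # estimated (dx, dy) since the reference frame

# Hypothetical usage (frame0/frame1 are 2-D grayscale arrays):
# support = rng.uniform(-10, 10, size=(80, 2))   # sample offsets round the target
# A, ref = train_lp(frame0, centre=np.array([120., 90.]), support=support)
# dx_dy = predict_displacement(A, ref, frame1, np.array([120., 90.]), support)
```

    A flock, in this sketch's terms, would simply be several such predictors with different support-point sets whose outputs are combined, e.g. by a weighted median.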

    Gearing up for action: attentive tracking dynamically tunes sensory and motor oscillations in the alpha and beta band

    Allocation of attention during goal-directed behavior entails simultaneous processing of relevant and attenuation of irrelevant information. How the brain delegates such processes when confronted with dynamic (biological motion) stimuli and harnesses relevant sensory information for sculpting prospective responses remains unclear. We analyzed neuromagnetic signals that were recorded while participants attentively tracked an actor's pointing movement that ended at the location where subsequently the response cue indicated the required response. We found the observers' spatial allocation of attention to be dynamically reflected in lateralized parieto-occipital alpha (8-12 Hz) activity and to have a lasting influence on motor preparation. Specifically, beta (16-25 Hz) power modulation reflected observers' tendency to selectively prepare for a spatially compatible response even before knowing the required one. We discuss the observed frequency-specific and temporally evolving neural activity within a framework of integrated visuomotor processing and point towards possible implications for the mechanisms involved in action observation.
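    As a rough illustration of the kind of analysis described, the sketch below computes band-limited power via a Hilbert envelope and a simple lateralisation index for the alpha band. It is illustrative only; the sampling rate, channel pairing and index definition are assumptions, not the study's pipeline:

```python
# Sketch: band power via a Hilbert envelope plus a lateralisation index.
# Parameters are assumptions, not taken from the study.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000.0  # sampling rate in Hz (assumed)

def band_power(x, low, high, fs=FS, order=4):
    """Mean Hilbert-envelope power of signal x within [low, high] Hz."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, x)))
    return float(np.mean(envelope ** 2))

def lateralisation_index(left_chan, right_chan, low=8.0, high=12.0):
    """(right - left) / (right + left) power asymmetry for one sensor pair."""
    p_l = band_power(left_chan, low, high)
    p_r = band_power(right_chan, low, high)
    return (p_r - p_l) / (p_r + p_l)
```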

    Implementing a map based simulator for the location API for J2ME

    The Java Location API for J2ME™ integrates generic positioning and orientation data with persistent storage of landmark objects. It can be used to develop location-based service applications for small mobile devices, and these applications can be tested using simulation environments. Currently the only simulation tools in the public domain are proprietary mobile device simulators driven by GPS data log files, but it is sometimes useful to be able to test location-based services using interactive map-based tools. In addition, we may need to experiment with extensions and changes to the standard API to support additional services, which requires an open source environment. In this paper we describe the implementation of an open source map-based simulation tool compatible with other commonly used development and deployment tools.
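    The core of such a simulator can be sketched briefly. The Python illustration below uses hypothetical names throughout (the actual tool targets Java and the JSR-179 Location API): waypoints picked on a map are interpolated over time and pushed to registered listeners, mirroring the `LocationListener.locationUpdated` pattern:

```python
# Sketch of a map-driven location simulator. Names and the flat-earth
# distance approximation are illustrative assumptions.
import time

class MapRouteSimulator:
    def __init__(self, waypoints, speed_mps=1.4):
        self.waypoints = waypoints   # [(lat, lon), ...] picked on a map widget
        self.speed = speed_mps       # simulated walking speed
        self.listeners = []

    def add_listener(self, callback):
        """callback(lat, lon) plays the role of LocationListener.locationUpdated."""
        self.listeners.append(callback)

    def run(self, step_s=1.0, metres_per_degree=111_320.0):
        # Crude flat-earth distance: adequate for a sketch, not for real geodesy.
        for (lat0, lon0), (lat1, lon1) in zip(self.waypoints, self.waypoints[1:]):
            dist = ((lat1 - lat0) ** 2 + (lon1 - lon0) ** 2) ** 0.5 * metres_per_degree
            steps = max(1, int(dist / (self.speed * step_s)))
            for i in range(steps + 1):
                t = i / steps
                lat = lat0 + t * (lat1 - lat0)
                lon = lon0 + t * (lon1 - lon0)
                for cb in self.listeners:
                    cb(lat, lon)        # emit a simulated location fix
                time.sleep(step_s)

# Hypothetical usage:
sim = MapRouteSimulator([(51.4545, -2.5879), (51.4560, -2.5900)])
sim.add_listener(lambda lat, lon: print(f"locationUpdated: {lat:.5f}, {lon:.5f}"))
# sim.run()  # streams interpolated fixes to the application under test
```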