
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
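
    The reported contrast can be checked directly from the F statistic and degrees of freedom quoted above; the minimal sketch below, which assumes only scipy, recomputes the upper-tail p-value for F(1,4) = 2.565.

```python
# Recompute the p-value for the reported contrast.
# Only the F statistic and degrees of freedom come from the abstract.
from scipy import stats

F, df_effect, df_error = 2.565, 1, 4
p = stats.f.sf(F, df_effect, df_error)  # upper-tail probability of the F distribution
print(f"F({df_effect},{df_error}) = {F}, p = {p:.3f}")  # approx. 0.185, i.e. not significant
```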

    Interactive videos: Plausible video editing using sparse structure points

    Video remains the method of choice for capturing temporal events. However, without access to the underlying 3D scene models, it remains difficult to make object-level edits in a single video or across multiple videos. While it may be possible to explicitly reconstruct the 3D geometries to facilitate these edits, such a workflow is cumbersome, expensive, and tedious. In this work, we present a much simpler workflow to create plausible editing and mixing of raw video footage using only sparse structure points (SSP) directly recovered from the raw sequences. First, we utilize user scribbles to structure the point representations obtained using structure-from-motion on the input videos. The resulting structure points, even when noisy and sparse, are then used to enable various video edits in 3D, including view perturbation, keyframe animation, object duplication and transfer across videos, etc. Specifically, we describe how to synthesize object images from new views by adopting a novel image-based rendering technique that uses the SSPs as a proxy for the missing 3D scene information. We propose a structure-preserving image warping on multiple input frames adaptively selected from the object video, followed by a spatio-temporally coherent image stitching to compose the final object image. Simple planar shadows and depth maps are synthesized for objects to generate plausible video sequences mimicking real-world interactions. We demonstrate our system on a variety of input videos to produce complex edits which are otherwise difficult to achieve.
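
    As a rough illustration of how sparse structure points can stand in for missing 3D geometry, the sketch below (the pinhole model, function names and numbers are assumptions, not the authors' code) projects recovered points into a slightly perturbed camera; the resulting pixel displacements are the kind of signal that could guide a structure-preserving warp.

```python
# Illustrative sketch: projecting sparse structure points (SSPs) recovered by
# structure-from-motion into a perturbed view to drive a view-perturbation edit.
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points with a pinhole camera (K intrinsics, [R|t] pose)."""
    cam = R @ points_3d.T + t[:, None]   # 3xN points in the camera frame
    uv = (K @ cam) / cam[2]              # perspective divide
    return uv[:2].T                      # Nx2 pixel coordinates

# Example: perturb the view by a small rotation about the vertical axis.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R0, t0 = np.eye(3), np.zeros(3)
theta = np.deg2rad(2.0)
R_pert = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]]) @ R0

ssp = np.random.rand(50, 3) + np.array([0, 0, 5])  # stand-in sparse points in front of the camera
uv_before = project(ssp, K, R0, t0)
uv_after = project(ssp, K, R_pert, t0)
disp = uv_after - uv_before                        # per-point image displacement
print("mean displacement (px):", np.linalg.norm(disp, axis=1).mean())
```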

    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, but not limited to, ensuring the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata are not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes across remote locations. However, the existing web-based technologies are not yet fully exploiting the modern design patterns for access to and management of alternative shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene-graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it manifests the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool, therefore, infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies that suggest they are usable by the end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of domain-specific 3D version control.
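
    A hypothetical sketch of the document-oriented idea described above: a scene-graph node and a revision record expressed as schemaless key-value documents. Field names are illustrative and need not match the actual 3D Repo schema.

```python
# Hypothetical documents for a scene-graph node and a revision record in a
# document-oriented NoSQL store; field names are illustrative only.
import uuid
from datetime import datetime, timezone

mesh_node = {
    "_id": uuid.uuid4().hex,              # unique identifier of the scene-graph node
    "type": "mesh",                       # polymorphic: camera/transform nodes carry other keys
    "name": "chair_seat",
    "vertices": [[0, 0, 0], [1, 0, 0], [1, 1, 0]],
    "faces": [[0, 1, 2]],
    "parent": None,
}

revision = {
    "_id": uuid.uuid4().hex,
    "author": "alice",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "added": [mesh_node["_id"]],          # delta changes recorded per revision
    "modified": [],
    "deleted": [],
    "parents": [],                        # more than one parent allows non-linear (merged) history
}
```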

    A method for automatically creating an animation rig for a 3D model in augmented reality on a smart device

    3D modeling has become more popular among novice users in recent years. The ubiquity of mobile devices has led to the need to view and edit 3D content beyond traditional desktop workstations. This thesis develops an approach for editing mesh-based 3D models in mobile augmented reality. The developed approach takes a static 3D model and automatically generates a rig with control handles so that the user can pose the model interactively. The rig is generated by approximating the model with a structure called a sphere mesh. To attach the generated spheres to the model, a technique called bone heat skinning is used. A direct manipulation scheme is presented to allow the user to pose the processed model with intuitive touch controls. Both translation and rotation operations are developed to give the user expressive power over the pose of the model without overly complicating the controls. Several example scenes are built and analyzed. The scenes show that the developed approach can be used to build novel scenes in augmented reality. The implementation of the approach is measured to be close to real time, with processing times of around one second for the models used. The rig generation is shown to yield semantically coherent control handles, especially at lower resolutions. While the chosen bone heat skinning algorithm has theoretical shortcomings, they were not apparent in the built examples.
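
    Once per-vertex weights have been computed (for example by bone heat skinning, as described above), posing the model reduces to standard linear blend skinning. The sketch below illustrates that general formulation with made-up data; it is not the thesis implementation.

```python
# Minimal linear-blend-skinning sketch: posing a mesh as a weighted sum of
# per-bone transforms, given precomputed per-vertex weights.
import numpy as np

def skin(vertices, weights, transforms):
    """vertices: Nx3 rest positions, weights: NxB, transforms: Bx4x4 bone matrices."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])        # Nx4 homogeneous coords
    posed = np.einsum("nb,bij,nj->ni", weights, transforms, homo)    # blend each bone's transform
    return posed[:, :3]

# Two-bone toy example: bone 0 stays put, bone 1 translates up by one unit.
verts = np.array([[0.0, 0, 0], [1.0, 0, 0]])
w = np.array([[1.0, 0.0], [0.0, 1.0]])
T = np.stack([np.eye(4), np.eye(4)])
T[1, 1, 3] = 1.0
print(skin(verts, w, T))   # the second vertex follows bone 1
```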

    Proceedings, MSVSCC 2018

    Proceedings of the 12th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 19, 2018 at VMASC in Suffolk, Virginia. 155 pp

    Teaching and learning in virtual worlds: is it worth the effort?

    Educators have been quick to spot the enormous potential afforded by virtual worlds for situated and authentic learning, practising tasks with potentially serious consequences in the real world, and for bringing geographically dispersed faculty and students together in the same space (Gee, 2007; Johnson and Levine, 2008). Though this potential has largely been realised, it generally isn't without cost in terms of lack of institutional buy-in, steep learning curves for all participants, and lack of a sound theoretical framework to support learning activities (Campbell, 2009; Cheal, 2007; Kluge & Riley, 2008). This symposium will explore the affordances and issues associated with teaching and learning in virtual worlds, all the time considering the question: is it worth the effort?

    Transforming pre-service teacher curriculum: observation through a TPACK lens

    This paper will discuss an international online collaborative learning experience through the lens of the Technological Pedagogical Content Knowledge (TPACK) framework. The teacher knowledge required to effectively provide transformative learning experiences for 21st-century learners in a digital world is complex, situated and changing. The discussion looks beyond the opportunity for knowledge development of content, pedagogy and technology as components of TPACK towards the interaction between those three components. Implications for practice are also discussed. In today's technology-infused classrooms it is within the realm of teacher educators, practising teachers and pre-service teachers to explore and address effective practices that use technology to enhance learning.

    A moving observer in a three-dimensional world

    For many tasks, such as retrieving a previously viewed object, an observer must form a representation of the world at one location and use it at another. A world-based 3D reconstruction of the scene built up from visual information would fulfil this requirement, something computer vision now achieves with great speed and accuracy. However, I argue that it is neither easy nor necessary for the brain to do this. I discuss biologically plausible alternatives, including the possibility of avoiding 3D coordinate frames such as ego-centric and world-based representations. For example, the distance, slant and local shape of surfaces dictate the propensity of visual features to move in the image with respect to one another as the observer's perspective changes (through movement or binocular viewing). Such propensities can be stored without the need for 3D reference frames. The problem of representing a stable scene in the face of continual head and eye movements is an appropriate starting place for understanding the goal of 3D vision, more so, I argue, than the case of a static binocular observer.
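
    A toy numerical illustration of the "propensity to move" idea, using an assumed pinhole model rather than anything from the paper: under a small sideways translation of the viewpoint, the image displacement of a feature falls off with its depth, so the relative motion of two features already encodes their relative distance without any explicit 3D coordinate frame.

```python
# Motion parallax under a small sideways viewpoint shift (pinhole model, assumed numbers).
f = 800.0            # focal length in pixels
dx = 0.05            # sideways head/eye translation in metres
for depth in (1.0, 2.0, 4.0):
    shift = f * dx / depth          # horizontal image shift in pixels
    print(f"depth {depth:.0f} m -> image shift {shift:.1f} px")
```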