
    On the efficacy of cinema, or what the visual system did not evolve to do

    Spatial displays, and a constraint that they do not place on the use of spatial instruments, are discussed. Much of the work on visual perception by psychologists and computer scientists has concerned displays that show the motion of rigid objects. Typically, if one assumes that objects are rigid, one can then proceed to understand how the constant shape of an object can be perceived (or computed) as it moves through space. The author maintains that photographs and cinema are visual displays that are also powerful forms of art. Their efficacy stems in part from the fact that, although viewpoint is constrained when composing them, it is not nearly so constrained when viewing them. It is obvious, according to the author, that human visual systems did not evolve to watch movies or look at photographs. Thus, what photographs and movies present must be allowed within the rule-governed system under which vision evolved. Machine-vision algorithms, to be applicable to human vision, should show the same types of tolerance.

    DeltaFinger: a 3-DoF Wearable Haptic Display Enabling High-Fidelity Force Vector Presentation at a User Finger

    This paper presents DeltaFinger, a novel haptic device designed to deliver the force of interaction with virtual objects by guiding the user's finger with a wearable delta mechanism. The interface can deliver a 3D force vector to the fingertip of the user's index finger, allowing complex rendering of virtual reality (VR) environments. The device can produce kinesthetic feedback of up to 1.8 N in the vertical projection and 0.9 N in the horizontal projection without restricting the motion freedom of the remaining fingers. Experimental results showed sufficient precision in the perception of the force vector with DeltaFinger (mean force-vector error of 0.6 rad). The proposed device can potentially be applied to VR communication, medicine, and navigation for people with vision impairments. Comment: 13 pages, 8 figures, accepted version to AsiaHaptics 202
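The reported 0.6 rad mean force-vector error is an angular deviation between the intended and reproduced 3D force vectors. A minimal sketch of how such an error can be computed; the function name and the example vectors are illustrative, not taken from the paper:

```python
import numpy as np

def force_vector_angle_error(intended, perceived):
    """Angular error (radians) between two 3D force vectors."""
    intended = np.asarray(intended, dtype=float)
    perceived = np.asarray(perceived, dtype=float)
    cos_angle = np.dot(intended, perceived) / (
        np.linalg.norm(intended) * np.linalg.norm(perceived))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Example: a vertical 1.8 N target versus a slightly tilted response.
err = force_vector_angle_error([0.0, 0.0, 1.8], [0.2, 0.0, 1.7])
```

Averaging this error over many stimulus-response pairs yields a summary statistic like the paper's 0.6 rad figure.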

    Beyond: collapsible tools and gestures for computational design

    Since the invention of the personal computer, digital media has remained separate from the physical world, blocked by a rigid screen. In this paper, we present Beyond, an interface for 3-D design in which users can directly manipulate digital media with physically retractable tools and hand gestures. When pushed onto the screen, these tools physically collapse and project themselves onto it, letting users feel as if they were inserting the tools into the digital space beyond the screen. The aim of Beyond is to make the digital 3-D design process straightforward and more accessible to general users by extending physical affordances into the digital space beyond the computer screen.

    Simple Display System of Mechanical Properties of Cells and Their Dispersion

    The mechanical properties of cells are unique indicators of their states and functions. However, it is difficult to appreciate the magnitudes of these properties, owing to the small size of cells and the broad distribution of their mechanical properties. Here, we developed a simple virtual reality system for presenting the mechanical properties of cells and their dispersion using a haptic device and a PC. The system simulates atomic force microscopy (AFM) nanoindentation experiments on floating cells in a virtual environment. An operator can virtually position the AFM spherical probe over a round cell with the haptic handle on the PC monitor and feel the force interaction. The Young's modulus of mesenchymal stem cells and HEK293 cells in the floating state was measured by AFM. The distribution of the Young's modulus of these cells was broad and followed a log-normal pattern. To represent the mechanical properties together with cell-to-cell variance, we used a log-normal distribution-dependent random number determined by the mode and variance of the Young's modulus of these cells. The represented Young's modulus was determined for each touching event between the probe surface and the cell object, and the force generated by the haptic device was calculated using a Hertz model corresponding to the indentation depth and the fixed Young's modulus value. Using this system, we can feel the mechanical properties and their dispersion for each cell type in real time. This system will help us not only recognize the magnitudes of the mechanical properties of diverse cells but also share them with others.
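The sampling-and-rendering loop the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the parameter values, the log-normal mode relation mu = ln(mode) + sigma^2, and the simplified Hertz expression F = (4/3) * E/(1 - nu^2) * sqrt(R) * delta^(3/2) for a rigid sphere indenting an elastic half-space are all assumptions.

```python
import numpy as np

def sample_youngs_modulus(mode_pa, sigma, rng):
    """Draw a Young's modulus from a log-normal distribution.

    For a log-normal with parameters (mu, sigma), the mode is
    exp(mu - sigma**2), hence mu = ln(mode) + sigma**2.
    """
    mu = np.log(mode_pa) + sigma**2
    return rng.lognormal(mean=mu, sigma=sigma)

def hertz_force(E_pa, nu, radius_m, depth_m):
    """Hertz contact force for a rigid sphere on an elastic half-space:
    F = 4/3 * E/(1 - nu^2) * sqrt(R) * delta^(3/2).
    """
    if depth_m <= 0:
        return 0.0  # probe not in contact with the cell
    E_star = E_pa / (1.0 - nu**2)
    return (4.0 / 3.0) * E_star * np.sqrt(radius_m) * depth_m**1.5

rng = np.random.default_rng(0)
# A fresh modulus is drawn at each touching event, fixed for that contact.
E = sample_youngs_modulus(500.0, 0.5, rng)   # hypothetical mode ~500 Pa
f = hertz_force(E, 0.5, 5e-6, 1e-6)          # 5 um probe, 1 um indentation
```

In the described system, the force `f` would be sent to the haptic device at each frame while the probe remains in contact, so each virtual cell feels consistently stiff or soft, and repeated touches expose the log-normal spread.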

    Making the Connection: Moore’s Theory of Transactional Distance and Its Relevance to the Use of a Virtual Classroom in Postgraduate Online Teacher Education

    This study explored the use of the Web-based virtual environment Adobe Connect Pro in a postgraduate online teacher education programme at the University of Waikato. It applied the tenets of Moore’s Theory of Transactional Distance (Moore, 1997) in examining the efficacy of using the virtual classroom to promote quality dialogue, and explored how both internal and external structural elements related to the purpose and use of the classroom affected the sense of learner autonomy. The study illustrates the complexity of the relationship among the elements of Moore’s theory, and how the implementation of an external structuring technology such as the virtual classroom can have both positive impacts (dialogue creation) and negative impacts (a diminished sense of learner autonomy). It also suggests that, although Moore’s theory provides a useful conceptual “lens” through which to analyse online learning practices, its tenets may need revisiting to reflect the move toward synchronous communication tools in online distance learning.

    Robotic simulators for tissue examination training with multimodal sensory feedback

    Tissue examination by hand remains an essential technique in clinical practice. Its effective application depends on skills in sensorimotor coordination, mainly involving haptic, visual, and auditory feedback. The skills clinicians must learn can be as subtle as regulating finger pressure with breathing, choosing a palpation action, monitoring involuntary facial and vocal expressions in response to palpation, and using pain expressions both as a source of information and as a constraint on physical examination. Patient simulators can provide a safe learning platform for novice physicians before they examine real patients. This paper reviews, for the first time, state-of-the-art medical simulators for such training, with a focus on providing multimodal feedback so that as many manual examination techniques as possible can be learned. The study summarizes current advances in tissue examination training devices that simulate different medical conditions and provide different feedback modalities. Opportunities arising from developments in pain expression, tissue modeling, actuation, and sensing are also analyzed to support the future design of effective tissue examination simulators.

    A mechatronic shape display based on auxetic materials

    Shape displays enable people to touch simulated surfaces. A common architecture for such devices uses a mechatronic pin matrix. Besides their complexity and high cost, these matrix displays suffer from sharp edges due to their discrete representation, which reduces their ability to render a large continuous surface when the hand slides across it. We propose using an engineered auxetic material actuated by a smaller number of motors. The material bends in multiple directions, feeling smooth and rigid to the touch. A prototype implementation uses nine actuators on a 220 mm square section of material. It can display a range of surface curvatures under the palm of a user without aliased edges. In this work we use an auxetic skeleton to provide rigidity on a soft material and demonstrate the potential of this class of surface through user experiments.

    Developing a virtual reality environment for petrous bone surgery: a state-of-the-art review

    The increasing power of computers has led to the development of sophisticated systems that aim to immerse the user in a virtual environment. The benefits of this type of approach to the training of physicians and surgeons are immediately apparent. Unfortunately, the implementation of “virtual reality” (VR) surgical simulators has been restricted by both cost and technical limitations. The few successful systems use standardized scenarios, often derived from typical clinical data, to allow the rehearsal of procedures. Ideally, we would choose a system that allows us not only to practice typical cases but also to enter our own patient data and use it to define the virtual environment. In effect, we want to rewrite the scenario every time we use the environment and to ensure that its behavior exactly duplicates the behavior of the real tissue. If this can be achieved, then VR systems can be used not only to train surgeons but also to rehearse individual procedures where variations in anatomy or pathology present specific surgical problems. The European Union has recently funded a multinational 3-year project (IERAPSI, Integrated Environment for Rehearsal and Planning of Surgical Interventions) to produce a virtual reality system for surgical training and for rehearsing individual procedures. Building the IERAPSI system will bring together a wide range of experts and combine the latest technologies to produce a true, patient-specific virtual reality surgical simulator for petrous/temporal bone procedures. This article presents a review of the “state of the art” technologies currently available to construct a system of this type and an overview of the functionality and specifications such a system requires.

    A Framework for Dynamic Terrain with Application in Off-road Ground Vehicle Simulations

    The dissertation develops a framework for the visualization of dynamic terrain for use in interactive, real-time 3D systems. Terrain visualization techniques may be classified as either static or dynamic. Static terrain solutions simulate rigid surface types exclusively, whereas dynamic solutions can also represent non-rigid surfaces. Systems that employ a static terrain approach lack realism due to their rigid nature. Disregarding the accurate representation of terrain surface interaction is rationalized by the inherent difficulties of providing run-time dynamism. Nonetheless, dynamic terrain systems are a more correct solution because they allow the terrain database to be modified at run-time for the purpose of deforming the surface. Many established techniques in terrain visualization rely on invalid assumptions and weak computational models that hinder the use of dynamic terrain. Moreover, many existing techniques do not exploit the capabilities offered by current computer hardware. In this research, we present a component framework for terrain visualization that is useful in research, entertainment, and simulation systems. In addition, we present a novel method for deforming the terrain that can be used in real-time, interactive systems. The development of a component framework unifies disparate works under a single architecture. The high-level nature of the framework makes it flexible and adaptable for developing a variety of systems, independent of the static or dynamic nature of the solution. Currently, there are only a handful of documented deformation techniques and, in particular, none make explicit use of graphics hardware. The approach developed by this research offloads extra work to the graphics processing unit in an effort to alleviate the overhead associated with deforming the terrain.
Off-road ground vehicle simulation is used as an application domain to demonstrate the practical nature of the framework and the deformation technique. In order to realistically simulate terrain surface interaction with the vehicle, the solution balances visual fidelity and speed. Accurately depicting terrain surface interaction in off-road ground vehicle simulations improves visual realism, thereby increasing the significance and worth of the application. Systems in academia, government, and commercial institutes can make use of the research findings to achieve the real-time display of interactive terrain surfaces.
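As an illustration of the kind of run-time deformation the framework targets, the sketch below presses a smooth crater into a heightfield on the CPU. The falloff shape, function name, and parameters are hypothetical, not the dissertation's method; in the described approach the per-cell update would be offloaded to the GPU.

```python
import numpy as np

def deform_heightfield(heights, center, radius, depth):
    """Press a smooth, radially falling-off depression into a heightfield.

    heights : 2D array of terrain elevations (a modified copy is returned)
    center  : (row, col) grid coordinates of the contact point
    radius  : footprint radius in grid cells
    depth   : maximum displacement at the center
    """
    rows, cols = np.indices(heights.shape)
    dist = np.hypot(rows - center[0], cols - center[1])
    # Cosine falloff: full depth at the center, zero at the rim and beyond.
    falloff = np.where(dist < radius,
                       0.5 * (1.0 + np.cos(np.pi * dist / radius)),
                       0.0)
    return heights - depth * falloff

terrain = np.zeros((64, 64))
rutted = deform_heightfield(terrain, center=(32, 32), radius=6.0, depth=0.1)
```

In an interactive vehicle simulation this update would run once per wheel contact per frame, so keeping the per-cell work branch-free, as here, is what makes it amenable to a GPU shader.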

    Facing the Spectator

    We investigated the familiar phenomenon of the uncanny feeling that people represented in frontal pose invariably appear to “face you” from wherever you stand. We deployed two different methods. The stimuli include the conventional one, a flat portrait rocking back and forth about a vertical axis, augmented with two novel variations. In one alternative, the portrait frame rotates whereas the actual portrait stays motionless and fronto-parallel; in the other, we replace the (flat!) portrait with a volumetric object. These variations yield exactly the same optical stimulation in frontal view, but become grossly different in very oblique views. We also let participants sample their momentary awareness through “gauge object” settings in static displays. From our results, we conclude that the psychogenesis of visual awareness simultaneously maintains a number of distinct spatial frameworks, at least two but most likely more, involving “cue-scission.” Cues may be effective in one of these spatial frameworks but ineffective or functionally different in others.