
    CyberLiveApp: a secure sharing and migration approach for live virtual desktop applications in a cloud environment

    In recent years we have witnessed the rapid advent of cloud computing, in which remote software is delivered as a service and accessed by users through a thin client over the Internet. In particular, traditional desktop applications can execute in remote virtual machines without re-architecture, providing a personal desktop experience to users through remote display technologies. However, existing cloud desktop applications mainly achieve isolated environments using virtual machines (VMs), which cannot adequately support application-oriented collaboration between multiple users and VMs. In this paper, we propose a flexible collaboration approach, named CyberLiveApp, to enable live virtual desktop application sharing on top of a cloud and virtualization infrastructure. CyberLiveApp supports secure application sharing and on-demand migration among multiple users or devices. To support VM desktop sharing among multiple users, a secure access mechanism is developed to distinguish users' view privileges, in which window operation events are tracked to compute hidden window areas in real time. A proxy-based window filtering mechanism is also proposed to deliver distinct desktop views to different users. To support application sharing and migration between VMs, we use a presentation streaming redirection mechanism and a VM cloning service. These approaches have been preliminarily evaluated on an extended MetaVNC. Results of the evaluations verify that they are effective and useful.
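
    A minimal sketch of the window-filtering idea described above: each tracked window is an axis-aligned rectangle in z-order with an owner, and the proxy blanks every rectangle the viewing user is not entitled to see. The Window type, field names, and masking policy below are illustrative assumptions, not the paper's actual event model.

        from dataclasses import dataclass

        @dataclass
        class Window:
            wid: int                 # window id reported by tracked window events
            x: int; y: int; w: int; h: int
            owner: str               # user who owns the window

        def masked_regions(windows, viewer):
            """Rectangles to blank in the desktop stream sent to `viewer`:
            every window the viewer is not allowed to see."""
            return [(win.x, win.y, win.w, win.h)
                    for win in windows if win.owner != viewer]

        def apply_mask(frame, regions, fill=0):
            """Blank masked regions in a 2D image (e.g., a numpy array)."""
            for (x, y, w, h) in regions:
                frame[y:y+h, x:x+w] = fill
            return frame

        windows = [Window(1, 0, 0, 800, 600, "alice"),
                   Window(2, 100, 100, 400, 300, "bob")]
        print(masked_regions(windows, "alice"))   # -> [(100, 100, 400, 300)]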

    "Virtual Cockpit Window" for a Windowless Aerospacecraft

    A software system processes navigational and sensory information in real time to generate a three-dimensional-appearing image of the external environment for viewing by crewmembers of a windowless aerospacecraft. The design of the particular aerospacecraft (the X-38) is such that adding a real transparent cockpit window to the airframe would have resulted in unacceptably large increases in weight and cost. When exerting manual control, an aircrew needs to see terrain, obstructions, and other features around the aircraft in order to land safely. The X-38 is capable of automated landing, but even when this capability is used, the crew still needs to view the external environment: from the very beginning of the United States space program, crews have expressed profound dislike for windowless vehicles, and the well-being of an aircrew is considerably enhanced by a three-dimensional view of terrain and obstructions. The present software system was developed to satisfy the need for such a view. In conjunction with a computer and display equipment that weigh less than a real transparent window would, the software thus provides a virtual cockpit window. The key problem in developing the system was to create a realistic three-dimensional perspective view that is updated in real time. The problem was solved by building upon a pre-existing commercial program, LandForm C3, that combines the speed of flight-simulator software with the power of geographic-information-system software to generate real-time, three-dimensional-appearing displays of terrain and other features of flight environments. The pre-existing program was modified to use real-time information on the position and attitude of the aerospacecraft to generate a view of the external world as it would appear to a person looking out through a window in the aerospacecraft. The development included innovations in realistic horizon-limit modeling, three-dimensional stereographic display, and interfaces for data from inertial-navigation devices, Global Positioning System receivers, and laser rangefinders.
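
    The central computation in such a system is turning the vehicle's position and attitude, as reported by the navigation sensors, into a camera view of the terrain database. The sketch below shows one conventional way to do this; the frame convention, Z-Y-X Euler order, and function names are assumptions, not the X-38 software's documented interface.

        import numpy as np

        def rotation_from_attitude(yaw, pitch, roll):
            """Body-to-world rotation from Z-Y-X Euler angles (radians)."""
            cy, sy = np.cos(yaw), np.sin(yaw)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cr, sr = np.cos(roll), np.sin(roll)
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            return Rz @ Ry @ Rx

        def view_matrix(position, yaw, pitch, roll):
            """4x4 world-to-camera matrix for the out-the-window view."""
            R = rotation_from_attitude(yaw, pitch, roll).T  # invert rotation
            t = -R @ np.asarray(position, dtype=float)
            M = np.eye(4)
            M[:3, :3] = R
            M[:3, 3] = t
            return M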

    Laterality of Eye Use by Bottlenose (Tursiops truncatus) and Rough-toothed (Steno bredanensis) Dolphins While Viewing Predictable and Unpredictable Stimuli

    Laterality of eye use has been increasingly studied in cetaceans. Research indicates that many cetacean species keep prey on the right side while feeding and preferentially view unfamiliar objects with the right eye. In contrast, the left eye has been used more by calves while in close proximity to their mothers. Despite some discrepancies across and within species, laterality of eye use generally indicates functional specialization of the brain hemispheres in cetaceans. The present study examined laterality of eye use in bottlenose dolphins (Tursiops truncatus) and rough-toothed dolphins (Steno bredanensis) under managed care. Subjects were video-recorded through an underwater window while viewing two different stimuli, one predictable and static, the other unpredictable and moving. Bottlenose dolphins displayed an overall right-eye preference, especially while viewing the unpredictable, moving stimulus. Rough-toothed dolphins displayed no eye preference while viewing either stimulus. No significant correlations between degree of laterality and behavioral interest in the stimuli were found; only for bottlenose dolphins were degree of laterality and curiosity ratings correlated. This study extends research on cetacean lateralization to a species not previously examined in depth and to stimuli that varied in movement and predictability. Further research is needed before firm conclusions can be drawn about lateralization in cetaceans.
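
    Degree of laterality in such studies is typically quantified with a laterality index, LI = (R - L) / (R + L), which runs from -1 (exclusive left-eye use) to +1 (exclusive right-eye use). The sketch below is a generic illustration of that formula, not the paper's exact coding scheme.

        def laterality_index(right_eye_events, left_eye_events):
            """Standard laterality index: +1 all right-eye, -1 all left-eye."""
            total = right_eye_events + left_eye_events
            if total == 0:
                raise ValueError("no scored eye-use events")
            return (right_eye_events - left_eye_events) / total

        # Example: 34 right-eye vs. 16 left-eye viewing events -> 0.36
        print(laterality_index(34, 16))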

    Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform, where each frame is color-processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is smaller than that of the processed map, a cropped portion of the video feed is output to the display device. The position of the cropped window will likely be controlled through a head-tracking device, allowing the user to turn his or her head side to side or up and down to view different portions of the captured image. There are multiple options for displaying the stereoscopic image: head-mounted displays are one likely implementation, and 3D projection technologies are another under consideration. The technology can be adapted in many ways. The computing platform is scalable, so the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can shrink the package for better mobility, power-saving studies can enable unattended remote-sensing packages, and image compression and transmission technologies can be incorporated for an improved telepresence experience.
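
    A hedged sketch of the cropping step described above: the stitched wide-field image is larger than the display, so a display-sized window is cut out wherever the head tracker points. The equirectangular mapping, field-of-view values, and output size are assumptions for illustration.

        import numpy as np

        def crop_for_head_pose(panorama, yaw_deg, pitch_deg,
                               pano_fov_deg=(180.0, 60.0), out_size=(1280, 720)):
            """Select the display-sized sub-window of the stitched panorama
            centred on the viewer's head direction."""
            ph, pw = panorama.shape[:2]
            ow, oh = out_size
            # Map head angles to the pixel coordinates of the crop centre.
            cx = (yaw_deg / pano_fov_deg[0] + 0.5) * pw
            cy = (-pitch_deg / pano_fov_deg[1] + 0.5) * ph
            # Clamp so the crop stays inside the panorama.
            x0 = int(np.clip(cx - ow / 2, 0, pw - ow))
            y0 = int(np.clip(cy - oh / 2, 0, ph - oh))
            return panorama[y0:y0 + oh, x0:x0 + ow]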

    Visualizing and editing large-scale volume segmentations of neuronal tissue

    Thesis (M. Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (leaf 69). By Rachel Welles Shearer.

    Connectomics researchers examine images of the brain in order to determine the structure of neuronal networks. As imaging techniques improve, images are growing in size and resolution, but they are also outgrowing the capacity of existing software to view them. In response to this problem, this thesis presents OMNI: an application for viewing and editing large connectomic image volumes. OMNI employs pre-processing and caching techniques to allow researchers to examine large image volumes at multiple viewpoints and resolutions. But OMNI is also a full-fledged navigation and editing environment, incorporating the suggestions of connectomics researchers into a simple and flexible user interface design. The OMNI user interface features multiple synchronized display windows and a novel project-inspector widget that facilitates project interaction. The 2D navigation and editing modules use OpenGL textures to display image slices from large image volumes and feature a texture management system that includes a threaded texture cache. Editing is performed by painting voxels in a viewing window and allows the user to edit existing neuron tracings or create new ones. OMNI gives connectomics researchers a way to view detailed images of the nervous system and enables them to trace neural pathways through these large images. By studying the structure of individual neurons and groups of neurons, researchers can approach a better understanding of neuron function and the development of the brain.
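
    The threaded texture cache mentioned above can be pictured as a bounded, lock-protected LRU map whose misses are filled by slow tile reads performed off the UI thread. The class and callback names below are illustrative, not OMNI's actual API.

        import threading
        from collections import OrderedDict

        class TileCache:
            """Bounded LRU cache for image tiles, shareable across threads."""
            def __init__(self, capacity, load_tile):
                self.capacity = capacity      # max tiles held in memory
                self.load_tile = load_tile    # callback: key -> tile data
                self.tiles = OrderedDict()    # LRU order: oldest first
                self.lock = threading.Lock()

            def get(self, key):
                with self.lock:
                    if key in self.tiles:
                        self.tiles.move_to_end(key)   # mark recently used
                        return self.tiles[key]
                # Slow disk/decode work happens outside the lock; two threads
                # may occasionally load the same tile, which only wastes work.
                tile = self.load_tile(key)
                with self.lock:
                    self.tiles[key] = tile
                    if len(self.tiles) > self.capacity:
                        self.tiles.popitem(last=False)  # evict the LRU tile
                return tile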

    Exhibiting Jewish culture in Postwar Britain: Glasgow's 1951 Festival of Jewish Arts

    The Festival of Jewish Arts in Glasgow was the first and largest Jewish festival in Britain, conceived as a response to, and timed to coincide with, the Festival of Britain in 1951. Held at Glasgow’s McLellan Galleries on Sauchiehall Street from 4 to 25 February 1951, the event showcased works by over fifty internationally renowned Jewish artists, antiquities dating back to the 13th century, musical performances, films, lectures, a book display, and a run of sell-out performances of S. An-sky’s The Dybbuk. In this essay, I offer the first sustained account of the festival by bringing together the available documentation and analysing the “performance of display” and the perspectives on Jewish culture the festival offered. As this essay argues, when the material and tangible elements of the festival are examined alongside the social and cultural ideals of its organisers, one can discern a complex negotiation between the historical place and space of the festival, the concerns of the community, and the tensions between minority and mainstream Scottish and British culture. The Festival of Jewish Arts thus provides a rare window through which to view a Jewish community grappling with loss and reconstructing its identity in the aftermath of the Nazi atrocities, while at the same time trying to transcend perceptions of its Otherness and respond to British anxieties about Jewish refugees and the founding of the State of Israel.

    Measures for simulator evaluation of a helicopter obstacle avoidance system

    The U.S. Army Aeroflightdynamics Directorate (AFDD) has developed a high-fidelity, full-mission simulation facility for the demonstration and evaluation of advanced helicopter mission equipment. The Crew Station Research and Development Facility (CSRDF) provides the capability to conduct one- or two-crew full-mission simulations in a state-of-the-art helicopter simulator, with a realistic, full field-of-regard visual environment and simulation of state-of-the-art weapons, sensors, and flight control systems. We are using the CSRDF to evaluate the ability of an obstacle avoidance system (OASYS) to support low-altitude flight in cluttered terrain using night vision goggles (NVG). The OASYS uses a laser radar to locate obstacles to safe flight in the aircraft's flight path. A major concern is the detection of wires, which can be difficult to see with NVG, but other obstacles, such as trees, poles, or the ground, are also a concern. The OASYS symbology is presented to the pilot on a head-up display mounted on the NVG (NVG-HUD), which presents head-stabilized symbology while allowing the pilot to view the image-intensified, out-the-window scene through the display. Since interference with viewing through the display is a major concern, OASYS symbology must be designed to present usable obstacle-clearance information with a minimum of clutter.
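
    The core geometric test behind such clearance warnings can be sketched as follows: treat the ladar returns as 3D points in the aircraft frame and flag any return inside a safety corridor projected along the velocity vector. The cylindrical corridor, its radius, and the look-ahead time are illustrative assumptions, not OASYS parameters.

        import numpy as np

        def clearance_warnings(returns, velocity,
                               corridor_radius=15.0, look_ahead_s=10.0):
            """Return the ladar points that intrude on a cylindrical safety
            corridor along the current velocity vector (aircraft frame)."""
            v = np.asarray(velocity, dtype=float)
            speed = np.linalg.norm(v)
            if speed == 0.0:
                return np.empty((0, 3))
            u = v / speed                              # unit flight-path vector
            pts = np.asarray(returns, dtype=float)
            along = pts @ u                            # distance along the path
            radial = np.linalg.norm(pts - np.outer(along, u), axis=1)
            ahead = (along > 0) & (along < speed * look_ahead_s)
            return pts[ahead & (radial < corridor_radius)]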

    Full Body Acting Rehearsal in a Networked Virtual Environment-A Case Study

    Rehearsing a play or a scene from a movie generally requires the actors to be physically present at the same time in the same place. In this paper we present an example and experience of a full-body-motion shared virtual environment (SVE) for rehearsal. The system allows actors and directors to meet in an SVE to rehearse scenes for a play or a movie, that is, to rehearse dialogue and blocking (the positions, movements, and displacements of actors in the scene) through a full-body interactive virtual reality (VR) system. The system combines immersive VR rendering techniques and network capabilities with full-body tracking. Two actors and a director rehearsed from separate locations: one actor and the director were in London (in separate rooms) while the second actor was in Barcelona. The Barcelona actor used a wide field-of-view, head-tracked, head-mounted display and wore a body suit for real-time motion capture and display. The London actor was in a Cave system with head and partial body tracking. Each actor was presented to the other as an avatar in the shared virtual environment; the director could see the whole scenario on a desktop display and intervene by voice commands, and was also represented by a video stream displayed in a window in the virtual environment. The London participant was a professional actor, who afterward commented on the utility of the system for acting rehearsal. It was concluded that full-body tracking, with corresponding real-time display of all the actors' movements, would be a critical requirement, and that blocking was possible down to the level of detail of gestures. Details of the implementation and of the actors' and director's experiences are provided.
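
    The networking core of such a system reduces to streaming each actor's tracked joint poses to the remote sites, which apply them to the corresponding avatar. The sketch below assumes a fixed-size skeleton sent over UDP; the packet layout, joint count, and addresses are illustrative, not the paper's actual protocol.

        import socket, struct, time

        JOINTS = 17                    # assumed skeleton size
        FMT = "!d" + "7f" * JOINTS     # timestamp + (x, y, z, qx, qy, qz, qw) per joint

        def send_pose(sock, peers, joints):
            """Pack one frame of full-body tracking data and send it to all peers."""
            flat = [v for joint in joints for v in joint]
            packet = struct.pack(FMT, time.time(), *flat)
            for addr in peers:
                sock.sendto(packet, addr)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        peers = [("192.0.2.10", 9000)]                 # placeholder peer address
        pose = [(0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0)] * JOINTS
        send_pose(sock, peers, pose)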