    An Embedded P2P-Based Positional Audio System in Virtual Environments

    Networked virtual environments are increasingly used for collaboration tasks and other interactive applications. While the graphics in such virtual worlds are usually three-dimensional, interactive 3D voice support is still in its infancy. Here we describe our demonstration system, which supports P2P-based positional audio and interactive voice communication within the Second Life platform.
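
    To make "positional" concrete: in such a system each speaker's voice is attenuated and panned according to where the speaker stands relative to the listener's avatar. The sketch below shows one common way to compute a per-speaker gain and stereo pan; the inverse-distance rolloff, parameter names, and cutoff are illustrative assumptions, not the paper's implementation.

    import math

    def positional_gain_pan(listener_pos, listener_yaw, speaker_pos,
                            ref_dist=1.0, max_dist=50.0):
        # Illustrative sketch (assumed model, not the paper's): inverse-distance
        # rolloff plus a naive stereo pan derived from the bearing to the speaker.
        dx = speaker_pos[0] - listener_pos[0]
        dy = speaker_pos[1] - listener_pos[1]
        dist = math.hypot(dx, dy)
        if dist >= max_dist:
            return 0.0, 0.5                        # inaudible beyond max_dist
        gain = ref_dist / max(dist, ref_dist)      # 1.0 inside ref_dist, then decays
        angle = math.atan2(dy, dx) - listener_yaw  # bearing relative to view direction
        pan = 0.5 + 0.5 * math.sin(angle)          # map bearing to [0, 1]; 0.5 = ahead
        return gain, pan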

    Edge Indexing in a Grid for Highly Dynamic Virtual Environments

    Newly emerging game-based application systems such as Second Life provide 3D virtual environments where multiple users interact with each other in real time. They are filled with autonomous, mutable virtual content that is continuously augmented by the users. To make such systems highly scalable and dynamically extensible, they are usually built on a client-server grid subspace division in which the virtual world is partitioned into manageable sub-worlds. In each sub-world, the user continuously receives relevant geometry updates of moving objects from remotely connected servers and renders them according to her viewpoint, rather than retrieving them from a local storage medium. In such systems, determining the set of objects visible from a user's viewpoint is one of the primary factors affecting server throughput and scalability. Performing real-time visibility tests in extremely dynamic virtual environments is especially challenging when millions of objects and hundreds of thousands of active users are moving and interacting. We recognize that these challenges are closely related to a spatial database problem, and hence we map the moving geometry objects in the virtual space to a set of multi-dimensional objects in a spatial database, modeling each avatar both as a spatial object and as a moving query. Unfortunately, existing spatial indexing methods are unsuitable for this new kind of environment. The main goal of this paper is to present an efficient spatial index structure that minimizes unexpected object popping and supports highly scalable real-time visibility determination. We then uncover many useful properties of this structure and compare it with various spatial indexing methods in terms of query quality, system throughput, and resource utilization. We expect our approach to lay the groundwork for next-generation virtual frameworks that may merge into existing web-based services in the near future.
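
    The core idea of treating each avatar as a moving range query over indexed objects can be pictured with an ordinary uniform grid. The sketch below is a simplified stand-in with an assumed cell size and API, not the paper's edge-index structure.

    from collections import defaultdict

    CELL = 16.0  # assumed sub-world cell size

    class GridIndex:
        """Uniform-grid spatial index; avatars run range queries over it."""
        def __init__(self):
            self.cells = defaultdict(set)   # (cx, cy) -> set of object ids
            self.where = {}                 # object id -> current cell

        def _cell(self, x, y):
            return (int(x // CELL), int(y // CELL))

        def upsert(self, oid, x, y):
            """Insert or move an object, touching only the affected cells."""
            new = self._cell(x, y)
            old = self.where.get(oid)
            if old == new:
                return
            if old is not None:
                self.cells[old].discard(oid)
            self.cells[new].add(oid)
            self.where[oid] = new

        def visible(self, x, y, radius):
            """Conservative visible set: all cells overlapping the view range."""
            r = int(radius // CELL) + 1
            cx, cy = self._cell(x, y)
            out = set()
            for i in range(cx - r, cx + r + 1):
                for j in range(cy - r, cy + r + 1):
                    out |= self.cells.get((i, j), set())
            return out

    Each update touches only the cells an object leaves and enters, which is what keeps the per-move cost independent of world size; the paper's contribution lies in a more refined index over this kind of grid.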

    Content vs. Context: Visual and geographic information use in video landmark retrieval

    ACM Transactions on Multimedia Computing, Communications and Applications, 11. DOI: 10.1145/2700287

    Distributed Core Migration in Multicast Peer-to-Peer Streaming

    Recently, a number of application-level multicast protocols have been proposed that use a center-based approach. These multicast trees critically depend on the position of the core for a balanced tree organization. Motivated by peer-to-peer streaming media applications, this paper presents a novel approach for core migration in application-level multicast trees that aims to reduce the maximum overall delay. Our architecture is inspired by previous research that proposed migration as an approach to reducing the overall delay in the tree. Our core migration protocol uses a heuristic based on the tree diameter to determine the node to be elected as the core. We also present an alternative tree reorganization approach, which we compare with core migration. We present simulation results with our overlay protocol YimaCast and show the feasibility of the approach for trees of different sizes. We demonstrate that core migration works well in a very dynamic environment, while tree reconstruction is beneficial in more stable scenarios.
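
    A diameter-based core election can be pictured with the classic double-BFS trick: find one endpoint of the tree's longest path, find the other, and move the core to the node at the path's midpoint, which minimizes the worst-case hop count. The sketch below is one reading of such a heuristic; YimaCast's actual protocol and migration messages are not shown.

    from collections import deque

    def _bfs_farthest(adj, start):
        """Return the farthest node from start and the BFS parent map."""
        parent = {start: None}
        queue = deque([start])
        last = start
        while queue:
            last = queue.popleft()
            for nbr in adj[last]:
                if nbr not in parent:
                    parent[nbr] = last
                    queue.append(nbr)
        return last, parent

    def elect_core(adj):
        """Pick the node at the midpoint of the tree diameter as the new core."""
        any_node = next(iter(adj))
        u, _ = _bfs_farthest(adj, any_node)   # one endpoint of the diameter
        v, parent = _bfs_farthest(adj, u)     # other endpoint + path back to u
        path = []
        while v is not None:                  # reconstruct the diameter path
            path.append(v)
            v = parent[v]
        return path[len(path) // 2]           # midpoint minimizes max hop count

    In an overlay, each peer would report its subtree's depth rather than run a global BFS, but the elected node is the same: the center of the longest path.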

    Predictive modeling of streaming servers

    Observations of Device Orientation Decisions on Mobile Videos

    Sensor-rich mobile videos are a new type of video acquired with modern smartphones. During recording, varied sensor data collected from the embedded sensors are stored alongside the video. Unlike conventional videos acquired with proprietary capture devices, these videos allow an enriched reconstruction of the surrounding environment while remaining easy to record. In this paper, we examine the robustness of an existing device-orientation detection method by analyzing motion sensor samples publicly available from a sensor-rich mobile video hosting website, and we discuss our observations and the potential problems that arise when computing the device orientation of georeferenced mobile videos.
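
    As a concrete picture of the detection problem: gravity dominates a hand-held phone's acceleration, so whichever screen axis gravity projects onto hints at how the device is held. The snippet below classifies a single accelerometer sample this way; the threshold and naming are illustrative assumptions, not the method the paper analyzes.

    import math

    def device_orientation(ax, ay, az, tol_deg=30.0):
        # Classify orientation from one accelerometer sample (m/s^2), assuming
        # the device is roughly at rest so gravity dominates the reading.
        g = math.sqrt(ax * ax + ay * ay + az * az)
        if g == 0:
            return 'unknown'
        # Gravity mostly on z means the screen faces up or down.
        if abs(az) / g > math.cos(math.radians(tol_deg)):
            return 'flat'
        return 'portrait' if abs(ay) >= abs(ax) else 'landscape'

    Real recordings are noisier: shaking, walking, and mid-clip rotations all perturb the samples, which is precisely the robustness question the paper raises.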