4,405 research outputs found

    A study of event traffic during the shared manipulation of objects within a collaborative virtual environment

    Event management must balance consistency and responsiveness against the requirements of shared object interaction within a Collaborative Virtual Environment (CVE) system. An understanding of the event traffic during collaborative tasks helps in the design of all aspects of a CVE system. The application, user activity, display interface, and network resources all play a part in determining the characteristics of event management. Linked cubic displays lend themselves well to supporting natural social human communication between remote users. To allow users to communicate naturally and subconsciously, continuous and detailed tracking is necessary. This, however, is hard to balance with the real-time consistency constraints of general shared object interaction. This paper aims to explain these issues through a detailed examination of event traffic produced by a typical CVE, using both immersive and desktop displays, while supporting a variety of collaborative activities. We analyze event traffic during a highly collaborative task requiring various forms of shared object manipulation, including the concurrent manipulation of a shared object. Event sources are categorized, and the influence of the form of object sharing as well as the display device interface is detailed. The presented findings are intended to aid the design of future systems.
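
    As an illustration of the kind of per-source breakdown such a study produces, the sketch below groups a stream of hypothetical event records by source category and reports per-second event and byte rates. The Event fields and category names (e.g. head_tracker, object_update) are assumptions made for illustration, not the instrumentation used in the paper.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    # Hypothetical event record; categories such as "head_tracker", "hand_tracker",
    # "object_update" or "concurrency_control" are illustrative only.
    source: str
    timestamp: float   # seconds since session start
    size_bytes: int

def summarise_traffic(events):
    """Group events by source category and report event and byte rates per second."""
    if not events:
        return {}
    duration = max(e.timestamp for e in events) - min(e.timestamp for e in events) or 1.0
    counts = Counter(e.source for e in events)
    bytes_per_source = Counter()
    for e in events:
        bytes_per_source[e.source] += e.size_bytes
    return {
        src: {
            "events_per_s": counts[src] / duration,
            "bytes_per_s": bytes_per_source[src] / duration,
        }
        for src in counts
    }
```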

    Autonomous Satellite Operations Via Secure Virtual Mission Operations Center

    The science community is interested in improving its ability to respond to rapidly evolving, transient phenomena via autonomous rapid reconfiguration, which derives from the ability to assemble separate but collaborating sensors and data forecasting systems to meet a broad range of research and application needs. Current satellite systems typically require human intervention to respond to triggers from dissimilar sensor systems. Additionally, satellite ground services often need to be coordinated days or weeks in advance. Finally, the boundaries between the various sensor systems that make up such a Sensor Web are defined by such things as link delay and connectivity, data and error rate asymmetry, data reliability, quality of service provisions, and trust, complicating autonomous operations. Over the past ten years, researchers from the NASA Glenn Research Center (GRC), General Dynamics, Surrey Satellite Technology Limited (SSTL), Cisco, Universal Space Networks (USN), the U.S. Geological Survey (USGS), the Naval Research Laboratory, the DoD Operationally Responsive Space (ORS) Office, and others have worked collaboratively to develop a virtual mission operations capability. Called VMOC (Virtual Mission Operations Center), this new capability allows cross-system queuing of dissimilar mission-unique systems through the use of a common security scheme and published application programming interfaces (APIs). Collaborative VMOC demonstrations over the last several years have supported the standardization of spacecraft-to-ground interfaces needed to reduce costs, maximize space effects to the user, and allow the generation of new tactics, techniques and procedures that lead to responsive space employment.
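
    The following sketch illustrates, in hedged form, what cross-system queuing through a published API with a common security scheme might look like: a tasking request is serialized and signed with a pre-shared key before submission. The payload fields, the HMAC-based scheme and the SHARED_SECRET constant are hypothetical and do not reproduce VMOC's actual interfaces.

```python
import hashlib
import hmac
import json
import time

# Purely illustrative pre-shared credential; a real deployment would provision keys securely.
SHARED_SECRET = b"replace-with-provisioned-key"

def build_tasking_request(spacecraft_id, sensor, target, start_utc):
    """Assemble a signed, machine-readable tasking request for cross-system queuing."""
    payload = {
        "spacecraft": spacecraft_id,
        "sensor": sensor,
        "target": target,        # e.g. lat/lon of a transient event
        "start_utc": start_utc,
        "issued": time.time(),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"body": payload, "signature": signature}
```

    In such a scheme the receiving operations center would validate the signature against the same shared credential before queuing the request alongside tasks from other, dissimilar systems.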

    Fidelity optimization in distributed virtual environments

    In virtual environment systems, the ultimate goal is delivery of the highest-fidelity user experience possible. This dissertation shows that it is possible to increase the scalability of distributed virtual environments (DVEs), in a tractable fashion, through a novel application of optimization techniques. Fidelity is maximized by utilizing the given display and network capacity in an optimal fashion, individually tuned for multiple users, in a manner most appropriate to a specific DVE application. This optimization is accomplished using the QUICK framework for managing the display and request of representations for virtual objects. Ratings of representation Quality, object Importance, and representation Cost are included in model descriptions as special annotations. The QUICK optimization computes the fidelity contribution of a representation by combining these annotations with specifications of user task and platform capability. This dissertation contributes the QUICK optimization algorithms; a software framework for experimentation; and associated general-purpose formats for codifying Quality, Importance, Cost, task, and platform capability. Experimentation with the QUICK framework has shown overwhelming advantages in comparison with standard resource management techniques. http://www.archive.org/details/fidelityoptimiza00capp Civilian author. Approved for public release; distribution is unlimited.
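
    The QUICK algorithms themselves are not reproduced here, but the following minimal sketch conveys the shape of the problem: each object offers several representations annotated with Quality and Cost, each object carries an Importance, and one representation per object is chosen to maximize summed Quality x Importance under a single capacity budget. The data layout, the greedy upgrade heuristic and the single-budget capacity model are assumptions made for illustration.

```python
def select_representations(objects, capacity):
    """objects: {name: {"importance": float, "reps": [(quality, cost), ...]}}
    Returns the chosen (quality, cost) per object and the total cost spent."""
    # Start every object at its cheapest representation (assumed to fit the budget).
    choice = {}
    spent = 0.0
    for name, obj in objects.items():
        q, c = min(obj["reps"], key=lambda rc: rc[1])
        choice[name] = (q, c)
        spent += c
    # Repeatedly apply the upgrade with the best importance-weighted quality gain per unit cost.
    while True:
        best = None
        for name, obj in objects.items():
            q0, c0 = choice[name]
            for q, c in obj["reps"]:
                if q > q0 and spent - c0 + c <= capacity:
                    gain = obj["importance"] * (q - q0) / max(c - c0, 1e-9)
                    if best is None or gain > best[0]:
                        best = (gain, name, q, c)
        if best is None:
            break
        _, name, q, c = best
        spent += c - choice[name][1]
        choice[name] = (q, c)
    return choice, spent
```

    For example, with two objects that each offer a cheap low-quality and an expensive high-quality representation, a tight budget, and a much higher Importance for the user's avatar than for a background chair, the greedy upgrade spends the remaining capacity on the avatar first.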

    On Consistency and Network Latency in Distributed Interactive Applications: A Survey—Part I

    This paper is the first part of a two-part paper that documents a detailed survey of the research carried out on consistency and latency in distributed interactive applications (DIAs) in recent decades. Part I reviews the terminology associated with DIAs and offers definitions for consistency and latency. Related issues such as jitter and fidelity are also discussed. Furthermore, the various consistency maintenance mechanisms that researchers have used to improve consistency and reduce latency effects are considered. These mechanisms are grouped into one of three categories, namely time management, information management and system architectural management. This paper presents the techniques associated with the time management category. Examples of such mechanisms include time warp, lock-step synchronisation and predictive time management. The remaining two categories are presented in Part II of the survey.
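
    As a concrete example of the predictive time management techniques surveyed, the sketch below shows a dead-reckoning-style sender that only transmits a state update when the receiver's extrapolation would drift beyond an error threshold. The sample format and threshold test are assumptions made for illustration rather than a specific mechanism from the survey.

```python
def extrapolate(pos, vel, dt):
    """Linearly extrapolate a position from the last transmitted state."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def updates_needed(samples, threshold):
    """samples: list of (t, pos, vel) tuples; returns the subset a sender would transmit."""
    sent = [samples[0]]                      # the first state must always be sent
    for t, pos, vel in samples[1:]:
        t0, p0, v0 = sent[-1]
        predicted = extrapolate(p0, v0, t - t0)
        error = max(abs(a - b) for a, b in zip(predicted, pos))
        if error > threshold:                # receiver's prediction has drifted too far
            sent.append((t, pos, vel))
    return sent
```

    With samples arriving at, say, 60 Hz, only those whose prediction error exceeds the threshold are sent, which is how such schemes trade network traffic against spatial consistency.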

    Wireless Sensor Data Transport, Aggregation and Security

    Wireless sensor networks (WSNs), and the communication and security therein, have been gaining prominence in the tech industry recently with the emergence of the so-called Internet of Things (IoT). The path from acquiring data to making a reactive decision based on the acquired sensor measurements is complex and requires careful execution of several steps. In many of these steps there are still technological gaps to fill, because several primitives that are desirable in a sensor network environment are bolted onto the networks as application-layer functionalities rather than built into them. For several important functionalities that are at the core of IoT architectures, we have developed a solution that is analyzed and discussed in the following chapters. The chain of steps from the acquisition of sensor samples until these samples reach a control center or the cloud, where the data analytics are performed, starts with the acquisition of the sensor measurements at the correct time and, importantly, synchronously among all deployed sensors. This synchronization has to be network-wide, including both the wired core network as well as the wireless edge devices. This thesis studies a decentralized and lightweight solution to synchronize and schedule IoT devices over wireless and wired networks adaptively, with very simple local signaling. Furthermore, measurement results have to be transported and aggregated over the same interface, requiring clever coordination among all nodes, as network resources are shared, keeping scalability and fail-safe operation in mind. Ensuring the integrity of measurements is a further complicated task. On the one hand, cryptography can shield the network from outside attackers and is therefore the first step to take, but due to the volume of sensors it must rely on an automated key-distribution mechanism. On the other hand, cryptography does not protect against exposed keys or inside attackers. One can, however, exploit statistical properties to detect and identify nodes that send false information and exclude these attacker nodes from the network to avoid data manipulation. Furthermore, if data is supplied by a third party, one can apply an automated trust metric to each individual data source to decide which data to accept and consider for the mentioned statistical tests in the first place. Monitoring the cyber and physical activities of an IoT infrastructure in concert is another topic investigated in this thesis. Dissertation/Thesis: Doctoral Dissertation, Electrical Engineering, 201
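
    A minimal sketch of the statistical idea described above (not the thesis's actual test): readings of the same physical quantity from co-located nodes are compared against a robust consensus, and nodes whose values deviate strongly are flagged as potential false-data sources. The median/MAD statistic and the cut-off k are assumptions made for illustration.

```python
import statistics

def suspicious_nodes(readings, k=3.0):
    """readings: {node_id: measurement of the same physical quantity}.
    Flags nodes whose deviation from the robust consensus exceeds k MADs."""
    values = list(readings.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9  # avoid division by zero
    return {node for node, v in readings.items() if abs(v - med) / mad > k}
```

    For example, suspicious_nodes({"a": 20.1, "b": 19.8, "c": 35.0, "d": 20.3}) flags only node "c", whose reading is far from the consensus of its neighbours.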

    Applying global software development approaches to building high-performing software teams

    The rapid progress of communication technologies, combined with the growing competition for talent and knowledge, has made it necessary to reassess the potential of distributed development, which has significantly changed the landscape of the IT industry by introducing a variety of cooperation models and making notable changes to the software team work environment. Along with this, enterprises pay more attention to improving teams' performance, employing emerging management tools for building up efficient software teams, and trying to get the most out of understanding the factors which significantly impact a team's overall performance. The objective of this research is to systematize the factors characterizing high-performing software teams; to indicate the benefits of global software development (GSD) models that positively influence software teams' development performance; and to study how companies' strategies can benefit from distributed development approaches in building high-performing software teams. The thesis is designed as a systematic literature review followed by qualitative research in the form of semi-structured interviews to validate the findings regarding the classification of GSD models' benefits and their influence on the development of high-performing software teams. At the literature review stage, the research (1) introduces a team performance factors model reflecting the aspects which impact the effectiveness of development teams; (2) suggests a classification of GSD models based on organizational, legal, and temporal characteristics; and (3) describes the benefits of GSD models which influence the performance of software development teams. Within the empirical part of the study, we refine the classification of GSD models' benefits based on the qualitative analysis of semi-structured interviews with practitioners from the IT industry, form a comparison table of GSD benefits depending on the model in question, and introduce recommendations for company and team management regarding the application of GSD in building high-performing software teams. To achieve their strategic goals, IT corporations can enrich their range of tools for managing high-performing teams by considering the peculiarities of different GSD models. Company and team management should evaluate the advantages of distributed operational models, and use the potential and benefits of available configurations to increase teams' performance and build high-performing software teams.

    DiSCmap: digitisation of special collections mapping, assessment, prioritisation. Final project report

    Traditionally, digitisation has been led by supply rather than demand. While end users are seen as a priority, they are not directly consulted about which collections they would like to have made available digitally or why. This can be seen in a wide range of policy documents throughout the cultural heritage sector, where users are positioned as central but where their preferences are assumed rather than solicited. Post-digitisation consultation with end users is equally rare. How are we to know that digitisation is serving the needs of the Higher Education community and is sustainable in the long term? The 'Digitisation in Special Collections: mapping, assessment and prioritisation' (DiSCmap) project, funded by the Joint Information Systems Committee (JISC) and the Research Information Network (RIN), aimed to:
    - Identify priority collections for potential digitisation housed within UK Higher Education's libraries, archives and museums as well as faculties and departments.
    - Assess users' needs and demand for Special Collections to be digitised across all disciplines.
    - Produce a synthesis of available knowledge about users' needs with regard to usability and format of digitised resources.
    - Provide recommendations for a strategic approach to digitisation within the wider context and activity of leading players both in the public and commercial sector.
    The project was carried out jointly by the Centre for Digital Library Research (CDLR) and the Centre for Research in Library and Information Management (CERLIM) and has taken a collaborative approach to the creation of a user-driven digitisation prioritisation framework, encouraging participation and collective engagement between communities. Between September 2008 and March 2009 the DiSCmap project team asked over 1,000 users, including intermediaries (vocational users who take care of collections) and end users (university teachers, researchers and students), a variety of questions about which physical and digital Special Collections they make use of and what criteria they feel must be considered when selecting materials for digitisation. This was achieved through workshops, interviews and two online questionnaires. Although the data gathered from these activities has the limitation of reflecting only a partial view on priorities for digitisation - the view expressed by those institutions who volunteered to take part in the study - DiSCmap was able to develop:
    - a 'long list' of 945 collections nominated for digitisation both by intermediaries and end users from 70 HE institutions (see p. 21);
    - a framework of user-driven prioritisation criteria which could be used to inform current and future digitisation priorities (see p. 45);
    - a set of 'short lists' of collections which exemplify the application of user-driven criteria from the prioritisation framework to the long list (see Appendix X):
      o Collections nominated more than once by various groups of users.
      o Collections related to a specific policy framework, e.g. HEFCE's strategically important and vulnerable subjects for Mathematics, Chemistry and Physics.
      o Collections on specific thematic clusters.
      o Collections with the highest number of reasons for digitisation.
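
    As a hypothetical illustration of applying user-driven criteria to a nominated long list, the sketch below scores collections with weighted criteria and sorts them into a priority order. The criteria names and weights are invented for illustration and are not DiSCmap's actual prioritisation framework.

```python
# Assumed example criteria and weights, not the project's published framework.
CRITERIA_WEIGHTS = {
    "times_nominated": 3.0,   # nominated more than once by various user groups
    "policy_relevance": 2.0,  # e.g. strategically important and vulnerable subjects
    "thematic_cluster": 1.0,  # belongs to a thematic cluster of interest
}

def prioritise(collections):
    """collections: list of dicts like {'name': str, 'times_nominated': int, ...}.
    Returns the collections sorted by weighted criterion score, highest first."""
    def score(col):
        return sum(w * float(col.get(criterion, 0)) for criterion, w in CRITERIA_WEIGHTS.items())
    return sorted(collections, key=score, reverse=True)
```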

    How data will transform industrial processes: crowdsensing, crowdsourcing and big data as pillars of industry 4.0

    We are living in the era of the fourth industrial revolution, namely Industry 4.0. This paper presents the main aspects related to Industry 4.0, the technologies that will enable this revolution, and the main application domains that will be affected by it. The effects that the introduction of Internet of Things (IoT), Cyber-Physical Systems (CPS), crowdsensing, crowdsourcing, cloud computing and big data will have on industrial processes will be discussed. The main objectives will be represented by improvements in: production efficiency, quality and cost-effectiveness; workplace health and safety, as well as quality of working conditions; products' quality and availability, according to mass customisation requirements. The paper will further discuss the common denominator of these enhancements, i.e., data collection and analysis. As data and information will be crucial for Industry 4.0, crowdsensing and crowdsourcing will introduce new advantages and challenges, which will make most of the industrial processes easier with respect to traditional technologies.