100 research outputs found

    Code: Version 2.0

    Discusses the regulation of cyberspace through code and the trends to expect in that regulation. Additional topics discussed in this context include intellectual property, privacy, and free speech.

    Multistep-Ahead Neural-Network Predictors for Network Traffic Reduction in Distributed Interactive Applications

    Predictive contract mechanisms such as dead reckoning are widely employed to support scalable remote entity modeling in distributed interactive applications (DIAs). By employing a form of controlled inconsistency, a reduction in network traffic is achieved. However, by relying on the distribution of instantaneous derivative information, dead reckoning trades remote extrapolation accuracy for low computational complexity and ease-of-implementation. In this article, we present a novel extension of dead reckoning, termed neuro-reckoning, that seeks to replace the use of instantaneous velocity information with predictive velocity information in order to improve the accuracy of entity position extrapolation at remote hosts. Under our proposed neuro-reckoning approach, each controlling host employs a bank of neural network predictors trained to estimate future changes in entity velocity up to and including some maximum prediction horizon. The effect of each estimated change in velocity on the current entity position is simulated to produce an estimate for the likely position of the entity over some short time-span. Upon detecting an error threshold violation, the controlling host transmits a predictive velocity vector that extrapolates through the estimated position, as opposed to transmitting the instantaneous velocity vector. Such an approach succeeds in reducing the spatial error associated with remote extrapolation of entity state. Consequently, a further reduction in network traffic can be achieved. Simulation results conducted using several human users in a highly interactive DIA indicate significant potential for improved scalability when compared to the use of IEEE DIS standard dead reckoning. Our proposed neuro-reckoning framework exhibits low computational resource overhead for real-time use and can be seamlessly integrated into many existing dead reckoning mechanisms
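The threshold-triggered update loop that the abstract extends can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the names (`extrapolate`, `needs_update`, `ERROR_THRESHOLD`) and the specific numbers are assumptions.

```python
# Hypothetical sketch of the first-order dead reckoning loop that
# neuro-reckoning extends; all identifiers here are illustrative.
import numpy as np

ERROR_THRESHOLD = 0.5  # spatial error (world units) that triggers an update


def extrapolate(pos, vel, dt):
    """First-order dead reckoning: project a position along a velocity."""
    return pos + vel * dt


def needs_update(true_pos, ghost_pos):
    """Controlling host compares its true state against the remote 'ghost'."""
    return np.linalg.norm(true_pos - ghost_pos) > ERROR_THRESHOLD


# Standard dead reckoning transmits the *instantaneous* velocity on a
# threshold violation; neuro-reckoning would instead transmit a
# *predictive* velocity vector estimated by the bank of neural-network
# predictors, so that remote extrapolation passes through the entity's
# likely future position rather than its current heading.
true_pos = np.array([10.0, 0.0])
ghost_pos = np.array([9.3, 0.2])
if needs_update(true_pos, ghost_pos):
    instantaneous_vel = np.array([1.0, 0.1])  # what standard DIS dead reckoning sends
    # predictive_vel would come from the trained predictors (not shown)
    print("update required; transmit state + velocity")
```

Either way, fewer threshold violations mean fewer update packets, which is where the traffic reduction comes from.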

    Race, Religion and the City: Twitter Word Frequency Patterns Reveal Dominant Demographic Dimensions in the United States

    Recently, numerous approaches have emerged in the social sciences to exploit the opportunities made possible by the vast amounts of data generated by online social networks (OSNs). Having access to information about users on such a scale opens up a range of possibilities, all without the limitations associated with often slow and expensive paper-based polls. A question that remains to be satisfactorily addressed, however, is how demography is represented in OSN content. Here, we study language use in the US using a corpus of text compiled from over half a billion geo-tagged messages from the online microblogging platform Twitter. Our intention is to reveal the most important spatial patterns in language use in an unsupervised manner and relate them to demographics. Our approach is based on Latent Semantic Analysis (LSA) augmented with the Robust Principal Component Analysis (RPCA) methodology. We find spatially correlated patterns that can be interpreted based on the words associated with them. The main language features can be related to slang use, urbanization, travel, religion and ethnicity, the patterns of which are shown to correlate plausibly with traditional census data. Our findings thus validate the concept of demography being represented in OSN language use and show that the traits observed are inherently present in the word frequencies without any previous assumptions about the dataset. Thus, they could form the basis of further research focusing on the evaluation of demographic data estimation from other big data sources, or on the dynamical processes that result in the patterns found here.
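The LSA step described above can be sketched on toy data: an SVD of a region-by-word frequency matrix yields spatial components that are interpreted through their word loadings. Everything below (matrix sizes, the log preprocessing, the rank) is an assumption for illustration; the RPCA augmentation, which splits the matrix into low-rank and sparse parts before this step, is omitted.

```python
# Toy sketch of LSA over a (region x word) frequency matrix; the real
# study uses Twitter word counts and augments this with RPCA.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_words = 50, 200
counts = rng.poisson(3.0, size=(n_regions, n_words)).astype(float)

# Log-transform and centre, a common preprocessing before the SVD.
X = np.log1p(counts)
X -= X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5                             # keep the dominant spatial patterns
region_scores = U[:, :k] * s[:k]  # pattern strength per region
word_loadings = Vt[:k]            # words that define each pattern

# Sorting a row of word_loadings surfaces the words most strongly
# associated with that spatial component (slang, religion, ...).
top_words = np.argsort(word_loadings[0])[-10:]
```

The region scores can then be mapped geographically and compared against census variables, which is how the components are related to demographics.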

    Semantic Selection of Internet Sources through SWRL Enabled OWL Ontologies

    This research examines the problem of Information Overload (IO) and gives an overview of various attempts to resolve it. It argues that instead of fighting IO, it is advisable to learn how to live with it: in the modern information age, where users are both producers and consumers of information, the amount of data and information generated is unlikely to decrease. Moreover, when managing IO, users are confined to the algorithms and policies of commercial Search Engines and Recommender Systems (RSs), whose results themselves add to IO. This research therefore calls for a change in thinking: giving greater power to users in addressing the relevance and accuracy of internet searches, which in turn mitigates IO. However powerful search engines are, they do not process enough semantics at the moment when search queries are formulated. This research proposes a semantic selection of internet sources through SWRL-enabled OWL ontologies. It focuses on SWT and its stack because they (a) secure the semantic interpretation of the environments where internet searches take place and (b) guarantee reasoning that results in the selection of suitable internet sources at a particular moment of an internet search. It is therefore important to model the behaviour of users through OWL concepts and to reason upon them in order to address IO when searching the internet. User behaviour is itemized through user preferences, perceptions and expectations from internet searches. The proposed approach is a Software Engineering (SE) solution which provides computations based on the semantics of the environment stored in the ontological model.
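The selection logic the abstract describes can be mocked up in plain Python. In the actual work the user model lives in OWL ontologies and the rules are SWRL; here both are stood in for by dictionaries and predicates, and every name, source, and rule below is a made-up example purely to illustrate how preferences, perceptions and expectations could drive source selection.

```python
# Schematic stand-in for SWRL-rule-based source selection; all data and
# rules here are hypothetical illustrations, not from the research.
user_model = {
    "preferences": {"language": "en", "domain": "medicine"},
    "perceptions": {"trusts_peer_review": True},
    "expectations": {"max_results": 2},
}

sources = [
    {"name": "PubMed", "language": "en", "domain": "medicine", "peer_reviewed": True},
    {"name": "GenericBlog", "language": "en", "domain": "medicine", "peer_reviewed": False},
    {"name": "ArXiv", "language": "en", "domain": "physics", "peer_reviewed": False},
]

# Each predicate plays the role of a SWRL rule: an antecedent over the
# user model and a candidate source, with the consequent "suitable".
rules = [
    lambda u, s: s["domain"] == u["preferences"]["domain"],
    lambda u, s: s["language"] == u["preferences"]["language"],
    lambda u, s: s["peer_reviewed"] or not u["perceptions"]["trusts_peer_review"],
]


def select_sources(user, candidates):
    """Return the sources satisfying every rule, capped by expectations."""
    suitable = [s for s in candidates if all(r(user, s) for r in rules)]
    return suitable[: user["expectations"]["max_results"]]


print([s["name"] for s in select_sources(user_model, sources)])  # ['PubMed']
```

In the ontological version, an OWL reasoner rather than a list comprehension would derive which sources satisfy the rules.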

    A Social Dimension for Digital Architectural Practice

    This thesis proceeds from an analysis of practice and critical commentary to claim that the opportunities presented to some architectural practices by the advent of ubiquitous digital technology have not been properly exploited. The missed opportunities, it claims, can be attributed largely to the retention of a model of time and spaces as discrete design parameters, which is inappropriate in the context of the widening awareness of social interconnectedness that digital technology has also facilitated. As a remedy, the thesis shows that some social considerations essential to good architecture - which could have been more fully integrated in practice and theory more than a decade ago - can now be usefully revisited through a systematic reflection on an emerging use of web technologies that support social navigation. The thesis argues through its text and a number of practical projects that the increasing confidence and sophistication of interdisciplinary studies in geography, most notably in human geography, combined with the technological opportunities of social navigation, provide a useful model of time and space as a unified design parameter. In so doing the thesis suggests new possibilities for architectural practices involving social interaction. Through a literature review of the introduction and development of digital technologies to architectural practice, the thesis identifies the inappropriate persistence of a number of overarching concepts informing architectural practice. In a review of the emergence and growth of 'human geography' it elaborates on the concept of the social production of space, which it relates to an analysis of emerging social navigation technologies. In so doing the thesis prepares the way for an integration of socially aware architecture with the opportunities offered by social computing.
To substantiate its claim, the thesis includes a number of practical public projects that have been specifically designed to extend and amplify certain concepts, along with a large-scale design project and systematic analysis which is intended to illustrate the theoretical claim and provide a model for further practical exploitation.

    Supporting exploratory browsing with visualization of social interaction history

    This thesis is concerned with the design, development, and evaluation of information visualization tools for supporting exploratory browsing. Information retrieval (IR) systems currently do not support browsing well. Responding to user queries, IR systems typically compute relevance scores of documents and then present the document surrogates to users in order of relevance. Other systems such as email clients and discussion forums simply arrange messages in reverse chronological order. Using these systems, people cannot gain an overview of a collection easily, nor do they receive adequate support for finding potentially useful items in the collection. This thesis explores the feasibility of using social interaction history to improve exploratory browsing. Social interaction history refers to traces of interaction among users in an information space, such as discussions that happen in the blogosphere or online newspapers through the commenting facility. The basic hypothesis of this work is that social interaction history can serve as a good indicator of the potential value of information items. Therefore, visualization of social interaction history would offer navigational cues for finding potentially valuable information items in a collection. To test this basic hypothesis, I conducted three studies. First, I ran statistical analysis of a social media data set. The results showed that there were positive relationships between traces of social interaction and the degree of interestingness of web articles. Second, I conducted a feasibility study to collect initial feedback about the potential of social interaction history to support information exploration. Comments from the participants were in line with the research hypothesis. Finally, I conducted a summative evaluation to measure how well visualization of social interaction history can improve exploratory browsing. 
The results showed that visualization of social interaction history was able to help users find interesting articles, to reduce wasted effort, and to increase user satisfaction with the visualization tool.

    Boundary Images

    How are images made, and how should we understand the capacities of digital images? This book investigates images as well as the technologies that host them. Its three chapters discuss the boundaries that images cross and blur between humans, machines, and nature and the ways in which images are political, material, and visual. Exploring these boundaries of images, this book places itself at the limits of the visual and beyond what can be seen, understanding these as starting points for the production of new and radically different ways of knowing about the world and its becomings.

    Collaborative Workspaces within Distributed Virtual Environments

    In warfare, be it a training simulation or actual combat, a commander's time is one of the most valuable and fleeting resources of a military unit. Thus, it is natural for a unit to have a plethora of personnel to analyze and filter information to the decision-maker. This dynamic exchange of ideas between analyst and commander is currently not available within the distributed interactive simulation (DIS) community. This lack of exchange limits the usefulness of the DIS experience to the commander and his troops. This thesis addresses the commander's isolation problem through the integration of a collaborative workspace within AFIT's Synthetic BattleBridge (SBB) as a technique to improve situational awareness. The SBB's Collaborative Workspace enhances battlespace awareness through CSCW (computer supported cooperative work) enabling communication technologies. The SBB's Collaborative Workspace allows the user to interact with other SBB users through the transmission and reception of public bulletins, private email, real-time chat sessions, shared viewpoints, shared video, and shared annotations to the virtual environment. Collaborative communication between SBBs occurs through the use of standard and experimental DIS-compliant protocol data units. The SBB's Collaborative Workspace gives the battlespace commander the widest range of communication options available within a DIS virtual environment today.

    An Information-Theoretic Framework for Consistency Maintenance in Distributed Interactive Applications

    Distributed Interactive Applications (DIAs) enable geographically dispersed users to interact with each other in a virtual environment. A key factor to the success of a DIA is the maintenance of a consistent view of the shared virtual world for all the participants. However, maintaining consistent states in DIAs is difficult under real networks. State changes communicated by messages over such networks suffer latency leading to inconsistency across the application. Predictive Contract Mechanisms (PCMs) combat this problem through reducing the number of messages transmitted in return for perceptually tolerable inconsistency. This thesis examines the operation of PCMs using concepts and methods derived from information theory. This information theory perspective results in a novel information model of PCMs that quantifies and analyzes the efficiency of such methods in communicating the reduced state information, and a new adaptive multiple-model-based framework for improving consistency in DIAs. The first part of this thesis introduces information measurements of user behavior in DIAs and formalizes the information model for PCM operation. In presenting the information model, the statistical dependence in the entity state, which makes using extrapolation models to predict future user behavior possible, is evaluated. The efficiency of a PCM to exploit such predictability to reduce the amount of network resources required to maintain consistency is also investigated. It is demonstrated that from the information theory perspective, PCMs can be interpreted as a form of information reduction and compression. The second part of this thesis proposes an Information-Based Dynamic Extrapolation Model for dynamically selecting between extrapolation algorithms based on information evaluation and inferred network conditions. 
This model adapts PCM configurations to both user behavior and network conditions, and makes the most information-efficient use of the available network resources. In doing so, it improves PCM performance and consistency in DIAs.
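The kind of information measurement the thesis builds on can be illustrated with a toy calculation: the empirical entropy of quantized state changes bounds the bits a PCM must communicate, so predictable motion (low entropy) leaves more redundancy for extrapolation models to exploit. The quantization scheme and distributions below are assumptions for illustration, not the thesis's exact formulation.

```python
# Illustrative entropy measurement of quantized entity-velocity changes;
# the alphabets and probabilities here are made-up examples.
import numpy as np


def entropy_bits(symbols):
    """Empirical Shannon entropy (bits/symbol) of a discrete sequence."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())


rng = np.random.default_rng(1)
# Smooth, predictable motion: small quantized velocity deltas.
smooth = rng.choice([-1, 0, 1], size=2000, p=[0.1, 0.8, 0.1])
# Erratic motion: deltas close to uniform over a larger alphabet.
erratic = rng.integers(-4, 5, size=2000)

# Lower entropy means more redundancy for a PCM to remove, which is the
# sense in which PCMs act as a form of information compression.
print(entropy_bits(smooth), "<", entropy_bits(erratic))
```

An adaptive scheme in this spirit could monitor such measurements online and switch to the extrapolation model that best matches the observed predictability and the inferred network conditions.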