5,779 research outputs found

    Network Inference via the Time-Varying Graphical Lasso

    Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability.
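    As a rough illustration of the two penalties this abstract combines, the sketch below implements the scalar proximal operators an ADMM-style solver would alternate between: an L1 shrinkage that sparsifies precision-matrix entries, and a temporal shrinkage that discourages abrupt change between consecutive time windows. This is a minimal sketch, not the paper's algorithm; the function names and the choice of an element-wise L1 temporal penalty are illustrative assumptions.

    ```python
    # Two proximal operators at the heart of a time-varying graphical
    # lasso solver, shown on plain floats for clarity.

    def soft_threshold(x, tau):
        """Prox of tau*|x|: shrinks x toward zero, inducing sparsity."""
        if x > tau:
            return x - tau
        if x < -tau:
            return x + tau
        return 0.0

    def prox_temporal_l1(theta_prev, theta_curr, beta):
        """Prox of beta*|theta_curr - theta_prev| in theta_curr: pulls
        consecutive precision-matrix entries toward each other, so the
        inferred network changes slowly between time windows."""
        return theta_prev + soft_threshold(theta_curr - theta_prev, beta)

    # Sparsity step: weak partial correlations are zeroed out, removing
    # the corresponding edge from the inferred network.
    print(soft_threshold(0.75, 0.25))        # 0.5
    print(soft_threshold(0.1, 0.25))         # 0.0 -> edge removed
    # Temporal step: an abrupt jump between windows is damped.
    print(prox_temporal_l1(1.0, 2.0, 0.5))   # 1.5
    ```

    In the full method these operators act element-wise on the sequence of estimated inverse covariance matrices inside the ADMM iterations.
    
    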

    Visual complexity, player experience, performance and physical exertion in motion-based games for older adults

    Motion-based video games can have a variety of benefits for the players and are increasingly applied in physical therapy, rehabilitation and prevention for older adults. However, little is known about how this audience experiences playing such games, how the player experience affects the way older adults interact with motion-based games, and how this can relate to therapy goals. In our work, we decompose the player experience of older adults engaging with motion-based games, focusing on the effects of manipulations of the game representation through the visual channel (visual complexity), since it is the primary interaction modality of most games and since vision impairments are common amongst older adults. We examine the effects of different levels of visual complexity on player experience, performance, and exertion in a study with fifteen participants. Our results show that visual complexity affects the way games are perceived in two ways: First, while older adults do have preferences in terms of visual complexity of video games, notable effects were only measurable following drastic variations. Second, perceived exertion shifts depending on the degree of visual complexity. These findings can help inform the design of motion-based games for therapy and rehabilitation for older adults.

    Longitudinal LASSO: Jointly Learning Features and Temporal Contingency for Outcome Prediction

    Longitudinal analysis is important in many disciplines, such as the study of behavioral transitions in social science. Only very recently has feature selection drawn adequate attention in the context of longitudinal modeling. Standard techniques, such as generalized estimating equations, have been modified to select features by imposing sparsity-inducing regularizers. However, they do not explicitly model how a dependent variable relies on features measured at proximal time points. Recent graphical Granger modeling can select features in lagged time points but ignores the temporal correlations within an individual's repeated measurements. We propose an approach to automatically and simultaneously determine both the relevant features and the relevant temporal points that impact the current outcome of the dependent variable. Meanwhile, the proposed model takes into account the non-i.i.d. nature of the data by estimating the within-individual correlations. This approach decomposes model parameters into a summation of two components and imposes separate block-wise LASSO penalties on each component when building a linear model in terms of the past τ measurements of features. One component is used to select features whereas the other is used to select temporal contingent points. An accelerated gradient descent algorithm is developed to efficiently solve the related optimization problem with detailed convergence analysis and asymptotic analysis. Computational results on both synthetic and real world problems demonstrate the superior performance of the proposed approach over existing techniques. Comment: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 201
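    The block-wise LASSO penalties mentioned above can be illustrated with the standard group soft-thresholding operator: a whole block of coefficients (e.g., one feature across all τ lags, or one time point across all features) is either zeroed out together or shrunk uniformly. This is a minimal sketch under that interpretation; the function name and grouping scheme are assumptions, not the paper's code.

    ```python
    import math

    def group_soft_threshold(block, lam):
        """Prox of lam * ||block||_2 (group lasso shrinkage): zeroes the
        entire group when its L2 norm is below lam, otherwise shrinks
        every entry by the same factor."""
        norm = math.sqrt(sum(v * v for v in block))
        if norm <= lam:
            return [0.0] * len(block)
        scale = 1.0 - lam / norm
        return [scale * v for v in block]

    # A feature whose coefficients across all lags are weak is dropped
    # entirely -- this is the feature-selection component.
    print(group_soft_threshold([0.1, -0.1, 0.05], 0.5))  # [0.0, 0.0, 0.0]
    # A strongly predictive block survives, merely shrunk -- applied to
    # time-point blocks, the same operator selects temporal contingencies.
    print(group_soft_threshold([3.0, 4.0], 2.5))         # [1.5, 2.0]
    ```

    An accelerated gradient (FISTA-style) solver would apply this operator to each block after every gradient step on the smooth loss.
    
    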

    Responsibility modelling for civil emergency planning

    This paper presents a new approach to analysing and understanding civil emergency planning based on the notion of responsibility modelling combined with HAZOPS-style analysis of information requirements. Our goal is to represent complex contingency plans so that they can be more readily understood, inconsistencies highlighted, and vulnerabilities discovered. In this paper, we outline the framework for contingency planning in the United Kingdom and introduce the notion of responsibility models as a means of representing the key features of contingency plans. Using a case study of a flooding emergency, we illustrate our approach to responsibility modelling and suggest how it adds value to current textual contingency plans.

    Real-time internet control of situated human agents


    Grounding the curriculum: learning from live projects in architectural education

    For more than twenty years architects in the UK have advocated the use of ‘live’ projects in architecture schools as an alternative to the more traditional model of studio learning, but the educational establishment continues to marginalize community-based approaches to learning. Recent debate, focusing on shortcomings of the studio culture in architectural education, has condemned the isolation of students from real world contexts and teaching methods that cultivate values of individualism and competition. As an alternative, many claims have been made about the potential for enhancing student learning by adopting live briefs and involving clients and users in the education of architects. Yet much of the literature remains largely speculative or descriptive and so far has neglected to investigate participatory design processes to determine their precise pedagogic value. The aims of this paper are to examine the nature of learning in student projects outside the studio environment, to locate that learning within a range of categories of learning, and to develop a conceptual structure for further exploration of alternative pedagogies in architectural education. The study is based on evaluations of two participatory design projects carried out with students at Lincoln School of Architecture in the UK. Students’ perceptions of the learning they acquired are compared with the intended learning outcomes identified by tutors at the start of the projects, and these are further contrasted with the ‘competencies’ that are typical outcomes of the traditional curriculum. The findings, which reveal significant contingent and emergent learning in the live projects, are then discussed in relation to recognized theories of learning, such as experiential learning, social constructionism, situated learning and collaborative learning. The objective is to identify an appropriate theoretical framework that may be used to draw attention to the valuable contribution of live project learning in architectural education and support arguments in favour of a more expansive and socially grounded architecture curriculum.

    The Digital Architectures of Social Media: Comparing Political Campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 U.S. Election

    The present study argues that political communication on social media is mediated by a platform's digital architecture, defined as the technical protocols that enable, constrain, and shape user behavior in a virtual space. A framework for understanding digital architectures is introduced, and four platforms (Facebook, Twitter, Instagram, and Snapchat) are compared along the typology. Using the 2016 US election as a case, interviews with three Republican digital strategists are combined with social media data to qualify the study's theoretical claim that a platform's network structure, functionality, algorithmic filtering, and datafication model affect political campaign strategy on social media.

    TrusNet: Peer-to-Peer Cryptographic Authentication

    Originally, the Internet was meant as a general purpose communication protocol, transferring primarily text documents between interested parties. Over time, documents expanded to include pictures, videos and even web pages. Increasingly, the Internet is being used to transfer a new kind of data which it was never designed for. In most ways, this new data type fits in naturally to the Internet, taking advantage of the near-limitless expanse of the protocol. Hardware protocols, unlike previous data types, present a unique set of security problems. Much like financial data, hardware protocols extended across the Internet must be protected with authentication. Currently, systems which do authenticate do so through a central server, utilizing a similar authentication model to the HTTPS protocol. This hierarchical model is often at odds with the needs of hardware protocols, particularly in ad-hoc networks where peer-to-peer communication is prioritized over a hierarchical model. Our project attempts to implement a peer-to-peer cryptographic authentication protocol to be used to protect hardware protocols extending over the Internet. The TrusNet project uses public-key cryptography to authenticate nodes on a distributed network, with each node locally managing a record of the public keys of nodes which it has encountered. These keys are used to secure data transmission between nodes and to authenticate the identities of nodes. TrusNet is designed to be used on multiple different types of network interfaces, but currently only has explicit hooks for Internet Protocol connections. As of June 2016, TrusNet has successfully achieved a basic authentication and communication protocol on Windows 7, OSX, Linux 14 and the Intel Edison. TrusNet uses RC4 as its stream cipher and RSA as its public-key algorithm, although both of these are easily configurable.
    Along with the library, TrusNet also includes a unit testing suite, a simple UI application designed to visualize the basics of the system, and a build with hooks into the I/O pins of the Intel Edison allowing for a basic demonstration of the system.
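    The decentralized pattern the abstract describes — each node keeping its own local record of peer public keys and verifying peers with signed challenges, rather than consulting a central authority — can be sketched as follows. Textbook RSA with tiny fixed primes is used only so the sketch runs on the standard library; it is not secure and is not TrusNet's actual key handling, and all class, method, and peer names are illustrative assumptions.

    ```python
    import os
    import hashlib

    # Classic toy RSA parameters: n = 61 * 53, e*d = 1 (mod 3120).
    # Real deployments would generate large per-node keys.
    RSA_N, RSA_E, RSA_D = 3233, 17, 2753

    class Node:
        def __init__(self, name):
            self.name = name
            self.d = RSA_D                # private exponent, never shared
            self.pub = (RSA_N, RSA_E)     # public key, given to peers
            self.known_peers = {}         # peer name -> public key on record

        def sign(self, challenge):
            # Sign the hash of a fresh challenge with the private key.
            m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % RSA_N
            return pow(m, self.d, RSA_N)

        def record_peer(self, peer_name, peer_pub):
            # First encounter: store the key locally (trust on first use),
            # no central server involved.
            self.known_peers[peer_name] = peer_pub

        def authenticate(self, peer_name, challenge, signature):
            # Verify the peer still controls the key we recorded earlier.
            if peer_name not in self.known_peers:
                return False              # never met this peer: cannot verify
            n, e = self.known_peers[peer_name]
            m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
            return pow(signature, e, n) == m

    alice, bob = Node("alice"), Node("bob")
    alice.record_peer("bob", bob.pub)     # first contact: key goes on record
    challenge = os.urandom(16)            # fresh nonce prevents replay
    print(alice.authenticate("bob", challenge, bob.sign(challenge)))      # True
    print(alice.authenticate("mallory", challenge, bob.sign(challenge)))  # False
    ```

    After this challenge-response succeeds, the peers would switch to a stream cipher (RC4 in TrusNet's default configuration) for the data channel.
    
    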