16 research outputs found

    Augmented analyses: supporting the study of ubiquitous computing systems

    Get PDF
    Ubiquitous computing is becoming an increasingly prevalent part of our everyday lives. The reliance of society upon devices such as mobile phones, coupled with the increasing complexity of those devices, is an example of how our everyday human-human interaction is affected by this phenomenon. Social scientists studying human-human interaction must now take into account the effects of these technologies not just on the interaction itself, but also on the approach required to study it. User evaluation is a challenging topic in ubiquitous computing. It is generally considered to be difficult, certainly more so than in previous computational settings. Heterogeneity in design, distributed and mobile users, invisible sensing systems and so on all combine to render traditional methods of observation and evaluation insufficient for constructing a complete view of interactional activity. These challenges necessitate the development of new observational technologies. This thesis explores some of those challenges and demonstrates that system logs, with suitable methods of synchronising, filtering and visualising them for use in conjunction with more traditional observational approaches such as video, can be used to overcome many of these issues. Through a review of both the literature of the field and the state of the art in computer-aided qualitative data analysis software (CAQDAS), a series of guidelines is constructed showing what would be required of a software toolkit to meet the challenges of studying ubiquitous computing systems. The thesis outlines the design and implementation of two such software packages, Replayer and Digital Replay System, which approach the problem from different angles: the former is focussed on visualising and exploring the data in system logs, while the latter focusses on supporting the methods used by social scientists to perform qualitative analyses. The thesis shows through case studies how this technique can be applied to add significant value to the qualitative analysis of ubiquitous computing systems: how the coordination of system logs and other media can help us find information in the data that would otherwise be inaccessible; how it enables studies in locations and settings that would otherwise be impossible, or at least very difficult, to study; and how creating accessible qualitative data analysis tools allows people who could not previously have studied particular settings or technologies to do so. This software aims to demonstrate the direction in which other CAQDAS packages may have to move in order to support the study of the characteristics of human-computer and human-human interaction in a world increasingly reliant upon ubiquitous computing technology.
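    The synchronisation step this abstract alludes to can be sketched briefly. The function below, a minimal illustration rather than anything from Replayer or Digital Replay System, maps a system-log timestamp onto a video recording's timeline, with an optional correction for measured skew between the device clock and the camera clock:

```python
# Minimal sketch of log-to-video synchronisation; all names are illustrative.
import datetime as dt

def log_time_to_video_seconds(event_time: dt.datetime,
                              video_start: dt.datetime,
                              clock_skew: float = 0.0) -> float:
    """Map a log event onto video-relative seconds, correcting any
    measured skew between the device clock and the camera clock."""
    return (event_time - video_start).total_seconds() + clock_skew

video_start = dt.datetime(2009, 6, 1, 14, 0, 0)
event = dt.datetime(2009, 6, 1, 14, 3, 27)
print(log_time_to_video_seconds(event, video_start, clock_skew=-1.5))  # 205.5
```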

    Un modèle pour la gestion et la capitalisation d'analyses de traces d'activités en interaction collaborative

    Get PDF
    We present our three main results in addressing the problem of assisting the socio-cognitive analysis of human interaction. First, we propose a description of the process of analysing such data, as well as a generic artefact, which we call a replayable, that covers a large number of the analytic artefacts we have observed. Second, we present a study and a model of replayables, and describe the four fundamental operations which can be applied to them: synchronisation, visualisation, transformation and enrichment. Finally, we describe the implementation of this model in an environment that assists analysis through the manipulation of replayables, which we evaluate in real-life research situations. Tatiana (http://code.google.com/p/tatiana), the resulting software environment, is based on these four operations and integrates numerous possibilities for extending them to adapt to new kinds of analysis while staying within the analytic framework afforded by replayables.
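    To make the four operations concrete, here is an illustrative sketch, not Tatiana's actual API, of a replayable supporting synchronisation, transformation, enrichment and visualisation; every name in it is an assumption:

```python
# Illustrative replayable supporting the four operations named above.
from dataclasses import dataclass, field

@dataclass
class Replayable:
    events: list                          # (timestamp_seconds, payload) pairs
    annotations: dict = field(default_factory=dict)

    def synchronise(self, offset: float) -> "Replayable":
        """Shift all timestamps, e.g. to align with another data source."""
        return Replayable([(t + offset, p) for t, p in self.events],
                          {t + offset: n for t, n in self.annotations.items()})

    def transform(self, keep) -> "Replayable":
        """Derive a new replayable by filtering events."""
        return Replayable([(t, p) for t, p in self.events if keep(p)],
                          dict(self.annotations))

    def enrich(self, t: float, note: str) -> None:
        """Attach an analyst's interpretation to a moment in the replay."""
        self.annotations[t] = note

    def visualise(self) -> None:
        for t, p in sorted(self.events):
            print(f"{t:7.1f}s  {p}  {self.annotations.get(t, '')}")

chat = Replayable([(12.0, "A: hello"), (15.5, "B: hi")])
chat.enrich(15.5, "greeting returned")
chat.synchronise(2.0).visualise()
```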

    On Enhancing Security of Password-Based Authentication

    Get PDF
    The password has been the dominant authentication scheme for more than 30 years, and it will not be easily replaced in the foreseeable future. However, password authentication has long been plagued by the dilemma between security and usability, mainly due to human memory limitations. For example, a user often chooses an easy-to-guess (weak) password because it is easier to remember. The ever-increasing number of online accounts per user exacerbates this problem further. In this dissertation, we present four research projects that focus on the security of password authentication and its ecosystem. First, we observe that personal information plays a very important role when a user creates a password. Motivated by this, we conduct a study of how users create their passwords from their personal information, based on a leaked password dataset. We create a new metric, Coverage, to quantify the personal information in passwords. Armed with this knowledge, we develop a novel password cracker named Personal-PCFG (Probabilistic Context-Free Grammars) that leverages personal information for targeted password guessing. Experiments show that Personal-PCFG is much more efficient than the original PCFG in cracking passwords. The second project aims to ease the password management hassle for users. Password managers are introduced so that users need only one password (the master password) to access all their other passwords. However, a password manager introduces a single point of failure and is potentially vulnerable to data breaches. To address these issues, we propose BluePass, a decentralized password manager that features dual-possession security involving a master password and a mobile device. In addition, BluePass enables a hands-free user experience by retrieving passwords from the mobile device over Bluetooth. In the third project, we investigate an overlooked aspect of the password lifecycle: the password recovery procedure. We study the password recovery protocols of the Alexa top 500 websites, and report interesting findings on the de facto implementations. We observe that the backup email address is the primary means of password recovery, and that this email account becomes a single point of failure. We assess the likelihood of an account recovery attack, analyze the security policies of major email providers, and propose a security enhancement protocol to help secure password recovery emails through two-factor authentication. Finally, we focus on a more fundamental level: user identity. Password-based authentication is just a one-time check that a user is legitimate. However, a user's identity could be hijacked at any step; for example, an attacker can leverage a zero-day vulnerability to take over root privileges. Thus, tracking user behavior is essential for examining identity legitimacy. We develop a user tracking system based on OS-level logs inside an enterprise network, and apply a variety of techniques to generate a concise and salient user profile for identity examination.
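    The Coverage idea can be illustrated with a small sketch: score the fraction of a password's characters that can be matched by substrings of the user's personal information. This is an assumed formulation for illustration only; the dissertation's actual metric may be defined differently:

```python
# Hypothetical Coverage-style metric: fraction of password characters
# matched by substrings (length >= 2) of the user's personal information.
def coverage(password: str, personal_info: list) -> float:
    pw = password.lower()
    matched = [False] * len(pw)
    for raw in personal_info:
        f = raw.lower()
        for i in range(len(f)):
            for j in range(i + 2, len(f) + 1):
                start = pw.find(f[i:j])
                while start != -1:
                    for k in range(start, start + (j - i)):
                        matched[k] = True
                    start = pw.find(f[i:j], start + 1)
    return sum(matched) / len(pw) if pw else 0.0

# A password built from a name and birth year scores high:
print(coverage("alice1990!", ["Alice Smith", "1990-05-17"]))  # 0.9
```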

    How online small groups co-construct mathematical artifacts to do collaborative problem solving

    Get PDF
    Developing pedagogies and instructional tools to support learning math with understanding is a major goal in math education. A common theme among various characterizations of mathematical understanding involves constructing relations among mathematical facts, procedures, and ideas encapsulated in graphical and symbolic artifacts. Discourse is key for enabling students to realize such connections among seemingly unrelated mathematical artifacts. Analysis of mathematical discourse on a moment-to-moment basis is needed to understand the potential of small-group collaboration and online communication tools to support learning math with understanding. This dissertation investigates interactional practices enacted by virtual teams of secondary students as they co-construct mathematical artifacts in an online environment with multiple interaction spaces, including text-chat, whiteboard, and wiki components. The findings of the dissertation, arrived at through ethnomethodologically informed case studies of online sessions, are organized along three dimensions:

    (a) Mathematical Affordances: Whiteboard and chat spaces allow teams to co-construct multiple realizations of relevant mathematical artifacts. Contributions remain persistently available for subsequent manipulation and reference in the shared visual field. The persistence of contributions facilitates the management of multiple threads of activity across dual media. The sequence of actions that leads to the construction and modification of shared inscriptions makes the visual reasoning process visible.

    (b) Coordination Methods: Team members achieve a sense of sequential organization across dual media through temporal coordination of their chat postings and drawings. Groups enact referential uses of available features to allocate their attention to specific objects in the shared visual field and to associate them with locally defined terminology. Drawings and text messages are used together as semiotic resources in mutually elaborating ways.

    (c) Group Understanding: Teams develop shared mathematical understanding through joint recognition of connections among narrative, graphical and symbolic realizations of the mathematical artifacts that they have co-constructed to address their shared task. The interactional organization of the co-construction work establishes an indexical ground as support for the creation and maintenance of a shared problem space for the group. Each new contribution is made sense of in relation to this persistently available and shared indexical ground, which evolves sequentially as new contributions modify the sense of previous contributions.

    Ph.D., Information Science and Technology -- Drexel University, 200

    openHTML: Assessing Barriers and Designing Tools for Learning Web Development

    Get PDF
    In this dissertation, I argue that society increasingly recognizes the value of widespread computational literacy and that one of the most common ways people are exposed to creative computing today is through web development. Prior research has investigated how beginners learn a wide range of programming languages in a variety of domains, from computer science majors taking introductory programming courses to end-user developers maintaining spreadsheets. Yet surprisingly little is known about the experiences people have learning web development. What barriers do beginners face when authoring their first web pages? What mistakes do they commonly make when writing HTML and CSS? What are the computational skills and concepts with which they engage? How can tools and practices be designed to support these activities? Through a series of studies, interleaved with the iterative design of an experimental web editor for novices called openHTML, this dissertation aims to fill this gap in the literature and address these questions. In drawing connections between my findings and the existing computing education literature, my goal is to attain a deeper understanding of the skills and concepts at play when beginners learn web development, and to broaden notions about how people can develop computational literacy. This dissertation makes the following contributions:

    * An account of the barriers students face in an introductory web development course, contextualizing difficulties with learning to read and write code within the broad activity of web development.
    * The implementation of a web editor called openHTML, which has been designed to support learners by mitigating non-coding aspects of web development so that they can attend to learning HTML and CSS.
    * A detailed taxonomy of errors people make when writing HTML and CSS to construct simple web pages, derived from an intention-based analysis.
    * A fine-grained analysis of the HTML and CSS syntax errors students make in the initial weeks of a web development course, how they resolve them, and the role validation plays in these outcomes.
    * Evidence for basic web development as a rich activity involving numerous skills and concepts that can support foundational computational literacy.

    Ph.D., Information Studies -- Drexel University, 201
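    As a toy illustration of the kind of syntax error such a taxonomy covers, and of what validation can catch, the sketch below flags tags a beginner opened but never closed; it is not openHTML's checker, and real validators are far more thorough:

```python
# Toy unclosed-tag checker; illustrative only, not openHTML's validator.
from html.parser import HTMLParser

VOID = {"br", "img", "hr", "meta", "link", "input"}  # no closing tag needed

class UnclosedTagChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.unclosed = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            while self.stack:                 # anything above the match
                top = self.stack.pop()        # was left unclosed
                if top == tag:
                    break
                self.unclosed.append(top)

checker = UnclosedTagChecker()
checker.feed("<ul><li>one<li>two</ul>")       # beginner forgot </li>
print("unclosed:", checker.unclosed + checker.stack)  # ['li', 'li']
```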

    Troubles of understanding in virtual math teams

    Get PDF
    When groups engage in math problem solving in an online environment like the VMT (Virtual Math Teams) service, they can face significant challenges from troubles of individual and group understanding that emerge in their problem-solving process. We are interested in how shared understanding is interactionally constructed and accomplished in a collectivity engaged in mathematical reasoning and problem solving in the VMT environment when understanding troubles, or differences of understanding between members, arise. From our analyses of chat interactions of such collectivities, we have come to see that it is by attending to, managing, and resolving troubles of understanding that shared understanding is achieved. This dissertation investigates the practices by which participants introduce and present such troubles of understanding, and how these problems are managed and dealt with by members of the collectivity. In particular, by analyzing episodes of interaction of VMT groups, we document the interactional methods employed by participants to initiate and constitute their troubles as such, and we explicate the procedures by which those troubles are addressed.

    Ph.D., Information Science -- Drexel University, 201

    Conformance checking and diagnosis in process mining

    Get PDF
    In the last decades, the capability of information systems to generate and record overwhelming amounts of event data has experienced exponential growth in several domains, in particular in industrial scenarios. Devices connected to the internet (the internet of things), social interaction, mobile computing, and cloud computing provide new sources of event data, and this trend will continue in the coming decades. The omnipresence of large amounts of event data stored in logs is an important enabler for process mining, a novel discipline for addressing challenges related to business process management, process modeling, and business intelligence. Process mining techniques can be used to discover, analyze and improve real processes by extracting models from observed behavior. The capability of these models to represent reality determines the quality of the results obtained from them, and hence their usefulness. Conformance checking is the aim of this thesis: modeled and observed behavior are analyzed to determine whether a model is a faithful representation of the behavior observed in the log. Most efforts in conformance checking have focused on measuring and ensuring that models capture all the behavior in the log, i.e., fitness. Other properties, such as ensuring a precise model (one not including unnecessary behavior), have been disregarded. The first part of the thesis focuses on analyzing and measuring the precision dimension of conformance, where models describing reality precisely are preferred to overly general models. The thesis includes a novel technique based on detecting escaping arcs, i.e., points where the modeled behavior deviates from the behavior reflected in the log. The detected escaping arcs are used to determine, in terms of a metric, the precision between log and model, and to locate possible points of intervention for achieving a more precise model. The thesis also presents a confidence interval on the provided precision metric, and a multi-factor measure to assess the severity of the detected imprecisions. Checking conformance can be time-consuming in real-life scenarios, and understanding the reasons behind conformance mismatches can be an effort-demanding task. The second part of the thesis shifts the focus from the precision dimension to the fitness dimension, and proposes the use of decomposition techniques to aid in checking and diagnosing fitness. The proposed approach is based on decomposing the model into single-entry single-exit components. The resulting fragments represent subprocesses within the main process with a simple interface to the rest of the model. Fitness checking per component provides well-localized conformance information, aiding the diagnosis of the causes behind the problems. Moreover, the relations between components can be exploited to improve the diagnostic capabilities of the analysis, identifying areas with a high degree of mismatches, or providing a hierarchy for a zoom-in/zoom-out analysis. Finally, the thesis proposes two main applications of the decomposed approach. First, the proposed theory is extended to incorporate data information for fitness checking in a decomposed manner. Second, a real-time event-based framework is presented for monitoring fitness.
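    The escaping-arc idea lends itself to a compact sketch. The version below, a simplification under assumed interfaces (the real metric weights states by frequency and comes with the confidence interval mentioned above), builds the prefixes observed in the log and counts the next-activities the model allows but the log never takes:

```python
# Simplified escaping-arc precision sketch; interfaces are assumptions.
from collections import defaultdict

def escaping_arc_precision(log, model_enabled):
    """log: list of traces (tuples of activities).
    model_enabled(prefix): set of activities the model allows next
    (an assumed oracle, e.g. obtained by replaying a Petri net)."""
    observed_next = defaultdict(set)
    for trace in log:
        for i in range(len(trace)):
            observed_next[trace[:i]].add(trace[i])
    allowed_total = escaping_total = 0
    for prefix, seen in observed_next.items():
        allowed = model_enabled(prefix)
        allowed_total += len(allowed)
        escaping_total += len(allowed - seen)   # modeled, never observed
    return 1 - escaping_total / allowed_total if allowed_total else 1.0

# The model allows b or c after a, but the log only ever takes b, so the
# arc to c escapes and precision drops below 1.
log = [("a", "b"), ("a", "b")]
model = lambda p: {"a"} if p == () else ({"b", "c"} if p == ("a",) else set())
print(escaping_arc_precision(log, model))  # 0.666...
```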

    A generic approach to the evolution of interaction in ubiquitous systems

    Get PDF
    This dissertation addresses the challenge of configuring modern (ubiquitous, context-sensitive, mobile, etc.) interactive systems where it is difficult or impossible to predict (i) the resources available for evolution, (ii) the criteria for judging the success of the evolution, and (iii) the degree to which human judgements must be involved in the evaluation process used to determine the configuration. In this thesis a conceptual model of interactive system configuration over time (known as interaction evolution) is presented which relies upon the following steps: (i) identification of opportunities for change in a system, (ii) reflection on the available configuration alternatives, (iii) decision-making, and (iv) implementation, followed by iteration of the process. This conceptual model underpins the development of a dynamic evolution environment based on a notion of configuration evaluation functions (hereafter referred to as evaluation functions) that provides greater flexibility than current solutions and, when supported by appropriate tools, can provide a richer set of evaluation techniques and features that are difficult or impossible to implement in current systems. Specifically, this approach supports changes to the approach, style or mode of use employed for configuration; these features may result in more effective systems, less effort to configure them, and a greater degree of control offered to the user. The contributions of this work include: (i) establishing the need for configuration evolution through a literature review and a motivating case study experiment, (ii) development of a conceptual process model supporting interaction evolution, (iii) development of a model based on the notion of evaluation functions which is shown to support a wide range of interaction configuration approaches, (iv) a characterisation of the configuration evaluation space, followed by (v) an implementation of these ideas used in (vi) a series of longitudinal technology probes and investigations into the approaches.
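    A minimal sketch of what an evaluation function might look like follows; every name and weighting in it is an assumption, chosen only to show how automatic criteria and a human judgement can be combined to rank candidate configurations:

```python
# Assumed sketch of a configuration evaluation function; names illustrative.
from dataclasses import dataclass

@dataclass
class Configuration:
    sensor_rate_hz: float    # how often context sensors are polled
    battery_cost: float      # estimated drain, 0..1

def evaluation_function(cfg: Configuration, user_rating: float) -> float:
    """Blend automatic criteria with a human judgement (user_rating in 0..1),
    reflecting the point that evaluation may need people in the loop."""
    responsiveness = min(cfg.sensor_rate_hz / 10.0, 1.0)
    return 0.4 * responsiveness + 0.3 * (1 - cfg.battery_cost) + 0.3 * user_rating

candidates = [Configuration(10.0, 0.6), Configuration(2.0, 0.2)]
best = max(candidates, key=lambda c: evaluation_function(c, user_rating=0.8))
print(best)   # the configuration the evolution environment would adopt
```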

    Auto-tuning compiler options for HPC

    Get PDF