
    Improving User Involvement Through Live Collaborative Creation

    Creating an artifact - such as writing a book, developing software, or performing a piece of music - is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. This work explores how computational systems can facilitate collaboration, communication, and participation in the context of involving users in the process of creating artifacts while mitigating the challenges inherent to such processes. In particular, the interactive systems presented in this work support live collaborative creation, in which artifact users collaboratively participate in the artifact creation process with creators in real time. In the systems that I have created, I explored liveness, the extent to which the process of creating artifacts and the state of the artifacts are immediately and continuously perceptible, for applications such as programming, writing, music performance, and UI design. Liveness helps preserve natural expressivity, supports real-time communication, and facilitates participation in the creative process. Live collaboration is beneficial for users and creators alike: making the process of creation visible encourages users to engage in the process and better understand the final artifact. Additionally, creators can receive immediate feedback in a continuous, closed loop with users. Through these interactive systems, non-expert participants help create such artifacts as GUI prototypes, software, and musical performances. 
This dissertation explores three topics: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for reducing the barriers to entry for live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts and affording users access to information traditionally only available in real-time processes. In this work, I showed that enabling collaborative, expressive, and live interactions in computational systems allows the broader population to take part in various creative practices.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145810/1/snaglee_1.pd

    Analysis of web3 solution development principles

    In this master's thesis, we researched the principles of Web3 solution development. We studied blockchain and blockchain-related technology, as well as the development of the Web, including all its versions and the differences between them. We presented popular technologies for Web3 development and the most common Web3 solutions, with examples. With the help of a systematic literature review, we explored state-of-the-art technologies for Web3 solution development and proposed a full stack for Web3. In the final part, we implemented a proof-of-concept Ethereum decentralized application and compared it with an equivalent Web2 application. As future work, we propose researching other popular blockchain protocols such as Solana or Polygon.
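The hash-linked block structure at the core of the blockchain technology studied here can be illustrated with a minimal sketch. The field names and payloads below are hypothetical, chosen for illustration rather than taken from the thesis's implementation:

```python
import hashlib
import json

def block_hash(block):
    # Deterministic SHA-256 hash over a canonical JSON encoding of the block.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(payloads):
    # Each block stores the hash of its predecessor, forming the chain.
    chain, prev = [], "0" * 64
    for i, data in enumerate(payloads):
        block = {"index": i, "prev_hash": prev, "data": data}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify(chain):
    # The chain is valid only if every stored prev_hash matches the
    # recomputed hash of the preceding block.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["genesis", "tx: A->B 5", "tx: B->C 2"])
```

Tampering with any block's data invalidates every later link, which is the tamper-evidence property that decentralized applications build on.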

    Effective Natural Language Interfaces for Data Visualization Tools

    How many Covid cases and deaths are there in my hometown? How much money was invested into renewable energy projects across states in the last 5 years? How large was the biggest investment in solar energy projects in the previous year? These questions and others are of interest to users and can often be answered by data visualization tools (e.g., COVID-19 dashboards) provided by governmental organizations or other institutions. However, while users with limited expertise with data visualization tools, whether in organizations or in private life (hereafter referred to as end users), are also interested in these topics, they do not necessarily know how to use these tools effectively to answer such questions. This challenge is highlighted by previous research, which provided evidence that while business analysts and other experts can use these data visualization tools effectively, end users are still impeded in their interactions. One approach to tackling this problem is natural language interfaces (NLIs), which provide end users with a more intuitive way of interacting with data visualization tools. End users would be able to interact with the tool both by utilizing the graphical user interface (GUI) elements and by simply typing or speaking natural language (NL) input. While NLIs for data visualization tools have been regarded as a promising approach to improving this interaction, two design challenges remain. First, existing NLIs for data visualization tools still target users who are familiar with the technology, such as business analysts. Consequently, the unique design that addresses end users' specific characteristics and would enable them to use data visualization tools effectively is missing from existing NLIs.
Second, developers of NLIs for data visualization tools cannot foresee all NL inputs and tasks that end users want to perform with these interfaces. Consequently, errors still occur in current NLIs for data visualization tools. End users therefore need to be enabled to continuously improve and personalize the NLI themselves by addressing these errors. However, only limited work exists that focuses on enabling end users to teach NLIs for data visualization tools how to correctly respond to new NL inputs. This thesis addresses these design challenges and provides insights into the related research questions. Furthermore, this thesis contributes prescriptive knowledge on how to design effective NLIs for data visualization tools. Specifically, it provides insights into how data visualization tools can be extended through NLIs to improve their effective use by end users, and how end users can be enabled to effectively teach NLIs to respond to new NL inputs. Furthermore, this thesis provides high-level guidance that developers and providers of data visualization tools can use as a blueprint for developing data visualization tools with NLIs for end users, and outlines future research opportunities for supporting end users in effectively using data visualization tools.

    Design of a backend system to integrate health information systems – case study: ministry of health and social services (MoHSS)-Namibia

    Information systems are key to institutional organization and decision making. In the health care field there is a great deal of data flow, from patient demographic information (through electronic medical records), pharmaceutical data (the methods by which a patient's medication is dispensed), and laboratory data, to hospital organization information such as bed allocation. A healthcare information system is a system that manages, stores, transmits, and displays healthcare data. Most healthcare data in Namibia is unstructured, and different health information systems are distributed across different departments in a heterogeneous environment [1][2]. A lot of data is generated but never used in decision making due to this fragmentation. Integrating these systems would channel this big data into a centralized database. With information technology and new-generation networks becoming called-for innovations in everyday operations, accessing big data through information applications and systems in an integrated way will facilitate practical work in health care. The aim of this dissertation is to find a way in which these vertical Health Information Systems can be integrated into a unified system. A prototype of a back-end system is used to illustrate how the healthcare systems currently in place at the Ministry of Health and Social Services facilities in Namibia can be integrated to promote more unified system usage. The system uses prototypes of subsystems that represent the current systems to illustrate how they operate and, in the end, how the integration can improve service delivery in the ministry. The proposed system is expected to benefit the ministry in its daily operations, as it enables instant authorized access to data without passing through middlemen. It will improve and preserve data integrity by eliminating multiple handling of data through a single data entry point.
With one entry point to the systems, manual work will be reduced, thereby also reducing cost. Overall, it will ensure efficiency and increase the quality of service provided.

    Tackling the Challenges of Information Security Incident Reporting: A Decentralized Approach

    Information security incident under-reporting is unambiguously a business problem, as identified by a variety of sources such as ENISA (2012), Symantec (2016), and Newman (2018). This research project identified the underlying issues that cause this problem and proposed a solution, in the form of an innovative artefact, which confronts a number of these issues. The project was conducted according to the requirements of the Design Science Research Methodology (DSRM) of Peffers et al. (2007). The research question set at the beginning of the project probed the feasibility of an incident reporting solution that would increase users' motivation to report incidents by utilizing the positive features offered by existing solutions on the one hand, while providing added value to users on the other. The comprehensive literature review chapter set the stage and identified the reasons for incident under-reporting, while also evaluating the existing solutions and determining their advantages and disadvantages. The objectives of the proposed artefact were then set, and the artefact was designed and developed. The output of this development endeavour is “IRDA”, the first decentralized incident reporting application (DApp), built on “Quorum”, a permissioned blockchain implementation of Ethereum. Its effectiveness was demonstrated when six organizations agreed to use the developed artefact and performed a series of pre-defined actions in order to confirm the platform’s intended functionality. The platform was also evaluated using Venable et al.’s (2012) evaluation framework for DSR projects. This research project contributes to knowledge in various ways. It investigates blockchain and incident reporting, two domains that have not been extensively examined and for which the available literature is rather limited.
Furthermore, it identifies, compares, and evaluates the conventional reporting platforms available to date. In line with previous findings (e.g., Humphrey, 2017), it also confirms the lack of standard taxonomies for information security incidents. This work also contributes by creating a functional, practical artefact in the blockchain domain, a domain in which, according to Taylor et al. (2019), most studies are either experimental proposals or theoretical concepts with limited practicality in solving real-world problems. Through the evaluation activity, and by conducting a series of non-parametric significance tests, it also suggests that IRDA can potentially increase users' motivation to report incidents. This thesis describes an original attempt to utilize the newly emergent blockchain technology, and its inherent characteristics, to address the concerns that actively contribute to the business problem. To the best of the researcher’s knowledge, there is currently no other solution offering similar benefits to users and organizations for incident reporting purposes. Through the accomplishment of the project’s pre-set objectives, the developed artefact provides a positive answer to the research question. The artefact, featuring increased anonymity, availability, immutability, and transparency, as well as an overall lower cost, has the potential to increase organizations' motivation to report incidents, thus improving the currently dismal statistics of incident under-reporting. The structure of this document follows the flow of activities described in the DSRM of Peffers et al. (2007), while also borrowing some elements from the nominal structure of an empirical research process, including the literature review chapter, the description of the selected research methodology, and the “discussion and conclusion” chapter.

    Algorizmi: A Configurable Virtual Testbed to Generate Datasets for Offline Evaluation of Intrusion Detection Systems

    Intrusion detection systems (IDSes) are an important security measure that network administrators adopt to defend computer networks against malicious attacks and intrusions. The field of IDS research includes many challenges; however, one open problem remains orthogonal to the others: IDS evaluation. In other words, researchers have not yet agreed on a general systematic methodology and/or a set of metrics to fairly evaluate different IDS algorithms. This leads to another problem: the lack of an appropriate IDS evaluation dataset that satisfies common research needs. One major contribution in this area is the DARPA dataset offered by the Massachusetts Institute of Technology Lincoln Lab (MIT/LL), which has been extensively used to evaluate a number of IDS algorithms proposed in the literature. Nevertheless, the DARPA dataset has received much criticism concerning the way it was designed, especially its obsolescence and inability to incorporate new sorts of network attacks. In this thesis, we survey previous research projects that attempted to provide a system for offline IDS evaluation. From the survey, we identify a set of design requirements for such a system based on the research community's needs. We then propose Algorizmi, an open-source, configurable virtual testbed for generating datasets for offline IDS evaluation. We provide an architectural overview of Algorizmi and its software and hardware components. Algorizmi provides its users with tools that allow them to create their own experimental testbeds using the concepts of virtualization and cloud computing. Algorizmi users can configure the virtual machine instances running in their experiments, select what background traffic those instances will generate, and choose what attacks will be launched against them.
At any point in time, an Algorizmi user can generate a dataset (a network traffic trace) for any of her experiments, so that she can use this dataset afterwards to evaluate an IDS the same way the DARPA dataset is used. Our analysis shows that Algorizmi satisfies more requirements than previous research projects targeting the same problem of generating datasets for offline IDS evaluation. Finally, we demonstrate the utility of Algorizmi by building a sample network of machines and generating both background and attack traffic within that network. We then download a snapshot of the dataset for that experiment and run it against the Snort IDS. Snort successfully detected the attacks we launched against the sample network. Additionally, we evaluate the performance of Algorizmi while processing some common operations of a typical user based on five metrics: CPU time, CPU usage, memory usage, network traffic sent/received, and execution time.
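The style of per-operation measurement described above can be sketched for two of the five metrics (CPU time and execution time) using Python's standard library. This is a generic illustration under stated assumptions, not Algorizmi's actual instrumentation, and the workload is hypothetical:

```python
import resource
import time

def measure(workload):
    # Run a callable and report wall-clock execution time, total CPU time
    # (user + system), and peak resident set size for this process.
    wall_start = time.perf_counter()
    usage_start = resource.getrusage(resource.RUSAGE_SELF)
    workload()
    wall_end = time.perf_counter()
    usage_end = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "execution_time_s": wall_end - wall_start,
        "cpu_time_s": (usage_end.ru_utime - usage_start.ru_utime)
                      + (usage_end.ru_stime - usage_start.ru_stime),
        "peak_rss": usage_end.ru_maxrss,  # kilobytes on Linux, bytes on macOS
    }

# Hypothetical stand-in for a testbed operation such as generating a trace.
metrics = measure(lambda: sum(i * i for i in range(200_000)))
```

The `resource` module is POSIX-only; measuring CPU usage percentage, memory over time, and network traffic would require additional OS-level counters beyond this sketch.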