
    Using the Pattern-of-Life in Networks to Improve the Effectiveness of Intrusion Detection Systems

    As the complexity of cyber-attacks keeps increasing, new and more robust detection mechanisms need to be developed. The next generation of Intrusion Detection Systems (IDSs) should be able to adapt their detection characteristics based not only on the measurable network traffic, but also on the available high-level information related to the protected network, in order to improve their detection results. We make use of the Pattern-of-Life (PoL) of a network as the main source of high-level information, which is correlated with the time of day and the usage of the network resources. We propose the use of a Fuzzy Cognitive Map (FCM) to incorporate the PoL into the detection process. The main aim of this work is to evidence the improved detection performance of an IDS that uses an FCM to leverage network-related contextual information. The results we present verify that the proposed method improves the effectiveness of our IDS by reducing the total number of false alarms, providing an improvement of 9.68% when all the considered metrics are combined, and a peak improvement of up to 35.64%, depending on the particular metric combination.
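    The core of an FCM is an iterated update: each concept's next activation is a squashed weighted sum of the current activations. The sketch below illustrates that mechanism only; the concepts and weights are hypothetical stand-ins for PoL factors, not values from the paper.

    ```python
    import math

    def fcm_step(activations, weights, lam=1.0):
        """One Fuzzy Cognitive Map update: each concept's next activation is
        a sigmoid-squashed weighted sum of the current activations."""
        n = len(activations)
        return [
            1.0 / (1.0 + math.exp(-lam * sum(weights[i][j] * activations[i]
                                             for i in range(n))))
            for j in range(n)
        ]

    # Hypothetical concepts (indices): 0 = off-hours time-of-day,
    # 1 = network resource usage, 2 = alert confidence.
    W = [
        [0.0, 0.0, -0.4],   # off-hours suppresses alert confidence
        [0.0, 0.0,  0.7],   # heavy usage reinforces alert confidence
        [0.0, 0.0,  0.0],
    ]
    a = [0.2, 0.9, 0.5]
    for _ in range(10):      # iterate the map toward a fixed point
        a = fcm_step(a, W)
    ```

    In an IDS setting, the converged activation of the alert-confidence concept could then modulate the alarm threshold, which is one plausible way contextual PoL information reduces false alarms.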

    Finding a New Voice: Transitioning Designers from GUI to VUI Design

    As Voice User Interfaces (VUIs) become widely popular, designers must handle new usability challenges. However, compared to other established domains such as Graphical User Interfaces (GUIs), VUI designers have fewer resources (training support, usability heuristics, design patterns) to guide them. On the other hand, GUI-trained designers may also be called upon to design VUIs, given the increased demand for such interfaces. This raises the question: how can we best support such designers as they transition from GUI to VUI design? To answer this, we focus on usability heuristics as a key resource, and conduct several workshops with GUI design experts, exploring how they map their design experience onto VUI design. Based on this, we suggest that the “path of least resistance” for transitioning designers from GUI to VUI may be the adaptation of familiar resources and concepts (such as GUI heuristics) to the VUI design space, rather than the imposition of novel VUI-specific heuristics on GUI-trained designers. This finding can inform the development of design resources that can support the increased demand for VUIs.

    StateLens: A Reverse Engineering Solution for Making Existing Dynamic Touchscreens Accessible

    Blind people frequently encounter inaccessible dynamic touchscreens in their everyday lives that are difficult, frustrating, and often impossible to use independently. Touchscreens are often the only way to control everything from coffee machines and payment terminals, to subway ticket machines and in-flight entertainment systems. Interacting with dynamic touchscreens is difficult non-visually because the visual user interfaces change, interactions often occur over multiple different screens, and it is easy to accidentally trigger interface actions while exploring the screen. To solve these problems, we introduce StateLens - a three-part reverse engineering solution that makes existing dynamic touchscreens accessible. First, StateLens reverse engineers the underlying state diagrams of existing interfaces from point-of-view videos, found online or taken by users, using a hybrid crowd-computer vision pipeline. Second, using the state diagrams, StateLens automatically generates conversational agents to guide blind users through specifying the tasks that the interface can perform, allowing the StateLens iOS application to provide interactive guidance and feedback so that blind users can access the interface. Finally, a set of 3D-printed accessories enable blind people to explore capacitive touchscreens without the risk of triggering accidental touches on the interface. Our technical evaluation shows that StateLens can accurately reconstruct interfaces from stationary, hand-held, and web videos; and, a user study of the complete system demonstrates that StateLens successfully enables blind users to access otherwise inaccessible dynamic touchscreens.
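    A recovered state diagram is essentially a labeled graph: screens as nodes, touch targets as edges. Once such a graph exists, guiding a user to a goal screen reduces to shortest-path search. The toy diagram and field names below are illustrative assumptions, not StateLens's actual data format.

    ```python
    from collections import deque

    # Hypothetical interface state diagram: screen -> {touch target: next screen}.
    STATE_DIAGRAM = {
        "home":        {"coffee": "size_select", "tea": "size_select"},
        "size_select": {"small": "confirm", "large": "confirm"},
        "confirm":     {"ok": "done", "back": "home"},
        "done":        {},
    }

    def guidance_path(diagram, start, goal):
        """Breadth-first search for the shortest sequence of touch actions
        from `start` to `goal` -- the kind of plan a conversational agent
        could read out to the user step by step."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, path = queue.popleft()
            if state == goal:
                return path
            for action, nxt in diagram[state].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [action]))
        return None

    print(guidance_path(STATE_DIAGRAM, "home", "done"))  # ['coffee', 'small', 'ok']
    ```

    Interactive guidance then amounts to tracking which node the user is on and announcing the next edge label in the plan.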

    HILC: Domain-Independent PbD System Via Computer Vision and Follow-Up Questions

    Creating automation scripts for tasks involving Graphical User Interface (GUI) interactions is hard. It is challenging because not all software applications allow access to a program’s internal state, nor do they all have accessibility APIs. Although much of the internal state is exposed to the user through the GUI, it is hard to programmatically operate the GUI’s widgets. To that end, we developed a system prototype that learns by demonstration, called HILC (Help, It Looks Confusing). Users, both programmers and non-programmers, train HILC to synthesize a task script by demonstrating the task. A demonstration produces the needed screenshots and their corresponding mouse-keyboard signals. After the demonstration, the user answers follow-up questions. We propose a user-in-the-loop framework that learns to generate scripts of actions performed on visible elements of graphical applications. Although pure programming by demonstration is still unrealistic due to a computer’s limited understanding of user intentions, we use quantitative and qualitative experiments to show that non-programming users are willing and effective at answering follow-up queries posed by our system, to help with confusing parts of the demonstrations. Our models of events and appearances are surprisingly simple but are combined effectively to cope with varying amounts of supervision. The best available baseline, Sikuli Slides, struggled to assist users in the majority of the tests in our user study experiments. The prototype with our proposed approach successfully helped users accomplish simple linear tasks, complicated tasks (monitoring, looping, and mixed), and tasks that span across multiple applications. Even when both systems could ultimately perform a task, ours was trained and refined by the user in less time.
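    A synthesized script of this kind can be thought of as a list of (visual anchor, input event) pairs that are replayed by locating each anchor on the current screen. The sketch below shows that replay loop in miniature; the `Step` structure, the string stand-ins for screenshot patches, and the toy `locate`/`perform` callbacks are all hypothetical, not HILC's actual representation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Step:
        template: str   # stand-in for a screenshot patch identifying the widget
        event: str      # recorded mouse/keyboard signal, e.g. "click"

    def run_script(script, locate, perform):
        """Replay demonstrated actions: find each widget on the current
        screen (e.g. via template matching), then replay its event there."""
        for step in script:
            pos = locate(step.template)
            if pos is None:
                raise RuntimeError(f"could not find {step.template!r}")
            perform(step.event, pos)

    # Toy "screen": widget name -> coordinates, in place of computer vision.
    screen = {"File menu": (10, 5), "Save button": (40, 5)}
    log = []
    run_script(
        [Step("File menu", "click"), Step("Save button", "click")],
        locate=screen.get,
        perform=lambda ev, pos: log.append((ev, pos)),
    )
    print(log)  # [('click', (10, 5)), ('click', (40, 5))]
    ```

    Follow-up questions would refine exactly the parts this sketch glosses over: which patch identifies the widget, and whether a step is linear, looped, or a monitoring trigger.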

    Design and creation of a virtual world of Petra, Jordan

    This thesis presents the design and creation of a 3D virtual world of Petra, Jordan, based on the digital spatial documentation of this UNESCO World Heritage Site by the Zamani project. Creating digital records of the spatial domain of heritage sites is a well-established practice that employs the technologies of laser scanning, GPS and traditional surveys, aerial and close-range photogrammetry, and 360-degree panorama photography to capture spatial data of a site. Processing this data to produce textured 3D models, sections, elevations, GISs, and panorama tours has led to the establishment of the field of virtual heritage. Applications to view this spatial data are considered too specialised to be used by the general public, with only trained heritage practitioners being able to use the data. Additionally, data viewing platforms have not been designed to allow for the viewing of combinations of 3D data in an intuitive and engaging manner, as currently each spatial data type must be viewed in independent software. Therefore, a fully integrated software platform is needed which would allow any interested person, without prior training, easy access to a combination of spatial data, from anywhere in the world. This study seeks to provide a solution to the above requirement by using a game engine to assimilate spatial data of heritage sites in a 3D virtual environment where a virtual visitor is able to interactively engage with combinations of spatial data. The study first begins with an analysis of what virtual heritage applications, in the form of virtual environments, have been created, and the elements that were used in their creation. These elements are then applied to the design and creation of the virtual world of Petra.

    Personalising Learning with Dynamic Prediction and Adaptation to Learning Styles in a Conversational Intelligent Tutoring System

    This thesis presents research that combines the benefits of intelligent tutoring systems (ITS), conversational agents (CA) and learning styles theory by constructing a novel conversational intelligent tutoring system (CITS) called Oscar. Oscar CITS aims to imitate a human tutor by implicitly predicting individuals’ learning style preferences and adapting its tutoring style to suit them during a tutoring conversation. ITS are computerised learning systems that intelligently personalise tutoring based on learner characteristics such as existing knowledge and learning style. ITS are traditionally student-led, hyperlink-based learning systems that adapt the presentation of learning resources by reordering or hiding links. Research suggests that students learn more effectively when instruction matches their learning style, which is typically modelled explicitly using questionnaires or implicitly based on behaviour. Learning is a social process and natural language interfaces to ITS, such as CAs, allow students to construct knowledge through discussion. Existing CITS adapt tutoring according to student knowledge, emotions and mood, however no CITS adapts to learning styles. Oscar CITS models a human tutor by directing a tutoring conversation and automatically detecting and adapting to an individual’s learning styles. Original methodologies and architectures were developed for constructing an Oscar Predictive CITS and an Oscar Adaptive CITS. Oscar Predictive CITS uses knowledge captured from a learning styles model to dynamically predict learning styles from an individual’s tutoring dialogue. Oscar Adaptive CITS applies a novel adaptation algorithm to select the best tutoring style for each tutorial question. The Oscar CITS methodologies and architectures are independent of the learning styles model and subject domain. 
Empirical studies involving real students have validated the prediction and adaptation of learning styles in a real-world teaching/learning environment. The results show that learning styles can be successfully predicted from a natural language tutoring dialogue, and that adapting the tutoring style significantly improves learning performance.
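The adaptation step can be pictured as a selection problem: given predicted learning-style strengths and the presentation styles available for a tutorial question, pick the best match. The sketch below is a deliberately simplified stand-in for Oscar's adaptation algorithm; the style names and scores are invented for illustration.

```python
def pick_style(preferences, available):
    """Select the available tutoring style with the highest predicted
    strength for this learner (ties resolved by list order).

    preferences: dict mapping style name -> predicted strength in [0, 1],
                 as inferred from the tutoring dialogue.
    available:   styles this tutorial question can be presented in."""
    return max(available, key=lambda s: preferences.get(s, 0.0))

# Hypothetical learner profile predicted from dialogue.
learner = {"visual": 0.8, "verbal": 0.3, "sequential": 0.6}
print(pick_style(learner, ["verbal", "visual"]))      # visual
print(pick_style(learner, ["verbal", "sequential"]))  # sequential
```

Because the selection only consumes a style-to-strength mapping, the same shape of algorithm stays independent of which learning-styles model produced the predictions, matching the thesis's model-independence claim.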

    Locating bugs without looking back

    Bug localisation is a core program comprehension task in software maintenance: given the observation of a bug, e.g. via a bug report, where is it located in the source code? Information retrieval (IR) approaches see the bug report as the query, and the source code files as the documents to be retrieved, ranked by relevance. Such approaches have the advantage of not requiring expensive static or dynamic analysis of the code. However, current state-of-the-art IR approaches rely on project history, in particular previously fixed bugs or previous versions of the source code. We present a novel approach that directly scores each current file against the given report, thus not requiring past code and reports. The scoring method is based on heuristics identified through manual inspection of a small sample of bug reports. We compare our approach to eight others, using their own five metrics on their own six open source projects. Out of 30 performance indicators, we improve 27 and equal 2. Over the projects analysed, on average we find one or more affected files in the top 10 ranked files for 76% of the bug reports. These results show the applicability of our approach to software projects without history.
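    The flavour of such history-free scoring can be sketched as lexical heuristics over the report and each file: shared identifier vocabulary (with camelCase split into words) plus a bonus when a file's name appears in the report. These two heuristics are illustrative assumptions, not the paper's actual scoring function.

    ```python
    import re
    from collections import Counter

    def tokens(text):
        """Extract lowercase word tokens, splitting camelCase identifiers."""
        words = []
        for part in re.findall(r"[A-Za-z]+", text):
            words += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", part)
        return [w.lower() for w in words]

    def score(report, file_text, file_name):
        """Heuristic relevance of one current file to the bug report."""
        shared = sum((Counter(tokens(report)) & Counter(tokens(file_text))).values())
        stem = file_name.rsplit(".", 1)[0].lower()
        name_hit = 2.0 if stem in report.lower() else 0.0
        return shared + name_hit

    report = "NullPointerException when saving settings in SettingsManager"
    files = {
        "SettingsManager.java": "class SettingsManager { void saveSettings() {} }",
        "Logger.java": "class Logger { void log(String msg) {} }",
    }
    ranked = sorted(files, key=lambda f: score(report, files[f], f), reverse=True)
    print(ranked[0])  # SettingsManager.java
    ```

    Because every signal is computed from the report and the current snapshot of the code, no fix history or prior versions are needed, which is the property the approach trades on.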

    How Do UX Practitioners Communicate AI as a Design Material? Artifacts, Conceptions, and Propositions

    UX practitioners (UXPs) face novel challenges when working with and communicating artificial intelligence (AI) as a design material. We explore how UXPs communicate AI concepts when given hands-on experience training and experimenting with AI models. To do so, we conducted a task-based design study with 27 UXPs in which they prototyped and created a design presentation for an AI-enabled interface while having access to a simple AI model training tool. Through analyzing UXPs' design presentations and post-activity interviews, we found that although UXPs struggled to clearly communicate some AI concepts, tinkering with AI broadened common ground when communicating with technical stakeholders. UXPs also identified key risks and benefits of AI in their designs, and proposed concrete next steps for both UX and AI work. We conclude with a sensitizing concept and recommendations for design and AI tools to enhance multi-stakeholder communication and collaboration when crafting human-centered AI experiences.