
    Goal-driven Command Recommendations for Analysts

    Data analytics software has become an integral part of the decision-making process of analysts. The users of these applications generate vast amounts of unstructured log data. These logs contain clues to the user's goals, which traditional recommender systems may find difficult to model implicitly from the log data alone. With this assumption, we aim to assist a user's analytics process through command recommendations. We categorize commands into software and data commands according to the purpose they serve in the task at hand. On the premise that the sequence of commands leading up to a data command is a good predictor of the latter, we design, develop, and validate various sequence modeling techniques. In this paper, we propose a framework that provides goal-driven data command recommendations to the user by leveraging unstructured logs. We use the log data of a web-based analytics software to train our neural network models and quantify their performance against relevant and competitive baselines. We propose a custom loss function that tailors the recommended data commands to goal information provided exogenously, and an evaluation metric that captures the degree of goal orientation of the recommendations. Through offline evaluation with the proposed metric, we demonstrate the promise of our approach and show that our models remain robust on adversarial examples, where the user activity is misaligned with the selected goal. Comment: 14th ACM Conference on Recommender Systems (RecSys 2020).
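    The abstract does not give the exact form of the custom loss, but the idea of biasing next-command prediction toward an exogenously supplied goal can be pictured with a small sketch. This is a minimal, hypothetical example assuming a PyTorch model, a fixed command vocabulary, a per-command goal-affinity score, and a penalty weight alpha; none of these names or choices come from the paper itself.

    import torch
    import torch.nn.functional as F

    def goal_aware_loss(logits, target, goal_affinity, alpha=0.5):
        """logits: (batch, n_commands) scores for the next data command.
        target: (batch,) index of the ground-truth command.
        goal_affinity: (batch, n_commands) values in [0, 1], how well each
            command serves the exogenously provided goal (assumed input).
        alpha: weight of the goal-alignment penalty (assumed hyperparameter)."""
        # Standard next-command prediction term.
        ce = F.cross_entropy(logits, target)
        # Penalise probability mass placed on commands that do not serve the goal.
        probs = torch.softmax(logits, dim=-1)
        misalignment = (probs * (1.0 - goal_affinity)).sum(dim=-1).mean()
        return ce + alpha * misalignment

    # Toy usage: batch of 2 users, 3 candidate commands.
    logits = torch.randn(2, 3)
    target = torch.tensor([0, 2])
    goal_affinity = torch.tensor([[1.0, 0.2, 0.0], [0.1, 0.3, 1.0]])
    print(goal_aware_loss(logits, target, goal_affinity))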

    Web Search, Web Tutorials & Software Applications: Characterizing and Supporting the Coordinated Use of Online Resources for Performing Work in Feature-Rich Software

    Web search and other online resources serve an integral role in how people learn and use feature-rich software (e.g., Adobe Photoshop) on a daily basis. Users depend on web resources both as a first line of technical support, and as a means for coping with system complexity. For example, people rely on web resources to learn new tasks, to troubleshoot problems, or to remind themselves of key task details. When users rely on web resources to support their work, their interactions are distributed over three user environments: (1) the search engine, (2) retrieved documents, and (3) the application's user interface. As users interact with these environments, their actions generate a rich set of signals that characterize how the population thinks about and uses software systems "in the wild," on a day-to-day basis. This dissertation presents three works that successively connect and associate signals and artifacts across these environments, thereby generating novel insights about users and their tasks, and enabling powerful new end-user tools and services. These three projects are as follows:

    Characterizing usability through search (CUTS): The CUTS system demonstrates that aggregate logs of web search queries can be leveraged to identify common tasks and potential usability problems faced by the users of any publicly available interactive system. For example, in 2011 I examined query data for the Firefox web browser. Automated analysis uncovered approximately 150 variations of the query "Firefox how to get the menu bar back", with queries issued once every 32 minutes on average. Notably, this analysis did not depend on direct access to query logs. Instead, query suggestion services and online advertising valuations were leveraged to approximate aggregate query data. Nevertheless, these data proved to be timely, to have a high degree of ecological validity, and to be arguably less prone to self-selection bias than data gathered via traditional usability methods.

    Query-feature graphs (QF-Graphs): Query-feature graphs are structures that map high-level descriptions of a user's goals to the specific features and commands relevant to achieving those goals in software. QF-graphs address an important instance of the more general vocabulary mismatch problem. For example, users of the GIMP photo manipulation software often want to "make a picture black and white", and fail to recognize the relevance of the applicable commands, which include "desaturate" and "channel mixer". The key insights for building QF-graphs are that: (1) queries concisely express the user's goal in the user's own words, and (2) retrieved tutorials likely include both query terms and terminology from the application's interface (e.g., the names of commands). QF-graphs are generated by mining these co-occurrences across thousands of query-tutorial pairings.

    InterTwine: InterTwine explores interaction possibilities that arise when software applications, web search, and online support materials are directly integrated into a single productivity system. With InterTwine, actions in the web browser directly impact how information is presented in a software application, and vice versa. For example, when a user opens a web tutorial in their browser, the application's menus and tooltips are updated to highlight the commands mentioned therein. These embellishments are designed to help users orient themselves after switching between the web browser and the application. InterTwine also augments web search results to include details of past application use. Search snippets gain before-and-after pictures and other metadata detailing how the user's personal work document evolved the last time they visited the page. This feature was motivated by the observation that existing mechanisms (e.g., highlighting visited links) are often insufficient for recalling which resources were previously helpful vs. unhelpful for accomplishing a task.

    Finally, the dissertation concludes with a discussion of the advantages, limitations and challenges of this research, and presents an outline for future work.
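    The QF-graph construction described above amounts to counting co-occurrences of query terms and interface terminology across query-tutorial pairs. Below is a minimal, illustrative sketch of that idea; the command vocabulary, the example tutorials, and the raw-count scoring are assumptions made for demonstration and are not the dissertation's actual pipeline.

    from collections import Counter

    # Assumed command vocabulary drawn from the application's interface.
    COMMANDS = {"desaturate", "channel mixer", "grayscale"}

    def build_qf_graph(query_tutorial_pairs):
        """Return {(query, command): count} for interface commands that
        appear in tutorials retrieved for each query."""
        edges = Counter()
        for query, tutorial in query_tutorial_pairs:
            text = tutorial.lower()
            for command in COMMANDS:
                if command in text:
                    edges[(query, command)] += 1
        return edges

    # Hypothetical query-tutorial pairs.
    pairs = [
        ("make a picture black and white",
         "Open Colors > Desaturate, or use the Channel Mixer for finer control."),
        ("make a picture black and white",
         "Image > Mode > Grayscale discards colour information entirely."),
    ]
    for (query, command), count in build_qf_graph(pairs).items():
        print(f"{query!r} -> {command!r}: {count}")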

    Architecture-centric support for security orchestration and automation

    Security Orchestration, Automation and Response (SOAR) platforms leverage integration and orchestration technologies to (i) automate manual, repetitive, labor-intensive tasks, (ii) provide a single panel of control to manage various types of security tools (e.g., intrusion detection systems, antivirus and firewalls) and (iii) streamline complex Incident Response Processes (IRPs). SOAR platforms increase the operational efficiency of overwhelmed security teams in a Security Operation Centre (SOC) and accelerate the SOC's defense and response capacity against ever-growing security incidents. Security tools, IRPs and security requirements form the underlying execution environment of SOAR platforms, and they change rapidly due to the dynamic nature of security threats. A SOAR platform is expected to adapt continuously to these changes, and flexible integration, interpretation and interoperability of security tools are essential to ease that adaptation. However, most of the effort in designing and developing existing SOAR platforms is ad hoc in nature, which introduces several engineering and research challenges. For instance, the advancement of a SOAR platform increases its architectural complexity and makes operating such platforms difficult for end-users. These challenges stem from the lack of a comprehensive view, design space and architectural support for SOAR platforms.

    This thesis contributes to the growing realization that SOAR platforms need to be advanced by designing, implementing and evaluating architecture-centric support that addresses several of the existing challenges. The envisioned research and development activities require identifying the current practices and challenges of SOAR platforms; hence, a Multivocal Literature Review (MLR) has been designed, conducted and reported. The MLR identifies the functional and non-functional requirements, components and practices of the security orchestration domain, along with the open issues. The thesis advances the domain by providing a layered architecture that considers the key functional and non-functional requirements of a SOAR platform. The proposed architecture is evaluated experimentally with a Proof of Concept (PoC) system, Security Tool Unifier (STUn), using seven security tools, a set of IRPs and playbooks. The research further identifies the need for, and presents the design of, (i) an Artificial Intelligence (AI) based integration framework to interpret the activities of security tools and enable interoperability automatically, (ii) a semantic-based automated integration process to integrate security tools and (iii) AI-enabled design and generation of a declarative API from user queries, namely DecOr, to hide the internal complexity of a SOAR platform from end-users. The experimental evaluation of the proposed approaches demonstrates that (i) considering architectural design decisions supports the development of a SOAR platform that is easy to interact with, modify and update, (ii) the AI-based integration framework and automated integration process provide effective and efficient integration and interpretation of security tools and IRPs and (iii) DecOr increases the usability and flexibility of a SOAR platform.

    This thesis is a useful resource and guideline for both practitioners and researchers working in the security orchestration domain. It provides insight into how an architecture-centric approach, incorporating AI technologies, reduces the operational complexity of SOAR platforms.

    Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
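    The flexible integration and interoperability goals described above can be pictured with a small sketch: declarative incident-response steps dispatched to pluggable tool adapters behind a common interface. The adapter classes, method names, and playbook format below are illustrative assumptions and are not taken from STUn, DecOr, or any real security tool's API.

    from typing import Protocol

    class ToolAdapter(Protocol):
        def run(self, action: str, **params) -> dict: ...

    class FirewallAdapter:
        def run(self, action: str, **params) -> dict:
            # A real adapter would call the firewall's API here.
            return {"tool": "firewall", "action": action, "params": params, "status": "ok"}

    class IDSAdapter:
        def run(self, action: str, **params) -> dict:
            return {"tool": "ids", "action": action, "params": params, "status": "ok"}

    ADAPTERS = {"firewall": FirewallAdapter(), "ids": IDSAdapter()}

    # Declarative incident-response steps (hypothetical playbook).
    PLAYBOOK = [
        {"tool": "ids", "action": "fetch_alert", "params": {"alert_id": "A-42"}},
        {"tool": "firewall", "action": "block_ip", "params": {"ip": "203.0.113.7"}},
    ]

    def execute(playbook):
        """Dispatch each declarative step to the adapter registered for its tool."""
        return [ADAPTERS[step["tool"]].run(step["action"], **step["params"]) for step in playbook]

    for result in execute(PLAYBOOK):
        print(result)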

    StateLens: A Reverse Engineering Solution for Making Existing Dynamic Touchscreens Accessible

    Blind people frequently encounter inaccessible dynamic touchscreens in their everyday lives that are difficult, frustrating, and often impossible to use independently. Touchscreens are often the only way to control everything from coffee machines and payment terminals to subway ticket machines and in-flight entertainment systems. Interacting with dynamic touchscreens is difficult non-visually because the visual user interfaces change, interactions often occur over multiple different screens, and it is easy to accidentally trigger interface actions while exploring the screen. To solve these problems, we introduce StateLens, a three-part reverse engineering solution that makes existing dynamic touchscreens accessible. First, StateLens reverse engineers the underlying state diagrams of existing interfaces from point-of-view videos found online or captured by users, using a hybrid crowd-computer vision pipeline. Second, using the state diagrams, StateLens automatically generates conversational agents that guide blind users through specifying the tasks the interface can perform, allowing the StateLens iOS application to provide interactive guidance and feedback so that blind users can access the interface. Finally, a set of 3D-printed accessories enables blind people to explore capacitive touchscreens without the risk of triggering accidental touches on the interface. Our technical evaluation shows that StateLens can accurately reconstruct interfaces from stationary, hand-held, and web videos; and a user study of the complete system demonstrates that StateLens successfully enables blind users to access otherwise inaccessible dynamic touchscreens. Comment: ACM UIST 2019.
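    The state diagrams StateLens recovers can be thought of as a graph whose nodes are screens and whose edges are the interactions that move between them. The sketch below shows that data structure built from observed screen/action traces; the screen identifiers, action names, and trace format are hypothetical and not the system's internals.

    from collections import defaultdict

    def build_state_graph(traces):
        """traces: list of [(screen_id, action), ...] sequences observed in videos.
        Returns an adjacency map {screen: {action: next_screen}} merged across traces."""
        graph = defaultdict(dict)
        for trace in traces:
            for (screen, action), (next_screen, _) in zip(trace, trace[1:]):
                graph[screen][action] = next_screen
        return dict(graph)

    # Hypothetical traces from two point-of-view videos of a coffee machine.
    traces = [
        [("home", "tap_coffee"), ("size_select", "tap_large"), ("confirm", None)],
        [("home", "tap_tea"), ("size_select", "tap_small"), ("confirm", None)],
    ]
    graph = build_state_graph(traces)
    # A guidance agent could then answer: from "home", which action reaches "size_select"?
    print(graph["home"])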

    Proceedings of the 12th International Conference on Digital Preservation

    Get PDF
    The 12th International Conference on Digital Preservation (iPRES) was held on November 2-6, 2015 in Chapel Hill, North Carolina, USA. There were 327 delegates from 22 countries. The program included 12 long papers, 15 short papers, 33 posters, 3 demos, 6 workshops, 3 tutorials and 5 panels, as well as several interactive sessions and a Digital Preservation Showcase.

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time.

    In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key properties of the quality of work at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model, architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider prior to taking any selection decisions. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) presence of the most up-to-date, verified cloud resource capabilities, (2) reputational evidence measured by neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation or cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures different dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties.

    We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring and adaptation schemes we have implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating automated workflow orchestration further show that our model is self-adapting and self-configuring, and that it reacts efficiently to changes while enforcing the QoS of workflows.
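    The three trust levels enumerated above naturally combine into a single score for ranking candidate providers. The following sketch shows one simple way to do that with a weighted aggregate; the 0-1 scales, the weights, and the provider names are illustrative assumptions rather than the thesis's actual trust model.

    def trust_score(capability, reputation, history, weights=(0.4, 0.3, 0.3)):
        """Each input is a score in [0, 1]:
        capability: verified, up-to-date cloud resource capabilities;
        reputation: evidence gathered from neighboring users;
        history: the consumer's own recorded experiences with the provider.
        Returns a weighted aggregate used to rank providers."""
        w_cap, w_rep, w_hist = weights
        return w_cap * capability + w_rep * reputation + w_hist * history

    # Hypothetical providers competing for a Big Data workflow.
    providers = {
        "cloud-a": trust_score(capability=0.9, reputation=0.7, history=0.8),
        "cloud-b": trust_score(capability=0.6, reputation=0.9, history=0.5),
    }
    best = max(providers, key=providers.get)
    print(best, providers[best])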

    Media of things: supporting the production and consumption of object-based media with the internet of things

    Ph.D. Thesis. Visual media consumption habits are in a constant state of flux; predicting which platforms and consumption mediums will succeed and which will fail is a fateful business. Virtual Reality and Augmented Reality could go the way of the 3D TVs that went before them, or they could push forward a new level of content immersion and radically change media production forever. Content producers are constantly trying to adapt to these shifts in habits and respond to new technologies. Smaller independent studios, buoyed by their new-found audience penetration through sites like YouTube and Facebook, can inherently respond to these emerging technologies faster, not weighed down by the "legacy" that larger broadcasters carry. Broadcasters such as the BBC are keen to evolve their content to respond to the challenges of this new world, producing content that is both more compelling in terms of immersion and more responsive to technological advances in terms of input and output mediums. This is where the concept of Object-based Broadcasting was born: content that is responsive to a user consuming it on a phone over a short period of time, while also providing an immersive multi-screen experience for a smart home environment.

    One of the primary barriers to the development of Object-based Media is the lack of a feasible set of mechanisms to generate supporting assets and adequately exploit the input and output mediums of the modern home. The underlying question here is how we build these experiences; we obviously cannot produce content for each of the thousands of combinations of devices and hardware available to us. I view this challenge to content makers as one of a distinct lack of descriptive and abstract detail at both ends of the production pipeline. In investigating the contribution that the Internet of Things may make to this space, I first look to create well-described assets in productions using embedded sensing, detecting non-visual actions and generating detail not possible from vision alone. I then look to exploit existing datasets from production and consumption environments to gain greater understanding of generated media assets and a means to coordinate input/output in the home. Finally, I investigate the opportunities for rich and expressive interaction with devices and content in the home, exploiting favourable characteristics of existing interfaces to construct a compelling control interface to Smart Home devices and Object-based experiences. I resolve that the Internet of Things is vital to the development of Object-based Broadcasting and its wider roll-out.

    British Broadcasting Corporation