4,694 research outputs found

    FraudDroid: Automated Ad Fraud Detection for Android Apps

    Although mobile ad fraud is widespread, state-of-the-art approaches in the literature have mainly focused on detecting so-called static placement frauds, where only a single UI state is involved and frauds can be identified based on static information such as the size or location of ad views. Other types of fraud exist that involve multiple UI states and are performed dynamically while users interact with the app. Such dynamic interaction frauds, although now widespread in apps, have not yet been explored or addressed in the literature. In this work, we investigate a wide range of mobile ad frauds to provide a comprehensive taxonomy to the research community. We then propose FraudDroid, a novel hybrid approach to detect ad frauds in mobile Android apps. FraudDroid analyses apps dynamically to build UI state transition graphs and collects the associated runtime network traffic, which are then checked against a set of heuristic-based rules to identify fraudulent ad behaviours. We show empirically that FraudDroid detects ad frauds with high precision (93%) and recall (92%). Experimental results further show that FraudDroid is capable of detecting ad frauds across the spectrum of fraud types. By analysing 12,000 ad-supported Android apps, FraudDroid identified 335 cases of fraud associated with 20 ad networks; these were further confirmed to be true positives and shared with fellow researchers to promote advanced ad fraud detection. Comment: 12 pages, 10 figures.
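    To make the heuristic-rule idea concrete, here is a minimal sketch of one rule such a detector might apply to a single UI state: flagging an ad view that overlaps an interactive button so that taps can be hijacked. The data model, rule, and coordinates are illustrative assumptions, not FraudDroid's actual implementation.

```python
# Hypothetical sketch of a placement-fraud heuristic over one UI state.
from dataclasses import dataclass, field

@dataclass
class UIState:
    name: str
    ad_views: list = field(default_factory=list)  # (x, y, w, h) rectangles of ad views
    buttons: list = field(default_factory=list)   # (x, y, w, h) rectangles of buttons

def overlaps(a, b):
    """Axis-aligned rectangle intersection test."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def flags_placement_fraud(state):
    """Rule: an ad view overlapping a button is suspicious, since taps
    meant for the button may be hijacked by the ad."""
    return any(overlaps(ad, btn) for ad in state.ad_views for btn in state.buttons)

home = UIState("home",
               ad_views=[(0, 400, 320, 50)],   # banner across the bottom of the screen
               buttons=[(100, 420, 120, 40)])  # confirmation button underneath it
print(flags_placement_fraud(home))  # True: the ad covers the button
```

    In a full pipeline, rules of this kind would be evaluated at every node of the UI state transition graph, alongside rules over the collected network traffic.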

    Building lightweight semantic search engines

    Despite significant advances in methods for processing large volumes of structured and unstructured data, surprisingly little attention has been devoted to developing general, practical methodologies that leverage state-of-the-art technologies to build domain-specific semantic search engines tailored to use cases where they could provide substantial benefits. This paper presents a methodology for developing such systems in a lightweight, modular, and flexible way, with a particular focus on providing powerful search tools in domains where non-expert users encounter challenges in exploring the data repository at hand. Using an academic expertise finder tool as a case study, we demonstrate how this methodology allows us to leverage powerful off-the-shelf technology to enable the rapid, low-cost development of semantic search engines, while also affording developers the flexibility to embed user-centric design in their development in order to maximise uptake and application value.
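    As a concrete illustration of the off-the-shelf approach, the sketch below builds a tiny embedding-based search core with the sentence-transformers library; the model name, corpus, and expertise-finder framing are assumptions chosen for the example, not the paper's actual stack.

```python
# Minimal semantic search sketch using an off-the-shelf sentence encoder.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

# Toy "expertise" corpus, standing in for a real data repository.
corpus = [
    "Expert in deep learning for medical imaging",
    "Research on battery chemistry and energy storage",
    "Natural language processing for legal documents",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

def search(query, top_k=2):
    """Embed the query and rank corpus entries by cosine similarity."""
    query_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, corpus_emb, top_k=top_k)[0]
    return [(corpus[h["corpus_id"]], round(h["score"], 3)) for h in hits]

print(search("who works on NLP?"))  # the legal-documents NLP entry ranks first
```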

    Grounded reality meets machine learning: A deep-narrative analysis framework for energy policy research

    Text-based data sources like narratives and stories have become increasingly popular as critical insight generators in energy research and social science. However, their implications for policy application usually remain superficial and fail to fully explo…

    A Review on Web Application Testing and its Current Research Directions

    Testing is an important part of every software development process, and companies devote considerable time and effort to it. The proliferation of web applications and their growing economic significance have made web application testing an area of acute importance. Web applications tend to follow fast release cycles, which makes testing them very challenging. The main issues in testing are cost efficiency and bug-detection efficiency. Coverage-based testing is the process of ensuring that specific program elements are exercised. Coverage measurement helps determine the “thoroughness” of the testing achieved. A wealth of tools, techniques, and frameworks has come into existence to ascertain the quality of web applications. A comparative study of some of the prominent tools, techniques, and models for web application testing is presented. This work highlights the current research directions of some web application testing techniques.
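    To make coverage measurement concrete, the toy sketch below hand-instruments one function and reports what fraction of its statements a test suite exercised; real tools (e.g. coverage.py for Python or Istanbul for JavaScript) automate this instrumentation.

```python
# Toy statement-coverage tracker: each instrumented point records itself
# when executed, and coverage is the fraction of points ever reached.
executed = set()

def discount(price, is_member):
    executed.add("entry")
    if is_member:
        executed.add("member-branch")
        return price * 0.9
    executed.add("guest-branch")
    return price

# A test suite that only ever exercises the member path:
assert discount(100, True) == 90.0

all_points = {"entry", "member-branch", "guest-branch"}
print(f"coverage: {len(executed) / len(all_points):.0%}")  # 67%: guest path untested
```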

    ChatrEx: Designing explainable chatbot interfaces for enhancing usefulness, transparency, and trust

    When breakdowns occur during a human-chatbot conversation, the lack of transparency and the “black-box” nature of task-oriented chatbots can make it difficult for end users to understand what went wrong and why. Inspired by recent HCI research on explainable AI solutions, we explored the design space of explainable chatbot interfaces through ChatrEx. We followed an iterative design and prototyping approach and designed two novel in-application chatbot interfaces (ChatrEx-VINC and ChatrEx-VST) that provide visual, example-based, step-by-step explanations of the underlying working of a chatbot during a breakdown. ChatrEx-VINC provides these explanations in the context of the chat window, whereas ChatrEx-VST provides them as a visual tour overlaid on the application interface. Our formative study with 11 participants elicited informal user feedback to help us iterate on our design ideas at each of the design and ideation phases, and we implemented our final designs as web-based interactive chatbots for complex spreadsheet tasks. We conducted an observational study with 14 participants to compare our designs with current state-of-the-art chatbot interfaces and assessed their strengths and weaknesses. We found that the visual explanations in both ChatrEx-VINC and ChatrEx-VST enhanced users’ understanding of the reasons for a conversational breakdown and improved users’ perceptions of usefulness, transparency, and trust. We identify several opportunities for future HCI research to exploit explainable chatbot interfaces and better support human-chatbot interaction.

    Coordination of DWH Long-Term Data Management: The Path Forward Workshop Report

    Following the 2010 DWH oil spill, a vast amount of environmental data was collected (e.g., 100,000+ environmental samples, 15 million+ publicly available records). The volume of data collected introduced a number of challenges, including data quality assurance, data storage, data integration, and long-term preservation and availability of the data. An effort to tackle these challenges began in June 2014, at a workshop focused on environmental disaster data management (EDDM) with respect to response and subsequent restoration. The EDDM collaboration improved communication and collaboration among a range of government, industry, and NGO entities involved in disaster management. In June 2017, the first DWH Long-Term Data Management (LTDM) workshop focused on reviewing existing data management systems, opportunities to advance the integration of these systems, the availability of data for restoration planning, project implementation, and restoration monitoring efforts, and on providing a platform for increased communication among the various GOM data entities. The June 2017 workshop resulted in the formation of three working groups: Data Management Standards, Interoperability, and Discovery/Searchability. These working groups spent 2018 coordinating and addressing various complex topics related to DWH LTDM. On December 4th and 5th, 2018, the Coastal Response Research Center (CRRC), the NOAA Office of Response and Restoration (ORR), and the NOAA National Marine Fisheries Service (NMFS) Restoration Center (RC) co-sponsored a workshop entitled “Deepwater Horizon Oil Spill (DWH) Long-Term Data Management (LTDM): The Path Forward” at the NOAA Gulf of Mexico (GOM) Disaster Response Center (DRC) in Mobile, AL.

    Evolution and Fragilities in Scripted GUI Testing of Android applications

    There is evidence in the literature that Android applications are not tested as rigorously as their desktop counterparts. However, thorough testing is advisable for developers, especially where the graphical user interface (GUI) of mobile apps is concerned. Some peculiarities of Android applications discourage developers from performing automated testing. Among them, we recognize fragility, i.e., test classes failing because of modifications in the GUI only, without the application functionality being modified. The aim of this study is to provide a preliminary characterization of the fragility issue for Android apps, identifying some of its causes and estimating its frequency among Android open-source projects. We defined a set of metrics to quantify the amount of fragility of any test suite, and measured them automatically for a set of repositories hosted on GitHub. We found that, for projects featuring GUI tests, the incidence of fragility is around 10% for test classes and around 5% for test methods. This means that developers have to put significant effort into fixing their test suites because of the occurrence of fragilities.
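    One way such a fragility metric could be computed is sketched below: the share of test classes that had to be edited in commits that changed only GUI resources (e.g. layout XML). The commit model and file-naming conventions are hypothetical, not the paper's actual tooling.

```python
# Hypothetical fragility metric: fraction of test classes edited in
# commits whose only non-test changes touch GUI resources.
from dataclasses import dataclass

@dataclass
class Commit:
    changed_files: list

def is_gui_only(commit):
    """True if every non-test file changed in the commit is a GUI resource."""
    return all(f.endswith(".xml") or "/layout/" in f
               for f in commit.changed_files if not f.endswith("Test.java"))

def fragile_test_ratio(commits, test_classes):
    fragile = set()
    for c in commits:
        touched_tests = {f for f in c.changed_files if f.endswith("Test.java")}
        if touched_tests and is_gui_only(c):
            fragile |= touched_tests  # tests changed purely because the GUI changed
    return len(fragile) / len(test_classes)

history = [
    Commit(["res/layout/main.xml", "app/LoginActivityTest.java"]),     # GUI-only change
    Commit(["app/LoginActivity.java", "app/LoginActivityTest.java"]),  # logic change
]
print(fragile_test_ratio(history, ["app/LoginActivityTest.java",
                                   "app/HomeActivityTest.java"]))  # 0.5
```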

    Natural Language Interfaces to Data

    Recent advances in NLU and NLP have resulted in renewed interest in natural language interfaces to data, which provide an easy mechanism for non-technical users to access and query the data. While early systems evolved from keyword search and focused on simple factual queries, the complexity of both the input sentences and the generated SQL queries has grown over time. More recently, there has also been considerable focus on using conversational interfaces for data analytics, empowering non-technical users with quick insights into the data. There are three main challenges in natural language querying (NLQ): (1) identifying the entities involved in the user utterance, (2) connecting the different entities in a meaningful way over the underlying data source to interpret user intents, and (3) generating a structured query in the form of SQL or SPARQL. There are two main approaches for interpreting a user's NLQ. Rule-based systems make use of semantic indices, ontologies, and KGs to identify the entities in the query, understand the intended relationships between those entities, and utilize grammars to generate the target queries. With the advances in deep learning (DL)-based language models, many text-to-SQL approaches have emerged that try to interpret the query holistically using DL models. Hybrid approaches that utilize both rule-based techniques and DL models are also emerging, combining the strengths of both. Conversational interfaces are the next natural step beyond one-shot NLQ, exploiting query context across multiple turns of conversation for disambiguation. In this article, we review the background technologies that are used in natural language interfaces and survey the different approaches to NLQ. We also describe conversational interfaces for data analytics and discuss several benchmarks used for NLQ research and evaluation. Comment: The full version of this manuscript, as published by Foundations and Trends in Databases, is available at http://dx.doi.org/10.1561/190000007
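    As a heavily simplified illustration of the rule-based pipeline described above (entity identification against a semantic index, intent interpretation, query generation), consider the following sketch; the schema, index, and grammar are toy assumptions, not any surveyed system.

```python
import re

# Toy "semantic index": maps utterance terms to schema elements of one table.
SEMANTIC_INDEX = {
    "employees":  ("employee", "*"),
    "salary":     ("employee", "salary"),
    "department": ("employee", "dept"),
}

def nlq_to_sql(utterance):
    tokens = re.findall(r"[a-z]+", utterance.lower())
    # Step 1: entity identification against the semantic index.
    hits = [SEMANTIC_INDEX[t] for t in tokens if t in SEMANTIC_INDEX]
    if not hits:
        raise ValueError("no schema entities recognised in the utterance")
    table = hits[0][0]  # single-table toy: all hits map to the same table
    # Step 2: a tiny grammar interprets the intent (aggregate vs. lookup).
    if "average" in tokens:
        agg_cols = [col for _, col in hits if col != "*"]
        if agg_cols:
            return f"SELECT AVG({agg_cols[0]}) FROM {table}"
    # Step 3: otherwise generate a plain projection over the recognised columns.
    cols = ", ".join(col for _, col in hits if col != "*") or "*"
    return f"SELECT {cols} FROM {table}"

print(nlq_to_sql("What is the average salary of employees?"))
# -> SELECT AVG(salary) FROM employee
```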