
    AdSplit: Separating smartphone advertising from applications

    A wide variety of smartphone applications today rely on third-party advertising services, which provide libraries that are linked into the hosting application. This situation is undesirable for both the application author and the advertiser. Advertising libraries require additional permissions, resulting in additional permission requests to users. Likewise, a malicious application could simulate the behavior of the advertising library, forging the user's interaction and effectively stealing money from the advertiser. This paper describes AdSplit, in which we extended Android to allow an application and its advertising to run as separate processes, under separate user-ids, eliminating the need for applications to request permissions on behalf of their advertising libraries. We also leverage mechanisms from Quire to allow the remote server to validate the authenticity of client-side behavior. In this paper, we quantify the degree of permission bloat caused by advertising, with a study of thousands of downloaded apps. AdSplit automatically recompiles apps to extract their ad services, and we measure minimal runtime overhead. We also observe that most ad libraries simply embed an HTML widget, and we describe how AdSplit can be designed with this in mind to avoid any need for ads to have native code.
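
    The permission-bloat measurement the abstract mentions can be illustrated with a minimal sketch: compare the permissions an app requests against the set its ad library needs, and count those requested only on the library's behalf. All names and permission sets below are illustrative assumptions, not taken from AdSplit itself.

    ```python
    # Hypothetical example of measuring permission bloat caused by an ad library.
    # The permission set below is an assumption for illustration only.
    AD_LIBRARY_PERMISSIONS = {
        "INTERNET",
        "ACCESS_NETWORK_STATE",
        "ACCESS_COARSE_LOCATION",  # often requested for ad targeting
    }

    def bloat_permissions(app_permissions: set, app_core_needs: set) -> set:
        """Permissions requested only because the bundled ad library needs them."""
        return (app_permissions & AD_LIBRARY_PERMISSIONS) - app_core_needs

    # A flashlight app that needs only CAMERA for its own functionality:
    requested = {"CAMERA", "INTERNET", "ACCESS_COARSE_LOCATION"}
    print(sorted(bloat_permissions(requested, {"CAMERA"})))
    # ['ACCESS_COARSE_LOCATION', 'INTERNET']
    ```

    Under AdSplit's design, these bloat permissions would move to the separated advertising process rather than being requested by the host app.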

    Food supply chain stakeholders' perspectives on sharing information to detect and prevent food integrity issues

    One of the biggest challenges facing the food industry is assuring food integrity. Dealing with complex food integrity issues requires a multi-dimensional approach. Preventive actions and early reactive responses are key for the food supply chain. Information sharing could facilitate the detection and prevention of food integrity issues. This study investigates attitudes towards a food integrity information sharing system (FI-ISS) among stakeholders in the European food supply chain. Insights into stakeholders' interest in participating and their conditions for joining an FI-ISS are assessed. The stakeholder consultation consisted of three rounds. During the first round, a total of 143 food industry stakeholders, covering all major food sectors susceptible to food integrity issues, participated in an online quantitative survey between November 2017 and February 2018. The second round, an online qualitative feedback survey in which the findings were presented, received feedback from 61 stakeholders from the food industry, food safety authorities and the science community. Finally, 37 stakeholders discussed the results in further detail during an interactive workshop in May 2018. Three distinct groups of industry stakeholders were identified based on the reported frequency of occurrence and likelihood of detecting food integrity issues. Food industry stakeholders strongly support the concept of an FI-ISS, with an attitude score of 4.49 (SD = 0.57) on a 5-point scale, and their willingness to participate is accordingly high (81%). Consensus exists regarding the advantages an FI-ISS can yield for detection and prevention. A stakeholder's perception of the advantages was identified as a predictor of their intention to join an FI-ISS, while their perception of the disadvantages and the perceived risk of food integrity issues were not.
Medium-sized companies perceive the current detection of food integrity issues as less likely than smaller and large companies do. Interestingly, medium-sized companies also have lower intentions to join an FI-ISS. Four key success factors for an FI-ISS are defined, specifically regarding (1) the actors to be involved in the system, (2) the information to be shared, (3) the third party to manage the FI-ISS and (4) the role of food safety authorities. Reactions diverged concerning the required level of transparency, the type of data stakeholders might be willing to share in an FI-ISS and the role authorities can have within an FI-ISS.

    Data-Driven Implementation To Filter Fraudulent Medicaid Applications

    There has been much work to improve IT systems for managing and maintaining health records. The U.S. government is trying to integrate different types of health care data for providers and patients. Health care fraud detection research has focused on claims by providers, physicians, hospitals, and other medical service providers to detect fraudulent billing, abuse, and waste. Data-mining techniques have been used to detect patterns in health care fraud and reduce the amount of waste and abuse in the health care system. However, less attention has been paid to implementing a system that detects fraudulent applications, specifically for Medicaid. In this study, a data-driven system using a layered architecture to filter fraudulent Medicaid applications was proposed. The Medicaid Eligibility Application System utilizes a set of public and private databases that contain individual asset records. These asset records are used to determine the Medicaid eligibility of applicants using a scoring model integrated with a threshold algorithm. The findings indicated that by using the proposed data-driven approach, the state Medicaid agency could filter fraudulent Medicaid applications and save over $4 million in Medicaid expenditures.
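
    The "scoring model integrated with a threshold algorithm" can be sketched minimally as a weighted sum of asset-record flags compared against a cutoff. The feature names, weights and threshold below are invented for illustration; the paper's actual model is not specified here.

    ```python
    # Hypothetical scoring-plus-threshold sketch for flagging applications.
    # All weights, flags and the threshold are assumptions, not the paper's values.
    ASSET_WEIGHTS = {
        "undisclosed_property": 0.5,
        "undisclosed_vehicle": 0.2,
        "income_mismatch": 0.3,
    }
    THRESHOLD = 0.4  # scores at or above this are routed for manual review

    def risk_score(flags: dict) -> float:
        """Sum the weights of every asset-record flag raised for an applicant."""
        return sum(w for name, w in ASSET_WEIGHTS.items() if flags.get(name))

    def should_review(flags: dict) -> bool:
        return risk_score(flags) >= THRESHOLD

    print(should_review({"undisclosed_vehicle": True}))  # False (0.2 < 0.4)
    print(should_review({"undisclosed_vehicle": True, "income_mismatch": True}))  # True (0.5 >= 0.4)
    ```

    In a layered architecture like the one described, a component of this kind would sit between the asset-record lookup layer and the eligibility decision layer.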

    Critical success factors for preventing E-banking fraud

    E-banking fraud is an issue experienced globally and continues to prove costly to both banks and customers. Fraud in e-banking services occurs as a result of various security compromises, ranging from weak authentication systems to insufficient internal controls. The lack of research in this area is problematic for practitioners, so research is needed to help improve security and prevent stakeholders from losing confidence in the system. The purpose of this paper is to understand the factors that could be critical in strengthening fraud prevention systems in electronic banking. The paper reviews the relevant literature to identify potential critical success factors for fraud prevention in e-banking. Our findings show that beyond technology, other factors need to be considered, such as internal controls, customer education and staff education. These findings will assist banks and regulators with information on the specific areas that should be addressed to build on their existing fraud prevention systems.

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by correlating individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is examined, covering model-based approaches, 'programmed' AI and machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and a limited ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise.
Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation and adaptation are more readily facilitated.
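
    The two approaches the report contrasts, and their hybridisation, can be sketched minimally: a rule-based component flags known misuse patterns (risking false negatives on novel misuse), while a learned profile of normal behaviour flags anything unfamiliar (risking false positives). The event fields, rules and "normal" profile below are invented for illustration.

    ```python
    # Hypothetical hybrid misuse detector: programmed rules plus a normality profile.
    # All rules, fields and thresholds are assumptions for illustration only.
    KNOWN_MISUSE_RULES = [
        lambda e: e["calls_per_min"] > 100,  # call flooding
        lambda e: e["dest"] == "premium" and e["duration_s"] < 2,  # premium-rate pumping
    ]

    # "Learned" profile of normal behaviour (here just a fixed range for brevity;
    # a real system would fit this from historical traffic).
    NORMAL_CALLS_PER_MIN = range(0, 21)

    def classify(event: dict) -> str:
        if any(rule(event) for rule in KNOWN_MISUSE_RULES):
            return "known misuse"  # programmed knowledge: low false positives
        if event["calls_per_min"] not in NORMAL_CALLS_PER_MIN:
            return "anomaly"  # deviation from normal: can catch unknown misuse
        return "normal"

    print(classify({"calls_per_min": 150, "dest": "local", "duration_s": 60}))  # known misuse
    print(classify({"calls_per_min": 40, "dest": "local", "duration_s": 60}))   # anomaly
    print(classify({"calls_per_min": 5, "dest": "local", "duration_s": 60}))    # normal
    ```

    In the hybrid systems the report describes, confirmed anomalies would feed back into the rule base, while rule hits would refine the normality profile, lowering both false negatives and false positives over time.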