
    Budget-Feasible Mechanism Design for Non-Monotone Submodular Objectives: Offline and Online

    The framework of budget-feasible mechanism design studies procurement auctions where the auctioneer (buyer) aims to maximize his valuation function subject to a hard budget constraint. We study the problem of designing truthful mechanisms that have good approximation guarantees and never pay the participating agents (sellers) more than the budget. We focus on the case of general (non-monotone) submodular valuation functions and derive the first truthful, budget-feasible and O(1)-approximate mechanisms that run in polynomial time in the value query model, for both offline and online auctions. Prior to our work, the only O(1)-approximation mechanism known for non-monotone submodular objectives required an exponential number of value queries. At the heart of our approach lies a novel greedy algorithm for non-monotone submodular maximization under a knapsack constraint. Our algorithm builds two candidate solutions simultaneously (to achieve a good approximation), yet ensures that agents cannot jump from one solution to the other (to implicitly enforce truthfulness). Ours is the first mechanism for the problem where---crucially---the agents are not ordered with respect to their marginal value per cost. This allows us to appropriately adapt these ideas to the online setting as well. To further illustrate the applicability of our approach, we also consider the case where additional feasibility constraints are present. We obtain O(p)-approximation mechanisms for both monotone and non-monotone submodular objectives, when the feasible solutions are independent sets of a p-system. With the exception of additive valuation functions, no mechanisms were known for this setting prior to our work. Finally, we provide lower bounds suggesting that, when one cares about non-trivial approximation guarantees in polynomial time, our results are asymptotically best possible. Comment: Accepted to EC 201
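    The combinatorial core of the approach, a greedy that maintains two disjoint candidate solutions and considers agents in arbitrary order, can be sketched as follows. This is an illustrative simplification, not the paper's mechanism (payments and truthfulness are omitted entirely); the coverage-minus-penalty objective, item costs, and budget below are toy assumptions of my own.

```python
def greedy_two_candidates(items, cost, budget, f):
    """Maintain two disjoint candidate solutions; each item is considered
    in arbitrary order (crucially, NOT sorted by marginal value per cost)
    and added to the first candidate where it fits within the budget and
    has non-negative marginal value. Return the better candidate."""
    sols = [set(), set()]
    for x in items:
        for S in sols:
            fits = sum(cost[y] for y in S) + cost[x] <= budget
            if fits and f(S | {x}) - f(S) >= 0:
                S.add(x)
                break
    return max(sols, key=f)

# Toy non-monotone submodular objective: coverage minus a per-item penalty.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {1, 4, 5}}

def f(S):
    covered = set().union(*(coverage[x] for x in S)) if S else set()
    return len(covered) - 0.7 * len(S)

cost = {"a": 2, "b": 2, "c": 2, "d": 3}
best = greedy_two_candidates(list(coverage), cost, budget=5, f=f)
print(sorted(best), f(best))  # the better of the two candidates
```

    With these toy numbers the first candidate fills up with a and b, so c and d land in the second candidate, which turns out to be the better of the two; an item rejected by one candidate cannot displace items in the other.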

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Clinicians' Perspectives on a Web-Based System for Routine Outcome Monitoring in Old-Age Psychiatry in the Netherlands

    Background: In health care, the use of physical parameters to monitor physical disease progress is common. In mental health care, the periodic measurement of a client's functioning during treatment, or routine outcome monitoring, has recently become important. Online delivery of questionnaires has the potential to reduce clinicians' resistance to the implementation of routine outcome monitoring: it enables clinicians to receive a graphical summary of questionnaire results directly after data entry, giving them insight into a client's progress at a single glance. Objective: To explore clinicians' perspectives on a routine outcome monitoring procedure where questionnaires and feedback on scores were delivered online. Questionnaires could also be filled out on paper and then entered into the online system by a research assistant. Methods: In 2009, we sent an online survey, consisting of five yes-or-no questions and six open-ended questions, to all clinicians in the 14 mental health care organizations working with the routine outcome monitoring system in the Netherlands. Of the 172 clinicians contacted, 80 (47%) opened the link and 70 of these 80 (88%) clinicians completed the survey. Results: Clinicians seldom used the graphical feedback from the Web-based system, which indicates that direct feedback on scores did not enhance the implementation of routine outcome monitoring. Integration into the electronic patient record and more training on the interpretation and use of feedback in daily practice were seen as the primary points for further improvement. It was mainly the availability of a research assistant that made the routine outcome monitoring procedure feasible. Conclusions: Without a research assistant and training in the interpretation of outcomes, software programs alone cannot ensure effective implementation of monitoring activities in everyday practice. © Marjolein A Veerbeek, Richard C Oude Voshaar, Anne Margriet Pot

    Evidence-based Cybersecurity: Data-driven and Abstract Models

    Achieving computer security requires both rigorous empirical measurement and models to understand cybersecurity phenomena and the effectiveness of defenses and interventions. To address the growing scale of cyber-insecurity, my approach to protecting users employs principled and rigorous measurements and models. In this dissertation, I examine four cybersecurity phenomena. I show that data-driven and abstract modeling can reveal surprising conclusions about long-term, persistent problems, like spam and malware, and growing threats like data breaches and cyber conflict. I present two data-driven statistical models and two abstract models. Both of the data-driven models show that the presence of heavy-tailed distributions can make naive analysis of trends and interventions misleading. First, I examine ten years of publicly reported data breaches and find that there has been no increase in size or frequency. I also find that reported and perceived increases can be explained by the heavy-tailed nature of breaches. In the second data-driven model, I examine a large spam dataset, analyzing spam concentrations across Internet Service Providers. Again, I find that the heavy-tailed nature of spam concentrations complicates analysis. Using appropriate statistical methods, I identify unique risk factors with significant impact on local spam levels. I then use the model to estimate the effect of historical botnet takedowns and find they are frequently ineffective at reducing global spam concentrations and have highly variable local effects. Abstract models are an important tool when data are unavailable. Even without data, I evaluate both known and hypothesized interventions used by search providers to protect users from malicious websites. I present a Markov model of malware spread and study the effect of two potential interventions: blacklisting and depreferencing. I find that heavy-tailed traffic distributions obscure the effects of interventions, but with my abstract model, I show that lowering search rankings is a viable alternative to blacklisting infected pages. Finally, I study how game-theoretic models can help clarify strategic decisions in cyber-conflict. I find that, in some circumstances, improving the attribution ability of adversaries may decrease the likelihood of escalating cyber conflict.
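    The heavy-tail point can be demonstrated with a few lines of simulation. The distribution and its parameter below are hypothetical stand-ins, not the dissertation's fitted models:

```python
import random

random.seed(7)

def pareto_sample(n, alpha=1.2):
    # Heavy-tailed sizes (infinite variance for alpha < 2), a stand-in
    # for breach-size or spam-volume data; alpha is a made-up parameter.
    return [random.paretovariate(alpha) for _ in range(n)]

# With heavy tails, a tiny fraction of events carries a large share of
# the total volume, so naive year-over-year means swing wildly even when
# the underlying process is unchanged.
sizes = sorted(pareto_sample(10_000), reverse=True)
top_1pct_share = sum(sizes[:100]) / sum(sizes)
print(f"top 1% of events carry {top_1pct_share:.0%} of total volume")
```

    Because a handful of extreme events dominate the total, comparing raw totals or means across years mostly measures whether an outlier happened to fall in the window, which is why the dissertation argues for statistics appropriate to heavy-tailed data.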

    INDIGO: a generalized model and framework for performance prediction of data dissemination

    According to recent studies, an enormous rise in location-based mobile services is expected in the future. People are interested in obtaining and acting on localized information from their vicinity, such as local events, shopping offers, and local food. These studies also suggest that local businesses intend to maximize the reach of their localized offers and advertisements by pushing them to the maximum number of interested people. The scope of such localized services can be augmented by leveraging the capabilities of smartphones to disseminate this information to other interested people. To enable local businesses (or publishers) of localized services to make informed decisions and assess the performance of their dissemination-based localized services in advance, we need to predict the performance of data dissemination in complex real-world scenarios. Questions relevant to publishers include the maximum time required to disseminate information and the best relays for maximizing dissemination. This thesis addresses these questions with a solution called INDIGO, which predicts data dissemination performance based on the availability of physical and social proximity information among people, while collectively considering different real-world aspects of the dissemination process. INDIGO empowers publishers to assess the performance of their localized dissemination-based services in advance, in both the physical and the online social world. Its INDIGO–Physical component covers the cases where physical proximity plays the fundamental role, enabling tighter prediction of data dissemination time and of the best relays under real-world mobility, communication, and dissemination-strategy aspects.
    Further, this thesis also contributes performance prediction of data dissemination in large-scale online social networks, where social proximity is prominent, through the INDIGO–OSN component of the framework, under different real-world dissemination aspects such as heterogeneous user activity, the type of information to be disseminated, friendship ties, and the content of published online activities. INDIGO is the first work that provides a set of solutions enabling publishers to predict the performance of their localized dissemination-based services from physical and social proximity information across both physical and online social networks. INDIGO outperforms existing works for physical proximity by providing a 5 times tighter upper bound on data dissemination time under real-world dissemination aspects. Further, for social proximity, INDIGO predicts data dissemination with 90% accuracy and, differently from other works, also provides a trade-off between high prediction accuracy and privacy by introducing feature planes from online social networks.
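    The core prediction task, estimating how long information takes to spread over recorded proximity, can be illustrated with a minimal replay over a hypothetical time-stamped contact trace. This is a generic epidemic-style sketch, not INDIGO's actual predictor:

```python
def dissemination_time(contacts, seed_node, n_nodes):
    """Replay a time-stamped contact trace (t, u, v): whenever an informed
    node meets an uninformed one, the information is passed on. Return the
    time at which everyone is informed, or None if coverage is incomplete."""
    informed = {seed_node}
    for t, u, v in sorted(contacts):
        if u in informed or v in informed:
            informed |= {u, v}
            if len(informed) == n_nodes:
                return t
    return None

# Hypothetical proximity trace: (timestamp, node, node).
trace = [(1, 0, 1), (2, 1, 2), (4, 2, 3), (6, 3, 4)]
print(dissemination_time(trace, seed_node=0, n_nodes=5))  # prints 6
```

    On top of a replay like this, a predictor must also account for the real-world aspects the thesis emphasizes: mobility patterns, communication failures, and the dissemination strategy in use.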

    Arguing Security: A Framework for Analyzing Security Requirements

    When considering the security of a system, the analyst must simultaneously work with two types of properties: those that can be shown to be true, and those that must be argued as being true. The first consists of properties that can be demonstrated conclusively, such as the type of encryption in use or the existence of an authentication scheme. The second consists of things that cannot be so demonstrated but must be considered true for a system to be secure, such as the trustworthiness of a public key infrastructure or the willingness of people to keep their passwords secure. The choices represented by the second case are called trust assumptions, and the analyst should supply arguments explaining why the trust assumptions are valid. This thesis presents three novel contributions: a framework for security requirements elicitation and analysis, based upon the construction of a context for the system; an explicit place and role for trust assumptions in security requirements; and structured satisfaction arguments to validate that a system can satisfy the security requirements. The system context is described using a problem-centered notation and is then validated against the security requirements through construction of a satisfaction argument. The satisfaction argument is in two parts: a formal argument that the system can meet its security requirements, and structured informal arguments supporting the assumptions exposed during argument construction. If one cannot construct a convincing argument, designers are asked to provide design information to resolve the problems, and another pass is made through the framework to verify that the proposed solution satisfies the requirements. Alternatively, stakeholders are asked to modify the goals for the system so that the problems can be resolved or avoided. The contributions are evaluated by using the framework to perform a security requirements analysis within an air traffic control technology evaluation project.

    QoS Aware Transmit Beamforming for Secure Backscattering in Symbiotic Radio Systems

    This paper focuses on secure backscatter transmission in the presence of a passive multi-antenna eavesdropper through a symbiotic radio (SR) network. Specifically, a single-antenna backscatter device (BD) aims to transmit confidential information to a primary receiver (PR) by using a multi-antenna primary transmitter's (PT) signal, where the received symbols are jointly decoded at the PR. Our objective is to achieve confidential communications for the BD while ensuring that the primary system's quality of service (QoS) requirements are met. We propose an alternating optimisation algorithm that maximises the achievable secrecy rate of the BD by jointly optimising the primary transmit beamforming and the power sharing between information and artificial noise (AN) signals. Numerical results verify our analytical claims on the optimality of the proposed solution and the low complexity of the proposed methodology. Additionally, our simulations provide nontrivial design insights into the critical system parameters and quantify the achievable gains over the relevant benchmark schemes.
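    The power-sharing step can be sketched in a simplified scalar model with made-up channel gains and no beamforming; the paper's alternating optimisation additionally optimises the transmit beamformer under the QoS constraint, which is omitted here:

```python
import math

def secrecy_rate(rho, g_leg=8.0, g_eve=5.0, g_an=6.0):
    """Toy scalar model (hypothetical channel gains): the legitimate
    receiver cancels the known AN, the eavesdropper cannot. rho is the
    power share of the information signal; 1 - rho goes to AN."""
    r_leg = math.log2(1 + rho * g_leg)
    r_eve = math.log2(1 + rho * g_eve / (1 + (1 - rho) * g_an))
    return max(r_leg - r_eve, 0.0)

# One-dimensional grid search over the power split, standing in for the
# power-sharing step of the alternating optimisation.
best_rho = max((i / 100 for i in range(101)), key=secrecy_rate)
print(best_rho, round(secrecy_rate(best_rho), 3))
```

    Even in this toy model, the optimum is interior: spending everything on the information signal helps the eavesdropper too, while spending everything on AN leaves nothing to transmit.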

    A Hybrid Framework for Sentiment Analysis Using Genetic Algorithm Based Feature Reduction

    © 2019 IEEE. Due to the rapid development of Internet technologies and social media, sentiment analysis has become an important opinion mining technique. Recent research has described the effectiveness of different sentiment classification techniques, ranging from simple rule-based and lexicon-based approaches to more complex machine learning algorithms. While lexicon-based approaches have suffered from the lack of dictionaries and labeled data, machine learning approaches have fallen short in terms of accuracy. This paper proposes an integrated framework which bridges the gap between lexicon-based and machine learning approaches to achieve better accuracy and scalability. To solve the scalability issue that arises as the feature set grows, a novel genetic algorithm (GA)-based feature reduction technique is proposed. Using this hybrid approach, we are able to reduce the feature-set size by up to 42% without compromising accuracy. The comparison of our feature reduction technique with the more widely used principal component analysis (PCA) and latent semantic analysis (LSA) based techniques has shown up to 15.4% higher accuracy than PCA and up to 40.2% higher accuracy than LSA. Furthermore, we also evaluate our sentiment analysis framework on other metrics, including precision, recall, F-measure, and feature size. To demonstrate the efficacy of GA-based designs, we also propose the novel cross-disciplinary area of geopolitics as a case-study application for our sentiment analysis framework. The experimental results show that the framework accurately measures public sentiment and views regarding various topics such as terrorism, global conflicts, and social issues. We envisage the applicability of our proposed work in various areas including security and surveillance, law-and-order, and public administration.
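    GA-based feature reduction of this kind is typically implemented as a search over binary feature masks. The sketch below is illustrative only: the fitness function and the set of "informative" features are made up, whereas the paper's fitness would be computed from the sentiment classifier's performance on real data.

```python
import random

random.seed(0)

N_FEATURES = 20
INFORMATIVE = {1, 4, 7, 11, 16}  # made-up "useful" feature indices

def fitness(mask):
    """Toy fitness: reward keeping informative features, penalise total
    size. A real pipeline would instead score the sentiment classifier
    on a validation set."""
    kept = {i for i, bit in enumerate(mask) if bit}
    return 3 * len(kept & INFORMATIVE) - len(kept)

def ga_select(pop_size=30, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]  # one-point crossover
            child = [bit ^ (random.random() < p_mut) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga_select()
kept = sorted(i for i, bit in enumerate(best) if bit)
print(kept)
```

    Because the fitness penalises every retained feature, the search pressure naturally shrinks the mask, which is the mechanism behind reducing the feature-set size without sacrificing classifier accuracy.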