12,348 research outputs found

    Bullcoming v. New Mexico: Revisiting Analyst Testimony After Melendez-Diaz

    Wallyfy is planned to be a social webmagazine with a focus on good content, backed by quality checks and moderation of the published content. The author of the study provides Wallyfy with thoughts and ideas about the website's content and functionality. The main problem of the study was to explore what makes good content in social media for potential users of Wallyfy, and to use this insight to give Wallyfy directions for decisions regarding both functionality and content. The theory of the study is used as a starting point for understanding the phenomenon of social media and for grasping the problem more easily. The method is based on user-centered design thinking, where the author seeks to understand the participants' emotions, values and dreams. Design probes (tasks for the participants) were used to support the first steps of the qualitative data collection, enabling the five participants of the study to take part in the idea generation and familiarizing them with social media. The participants then attended a workshop based on the qualitative data from the design probes, and further qualitative data were derived from the discussion and the creative participation in the workshop. The relevant parts of the data were then compiled and organised for presentation as the data collection result of the study. The main theme in the data was that the participants valued content more if a personal connection could be made between the user and the content. From a discussion of the data and theory, recommendations and requirements regarding content and functionality for Wallyfy were produced.

    Spectral Norm Regularization for Improving the Generalizability of Deep Learning

    We investigate the generalizability of deep learning based on the sensitivity to input perturbation. We hypothesize that high sensitivity to perturbations of the data degrades performance on that data. To reduce the sensitivity to perturbation, we propose a simple and effective regularization method, referred to as spectral norm regularization, which penalizes high spectral norms of the weight matrices in neural networks. We provide supportive evidence for this hypothesis by experimentally confirming that models trained using spectral norm regularization exhibit better generalizability than models trained with other baseline methods.
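
    A minimal sketch of the idea (not the authors' implementation; the coefficient lambda_sn, the layer selection, and the iteration count are assumptions): estimate the spectral norm of each weight matrix with a few power-iteration steps and add the sum of squared spectral norms to the task loss, here in PyTorch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def spectral_norm(weight: torch.Tensor, n_iters: int = 5) -> torch.Tensor:
        # Estimate the largest singular value of a weight matrix by power iteration;
        # convolution kernels are flattened to 2-D first.
        W = weight.reshape(weight.shape[0], -1)
        u = torch.randn(W.shape[0], device=W.device)
        v = F.normalize(W.t() @ u, dim=0)
        for _ in range(n_iters):
            u = F.normalize(W @ v, dim=0)
            v = F.normalize(W.t() @ u, dim=0)
        return torch.dot(u, W @ v)  # approximately sigma_max(W)

    def spectral_penalty(model: nn.Module):
        # Sum of squared spectral norms over the model's linear and conv layers.
        penalty = 0.0
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                penalty = penalty + spectral_norm(module.weight) ** 2
        return penalty

    # Hypothetical usage inside a training step:
    #   loss = task_loss + lambda_sn * spectral_penalty(model)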

    Lung Segmentation from Chest X-rays using Variational Data Imputation

    Pulmonary opacification is inflammation in the lungs caused by many respiratory ailments, including the novel coronavirus disease 2019 (COVID-19). Chest X-rays (CXRs) with such opacifications render regions of the lungs imperceptible, making it difficult to perform automated image analysis on them. In this work, we focus on segmenting lungs from such abnormal CXRs as part of a pipeline aimed at automated risk scoring of COVID-19 from CXRs. We treat the high-opacity regions as missing data and present a modified CNN-based image segmentation network that utilizes a deep generative model for data imputation. We train this model on normal CXRs with extensive data augmentation and demonstrate that it usefully extends to cases with extreme abnormalities.
    Comment: Accepted for presentation at the first Workshop on the Art of Learning with Missing Values (Artemiss), hosted by the 37th International Conference on Machine Learning (ICML). Source code, training data and the trained models are available at: https://github.com/raghavian/lungVAE
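
    A compact sketch of the general idea rather than the released lungVAE model (all layer sizes, the latent dimensionality, and the beta weight below are illustrative assumptions): a convolutional encoder with a variational bottleneck whose decoder predicts the lung mask, so that occluded regions are effectively imputed from the learned latent distribution.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VariationalSegmenter(nn.Module):
        def __init__(self, latent_dim: int = 64):
            super().__init__()
            # Encoder: single-channel CXR -> downsampled feature map.
            self.enc = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.to_mu = nn.Conv2d(32, latent_dim, 1)
            self.to_logvar = nn.Conv2d(32, latent_dim, 1)
            # Decoder: latent feature map -> per-pixel lung-mask logits.
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(latent_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),
            )

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return self.dec(z), mu, logvar

    def loss_fn(logits, target_mask, mu, logvar, beta: float = 1e-3):
        # Segmentation term plus a KL term that regularizes the latent space
        # from which occluded regions are imputed.
        seg = F.binary_cross_entropy_with_logits(logits, target_mask)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return seg + beta * kl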

    A risk-security tradeoff in graphical coordination games

    A system relying on the collective behavior of decision-makers can be vulnerable to a variety of adversarial attacks. How well can a system operator protect performance in the face of these risks? We frame this question in the context of graphical coordination games, where the agents in a network choose between two conventions and derive benefits from coordinating with their neighbors, and system performance is measured in terms of the agents' welfare. In this paper, we assess an operator's ability to mitigate two types of adversarial attacks: (1) broad attacks, where the adversary incentivizes all agents in the network, and (2) focused attacks, where the adversary can force a selected subset of the agents to commit to a prescribed convention. As a mitigation strategy, the system operator can implement a class of distributed algorithms that govern the agents' decision-making process. Our main contribution characterizes the operator's fundamental tradeoff between security against worst-case broad attacks and vulnerability to focused attacks. We show that this tradeoff improves significantly when the operator selects a decision-making process at random. Our work highlights the design challenges a system operator faces in maintaining the resilience of networked distributed systems.
    Comment: 13 pages, double column, 4 figures. Submitted for journal publication.
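
    A toy sketch of the underlying model, under common assumptions that the abstract does not spell out: each agent on a graph picks convention "x" or "y"; an edge where both endpoints play x pays each of them 1 + alpha, both playing y pays each 1, and a mismatch pays 0; system performance is the total welfare.

    from typing import Dict, List

    def agent_utility(agent: str, action: Dict[str, str],
                      neighbors: Dict[str, List[str]], alpha: float) -> float:
        # Payoff of one agent: sum of edge payoffs with its coordinating neighbors.
        u = 0.0
        for nbr in neighbors[agent]:
            if action[agent] == action[nbr]:
                u += 1.0 + alpha if action[agent] == "x" else 1.0
        return u

    def welfare(action: Dict[str, str],
                neighbors: Dict[str, List[str]], alpha: float) -> float:
        # System performance: the sum of all agents' utilities.
        return sum(agent_utility(a, action, neighbors, alpha) for a in neighbors)

    # Example: a 3-agent line graph in which agent "c" has been forced to "y",
    # a toy stand-in for a focused attack on that agent.
    neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    print(welfare({"a": "x", "b": "x", "c": "y"}, neighbors, alpha=0.5))  # 3.0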