
    The Usage of Individual Privacy Settings on Social Networking Sites - Drawing Desired Digital Images of Oneself

    Social networking sites (SNS) such as Facebook have created a new way for individuals to share personal data and interact with each other on the Internet. The disclosure of this personal data is directly tied to the existing relationships of individuals within an SNS. Individual privacy settings allow a selective disclosure of personal data to specific connected individuals. In this paper, we present first empirical insights from a grounded theory study, based on 37 qualitative interviews with Facebook users, which reveal factors that drive, or generally influence, the use of these individual privacy settings on SNS. By investigating this privacy protection behaviour towards connected individuals, so-called friends in Facebook's terminology, we add new perspectives on individuals' privacy protection behaviour in non-anonymous online environments to existing theories of information privacy protection. We have developed a conceptual model showing that the motivation to use individual privacy settings depends on a complex interplay between different factors. Motives for using SNS, existing relationships, and the context of personal data disclosure have been identified as important drivers. Building on those insights further allows the development or improvement of general privacy controls for individuals interacting with each other on the Internet.

    Differentially Private Model Selection with Penalized and Constrained Likelihood

    In statistical disclosure control, the goal of data analysis is twofold: the released information must provide accurate and useful statistics about the underlying population of interest, while minimizing the potential for an individual record to be identified. In recent years, the notion of differential privacy has received much attention in theoretical computer science, machine learning, and statistics. It provides a rigorous and strong notion of protection for individuals' sensitive information. A fundamental question is how to incorporate differential privacy into traditional statistical inference procedures. In this paper we study model selection in multivariate linear regression under the constraint of differential privacy. We show that model selection procedures based on penalized least squares or likelihood can be made differentially private by a combination of regularization and randomization, and propose two algorithms to do so. We show that our private procedures are consistent under essentially the same conditions as the corresponding non-private procedures. We also find that under differential privacy, the procedure becomes more sensitive to the tuning parameters. We illustrate and evaluate our method using simulation studies and two real data examples.
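    The abstract's idea of privatizing a penalized selection criterion through randomization can be illustrated with a minimal sketch. The sketch below uses the exponential mechanism as the randomization step; it is not one of the paper's two algorithms, and the function name dp_model_selection, the BIC-style penalty, and the assumed score sensitivity bound are all illustrative assumptions.

        import itertools

        import numpy as np

        def dp_model_selection(X, y, epsilon, penalty=1.0, sensitivity=1.0, seed=0):
            """Pick a predictor subset via the exponential mechanism.

            Scores each candidate model by a penalized least-squares fit and
            samples one in proportion to exp(eps * score / (2 * sensitivity)).
            NOTE: `sensitivity` is an *assumed* bound on how much one record
            can change the score; a real procedure must verify such a bound,
            e.g., via bounded data and regularization.
            """
            n, p = X.shape
            candidates, scores = [], []
            for k in range(p + 1):  # brute force over subsets; feasible only for small p
                for subset in itertools.combinations(range(p), k):
                    if subset:
                        Xs = X[:, list(subset)]
                        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                        rss = float(np.sum((y - Xs @ beta) ** 2))
                    else:
                        rss = float(np.sum(y ** 2))  # null model
                    candidates.append(subset)
                    scores.append(-rss / n - penalty * k)  # penalized fit, higher is better
            scores = np.asarray(scores)
            logits = epsilon * scores / (2.0 * sensitivity)
            probs = np.exp(logits - logits.max())  # stabilized softmax
            probs /= probs.sum()
            idx = np.random.default_rng(seed).choice(len(candidates), p=probs)
            return candidates[idx]

    With a large epsilon the draw concentrates on the best penalized model; with a small epsilon it approaches a uniform pick over all subsets, which gives a rough feel for why, under differential privacy, the outcome becomes more sensitive to the tuning parameters.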

    Patent Law's Unpredictability Doctrine and the Software Arts

    Part II reviews these insights from the Norden model generally. Part III brings these insights to the disclosure doctrines for software patents, with particular emphasis on the unpredictability factor for undue experimentation within enablement. The model corresponds well with enablement and best mode but does not correspond as well with other disclosure-prompting doctrines whose role is related to defining the claim. Thus, the review in Part III of written description, definiteness, and means-plus-function (§ 112 ¶ 6) claim limitations helps establish the contours of applicability for the Norden model. The discussion in Part III also reviews the current state of the law for software patent disclosure: disclosure burdens are light and do not require disclosure of source code for the software. Thus, software patents may represent the high-water mark in patent law for having your cake and eating it too: trade secrecy protection attaches when the software is licensed and distributed under proprietary terms, that is, distributing object code while keeping the source code secret. Within this review of software patent disclosure law, the Article contrasts the continuum of possible disclosure modes with the Norden model and patent law's current requirements. Part IV then completes the Article by arguing for a change to one of the requirements: reducing the categorical approach to unpredictability in the software arts. All of software should not be deemed predictable. Many niches are, but some are not. Unpredictability is one of eight Wands factors that define undue experimentation, but it is particularly important among them. Technologically, Part IV explains potential sources of unpredictable or unreliable behavior in software systems. Pragmatically, the progression of software technology since the time of the precedent influencing enablement for software patents suggests a failure by the law to recognize changes in the technology. Moreover, the disclosure doctrines in software patents have not responded to the expansion of patentable subject matter in the area of software patents. The discussion also helps show that patent law does not clearly specify what it means by unpredictability: in particular, whether the unpredictable-arts doctrine attaches only to items that are ungovernable or inestimable in nature or based on natural principles. Software is different as a discipline because it processes encoded information, where the encoding is derived from human thought. For some, this processing would not fit within a definition of what is natural. Regardless, the Norden model suggests that an effort-based perspective on disclosure brings notions of unpredictability into the software arts in a nuanced and niche-specific manner.

    The Disclosure of Organizational Secrets by Employees

    Organizational secrets enable firms to protect their unique stocks of knowledge, reduce the imitability of their capabilities, and achieve sustained competitive advantages (Hannah, 2005). In today's business environments, the loss of valuable proprietary organizational knowledge due to intentional employee disclosure represents a substantial threat to firm competitiveness. Anecdotal evidence suggests that firms in the United States lose more than $250 billion of intellectual property every year, with intentional employee disclosure accounting for a significant portion of these losses (Dandliker, 2012; Heffernan & Swartwood, 1993). Thus, understanding the factors that influence such intentional secret disclosure is a key concern, especially in knowledge-intensive industries. While prior research has primarily focused on the disclosure of personal secrets, family secrets, or 'dark' organizational secrets, very few studies have examined the disclosure of value-creating organizational secrets, i.e., strategic secrets that conceal knowledge about a firm's plans from competitors and social secrets that create valued identity categorizations within organizations (Goffman, 1959). This dissertation begins to address this gap in the literature by putting forth a person-situation interaction model of secret disclosure. Specifically, drawing on the resource-based view of the firm and social identity theory, it explores how certain characteristics of value-creating organizational secrets (e.g., market value of knowledge and social value of concealment) may interact with certain individual-level variables (e.g., moral identity and need for status) to influence employees' secret disclosure intent. Using scenario-based surveys of undergraduate and EMBA students and a cross-sectional sample of working adults in the United States, this dissertation finds evidence for the key proposition that employees' perceptions of the market value of knowledge and the social value of concealment shape their secret disclosure intentions. Individual-level factors such as moral identity and organizational disidentification were also found to play important roles in the disclosure of organizational secrets. This dissertation contributes to the emerging field of organizational secrecy by integrating key informational and social perspectives to address concerns regarding secret protection in organizations.

    A New Method for Protecting Interrelated Time Series with Bayesian Prior Distributions and Synthetic Data

    Get PDF
    Organizations disseminate statistical summaries of administrative data via the Web for unrestricted public use. They balance the trade-off between confidentiality protection and inference quality. Recent developments in disclosure avoidance techniques include the incorporation of synthetic data, which capture the essential features of underlying data by releasing altered data generated from a posterior predictive distribution. The United States Census Bureau collects millions of interrelated time series micro-data that are hierarchical and contain many zeros and suppressions. Rule-based disclosure avoidance techniques often require the suppression of count data for small magnitudes and the modification of data based on a small number of entities. Motivated by this problem, we use zero-inflated extensions of Bayesian Generalized Linear Mixed Models (BGLMM) with privacy-preserving prior distributions to develop methods for protecting and releasing synthetic data from time series about thousands of small groups of entities without suppression based on the magnitudes or number of entities. We find that as the prior distributions of the variance components in the BGLMM become more precise toward zero, confidentiality protection increases and inference quality deteriorates. We evaluate our methodology using a strict privacy measure, empirical differential privacy, and a newly defined risk measure, Probability of Range Identification (PoRI), which directly measures attribute disclosure risk. We illustrate our results with the U.S. Census Bureau's Quarterly Workforce Indicators.
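    The trade-off the abstract describes (priors on the variance components concentrated toward zero give more protection but worse inference) can be shown in miniature with a deliberately simplified stand-in for the BGLMM. The zero-inflated Poisson toy below, the function name synthesize_zip, and the prior_scale knob are illustrative assumptions, not the Census Bureau methodology.

        import numpy as np

        def synthesize_zip(counts, group_ids, prior_scale=0.1, seed=42):
            """Release synthetic counts from a toy zero-inflated Poisson
            with group-level random effects.

            `prior_scale` mimics a prior on the random-effect variance
            component: as it shrinks toward zero, group effects flatten,
            so the synthetic data reveal less about any one group (more
            protection) but track real group-to-group variation less
            faithfully (worse inference). Toy stand-in, not the paper's BGLMM.
            """
            rng = np.random.default_rng(seed)
            counts = np.asarray(counts)
            group_ids = np.asarray(group_ids)
            pi_zero = np.mean(counts == 0)  # crude estimate of structural zeros
            positive = counts[counts > 0]
            grand_rate = positive.mean() if positive.size else 1.0
            synthetic = np.empty_like(counts)
            for g in np.unique(group_ids):
                mask = group_ids == g
                u_g = rng.normal(0.0, prior_scale)   # group effect, shrunk toward 0
                lam = grand_rate * np.exp(u_g)       # group-specific Poisson rate
                draws = rng.poisson(lam, size=mask.sum())
                draws[rng.random(mask.sum()) < pi_zero] = 0  # zero inflation
                synthetic[mask] = draws
            return synthetic

    Setting prior_scale near zero synthesizes every group from nearly the same rate, so no small group stands out (attribute disclosure risk drops) while group-level estimates degrade, which is the protection-versus-utility behavior the abstract reports for the tightening priors.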