
    Privacy as a Public Good

    Privacy is commonly studied as a private good: my personal data is mine to protect and control, and yours is yours. This conception of privacy misses an important component of the policy problem. An individual who is careless with data exposes not only extensive information about herself, but about others as well. The negative externalities imposed on nonconsenting outsiders by such carelessness can be productively studied in terms of welfare economics. If all relevant individuals maximize private benefit, and expect all other relevant individuals to do the same, neoclassical economic theory predicts that society will achieve a suboptimal level of privacy. This prediction holds even if all individuals cherish privacy with the same intensity. As the theoretical literature would have it, the struggle for privacy is destined to become a tragedy. But according to the experimental public-goods literature, there is hope. As in real life, people in experiments cooperate in groups at rates well above those predicted by neoclassical theory. Groups can be aided in their struggle to produce public goods by institutions, such as communication, framing, or sanction. With these institutions, communities can manage public goods without heavy-handed government intervention. Legal scholarship has not fully engaged this problem in these terms. In this Article, we explain why privacy has aspects of a public good, and we draw lessons from both the theoretical and the empirical literature on public goods to inform the policy discourse on privacy.
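
    The "tragedy" prediction above follows from the structure of a public-goods game. The sketch below is a minimal illustration of that structure with made-up numbers, not anything drawn from the Article: when the marginal per-capita return on contributions is below 1, each individual does better by free-riding, even though universal contribution would leave the whole group better off.

```python
# Minimal public-goods game sketch; all parameter values are illustrative assumptions.

def payoff(own_contribution, total_contribution, endowment=10, mpcr=0.4):
    """Payoff = what you keep + your per-capita share of the multiplied public pot.
    mpcr < 1 makes free-riding individually rational; n * mpcr > 1 makes full
    contribution socially optimal, which is the tension behind the 'tragedy'."""
    return (endowment - own_contribution) + mpcr * total_contribution

n, endowment = 4, 10
print(payoff(0, 0))                      # 10.0  everyone free-rides (neoclassical prediction)
print(payoff(endowment, endowment * n))  # 16.0  everyone contributes (cooperative outcome)
print(payoff(endowment, endowment))      #  4.0  lone contributor among free-riders
```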

    Randomised controlled trials of complex interventions and large-scale transformation of services

    Complex interventions and large-scale transformations of services are necessary to meet the health-care challenges of the 21st century. However, the evaluation of these types of interventions is challenging and requires methodological development. Innovations such as cluster randomised controlled trials, stepped-wedge designs, and non-randomised evaluations provide options to meet the needs of decision-makers. Adoption of theory and logic models can help clarify causal assumptions, and process evaluation can assist in understanding delivery in context. Issues of implementation must also be considered throughout intervention design and evaluation to ensure that results can be scaled for population benefit. Relevance requires evaluations conducted under real-world conditions, which in turn requires a pragmatic attitude to design. The increasing complexity of interventions and evaluations threatens the ability of researchers to meet the needs of decision-makers for rapid results. Improvements in efficiency are thus crucial, with electronic health records offering significant potential.
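
    One design innovation mentioned above, the stepped-wedge cluster design, is easy to picture as a rollout schedule: every cluster begins in the control condition and crosses over to the intervention at a different time point. The sketch below is a simplified illustration under stated assumptions, not a design tool from the paper.

```python
# Sketch of a stepped-wedge rollout matrix: rows are clusters, columns are time steps,
# 0 = control condition, 1 = intervention. Assumes evenly spread crossover points.

import numpy as np

def stepped_wedge_schedule(n_clusters: int, n_steps: int) -> np.ndarray:
    schedule = np.zeros((n_clusters, n_steps), dtype=int)
    # Spread crossover points across the available steps (order would be randomized in practice).
    crossover = np.linspace(1, n_steps - 1, n_clusters).round().astype(int)
    for cluster, step in enumerate(crossover):
        schedule[cluster, step:] = 1
    return schedule

print(stepped_wedge_schedule(n_clusters=4, n_steps=5))
# [[0 1 1 1 1]
#  [0 0 1 1 1]
#  [0 0 0 1 1]
#  [0 0 0 0 1]]
```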

    Sludge and Ordeals

    Is there an argument for behaviorally informed deregulation? In 2015, the United States government imposed 9.78 billion hours of paperwork burdens on the American people. Many of these hours are best categorized as “sludge,” understood as friction, reducing access to important licenses, programs, and benefits. Because of the sheer costs of sludge, rational people are effectively denied life-changing goods and services. The problem is compounded by the existence of behavioral biases, including inertia, present bias, and unrealistic optimism. A serious deregulatory effort should be undertaken to reduce sludge through automatic enrollment, greatly simplified forms, and reminders. At the same time, sludge can promote legitimate goals. First, it can protect program integrity, which means that policymakers might have to make difficult tradeoffs between (1) granting benefits to people who are not entitled to them and (2) denying benefits to people who are entitled to them. Second, it can overcome impulsivity, recklessness, and self-control problems. Third, it can prevent intrusions on privacy. Fourth, it can serve as a rationing device, ensuring that benefits go to people who most need them. Fifth, it can help public officials to acquire valuable information, which they can use for important purposes. In most cases, however, these defenses of sludge turn out to be far more attractive in principle than in practice. For sludge, a form of cost-benefit analysis is essential, and it will often demonstrate the need for a neglected form of deregulation: sludge reduction. For both public and private institutions, “Sludge Audits” should become routine, and they should provide a foundation for behaviorally informed deregulation. Various suggestions are offered for new action by the Office of Information and Regulatory Affairs, which oversees the Paperwork Reduction Act; for courts; and for Congress.
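
    To give a sense of why the abstract calls for cost-benefit analysis of sludge, the back-of-the-envelope sketch below monetizes the 9.78 billion paperwork hours cited above at an assumed opportunity cost of time; the wage figure is an illustrative assumption, not a number from the Article.

```python
# Back-of-the-envelope monetization of paperwork burden for a Sludge Audit-style comparison.

PAPERWORK_HOURS = 9.78e9       # hours cited in the abstract (United States, 2015)
ASSUMED_WAGE = 27.0            # illustrative opportunity cost per hour (assumption)

time_cost = PAPERWORK_HOURS * ASSUMED_WAGE
print(f"Monetized time cost: ${time_cost / 1e9:,.0f} billion per year")
# Roughly $264 billion per year under these assumptions, before counting the benefits
# forgone by eligible people who abandon the forms partway through.
```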

    Not So Private

    Federal and state laws have long attempted to strike a balance between protecting patient privacy and health information confidentiality on the one hand and supporting important uses and disclosures of health information on the other. To this end, many health laws restrict the use and disclosure of identifiable health data but support the use and disclosure of de-identified data. The goal of health data de-identification is to prevent or minimize informational injuries to identifiable data subjects while allowing the production of aggregate statistics that can be used for biomedical and behavioral research, public health initiatives, informed health care decision making, and other important activities. Many federal and state laws assume that data are de-identified when direct and indirect demographic identifiers such as names, user names, email addresses, street addresses, and telephone numbers have been removed. An emerging reidentification literature shows, however, that purportedly de-identified data can—and increasingly will—be reidentified. This Article responds to this concern by presenting an original synthesis of illustrative federal and state identification and de-identification laws that expressly or potentially apply to health data; identifying significant weaknesses in these laws in light of the developing reidentification literature; proposing theoretical alternatives to outdated identification and de-identification standards, including alternatives based on the theories of evolving law, nonreidentification, non-collection, non-use, non-disclosure, and nondiscrimination; and offering specific, textual amendments to federal and state data protection laws that incorporate these theoretical alternatives.
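
    The reidentification concern above is easiest to see with a toy example. The sketch below uses fabricated records, not data from the Article, to show that stripping direct identifiers still leaves quasi-identifier combinations that single people out and can be matched against an outside source such as a voter roll.

```python
# Toy re-identification sketch: direct identifiers are already removed, yet the
# combination (zip, birth year, sex) is unique for some records.

from collections import Counter

records = [
    {"zip": "53715", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "53715", "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
    {"zip": "53703", "birth_year": 1984, "sex": "F", "diagnosis": "flu"},
    {"zip": "53715", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

quasi_id = lambda r: (r["zip"], r["birth_year"], r["sex"])
counts = Counter(quasi_id(r) for r in records)
unique = [r for r in records if counts[quasi_id(r)] == 1]

print(f"{len(unique)} of {len(records)} records are unique on (zip, birth year, sex)")
# Any outside dataset carrying the same three fields re-identifies those records
# and, with them, their diagnoses.
```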

    Attacking and Defending Printer Source Attribution Classifiers in the Physical Domain

    The security of machine learning classifiers has received increasing attention in recent years. In forensic applications, guaranteeing the security of the tools investigators rely on is crucial, since the gathered evidence may be used to decide about the innocence or the guilt of a suspect. Several adversarial attacks have been proposed to assess such security, with a few works focusing on transferring such attacks from the digital to the physical domain. In this work, we focus on physical-domain attacks against source attribution of printed documents. We first show how a simple reprinting attack may be sufficient to fool a model trained on images that were printed and scanned only once. Then, we propose a hardened version of the classifier trained on the reprinted attacked images. Finally, we attack the hardened classifier with several attacks, including a new attack based on the Expectation Over Transformation approach, which finds the adversarial perturbations by simulating the physical transformations occurring when the image attacked in the digital domain is printed again. The results demonstrate a good capability of the hardened classifier to resist attacks carried out in the physical domain.
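
    The Expectation Over Transformation (EOT) idea referenced above optimizes a perturbation so that it stays adversarial in expectation over a distribution of transformations, here standing in for the print-and-scan channel. The sketch below is a simplified rendering of that idea, not the authors' implementation; `classifier`, the noise model, and all hyperparameters are placeholder assumptions.

```python
# EOT-style attack sketch (PyTorch). The print-scan simulation is a crude stand-in:
# a real one would model halftoning, geometric distortion, blur, etc.

import torch
import torch.nn.functional as F

def random_print_scan(x):
    gain = 1.0 + 0.05 * torch.randn(1)    # random contrast change
    offset = 0.02 * torch.randn(1)        # random brightness shift
    noise = 0.01 * torch.randn_like(x)    # print/scan noise
    return torch.clamp(gain * x + offset + noise, 0.0, 1.0)

def eot_attack(classifier, x, true_label, eps=0.03, steps=100, samples=8, lr=0.01):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Expected loss over sampled print-scan transformations of the attacked image.
        loss = torch.stack([
            F.cross_entropy(classifier(random_print_scan(x + delta)), true_label)
            for _ in range(samples)
        ]).mean()
        opt.zero_grad()
        (-loss).backward()            # ascend the loss to move away from the true class
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the perturbation small
    return (x + delta).detach()
```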

    Beyond Bitcoin: Issues in Regulating Blockchain Transactions

    The buzz surrounding Bitcoin has reached a fever pitch. Yet in academic legal discussions, disproportionate emphasis is placed on bitcoins (that is, virtual currency), and little mention is made of blockchain technology—the true innovation behind the Bitcoin protocol. Simply put, blockchain technology solves an elusive networking problem by enabling “trustless” transactions: value exchanges over computer networks that can be verified, monitored, and enforced without central institutions (for example, banks). This has broad implications for how we transact over electronic networks. This Note integrates current research from leading computer scientists and cryptographers to elevate the legal community’s understanding of blockchain technology and, ultimately, to inform policymakers and practitioners as they consider different regulatory schemes. An examination of the economic properties of a blockchain-based currency suggests the technology’s true value lies in its potential to facilitate more efficient digital-asset transfers. For example, applications of special interest to the legal community include more efficient document and authorship verification, title transfers, and contract enforcement. Though a regulatory patchwork around virtual currencies has begun to form, its careful analysis reveals much uncertainty with respect to these alternative applications.
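
    The "trustless" verification described above rests on hash-chaining: each block commits to the hash of its predecessor, so any later alteration of the history is detectable by anyone holding the chain. The sketch below illustrates only that property with made-up transactions; consensus (mining/proof-of-work) and signatures are deliberately omitted, so this is not the Bitcoin protocol itself.

```python
# Minimal hash-chained ledger sketch: the tamper-evidence idea only.

import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                          # True
chain[0]["transactions"][0]["amount"] = 500   # rewrite history
print(verify(chain))                          # False: the stored prev_hash no longer matches
```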

    Contracts Ex Machina

    Smart contracts are self-executing digital transactions using decentralized cryptographic mechanisms for enforcement. They were theorized more than twenty years ago, but the recent development of Bitcoin and blockchain technologies has rekindled excitement about their potential among technologists and industry. Startup companies and major enterprises alike are now developing smart contract solutions for an array of markets, purporting to offer a digital bypass around traditional contract law. For legal scholars, smart contracts pose a significant question: Do smart contracts offer a superior solution to the problems that contract law addresses? In this article, we aim to understand both the potential and the limitations of smart contracts. We conclude that smart contracts offer novel possibilities, may significantly alter the commercial world, and will demand new legal responses. But smart contracts will not displace contract law. Understanding why not brings into focus the essential role of contract law as a remedial institution. In this way, smart contracts actually illuminate the role of contract law more than they obviate it.
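
    To make the "self-executing" idea concrete, the sketch below expresses a toy escrow as ordinary code; real smart contracts run on a blockchain virtual machine (for example, Ethereum's), and the oracle callable here is a placeholder for an external data feed, not anything described in the article.

```python
# Toy "self-executing" escrow: settlement follows mechanically from coded terms,
# which is precisely why there is no built-in remedy when the terms are wrong.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    amount: float
    delivery_confirmed: Callable[[], bool]   # placeholder "oracle" for an off-chain fact
    settled: bool = False

    def settle(self) -> str:
        if self.settled:
            return "already settled"
        self.settled = True
        payee = self.seller if self.delivery_confirmed() else self.buyer
        return f"{self.amount} released to {payee}"

contract = EscrowContract("alice", "bob", 10.0, delivery_confirmed=lambda: True)
print(contract.settle())   # "10.0 released to bob"
```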

    A comparison of processing techniques for producing prototype injection moulding inserts.

    This project involves the investigation of processing techniques for producing low-cost moulding inserts used in the particulate injection moulding (PIM) process. Prototype moulds were made from both additive and subtractive processes as well as a combination of the two. The general motivation for this was to reduce the entry cost for users considering PIM. PIM cavity inserts were first made by conventional machining from a polymer block using the Pocket NC desktop mill. PIM cavity inserts were also made by fused filament deposition modelling using the Tiertime UP Plus 3D printer. The injection moulding trials revealed surface-finish and part-removal defects. The feedstock was a titanium metal blend, which is brittle in comparison to commodity polymers; this, in combination with the mesoscale features, small cross-sections, and complex geometries, was considered the main source of the problems. For both processing methods, fixes were identified and implemented to test this explanation. These consisted of a blended approach in which the additive and subtractive processes were used in combination. The parts produced by the three processing methods are investigated and their respective merits and issues are discussed.

    Reducing risk in pre-production investigations through undergraduate engineering projects.

    This poster is the culmination of final-year Bachelor of Engineering Technology (B.Eng.Tech) student projects in 2017 and 2018. The B.Eng.Tech is a level-seven qualification that aligns with the Sydney Accord for a three-year engineering degree and hence is internationally benchmarked. The enabling mechanism of these projects is the industry connectivity that creates real-world projects and highlights the benefits of investigating processes at the technologist level. The methodologies we use are basic and transparent, with enough depth of technical knowledge to ensure the industry partners gain from the collaboration. The process we use minimizes the disconnect between the student and the industry supervisor while maintaining the academic freedom of the student and respecting the commercial sensitivities of the supervisor. The general motivation for this approach is to reduce industry's cost of entry to new technologies, thereby reducing risk to core business and shareholder profits. The poster presents several images and interpretive dialogue to explain the positive and negative aspects of the student process.

    A supervised method for finding discriminant variables in the analysis of complex problems: case studies in Android security and source printer attribution

    Advisors: Ricardo Dahab and Anderson de Rezende Rocha. Master's dissertation (Mestre em Ciência da Computação), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Solving a problem where many components interact and affect results simultaneously requires models that are not always treatable by traditional analytic methods. Although in many cases the result can be predicted with excellent accuracy by machine learning algorithms, interpreting the phenomenon requires understanding which variables are the most important and how much they contribute to the result. This dissertation presents a method in which the discriminant variables are identified through an iterative ranking process: in each iteration a classifier is trained and validated, the variables that contribute least to the result are discarded, and the impact of this reduction on the classification metrics is evaluated. Classification uses the Random Forest algorithm, and the discarding decision uses its feature importance property. The method was applied to two studies of complex systems of different natures, which gave rise to the articles presented here. The first article analyses the relations between malware and the operating system resources they request within an ecosystem of Android applications. To carry out this study, data structured according to an ontology defined in the article (OntoPermEco) were captured from 4,570 applications (2,150 malware, 2,420 benign). The complex model produced a graph of about 55,000 nodes and 120,000 edges, which was transformed using the Bag of Graphs technique into feature vectors of 8,950 elements per application. Using only the data available in the application's manifest, the model achieved 88% accuracy and 91% precision in predicting whether an application's behavior is malicious, and the proposed method identified 24 features relevant to classifying and identifying malware families, corresponding to only 70 nodes of the entire ecosystem graph. The second article concerns the identification of regions in a printed document that contain information relevant to attributing the laser printer that printed it. Feature vectors were obtained by applying the Convolutional Texture Gradient Filter (CTGF) texture descriptor to 600 DPI scanned images of 1,200 documents printed on ten printers; the discriminant-variable method achieved average accuracy and precision of 95.6% and 93.9%, respectively, in source printer attribution. After the source printer was assigned to each document, eight of the ten printers allowed the identification of discriminant variables uniquely associated with each of them, making it possible to visualize in the document image the regions of interest for forensic analysis. Both articles met the objective of reducing a complex system to an interpretable, streamlined model, demonstrating the effectiveness of the proposed method in the analysis of two problems from different areas (application security and digital forensics) with complex models and entirely different representation structures.
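
    The iterative elimination loop described in the abstract is close in spirit to recursive feature elimination driven by Random Forest feature importances. The sketch below is a reconstruction on synthetic data, not code or data from the dissertation: at each round a forest is trained, accuracy is cross-validated, and the least important features are dropped.

```python
# Sketch of ranking-by-elimination with Random Forest feature importances (synthetic data).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=50, n_informative=8, random_state=0)
features = np.arange(X.shape[1])

while len(features) > 5:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(clf, X[:, features], y, cv=5).mean()   # metric tracked per round
    clf.fit(X[:, features], y)
    order = np.argsort(clf.feature_importances_)                 # least important first
    n_drop = max(1, len(features) // 5)                          # discard the weakest 20%
    print(f"{len(features):2d} features -> cross-validated accuracy {acc:.3f}")
    features = features[order[n_drop:]]

print("discriminant features kept:", sorted(features.tolist()))
```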