    The Auditing Imperative for Automated Hiring

    The high bar of proof to demonstrate either a disparate treatment or disparate impact cause of action under Title VII of the Civil Rights Act, coupled with the “black box” nature of many automated hiring systems, renders the detection and redress of bias in such algorithmic systems difficult. This Article, with contributions at the intersection of administrative law, employment & labor law, and law & technology, makes the central claim that the automation of hiring both facilitates and obfuscates employment discrimination. That phenomenon and the deployment of intellectual property law as a shield against the scrutiny of automated systems combine to form an insurmountable obstacle for disparate impact claimants. To guard against the identified “bias in, bias out” phenomenon associated with automated decision-making, I argue that the employer’s affirmative duty of care as posited by other legal scholars creates “an auditing imperative” for algorithmic hiring systems. This auditing imperative mandates both internal and external audits of automated hiring systems, as well as record-keeping initiatives for job applications. Such audit requirements have precedent in other areas of law, as they are not dissimilar to the Occupational Safety and Health Administration (OSHA) audits in labor law or the Sarbanes-Oxley Act audit requirements in securities law. I also propose that employers that have subjected their automated hiring platforms to external audits could receive a certification mark, “the Fair Automated Hiring Mark,” which would serve to positively distinguish them in the labor market. Labor law mechanisms such as collective bargaining could be an effective approach to combating the bias in automated hiring by establishing criteria for the data deployed in automated employment decision-making and creating standards for the protection and portability of said data.
    The Article concludes by noting that automated hiring, which captures a vast array of applicant data, merits greater legal oversight given the potential for “algorithmic blackballing,” a phenomenon that could continue to thwart many applicants’ future job bids.

    A Systematic Review of Research Studies Examining Telehealth Privacy and Security Practices Used By Healthcare Providers

    The objective of this systematic review was to examine papers in the United States that report current privacy and security practices when telehealth technologies are used by healthcare providers. A literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P). PubMed, CINAHL, and INSPEC from 2003–2016 were searched and returned 25,404 papers (after duplicates were removed). Inclusion and exclusion criteria were strictly followed to examine title, abstract, and full text, yielding 21 published papers that reported on privacy and security practices used by healthcare providers using telehealth. Data on confidentiality, integrity, privacy, informed consent, access control, availability, retention, encryption, and authentication were searched and retrieved from the papers examined. Papers were selected by two independent reviewers per the inclusion/exclusion criteria and, where there was disagreement, a third reviewer was consulted. The percentage of agreement and Cohen’s kappa were 99.04% and 0.7331, respectively. The papers reviewed ranged from 2004 to 2016 and included several types of telehealth specialties. Sixty-seven percent were policy-type studies, and 14 percent were survey/interview studies. There were no randomized controlled trials. Based upon the results, we conclude that more studies are needed with specific information about the use of privacy and security practices when using telehealth technologies, as well as studies that examine patient and provider preferences on how data is kept private and secure during and after telehealth sessions.
    Keywords: Computer security, Health personnel, Privacy, Systematic review, Telehealth

    National CPE Curriculum: a pathway to excellence


    The Promise and The Peril: Artificial Intelligence and Employment Discrimination

    Artificial intelligence (“AI”) is undeniably transforming the workplace, though many implications remain unknown. Employers increasingly rely on algorithms to determine who gets interviewed, hired, promoted, developed, disciplined, or fired. If appropriately designed and applied, AI promises to help workers find their most rewarding jobs, match companies with their most valuable and productive employees, and advance diversity, inclusion, and accessibility in the workplace. Notwithstanding its positive impacts, however, AI poses new perils for employment discrimination, especially when designed or used improperly. This Article examines the interaction between AI and federal employment antidiscrimination law. This Article explores the legal landscape, including responses taken at the federal level, as well as state, local, and global legislation. Next, this Article examines a few legislative proposals designed to further regulate AI, as well as several non-legislative proposals. In the absence of a comprehensive federal framework, this Article outlines and advances a deregulatory approach to using AI in the context of employment antidiscrimination that will maintain and spur further innovation. Against the backdrop of the deregulatory approach, this Article concludes by discussing best practices to guide employers in using AI for employment decisions.

    CAFCASS operating framework


    Security and privacy requirements for a multi-institutional cancer research data grid: an interview-based study

    Abstract
    Background: Data protection is important for all information systems that deal with human-subjects data. Grid-based systems – such as the cancer Biomedical Informatics Grid (caBIG) – seek to develop new mechanisms to facilitate real-time federation of cancer-relevant data sources, including sources protected under a variety of regulatory laws, such as HIPAA and 21CFR11. These systems embody new models for data sharing, and hence pose new challenges to the regulatory community and to those who would develop or adopt them. These challenges must be understood by both systems developers and system adopters. In this paper, we describe our work collecting policy statements, expectations, and requirements from regulatory decision makers at academic cancer centers in the United States. We use these statements to examine fundamental assumptions regarding data sharing using data federations and grid computing.
    Methods: An interview-based study of key stakeholders from a sample of US cancer centers. Interviews were structured and used an instrument that was developed for the purpose of this study. The instrument included a set of problem scenarios – difficult policy situations that were derived during a full-day discussion of potentially problematic issues by a set of project participants with diverse expertise. Each problem scenario included a set of open-ended questions designed to elucidate stakeholder opinions and concerns. Interviews were transcribed verbatim and used for both qualitative and quantitative analysis. For quantitative analysis, data was aggregated at the individual or institutional unit of analysis, depending on the specific interview question.
    Results: Thirty-one individuals at six cancer centers were contacted to participate. Twenty-four of the thirty-one individuals responded to our request, yielding a total response rate of 77%. Respondents included IRB directors and policy-makers, privacy and security officers, directors of offices of research, information security officers, and university legal counsel. Nineteen total interviews were conducted over a period of 16 weeks. Respondents provided answers for all four scenarios (a total of 87 questions). Results were grouped by broad themes, including among others: governance, legal and financial issues, partnership agreements, de-identification, institutional technical infrastructure for security and privacy protection, training, risk management, auditing, IRB issues, and patient/subject consent.
    Conclusion: The findings suggest that with additional work, large-scale federated sharing of data within a regulated environment is possible. A key challenge is developing suitable models for authentication and authorization practices within a federated environment. Authentication – the recognition and validation of a person's identity – is in fact a global property of such systems, while authorization – the permission to access data or resources – mimics data sharing agreements in being best served at a local level. Nine specific recommendations result from the work and are discussed in detail. These include: (1) the necessity to construct separate legal or corporate entities for governance of federated sharing initiatives on this scale; (2) consensus on the treatment of foreign and commercial partnerships; (3) the development of risk models and risk management processes; (4) development of technical infrastructure to support the credentialing process associated with research including human subjects; (5) exploring the feasibility of developing large-scale, federated honest broker approaches; (6) the development of suitable, federated identity provisioning processes to support federated authentication and authorization; (7) community development of requisite HIPAA and research ethics training modules by federation members; (8) the recognition of the need for central auditing requirements and authority; and (9) use of two-protocol data exchange models where possible in the federation.

    Graduate School of Business Academic Catalog 2012 - 2013
