Hofstra Law's Scholarship
Is Art Even Worth It?: Acknowledging Art's Value in Regulation and Sentencing to Prevent Art Dealers from Getting off Scot-Free
Defining the Field of Wellness Law
Some lawyers practice wellness law without knowing what it is and how it differs from more recognized fields of practice such as health law, public health law, and medical malpractice. This Article defines the field of wellness law using, in a prescriptive manner, the organizational framework that identifies the common and distinctive patterns in wellness and the law that surrounds it. This examination entails reviewing statutes and cases that differentiate between conventional health care and products or services outside of it to identify the core problems that are common and unique to wellness law. Those core problems are twofold. First, whether an imminent disease or illness is at issue when the consumer seeks to improve their wellbeing. If so, then it is not wellness the consumer seeks, but health care. Second, whether the individual's choice to engage in wellness activities is undermined by an intervening authority or situational circumstance. If so, then it is not wellness. Wellness is premised on an empowered individual seeking self-actualization, not a vulnerable patient under threat by illness or injury. The policy trade-offs, values, and interests present in these core wellness law problems involve balancing autonomy striving for self-actualization with paternalistic responses to protect or advance individual health and safety. Defining wellness law gives the field coherence and offers a guide for courts and legislatures when confronted with a wellness law problem. It also encourages legal scholars to further demarcate the area of wellness law as a field with distinct practical and regulatory issues.
Private Equity and A.I. in Healthcare: A Perilous Pairing for Patient Privacy
The American healthcare system faces two trends that not only threaten the quality of health care, but which, combined, exacerbate another threat: the threat to patient privacy. The first of the two trends is the rapid expansion of private-equity acquisitions in the healthcare sector in the last decade, continuing the trend of medicine's corporatization. Decisions increasingly are being made by investors motivated by short-term profit, rather than by doctors motivated by clinical care, compromising healthcare delivery. The second trend is the simultaneous incursion in the healthcare industry of A.I.-supported technology, upon which private equity firms have incentives to rely. Under private equity, which prioritizes financial returns, A.I. use intensifies harm to the quality of care, with little industry accountability. The same forces that threaten the quality of care combine to exacerbate the threat to patient privacy. Already, healthcare data privacy lags under the current regulatory system that relies on the Health Insurance Portability and Accountability Act's (HIPAA) quaint custodians-of-record framework. The private-equity-A.I. era amplifies this vulnerability, and while current law enforcement efforts have focused on private equity's anticompetitive effects, attention has not yet turned to the privacy harms from the combination of private equity and A.I. in health care.
Part I describes how the surge in both the private equity model and A.I.-supported technology threatens the quality of health care. Part II describes the other threat: the combined effect of the dual surge that exacerbates the threat to patient privacy. This Part examines how A.I.'s deployment under the private-equity model of healthcare increases patient privacy vulnerabilities. Part III examines law enforcement's response to the current lack of privacy protections for healthcare data that is already lagging under HIPAA's outdated architecture. Efforts so far have focused on private equity's acquisitions of healthcare systems as violating antitrust laws, with attention beginning to turn to transparency and harm to patient quality of care. However, enforcement efforts have not yet turned to the harms to patient privacy from the private-equity model of healthcare. Part IV of this Article proposes riding the brewing momentum to regulate private equity to include protections for health data privacy when using A.I. systems. Rather than waiting for an overhaul of sectoral healthcare data regulation or the creation of a comprehensive privacy statute, some protections for patient privacy can be carved out now by requiring private equity to provide pre-merger reporting of its data governance plan for acquisitions of healthcare systems in which A.I. systems are to be deployed, by expanding upon the already-existing reporting requirement under the Hart-Scott-Rodino Act to the Federal Trade Commission (FTC). A pre-merger obligation serves two purposes: (1) it prevents the private equity firm from escaping liability and transferring the liability to the healthcare entity it creates; and (2) it removes the patient's burden under traditional privacy frameworks that require the consumer's notice and consent. The pre-merger report should be followed up with compliance reviews and investigations by the FTC to ensure that even after a merger has been effected, the private equity firm will continue to be held to the terms of the data governance report. This Article proposes that the private-equity firm should report its data governance plans to maintain confidentiality of data, transparency and accountability, and quality management. The data-governance reporting requirement is not necessarily exclusive to private equity or to A.I., so carving out an area of regulatory protection in a high-profile technology category within the healthcare industry lays the groundwork for expanding data privacy protections to eventually replace the current outdated scheme and can translate into part of a comprehensive privacy statute. Requiring enhanced transparency in private-equity use of A.I. provides a starting point to mitigate not only the anticompetitive harms law enforcement already has on its radar, but also the deleterious effects of imperiled patient privacy.
Total Governance
Everyone has values which conflict with the sheer maximization of profit. When those conflicts occur, virtually no one will consistently choose profit over conflicting values, especially when the potential profit is small or conflicts with other values, such as promoting the common good or preserving a functioning ecosphere. This Article explores a multi-stakeholder approach that will increase the quantum of humanity in the governance of American large public companies in the era of social media and online communication. We dub this crowd-based strategy of steering public corporations “total governance.” Total governance recognizes that individuals can engage with public corporations from multiple angles, as investors, employees, online activists, community members, and consumers. While we often think in terms of monolithic roles, imputing interests to shareholders, for example, on the assumption that shareholders are only shareholders, this is never true. Every shareholder, whether human or institutional, also inhabits other roles and has other interests and values. Moreover, role-players can ally with other role-players with the same or different stakes in the firm to participate in the governance of the corporation.
The diffusion of technologies and social media, facilitating online communications on a global scale, enables coordination and stakeholder coalitions. As digitally native Millennials and GenZ’ers move into positions of influence, the system of passive and disenfranchised human stakeholders is about to change. Human stakeholders of different categories can act collectively to pursue and promote values that resonate with them as shareholders, employees, customers, members of communities, or inhabitants of a shared planet.
As human shareholders join forces with employees, consumers or others, they will overcome the two key myths of traditional taxonomy of stakeholders’ interests. First, we often assume that the interests of individuals in a corporation’s performance and practices depend solely on the type of stake that they have in a corporation. According to this myth, shareholders have a monolithic interest as shareholders, generally assumed to be maximization of the economic value of their investment. But human shareholders hold and rank numerous values that speak to their human nature: some shareholders are prosocial, some are greedy, some prioritize the environment, some care more about social justice. Second, it is a myth that stakeholders of different categories (e.g., shareholders, consumers, employees, etc.) carry different interests defined by their stakeholder role. On the contrary: many individuals occupy multiple roles with respect to particular corporations (employees, for example, often are also shareholders, consumers, neighbors and potential pensioners). Moreover, individuals have interests and commitments beyond their roles. Some employees, shareholders, and consumers prioritize social justice over anything else. Other employees, shareholders, and consumers rank first the environment, economic-driven choices, or other values.
This Article explores an innovative corporate governance paradigm that rejects fictitious assumptions about shareholders and other stakeholders, considers how online communication facilitates cooperation across stakeholders of different categories, and recognizes that the key common denominator of human stakeholders is their humanity. Human stakeholders can coordinate on a global scale to exert leverage on corporations to make them answerable to human beings. They can cooperate and coordinate across stakeholder categories to pursue common goals that depend on the values they prioritize, not on the stakeholder role to which they are assigned.
Diamonds Are Forever the Consumer's Worst Enemy: The Failure of the Kimberley Process to End the Trade of Conflict Diamonds
A Systems Approach to Shedding Sunlight on A.I. Black Boxes
A substantial body of literature has emerged around concerns that machine learning and artificial intelligence systems are opaque, or black boxes. The black box nature of A.I.-powered services and applications has resulted in alarming risks in social life, including insecurity, mistrust, lack of accountability, and exacerbated bias and discrimination. Despite the call to open the black boxes, corresponding legal and regulatory measures tend to run aground due to their infeasibility, inefficacy, and ambiguity. This Article offers a unique perspective on the A.I. black box problem. Using systems theory as a heuristic tool, this Article views A.I. as a law-related system and A.I. regulations as interactions of many subsystems. Each system has its own purposes and operates according to its own logic. This Article argues that current Explainable A.I. (XAI) regulations fail because they do not adequately consider the relationships between these subsystems or create effective interactions. Drawing on global examples, the Article suggests that rather than merely attempting to open the black boxes, XAI regulations should focus on fostering dynamic, coherent interactions that align with the overall objectives of the A.I. system. Based on the observations of systems thinking in XAI regulation, this Article concludes that XAI regulation needs to establish clear and compatible communication frameworks. It proposes practical regulatory techniques, such as counterfactual explanations, controlled disclosures, and benchmarking, to enhance transparency and improve interactions between A.I. subsystems, thereby ensuring the robust operation of the entire A.I. system.
Are A.I. Lawyers a Legal Product or Legal Service?: Why Current UPL Laws Are Not up to the Task of Regulating Autonomous A.I. Actors
The rise of automation, particularly with the advent of large language models, presents a significant potential for the legal profession. While automation has traditionally focused on manual and repetitive tasks, A.I.'s evolution now allows machines to handle complex, thought-intensive work involving decision-making. This shift underscores a pressing issue: the American legal system lacks a clear definition of the practice of law. This becomes especially critical as A.I., an autonomous actor, begins to take on roles that were previously exclusive to human practitioners. One company that exemplifies the advanced capabilities of modern A.I.-powered technology is Pactum AI. Pactum's autonomous negotiation software is already in use by major corporations like Walmart and Maersk to automate their contract negotiation processes, from vendor selection to making offers and counter-offers. The capabilities of Pactum AI demonstrate the potential for A.I. to handle complex legal tasks traditionally performed by humans. However, they also underscore the regulatory challenges posed by these technologies. The current UPL laws, designed with human actors in mind, are ill-equipped to address the nuances of autonomous legal tools. This Article puts forward three key recommendations for the effective regulation of A.I.-powered tools in the legal space. First, it suggests that regulators should facilitate collaboration between attorneys and A.I. developers, ensuring that the former can work with A.I. without risking UPL violations. Second, it stresses the importance of establishing a clear boundary between legal and non-legal work for autonomous systems. Finally, it proposes that regulations should strike a balance between consumer protection and the promotion of innovation in the legal tech industry. By addressing these areas, the legal profession can help A.I. technologies successfully and safely integrate into society while upholding the integrity and effectiveness of legal practice.
The Supreme Court’s Specious Code of Conduct
Congressionally imposed ethics enforcement would enhance the Supreme Court's exercise of constitutional powers by reducing the interference of improper personal motives during its factual and legal determinations. The Court has long recognized the self-evident truth that no man can be a judge in his own case. The Framers recognized the limits inherent in human nature: humans are not angels. Accordingly, the Constitutional scheme of separated powers, as elucidated by Madison, [does] not mean that these departments ought to have no partial agency in, or no control over, the acts of each other. Congress rightly has the constitutional means and motive to promote due process by bolstering impartiality and the appearance thereof. It can do so while also strengthening the Court's proper constitutional role. Congress can and should enact an ethics enforcement mechanism that brings the Justices within a system that checks their individual interests. The Supreme Court's ersatz Code, in its present form, is manifestly insufficient. With permissive language and no meaningful enforcement mechanism, the Code serves only as a clever loophole, designed to suppress public scrutiny of the Court, without enacting real change.