    Causal Reasoning: Charting a Revolutionary Course for Next-Generation AI-Native Wireless Networks

    Despite the basic premise that next-generation wireless networks (e.g., 6G) will be artificial intelligence (AI)-native, to date, most existing efforts remain either qualitative or incremental extensions of existing "AI for wireless" paradigms. Indeed, creating AI-native wireless networks faces significant technical challenges due to the limitations of data-driven, training-intensive AI. These limitations include the black-box nature of AI models, their curve-fitting nature, which can limit their ability to reason and adapt, their reliance on large amounts of training data, and the energy inefficiency of large neural networks. In response to these limitations, this article presents a comprehensive, forward-looking vision that addresses these shortcomings by introducing a novel framework for building AI-native wireless networks, grounded in the emerging field of causal reasoning. Causal reasoning, founded on causal discovery, causal representation learning, and causal inference, can help build explainable, reasoning-aware, and sustainable wireless networks. Toward fulfilling this vision, we first highlight several wireless networking challenges that can be addressed by causal discovery and representation, including ultra-reliable beamforming for terahertz (THz) systems, near-accurate physical twin modeling for digital twins, training data augmentation, and semantic communication. We showcase how incorporating causal discovery can assist in achieving dynamic adaptability, resilience, and cognition in addressing these challenges. Furthermore, we outline potential frameworks that leverage causal inference to achieve the overarching objectives of future-generation networks, including intent management, dynamic adaptability, human-level cognition, reasoning, and the critical element of time sensitivity.
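
    As a rough, hedged illustration of the causal-inference ideas this abstract invokes (not taken from the article itself), the Python sketch below simulates a toy structural causal model of a wireless link; the variable names and the linear relationships are assumptions made purely for illustration. It contrasts the observational, curve-fitting estimate of how transmit power relates to throughput with the interventional (do-operator) estimate that causal reasoning targets.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model (illustrative assumptions, not the article's model):
# channel_gain confounds tx_power and SNR because power control reacts to the channel.
channel_gain = rng.normal(0.0, 1.0, n)                    # exogenous channel quality
tx_power = 0.8 * channel_gain + rng.normal(0.0, 1.0, n)   # power-control policy
snr = 1.5 * tx_power + 2.0 * channel_gain + rng.normal(0.0, 0.5, n)
throughput = np.log1p(np.maximum(snr, 0.0))               # saturating rate curve

# Observational ("curve-fitting") estimate: regression slope of throughput on tx_power.
obs_slope = np.polyfit(tx_power, throughput, 1)[0]

# Interventional estimate: force tx_power = p (do-operator) and keep the rest of the SCM.
def mean_throughput_do(p):
    snr_do = 1.5 * p + 2.0 * channel_gain + rng.normal(0.0, 0.5, n)
    return np.log1p(np.maximum(snr_do, 0.0)).mean()

causal_effect = mean_throughput_do(1.0) - mean_throughput_do(0.0)

print(f"observational slope    : {obs_slope:.3f}")     # inflated by the channel confounder
print(f"effect of do(power +1) : {causal_effect:.3f}")  # causal effect on mean throughput
```

    The gap between the two numbers is the confounding that a purely correlational model absorbs into its fit; recovering the interventional quantity instead is the kind of capability the abstract attributes to causal discovery and inference.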

    Heated debates and cool analysis: thinking well about financial ethics

    Not for the first time, the banks and other financial institutions have got themselves – and the rest of us – into a mess, this time on an unprecedented financial and geographical scale. It is no surprise that opinions about causes, consequences and cures abound, with ethical issues, as well as technical and economic concerns, a focus of attention. It is to be hoped that useful lessons for the future will be learned. In this chapter, however, we step back from a direct engagement with the stated ills of the financial system itself, whether actual or perceived, chronic or acute. Our starting point is that crisis in the financial system not only makes us stop and think, but might also, particularly under conditions of moral panic, prevent us from thinking well. Our contention is that a further impediment to thinking well about financial crises is the lack of a substantial body of academic knowledge that might be termed ‘financial ethics’ – a corpus of well-developed conceptual insights and appropriate empirical evidence. We identify some of the reasons for this situation and proffer some suggestions regarding what might be done to remedy it – including the development of knowledge that is as relevant to everyday practices during periods of normality as it is to providing perspectives on crisis. The chapter is structured as follows: the next section provides a perspective on debate during times of crisis; the middle section seeks to explain why academic financial ethics is not a significant constituent element of debate on the financial crisis post-2007; and the final two main sections explore ways in which an academic agenda for financial ethics might be constructed. In a curious way this chapter echoes some of the themes, and especially the conclusion, of David Bevan’s chapter in this work (chapter 18), although the reasoning to the conclusion that finance ethics is an empty set follows a rather different Badiou-inspired path in chapter 18.

    Conflicting Requirements of Notice: The Incorporation of Rule 9(b) into the False Claims Act's First-to-File Bar

    Intended to prevent fraud against the government, the False Claims Act (“FCA”) contains a qui tam provision allowing private individuals, known as relators, to bring suits on behalf of the government and receive a portion of the damages. At the heart of the qui tam provision lies the first-to-file bar, which provides that, once a first relator has filed a complaint, subsequent relators are prohibited from coming forward with complaints based on the facts underlying the first relator’s pending action. A circuit split has recently emerged regarding the incorporation of Federal Rule of Civil Procedure 9(b)’s heightened pleading standard into the FCA’s first-to-file rule. Neither the circuit court decisions nor the relevant scholarship on this issue, however, has provided a comprehensive explanation as to why the government’s notice requirements should differ—if indeed they should differ at all—from defendants’ notice requirements for purposes of the first-to-file bar. This Note aims to fill that void and argues that, unlike garden-variety civil defendants in an adversarial context, the government maintains a partnership with the relator and has sufficient investigatory tools beyond the four corners of the complaint to assess adequately the merits of the relator’s allegations. Thus, the government does not require the heightened notice of Rule 9(b) at the first-to-file stage, and courts should ultimately adopt the approach employed by the First and D.C. Circuits in affording preclusive effect to first-filed FCA complaints, even if they are deficient under Rule 9(b).

    What Do Lawyers Contribute to Law and Economics?

    The law-and-economics movement has transformed the analysis of private law in the United States and, increasingly, around the world. As the field developed from 1970 to the early 2000s, scholars produced countless insights about the operation and effects of law and legal institutions. Throughout this period, the discipline of law-and-economics has benefited from a partnership between trained economists and academic lawyers. Yet the tools that are used derive primarily from economics and not law. A logical question thus demands attention: what role do academic lawyers play in law-and-economics scholarship? In this Essay, we offer an interpretive theory of the practice of law-and-economics scholarship over the past 50 years that recognizes the distinct methodological tools of the academic lawyer. We claim that, in addition to the legal resources they provide to the economic analyst, academic lawyers have cognizable analytical skills, developed through their involvement in law as an applied discipline and their mastery of the common law's analogical method of argument. We draw on the idea of analogical argument to explain some of the differences in the ways that economists and lawyers analyze some of the building blocks of our economy, including the relationship between formal and informal modes of enforcement and the reasons why inefficient boilerplate terms persist in certain standardized contracts. By enriching the standard economic model with insights from other disciplines and clarifying the connections among these disciplines, the lawyer provides skills that are critically important for advancing normative claims.

    A Temporal Framework for Hypergame Analysis of Cyber Physical Systems in Contested Environments

    Game theory is used to model conflicts between two or more players over resources. It offers players a way to reason, providing a rationale for selecting strategies that avoid the worst outcome. Game theory, however, lacks the ability to incorporate advantages one player may have over another. A meta-game, known as a hypergame, occurs when one player does not know or fully understand all the strategies of a game. Hypergame theory builds upon the utility of game theory by allowing a player to outmaneuver an opponent, thus obtaining a more preferred outcome with higher utility. Recent work in hypergame theory has focused on normal-form static games, which lack the ability to encode several realistic strategies. One example is when a player's available actions in the future depend on his selections in the past. This work presents a temporal framework for hypergame models. This framework is the first application of temporal logic to hypergames and provides more flexible modeling for domain experts. With this new framework for hypergames, the concepts of trust, distrust, mistrust, and deception are formalized. While past literature references deception in hypergame research, this work is the first to formalize its definition for hypergames. As a demonstration, the new temporal framework is applied to classical game-theoretic examples as well as a complex supervisory control and data acquisition (SCADA) network temporal hypergame. The SCADA network example includes actions with a temporal dependency, where a choice in the first round affects which decisions can be made in a later round of the game. The demonstration results show that the framework is a realistic and flexible modeling method for a variety of applications.
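
    As a hedged illustration of the hypergame idea sketched above (not code from the thesis), the short Python example below sets up a 2x2 game in which the column player reasons over a misperceived version of the row player's payoffs, while the row player knows both views and exploits the mismatch; all payoff values and the simple best-response logic are assumptions made for illustration only.

```python
import numpy as np

# A 2x2 hypergame sketch: the column player reasons over a *misperceived* version
# of the row player's payoffs, while the row player knows both views.
# All payoff values are illustrative assumptions, not taken from the thesis.
row_true = np.array([[1, 0],            # row's true payoffs: action 1 strictly dominates
                     [5, 2]])
col_payoffs = np.array([[4, 1],         # column's payoffs for each (row, col) outcome
                        [0, 3]])
row_as_seen_by_col = np.array([[4, 3],  # column wrongly believes row's action 0 dominates
                               [0, 2]])

def dominant_row_action(row_payoffs):
    """Return the row action that is best against every column action, if one exists."""
    best_per_col = np.argmax(row_payoffs, axis=0)
    return int(best_per_col[0]) if np.all(best_per_col == best_per_col[0]) else None

# Column's reasoning inside its perceived (mistaken) game.
predicted_row = dominant_row_action(row_as_seen_by_col)        # -> 0 (wrong prediction)
col_choice = int(np.argmax(col_payoffs[predicted_row, :]))     # -> 0

# Row's reasoning in the true game, exploiting knowledge of column's misperception.
row_choice = int(np.argmax(row_true[:, col_choice]))           # -> 1

print(f"column expects row to play {predicted_row} and chooses {col_choice}")
print(f"row actually plays {row_choice}; payoffs: row={row_true[row_choice, col_choice]}, "
      f"col={col_payoffs[row_choice, col_choice]}")
```

    Had the column player perceived the true game, it would have predicted row action 1, played column action 1, and earned 3 instead of 0; that gap is exactly the advantage from misperception that hypergame theory, and the temporal extension described above, is meant to capture.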

    Testing Human Ability To Detect Deepfake Images of Human Faces

    Deepfakes are computationally created entities that falsely represent reality. They can take image, video, and audio forms, and they pose a threat to many areas of systems and societies, making them a topic of interest across cybersecurity and cybersafety. In 2020 a workshop that consulted AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that, since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to distinguish image deepfakes of human faces (StyleGAN2:FFHQ) from non-deepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked whether each image was AI-generated, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images harder to label correctly, yet reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, per-image accuracy ranged fairly evenly from 30% to 85%, falling below 50% for one in every five images. We interpret the findings as suggesting an urgent need for action to address this threat.
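
    As a back-of-the-envelope check on the "only just above chance" claim (not the study's own analysis; the 62%, 20-image, and 280-participant figures are simply taken from the abstract), the Python snippet below uses a binomial test to show that 62% accuracy is indistinguishable from guessing for a single participant seeing 20 images, even though it is statistically distinguishable once all responses are pooled.

```python
from scipy.stats import binomtest

# Figures taken from the abstract; the test itself is an illustrative assumption,
# not the analysis reported in the study.
n_images = 20
overall_accuracy = 0.62
k_correct = round(overall_accuracy * n_images)        # ~12 of 20 images correct

single = binomtest(k_correct, n_images, p=0.5, alternative="greater")
print(f"one participant, {k_correct}/20 correct: p = {single.pvalue:.3f}")  # ~0.25, not significant

# Pooling 280 participants x 20 images = 5600 responses at the same 62% accuracy.
pooled = binomtest(round(overall_accuracy * 5600), 5600, p=0.5, alternative="greater")
print(f"pooled, {round(overall_accuracy * 5600)}/5600 correct: p = {pooled.pvalue:.1e}")
```

    In other words, the pooled sample shows people are slightly better than chance on average, but a typical participant's 12-of-20 score would not stand out from guessing, which is consistent with the abstract's concern about practical detectability.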