
    Regulatory Monitors: Policing Firms in the Compliance Era

    Like police officers patrolling the streets for crime, the front line for most large business regulators — Environmental Protection Agency (EPA) engineers, Consumer Financial Protection Bureau (CFPB) examiners, and Nuclear Regulatory Commission (NRC) inspectors, among others — decides when and how to enforce the law. These regulatory monitors guard against toxic air, financial ruin, and deadly explosions. Yet whereas scholars devote considerable attention to police officers in criminal law enforcement, they have paid limited attention to the structural role of regulatory monitors in civil law enforcement. This Article is the first to chronicle the statutory rise of regulatory monitors and to situate them empirically at the core of modern administrative power. Since the Civil War, often in response to crises, the largest federal regulators have steadily accrued authority to collect documents remotely and enter private space without any suspicion of wrongdoing. Those exercising this monitoring authority within agencies administer the law at least as much as the groups that are the focus of legal scholarship: enforcement lawyers, administrative law judges, and rule writers. Regulatory monitors wield sanctions, influence rulemaking, and create quasi-common law. Moreover, they offer a better fit than lawyers for the modern era of “collaborative governance” and corporate compliance departments, because their principal function — information collection — is less adversarial. Yet unlike lawsuits and rulemaking, monitoring-based decisions are largely unobservable by the public, often unreviewable by courts, and explicitly excluded by the Administrative Procedure Act (APA). The regulatory monitor function can thus be more easily ramped up or deconstructed by the President, interest groups, and agency directors. A better understanding of regulatory monitors — and their relationship with regulatory lawyers — is vital to designing democratic accountability not only during times of political transition, but as long as they remain a central pillar of the administrative state.

    FAULT LINKS: IDENTIFYING MODULE AND FAULT TYPES AND THEIR RELATIONSHIP

    The presented research resulted in a generic component taxonomy, a generic code-fault taxonomy, and an approach to tailoring the generic taxonomies into domain-specific as well as project-specific taxonomies. Also, a means to identify fault links was developed. Fault links represent relationships between the types of code faults and the types of components being developed or modified. For example, a fault link has been found to exist between Controller modules (which form a backbone for any software via their decision-making characteristics) and Control/Logic faults (such as unreachable code). The existence of such fault links can be used to guide code reviews, walkthroughs, and the testing of new code development, as well as code maintenance. It can also be used to direct fault seeding. The results of these methods have been validated. Finally, we also verified the usefulness of the obtained fault links through an experiment conducted using graduate students. The results were encouraging.
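    In practice, a fault link can be pictured as a lookup from component type to the fault types most often observed in that kind of component, which a reviewer can use as a checklist. The sketch below is a minimal Python illustration of that idea; only the Controller-to-Control/Logic pairing comes from the abstract, and the other entries are placeholders rather than the thesis's actual taxonomies.

```python
# Hypothetical sketch of fault links: component types mapped to the fault
# types most frequently associated with them. Only the Controller ->
# Control/Logic pairing is taken from the abstract; the other entries are
# illustrative placeholders.
FAULT_LINKS = {
    "Controller": ["Control/Logic (e.g., unreachable code)"],
    "Data Store": ["Data handling"],        # placeholder
    "Interface":  ["Interface/parameter"],  # placeholder
}

def review_checklist(component_type):
    """Return the fault types a reviewer should prioritise for this module type."""
    return FAULT_LINKS.get(component_type, ["No fault link recorded"])

if __name__ == "__main__":
    print(review_checklist("Controller"))
```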

    HUMAN ERROR IN MINING: A MULTIVARIABLE ANALYSIS OF MINING ACCIDENTS/INCIDENTS IN QUEENSLAND, AUSTRALIA AND THE UNITED STATES OF AMERICA USING THE HUMAN FACTORS ANALYSIS AND CLASSIFICATION SYSTEM FRAMEWORK

    Historically, mining has been viewed as an inherently high-risk industry. Nevertheless, the introduction of new technology and a heightened concern for safety have yielded marked reductions in accident and injury rates over the last several decades. In an effort to further reduce these rates, the human factors associated with incidents/accidents need to be addressed. A modified version of the Human Factors Analysis and Classification System (HFACS-MI) was used to analyze lost-time accidents and high-potential incidents from across Queensland, Australia, and fatal accidents from the United States of America (USA) to identify human factor trends and system deficiencies within mining. An analysis of the data revealed that skill-based errors (referred to as routine disruption errors by industry) were the most common unsafe act and showed no significant differences between accident types. Findings for unsafe acts were consistent across the time period examined. The percentages of cases associated with preconditions were also not significantly different between accident types. Higher tiers of HFACS-MI were associated with a significantly higher percentage of fatal accidents than non-fatal accidents. These results suggest that there are differences in the underlying causal factors between fatal and non-fatal accidents. By illuminating human causal factors in a systematic fashion, this study has provided mine safety professionals with the information necessary to reduce mine accidents/incidents further.
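    The comparisons described above, such as whether the percentage of cases coded with a given HFACS-MI category differs between fatal and non-fatal accidents, are typically tested with a chi-square test of independence. The sketch below shows that kind of test in Python with scipy; the counts are placeholders for illustration only, not figures from the study, and the abstract does not state which statistical procedure was actually used.

```python
# Minimal sketch of the kind of comparison described in the abstract:
# does the proportion of cases coded with a given HFACS-MI category
# differ between fatal and non-fatal accidents?
# The counts below are placeholders for illustration, NOT study data.
from scipy.stats import chi2_contingency

#                   category present, category absent
fatal_counts     = [40, 60]   # placeholder
non_fatal_counts = [35, 65]   # placeholder

chi2, p, dof, expected = chi2_contingency([fatal_counts, non_fatal_counts])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p < 0.05 would indicate a significant difference
```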

    Using Machine Learning and Graph Mining Approaches to Improve Software Requirements Quality: An Empirical Investigation

    Software development is prone to software faults due to the involvement of multiple stakeholders, especially during the fuzzy phases (requirements and design). Software inspections are commonly used in industry to detect and fix problems in requirements and design artifacts, thereby mitigating the propagation of faults to later phases, where the same faults are harder to find and fix. The output of an inspection process is a list of faults that are present in the software requirements specification (SRS) document. The artifact author must manually read through the reviews and differentiate between true faults and false positives before fixing the faults. The first goal of this research is to automate the detection of useful vs. non-useful reviews. Next, post-inspection, the requirements author has to manually extract key problematic topics from useful reviews that can be mapped to individual requirements in the SRS to identify fault-prone requirements. The second goal of this research is to automate this mapping by employing keyphrase extraction (KPE) algorithms and semantic analysis (SA) approaches. During fault fixation, the author has to manually verify the requirements that could have been impacted by a fix. The third goal of this research is to assist authors post-inspection in handling change impact analysis (CIA) during fault fixation, using natural language processing with semantic analysis and mining solutions from graph theory. Selecting skilled inspectors is also pertinent to carrying out post-inspection tasks accurately. The fourth goal of this research is to identify skilled inspectors using various classification and feature selection approaches. The dissertation has led to the development of an automated solution that can identify useful reviews, help identify skilled inspectors, extract the most prominent topics/keyphrases from fault logs, and assist the RE author during fault fixation post-inspection.
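    As one concrete illustration of the first goal, the sketch below trains a bag-of-words text classifier to separate useful from non-useful inspection reviews. The dissertation evaluates various classification and feature selection approaches; the TF-IDF plus logistic regression pipeline, the file name "reviews.csv", and its columns here are assumptions made for illustration, not the work's actual setup.

```python
# Hypothetical sketch: classify inspection reviews as useful vs. non-useful.
# The file "reviews.csv" and its columns ("review_text", "is_useful") are
# assumed for illustration; the dissertation does not prescribe this pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

df = pd.read_csv("reviews.csv")  # one inspection review per row
X_train, X_test, y_train, y_test = train_test_split(
    df["review_text"], df["is_useful"], test_size=0.2, random_state=42)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```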

    Differentiating Legislative from Nonlegislative Rules: An Empirical and Qualitative Analysis

    The elusive distinction between legislative rules and nonlegislative rules has frustrated courts, motivated voluminous scholarly debate, and ushered in a flood of litigation against administrative agencies. In the absence of U.S. Supreme Court guidance on the proper demarcating line, circuit courts have adopted various tests to ascertain a rule’s proper classification. This Note analyzes all 241 cases in which a circuit court has used one or more of the enunciated tests to differentiate legislative from nonlegislative rules. These opinions come from every one of the thirteen circuits and span the early 1950s through 2018. This Note identifies six different tests that courts have employed in this effort and offers a qualitative and empirical analysis of each. The qualitative analysis explains the underlying premise of the tests, articulates their merits and shortcomings, and considers how courts have applied them to particular disputes. The empirical portion of this Note uses regression analysis to ascertain how using or rejecting one or more of the tests affects a court’s determination of whether the rule is legislative or nonlegislative. This Note classifies the different tests into two categories: public-focused tests and agency-focused tests. These two categories are defined by a principle that permeates administrative law jurisprudence: achieving a proper balance between enabling efficient agency rulemaking and maintaining a proper check against unconstrained agency action. With these two categories defined, this Note proposes a balanced approach that incorporates elements of both categories to identify and refine the proper test.
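    The Note's regression step can be pictured as a binary-outcome model: whether the court held the rule legislative, regressed on indicators for which tests the court applied. The sketch below is a hypothetical illustration using statsmodels; the file name, variable names, and coding scheme are assumptions, not the Note's actual dataset or specification.

```python
# Hypothetical sketch of the Note's empirical approach: a logistic regression
# of the court's classification (1 = legislative rule, 0 = nonlegislative) on
# indicators for which tests the court used. "cases.csv" and all column names
# are placeholders, not the Note's data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cases.csv")
tests = ["legal_effect_test", "binding_norm_test", "agency_intent_test"]  # placeholder names
X = sm.add_constant(df[tests])              # each column: 1 if the court used the test, else 0
model = sm.Logit(df["held_legislative"], X).fit()
print(model.summary())
```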

    Mining and checking object behavior

    This thesis introduces a novel approach to modeling the behavior of programs at runtime. We leverage the structure of object-oriented programs to derive models that describe the behavior of individual objects. Our approach mines object behavior models: finite state automata whose states correspond to different states of an object and whose transitions are caused by method invocations. Such models capture the effects of method invocations on an object's state. To our knowledge, our approach is the first to combine control flow with information about the values of variables. Our ADABU tool is able to mine object behavior models from the executions of large interactive JAVA programs. To investigate the usefulness of our technique, we study two different applications of object behavior models. Mining specifications: Many existing verification techniques are difficult to apply because in practice the necessary specifications are missing. We use ADABU to automatically mine specifications from the execution of test suites. To enrich these specifications, our TAUTOKO tool systematically generates test cases that exercise previously uncovered behavior. Our results show that, when fed into a typestate verifier, such enriched specifications are able to detect more bugs than the original versions. Generating fixes: We present PACHIKA, a tool to automatically generate possible fixes for failing program runs. Our approach uses object behavior models to compare passing and failing runs. Differences in the models both point to anomalies and suggest possible ways to fix the anomaly. In a controlled experiment, PACHIKA was able to synthesize fixes for real bugs mined from the history of two open-source projects.
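    An object behavior model, as described above, is a finite state automaton whose states are abstractions of an object's field values and whose transitions are labelled with method names. ADABU operates on Java programs; the Python sketch below only illustrates the underlying idea (trace method calls, abstract the state before and after each call, and record the transitions) and is not the tool's implementation.

```python
# Illustrative sketch of an object behavior model: states are abstract
# descriptions of an object's fields, transitions are method invocations.
# This mirrors the idea described in the abstract; ADABU itself works on
# Java programs and is far more elaborate.
from collections import defaultdict

class TracedList:
    """Toy object whose behavior we want to model."""
    def __init__(self):
        self.items = []

    def add(self, x):
        self.items.append(x)

    def clear(self):
        self.items.clear()

def abstract_state(obj):
    """Map concrete field values to an abstract state label."""
    return "EMPTY" if not obj.items else "NON_EMPTY"

def mine_model(obj, calls):
    """Record (state, method) -> successor-state transitions observed in one run."""
    model = defaultdict(set)
    for method, args in calls:
        before = abstract_state(obj)
        getattr(obj, method)(*args)
        model[(before, method)].add(abstract_state(obj))
    return model

run = [("add", (1,)), ("add", (2,)), ("clear", ())]
print(dict(mine_model(TracedList(), run)))
# {('EMPTY', 'add'): {'NON_EMPTY'}, ('NON_EMPTY', 'add'): {'NON_EMPTY'},
#  ('NON_EMPTY', 'clear'): {'EMPTY'}}
```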