
    A 2016 Copa America Bump for Major League Soccer? Strengthening the Case for Legal Action Arising from the Corrupted 2022 World Cup Bid

    Governmental and private investigations have generated evidence of corruption in the bidding process to host the 2022 FIFA World Cup, which went to Qatar rather than the United States. One economic study has shown an increase in professional soccer attendance in European countries that host the World Cup and the European Championships. Accordingly, Major League Soccer and its investor-operators could pursue tort and unfair competition claims, arguing that the denial of a 2022 World Cup in the United States will result in lowered attendance, and thus lost profits and diminished business value. However, key differences between American and European soccer leagues and sports markets might render dubious the assumption that MLS would see a World Cup attendance bump, precluding a successful action for damages. The United States recently hosted the 2016 Copa America, a regional soccer tournament similar to the European Championship. An ordinary least squares regression analysis of MLS attendance data immediately before and after the Copa America reveals an increase in attendance correlated with the tournament, supporting application of the World Cup bump in a legal action for money damages related to the corrupted 2022 World Cup bid. This Article suggests the need for further research in economics on the impact of hosting major soccer tournaments, and in evidence law on applying economic studies from one product, business, or market to another.
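
    The OLS approach described above can be illustrated with a short, hedged sketch: regress per-match attendance on an indicator for matches played after the tournament, plus simple controls. The data file and column names (attendance, post_copa, home_team, weekend) are hypothetical stand-ins, not the Article's actual data or specification.

        # Minimal sketch of an OLS "tournament bump" regression; the data
        # file and column names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        matches = pd.read_csv("mls_attendance_2016.csv")  # hypothetical file

        # post_copa = 1 for matches after the 2016 Copa America, 0 for
        # matches immediately before it; team fixed effects and a weekend
        # indicator stand in for plausible controls.
        model = smf.ols("attendance ~ post_copa + C(home_team) + weekend",
                        data=matches)
        result = model.fit()

        # A positive, statistically significant coefficient on post_copa
        # would correspond to the attendance bump the Article reports.
        print(result.summary().tables[1])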

    Testing SOAR Tools in Use

    Modern security operation centers (SOCs) rely on operators and a tapestry of logging and alerting tools with large-scale collection and query abilities. SOC investigations are tedious because they rely on manual efforts to query diverse data sources, overlay related logs, correlate the data into information, and then document results in a ticketing system. Security orchestration, automation, and response (SOAR) tools are a new technology that promises to collect, filter, and display needed data; automate common tasks that require SOC analysts' time; facilitate SOC collaboration; and improve both the efficiency and consistency of SOCs. SOAR tools have never been tested in practice to evaluate their effect and understand them in use. In this paper, we design and administer the first hands-on user study of SOAR tools, involving 24 participants and 6 commercial SOAR tools. Our contributions include the experimental design, an itemization of six characteristics of SOAR tools, and a methodology for testing them. We describe the configuration of the test environment in a cyber range, including network, user, and threat emulation; a full SOC tool suite; and the creation of artifacts supporting multiple representative investigation scenarios. We present the first research results on SOAR tools. We found that SOAR configuration is critical, as it involves creative design for data display and automation. We found that SOAR tools increased efficiency and reduced context switching during investigations, although ticket accuracy and completeness (indicating investigation quality) decreased with SOAR use. Our findings indicate that user preferences are slightly negatively correlated with their performance with the tool; overautomation was a concern of senior analysts, and SOAR tools that balanced automation with assisting a user to make decisions were preferred.
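
    As a small illustration of the preference-versus-performance finding above, one could compare per-analyst preference ratings with measured investigation scores using a rank correlation; this is a sketch with made-up values, not the study's data or analysis code.

        # Sketch of a preference-vs-performance rank correlation; the
        # ratings and scores below are illustrative, not study data.
        from scipy.stats import spearmanr

        preference  = [5, 4, 4, 3, 5, 2, 3, 4]  # hypothetical 1-5 ratings
        performance = [0.62, 0.71, 0.58, 0.80, 0.55, 0.77, 0.74, 0.60]

        rho, p_value = spearmanr(preference, performance)
        # A small negative rho would mirror the reported slight negative
        # correlation between preference and performance.
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")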

    Voting Systems and Election Reform: What Do Election Officials Think?

    In the aftermath of voting problems in the 2000 presidential election, Congress passed legislation seeking to reform how elections were run and what voting technologies were used. Some of the new voting systems selected, particularly electronic voting systems, drew criticism for perceived security and transparency problems. Absent from this debate was any systematic representation of the views of the administrators who actually run these elections. This report presents the results of a survey of over 1,400 local election officials from across the country. The survey solicited views on specific election systems and technologies; the factors local election officials consider in determining the appropriate election systems for their specific jurisdictions; the influence of vendors and federal, state, and local officials on the decision-making process; the impact of federal reform on state and local jurisdictions; and other topics.

    AI ATAC 1: An Evaluation of Prominent Commercial Malware Detectors

    This work presents an evaluation of six prominent commercial endpoint malware detectors, a network malware detector, and a file-conviction algorithm from a cyber technology vendor. The evaluation was administered as the first of the Artificial Intelligence Applications to Autonomous Cybersecurity (AI ATAC) prize challenges, funded by and completed in service of the US Navy. The experiment employed 100K files (50/50% benign/malicious) with a stratified distribution of file types, including ~1K zero-day program executables (increasing the experiment size by two orders of magnitude over previous work). We present an evaluation process of delivering a file to a fresh virtual machine equipped with the detection technology, waiting 90 s to allow static detection, then executing the file and waiting another period for dynamic detection; this allows greater fidelity in the observational data than previous experiments, in particular for resource and time-to-detection statistics. To execute all 800K trials (100K files × 8 tools), a software framework was designed to choreograph the experiment into a completely automated, time-synced, and reproducible workflow with substantial parallelization. A cost-benefit model was configured to integrate the tools' recall, precision, time to detection, and resource requirements into a single comparable quantity by simulating costs of use. This provides a ranking methodology for cyber competitions and a lens through which to reason about the varied statistical viewpoints of the results. These statistical and cost-model results provide insights on the state of commercial malware detection.
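
    The general shape of such a cost-benefit ranking can be sketched as follows: fold recall, precision, time to detection, and resource use into one simulated cost and sort tools by it. All unit costs, volumes, and tool statistics below are hypothetical; the actual AI ATAC model's terms and weights are not reproduced here.

        # Sketch of a cost-benefit ranking: combine recall, precision,
        # time to detection, and resource use into a single simulated
        # cost. All constants are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class ToolStats:
            recall: float        # fraction of malicious files detected
            precision: float     # fraction of alerts that are true positives
            mean_ttd_s: float    # mean time to detection, in seconds
            cpu_overhead: float  # average fraction of a core consumed

        def simulated_cost(s: ToolStats,
                           n_malicious: int = 50_000,
                           alerts_per_day: float = 100.0,
                           cost_missed: float = 1_000.0,  # per missed malware
                           cost_fp: float = 10.0,         # per false alert triaged
                           cost_delay_s: float = 0.01,    # per second of delay
                           cost_cpu: float = 500.0) -> float:
            missed = (1 - s.recall) * n_malicious * cost_missed
            false_alerts = alerts_per_day * (1 - s.precision) * 365 * cost_fp
            delay = s.recall * n_malicious * s.mean_ttd_s * cost_delay_s
            compute = s.cpu_overhead * cost_cpu
            return missed + false_alerts + delay + compute

        # Rank two hypothetical tools; lower simulated cost is better.
        tools = {"tool_A": ToolStats(0.95, 0.99, 45.0, 0.10),
                 "tool_B": ToolStats(0.90, 0.999, 20.0, 0.05)}
        for name, stats in sorted(tools.items(),
                                  key=lambda kv: simulated_cost(kv[1])):
            print(name, round(simulated_cost(stats)))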

    Beyond the Hype: A Real-World Evaluation of the Impact and Cost of Machine Learning-Based Malware Detection

    There is a lack of scientific testing of commercially available malware detectors, especially those that boast accurate classification of never-before-seen (i.e., zero-day) files using machine learning (ML). The result is that the efficacy of, and gaps among, the available approaches are opaque, inhibiting end users from making informed network security decisions and researchers from targeting gaps in current detectors. In this paper, we present a scientific evaluation of four market-leading malware detection tools to assist an organization with two primary questions: (Q1) To what extent do ML-based tools accurately classify never-before-seen files without sacrificing detection ability on known files? (Q2) Is it worth purchasing a network-level malware detector to complement host-based detection? We tested each tool against 3,536 total files (2,554 or 72% malicious, 982 or 28% benign), including over 400 zero-day malware samples, and tested with a variety of file types and delivery protocols. We present statistical results on detection time and accuracy, consider complementary analysis (using multiple tools together), and provide two novel applications of a recent cost-benefit evaluation procedure by Iannaconne & Bridges that incorporates all of the above metrics into a single quantifiable cost. While the ML-based tools are more effective at detecting zero-day files and executables, the signature-based tool may still be an overall better option. Both network-based tools provide substantial (simulated) savings when paired with either host tool, yet both show poor detection rates on protocols other than HTTP or SMTP. Our results show that all four tools have near-perfect precision but alarmingly low recall, especially on file types other than executables and office files: 37% of malware tested, including all polyglot files, went undetected.
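
    The complementary analysis mentioned above (pairing a network tool with a host tool) amounts to treating two detectors as one that alerts when either alerts, then recomputing recall and precision. A minimal sketch with illustrative labels and verdicts, not the paper's data:

        # Sketch of complementary (paired-tool) analysis: OR the two
        # tools' verdicts, then recompute recall and precision.
        def combined_metrics(labels, host_alerts, net_alerts):
            """labels: 1 = malicious, 0 = benign; alerts are parallel 0/1 lists."""
            combined = [h or n for h, n in zip(host_alerts, net_alerts)]
            tp = sum(1 for y, a in zip(labels, combined) if y and a)
            fp = sum(1 for y, a in zip(labels, combined) if not y and a)
            fn = sum(1 for y, a in zip(labels, combined) if y and not a)
            recall = tp / (tp + fn) if (tp + fn) else 0.0
            precision = tp / (tp + fp) if (tp + fp) else 0.0
            return recall, precision

        # Tiny illustrative sample: pairing can only raise recall, while
        # precision may drop as false positives accumulate.
        labels      = [1, 1, 1, 1, 0, 0, 0]
        host_alerts = [1, 0, 1, 0, 0, 0, 0]
        net_alerts  = [0, 1, 0, 0, 1, 0, 0]
        print(combined_metrics(labels, host_alerts, net_alerts))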

    Wavefront sensing and control in space-based coronagraph instruments using Zernike’s phase-contrast method

    Future space telescopes with coronagraph instruments will use a wavefront sensor (WFS) to measure and correct for phase errors and stabilize the stellar intensity in high-contrast images. The HabEx and LUVOIR mission concepts baseline a Zernike wavefront sensor (ZWFS), which uses Zernike’s phase contrast method to convert phase in the pupil into intensity at the WFS detector. In preparation for these potential future missions, we experimentally demonstrate a ZWFS in a coronagraph instrument on the Decadal Survey Testbed in the High Contrast Imaging Testbed facility at NASA’s Jet Propulsion Laboratory. We validate that the ZWFS can measure low- and mid-spatial frequency aberrations up to the control limit of the deformable mirror (DM), with surface height sensitivity as small as 1 pm, using a configuration similar to the HabEx and LUVOIR concepts. Furthermore, we demonstrate closed-loop control, resolving an individual DM actuator, with residuals consistent with theoretical models. In addition, we predict the expected performance of a ZWFS on future space telescopes using natural starlight from a variety of spectral types. The most challenging scenarios require ∼1 h of integration time to achieve picometer sensitivity. This timescale may be drastically reduced by using internal or external laser sources for sensing purposes. The experimental results and theoretical predictions presented here advance the WFS technology in the context of the next generation of space telescopes with coronagraph instruments.
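
    For readers unfamiliar with Zernike's phase-contrast method, the textbook response of a ZWFS can be written as below; the notation (P for the pupil amplitude, b for the reference wave created by the phase-shifting dimple, theta for the dimple's phase shift, phi for the phase error) is ours, not necessarily the paper's.

        % Textbook Zernike phase-contrast intensity at the WFS detector:
        \[
          I_C = P^2 + 2b^2\,(1-\cos\theta)
                + 2Pb\left[\sin\phi\,\sin\theta - \cos\phi\,(1-\cos\theta)\right]
        \]
        % For the common theta = pi/2 dimple and small aberrations
        % (sin(phi) ~ phi, cos(phi) ~ 1), the response is linear in phi,
        % which is what lets phase in the pupil be read off as intensity
        % at the detector:
        \[
          I_C \approx P^2 + 2b^2 + 2Pb\,(\phi - 1)
          \quad\Longrightarrow\quad
          \phi \approx 1 + \frac{I_C - P^2 - 2b^2}{2Pb}.
        \]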

    Meeting Report: Consensus Statement—Parkinson’s Disease and the Environment: Collaborative on Health and the Environment and Parkinson’s Action Network (CHE PAN) Conference 26–28 June 2007

    Background: Parkinson's disease (PD) is the second most common neurodegenerative disorder. People with PD, their families, scientists, health care providers, and the general public are increasingly interested in identifying environmental contributors to PD risk. Methods: In June 2007, a multidisciplinary group of experts gathered in Sunnyvale, California, USA, to assess what is known about the contribution of environmental factors to PD. Results: We describe the conclusions on which the group reached consensus with respect to environmental contributors to PD risk. We conclude with a brief summary of research needs. Conclusions: PD is a complex disorder, and multiple different pathogenic pathways and mechanisms can ultimately lead to PD. Within the individual there are many determinants of PD risk, and within populations, the causes of PD are heterogeneous. Although rare recognized genetic mutations are sufficient to cause PD, these account for < 10% of PD in the U.S. population, and incomplete penetrance suggests that environmental factors may be involved. Indeed, interplay among environmental factors and genetic makeup likely influences the risk of developing PD. There is a need for further understanding of how risk factors interact, and studying PD is likely to increase understanding of other neurodegenerative disorders.