
    Collider Interplay for Supersymmetry, Higgs and Dark Matter

    We discuss the potential impacts on the CMSSM of future LHC runs and possible electron-positron and higher-energy proton-proton colliders, considering searches for supersymmetry via MET events, precision electroweak physics, Higgs measurements and dark matter searches. We validate and present estimates of the physics reach for exclusion or discovery of supersymmetry via MET searches at the LHC, which should cover the low-mass regions of the CMSSM parameter space favoured in a recent global analysis. As we illustrate with a low-mass benchmark point, a discovery would make possible accurate LHC measurements of sparticle masses using the MT2 variable, which could be combined with cross-section and other measurements to constrain the gluino, squark and stop masses and hence the soft supersymmetry-breaking parameters m_0, m_{1/2} and A_0 of the CMSSM. Slepton measurements at CLIC would enable m_0 and m_{1/2} to be determined with high precision. If supersymmetry is indeed discovered in the low-mass region, precision electroweak and Higgs measurements with a future circular electron-positron collider (FCC-ee, also known as TLEP) combined with LHC measurements would provide tests of the CMSSM at the loop level. If supersymmetry is not discovered at the LHC, it is likely to lie somewhere along a focus-point, stop coannihilation strip or direct-channel A/H resonance funnel. We discuss the prospects for discovering supersymmetry along these strips at a future circular proton-proton collider such as FCC-hh. Illustrative benchmark points on these strips indicate that also in this case FCC-ee could provide tests of the CMSSM at the loop level. Comment: 47 pages, 26 figures
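
    For reference, the MT2 variable mentioned above for sparticle mass measurements is the "stransverse mass", conventionally defined by minimising over all ways of splitting the missing transverse momentum between the two invisible decay products. The expression below is the generic textbook definition (in LaTeX notation), with the trial invisible mass m_chi and the visible momenta of the two decay chains as placeholders; it is a sketch of the standard construction, not the specific conventions of this analysis.

        % Stransverse mass: minimise over splittings of the missing transverse
        % momentum, taking the larger of the two transverse masses.
        M_{T2}^2(m_\chi) =
          \min_{\vec q_T^{\,(1)} + \vec q_T^{\,(2)} = \vec p_T^{\,\rm miss}}
          \left[ \max\left\{
            m_T^2\bigl(\vec p_T^{\,(1)}, \vec q_T^{\,(1)}; m_\chi\bigr),\,
            m_T^2\bigl(\vec p_T^{\,(2)}, \vec q_T^{\,(2)}; m_\chi\bigr)
          \right\} \right],
        % with the transverse mass of each (visible, invisible) pair given by
        m_T^2(\vec p_T, \vec q_T; m_\chi) = m_{\rm vis}^2 + m_\chi^2
          + 2\bigl( E_T^{\rm vis} E_T^{\chi} - \vec p_T \cdot \vec q_T \bigr),
        \qquad
        E_T^{\rm vis} = \sqrt{m_{\rm vis}^2 + |\vec p_T|^2}, \quad
        E_T^{\chi} = \sqrt{m_\chi^2 + |\vec q_T|^2}.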

    Standing and Privacy Harms: A Critique of TransUnion v. Ramirez

    Through the standing doctrine, the U.S. Supreme Court has taken a new step toward severely limiting the effective enforcement of privacy laws. The recent Supreme Court decision, TransUnion v. Ramirez (U.S. June 25, 2021), revisits the issue of standing and privacy harms under the Fair Credit Reporting Act (FCRA) that began with Spokeo v. Robins, 136 S. Ct. 1540 (2016). In TransUnion, a group of plaintiffs sued TransUnion under FCRA for falsely labeling them as potential terrorists in their credit reports. The Court concluded that only some plaintiffs had standing – those whose credit reports were disseminated. Plaintiffs whose credit reports weren’t disseminated lacked a “concrete” injury and accordingly lacked standing – even though Congress explicitly granted them a private right of action to sue for violations like this and even though a jury had found that TransUnion was at fault. In this essay, Professors Daniel J. Solove and Danielle Keats Citron engage in an extensive critique of the TransUnion case. They contend that existing standing doctrine incorrectly requires concrete harm. For most of U.S. history, standing required only an infringement on rights. Moreover, when assessing harm, the Court has a crabbed and inadequate understanding of privacy harms. Additionally, allowing courts to nullify private rights of action in federal privacy laws is a usurpation of legislative power that upends the compromises and balances that Congress establishes in laws. Private rights of action are essential enforcement mechanisms.

    Risk and Anxiety: A Theory of Data Breach Harms

    In lawsuits about data breaches, the issue of harm has confounded courts. Harm is central to whether plaintiffs have standing to sue in federal court and whether their claims are viable. Plaintiffs have argued that data breaches create a risk of future injury from identity theft or fraud and that breaches cause them to experience anxiety about this risk. Courts have been reaching wildly inconsistent conclusions on the issue of harm, with most courts dismissing data breach lawsuits for failure to allege harm. A sound and principled approach to harm has yet to emerge, resulting in a lack of consensus among courts and an incoherent jurisprudence. In the past five years, the U.S. Supreme Court has contributed to this confounding state of affairs. In 2013, the Court in Clapper v. Amnesty International concluded that fear and anxiety about surveillance – and the cost of taking measures to protect against it – were too speculative to constitute “injury in fact” for standing. The Court emphasized that injury must be “certainly impending” to warrant recognition. This past term, the U.S. Supreme Court in Spokeo v. Robins issued an opinion aimed at clarifying the harm required for standing in a case involving personal data. But far from providing guidance, the opinion fostered greater confusion. What the Court made clear, however, was that “intangible” injury, including the “risk” of injury, could be sufficient to establish harm. In cases involving informational injuries, when is intangible injury like increased risk and anxiety “certainly impending” or “substantially likely to occur” to warrant standing? The answer is unclear. Little progress has been made to harmonize this troubled body of law, and there is no coherent theory or approach. In this essay, we examine why courts have struggled when dealing with harms caused by data breaches. The difficulty largely stems from the fact that data breach harms are intangible, risk-oriented, and diffuse. Harms with these characteristics need not confound courts; the judicial system has been recognizing intangible, risk-oriented, and diffuse injuries in other areas of law. We argue that courts are far too dismissive of certain forms of data breach harm. In many instances, courts should find that data breaches cause cognizable harm. We explore how existing legal foundations support the recognition of such harm. We demonstrate how courts can assess risk and anxiety in a concrete and coherent way.

    Privacy Harms

    Privacy harms have become one of the largest impediments in privacy law enforcement. In most tort and contract cases, plaintiffs must establish that they have been harmed. Even when legislation does not require it, courts have taken it upon themselves to add a harm element. Harm is also a requirement to establish standing in federal court. In Spokeo v. Robins, the U.S. Supreme Court held that courts can override Congress’s judgments about what harm should be cognizable and dismiss cases brought for privacy statute violations. The caselaw is an inconsistent, incoherent jumble, with no guiding principles. Countless privacy violations are not remedied or addressed on the grounds that there has been no cognizable harm. Courts conclude that many privacy violations, such as thwarted expectations, improper uses of data, and the wrongful transfer of data to other organizations, lack cognizable harm. Courts struggle with privacy harms because they often involve future uses of personal data that vary widely. When privacy violations do result in negative consequences, the effects are often small – frustration, aggravation, and inconvenience – and dispersed among a large number of people. When these minor harms are done at a vast scale by a large number of actors, they aggregate into more significant harms to people and society. But these harms do not fit well with existing judicial understandings of harm. This article makes two central contributions. The first is the construction of a road map for courts to understand harm so that privacy violations can be tackled and remedied in a meaningful way. Privacy harms consist of various types, which to date have been recognized by courts in inconsistent ways. We set forth a typology of privacy harms that elucidates why certain types of privacy harms should be recognized as cognizable. The second contribution is providing an approach to when privacy harm should be required. In many cases, harm should not be required because it is irrelevant to the purpose of the lawsuit. Currently, much privacy litigation suffers from a misalignment of law enforcement goals and remedies. For example, existing methods of litigating privacy cases, such as class actions, often enrich lawyers but fail to achieve meaningful deterrence. Because the personal data of tens of millions of people could be involved, even small actual damages could put companies out of business without providing much of value to each individual. We contend that the law should be guided by the essential question: When and how should privacy regulation be enforced? We offer an approach that aligns enforcement goals with appropriate remedies.

    Hemagglutinin sequence conservation guided stem immunogen design from influenza A H3 subtype

    Seasonal epidemics caused by influenza A (H1 and H3 subtypes) and B viruses are a major global health threat. The traditional, trivalent influenza vaccines have limited efficacy because of rapid antigenic evolution of the circulating viruses. This antigenic variability mediates viral escape from the host immune responses, necessitating annual vaccine updates. Influenza vaccines elicit a protective antibody response, primarily targeting the viral surface glycoprotein hemagglutinin (HA). However, the predominant humoral response is against the hypervariable head domain of HA, thereby restricting the breadth of protection. In contrast, the conserved, subdominant stem domain of HA is a potential ‘universal’ vaccine candidate. We designed an HA stem-fragment immunogen from the 1968 pandemic H3N2 strain (A/Hong Kong/1/68) guided by a comprehensive H3 HA sequence conservation analysis. The biophysical properties of the designed immunogen were further improved by C-terminal fusion of a trimerization motif, ‘isoleucine-zipper’ or ‘foldon’. These immunogens elicited cross-reactive, antiviral antibodies and conferred partial protection against a lethal, homologous HK68 virus challenge in vivo. Furthermore, bacterial expression of these immunogens is economical and facilitates rapid scale-up.
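
    The H3 HA sequence conservation analysis that guided the design is only summarised above; one standard way to score per-position conservation is the Shannon entropy of each column of a multiple sequence alignment, with low-entropy columns treated as conserved. The Python sketch below illustrates that idea under stated assumptions: the alignment file name and the entropy threshold are hypothetical, not values taken from the paper.

        # Minimal per-column conservation scan over an H3 HA multiple sequence
        # alignment (hypothetical input file; illustration only).
        from collections import Counter
        import math

        def read_fasta(path):
            """Return aligned sequences (all of equal length) from a FASTA file."""
            seqs, current = [], []
            with open(path) as fh:
                for line in fh:
                    line = line.strip()
                    if line.startswith(">"):
                        if current:
                            seqs.append("".join(current))
                            current = []
                    elif line:
                        current.append(line.upper())
            if current:
                seqs.append("".join(current))
            return seqs

        def column_entropy(column):
            """Shannon entropy (bits) of one alignment column, ignoring gaps."""
            residues = [c for c in column if c != "-"]
            if not residues:
                return 0.0
            total = len(residues)
            counts = Counter(residues)
            return -sum((n / total) * math.log2(n / total) for n in counts.values())

        def conserved_positions(seqs, max_entropy=0.5):
            """0-based alignment columns whose entropy is at or below the threshold."""
            return [i for i in range(len(seqs[0]))
                    if column_entropy([s[i] for s in seqs]) <= max_entropy]

        if __name__ == "__main__":
            alignment = read_fasta("h3_ha_alignment.fasta")  # hypothetical file name
            print(conserved_positions(alignment))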

    The NUHM2 after LHC Run 1

    We make a frequentist analysis of the parameter space of the NUHM2, in which the soft supersymmetry (SUSY)-breaking contributions to the masses of the two Higgs multiplets, $m^2_{H_{u,d}}$, vary independently from the universal soft SUSY-breaking contributions $m^2_0$ to the masses of squarks and sleptons. Our analysis uses the MultiNest sampling algorithm with over $4 \times 10^8$ points to sample the NUHM2 parameter space. It includes the ATLAS and CMS Higgs mass measurements as well as their searches for supersymmetric jets + MET signals using the full LHC Run 1 data, the measurements of $B_s \to \mu^+ \mu^-$ by LHCb and CMS together with other B-physics observables, electroweak precision observables and the XENON100 and LUX searches for spin-independent dark matter scattering. We find that the preferred regions of the NUHM2 parameter space have negative SUSY-breaking scalar masses squared for squarks and sleptons, $m_0^2 < 0$, as well as $m^2_{H_u} < m^2_{H_d} < 0$. The tension present in the CMSSM and NUHM1 between the supersymmetric interpretation of $g_\mu - 2$ and the absence to date of SUSY at the LHC is not significantly alleviated in the NUHM2. We find a minimum $\chi^2 = 32.5$ with 21 degrees of freedom (dof) in the NUHM2, to be compared with $\chi^2/{\rm dof} = 35.0/23$ in the CMSSM, and $\chi^2/{\rm dof} = 32.7/22$ in the NUHM1. We find that the one-dimensional likelihood functions for sparticle masses and other observables are similar to those found previously in the CMSSM and NUHM1. Comment: 20 pages LaTeX, 13 figures
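
    For orientation, the minimum chi-squared values quoted above come from a global likelihood constructed from the listed observables. The expression below is a deliberately simplified, schematic form of such a chi-squared function (in LaTeX notation); the actual fit also has to handle correlations, one-sided search limits and nuisance parameters, which are omitted here.

        % Schematic global chi^2: O_i^exp are the measured observables,
        % O_i^th(theta) the model predictions at parameter point theta, and
        % sigma_i the combined experimental and theoretical uncertainties.
        \chi^2(\theta) = \sum_i
          \frac{\bigl[ O_i^{\rm exp} - O_i^{\rm th}(\theta) \bigr]^2}{\sigma_i^2},
        \qquad
        \Delta\chi^2(\theta) = \chi^2(\theta) - \chi^2_{\rm min}.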

    The pMSSM10 after LHC Run 1

    We present a frequentist analysis of the parameter space of the pMSSM10, in which the following 10 soft SUSY-breaking parameters are specified independently at the mean scalar top mass scale Msusy = Sqrt[M_stop1 M_stop2]: the gaugino masses M_{1,2,3}, the 1st- and 2nd-generation squark masses M_squ1 = M_squ2, the third-generation squark mass M_squ3, a common slepton mass M_slep and a common trilinear mixing parameter A, the Higgs mixing parameter mu, the pseudoscalar Higgs mass M_A and tan beta. We use the MultiNest sampling algorithm with 1.2 x 10^9 points to sample the pMSSM10 parameter space. A dedicated study shows that the sensitivities to strongly-interacting SUSY masses of ATLAS and CMS searches for jets, leptons + MET signals depend only weakly on many of the other pMSSM10 parameters. With the aid of the Atom and Scorpion codes, we also implement the LHC searches for EW-interacting sparticles and light stops, so as to confront the pMSSM10 parameter space with all relevant SUSY searches. In addition, our analysis includes Higgs mass and rate measurements using the HiggsSignals code, SUSY Higgs exclusion bounds, the measurements of B-physics observables, EW precision observables, the CDM density and searches for spin-independent DM scattering. We show that the pMSSM10 is able to provide a SUSY interpretation of (g-2)_mu, unlike the CMSSM, NUHM1 and NUHM2. As a result, we find (omitting Higgs rates) that the minimum chi^2/dof = 20.5/18 in the pMSSM10, corresponding to a chi^2 probability of 30.8%, is to be compared with chi^2/dof = 32.8/24 (31.1/23) (30.3/22) in the CMSSM (NUHM1) (NUHM2). We display one-dimensional likelihood functions for SUSY masses, and show that they may be significantly lighter in the pMSSM10 than in the CMSSM, NUHM1 and NUHM2. We discuss the discovery potential of future LHC runs, e+e- colliders and direct detection experiments. Comment: 47 pages, 29 figures
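
    As a quick arithmetic cross-check, the quoted chi-squared probability follows from the chi-squared survival function evaluated at the minimum chi^2 with the stated degrees of freedom; the Python sketch below returns a value close to the 30.8% cited for the pMSSM10 and can be applied to the other models' minima as well. Treating the minimum chi^2 as a naive goodness-of-fit p-value is an assumption of this illustration, not a description of the paper's statistical procedure.

        # Naive chi^2 goodness-of-fit probabilities for the quoted fit minima.
        from scipy.stats import chi2

        def chi2_probability(chi2_min, dof):
            """Probability of obtaining chi^2 >= chi2_min with `dof` degrees of freedom."""
            return chi2.sf(chi2_min, dof)

        quoted_minima = {           # (minimum chi^2, degrees of freedom) from the abstract
            "pMSSM10": (20.5, 18),  # quoted probability ~30.8 %
            "CMSSM":   (32.8, 24),
            "NUHM1":   (31.1, 23),
            "NUHM2":   (30.3, 22),
        }

        for model, (c2_min, dof) in quoted_minima.items():
            p = chi2_probability(c2_min, dof)
            print(f"{model:8s} chi^2/dof = {c2_min}/{dof} -> p = {p:.1%}")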

    Supersymmetric Dark Matter after LHC Run 1

    Different mechanisms operate in various regions of the MSSM parameter space to bring the relic density of the lightest neutralino, neutralino_1, assumed here to be the LSP and thus the Dark Matter (DM) particle, into the range allowed by astrophysics and cosmology. These mechanisms include coannihilation with some nearly-degenerate next-to-lightest supersymmetric particle (NLSP) such as the lighter stau (stau_1), stop (stop_1) or chargino (chargino_1), resonant annihilation via direct-channel heavy Higgs bosons H/A, the light Higgs boson h or the Z boson, and enhanced annihilation via a larger Higgsino component of the LSP in the focus-point region. These mechanisms typically select lower-dimensional subspaces in MSSM scenarios such as the CMSSM, NUHM1, NUHM2 and pMSSM10. We analyze how future LHC and direct DM searches can complement each other in the exploration of the different DM mechanisms within these scenarios. We find that the stau_1 coannihilation regions of the CMSSM, NUHM1 and NUHM2 can largely be explored at the LHC via searches for missing E_T events and long-lived charged particles, whereas their H/A funnel, focus-point and chargino_1 coannihilation regions can largely be explored by the LZ and Darwin DM direct detection experiments. We find that the dominant DM mechanism in our pMSSM10 analysis is chargino_1 coannihilation: parts of its parameter space can be explored by the LHC, and a larger portion by future direct DM searches. Comment: 21 pages, 8 figures

    The impact of constructive operating lease capitalisation on key accounting ratios

    Current UK lease accounting regulation does not require operating leases to be capitalised in the accounts of lessees, although this is likely to change with the publication of FRS 5. This study conducts a prospective analysis of the effects of such a change. The potential magnitude of the impact of lease capitalisation upon individual users' decisions, market valuations, company cash flows, and managers' behaviour can be indicated by the effect on key accounting ratios, which are employed in decision-making and in financial contracts. The capitalised value of operating leases is estimated using a method similar to that suggested by Imhoff, Lipe and Wright (1991), adapted for the UK accounting and tax environment, and developed to incorporate company-specific assumptions. Results for 1994 for a random sample of 300 listed UK companies show that, on average, the unrecorded long-term liability represented 39% of reported long-term debt, while the unrecorded asset represented 6% of total assets. Capitalisation had a significant impact (at the 1% level) on six of the nine selected ratios (profit margin, return on assets, asset turnover, and three measures of gearing). Moreover, the Spearman rank correlation between each ratio before and after capitalisation revealed that the ranking of companies changed markedly for gearing measures in particular. There were significant inter-industry variations, with the services sector experiencing the greatest impact. An analysis of the impact of capitalisation over the five-year period from 1990 to 1994 showed that capitalisation had the greatest impact during the trough of the recession. Results were shown to be robust with respect to key assumptions of the capitalisation method. These findings contribute to the assessment of the economic consequences of a policy change requiring operating lease capitalisation. Significant changes in the magnitude of key accounting ratios and a major shift in company performance rankings suggest that interested parties' decisions and company cash flows are likely to be affected.
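
    The constructive capitalisation method adapted from Imhoff, Lipe and Wright rests on discounting the future minimum operating lease payments to estimate the unrecorded long-term liability, with the related asset then derived from assumptions about how much of the lease life has elapsed. The Python sketch below illustrates only that present-value idea; the payment schedule, discount rate and lease lives are hypothetical inputs, not the company-specific assumptions developed in the study.

        # Illustrative constructive capitalisation of operating leases.
        # All numeric inputs are hypothetical placeholders.

        def capitalised_liability(annual_payments, discount_rate):
            """Present value of the future minimum operating lease payments."""
            return sum(payment / (1 + discount_rate) ** year
                       for year, payment in enumerate(annual_payments, start=1))

        def capitalised_asset(liability, total_life, remaining_life):
            """Very rough asset estimate: the liability scaled by the fraction of
            lease life remaining, since the asset depreciates while the liability
            amortises over the remaining term."""
            return liability * (remaining_life / total_life)

        if __name__ == "__main__":
            payments = [100.0] * 10   # ten equal annual payments (hypothetical)
            liability = capitalised_liability(payments, discount_rate=0.10)
            asset = capitalised_asset(liability, total_life=15, remaining_life=10)
            print(f"Unrecorded liability: {liability:.1f}")
            print(f"Unrecorded asset:     {asset:.1f}")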