Validation of Vector Magnitude Datasets: Effects of Random Component Errors
A statistically consistent and physically realistic approach for validating vector magnitude measurements is developed, based on a model for random measurement noise that explicitly satisfies a nonnegativity constraint for all "noisy" vector magnitude measurements. Numerical and analytic approximations are used to quantify the nonlinear functional dependence of sample conditional means on true values and component noise magnitudes. In particular, it is shown analytically that random component errors will result in overall vector magnitude biases. A simple nonlinear regression of measured sample conditional mean vector magnitudes (calculated from traditional collocated data) against Monte Carlo simulation results is proposed for determining both deterministic trends and random errors in the data to be validated. The approach is demonstrated using Seasat and ERS-1 scatterometer measurements and collocated buoy data. The approach accounts well for the observed qualitative features of the collocated datasets and yields realistic values of random component error magnitudes and deterministic gain and offset for each dataset. An apparent systematic insensitivity of scatterometers at low wind speeds is shown to be a consequence of random component speed errors if it is assumed that the comparison buoy measurements are error free.
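The magnitude bias from random component errors can be reproduced with a minimal Monte Carlo sketch in Python (the parameter values and setup below are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed values for illustration only.
u_true = np.arange(0.0, 21.0, 2.5)   # true vector magnitudes (m/s)
sigma = 1.5                          # per-component noise std dev (m/s)
n = 50_000                           # Monte Carlo samples per true speed

# Place each true vector along the x-axis (magnitude is rotation
# invariant), add independent Gaussian noise to both components, and take
# the magnitude, which is nonnegative by construction.
u = u_true[:, None] + sigma * rng.standard_normal((u_true.size, n))
v = sigma * rng.standard_normal((u_true.size, n))
mean_measured = np.hypot(u, v).mean(axis=1)

# The conditional mean exceeds the true magnitude, most strongly near
# zero: random component errors alone bias measured magnitudes high.
for t, m in zip(u_true, mean_measured):
    print(f"true {t:4.1f} m/s -> mean measured {m:5.2f} m/s")
```

At zero true speed the noisy magnitudes are Rayleigh distributed with mean σ√(π/2) ≈ 1.88 m s⁻¹ for σ = 1.5 m s⁻¹, which sets the low-wind-speed bias floor that the regression approach must account for.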
A Mixed-Method Analysis of Fatal Attacks on Police by Far-Right Extremists
Several recent high-profile homicides of police officers have brought increased attention to issues of far-right extremist violence in the United States. We still, however, know very little about why (and how) certain encounters between far-right extremists and police result in violence. To fill this research gap, we conduct a mixed-method analysis of far-right antipolice homicides based on quantitative and qualitative data from the U.S. Extremist Crime Database. We begin by categorizing cases based on key aspects of homicide storylines. We then comparatively analyze attributes of event precursor, transaction, and aftermath stages across four storyline categories. Finally, a case study is purposively selected to follow up on each storyline category to better capture the nuances of fluid homicide processes. Our findings have important implications for identifying triggering events, escalation factors, and other situated sets of conditions and circumstances that contribute to deadly outcomes for police officers.
Scatterometer-Based Assessment of 10-m Wind Analyses from the Operational ECMWF and NCEP Numerical Weather Prediction Models
Wind measurements by the National Aeronautics and Space Administration (NASA) scatterometer (NSCAT) and the SeaWinds scatterometer on the NASA QuikSCAT satellite are compared with buoy observations to establish that the accuracies of both scatterometers are essentially the same. The scatterometer measurement errors are best characterized in terms of random component errors, which are about 0.75 and 1.5 m s⁻¹ for the along-wind and crosswind components, respectively.
The NSCAT and QuikSCAT datasets provide a consistent baseline from which recent changes in the accuracies of 10-m wind analyses from the European Centre for Medium-Range Weather Forecasts (ECMWF) and the U.S. National Centers for Environmental Prediction (NCEP) operational numerical weather prediction (NWP) models are assessed from consideration of three time periods: September 1996–June 1997, August 1999–July 2000, and February 2002–January 2003. These correspond, respectively, to the 9.5-month duration of the NSCAT mission, the first 12 months of the QuikSCAT mission, and the first year after both ECMWF and NCEP began assimilating QuikSCAT observations. There were large improvements in the accuracies of both NWP models between the 1997 and 2000 time periods. Though modest in comparison, there were further improvements in 2002, at least partly attributable to the assimilation of QuikSCAT observations in both models.
There is no evidence of bias in the 10-m wind speeds in the NCEP model. The 10-m wind speeds in the ECMWF model, however, are shown to be biased low by about 0.4 m s⁻¹. While it is difficult to eliminate systematic errors this small, a bias of 0.4 m s⁻¹ corresponds to a typical wind stress bias of more than 10%. This wind stress bias increases to nearly 20% if atmospheric stability effects are not taken into account. Biases of these magnitudes will result in significant systematic errors in ocean general circulation models that are forced by ECMWF winds.
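The quoted stress sensitivity follows from the quadratic dependence of wind stress on wind speed and can be checked with a quick back-of-envelope calculation (a sketch assuming the standard bulk formula with a constant drag coefficient and an illustrative 8 m/s reference speed):

```python
# Back-of-envelope check of the wind stress bias quoted above. The bulk
# formula tau = rho_a * C_D * U**2 is standard; the reference speed and
# constant drag coefficient are simplifying assumptions for illustration.
rho_a = 1.2    # air density (kg m^-3)
C_D = 1.2e-3   # neutral drag coefficient (assumed constant)
U = 8.0        # representative 10-m wind speed (m/s)
dU = 0.4       # ECMWF low speed bias (m/s)

tau = rho_a * C_D * U**2
tau_biased = rho_a * C_D * (U - dU)**2
print(f"fractional stress bias: {(tau - tau_biased) / tau:.1%}")  # ~9.8%

# Linearizing tau ~ U**2 gives a relative stress bias of about
# 2 * dU / U = 2 * 0.4 / 8 = 10%; a drag coefficient that increases with
# wind speed would push this above 10%, consistent with the text.
```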
The Relationship between Winds, Surface Roughness, and Radar Backscatter at Low Incidence Angles from TRMM Precipitation Radar Measurements
One year of collocated, rain-free normalized backscatter cross-section measurements from the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and wind speed estimates from the TRMM Microwave Imager are used to construct fully empirical model functions relating cross section to wind speed for incidence angles from 0° (nadir) to 18°. With the exception of a ~1.9-dB offset, the TRMM PR model function at nadir compares well with TOPEX and Geosat results. For the first time using spaceborne data, least squares fits of the TRMM PR model functions to the geometric optics scattering formulation allow direct solution for the magnitudes and wind speed dependencies of Ku-band effective nadir reflectivity and effective mean square slope [s(u)]. Effective reflectivity is found to decrease slightly with increasing wind speed above 3.5 m s⁻¹. A logarithmic dependence of s(u) is found for wind speeds between 1 and 10 m s⁻¹. Both linear and two-branch logarithmic dependencies (with a slope break at 10 m s⁻¹) are excellent and statistically indistinguishable parameterizations for s(u) for wind speeds between 5 and 19 m s⁻¹. Calculations using the Elfouhaily et al. wave model suggest that the effective Ku-band s(u) corresponds to a cutoff wavelength of 3 to 6 times the radar wavelength for wind speeds from 5 to 25 m s⁻¹; at lower wind speeds, the cutoff wavelength increases rapidly, consistent with earlier observations.
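A least squares fit to the geometric optics form can be sketched in a few lines (the functional form below is the standard quasi-specular result; the "observed" cross sections are synthetic placeholders, not TRMM PR data):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma0_go(theta_deg, refl, mss):
    """Geometric optics cross section (linear units):
    sigma0 = refl * sec^4(theta) * exp(-tan^2(theta) / mss) / mss."""
    theta = np.radians(theta_deg)
    return refl / (mss * np.cos(theta)**4) * np.exp(-np.tan(theta)**2 / mss)

# Synthetic "observations" at PR-like incidence angles (assumed values).
theta = np.arange(0.0, 19.0, 2.0)
rng = np.random.default_rng(1)
obs = sigma0_go(theta, 0.5, 0.035) * (1 + 0.02 * rng.standard_normal(theta.size))

# Fit for the effective reflectivity and effective mean square slope;
# repeating the fit within wind speed bins yields their wind dependence.
(refl, mss), _ = curve_fit(sigma0_go, theta, obs, p0=[0.6, 0.03])
print(f"effective reflectivity ~ {refl:.3f}, effective mss ~ {mss:.4f}")
```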
Immigration in the 21st Century: Perspectives on Law and Policy
The program consisted of a keynote presentation by Linda Chavez, Chairman of the Center for Equal Opportunity, followed by a panel featuring Leticia Saucedo, Associate Professor of Law at the William S. Boyd School of Law, University of Nevada, Las Vegas; Andrea Rahal, Associate at McCandlish Holton, PC in Richmond; Robert Redmond, Jr., Partner at Williams Mullen in Richmond; Michael Hethmon, General Counsel for the Immigration Reform Law Institute; and Tim Freilich, Legal Director of the Legal Aid Justice Center's Immigration Advocacy Program. Christopher Nugent, Senior Counsel at Holland & Knight, D.C. Office, served as moderator.
Sampling Errors in Wind Fields Constructed from Single and Tandem Scatterometer Datasets
Sampling patterns and sampling errors from various scatterometer datasets are examined. Four single and two tandem scatterometer mission scenarios are considered. The single scatterometer missions are ERS (with a single, narrow swath), NSCAT and ASCAT (dual swaths), and QuikSCAT (a single, broad swath obtained from the SeaWinds instrument). The two tandem scenarios are combinations of the broad-swath SeaWinds scatterometer with ASCAT and QuikSCAT. The dense, nearly uniform distribution of measurements within swaths, combined with the relatively sparse, nonuniform placement of the swaths themselves, creates complicated space–time sampling patterns. The temporal sampling of all of the missions is characterized by bursts of closely spaced samples separated by longer gaps and is highly variable in both latitude and longitude. Sampling errors are quantified by the expected squared bias of particular linear estimates of component winds. Modifications to a previous method that allow more efficient expected squared bias calculations are presented and applied. Sampling errors depend strongly on both the details of the temporal sampling of each mission and the assumed temporal scales of variability in the wind field but are relatively insensitive to different spatial scales of variability. With the exception of ERS, all of the scatterometer scenarios can be used to make low-resolution (3° and 12 days) wind component maps with errors at or below the 1 m s⁻¹ level. Only datasets from the broad-swath and tandem mission scenarios can be used for higher-resolution maps with similar levels of error, emphasizing the importance of the improved spatial and temporal coverage of those missions. A brief discussion of measurement errors concludes that sampling error is generally the dominant term in the overall error budget for maps constructed from scatterometer datasets.
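The flavor of the expected squared bias calculation can be conveyed with a toy Monte Carlo (the AR(1) wind model, decorrelation scale, and burst pattern below are illustrative assumptions, not any mission's actual sampling or the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.05                      # time grid step (days)
t = np.arange(0.0, 12.0, dt)   # one 12-day averaging window
tau = 2.0                      # assumed wind decorrelation scale (days)
phi = np.exp(-dt / tau)        # AR(1) coefficient

# Simulate many realizations of a red-noise wind component (unit variance).
x = np.zeros((5000, t.size))
x[:, 0] = rng.standard_normal(5000)
for k in range(1, t.size):
    x[:, k] = phi * x[:, k - 1] + np.sqrt(1 - phi**2) * rng.standard_normal(5000)

true_mean = x.mean(axis=1)     # true 12-day average

# Bursts of two closely spaced samples per day, with a 3-day gap in the
# middle, crudely mimicking swath revisits and orbit geometry.
sample_days = [d + off for d in range(12) if not 5 <= d <= 7
               for off in (0.0, 0.1)]
idx = np.rint(np.array(sample_days) / dt).astype(int)
est_mean = x[:, idx].mean(axis=1)

# Expected squared sampling error of the simple average, in units of the
# signal variance; gaps and bursts inflate it relative to uniform
# sampling with the same number of samples.
print(f"expected squared sampling error: {np.mean((est_mean - true_mean)**2):.4f}")
```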
A New Approach to Patent Reform
Scholars and policy makers have tried for years to solve the tenacious and harmful crisis of low-quality, erroneously granted patents. Far from resolving the problem, these determined efforts have resulted in hundreds of conflicting policy proposals, failed Congressional bills, and no way to evaluate the policies' value or impact or to decide among the overwhelming multiplicity of proposals.
This Article provides not only new solutions but a new approach for designing and assessing policies, both in patent law and in legal systems more generally. We introduce a formal economic model of the patent system that differs from existing scholarship because it permits us to (1) determine how a policy change to one part of the patent system affects the system as a whole and (2) quantify the impact of policy changes. Existing scholarship typically analyzes a policy by assessing its effect on just the targeted element of the patent system; but legal systems are complex, with interrelated components and players who react along multiple margins, so these analyses are incomplete and sometimes incorrect. Our approach fixes this problem, providing a comprehensive understanding of how a policy change affects the patent system from beginning to end. It also permits us to conduct complex analyses, such as varying multiple policies at once. Further, much existing scholarship fails to quantify the magnitude of a policy's effect, and even empirical scholarship can only measure the effect of an already-implemented policy, not predict the effect of a proposed change. Quantification is critical because policies generally have multiple effects, often in countervailing directions. Quantification, as shown using our model, permits scholars to determine the overall direction and size of a theoretically ambiguous effect. Quantification also allows us to compare the social welfare effects of different reforms so that policy makers know where to focus their efforts.
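The value of quantification under countervailing effects can be seen in a deliberately crude toy calculation (all functions and numbers below are invented for illustration and bear no relation to the Article's calibrated model):

```python
# Toy example: stricter patent examination raises examination costs but
# avoids downstream litigation over erroneously granted patents. Theory
# alone leaves the net welfare effect ambiguous; magnitudes decide it.
def net_welfare_change(extra_exam_cost_per_patent, applications_per_year,
                       bad_grants_avoided, litigation_cost_per_bad_grant):
    cost = extra_exam_cost_per_patent * applications_per_year
    benefit = bad_grants_avoided * litigation_cost_per_bad_grant
    return benefit - cost

# Same qualitative story, opposite signs (all dollar figures invented):
print(net_welfare_change(500, 300_000, 40_000, 5_000))    # +50,000,000
print(net_welfare_change(2_000, 300_000, 20_000, 5_000))  # -500,000,000
```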
We apply our model to several of the most prominent policy debates in patent law. We conclude that certain reforms, such as regulation of settlement licenses and increased examination intensity, yield large gains in social welfare and should be prioritized. Other reforms that are popular with scholars, including decreasing the availability of injunctions and reducing litigation costs, produce surprisingly small gains in social welfare. Existing scholarship often operates too much on intuition, which, we show, can be wrong. Our new approach to patent reform offers deeper understanding and a more effective evaluation framework.
- …