5,376 research outputs found

    APPLICATION OF MODERN STATISTICAL TOOLS TO SOLVING CONTEMPORARY ECONOMIC PROBLEMS: EVALUATION OF THE REGIONAL AGRICULTURAL CAMPAIGN IMPACT AND THE USDA FORECASTING EFFORTS

    The research comprises three studies that apply statistical tools to two economic issues: the impact of a regional agricultural campaign on participating restaurants and the performance of U.S. Department of Agriculture (USDA) forecasting reports in agricultural commodity markets. The first study examined how various components of the Certified South Carolina campaign are valued by participating restaurants. A choice experiment was conducted to estimate the average willingness to pay (WTP) for each campaign component using a mixed logit model. Three existing campaign components--Labeling, Multimedia Advertising, and the 'Fresh on the Menu' program--were found to have a significant positive economic value. Results also revealed that the type of restaurant, the level of satisfaction with the campaign, and the factors motivating participation significantly affected restaurants' WTP for the campaign components. The second study evaluated revision inefficiencies in all supply, demand, and price categories of the World Agricultural Supply and Demand Estimates (WASDE) forecasts for U.S. corn, soybeans, wheat, and cotton. Significant correlations between consecutive forecast revisions were found in all categories for all crops, except the seed category in wheat forecasts. The study also developed a statistical procedure for correcting these inefficiencies. The procedure accounted for outliers, the size and direction of forecasts, and the stability of revision inefficiency. Findings suggested that the adjustment procedure has the highest potential for improving accuracy in corn, wheat, and cotton production forecasts. The third study evaluated the impact of four public reports and one private report on the cotton market: Export Sales, Crop Progress, World Agricultural Supply and Demand Estimates (WASDE), Prospective Plantings, and Cotton This Month. The 'best fitting' GARCH-type models were selected separately for daily cotton futures close-to-close, close-to-open, and open-to-close returns from January 1995 through January 2012. In measuring the report effects, we controlled for day-of-week, seasonality, stock-level, and weekend-holiday effects on cotton futures returns. We found statistically significant impacts of the WASDE and Prospective Plantings reports on cotton returns. Furthermore, results indicated that the progression of market reaction varied across reports.
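    As a rough illustration of the revision-efficiency idea described above (not the dissertation's exact procedure), the sketch below regresses each forecast revision on the preceding one; a significant slope indicates forecast smoothing, and the fitted relation can then be used for a naive adjustment. The data and variable names are hypothetical.

```python
# Illustrative sketch (not the study's exact procedure): a Nordhaus-style test of
# forecast revision efficiency. If consecutive WASDE-style revisions are correlated,
# forecasts are "smoothed" and the current revision partly predicts the next one.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical monthly forecast paths for 200 marketing years (e.g., corn production).
forecasts = np.cumsum(rng.normal(0.0, 1.0, size=(200, 12)), axis=1) + 100.0
revisions = np.diff(forecasts, axis=1)            # r_t = f_t - f_{t-1}

r_curr = revisions[:, 1:].ravel()                 # r_t
r_prev = revisions[:, :-1].ravel()                # r_{t-1}

# Efficiency test: in r_t = a + b * r_{t-1} + e_t, b should be ~0 for efficient revisions.
ols = sm.OLS(r_curr, sm.add_constant(r_prev)).fit()
a_hat, b_hat = ols.params
print(f"slope b = {b_hat:.3f}, p-value = {ols.pvalues[1]:.3f}")

# Naive adjustment in the spirit of the paper's correction procedure:
# shift the latest forecast by the revision predicted from the previous one.
latest_forecast = forecasts[:, -1]
predicted_next_revision = a_hat + b_hat * revisions[:, -1]
adjusted_forecast = latest_forecast + predicted_next_revision
```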

    Assembly Bias of Dwarf-sized Dark Matter Haloes

    Previous studies indicate that assembly bias effects are stronger for lower-mass dark matter haloes. Here we make use of high-resolution re-simulations of rich clusters and their surroundings from the Phoenix Project, together with a large-volume cosmological simulation, the Millennium-II run, to quantify assembly bias effects on dwarf-sized dark matter haloes. We find that, in the regions around massive clusters, dwarf-sized haloes ($[10^9, 10^{11}]\,M_\odot$) form earlier ($\Delta z \sim 2$ in redshift) and possess larger $V_{\rm max}$ ($\sim 20\%$) than field haloes. We find that this environmental dependence is largely caused by tidal interactions between ejected haloes and their former hosts, while other large-scale effects are less important. Finally, we assess the effects of assembly bias on dwarf galaxy formation with a sophisticated semi-analytical galaxy formation model. We find that dwarf galaxies near massive clusters tend to be redder ($\Delta(u-r) = 0.5$) and have three times as much stellar mass as field galaxies with the same halo mass. These features should be detectable in observational data. Comment: 8 pages, 8 figures, accepted by MNRAS
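    A minimal sketch of how such an environmental comparison can be made at roughly fixed halo mass is given below. The catalogue is randomly generated (so the measured offsets come out near zero); it only stands in for the kind of Phoenix/Millennium-II analysis described in the abstract, where the offsets are $\Delta z \sim 2$ and $\sim 20\%$ in $V_{\rm max}$.

```python
# Minimal sketch (hypothetical catalogue, not the Phoenix/Millennium-II data):
# compare dwarf-sized haloes near a massive cluster with a field sample,
# measuring offsets in formation redshift and Vmax at roughly fixed halo mass.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
pos = rng.uniform(0.0, 100.0, size=(n, 3))                 # Mpc/h, toy box
logM = rng.uniform(9.0, 11.0, size=n)                      # dwarf-sized halo masses
z_form = rng.gamma(2.0, 1.0, size=n)                       # toy formation redshifts
vmax = 30.0 * 10 ** (0.3 * (logM - 10.0)) * rng.lognormal(0.0, 0.1, size=n)

cluster_centre = np.array([50.0, 50.0, 50.0])              # one massive cluster
r = np.linalg.norm(pos - cluster_centre, axis=1)
near, field = r < 5.0, r > 20.0                            # "near cluster" vs "field"

# Assembly-bias statistics: offsets in formation epoch and Vmax between the two samples
# (masses are drawn identically in both, so the comparison is at roughly fixed mass).
dz = np.median(z_form[near]) - np.median(z_form[field])
dv = np.median(vmax[near]) / np.median(vmax[field]) - 1.0
print(f"Delta z_form ~ {dz:.2f}, Vmax offset ~ {100 * dv:.1f}%")
```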

    Radiative Neutrino Mass with $Z_3$ Dark Matter: From Relic Density to LHC Signatures

    In this work we give a comprehensive analysis of the phenomenology of a specific $\mathbb{Z}_3$ dark matter (DM) model in which neutrino mass is induced at two loops by interactions with a DM particle that can be a complex scalar or a Dirac fermion. Both the DM properties relevant for relic density and direct detection and the LHC signatures are examined in great detail, and indirect detection of the gamma-ray excess from the Galactic Center is also discussed briefly. On the DM side, both semi-annihilation and co-annihilation processes play a crucial role in alleviating the tension in parameter space between relic density and direct detection. On the collider side, new decay channels resulting from $\mathbb{Z}_3$ particles lead to distinct signals at the LHC. Currently the trilepton signal is expected to give the most stringent bound for both scalar and fermion DM candidates, and the signatures of fermion DM are very similar to those of electroweakinos in simplified supersymmetric models. Comment: 40 pages, 24 figures
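    The role of semi-annihilation can be illustrated with a toy freeze-out calculation. The sketch below adds the usual $\mathbb{Z}_3$ semi-annihilation term to the standard Boltzmann equation for the comoving yield, with made-up masses and cross sections, a constant $g_*$, and a Maxwell-Boltzmann equilibrium abundance; it is not the model or parameter scan of the paper.

```python
# Hedged sketch (toy numbers, not the paper's model): freeze-out with both standard
# annihilation and Z3 semi-annihilation terms in the Boltzmann equation,
#   dY/dx = -(lam/x^2) * [ <sv>_ann (Y^2 - Yeq^2) + 0.5 <sv>_semi Y (Y - Yeq) ],
# written in the usual variables Y = n/s and x = m/T.
import numpy as np
from scipy.integrate import solve_ivp

m_dm    = 200.0          # GeV, assumed DM mass
g_star  = 90.0           # effective relativistic d.o.f. (held constant for simplicity)
g_dm    = 2.0            # internal d.o.f. of the DM particle
M_pl    = 1.22e19        # GeV, Planck mass
sv_ann  = 1.0e-9         # GeV^-2, assumed <sigma v> for DM DM -> SM SM
sv_semi = 2.0e-9         # GeV^-2, assumed <sigma v> for semi-annihilation DM DM -> anti-DM X

lam = np.sqrt(np.pi / 45.0) * np.sqrt(g_star) * M_pl * m_dm

def y_eq(x):
    # Non-relativistic equilibrium yield (Maxwell-Boltzmann approximation).
    return 0.145 * (g_dm / g_star) * x ** 1.5 * np.exp(-x)

def dY_dx(x, Y):
    ann  = sv_ann * (Y[0] ** 2 - y_eq(x) ** 2)
    semi = 0.5 * sv_semi * Y[0] * (Y[0] - y_eq(x))
    return [-(lam / x ** 2) * (ann + semi)]

sol = solve_ivp(dY_dx, (10.0, 1000.0), [y_eq(10.0)], method="LSODA", rtol=1e-8, atol=1e-15)
Y_inf = sol.y[0, -1]

# Relic abundance: Omega h^2 ~ 2.74e8 * (m/GeV) * Y_infinity.
print(f"Omega h^2 ~ {2.74e8 * m_dm * Y_inf:.3f}")
```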

    Testing Cannot Tell Whether Ballot-Marking Devices Alter Election Outcomes

    Like all computerized systems, ballot-marking devices (BMDs) can be hacked, misprogrammed, and misconfigured. Several approaches to testing BMDs have been proposed. In _logic and accuracy_ (_L&A_) tests, trusted agents input known test patterns into the BMD and check whether the printout matches. In _parallel_ or _live_ testing, agents use the BMDs on election day, emulating voters. In _passive_ testing, agents monitor the rate at which voters "spoil" ballots and request another opportunity to mark a ballot: an anomalously high rate might result from BMD malfunctions. In practice, none of these methods can protect against outcome-altering problems. L&A testing is ineffective in part because BMDs "know" the time and date of the test and the election. Neither L&A nor parallel testing can probe even a small fraction of the possible voting transactions that could comprise enough votes to change outcomes. Under mild assumptions, to develop a model of voter interactions with BMDs accurate enough to ensure that parallel tests could reliably detect changes to 5% of the votes (which could change margins by 10% or more) would require monitoring the behavior of more than a million voters in each jurisdiction in minute detail---but the median turnout by jurisdiction in the U.S. is under 3,000 voters. Given an accurate model of voter behavior, the number of tests required is still larger than the turnout in a typical U.S. jurisdiction. Under optimistic assumptions, passive testing that has a 99% chance of detecting a 1% change to the margin with a 1% false alarm rate is impossible in jurisdictions with fewer than about 1 million voters, even if the "normal" spoiled ballot rate were known exactly and did not vary from election to election and place to place.
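    The passive-testing claim is essentially a binomial power calculation. The sketch below finds the smallest number of voters for which a one-sided test of the spoiled-ballot rate reaches the stated detection probability and false-alarm rate. The base spoilage rate and the fraction of affected voters assumed to notice and spoil are made-up parameters, so the resulting figure only illustrates the order of magnitude rather than reproducing the paper's threshold of roughly one million voters.

```python
# Rough sketch of the passive-testing power calculation, with made-up parameters
# (base spoilage rate, fraction of affected voters who notice and spoil); the paper's
# own model and assumptions differ, so this is only an order-of-magnitude illustration.
from scipy.stats import binom

p0 = 0.010            # assumed "normal" spoiled-ballot rate, taken as known exactly
altered = 0.005       # altering 0.5% of votes can shift the margin by ~1%
notice = 0.20         # assumed fraction of affected voters who notice and spoil
p1 = p0 + altered * notice
alpha, power_target = 0.01, 0.99

def power_at(n: int) -> float:
    # One-sided test: alarm when the spoiled count exceeds the (1 - alpha) quantile under p0.
    k_crit = binom.ppf(1.0 - alpha, n, p0)
    return binom.sf(k_crit, n, p1)

# Find (roughly) the smallest jurisdiction size with the required detection probability.
n = 1000
while power_at(n) < power_target:
    n = int(n * 1.1)
print(f"~{n:,} voters needed for {power_target:.0%} power at {alpha:.0%} false-alarm rate")
```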