What We Are Paying for: A Quality Adjusted Price Index for Laptop Microprocessors
A microprocessor contains the central processing unit and serves as the "brain" of a computer. Over the past several decades, we have benefited greatly from its technological improvement. To accurately measure the contribution of this improvement to economic growth, we need a quality-adjusted price index, which also helps us understand quality and technology trends in microprocessors. The quality trend in desktop microprocessors has been studied extensively; for my senior economics thesis, I focus on microprocessors for laptops. Using newly collected data on laptop microprocessor prices and performance metrics, I construct a quality-adjusted price index spanning the past ten years. Across a range of empirical specifications, I find a sharp decrease in quality-adjusted price over 2004-2013, though smaller in magnitude since 2010. These results may suggest a different pattern of technological improvement and/or changing pricing strategies in the laptop segment of the microprocessor industry.
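A quality-adjusted index of this kind is often built with a time-dummy hedonic regression: regress log price on quality characteristics plus period dummies, and read the index off the exponentiated dummy coefficients. The sketch below illustrates the idea on made-up data, with hypothetical quality metrics (log clock speed, log core count) standing in for the thesis's actual performance measures.

```python
# Time-dummy hedonic regression sketch: regress log price on quality
# metrics plus a period dummy; the exponentiated dummy coefficient is the
# quality-adjusted price relative for that period. Data are illustrative.
import numpy as np

# columns: log clock speed, log core count (hypothetical quality metrics)
quality = np.array([
    [0.7, 0.0],
    [0.9, 0.7],
    [0.8, 0.0],
    [1.0, 0.7],
    [1.2, 1.1],
    [1.1, 0.7],
])
log_price = np.array([5.0, 5.2, 4.9, 4.8, 5.1, 4.7])
year = np.array([0, 0, 0, 1, 1, 1])   # two periods, base period = 0

# design matrix: intercept, quality metrics, dummy for period 1
X = np.column_stack([np.ones(len(log_price)), quality,
                     (year == 1).astype(float)])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# quality-adjusted price index for period 1 relative to period 0
index_y1 = float(np.exp(beta[-1]))
```

Holding quality fixed, `index_y1` below one indicates that quality-adjusted prices fell between the two periods.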
Automatic Kappa Weighting for Instrumental Variable Models of Complier Treatment Effects
We propose debiased machine learning estimators for complier parameters, such
as local average treatment effect, with high dimensional covariates. To do so,
we characterize the doubly robust moment function for the entire class of
complier parameters as the combination of Wald and kappa weight
formulations. We directly estimate the kappa weights, rather than their
components, in order to eliminate the numerically unstable step of inverting
propensity scores of high dimensional covariates. We prove our estimator is
balanced, consistent, asymptotically normal, and semiparametrically efficient,
and use it to estimate the effect of 401(k) participation on the distribution
of net financial assets. Comment: 68 pages, 5 figures, 2 tables
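Without covariates, the local average treatment effect that these estimators target reduces to the Wald ratio of reduced-form to first-stage differences. A toy sketch of that ratio on simulated data (not the paper's debiased machine learning estimator, which handles high-dimensional covariates and avoids inverting estimated propensity scores):

```python
# Toy Wald estimator for the local average treatment effect (LATE):
#   LATE = (E[Y|Z=1] - E[Y|Z=0]) / (E[D|Z=1] - E[D|Z=0]).
# Simulated data in which every unit complies (D = Z), so the LATE
# equals the true treatment effect, set to 2 here.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
Z = rng.integers(0, 2, n)            # binary instrument
D = Z                                # perfect compliance for this toy
Y = 2.0 * D + rng.normal(size=n)     # outcome with treatment effect 2

late = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (
    D[Z == 1].mean() - D[Z == 0].mean())
```

With imperfect compliance the denominator shrinks below one and the ratio still recovers the effect among compliers; conditioning on covariates is what requires the kappa-weight machinery.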
Active Noise Control System With Adaptive Wind Noise Mitigation
This disclosure describes adaptive wind noise mitigation to provide adaptive acoustic transparency that is based on ambient wind levels. An acoustic transparency system is proposed that includes feedback and feedforward filters. The feedback filter can be dynamically throttled to mitigate low-frequency band wind noise. The feedforward filter is utilized to reduce mid-frequency band wind noise while still maintaining or enhancing high-frequency acoustic transparency to enable better conversation quality. A two-microphone coherence-based metric is utilized to detect wind events and to adaptively adjust a transparency level based on the detected wind noise. Digital signal processing (DSP) control blocks are utilized to mitigate mid- and low-frequency wind noise passing into a user's ear, while maintaining or enhancing high-frequency transparency gain that enables more speech to pass through to the user's ear, thereby improving conversational quality even in the presence of loud wind noise.
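The two-microphone coherence metric exploits the fact that wind turbulence is largely uncorrelated between closely spaced microphones, while acoustic sources arrive coherently at both. A minimal sketch of such a detector, with an illustrative frequency band and threshold that are not taken from the disclosure:

```python
# Sketch of a two-microphone coherence-based wind detector: low mean
# magnitude-squared coherence in a low-frequency band signals a wind
# event. Band edges and threshold are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 16_000
rng = np.random.default_rng(1)
t = np.arange(fs) / fs

# "windy" case: independent noise at each mic plus a weak shared tone
shared = 0.1 * np.sin(2 * np.pi * 1000 * t)
mic1 = rng.normal(size=fs) + shared
mic2 = rng.normal(size=fs) + shared

f, Cxy = coherence(mic1, mic2, fs=fs, nperseg=512)
low_band = (f >= 50) & (f <= 500)
wind_detected = bool(Cxy[low_band].mean() < 0.3)   # illustrative threshold
```

In a real system the detector output would throttle the feedback filter and scale the transparency gain continuously rather than making a hard binary decision.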
Adapting to Misspecification
Empirical research typically involves a robustness-efficiency tradeoff. A
researcher seeking to estimate a scalar parameter can invoke strong assumptions
to motivate a restricted estimator that is precise but may be heavily biased,
or they can relax some of these assumptions to motivate a more robust, but
variable, unrestricted estimator. When a bound on the bias of the restricted
estimator is available, it is optimal to shrink the unrestricted estimator
towards the restricted estimator. For settings where a bound on the bias of the
restricted estimator is unknown, we propose adaptive shrinkage estimators that
minimize the percentage increase in worst case risk relative to an oracle that
knows the bound. We show that adaptive estimators solve a weighted convex
minimax problem and provide lookup tables facilitating their rapid computation.
Revisiting five empirical studies where questions of model specification arise,
we examine the advantages of adapting to -- rather than testing for --
misspecification. Comment: 69 pages, 12 figures
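For the known-bound benchmark the abstract starts from, the worst-case-MSE-optimal linear shrinkage has a closed form; it is the adaptive case with an unknown bound that requires the paper's lookup tables. A sketch of that oracle benchmark, ignoring the restricted estimator's own variance as a simplifying assumption:

```python
# Linear shrinkage of an unrestricted estimate toward a restricted one
# when a bound B on the restricted estimator's bias is known. Worst-case
# MSE of w*unrestricted + (1-w)*restricted is w^2*V + (1-w)^2*B^2
# (treating the restricted estimator as (near-)noiseless), which is
# minimized at w = B^2 / (B^2 + V).

def shrink(theta_u, theta_r, var_u, bias_bound):
    """Shrink the unrestricted estimate toward the restricted one."""
    w = bias_bound**2 / (bias_bound**2 + var_u)
    return theta_r + w * (theta_u - theta_r)

est = shrink(theta_u=1.5, theta_r=1.0, var_u=0.25, bias_bound=0.5)
# here w = 0.25 / 0.5 = 0.5, so est = 1.0 + 0.5 * 0.5 = 1.25
```

As the bias bound grows relative to the unrestricted variance, the weight approaches one and the estimator reverts to the robust unrestricted estimate.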
Using Multiple Outcomes to Improve the Synthetic Control Method
When there are multiple outcome series of interest, Synthetic Control
analyses typically proceed by estimating separate weights for each outcome. In
this paper, we instead propose estimating a common set of weights across
outcomes, by balancing either a vector of all outcomes or an index or average
of them. Under a low-rank factor model, we show that these approaches lead to
lower bias bounds than separate weights, and that averaging leads to further
gains when the number of outcomes grows. We illustrate this via simulation and
in a re-analysis of the impact of the Flint water crisis on educational
outcomes. Comment: 36 pages, 6 figures
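Estimating one common set of weights amounts to a single simplex-constrained least-squares fit on the stacked (or averaged) pre-treatment outcome series, rather than one fit per outcome. A sketch on synthetic data:

```python
# Common synthetic-control weights across outcomes: stack all pre-period
# outcome series into one long vector and solve a least-squares problem
# with weights constrained to the simplex. Data are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_periods, n_outcomes, n_donors = 8, 3, 4

# donor matrix: (periods * outcomes) x donors; treated unit is an exact
# convex combination of donors in this toy, so the fit can be perfect
X = rng.normal(size=(n_periods * n_outcomes, n_donors))
true_w = np.array([0.5, 0.3, 0.2, 0.0])
y = X @ true_w

def loss(w):
    return np.sum((y - X @ w) ** 2)

res = minimize(
    loss,
    x0=np.full(n_donors, 1 / n_donors),
    bounds=[(0, 1)] * n_donors,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
    method="SLSQP",
)
weights = res.x
```

The same weights are then applied to every post-period outcome series, which is what yields the lower bias bounds under the low-rank factor model.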
A fast and reliable numerical method for analyzing loaded rolling element bearing displacements and stiffness
The load-displacement relation for a rolling element bearing is a system of nonlinear algebraic equations describing the relationship between bearing forces and the displacements needed to compute the bearing stiffness. The computed bearing stiffness is typically used to represent the bearing when modeling the whole geared rotor system, so that system parameters can be optimized to minimize unwanted vibrations. In this study, a robust numerical scheme called the energy method is developed and applied to solve for the bearing displacements from the potential energy of the bearing system, instead of solving these nonlinear algebraic equations using classical numerical integration. The proposed energy method seeks the minimum of the potential energy, derived from the theory of elasticity, expressed as a function of the displacements of the inner ring of the rolling bearing relative to the housing support structure. Solving the system of nonlinear algebraic equations is thereby converted into a global optimization problem in which the potential energy is the objective function. The global optimization algorithm produces the bearing displacements that minimize the potential energy of the bearing system. Parameter studies of bearing stiffness, expressed explicitly in terms of the bearing displacements, are conducted with varying unloaded contact angles and varying orbital positions of the rolling elements. The energy method is shown to yield the correct solution efficiently and reliably.
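The energy method can be illustrated on a planar toy bearing: Hertzian contact strain energy minus the work of the external load gives a potential whose minimizer is the equilibrium displacement. All parameters below are illustrative assumptions, not values from the paper, and a general-purpose local optimizer stands in for the paper's global optimization algorithm.

```python
# Toy 2-DOF energy method for a radial ball bearing. Contact deflection
# at ball j is d_j = max(0, dx*cos(psi_j) + dy*sin(psi_j)); Hertzian
# strain energy per contact is (2/5)*K*d_j^(5/2); equilibrium minimizes
#   Pi(dx, dy) = sum_j (2/5)*K*d_j^(5/2) - Fx*dx - Fy*dy.
import numpy as np
from scipy.optimize import minimize

K = 1.0e7                              # contact stiffness coeff. (toy value)
psi = 2 * np.pi * np.arange(8) / 8     # 8 balls, equally spaced
Fx, Fy = 1000.0, 0.0                   # external radial load (N)

def potential(u):
    dx, dy = u
    d = np.maximum(0.0, dx * np.cos(psi) + dy * np.sin(psi))
    return (2 / 5) * K * np.sum(d ** 2.5) - Fx * dx - Fy * dy

res = minimize(potential, x0=np.array([1e-3, 0.0]), method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-12, "maxiter": 5000})
dx, dy = res.x
```

At the minimum, the gradient condition reproduces the force balance F = sum_j K*d_j^(3/2)*cos(psi_j), i.e. the nonlinear algebraic equations the method avoids solving directly; the bearing stiffness follows from the curvature (Hessian) of the potential at the minimizer.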
Study on Matching Ability Between Cement Particle Size and Permeability in the Process of Oil Reservoir Plugging
In order to satisfy the demands of plugging reservoirs at different radial depths by injecting a cement plugging agent, the cement particle size must be optimized. Through laboratory experiments, the relationship between cement particle size and permeability was investigated by both macroscopic and microscopic analysis. It is observed that reservoirs with permeabilities of 50~200 mD match well with cement agents whose particle sizes are less than 5 μm; permeabilities of 200~400 mD match particle sizes of 5~10 μm; permeabilities of 400~700 mD match particle sizes of 10~20 μm; and permeabilities above 700 mD match particle sizes greater than 20 μm. The plugging success rates of all matching experiments exceed 90%. This result is important for directing plugging operations in the field. Key words: Plugging off and channeling prevention; Cement particle size; Permeability; Matching relationship; Experimental study
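The reported matching relationship is effectively a lookup table, and can be encoded directly. Since the abstract's permeability ranges share endpoints, the shared boundary values are assigned to the larger-size range here as an assumption:

```python
# Lookup encoding the study's permeability -> particle size matching:
#   50~200 mD -> < 5 um, 200~400 mD -> 5~10 um,
#   400~700 mD -> 10~20 um, > 700 mD -> > 20 um.
# Shared range endpoints are assigned upward (an assumption).
def matching_particle_size(permeability_md: float) -> str:
    """Return the matching cement particle size range for a permeability in mD."""
    if 50 <= permeability_md < 200:
        return "< 5 um"
    if 200 <= permeability_md < 400:
        return "5-10 um"
    if 400 <= permeability_md < 700:
        return "10-20 um"
    if permeability_md >= 700:
        return "> 20 um"
    return "below studied range"

size = matching_particle_size(300)
```

Permeabilities below 50 mD fall outside the reported experiments, so the function flags them rather than extrapolating.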