Partial Linear Quantile Regression and Bootstrap Confidence Bands
In this paper uniform confidence bands are constructed for nonparametric quantile estimates of regression functions. The method is based on the bootstrap, where resampling is done from a suitably estimated empirical density function (edf) for residuals. It is known that the approximation error for the uniform confidence band by the asymptotic Gumbel distribution is logarithmically slow. It is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model. A comparison with classical asymptotic uniform bands is presented through a simulation study. An economic application considers the differential effect of education levels in the labour market.
Keywords: Bootstrap, Quantile Regression, Confidence Bands, Nonparametric Fitting, Kernel Smoothing, Partial Linear Model
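The residual-bootstrap idea behind the bands can be sketched in a few lines. This is only an illustrative simplification: the helper names and the plain resampling of raw residuals are assumptions here, not the paper's smoothed-edf scheme, and `fit` stands in for whatever nonparametric quantile smoother is used.

```python
import random

def quantile(xs, tau):
    """Simple empirical tau-quantile from the sorted sample."""
    s = sorted(xs)
    return s[min(len(s) - 1, max(0, int(tau * len(s))))]

def bootstrap_uniform_band(x, y, fit, n_boot=500, alpha=0.05, seed=0):
    """Residual-bootstrap uniform confidence band around a quantile fit.

    `fit(x, y)` returns fitted values at the points x. The critical value
    is the (1 - alpha) quantile of the bootstrapped sup-deviations, so the
    band is uniform (same half-width everywhere), as in the paper's setup.
    """
    rng = random.Random(seed)
    fitted = fit(x, y)
    resid = [yi - fi for yi, fi in zip(y, fitted)]
    sup_devs = []
    for _ in range(n_boot):
        # Resample residuals (a crude stand-in for drawing from the edf).
        y_star = [fi + rng.choice(resid) for fi in fitted]
        fitted_star = fit(x, y_star)
        sup_devs.append(max(abs(fs - fi) for fs, fi in zip(fitted_star, fitted)))
    c = quantile(sup_devs, 1 - alpha)  # bootstrap critical value
    return [(fi - c, fi + c) for fi in fitted]
```

For instance, with `fit` taken as a constant median fit, the returned list gives a lower and upper band value at each design point.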
High energy resummations & QCD phenomenology
Unpublished doctoral thesis. Universidad Autónoma de Madrid, Facultad de Ciencias, Instituto de Física Teórica. Defense date: 16-09-201
Studying Turbulence Using Numerical Simulation Databases. 4: Proceedings of the 1992 Summer Program
Papers are presented under the following subject areas: small scales; turbulence physics; compressible flow and modeling; and reacting flows and combustion.
Wavelet Domain Inversion and Joint Deconvolution/Interpolation of Geophysical Data
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2003. Includes bibliographical references (leaves 168-174).
This thesis presents two innovations to geophysical inversion. The first provides a framework and an algorithm for combining linear deconvolution methods with geostatistical interpolation techniques. This allows sparsely sampled data to aid in image deblurring problems, or, conversely, noisy and blurred data to aid in sample interpolation. In order to overcome difficulties arising from high dimensionality, the solution must be derived in the correct framework and the structure of the problem must be exploited by an iterative solution algorithm. The effectiveness of the method is demonstrated first on a synthetic problem involving satellite remotely sensed data, and then on a real 3-D seismic data set combined with well logs.
The second innovation addresses how to use wavelets in a linear geophysical inverse problem. Wavelets have led to great successes in image compression and denoising, so it is interesting to see what, if anything, they can do for a general linear inverse problem. It is shown that a simple nonlinear operation of weighting and thresholding wavelet coefficients can consistently outperform classical linear inverse methods in terms of mean-square error across a broad range of noise magnitude in the data. Wavelets allow for an adaptively smoothed solution: smoothed more in uninteresting regions, less at geologically important transitions.
A third issue is also addressed, somewhat separate from the first two: the correct manipulation of discrete geophysical data. The theory of fractional splines is introduced, which allows for optimal approximation of real signals on a digital computer. Using splines, it can be shown that a linear operation on the spline can be equivalently represented by a matrix operating on the coefficients of a certain spline basis function. The form of the matrix, however, depends completely on the spline basis, and incorrect discretization of the operator into a matrix can lead to large errors in the resulting matrix/vector product.
by Jonathan A. Kane. Ph.D.
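The coefficient-thresholding step can be made concrete with a minimal sketch. This uses a one-level orthonormal Haar transform and soft thresholding of the detail coefficients; it is only an illustrative stand-in for the thesis's weighted multi-level thresholding, and all function names here are invented for the example.

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar transform: averages + details."""
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert haar_step exactly (the transform is orthonormal)."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([s * (a + d), s * (a - d)])
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noise-like) ones vanish."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t):
    """One-level Haar soft-threshold denoising of an even-length signal."""
    approx, detail = haar_step(signal)
    return haar_inverse(approx, soft_threshold(detail, t))
```

Because only the detail coefficients are shrunk, smooth regions pass through nearly untouched while small oscillations are suppressed, which is the "adaptively smoothed solution" behavior described above.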
Simulation of sea-state sequences
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
The present PhD study, in its first part, uses artificial neural networks (ANNs), an optimization technique called simulated annealing, and statistics to simulate the significant wave height (Hs) and mean zero-up-crossing period of 3-hourly sea-states at a location in the North East Pacific, using a proposed distribution called the hepta-parameter spline distribution for the conditional distribution of Hs or the zero-up-crossing period given some inputs. Two different seven-network sets of ANNs for the simulation and prediction of Hs and the zero-up-crossing period were trained using 20 years of observed values. The preceding values of Hs and the zero-up-crossing period were the most important inputs given to the networks, but the starting day of the simulated period was also necessary; the code replaced the day with the corresponding time and season. The networks were trained by a simulated annealing algorithm, and the outputs of the two sets of networks were used to calculate the parameters of the probability density function (pdf) of the proposed hepta-parameter distribution. After the seven parameters of the pdf are calculated from the network outputs, the Hs and zero-up-crossing period of the future sea-state are predicted by generating random numbers from the corresponding pdf.
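The simulated annealing training loop mentioned above follows a standard pattern, sketched below in generic form. The cooling schedule, step proposal, and parameter values are assumptions for illustration, not the thesis's actual settings.

```python
import math
import random

def simulated_annealing(cost, state, neighbor, t0=1.0, cooling=0.995,
                        steps=2000, seed=0):
    """Generic simulated annealing minimizer.

    Worse moves are accepted with probability exp(-delta / T), which lets
    the search escape local minima early on; T shrinks geometrically so
    the walk settles into a good state as training proceeds.
    """
    rng = random.Random(seed)
    best = cur = state
    best_c = cur_c = cost(state)
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        cand_c = cost(cand)
        delta = cand_c - cur_c
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        t *= cooling  # geometric cooling schedule
    return best, best_c
```

In the thesis's setting, `state` would be the ANN weight vector and `cost` a prediction-error measure over the 20-year record; here a toy scalar cost like `(w - 3)**2` suffices to exercise the loop.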
In another part of the thesis, vertical piles have been studied with the goal of identifying the range of sea-states suitable for safe pile-driving operations. The pile configuration, including the non-linear foundation and the gap between the pile and the pile-sleeve shims, was modeled using the finite element analysis facilities within ABAQUS. Dynamic analyses of the system for a sea-state characterized by Hs and the zero-up-crossing period, modeled as a combination of several wave components, were performed. A table of safe and unsafe sea-states was generated by repeating the analysis for various sea-states. If the prediction for a particular sea-state is repeated N times, of which n times prove to be safe, then the predicted sea-state can be said to be safe with a probability of 100(n/N) percent.
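The 100(n/N) estimate is a plain Monte Carlo proportion, which can be sketched directly. The `is_safe` callable below is a hypothetical stand-in for one full ABAQUS dynamic analysis, which obviously cannot be reproduced here.

```python
import random

def safe_probability(is_safe, sea_state, n_trials=1000, seed=0):
    """Repeat a stochastic safety analysis N times; report 100*(n/N).

    `is_safe(sea_state, rng)` plays the role of a single dynamic run for
    the given sea-state, returning True when the operation proves safe.
    """
    rng = random.Random(seed)
    n = sum(1 for _ in range(n_trials) if is_safe(sea_state, rng))
    return 100.0 * n / n_trials
```

Tabulating this percentage over a grid of (Hs, period) pairs reproduces the safe/unsafe table described above.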
The last part of the thesis deals with Hs return values. The return value is a widely used measure of wave extremes, playing an important role in determining the design wave used in the design of maritime structures. In this part, the Hs return value was calculated, demonstrating another application of the above simulation of future 3-hourly Hs values. The maxima method for calculating return values was applied in a way that avoids the conventional need for unrealistic assumptions. The significant wave height return value has also been calculated using the convolution concept, from a model presented by Anderson et al. (2001).
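For contrast with the thesis's approach, the conventional maxima method it improves on can be sketched in a few lines: fit a Gumbel distribution to block maxima (here by the method of moments, an assumption made for brevity) and invert its CDF at probability 1 - 1/T for return period T.

```python
import math
import statistics

def gumbel_return_value(block_maxima, return_period):
    """Classical maxima method for a T-year return value.

    Fits a Gumbel distribution to the block maxima via moment matching
    (scale = std*sqrt(6)/pi, location = mean - gamma*scale, gamma the
    Euler-Mascheroni constant), then evaluates its quantile at 1 - 1/T.
    """
    mean = statistics.fmean(block_maxima)
    std = statistics.stdev(block_maxima)
    beta = std * math.sqrt(6) / math.pi       # Gumbel scale
    mu = mean - 0.5772156649 * beta           # Gumbel location
    p = 1.0 - 1.0 / return_period
    return mu - beta * math.log(-math.log(p))  # Gumbel quantile
```

Given, say, annual Hs maxima, `gumbel_return_value(maxima, 100)` yields a 100-year design value; the thesis's contribution is to avoid the distributional assumptions this shortcut relies on.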
Mathematical Studies of Photochemical Air Pollution
In Part I a new, comprehensive model for a chemically reacting plume is presented that accounts for the effects of incomplete turbulent macro- and micro-mixing on chemical reactions between plume and ambient constituents. This "Turbulent Reacting Plume Model" (TRPM) is modular in nature, allowing for the use of different levels of approximation of the phenomena involved. The core of the model consists of the evolution equations for reaction progress variables appropriate for evolving, spatially varying systems ("local phenomenal extent of reaction"). These equations estimate the interaction of mixing and chemical reaction and require input parameters characterizing internal plume behavior, such as relative dispersion and fine-scale plume segregation. The model addresses deficiencies in previous reactive plume models. Calculations performed with the TRPM are compared with the experimental data of P.J.H. Builtjes for the reaction between NO in a point-source plume and ambient O3, taking place in a wind tunnel simulating a neutral atmospheric boundary layer. The comparison shows that the TRPM is capable of quantitatively predicting the retardation imposed on the evolution of nonlinear plume chemistry by incomplete mixing. Part IA (Chapters 1 to 3) contains a detailed description of the TRPM structure and comparisons of calculations with measurements, as well as a literature survey of reactive plume models. Part IB (Chapters 4 to 7) contains studies on the turbulent dispersion and reaction phenomena and plume dynamics, thus exposing in detail the underlying concepts and methods relevant to turbulent reactive plume modeling. New formulations for describing in-plume phenomena, such as the "Localized Production of Fluctuations Model" for the calculation of the plume concentration variance, are included here.
Part II (Chapter 8) presents a collection of distribution-based statistical methods that are appropriate for characterizing extreme events in air pollution studies. Applications to the evaluation of air quality standards, to the formulation of rollback calculations, and to the use of plume models are included here.
Functional quantization
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 119-121).
Data is rarely obtained for its own sake; oftentimes, it is a function of the data that we care about. Traditional data compression and quantization techniques, designed to recreate or approximate the data itself, gloss over this point. Are performance gains possible if source coding accounts for the user's function? How about when the encoders cannot themselves compute the function? We introduce the notion of functional quantization and use the tools of high-resolution analysis to get to the bottom of this question. Specifically, we consider real-valued raw data X_1^n and scalar quantization of each component X_i of this data. First, under the constraints of fixed-rate quantization and variable-rate quantization, we obtain asymptotically optimal quantizer point densities and bit allocations. Introducing the notions of functional typicality and functional entropy, we then obtain asymptotically optimal block quantization schemes for each component. Next, we address the issue of non-monotonic functions by developing a model for high-resolution non-regular quantization. When these results are applied to several examples, we observe striking improvements in performance.
Finally, we answer three questions by means of the functional quantization framework: (1) Is there any benefit to allowing encoders to communicate with one another? (2) If transform coding is to be performed, how does a functional distortion measure influence the optimal transform? (3) What is the rate loss associated with a suboptimal quantizer design? In the process, we demonstrate how functional quantization can be a useful and intuitive alternative to more general information-theoretic techniques.
by Vinith Misra. M.Eng.
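The gain from function-aware quantization can be demonstrated with a toy sketch. Companding through a monotone map `g` makes the quantizer cells uniform in g-space; taking `g` equal to the user's function is a crude stand-in for the optimal point densities derived in the thesis (everything here, including the function names, is an illustrative assumption).

```python
import random

def uniform_quantize(v, lo, hi, levels):
    """Midpoint uniform scalar quantizer on [lo, hi]."""
    if hi <= lo:
        return lo
    step = (hi - lo) / levels
    idx = min(levels - 1, max(0, int((v - lo) / step)))
    return lo + (idx + 0.5) * step

def functional_mse(f, samples, levels, g=None, g_inv=None):
    """Mean-square error in f(X) after scalar-quantizing X.

    With g=None the raw data is quantized, ignoring the function; passing
    a compander g (with inverse g_inv) instead places cells uniformly in
    g-space, concentrating resolution where f is most sensitive.
    """
    if g is None:
        g, g_inv = (lambda x: x), (lambda x: x)
    ys = [g(x) for x in samples]
    lo, hi = min(ys), max(ys)
    err = 0.0
    for x, y in zip(samples, ys):
        x_hat = g_inv(uniform_quantize(y, lo, hi, levels))
        err += (f(x) - f(x_hat)) ** 2
    return err / len(samples)
```

For a steep function such as f(x) = x^3 on uniform samples, the function-aware quantizer yields a visibly smaller functional distortion than plain quantization at the same number of levels, echoing the "striking improvements" reported above.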