
    Color Filter Array Image Analysis for Joint Denoising and Demosaicking

    Noise is among the worst artifacts that affect the perceptual quality of the output from a digital camera. While cost-effective and popular, single-sensor solutions to camera architectures are not adept at noise suppression. In this scheme, data are typically obtained via a spatial subsampling procedure implemented as a color filter array (CFA), a physical construction whereby each pixel location measures the intensity of the light corresponding to only a single color. Aside from undersampling, observations made under noisy conditions typically deteriorate the estimates of the full-color image in the reconstruction process commonly referred to as demosaicking or CFA interpolation in the literature. A typical CFA scheme involves the canonical color triples (i.e., red, green, blue), and the most prevalent arrangement is the Bayer pattern. As the general trend of increased image resolution continues due to the prevalence of multimedia, the importance of interpolation is de-emphasized, while concerns for computational efficiency, noise, and color fidelity play an increasingly prominent role in the decision making of a digital camera architect. For instance, interpolation artifacts become less noticeable as the size of the pixel shrinks with respect to the image features, while the decreasing size of the pixel sensors in complementary metal oxide semiconductor (CMOS) and charge-coupled device (CCD) imagers makes the pixels more susceptible to noise. Photon-limited influences are also evident in low-light photography, from specialty cameras for precision measurement to indoor consumer photography. Sensor data, which can be interpreted as subsampled or incomplete image data, undergo a series of image processing procedures in order to produce a digital photograph. However, these same steps may amplify noise introduced during image acquisition. Specifically, the demosaicking step is a major source of conflict between the image processing pipeline and image sensor noise characterization, because the interpolation methods give high priority to preserving the sharpness of edges and textures. In the presence of noise, noise patterns may form false edge structures; therefore, the distortions at the output are typically correlated with the signal in a complicated manner that makes noise modelling mathematically intractable. Thus, it is natural to conceive of a rigorous tradeoff between demosaicking and image denoising.
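    As a concrete illustration of the CFA subsampling described above, the sketch below simulates a Bayer (RGGB) mosaic from a full RGB image and adds sensor noise before any interpolation would take place. It is a minimal illustration, not the paper's method; the RGGB layout, the use of NumPy, and the noise level are assumptions chosen for clarity.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate Bayer (RGGB) color filter array sampling.

    rgb: float array of shape (H, W, 3). Returns a single-channel mosaic
    in which each pixel location keeps only one color measurement.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites (even row, even col)
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites (odd row, odd col)
    return mosaic

# Noise enters at acquisition, i.e. before demosaicking, which is why
# interpolation can turn noise patterns into false edge structure.
rng = np.random.default_rng(0)
clean = rng.random((8, 8, 3))                      # stand-in RGB image
noisy_mosaic = bayer_mosaic(clean) + rng.normal(0, 0.05, size=(8, 8))
```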

    Statistical Software for State Space Methods

    In this paper we review the state space approach to time series analysis and establish the notation that is adopted in this special volume of the Journal of Statistical Software. We first provide some background on the history of state space methods for the analysis of time series. This is followed by a concise overview of linear Gaussian state space analysis, including the modelling framework and appropriate estimation methods. We discuss the important class of unobserved component models, which incorporate a trend, a seasonal, a cycle, and fixed explanatory and intervention variables for the univariate and multivariate analysis of time series. We continue the discussion by presenting methods for the computation of different estimates of the unobserved state vector: filtering, prediction, and smoothing. Estimation approaches for the other parameters in the model are also considered. Next, we discuss how the estimation procedures can be used for constructing confidence intervals, detecting outlier observations and structural breaks, and testing model assumptions of residual independence, homoscedasticity, and normality. We then show how ARIMA and ARIMA components models fit into the state space framework for time series analysis. We also provide a basic introduction to non-Gaussian state space models. Finally, we present an overview of the software tools currently available for the analysis of time series with state space methods, as they are discussed in the other contributions to this special volume.
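    As a minimal sketch of the unobserved components machinery reviewed here, the code below runs the Kalman filter for the local level model (a random walk level observed with noise), the simplest member of the class discussed in the paper. The variance values and simulated data are illustrative assumptions, not taken from the paper or from any particular software package.

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, a1=0.0, p1=1e7):
    """Kalman filter for the local level model
        y_t         = alpha_t + eps_t,   eps_t ~ N(0, sigma_eps2)
        alpha_{t+1} = alpha_t + eta_t,   eta_t ~ N(0, sigma_eta2)
    Returns filtered state means and variances."""
    n = len(y)
    a = np.empty(n + 1)            # predicted state mean
    p = np.empty(n + 1)            # predicted state variance
    a[0], p[0] = a1, p1            # (nearly) diffuse initialisation
    filt_mean, filt_var = np.empty(n), np.empty(n)
    for t in range(n):
        v = y[t] - a[t]            # one-step prediction error
        f = p[t] + sigma_eps2      # prediction error variance
        k = p[t] / f               # Kalman gain
        filt_mean[t] = a[t] + k * v
        filt_var[t] = p[t] * (1.0 - k)
        a[t + 1] = filt_mean[t]    # state transition is a random walk
        p[t + 1] = filt_var[t] + sigma_eta2
    return filt_mean, filt_var

# Illustrative series: a slowly drifting level observed with noise.
rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(0.0, 0.1, 100))
y = level + rng.normal(0.0, 0.5, 100)
m, v = local_level_filter(y, sigma_eps2=0.25, sigma_eta2=0.01)
```

    Smoothing adds a backward pass over the same quantities, and richer unobserved component models (trend, seasonal, cycle, regression effects) extend the state vector within the same recursions.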

    Unobserved Component Time Series Models with ARCH Disturbances

    We are also grateful to Neil Shephard, Mervyn King, Sushil Wadhwani, Manuel Arellano, Herman van Dijk, Rob Engle, and several anonymous referees for their comments. In addition we would like to thank Ray Chou, Frank Diebold, and Charles Goodhart for supplying us with the data used in the applications. The second author acknowledges financial support from the Basque Government; the third author acknowledges support from the LSE Financial Markets Group and the Spanish Ministry of Education and Science.

    The application of neural networks to anodic stripping voltammetry to improve trace metal analysis

    This thesis describes a novel application of an artificial neural network and links together the two diverse disciplines of electroanalytical chemistry and information sciences. The artificial neural network is used to process data obtained from a Differential Pulse Anodic Stripping (DPAS) electroanalytical scan and produces, as an output, predictions of lead concentration in samples where the concentration is less than 100 parts per billion. A comparative study of several post-analysis processing techniques is presented, both traditional and neural. Through this it is demonstrated that, by using a neural network, both the accuracy and the precision of the concentration predictions are increased by a factor of approximately two over those obtained using a traditional peak height calibration curve method. Statistical justification for these findings is provided. Furthermore, it is shown that, by post-processing with a neural network, good quantitative predictions of heavy metal concentration may be made from instrument responses so poor that, using traditional methods of calibration, the analytical scan would have had to be repeated. As part of the research the author has designed and built a complete computer-controlled analytical instrument which provides output both to a graphical display and to the neural network. This instrument, which is fully described in the text, is operated via a mouse-driven user interface written by the author.
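    The thesis' network architecture and instrument data are not reproduced here. The following is a minimal sketch, assuming scikit-learn's MLPRegressor and a synthetic stand-in for DPAS current traces, of the general idea of mapping a whole scan to a lead concentration rather than relying on a single peak-height calibration point.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for DPAS scans: each row is a current-vs-potential
# trace, each target is a lead concentration in parts per billion (ppb).
rng = np.random.default_rng(2)
n_scans, n_points = 200, 64
conc = rng.uniform(1, 100, n_scans)                      # ppb
potentials = np.linspace(-0.8, -0.2, n_points)           # V vs. reference
peak = np.exp(-((potentials + 0.5) ** 2) / 0.002)        # stripping peak shape
scans = conc[:, None] * peak[None, :] + rng.normal(0, 2.0, (n_scans, n_points))

# Train a small feedforward network to predict concentration from the scan.
X_train, X_test, y_train, y_test = train_test_split(scans, conc, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("R^2 on held-out scans:", net.score(X_test, y_test))
```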

    Density forecasting in financial risk modelling

    As a result of increasingly stringent regulation aimed at monitoring financial risk exposures, risk measurement systems nowadays play a crucial role in all banks. In this thesis we tackle a variety of problems, related to density forecasting, which are fundamental to market risk managers. The computation of risk measures (e.g. Value-at-Risk) for any portfolio of financial assets requires the generation of density forecasts for the driving risk factors. Appropriate testing procedures must then be identified for an accurate appraisal of these forecasts. We start our research by assessing whether option-implied densities, which constitute the most obvious forecasts of the distribution of the underlying asset at expiry, actually represent unbiased forecasts. We first extract densities from options on currency and equity index futures, by means of both traditional and original specifications. We then appraise them via rigorous density forecast evaluation tools, and we find evidence of the presence of biases. In the second part of the thesis, we focus on modelling the dynamics of the volatility curve in order to measure the vega risk exposure for various delta-hedged option portfolios. We propose to use a linear Kalman filter approach, which gives more precise forecasts of the vega risk exposure than alternative, well-established models. In the third part, we derive a continuous-time model for the dynamics of equity index returns from a data set of 5-minute returns; a model inferred from high-frequency data can then be applied at the horizons typical of risk measure calculations. The last part of our work deals with evaluating density forecasts of the joint distribution of the risk factors. We find that, given certain specifications for the multivariate density forecast, a goodness-of-fit procedure based on the Empirical Characteristic Function displays good statistical properties in detecting misspecifications of various kinds in the forecasts.
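    The specific densities and tests developed in the thesis are not reproduced here. The sketch below shows, under a deliberately simple Gaussian density forecast, how a Value-at-Risk figure follows from a density forecast and how the probability integral transform (PIT) is used to appraise it; a Kolmogorov-Smirnov check on the PIT stands in for the thesis' Empirical Characteristic Function procedure, and the simulated returns are purely illustrative.

```python
import numpy as np
from scipy import stats

# Illustrative daily returns (the thesis uses currency and equity index data).
rng = np.random.default_rng(3)
returns = rng.standard_t(df=5, size=1000) * 0.01

# A simple Gaussian density forecast for tomorrow's return.
mu, sigma = returns.mean(), returns.std(ddof=1)

# 99% one-day Value-at-Risk implied by that density forecast
# (the loss quantile, reported as a positive number).
var_99 = -(mu + sigma * stats.norm.ppf(0.01))
print(f"99% VaR: {var_99:.4f}")

# Probability integral transform: if the density forecast were correct,
# these values would be i.i.d. uniform on (0, 1).
pit = stats.norm.cdf(returns, loc=mu, scale=sigma)
ks_stat, p_value = stats.kstest(pit, "uniform")
print(f"KS test of PIT uniformity: stat={ks_stat:.3f}, p={p_value:.3f}")
```

    A fat-tailed return series evaluated against a Gaussian forecast, as here, should yield a non-uniform PIT, which is exactly the kind of misspecification a density forecast evaluation procedure is meant to detect.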

    Financial Risk Measurement for Financial Risk Management

    Current practice largely follows restrictive approaches to market risk measurement, such as historical simulation or RiskMetrics. In contrast, we propose flexible methods that exploit recent developments in financial econometrics and are likely to produce more accurate risk assessments, treating both portfolio-level and asset-level analysis. Asset-level analysis is particularly challenging because the demands of real-world risk management in financial institutions - in particular, real-time risk tracking in very high-dimensional situations - impose strict limits on model complexity. Hence we stress powerful yet parsimonious models that are easily estimated. In addition, we emphasize the need for deeper understanding of the links between market risk and macroeconomic fundamentals, focusing primarily on links among equity return volatilities, real growth, and real growth volatilities. Throughout, we strive not only to deepen our scientific understanding of market risk, but also to cross-fertilize the academic and practitioner communities, promoting improved market risk measurement technologies that draw on the best of both.
    Keywords: market risk, volatility, GARCH
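    As a point of reference for the "restrictive approaches" mentioned above, the sketch below implements a RiskMetrics-style exponentially weighted volatility recursion with the conventional decay of 0.94 for daily data; the simulated returns and the normal-quantile VaR at the end are illustrative assumptions, and this is the baseline the paper argues should be replaced by richer, yet still parsimonious, econometric models.

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """RiskMetrics-style exponentially weighted volatility:
        sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2
    Returns the one-step-ahead conditional volatility path."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns[:20].var()          # crude initialisation
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return np.sqrt(sigma2)

# Illustrative daily portfolio returns.
rng = np.random.default_rng(4)
r = rng.normal(0, 0.01, 500)
vol = ewma_volatility(r)
var_95 = 1.645 * vol[-1]                    # 95% VaR under a normal quantile
print(f"Latest conditional vol: {vol[-1]:.4%}, 95% VaR: {var_95:.4%}")
```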

    Signal Extraction and the Formulation of Unobserved Components Models
