62 research outputs found

    Cumulative subject index


    Spatio-Spectral Sampling and Color Filter Array Design

    Owing to the growing ubiquity of digital image acquisition and display, several factors must be considered when developing systems to meet future color image processing needs, including improved quality, increased throughput, and greater cost-effectiveness. In consumer still-camera and video applications, color images are typically obtained via a spatial subsampling procedure implemented as a color filter array (CFA), a physical construction whereby only a single component of the color space is measured at each pixel location. Substantial work in both industry and academia has been dedicated to post-processing this acquired raw image data as part of the so-called image processing pipeline, including in particular the canonical demosaicking task of reconstructing a full-color image from the spatially subsampled and incomplete data acquired using a CFA. However, as we detail in this chapter, the inherent shortcomings of contemporary CFA designs mean that subsequent processing steps often yield diminishing returns in terms of image quality. For example, though distortion may be masked to some extent by motion blur and compression, the loss of image quality resulting from all but the most computationally expensive state-of-the-art methods is unambiguously apparent to the practiced eye. … As the CFA represents one of the first steps in the image acquisition pipeline, it largely determines the maximal resolution and computational efficiencies achievable by subsequent processing schemes. Here, we show that the attainable spatial resolution yielded by a particular choice of CFA is quantifiable and propose new CFA designs to maximize it. In contrast to the majority of the demosaicking literature, we explicitly consider the interplay between CFA design and properties of typical image data and its implications for spatial reconstruction quality. 
Formally, we pose the CFA design problem as simultaneously maximizing the allowable spatio-spectral support of luminance and chrominance channels, subject to a partitioning requirement in the Fourier representation of the sensor data. This classical aliasing-free condition preserves the integrity of the color image data and thereby guarantees exact reconstruction when demosaicking is implemented as demodulation (demultiplexing in frequency).
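As a concrete illustration of spatial subsampling by a CFA, the sketch below mosaics an RGB image with the classic RGGB Bayer pattern, so that each raw pixel retains exactly one color component. The Bayer pattern is used here only as the canonical example; the chapter itself argues for alternative CFA designs.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Subsample an RGB image with the RGGB Bayer pattern: each pixel
    of the raw output keeps exactly one color component."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return raw
```

In the Fourier view taken by the chapter, this mosaic is a sum of the luminance channel plus chrominance channels modulated to high spatial frequencies; demosaicking-as-demodulation recovers them exactly only if their spectral supports do not overlap.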

    Making SGD Parameter-Free

    We develop an algorithm for parameter-free stochastic convex optimization (SCO) whose rate of convergence is only a double-logarithmic factor larger than the optimal rate for the corresponding known-parameter setting. In contrast, the best previously known rates for parameter-free SCO are based on online parameter-free regret bounds, which contain unavoidable excess logarithmic terms compared to their known-parameter counterparts. Our algorithm is conceptually simple, has high-probability guarantees, and is also partially adaptive to unknown gradient norms, smoothness, and strong convexity. At the heart of our results is a novel parameter-free certificate for SGD step size choice, and a time-uniform concentration result that assumes no a priori bounds on SGD iterates.
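The certificate itself is the paper's contribution and is not reproduced here. As a loose illustration of the underlying idea of selecting an SGD step size from a doubling grid without knowing problem parameters, consider this toy sketch on a one-dimensional quadratic; the selection rule (smallest final objective) is a hypothetical stand-in for the paper's certificate.

```python
import random

def sgd(step, iters=200, seed=0):
    """Run SGD on f(x) = 0.5 * (x - 3)**2 with noisy gradients."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(iters):
        g = (x - 3.0) + rng.gauss(0.0, 0.1)  # stochastic gradient
        x -= step * g
    return x

# Doubling grid of candidate step sizes; pick the one with the
# smallest final objective value. This selection rule is only an
# illustrative stand-in for the paper's parameter-free certificate.
candidates = [2.0 ** -k for k in range(1, 8)]
best = min(candidates, key=lambda s: 0.5 * (sgd(s) - 3.0) ** 2)
```

A grid of log-spaced candidates is what makes the extra cost only logarithmic in the unknown parameter range; the paper's contribution is certifying a good choice with only double-logarithmic overhead and high probability.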

    Nonlinear Multibody Dynamics of Wind Turbines


    Asteroseismology of the Transiting Exoplanet Host HD 17156 with HST FGS

    Observations conducted with the Fine Guidance Sensor on the Hubble Space Telescope (HST), providing high-cadence, high-precision time-series photometry, were obtained over 10 consecutive days in December 2008 on the host star of the transiting exoplanet HD 17156b. During this time 10^12 photons (corrected for detector deadtime) were collected, yielding a noise level of 163 parts per million per 30-second sum and thus excellent sensitivity to detecting the analog of the solar 5-minute p-mode oscillations. For HD 17156, robust detection of p-modes supports determination of the stellar mean density of 0.5301 +/- 0.0044 g/cm^3 from a detailed fit to the observed frequencies of modes of degree l = 0, 1, and 2. This is the first star for which direct determination of the mean stellar density has been possible using both asteroseismology and detailed analysis of a transiting-planet light curve. Combining the density constraint from asteroseismology with stellar evolution modeling yields M_star = 1.285 +/- 0.026 solar, R_star = 1.507 +/- 0.012 solar, and a stellar age of 3.2 +/- 0.3 Gyr.

    Comment: Accepted by ApJ; 16 pages, 18 figures
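The quoted mass and radius can be checked for consistency against the asteroseismic density directly, since the mean density is simply M / (4πR³/3). The solar reference constants below are standard CGS values, not numbers taken from the paper.

```python
import math

# Standard solar reference values (CGS) -- an assumption here,
# not values quoted by the paper.
M_SUN = 1.989e33   # g
R_SUN = 6.957e10   # cm

M = 1.285 * M_SUN  # quoted stellar mass
R = 1.507 * R_SUN  # quoted stellar radius

# Mean stellar density in g/cm^3; this lands close to the
# asteroseismic value of 0.5301 g/cm^3 quoted in the abstract.
rho = M / ((4.0 / 3.0) * math.pi * R ** 3)
```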

    Master index


    Stabilization of parareal algorithms for long time computation of a class of highly oscillatory Hamiltonian flows using data

    Applying parallel-in-time algorithms to multiscale Hamiltonian systems to obtain stable long-time simulations is very challenging. In this paper, we present novel data-driven methods aimed at improving the standard parareal algorithm, developed by Lions, Maday, and Turinici in 2001, for multiscale Hamiltonian systems. The first method involves constructing a correction operator that improves a given inaccurate coarse solver by solving a Procrustes problem using data collected online along parareal trajectories. The second method involves constructing an efficient, high-fidelity solver with a neural network trained on offline-generated data. For the second method, we address the issues of effective data generation and proper loss function design based on the Hamiltonian function. We show proof of concept by applying the proposed methods to a Fermi-Pasta-Ulam (FPU) problem. The numerical results demonstrate that the Procrustes parareal method produces solutions that are more stable in energy than those of the standard parareal algorithm. The neural network solver can achieve comparable or better runtime performance than numerical solvers of similar accuracy. When combined with the standard parareal algorithm, the improved neural network solutions are slightly more stable in energy than the improved numerical coarse solutions.
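The correction operator of the first method is fit by solving a Procrustes problem. In its classical orthogonal form this has a closed-form SVD solution, sketched below; the snapshot setup is illustrative and not necessarily the paper's exact formulation.

```python
import numpy as np

def procrustes_correction(coarse, fine):
    """Best orthogonal operator Q minimizing ||Q @ coarse - fine||_F,
    fit from snapshot matrices whose columns are states collected
    along trajectories (e.g. online during parareal iterations).
    Closed-form solution via the SVD of fine @ coarse.T, per the
    classical orthogonal Procrustes problem."""
    U, _, Vt = np.linalg.svd(fine @ coarse.T)
    return U @ Vt

# Toy usage: recover a rotation that a coarse solver "missed".
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.random.default_rng(0).standard_normal((2, 50))  # coarse states
Q = procrustes_correction(A, R @ A)                    # recovers R
```

An orthogonal (norm-preserving) correction is a natural choice for Hamiltonian dynamics, since it cannot inject spurious energy growth the way an unconstrained least-squares fit could.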

    Effects of subducted seamounts on megathrust earthquake nucleation and rupture propagation

    Author Posting. © American Geophysical Union, 2012. This article is posted here by permission of American Geophysical Union for personal use, not for redistribution. The definitive version was published in Geophysical Research Letters 39 (2012): L24302, doi:10.1029/2012GL053892.

    Subducted seamounts have been linked to interplate earthquakes, but their specific effects on earthquake mechanics remain controversial. A key question is under what conditions a subducted seamount will generate or stop megathrust earthquakes. Here we show results from numerical experiments, in the framework of a rate- and state-dependent friction law, in which a seamount is characterized as a patch of elevated effective normal stress on the thrust interface. We find that whether subducted seamounts generate or impede megathrust earthquakes depends critically on their locations relative to the earthquake nucleation zone defined by depth-variable friction parameters. A seamount may act as a rupture barrier, and this barrier effect is most prominent when the seamount sits at an intermediate range of seamount-to-trench distances (20–100% of the nucleation-zone-to-trench distance). Moreover, we observe that seamount-induced barriers can turn into asperities on which megathrust earthquakes can nucleate at shallow depths and rupture the entire seismogenic zone. These results suggest that a strong barrier patch may not necessarily reduce the maximum size of earthquakes. Instead, the barrier could experience large coseismic slip when it is ruptured.

    This work is supported by NSF Grant EAR-1015221 and WHOI Deep Ocean Exploration Institute awards 27071150 and 25051162.
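The rate- and state-dependent friction framework named in the abstract can be sketched with the standard Dieterich aging-law form. The parameter values below are illustrative textbook choices, not those used in the study.

```python
import math

def friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=0.01):
    """Friction coefficient at slip rate v (m/s) and state theta (s),
    in the standard rate- and state-dependent form
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc).
    Parameter values here are illustrative, not the study's."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def theta_ss(v, dc=0.01):
    """Steady-state value of the state variable: theta_ss = dc / v."""
    return dc / v

# At steady state, mu reduces to mu0 + (a - b)*ln(v / v0). With
# a - b < 0 (velocity weakening, as in nucleation zones), faster
# slip lowers friction -- the instability that lets earthquakes
# nucleate; a - b > 0 patches slip stably and can act as barriers.
```

In the study's setup a seamount enters not through these friction parameters but as locally elevated effective normal stress on the interface, which scales the shear resistance the rupture must overcome.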