NIMBUS-7 SBUV (Solar Backscatter Ultraviolet) observations of solar UV spectral irradiance variations caused by solar rotation and active-region evolution for the period November 7, 1978 - November 1, 1980
Observations of temporal variations of the solar UV spectral irradiance over several days to a few weeks in the 160-400 nm wavelength range are presented. Larger 28-day variations and a second episode of 13-day variations occurred during the second year of measurements. The 13-day periodicity is not a harmonic of the 28-day periodicity: it dominates certain episodes of solar activity, while others are dominated by 28-day periods accompanied by a weak 14-day harmonic. Techniques for removing noise and long-term trends are described. Time series analysis results are presented for the Si II lines near 182 nm, the Al I continuum in the 190-205 nm range, the Mg I continuum in the 210-250 nm range, the Mg II h & k lines at 280 nm, the Mg I line at 285 nm, and the Ca II K & H lines at 393 and 397 nm.
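As a rough illustration of the kind of analysis described (detrending followed by a periodogram search for the rotational periodicities), here is a minimal sketch on synthetic daily irradiance data; the record length, amplitudes, and noise level are all invented for the example:

```python
import numpy as np

# Illustrative sketch: detrend a synthetic daily irradiance series and locate
# the 28-day and 13-day rotational periodicities with a periodogram.
# The record length (two 364-day "years") and amplitudes are invented.
rng = np.random.default_rng(0)
t = np.arange(728.0)  # daily samples
irradiance = (100.0
              + 0.01 * t                            # slow instrumental trend
              + 0.5 * np.sin(2 * np.pi * t / 28.0)  # rotational modulation
              + 0.2 * np.sin(2 * np.pi * t / 13.0)  # 13-day episode
              + 0.05 * rng.standard_normal(t.size))

# Remove the long-term trend with a least-squares straight-line fit.
coeffs = np.polyfit(t, irradiance, 1)
detrended = irradiance - np.polyval(coeffs, t)

# Periodogram of the detrended series (positive frequencies only).
spectrum = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0)  # cycles per day

# The two strongest peaks (skipping the zero-frequency bin) recover the periods.
peaks = np.argsort(spectrum[1:])[::-1][:2] + 1
periods = sorted(1.0 / freqs[peaks])
print([round(p, 1) for p in periods])  # → [13.0, 28.0]
```

Real irradiance records are less forgiving than this sketch: gaps and uneven sampling would call for a Lomb-Scargle periodogram rather than a plain FFT.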
Object recognition using multi-view imaging
Most previous research in computer vision and image understanding has relied on single-view imaging data, for which many techniques have been developed. Recently, with the rapid development and falling cost of multiple-camera systems, it has become possible to exploit many more views in image processing tasks. This thesis considers how to use the obtained multiple images for target object recognition.
In this context, we present two algorithms for object recognition based on scale-invariant feature points. The first is a single-view object recognition (SOR) method, which operates on single images and uses a chirality constraint to reduce the recognition errors that arise when only a small number of feature points are matched. The procedure is extended in the second, multi-view object recognition (MOR) algorithm, which operates on a multi-view image sequence and, by tracking feature points using a dynamic programming method in the plenoptic domain subject to the epipolar constraint, is able to fuse feature point matches from all the available images, resulting in more robust recognition.
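The feature-point matching step that both SOR and MOR build on can be sketched as nearest-neighbour descriptor matching with Lowe's ratio test; the descriptors below are synthetic stand-ins for real SIFT output, and the ratio threshold is illustrative:

```python
import numpy as np

# Illustrative sketch of scale-invariant feature-point matching between two
# views: nearest-neighbour descriptor search with Lowe's ratio test to
# suppress ambiguous matches. The 128-d descriptors are synthetic, not SIFT.

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Return (i, j) index pairs where desc_a[i] matches desc_b[j]."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        # Ratio test: accept only matches clearly better than the runner-up.
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(1)
desc_b = rng.standard_normal((50, 128))
# View A sees the first 10 features of view B, slightly perturbed by noise.
desc_a = desc_b[:10] + 0.05 * rng.standard_normal((10, 128))

matches = match_descriptors(desc_a, desc_b)
print(matches)  # → [(0, 0), (1, 1), ..., (9, 9)]
```

A multi-view method can then chain such pairwise matches across the sequence, which is where the dynamic-programming tracking under the epipolar constraint comes in.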
We evaluated these algorithms on a number of data sets of real images capturing both indoor and outdoor scenes. We demonstrate that MOR outperforms SOR, particularly for noisy and low-resolution images, and that, combined with segmentation techniques, it can also recognize partially occluded objects.
Investigating Light Curve Modulation via Kernel Smoothing. I. Application to 53 fundamental mode and first-overtone Cepheids in the LMC
Recent studies have revealed a hitherto unknown complexity of Cepheid
pulsation. We implement local kernel regression to search for both period and
amplitude modulations simultaneously in continuous time and to investigate
their detectability, and test this new method on 53 classical Cepheids from the
OGLE-III catalog. We determine confidence intervals using parametric and
non-parametric bootstrap sampling to estimate significance and investigate
multi-periodicity using a modified pre-whitening approach that relies on
time-dependent light curve parameters. We find a wide variety of period and
amplitude modulations and confirm that first overtone pulsators are less stable
than fundamental mode Cepheids. Significant temporal variations in period are
more frequently detected than those in amplitude. We find a range of modulation
intensities, suggesting that both amplitude and period modulations are
ubiquitous among Cepheids. Over the 12-year baseline offered by OGLE-III, we
find that period changes are often non-linear, sometimes cyclic, suggesting
physical origins beyond secular evolution. Our method more efficiently detects
modulations (period and amplitude) than conventional methods reliant on
pre-whitening with constant light curve parameters and more accurately
pre-whitens time series, removing spurious secondary peaks effectively.
Comment: Re-submitted including revisions to Astronomy and Astrophysics
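The local kernel regression idea can be sketched as follows, here used to track a slowly varying amplitude modulation in a simulated, irregularly sampled Cepheid-like light curve; the Gaussian kernel, bandwidth, periods, and noise level are illustrative, not those of the study:

```python
import numpy as np

# Illustrative sketch of local kernel regression (Nadaraya-Watson with a
# Gaussian kernel in time) tracking a slow amplitude modulation in a
# simulated light curve. All parameters are invented for the example.

def kernel_smooth(t_obs, y_obs, t_eval, bandwidth):
    """Gaussian-kernel weighted local average of y_obs at the points t_eval."""
    w = np.exp(-0.5 * ((t_eval[:, None] - t_obs[None, :]) / bandwidth) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

def amp_true(x):
    """Slow sinusoidal amplitude modulation (500-day modulation period)."""
    return 1.0 + 0.3 * np.sin(2 * np.pi * x / 500.0)

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 1000.0, 4000))  # irregular sampling (days)
period = 3.5                                  # pulsation period (days)
mag = (amp_true(t) * np.sin(2 * np.pi * t / period)
       + 0.05 * rng.standard_normal(t.size))

# Envelope proxy: for a sinusoid of slowly varying amplitude A(t), the local
# mean of mag**2 is approximately A(t)**2 / 2, so smooth mag**2 and invert.
t_grid = np.linspace(50.0, 950.0, 19)
amp_est = np.sqrt(2.0 * kernel_smooth(t, mag ** 2, t_grid, bandwidth=30.0))
err = float(np.max(np.abs(amp_est - amp_true(t_grid))))
print(round(err, 3))  # small: the modulation is recovered closely
```

The bandwidth trades bias against variance: too narrow and the estimate tracks noise; too wide and genuine modulation is smoothed away, which is why detectability has to be assessed, e.g. by bootstrap resampling.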
Wavelet Methods for Studying the Onset of Strong Plasma Turbulence
Wavelet basis functions are a natural tool for analyzing turbulent flows
containing localized coherent structures of different spatial scales. Here,
wavelets are used to study the onset and subsequent transition to fully
developed turbulence from a laminar state. Originally applied to neutral fluid
turbulence, an iterative wavelet technique decomposes the field into coherent
and incoherent contributions. In contrast to Fourier power spectra, finite time
Lyapunov exponents (FTLE), and simple measures of intermittency such as
non-Gaussian statistics of field increments, the wavelet technique is found to
provide a quantitative measure for the onset of turbulence and to track the
transition to fully developed turbulence. The wavelet method makes no
assumptions about the structure of the coherent current sheets or the
underlying plasma model. Temporal evolution of the coherent and incoherent
wavelet fluctuations is found to be highly correlated with the magnetic field
energy and plasma thermal energy, respectively. The onset of turbulence is
identified with the rapid growth of a background of incoherent fluctuations
spreading across a range of scales and a corresponding drop in the coherent
components. This is suggestive of the interpretation of the coherent and
incoherent wavelet fluctuations as measures of coherent structures (e.g.,
current sheets) and dissipation, respectively. The ratio of the incoherent to
coherent fluctuations is found to be fairly uniform across different
plasma models and provides an empirical threshold for turbulence onset. The
technique is illustrated through examples. First, it is applied to the
Kelvin--Helmholtz instability from different simulation models including fully
kinetic, hybrid (kinetic ion/fluid electron), and Hall MHD simulations. Second,
it is applied to the development of turbulence downstream of the bow shock in a magnetosphere simulation.
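A minimal sketch of the iterative separation into coherent and incoherent wavelet contributions, in the spirit described above, can be written with a hand-rolled Haar transform so the example is dependency-free; the test signal (a tanh "current sheet"), noise level, and convergence settings are invented:

```python
import numpy as np

# Illustrative sketch of iterative wavelet separation of a field into
# coherent and incoherent parts: coefficients above a universal threshold
# (set from the current incoherent-variance estimate) are coherent; the
# rest are incoherent. Haar helpers assume a power-of-two signal length.

def haar_forward(x):
    """Multilevel orthonormal Haar DWT; returns detail arrays plus approx."""
    details, a = [], x / 1.0
    while a.size > 1:
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2.0))
        a = (even + odd) / np.sqrt(2.0)
    return details, a

def haar_inverse(details, a):
    for d in reversed(details):
        even, odd = (a + d) / np.sqrt(2.0), (a - d) / np.sqrt(2.0)
        a = np.empty(2 * a.size)
        a[0::2], a[1::2] = even, odd
    return a

def coherent_part(x, n_iter=10):
    """Keep wavelet coefficients above an iteratively estimated threshold."""
    details, approx = haar_forward(x)
    coeffs = np.concatenate(details)
    sigma2 = coeffs.var()
    for _ in range(n_iter):
        thresh = np.sqrt(2.0 * sigma2 * np.log(coeffs.size))
        new_sigma2 = coeffs[np.abs(coeffs) <= thresh].var()
        if np.isclose(new_sigma2, sigma2):
            break
        sigma2 = new_sigma2
    coh = np.where(np.abs(coeffs) > thresh, coeffs, 0.0)
    split_pts = np.cumsum([d.size for d in details])[:-1]
    return haar_inverse(np.split(coh, split_pts), approx)

rng = np.random.default_rng(3)
n = 1024
t = np.linspace(0.0, 1.0, n)
structure = np.tanh((t - 0.5) / 0.01)        # localized "current sheet"
noisy = structure + 0.1 * rng.standard_normal(n)
coherent = coherent_part(noisy)
# The coherent reconstruction sits closer to the underlying structure
# than the raw noisy field does.
print(np.mean((coherent - structure) ** 2) < np.mean((noisy - structure) ** 2))
```

The incoherent part is simply `noisy - coherent`; tracking the energy in each over time is the diagnostic used for turbulence onset.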
Frequency-Domain Stochastic Modeling of Stationary Bivariate or Complex-Valued Signals
There are three equivalent ways of representing two jointly observed
real-valued signals: as a bivariate vector signal, as a single complex-valued
signal, or as two analytic signals known as the rotary components. Each
representation has unique advantages depending on the system of interest and
the application goals. In this paper we provide a joint framework for all three
representations in the context of frequency-domain stochastic modeling. This
framework allows us to extend many established statistical procedures for
bivariate vector time series to complex-valued and rotary representations.
These include procedures for parametrically modeling signal coherence,
estimating model parameters using the Whittle likelihood, performing
semi-parametric modeling, and choosing between classes of nested
models. We also provide a new method of testing for impropriety in
complex-valued signals, which tests for noncircular or anisotropic second-order
statistical structure when the signal is represented in the complex plane.
Finally, we demonstrate the usefulness of our methodology in capturing the
anisotropic structure of signals observed from fluid dynamic simulations of
turbulence.
Comment: To appear in IEEE Transactions on Signal Processing
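The relationship between the three representations can be sketched directly: given a bivariate signal (u, v), form z = u + iv and split its spectrum into positive- and negative-frequency halves to obtain the rotary components. The tone frequencies and amplitudes below are invented for the example:

```python
import numpy as np

# Illustrative sketch of the three equivalent representations: a bivariate
# signal (u, v), the complex-valued signal z = u + 1j*v, and the rotary
# components obtained by splitting z into its positive- and negative-
# frequency parts. Signal parameters are invented.
rng = np.random.default_rng(4)
n = 512
t = np.arange(n)
z = (np.exp(2j * np.pi * 0.0625 * t)             # counter-clockwise rotation
     + 0.3 * np.exp(-2j * np.pi * 0.125 * t)     # weaker clockwise rotation
     + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
u, v = z.real, z.imag                            # the bivariate representation

# Rotary decomposition: zero the negative (resp. non-negative) frequencies.
Z = np.fft.fft(z)
freqs = np.fft.fftfreq(n)
z_plus = np.fft.ifft(np.where(freqs >= 0, Z, 0.0))   # counter-clockwise part
z_minus = np.fft.ifft(np.where(freqs < 0, Z, 0.0))   # clockwise part

# The rotary components sum back to the complex signal exactly, and their
# mean powers reflect the two rotating tones (approx. 1 and 0.09 here).
assert np.allclose(z_plus + z_minus, z)
print(round(float(np.mean(np.abs(z_plus) ** 2)), 2),
      round(float(np.mean(np.abs(z_minus) ** 2)), 2))
```

An improper signal is one whose statistics couple the two rotary components; a proper (circular) signal has uncorrelated z and its conjugate, which is what an impropriety test probes.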
An introduction to the interim digital SAR processor and the characteristics of the associated Seasat SAR imagery
Basic engineering data regarding the Interim Digital SAR Processor (IDP) and the digitally correlated Seasat synthetic aperture radar (SAR) imagery are presented. The correlation function and the IDP hardware/software configuration are described, and a preliminary performance assessment is presented. The geometric and radiometric characteristics, with special emphasis on those peculiar to the IDP-produced imagery, are described.
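The digital correlation at the heart of a SAR processor like the IDP can be sketched as matched filtering of a linear-FM (chirp) echo, implemented as frequency-domain correlation; the chirp rate, length, and delay below are invented, not Seasat parameters:

```python
import numpy as np

# Illustrative sketch of SAR pulse compression: correlate a received echo
# against the conjugate chirp replica in the frequency domain. The chirp
# rate, length, target delay, and noise level are invented.
n, chirp_len, delay = 1024, 256, 300
t = np.arange(chirp_len)
rate = 0.5 / chirp_len                        # chirp rate (cycles/sample^2)
chirp = np.exp(1j * np.pi * rate * t ** 2)    # reference linear-FM replica

rng = np.random.default_rng(6)
echo = np.zeros(n, dtype=complex)
echo[delay:delay + chirp_len] = chirp         # point-target echo at `delay`
echo += 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Matched filter: multiply by the conjugate replica spectrum and invert.
E = np.fft.fft(echo)
H = np.conj(np.fft.fft(chirp, n))             # zero-padded replica spectrum
compressed = np.abs(np.fft.ifft(E * H))

print(int(np.argmax(compressed)))             # → 300, the target delay
```

A real processor performs this correlation in both range and azimuth, with the azimuth reference derived from the platform geometry.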
Sub-pixel Registration in Computational Imaging and Applications to Enhancement of Maxillofacial CT Data
In computational imaging, data acquired by sampling the same scene or object at different times or from different orientations result in images in different coordinate systems. Registration is a crucial step for comparing, integrating, and fusing the data obtained from different measurements. Tomography is the method of imaging a single plane or slice of an object. A Computed Tomography (CT) scan, also known as a CAT (Computed Axial Tomography) scan, is a helical tomography technique that traditionally produces a 2D image of the structures in a thin section of the body. It uses X-rays, which are ionizing radiation; although the actual dose is typically low, repeated scans should be limited. In dentistry, and implant dentistry in particular, there is a need for 3D visualization of internal anatomy, which is mainly based on CT scanning technologies. The most important technological advancement to dramatically enhance the clinician's ability to diagnose, treat, and plan dental implants has been the CT scan. Advanced 3D modeling and visualization techniques permit highly refined and accurate assessment of the CT scan data. However, in addition to imperfections of the instrument and the imaging process, it is not uncommon to encounter other unwanted artifacts in the form of bright regions, flares, and erroneous pixels due to dental bridges, metal braces, etc. Currently, removing and cleaning up such acquisition imperfections and unwanted artifacts is performed manually, and the result is only as good as the experience level of the technician; the process is also error prone, since editing must be performed image by image. We address some of these issues by proposing novel registration methods and by using stone-cast models of the patient's dental imprint as reference ground-truth data. Stone-cast models were originally used by dentists to make complete or partial dentures.
The CT scan of such stone-cast models can be used to automatically guide the cleaning of defects and unwanted artifacts from patients' CT scans, and also as an automatic segmentation system for outliers in the CT scan data without the use of stone-cast models. The segmented data are subsequently used to clean the artifacts via a newly proposed 3D inpainting approach.
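A standard starting point for the sub-pixel registration discussed above is phase correlation on the cross-power spectrum; the sketch below recovers an integer translation between two synthetic slices (a sub-pixel refinement, e.g. fitting the correlation peak's neighborhood, would follow). The image content and shift are invented:

```python
import numpy as np

# Illustrative sketch: recover the translation between two images of the
# same scene by phase correlation (normalized cross-power spectrum), a
# standard first step toward sub-pixel registration.

def phase_correlation_shift(ref, moved):
    """Estimate the (row, col) circular shift that maps ref onto moved."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    cross /= np.abs(cross) + 1e-12        # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real       # sharp peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap each coordinate into the signed range [-N/2, N/2).
    return tuple(int(p) if p < s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(5)
base = rng.standard_normal((64, 64))
# Low-pass the random field so it resembles image content, not white noise.
kernel = np.ones((5, 5)) / 25.0
image = np.fft.ifft2(np.fft.fft2(base) * np.fft.fft2(kernel, (64, 64))).real
shifted = np.roll(image, shift=(3, -7), axis=(0, 1))

print(phase_correlation_shift(image, shifted))  # → (3, -7)
```

Because the peak of the phase-correlation surface is sharp, interpolating its immediate neighborhood (or upsampling the spectrum locally) yields shift estimates well below one pixel.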