3,254 research outputs found
Analysis of Star Identification Algorithms due to Uncompensated Spatial Distortion
With the evolution of spacecraft systems, there is a growing need for smaller, more affordable, and robust spacecraft that can be deployed with ease and sent to sites to perform a myriad of operations that a larger craft could not, or that can be quickly retasked from one operation to another. These developing requirements have led to the creation of nano-satellites, or CubeSats. The question then remains: how can such a minute spacecraft navigate the expanse of space? One solution is to use the stars themselves as a means of navigation. This can be accomplished by measuring the distances between stars in a camera image and determining the stars' identities. Once the stars are identified, the spacecraft can obtain its position and attitude. A series of star identification algorithms called Lost in Space Algorithms (LISAs) are used to recognize the stars in an image, and the accuracy and error associated with each algorithm are assessed. This is done by creating various images from a camera simulated in MATLAB, along with images of actual stars with uncompensated errors. It is shown how suitable these algorithms are for use in space navigation, what constraints and impediments each has, and whether low-quality cameras using these algorithms can solve the Lost in Space problem.
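The distance measurement described above is, in practice, the inter-star angle computed from star centroids in the focal plane, which is the rotation-invariant quantity matched against a catalog. A minimal sketch, not taken from the paper: the pinhole-camera model, focal length, and centroid coordinates below are illustrative assumptions.

```python
import numpy as np

def pixel_to_unit_vector(x, y, f):
    """Convert a star centroid (x, y) in the focal plane (same units as the
    focal length f) into a unit line-of-sight vector in the camera frame,
    assuming a simple pinhole-camera model."""
    v = np.array([x, y, f], dtype=float)
    return v / np.linalg.norm(v)

def inter_star_angle(p1, p2, f):
    """Angle (radians) between two imaged stars -- the quantity a
    lost-in-space algorithm compares against catalog star pairs."""
    v1 = pixel_to_unit_vector(*p1, f)
    v2 = pixel_to_unit_vector(*p2, f)
    # Clip guards against round-off pushing the dot product past +/-1.
    return np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))

# Two hypothetical centroids on a detector with focal length 2000 pixel units.
theta = inter_star_angle((120.0, -45.0), (-300.0, 210.0), 2000.0)
print(np.degrees(theta))
```

Uncompensated lens distortion shifts the centroids and therefore biases these angles, which is exactly the error source the paper's analysis probes.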
Development of a low-cost multi-camera star tracker for small satellites
This thesis presents a novel small-satellite star tracker that uses an array of low-cost, off-the-shelf imaging sensors to achieve high-accuracy attitude determination performance. A theoretical analysis of the improvement in star detectability achieved by stacking images from multiple cameras is presented. An image processing algorithm is developed to combine images from multiple cameras with arbitrary focal lengths, principal point offsets, distortions, and misalignments. The star tracker also implements other algorithms, including the region growing algorithm, the intensity-weighted centroid algorithm, the geometric voting algorithm for star identification, and the singular value decomposition algorithm for attitude determination. A star tracker software simulator is used to test the algorithms by generating star images with sensor noise, lens defocus, and lens distortion. A hardware prototype is being assembled for eventual night-sky testing to verify the simulated performance levels. Star tracker flight hardware is being developed in the Laboratory for Advanced Space Systems at Illinois (LASSI) at the University of Illinois at Urbana-Champaign for future CubeSat missions.
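The singular value decomposition step mentioned above is the standard SVD solution to Wahba's problem: find the rotation that best maps catalog (inertial) unit vectors onto the measured body-frame vectors. A minimal sketch, assuming this standard formulation rather than the thesis's exact implementation; the test rotation is chosen purely for illustration.

```python
import numpy as np

def svd_attitude(body_vecs, ref_vecs, weights=None):
    """SVD solution to Wahba's problem: the rotation R minimizing the
    weighted least-squares error between measured body-frame unit vectors
    and catalog reference unit vectors."""
    body = np.asarray(body_vecs, dtype=float)
    ref = np.asarray(ref_vecs, dtype=float)
    w = np.ones(len(body)) if weights is None else np.asarray(weights, float)
    B = (w[:, None] * body).T @ ref           # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)  # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Illustration: recover a known 30-degree rotation about the boresight.
ang = np.radians(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
ref = np.eye(3)                # three catalog unit vectors
body = (R_true @ ref.T).T      # what the (noiseless) camera would measure
R_est = svd_attitude(body, ref)
print(np.allclose(R_est, R_true))
```

The determinant correction matters: without it, noisy measurements can yield a reflection rather than a rotation.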
Cometary Astrometry
Modern techniques for making cometary astrometric observations, reducing these observations, using accurate reference star catalogs, and computing precise orbits and ephemerides are discussed in detail, and recommendations and suggestions are given in each area.
LSST Science Book, Version 2.0
A survey that can cover the sky in optical bands over wide fields to faint
magnitudes with a fast cadence will enable many of the exciting science
opportunities of the next decade. The Large Synoptic Survey Telescope (LSST)
will have an effective aperture of 6.7 meters and an imaging camera with field
of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over
20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with
fifteen second exposures in six broad bands from 0.35 to 1.1 microns, to a
total point-source depth of r~27.5. The LSST Science Book describes the basic
parameters of the LSST hardware, software, and observing plans. The book
discusses educational and outreach opportunities, then goes on to describe a
broad range of science that LSST will revolutionize: mapping the inner and
outer Solar System, stellar populations in the Milky Way and nearby galaxies,
the structure of the Milky Way disk and halo and other objects in the Local
Volume, transient and variable objects both at low and high redshift, and the
properties of normal and active galaxies at low and high redshift. It then
turns to far-field cosmological topics, exploring properties of supernovae to
z~1, strong and weak lensing, the large-scale distribution of galaxies and
baryon oscillations, and how these different probes may be combined to
constrain cosmological models and the physics of dark energy. Comment: 596
pages. Also available at full resolution at http://www.lsst.org/lsst/sciboo
Planetary Detection Efficiency of the Magnification 3000 Microlensing Event OGLE-2004-BLG-343
OGLE-2004-BLG-343 was a microlensing event with peak magnification
A_{max}=3000+/-1100, by far the highest-magnification event ever analyzed and
hence potentially extremely sensitive to planets orbiting the lens star. Due to
human error, intensive monitoring did not begin until 43 minutes after peak, at
which point the magnification had fallen to A~1200, still by far the highest
ever observed. As the light curve does not show significant deviations due to a
planet, we place upper limits on the presence of such planets by extending the
method of Yoo et al. (2004b), which combines light-curve analysis with priors
from a Galactic model of the source and lens populations, to take account of
finite-source effects. This is the first event so analyzed for which
finite-source effects are important, and hence we develop two new techniques
for evaluating these effects. Somewhat surprisingly, we find that
OGLE-2004-BLG-343 is no more sensitive to planets than two previously analyzed
events with A_{max}~100, despite the fact that it was observed at ~12 times
higher magnification. However, we show that had the event been observed over
its peak, it would have been sensitive to almost all Neptune-mass planets over
a factor of 5 of projected separation and even would have had some sensitivity
to Earth-mass planets. This shows that some microlensing events being detected
in current experiments are sensitive to very low-mass planets. We also give
suggestions on how extremely high-magnification events can be more promptly
monitored in the future. Comment: 50 pages, 13 figures, published in The Astrophysical Journal
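The quoted peak magnification follows from the standard point-source, point-lens light-curve formula, for which A(u) ~ 1/u at small lens-source separation. A small sketch; the impact parameter below is inferred from A_max ~ 3000 for illustration, not taken from the paper's fit.

```python
import math

def point_lens_magnification(u):
    """Paczynski point-source, point-lens magnification as a function of
    the lens-source separation u in units of the Einstein radius."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

# For small u, A(u) ~ 1/u, so a peak magnification of ~3000 (as in
# OGLE-2004-BLG-343) implies an impact parameter u_0 ~ 1/3000.
u0 = 1.0 / 3000.0
A_peak = point_lens_magnification(u0)
print(A_peak)  # close to 3000
```

At such small u_0 the source is comparable in angular size to the region of peak magnification, which is why the finite-source corrections developed in the paper become important.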
Micro-Arcsecond Radio Astrometry
Astrometry provides the foundation for astrophysics. Accurate positions are
required for the association of sources detected at different times or
wavelengths, and distances are essential to estimate the size, luminosity,
mass, and ages of most objects. Very Long Baseline Interferometry at radio
wavelengths, with diffraction-limited imaging at sub-milliarcsec resolution,
has long held the promise of micro-arcsecond astrometry. However, only in the
past decade has this been routinely achieved. Currently, parallaxes for sources
across the Milky Way are being measured with ~10 uas accuracy and proper
motions of galaxies are being determined with accuracies of ~1 uas/y. The
astrophysical applications of these measurements cover many fields, including
star formation, evolved stars, stellar and super-massive black holes, Galactic
structure, the history and fate of the Local Group, the Hubble constant, and
tests of general relativity. This review summarizes the methods used and the
astrophysical applications of micro-arcsecond radio astrometry. Comment: To appear in Annual Reviews of Astronomy and Astrophysics (2014)
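The ~10 uas parallax accuracy quoted above translates directly into distance uncertainties through d = 1/p, with the fractional distance error equal (to first order) to the fractional parallax error. A small sketch; the example source distance is an illustrative assumption.

```python
def parallax_distance_kpc(parallax_uas, sigma_uas):
    """Distance from trigonometric parallax: d [kpc] = 1 / p [mas].
    Returns the distance and its first-order propagated uncertainty."""
    p_mas = parallax_uas / 1000.0
    d_kpc = 1.0 / p_mas
    sigma_d = d_kpc * (sigma_uas / parallax_uas)  # fractional error carries over
    return d_kpc, sigma_d

# A source across the Milky Way at ~8 kpc (p = 125 uas), measured with the
# ~10 uas accuracy quoted above:
d, sd = parallax_distance_kpc(125.0, 10.0)
print(d, sd)  # 8.0 kpc with ~0.64 kpc (8%) uncertainty
```

This is why 10 uas accuracy is the threshold that makes Galactic-scale parallaxes, and hence the structure measurements listed above, feasible.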
Searching for periodic sources with LIGO. II: Hierarchical searches
The detection of quasi-periodic sources of gravitational waves requires the
accumulation of signal-to-noise over long observation times. If not removed,
Earth-motion induced Doppler modulations, and intrinsic variations of the
gravitational-wave frequency make the signals impossible to detect. These
effects can be corrected (removed) using a parameterized model for the
frequency evolution. We compute the number of independent corrections
required for incoherent search strategies which use stacked power
spectra---a demodulated time series is divided into segments of equal
length, each segment is FFTed, the power is computed, and the spectra
are summed up. We estimate that the sensitivity of an all-sky search
that uses incoherent stacks is a factor of 2--4 better than would be achieved
using coherent Fourier transforms; incoherent methods are computationally
efficient at exploring large parameter spaces. A two-stage hierarchical
search is presented which yields another 20--60% improvement in sensitivity
in all-sky searches for old (>= 1000 yr) slow pulsars and for young
(<= 40 yr) fast (<= 1000 Hz) pulsars. Assuming 10^{12} flops of effective
computing power for data
analysis, enhanced LIGO interferometers should be sensitive to: (i) Galactic
core pulsars with gravitational ellipticities of epsilon >~ 5x10^{-6}
at 200 Hz, (ii) Gravitational waves emitted by the unstable r-modes of newborn
neutron stars out to distances of ~8 Mpc, and (iii) neutron stars in LMXB's
with sufficiently high x-ray fluxes. Moreover, gravitational waves from the
neutron star in Sco X-1 should be detectable if the interferometer is
operated in a signal-recycled, narrow-band configuration. Comment: 22 pages, 13 figures
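The stacked-power-spectrum step described in this abstract (divide the demodulated time series into equal-length segments, FFT each, sum the power) can be sketched as follows. The sampling rate, segment length, and injected signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def stacked_power_spectrum(x, seg_len):
    """Incoherent stack: split a (demodulated) time series into equal-length
    segments, FFT each segment, and sum the resulting power spectra."""
    n_seg = len(x) // seg_len
    segs = np.asarray(x[: n_seg * seg_len], dtype=float).reshape(n_seg, seg_len)
    power = np.abs(np.fft.rfft(segs, axis=1)) ** 2
    return power.sum(axis=0)

# Toy example: a weak 100 Hz sinusoid buried in unit-variance noise becomes
# visible after stacking 16 one-second segments.
rng = np.random.default_rng(0)
fs, f0 = 1024.0, 100.0
t = np.arange(16 * 1024) / fs
x = 0.1 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(t.size)
spec = stacked_power_spectrum(x, 1024)
freqs = np.fft.rfftfreq(1024, 1.0 / fs)
print(freqs[np.argmax(spec)])
```

Summing power rather than complex amplitudes discards phase coherence between segments, which is exactly the trade the paper quantifies: some sensitivity is lost relative to a fully coherent transform, but far fewer Doppler/spin-down corrections must be searched.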