Improving Photometric Redshifts using GALEX Observations for the SDSS Stripe 82 and the Next Generation of SZ Cluster Surveys
Four large-area Sunyaev-Zeldovich (SZ) experiments -- APEX-SZ, SPT, ACT, and
Planck -- promise to detect clusters of galaxies through the distortion of
Cosmic Microwave Background photons by hot (> 10^6 K) cluster gas (the SZ
effect) over thousands of square degrees. A large observational follow-up
effort to obtain redshifts for these SZ-detected clusters is under way. Given
the large area covered by these surveys, most of the redshifts will be obtained
via the photometric redshift (photo-z) technique. Here we demonstrate, in an
application using ~3000 SDSS stripe 82 galaxies with r<20, how the addition of
GALEX photometry (FUV, NUV) greatly improves the photometric redshifts of
galaxies obtained with optical griz or ugriz photometry. In the case where
large spectroscopic training sets are available, empirical neural-network-based
techniques (e.g., ANNz) can yield a small photo-z scatter. If large
spectroscopic training sets are not available, the addition of GALEX data makes
possible the use of simple maximum-likelihood techniques, without resorting to
Bayesian priors, and yields an accuracy that
approaches the accuracy obtained using spectroscopic training of neural
networks on ugriz observations. This improvement is especially notable for blue
galaxies. To achieve these results, we have developed a new set of high
resolution spectral templates based on physical information about the star
formation history of galaxies. We envision these templates to be useful for the
next generation of photo-z applications. We make our spectral templates and new
photo-z catalogs available to the community at
http://www.ice.csic.es/personal/jimenez/PHOTOZ .
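The maximum-likelihood technique referred to above amounts to a chi-square fit of redshifted template fluxes to the observed FUV, NUV, ugriz photometry. The following Python sketch illustrates that idea under stated assumptions; the template grid and flux arrays are hypothetical placeholders, not the paper's actual templates or catalogue.

```python
import numpy as np

def photoz_ml(obs_flux, obs_err, template_fluxes, z_grid):
    """Maximum-likelihood photo-z via chi-square template fitting.

    obs_flux, obs_err : (n_bands,) observed fluxes and errors
                        (e.g. FUV, NUV, u, g, r, i, z).
    template_fluxes   : (n_templates, n_z, n_bands) model fluxes, obtained
                        by redshifting each spectral template and
                        integrating it through each filter curve.
    z_grid            : (n_z,) redshift grid.
    Returns the redshift minimising chi-square over all templates.
    """
    best_chi2, best_z = np.inf, None
    for t in range(template_fluxes.shape[0]):
        for iz, z in enumerate(z_grid):
            model = template_fluxes[t, iz]
            # Analytic best-fit amplitude for a linear flux scaling
            amp = np.sum(obs_flux * model / obs_err**2) / np.sum(model**2 / obs_err**2)
            chi2 = np.sum(((obs_flux - amp * model) / obs_err) ** 2)
            if chi2 < best_chi2:
                best_chi2, best_z = chi2, z
    return best_z, best_chi2
```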
Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition
Motion representation plays a vital role in human action recognition in
videos. In this study, we introduce a novel compact motion representation for
video action recognition, named Optical Flow guided Feature (OFF), which
enables the network to distill temporal information through a fast and robust
approach. The OFF is derived from the definition of optical flow and is
orthogonal to the optical flow. The derivation also provides theoretical
support for using the difference between two frames. By directly calculating
pixel-wise spatiotemporal gradients of the deep feature maps, the OFF can be
embedded in any existing CNN-based video action recognition framework at only a
slight additional cost. It enables the CNN to extract spatial and temporal
information simultaneously, in particular the temporal information between frames.
This simple but powerful idea is validated by experimental results. The network
with OFF fed only by RGB inputs achieves a competitive accuracy of 93.3% on
UCF-101, which is comparable with the result obtained by two streams (RGB and
optical flow), but is 15 times faster in speed. Experimental results also show
that OFF is complementary to other motion modalities such as optical flow. When
the proposed method is plugged into the state-of-the-art video action
recognition framework, it achieves 96.0% and 74.2% accuracy on UCF-101 and HMDB-51,
respectively. The code for this project is available at
https://github.com/kevin-ssy/Optical-Flow-Guided-Feature (CVPR 2018).
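For intuition, the core OFF operation, pixel-wise spatiotemporal gradients of deep feature maps, can be sketched in a few lines of NumPy. This is an illustrative approximation, not the authors' implementation; in the paper these operations are realised inside the CNN (e.g., with Sobel filters and element-wise subtraction) and feed dedicated OFF sub-networks.

```python
import numpy as np

def off_sketch(feat_t, feat_t1):
    """Optical Flow guided Feature (OFF), sketched: spatial gradients of
    the feature map at time t plus the temporal difference to time t+1.

    feat_t, feat_t1 : (C, H, W) feature maps from two consecutive frames.
    Returns (F_x, F_y, F_t) stacked along the channel axis.
    """
    fx = np.gradient(feat_t, axis=2)  # horizontal spatial gradient
    fy = np.gradient(feat_t, axis=1)  # vertical spatial gradient
    ft = feat_t1 - feat_t             # temporal gradient (frame difference)
    return np.concatenate([fx, fy, ft], axis=0)
```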
HoloTrap: Interactive hologram design for multiple dynamic optical trapping
This work presents an application that generates real-time holograms to be
displayed on a holographic optical tweezers setup, a technique that allows the
manipulation of particles in the range from micrometres to nanometres. The
software is written in Java, and uses random binary masks to generate the
holograms. It allows customization of several parameters that are dependent on
the experimental setup, such as the specific characteristics of the device
displaying the hologram, or the presence of aberrations. We evaluate the
software's performance and conclude that real-time interaction is achieved. We
give our experimental results from manipulating 5-micrometre-diameter
microspheres using the program.
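The random-binary-mask idea can be illustrated compactly: each trap is encoded as a prism-like phase ramp that steers light to its position, and every hologram pixel is randomly assigned to exactly one trap, so all traps share the display at the cost of some diffraction efficiency. HoloTrap itself is written in Java; the NumPy sketch below, with hypothetical grating slopes, only illustrates the principle.

```python
import numpy as np

def random_mask_hologram(shape, traps, seed=None):
    """Multi-trap phase hologram via random binary masks.

    shape : (H, W) hologram resolution.
    traps : list of (kx, ky) phase-ramp slopes in rad/pixel, one per trap;
            each slope steers light to a different lateral trap position.
    Every pixel is randomly assigned to one trap and displays that trap's
    blazed-grating phase, wrapped to [0, 2*pi).
    """
    rng = np.random.default_rng(seed)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    assign = rng.integers(0, len(traps), size=shape)  # pixel-to-trap mask
    holo = np.zeros(shape)
    for i, (kx, ky) in enumerate(traps):
        phase = (kx * xx + ky * yy) % (2 * np.pi)
        holo[assign == i] = phase[assign == i]
    return holo

# Example: two traps steered to opposite sides (illustrative slopes)
holo = random_mask_hologram((512, 512), [(0.3, 0.0), (-0.3, 0.1)])
```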
Sparse optical flow regularisation for real-time visual tracking
Optical flow can greatly improve the robustness of visual tracking algorithms. While dense optical flow algorithms have various applications, they cannot be used in real-time solutions without resorting to GPU computation. Furthermore, most optical flow algorithms fail in challenging lighting environments because the brightness-constancy constraint is violated. We propose a simple but effective iterative regularisation scheme for real-time, sparse optical flow algorithms that is shown to be robust to sudden illumination changes and can handle large displacements. The algorithm outperforms well-known techniques on real-life video sequences while being much faster to compute. Our solution increases the robustness of a real-time particle-filter-based tracking application while consuming only a fraction of the available CPU power. Furthermore, a new and realistic optical flow dataset with annotated ground truth has been created and made freely available for research purposes.
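The abstract does not spell out the regularisation scheme, so the sketch below should be read only as one plausible shape of such a pipeline: sparse pyramidal Lucas-Kanade flow (via OpenCV) followed by an iterative pass that pulls outlying vectors toward the median of the tracked set, which suppresses errors caused by sudden illumination changes. The function name, iteration count, and thresholds are assumptions, not the paper's.

```python
import numpy as np
import cv2

def regularised_sparse_flow(prev_gray, next_gray, points, n_iter=3):
    """Sparse LK flow with a simple iterative median regularisation.

    points : (N, 1, 2) float32 feature locations in prev_gray.
    Returns per-point flow vectors and a validity mask.
    """
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, points, None)
    flow = (nxt - points).reshape(-1, 2)
    ok = status.reshape(-1) == 1
    for _ in range(n_iter):
        med = np.median(flow[ok], axis=0)               # robust reference motion
        dist = np.linalg.norm(flow - med, axis=1)
        outlier = ok & (dist > 3 * np.median(dist[ok]) + 1e-9)
        flow[outlier] = 0.5 * (flow[outlier] + med)     # pull outliers inward
    return flow, ok
```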
The development of local solar irradiance for outdoor computer graphics rendering
Atmospheric effects are approximated by solving the light transfer equation (LTE) along a given viewing path. The resulting accumulated spectral energy (its visible band) arriving at the observer's eyes defines the colour of the object currently on the line of sight. Owing to the convenience of using a single rendering equation to solve the LTE for both the daylight sky and distant objects (aerial perspective), recent methods have opted for a similar approach. However, the burden of real-time calculation forced these methods to make simplifications that are not in line with real-world observation; consequently, their results are laden with visual errors. The two most common simplifications were: (i) treating the atmosphere as a full-scattering medium only, and (ii) assuming a single-density atmosphere profile. This research explored the possibility of replacing the real-time calculation involved in solving the LTE with an analytical approach, so that the two simplifications made by previous real-time methods can be avoided. The model was implemented on top of a flight-simulator prototype system, since the requirements of such a system match the objectives of this study. Results were verified against actual images of daylight skies. Comparisons were also made with previous methods' results to showcase the proposed model's strengths and advantages over its peers.
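For reference, a standard single-scattering form of the LTE along a viewing path (generic textbook formulation, not necessarily the thesis's exact notation) makes the two simplifications explicit: treating the atmosphere as full-scattering sets the extinction coefficient equal to the scattering coefficient (no absorption), and a single-density profile makes the coefficients constant along the path so the transmittance integrals collapse to closed-form exponentials.

```latex
% Radiance reaching the viewer along a path of length s:
% attenuated source radiance plus in-scattered sunlight.
L(s) = L_0 \, e^{-\int_0^{s} \sigma_t(x)\,\mathrm{d}x}
     + \int_0^{s} \sigma_s(x)\, p(\theta)\, L_{\mathrm{sun}}(x)\,
       e^{-\int_0^{x} \sigma_t(x')\,\mathrm{d}x'}\,\mathrm{d}x
% \sigma_t = \sigma_s + \sigma_a; the "full-scattering" simplification
% sets \sigma_a = 0, and a single-density profile makes \sigma_t constant.
```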