Space Warps II. New Gravitational Lens Candidates from the CFHTLS Discovered through Citizen Science
We report the discovery of 29 promising (and 59 total) new lens candidates
from the CFHT Legacy Survey (CFHTLS) based on about 11 million classifications
performed by citizen scientists as part of the first Space Warps lens search.
The goal of the blind lens search was to identify lens candidates missed by
robots (the RingFinder on galaxy scales and ArcFinder on group/cluster scales)
which had been previously used to mine the CFHTLS for lenses. We compare some
properties of the samples detected by these algorithms to the Space Warps
sample and find them to be broadly similar. The image separation distribution
calculated from the Space Warps sample shows that previous constraints on the
average density profile of lens galaxies are robust. Space Warps recovers about
65% of known lenses, while the new candidates show a richer variety compared to
those found by the two robots. This detection rate could be increased to 80% by
only using classifications performed by expert volunteers (albeit at the cost
of a lower purity), indicating that the training and performance calibration of
the citizen scientists is very important for the success of Space Warps. In
this work we present the SIMCT pipeline, used for generating in situ a sample
of realistic simulated lensed images. This training sample, along with the
false positives identified during the search, has a legacy value for testing
future lens finding algorithms. We make the pipeline and the training set
publicly available.
Comment: 23 pages, 12 figures, MNRAS accepted, minor to moderate changes in this version.
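The quoted 65% and 80% detection rates are completeness figures, traded off against sample purity. A minimal sketch of that bookkeeping, with made-up numbers purely for illustration (none of these counts are from the paper):

```python
# Completeness/purity bookkeeping behind quoted detection rates.
# All numbers below are illustrative, not taken from the paper.

def completeness(recovered_known, total_known):
    """Fraction of the known lens sample recovered by the search."""
    return recovered_known / total_known

def purity(true_lenses, total_candidates):
    """Fraction of returned candidates that are real lenses."""
    return true_lenses / total_candidates

# Hypothetical example: recovering 39 of 60 known lenses gives the ~65%
# completeness quoted for the full crowd; a stricter expert-only cut might
# recover 48 of 60 (~80%) while returning more false positives (lower purity).
print(completeness(39, 60))  # 0.65
print(completeness(48, 60))  # 0.80
```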
Direct Observation of Cosmic Strings via their Strong Gravitational Lensing Effect: II. Results from the HST/ACS Image Archive
We have searched 4.5 square degrees of archival HST/ACS images for cosmic
strings, identifying close pairs of similar, faint galaxies and selecting
groups whose alignment is consistent with gravitational lensing by a long,
straight string. We find no evidence for cosmic strings in five large-area HST
treasury surveys (covering a total of 2.22 square degrees), or in any of 346
multi-filter guest observer images (1.18 square degrees). Assuming that
simulations accurately predict the number of cosmic strings in the universe,
this non-detection allows us to place upper limits on the unitless, universal
cosmic string tension of G mu/c^2 < 2.3 x 10^-6, and cosmic string density of
Omega_s < 2.1 x 10^-5 at the 95% confidence level (marginalising over the other
parameter in each case). We find four dubious cosmic string candidates in 318
single filter guest observer images (1.08 square degrees), which we are unable
to conclusively eliminate with existing data. The confirmation of any one of
these candidates as cosmic strings would imply G mu/c^2 ~ 10^-6 and Omega_s ~
10^-5. However, we estimate that there is at least a 92% chance that these
string candidates are random alignments of galaxies. If we assume that these
candidates are indeed false detections, our final limits on G mu/c^2 and
Omega_s fall to 6.5 x 10^-7 and 7.3 x 10^-6. Due to the extensive sky coverage
of the HST/ACS image archive, the above limits are universal. They are quite
sensitive to the number of fields being searched, and could be further reduced
by more than a factor of two using forthcoming HST data.
Comment: 21 pages, 18 figures.
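For readers wanting the statistical step behind limits like these: with zero detections, a standard Poisson argument gives a 95% confidence upper limit of about 3 expected events, which then scales inversely with the area searched. A minimal sketch of that textbook step only (the paper's actual analysis marginalises over string parameters, which is not reproduced here):

```python
# With zero observed events, the 95% CL upper limit on the Poisson mean mu
# solves exp(-mu) = 1 - 0.95, i.e. mu < -ln(0.05) ~= 3.0.

import math

def poisson_upper_limit(cl=0.95):
    """Upper limit on the Poisson mean for the zero-count case."""
    return -math.log(1.0 - cl)

# If the expected event count grows linearly with surveyed area, doubling the
# area searched halves the upper limit on the event rate, which is why the
# quoted bounds are so sensitive to the number of fields searched.
area = 4.5  # square degrees, from the abstract
rate_limit = poisson_upper_limit() / area  # events per square degree, 95% CL
print(f"{rate_limit:.2f} events/deg^2 (95% CL)")
```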
Support Vector Machine classification of strong gravitational lenses
The imminent advent of very large-scale optical sky surveys, such as Euclid
and LSST, makes it important to find efficient ways of discovering rare objects
such as strong gravitational lens systems, where a background object is
multiply gravitationally imaged by a foreground mass. As well as finding the
lens systems, it is important to reject false positives due to intrinsic
structure in galaxies, and much work is in progress with machine learning
algorithms such as neural networks in order to achieve both these aims. We
present and discuss a Support Vector Machine (SVM) algorithm which makes use of
a Gabor filterbank in order to provide learning criteria for separation of
lenses and non-lenses, and demonstrate using blind challenges that under
certain circumstances it is a particularly efficient algorithm for rejecting
false positives. We compare the SVM engine with a large-scale human examination
of 100,000 simulated lenses in a challenge dataset, and also apply the SVM
method to survey images from the Kilo-Degree Survey.
Comment: Accepted by MNRAS.
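A minimal sketch of the approach the abstract describes, pairing a Gabor filterbank with an SVM using scikit-image and scikit-learn; the bank parameters, mean/std feature summary, and RBF kernel are assumptions for illustration, not the authors' exact recipe:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Summarise an image cutout by the responses of a small Gabor bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            mag = np.hypot(real, imag)          # response magnitude per pixel
            feats.extend([mag.mean(), mag.std()])
    return np.array(feats)

def train_lens_svm(cutouts, labels):
    """Fit an SVM on Gabor features; labels: 1 for lens, 0 for non-lens."""
    X = np.vstack([gabor_features(im) for im in cutouts])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # kernel choice is assumed
    clf.fit(X, labels)
    return clf
```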
Benchmarking Image Processing Algorithms for Unmanned Aerial System-Assisted Crack Detection in Concrete Structures
This paper summarizes the results of traditional image processing algorithms for the detection of defects in concrete using images taken by Unmanned Aerial Systems (UASs). Such algorithms are useful for improving the accuracy of crack detection during autonomous inspection of bridges and other structures, and they have yet to be compared and evaluated on a dataset of concrete images taken by UASs. The authors created a generic image processing algorithm for crack detection, which included the major steps of filter design, edge detection, image enhancement, and segmentation, designed to uniformly compare different edge detectors. Edge detection was carried out by six filters in the spatial (Roberts, Prewitt, Sobel, and Laplacian of Gaussian) and frequency (Butterworth and Gaussian) domains. These algorithms were applied to fifty images each of defective and sound concrete. The performances of the six filters were compared in terms of accuracy, precision, minimum detectable crack width, computational time, and noise-to-signal ratio. In general, frequency-domain techniques were slower than spatial-domain methods because of the computational intensity of the Fourier and inverse Fourier transformations used to move between the spatial and frequency domains. Frequency-domain methods also produced noisier images than spatial-domain methods. Crack detection in the spatial domain using the Laplacian of Gaussian filter proved to be the fastest, most accurate, and most precise method, and it resulted in the finest detectable crack width. The Laplacian of Gaussian filter in the spatial domain is recommended for future applications of real-time crack detection using UASs.
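As a concrete illustration of the recommended spatial-domain approach, here is a minimal Laplacian-of-Gaussian crack-detection sketch; the sigma value and the Otsu thresholding step are assumed choices for demonstration, not the paper's tuned settings:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.filters import threshold_otsu

def log_crack_mask(gray, sigma=2.0):
    """Edge-detect candidate cracks with a LoG filter, then segment."""
    response = gaussian_laplace(gray.astype(float), sigma=sigma)
    magnitude = np.abs(response)                  # cracks give strong |LoG| response
    return magnitude > threshold_otsu(magnitude)  # binary crack mask
```

Staying in the spatial domain avoids the forward and inverse FFT round trip that makes the Butterworth and Gaussian frequency-domain filters slower, as the benchmark reports.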
Space Warps: I. Crowd-sourcing the Discovery of Gravitational Lenses
We describe Space Warps, a novel gravitational lens discovery service that
yields samples of high purity and completeness through crowd-sourced visual
inspection. Carefully produced colour composite images are displayed to
volunteers via a web-based classification interface, which records their
estimates of the positions of candidate lensed features. Images of simulated
lenses, as well as real images which lack lenses, are inserted into the image
stream at random intervals; this training set is used to give the volunteers
instantaneous feedback on their performance, as well as to calibrate a model of
the system that provides dynamical updates to the probability that a classified
image contains a lens. Low probability systems are retired from the site
periodically, concentrating the sample towards a set of lens candidates. Having
divided 160 square degrees of Canada-France-Hawaii Telescope Legacy Survey
(CFHTLS) imaging into some 430,000 overlapping 82 by 82 arcsecond tiles and
displaying them on the site, we were joined by around 37,000 volunteers who
contributed 11 million image classifications over the course of 8 months. This
Stage 1 search reduced the sample to 3381 images containing candidates; these
were then refined in Stage 2 to yield a sample that we expect to be over 90%
complete and 30% pure, based on our analysis of the volunteers' performance on
training images. We comment on the scalability of the Space Warps system to the
wide-field survey era, based on our projection that searches of 10^5 images
could be performed by a crowd of 10^5 volunteers in 6 days.
Comment: 21 pages, 13 figures, MNRAS accepted, minor to moderate changes in this version.
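The dynamical probability updates described above amount to repeated Bayesian updating driven by each volunteer's measured skill on training images. A minimal sketch of that idea follows; the prior, the skill model, and the retirement threshold are illustrative assumptions, not the paper's exact implementation:

```python
# One Bayes update of P(lens) for a subject image, given a classification by
# a volunteer whose skill is summarised by two probabilities estimated from
# training images: p_true_pos = P(says LENS | lens), p_true_neg = P(says NOT | not).

def update_lens_probability(p_lens, said_lens, p_true_pos, p_true_neg):
    if said_lens:
        like_lens, like_not = p_true_pos, 1.0 - p_true_neg
    else:
        like_lens, like_not = 1.0 - p_true_pos, p_true_neg
    num = like_lens * p_lens
    return num / (num + like_not * (1.0 - p_lens))

# Hypothetical example: a subject starting at a low prior is marked "LENS" by
# three volunteers who are each right 80% of the time on training images.
p = 2e-4
for _ in range(3):
    p = update_lens_probability(p, said_lens=True, p_true_pos=0.8, p_true_neg=0.8)
print(p)  # posterior rises with each consistent "LENS" classification
# Subjects whose posterior falls below a low threshold are retired from the site.
```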
X-Ray Image Processing and Visualization for Remote Assistance of Airport Luggage Screeners
X-ray technology is widely used for airport luggage inspection nowadays. However, the ever-increasing sophistication of threat-concealment measures and types of threats, together with the natural complexity inherent in the contents of each piece of luggage, makes raw x-ray images obtained directly from inspection systems unsuitable for clearly showing various luggage and threat items, particularly low-density objects, which poses a great challenge for airport screeners.
This thesis presents efforts to improve the rate of threat detection using image processing and visualization technologies. The principles of x-ray imaging for airport luggage inspection and the characteristics of single-energy and dual-energy x-ray data are first introduced. The image processing and visualization algorithms, selected and proposed for improving single-energy and dual-energy x-ray images, are then presented in four categories: (1) gray-level enhancement, (2) image segmentation, (3) pseudo coloring, and (4) image fusion. The major contributions of this research include the identification of optimum combinations of common segmentation and enhancement methods, HSI-based color-coding approaches, and dual-energy image fusion algorithms (spatial information-based and wavelet-based image fusion). Experimental results generated with these image processing and visualization algorithms are shown and compared. Objective image quality measures are also explored in an effort to reduce the overhead of human subjective assessments and to provide more reliable evaluation results.
Two software applications were developed: an x-ray image processing application (XIP) and a wireless tablet PC-based remote supervision system (RSS). In XIP, we implemented the preceding image processing and visualization algorithms in a user-friendly GUI. In RSS, we ported the available image processing and visualization methods to a wireless mobile supervisory station for screener assistance and supervision.
Quantitative and on-site qualitative evaluations of various processed and fused x-ray luggage images demonstrate that the proposed image processing and visualization algorithms constitute an effective and feasible means of improving airport luggage inspection.
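As an illustration of the wavelet-based dual-energy fusion mentioned among the contributions, here is a minimal sketch using PyWavelets; the 'db2' wavelet and the max-magnitude rule for detail coefficients are common textbook choices assumed for demonstration, not the thesis's specific algorithm:

```python
import numpy as np
import pywt

def wavelet_fuse(low_energy, high_energy, wavelet="db2", level=3):
    """Fuse two registered x-ray images in the wavelet domain."""
    a = pywt.wavedec2(low_energy.astype(float), wavelet, level=level)
    b = pywt.wavedec2(high_energy.astype(float), wavelet, level=level)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = [(a[0] + b[0]) / 2.0]  # average the coarse approximation bands
    for (ah, av, ad), (bh, bv, bd) in zip(a[1:], b[1:]):
        # keep whichever image has the stronger detail at each coefficient
        fused.append((pick(ah, bh), pick(av, bv), pick(ad, bd)))
    return pywt.waverec2(fused, wavelet)
```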
Detection of complete and partial chromosome gains and losses by comparative genomic in situ hybridization
Comparative genomic in situ hybridization (CGH) provides a new possibility for searching genomes for imbalanced genetic material. Labeled genomic test DNA, prepared from clinical or tumor specimens, is mixed with differently labeled control DNA prepared from cells with normal chromosome complements. The mixed probe is used for chromosomal in situ suppression (CISS) hybridization to normal metaphase spreads (CGH-metaphase spreads). Hybridized test and control DNA sequences are detected via different fluorochromes, e.g., fluorescein isothiocyanate (FITC) and tetraethylrhodamine isothiocyanate (TRITC). The ratios of FITC/TRITC fluorescence intensities for each chromosome or chromosome segment should then reflect its relative copy number in the test genome compared with the control genome, e.g., 0.5 for monosomies, 1 for disomies, 1.5 for trisomies, etc. Initially, model experiments were designed to test the accuracy of fluorescence ratio measurements on single chromosomes. DNAs from up to five human chromosome-specific plasmid libraries were labeled with biotin and digoxigenin in different hapten proportions. Probe mixtures were used for CISS hybridization to normal human metaphase spreads and detected with FITC and TRITC. An epifluorescence microscope equipped with a cooled charge-coupled device (CCD) camera was used for image acquisition. Procedures for fluorescence ratio measurements were developed on the basis of commercial image analysis software. For hapten ratios of 4/1, 1/1, and 1/4, fluorescence ratio values measured for individual chromosomes could be used as a single reliable parameter for chromosome identification. Our findings indicate (1) a tight correlation of fluorescence ratio values with hapten ratios, and (2) the potential of fluorescence ratio measurements for multiple-color chromosome painting. Subsequently, genomic test DNAs, prepared from a patient with Down syndrome, from the blood of a patient with T-cell prolymphocytic leukemia, and from cultured cells of a renal papillary carcinoma cell line, were applied in CGH experiments. As expected, significant differences in the fluorescence ratios could be measured for chromosome types present in different copy numbers in these test genomes, including a trisomy of chromosome 21, the smallest autosome of the human complement. In addition, chromosome material involved in partial gains and losses in the different tumors could be mapped to its normal chromosome counterparts in CGH-metaphase spreads. An alternative and simpler evaluation procedure based on visual inspection of CCD images of CGH-metaphase spreads also yielded consistent results from several independent observers. Pitfalls, methodological improvements, and potential applications of CGH analyses are discussed.
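The ratio measurement at the heart of CGH can be illustrated with a short sketch: within a segmented chromosome mask, the background-subtracted mean FITC/TRITC intensity ratio estimates relative copy number (about 0.5 for monosomy, 1.0 for disomy, 1.5 for trisomy). The mask-based segmentation and the simple background subtraction here are simplified assumptions, not the commercial software's procedure:

```python
import numpy as np

def copy_number_ratio(fitc, tritc, chromosome_mask, background_mask):
    """Mean background-subtracted FITC/TRITC ratio over one chromosome."""
    f = fitc[chromosome_mask].mean() - fitc[background_mask].mean()
    t = tritc[chromosome_mask].mean() - tritc[background_mask].mean()
    return f / t

# A ratio near 1.5 for the chromosome 21 mask would flag the trisomy
# described above; values near 0.5 would indicate a monosomy.
```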
A sample of low energy bursts from FRB 121102
We present 41 bursts from the first repeating fast radio burst discovered
(FRB 121102). A deep search has allowed us to probe unprecedentedly low burst
energies during two consecutive observations (separated by one day) using the
Arecibo telescope at 1.4 GHz. The bursts are generally detected in less than a
third of the 580-MHz observing bandwidth, demonstrating that narrow-band FRB
signals may be more common than previously thought. We show that the bursts are
likely faint versions of previously reported multi-component bursts. There is a
striking lack of bursts detected below 1.35 GHz and simultaneous VLA
observations at 3 GHz did not detect any of the 41 bursts, but did detect one
that was not seen with Arecibo, suggesting preferred radio emission frequencies
that vary with epoch. A power law approximation of the cumulative distribution
of burst energies yields an index that is much steeper than the previously
reported value. The discrepancy may be evidence for a more complex energy
distribution. We place constraints on the possibility that the associated
persistent radio source is generated by the emission of many faint bursts. We
do not see a connection between burst fluence and wait time. The distribution
of wait times follows a log-normal distribution; however, some bursts have wait times
below 1 s and as short as 26 ms, which is consistent with previous reports of a
bimodal distribution. We caution against exclusively integrating over the full
observing band during FRB searches, because this can lower the signal-to-noise ratio.
Comment: Accepted version. 16 pages, 7 figures, 1 table.
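The power-law characterisation of the burst energies can be sketched as follows: model the cumulative count as N(>E) proportional to E^gamma, and read gamma off as the slope in log-log space. A maximum-likelihood fit would be more rigorous; simple least squares is shown purely for illustration:

```python
import numpy as np

def cumulative_power_law_index(energies):
    """Least-squares slope of log N(>E) versus log E."""
    e = np.sort(np.asarray(energies, dtype=float))
    n_above = np.arange(len(e), 0, -1)   # N(>E) at each sorted energy
    slope, _ = np.polyfit(np.log10(e), np.log10(n_above), 1)
    return slope                          # the power-law index gamma
```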