Active Image-based Modeling with a Toy Drone
Image-based modeling techniques can now generate photo-realistic 3D models
from images. But it is up to users to provide high quality images with good
coverage and view overlap, which makes the data capturing process tedious and
time consuming. We seek to automate data capturing for image-based modeling.
The core of our system is an iterative linear method to solve the multi-view
stereo (MVS) problem quickly and plan the Next-Best-View (NBV) effectively. Our
fast MVS algorithm enables online model reconstruction and quality assessment
to determine the NBVs on the fly. We test our system with a toy unmanned aerial
vehicle (UAV) in simulated, indoor and outdoor experiments. Results show that
our system improves the efficiency of data acquisition and ensures the
completeness of the final model.
Comment: To be published at the International Conference on Robotics and
Automation 2018, Brisbane, Australia. Project Page:
https://huangrui815.github.io/active-image-based-modeling/ The author's
personal page: http://www.sfu.ca/~rha55
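The capture, reconstruct, and plan loop described above can be sketched as greedy Next-Best-View selection. The code below is a toy illustration only: reconstruction quality is reduced to a set-cover problem over surface points, and all names and numbers are invented, not the authors' iterative linear MVS method.

```python
# Greedy Next-Best-View (NBV) loop: a minimal sketch of active
# image-based modeling. "Reconstruction" is abstracted to a set of
# covered surface points; each candidate view covers a fixed subset.
def next_best_view(candidates, covered):
    """Pick the candidate view that adds the most uncovered points."""
    return max(candidates, key=lambda v: len(candidates[v] - covered))

def active_capture(candidates, target, max_views=10):
    covered, plan = set(), []
    for _ in range(max_views):
        if covered >= target:            # model complete: stop early
            break
        v = next_best_view(candidates, covered)
        covered |= candidates[v]         # "capture" the view, update model
        plan.append(v)
    return plan, covered

# Toy scene: 6 surface points, 3 candidate viewpoints.
views = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
plan, covered = active_capture(views, target={1, 2, 3, 4, 5, 6})
```

In the real system the gain function would come from online MVS quality assessment rather than known point sets, but the stopping criterion (completeness of the model) plays the same role.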
Terahertz Security Image Quality Assessment by No-reference Model Observers
To provide the possibility of developing objective image quality assessment
(IQA) algorithms for THz security images, we constructed the THz security image
database (THSID), comprising 181 THz security images at a resolution of
127*380. The main distortion types in THz security images were
first analyzed for the design of subjective evaluation criteria to acquire the
mean opinion scores. Subsequently, the existing no-reference IQA algorithms,
comprising five opinion-aware approaches (NFERM, GMLF, DIIVINE, BRISQUE, and
BLIINDS2) and seven opinion-unaware approaches (QAC, SISBLIM, NIQE, FISBLIM,
CPBD, S3, and Fish_bb), were executed to evaluate THz security
image quality. The statistical results demonstrated the superiority of Fish_bb
over the other tested IQA approaches for assessing THz image quality, with
PLCC (SROCC) values of 0.8925 (-0.8706) and an RMSE value of 0.3993. The
linear regression analysis and Bland-Altman plot further verified that
Fish_bb could substitute for subjective IQA. Nonetheless, for the
classification of THz security images, we preferred S3 as a criterion for
ranking THz security image grades because of its relatively low false positive
rate in classifying bad THz image quality into the acceptable category (24.69%).
Interestingly, owing to the specific properties of THz images, the average pixel
intensity performed better than the more complicated IQA algorithms above,
with PLCC, SROCC, and RMSE values of 0.9001, -0.8800, and 0.3857, respectively. This
study will help users such as researchers and security staff to obtain
THz security images of good quality. Currently, our research group is
attempting to make this research more comprehensive.
Comment: 13 pages, 8 figures, 4 tables
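The PLCC, SROCC, and RMSE figures quoted above are standard agreement measures between objective scores and mean opinion scores (MOS). A minimal sketch with SciPy, using invented scores (note a negative SROCC, as reported for Fish_bb, simply means the objective score decreases as perceived quality increases):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: objective IQA scores vs. mean opinion scores (MOS).
objective = np.array([0.2, 0.5, 0.3, 0.8, 0.9])
mos       = np.array([4.5, 3.0, 4.0, 1.5, 1.0])  # higher MOS = better quality

plcc, _  = pearsonr(objective, mos)    # linear correlation (PLCC)
srocc, _ = spearmanr(objective, mos)   # rank-order correlation (SROCC)
```

In full IQA evaluations, RMSE is typically computed after fitting a nonlinear (e.g. logistic) mapping from objective scores to the MOS scale; that regression step is omitted here.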
The Space Object Ontology
Achieving space domain awareness requires the
identification, characterization, and tracking of space objects.
Storing and leveraging associated space object data for purposes
such as hostile threat assessment, object identification, and
collision prediction and avoidance present further challenges.
Space objects are characterized according to a variety of
parameters including their identifiers, design specifications,
components, subsystems, capabilities, vulnerabilities, origins,
missions, orbital elements, patterns of life, processes, operational
statuses, and associated persons, organizations, or nations. The
Space Object Ontology provides a consensus-based realist
framework for formulating such characterizations in a
computable fashion. Space object data are aligned with classes
and relations in the Space Object Ontology and stored in a
dynamically updated Resource Description Framework triple
store, which can be queried to support space domain awareness
and the needs of spacecraft operators. This paper presents the
core of the Space Object Ontology, discusses its advantages over
other approaches to space object classification, and demonstrates
its ability to combine diverse sets of data from multiple sources
within an expandable framework. Finally, we show how the
ontology provides benefits for enhancing and maintaining long-term
space domain awareness.
The Spitzer c2d Survey of Large, Nearby, Interstellar Clouds. I. Chamaeleon II Observed with MIPS
We present maps of over 1.5 square degrees in Chamaeleon (Cha) II at 24, 70,
and 160 micron observed with the Spitzer Space Telescope Multiband Imaging
Photometer for Spitzer (MIPS) and a 1.2 square degree millimeter map from SIMBA
on the Swedish-ESO Submillimetre Telescope (SEST). The c2d Spitzer Legacy
Team's data reduction pipeline is described in detail. More than 1500 sources
at 24 micron and 41 sources at 70 micron were detected by MIPS at greater than
10-sigma significance. More than 40 potential YSOs are identified with a MIPS and 2MASS
color-color diagram and by their spectral indices, including two previously
unknown sources with 24 micron excesses. Our new SIMBA millimeter map of Cha II
shows that only a small fraction of the gas is in compact structures with high
column densities. The extended emission seen by MIPS is compared with previous
CO observations. Some selected interesting sources, including two detected at 1
mm, associated with Cha II are discussed in detail and their SEDs presented.
The classification of these sources using MIPS data is found to be consistent
with previous studies.
Comment: 44 pages, 12 figures (1 color), to be published in Ap
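The spectral-index classification mentioned above uses the infrared slope alpha = d log(lambda F_lambda) / d log(lambda), with alpha > 0 for Class I sources and roughly -1.5 < alpha < 0 for Class II. A minimal sketch of the two-band approximation, with made-up flux values:

```python
import math

def spectral_index(lam1_um, f1_jy, lam2_um, f2_jy):
    """Approximate alpha = d log(lambda*F_lambda) / d log(lambda)
    between two bands. Since lambda*F_lambda = nu*F_nu and
    nu is proportional to 1/lambda, we can use log(F_nu/lambda)."""
    y1 = math.log10(f1_jy / lam1_um)
    y2 = math.log10(f2_jy / lam2_um)
    return (y2 - y1) / (math.log10(lam2_um) - math.log10(lam1_um))

# Hypothetical source: fluxes at a 2MASS band (2.2 micron) and
# MIPS 24 micron. A rising mid-IR SED yields a positive index.
alpha = spectral_index(2.2, 0.01, 24.0, 0.5)
```

A source with a 24 micron excess over its near-IR photospheric flux, like the two newly identified sources above, shows up as a flatter or rising slope than a bare stellar photosphere would give.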
Assessing the role of EO in biodiversity monitoring: options for integrating in-situ observations with EO within the context of the EBONE concept
The European Biodiversity Observation Network (EBONE) is a European contribution on terrestrial monitoring to GEO BON, the Group on Earth Observations Biodiversity Observation Network. EBONE’s aims are to develop a system of biodiversity observation at regional, national and European levels by assessing existing approaches in terms of their validity and applicability starting in Europe, then expanding to regions in Africa. The objective of EBONE is to deliver:
1. A sound scientific basis for the production of statistical estimates of stock and change of key indicators;
2. The development of a system for estimating past changes and forecasting and testing policy options and management strategies for threatened ecosystems and species;
3. A proposal for a cost-effective biodiversity monitoring system.
There is a consensus that Earth Observation (EO) has a role to play in monitoring biodiversity. With its capacity to observe detailed spatial patterns and variability across large areas at regular intervals, our instinct suggests that EO could deliver the type of spatial and temporal coverage that is beyond reach with in-situ efforts. Furthermore, when considering the emerging networks of in-situ observations, the prospect of enhancing the quality of the information whilst reducing cost through integration is compelling. This report gives a realistic assessment of the role of EO in biodiversity monitoring and the options for integrating in-situ observations with EO within the context of the EBONE concept (cf. EBONE-ID1.4). The assessment is mainly based on a set of targeted pilot studies. Building on this assessment, the report then presents a series of recommendations on the best options for using EO in an effective, consistent and sustainable biodiversity monitoring scheme.
The issues that we faced were many:
1. Integration can be interpreted in different ways. One possible interpretation is: the combined use of independent data sets to deliver a different but improved data set; another is: the use of one data set to complement another dataset.
2. The targeted improvement will vary with stakeholder group: some will seek more efficiency, others more reliable estimates (accuracy and/or precision), and others more detail in space and/or time, or more of everything.
3. Integration requires a link between the datasets (EO and in-situ). The strength of the link between reflected electromagnetic radiation and the habitats and their biodiversity observed in-situ is a function of many variables, for example: the spatial scale of the observations; the timing of the observations; the adopted nomenclature for classification; the complexity of the landscape in terms of composition, spatial structure and the physical environment; and the habitat and land cover types under consideration.
4. The type of EO data available varies (as a function of, e.g., budget, size and location of the region, cloudiness, and national and/or international investment in airborne campaigns or space technology), which determines its capability to deliver the required output.
EO and in-situ could be combined in different ways, depending on the type of integration we wanted to achieve and the targeted improvement. We aimed for an improvement in accuracy (i.e. the reduction in error of our indicator estimate calculated for an environmental zone). Furthermore, EO would also provide the spatial patterns for correlated in-situ data.
EBONE in its initial development, focused on three main indicators covering:
(i) the extent and change of habitats of European interest in the context of a general habitat assessment;
(ii) abundance and distribution of selected species (birds, butterflies and plants); and
(iii) fragmentation of natural and semi-natural areas.
For habitat extent, we decided that it did not matter how in-situ was integrated with EO as long as we could demonstrate that acceptable accuracies could be achieved and the precision could consistently be improved. The nomenclature used to map habitats in-situ was the General Habitat Classification. We considered the following options where the EO and in-situ play different roles:
1. using in-situ samples to re-calibrate a habitat map independently derived from EO;
2. improving the accuracy of in-situ sampled habitat statistics, by post-stratification with correlated EO data; and
3. using in-situ samples to train the classification of EO data into habitat types, where the EO data delivers full coverage or a larger number of samples.
For some of the above cases we also considered the impact that the sampling strategy employed to deliver the samples would have on the accuracy and precision achieved.
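The post-stratification option can be sketched as a weighted estimator: in-situ sample means per EO-derived stratum, weighted by each stratum's area share from the full-coverage EO map. All strata names and numbers below are invented for illustration:

```python
# Post-stratified estimate of a habitat statistic. EO delivers
# full-coverage strata (e.g. an EO-derived land-cover map); in-situ
# samples provide the per-stratum means of the indicator.
def post_stratified_mean(samples_by_stratum, area_share):
    """Weight each stratum's in-situ sample mean by its EO area share."""
    est = 0.0
    for stratum, values in samples_by_stratum.items():
        est += area_share[stratum] * (sum(values) / len(values))
    return est

# Invented example: an indicator observed in-situ in two EO strata.
samples = {"forest": [0.8, 0.9, 0.7], "grassland": [0.2, 0.4]}
shares  = {"forest": 0.6, "grassland": 0.4}   # from EO full coverage
estimate = post_stratified_mean(samples, shares)
```

The precision gain comes from the EO strata being correlated with the indicator: within-stratum variance is smaller than the overall variance, so the weighted estimate is more stable than a simple mean of the in-situ samples.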
Restricted access to European wide species data prevented work on the indicator ‘abundance and distribution of species’.
With respect to the indicator ‘fragmentation’, we investigated ways of delivering EO derived measures of habitat patterns that are meaningful to sampled in-situ observations.
Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries
With advanced image journaling tools, one can easily alter the semantic
meaning of an image by exploiting certain manipulation techniques such as
copy-clone, object splicing, and removal, which mislead viewers. At the same
time, identifying these manipulations is very challenging
because manipulated regions are not visually apparent. This paper proposes a
high-confidence manipulation localization architecture that utilizes
resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder
network to segment manipulated regions from non-manipulated ones.
Resampling features are used to capture artifacts like JPEG quality loss,
upsampling, downsampling, rotation, and shearing. The proposed network exploits
larger receptive fields (spatial maps) and frequency domain correlation to
analyze the discriminative characteristics between manipulated and
non-manipulated regions by incorporating the encoder and LSTM network. Finally,
the decoder network learns the mapping from low-resolution feature maps to
pixel-wise predictions for image tamper localization. With the predicted mask
provided by the final (softmax) layer of the proposed architecture, end-to-end
training is performed to learn the network parameters through back-propagation
using ground-truth masks. Furthermore, a large image splicing dataset is
introduced to guide the training process. The proposed method is capable of
localizing image manipulations at pixel level with high precision, which is
demonstrated through rigorous experimentation on three diverse datasets.
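The encoder, LSTM, decoder pipeline can be sketched in PyTorch. This is a minimal structural illustration only: layer sizes, the row-major scan feeding the LSTM, and the omission of the resampling-feature extractor are all simplifications, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TamperLocalizer(nn.Module):
    """Minimal encoder -> LSTM -> decoder sketch producing pixel-wise
    two-class (manipulated / pristine) probability maps."""
    def __init__(self):
        super().__init__()
        # Encoder: downsample the image to a low-resolution feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        # LSTM over flattened spatial positions of the feature map
        # (a simplified stand-in for the paper's correlation analysis).
        self.lstm = nn.LSTM(input_size=32, hidden_size=32, batch_first=True)
        # Decoder: upsample back to pixel-wise class scores.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1))

    def forward(self, x):
        f = self.encoder(x)                      # (B, 32, H/4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 2, 3, 1).reshape(b, h * w, c)
        seq, _ = self.lstm(seq)                  # scan spatial positions
        f = seq.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return self.decoder(f).softmax(dim=1)    # per-pixel class probs

mask = TamperLocalizer()(torch.randn(1, 3, 32, 32))  # (1, 2, 32, 32)
```

Training against ground-truth binary masks would use a pixel-wise cross-entropy loss on these softmax outputs, matching the end-to-end scheme the abstract describes.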