Comparison of spatial downscaling methods of general circulation model results to study climate variability during the last glacial maximum
The extent to which climate conditions influenced the spatial
distribution of hominin populations in the past is highly debated.
General circulation models (GCMs) and archaeological data have been
used to address this issue. Most GCMs are not currently capable of
simulating past surface climate conditions with sufficiently
detailed spatial resolution to distinguish areas of potential
hominin habitat, however. In this paper, we propose a statistical
downscaling method (SDM) for increasing the resolution of climate
model outputs in a computationally efficient way. Our method uses a
generalised additive model (GAM), calibrated over present-day
climatology data, to statistically downscale temperature and
precipitation time series from the outputs of a GCM simulating the
climate of the Last Glacial Maximum (19 000–23 000 BP) over western
Europe. Once the SDM is calibrated, we first interpolate the
coarse-scale GCM outputs to the final resolution and then use the
GAM to compute surface air temperature and precipitation levels
using these interpolated GCM outputs and fine-resolution
geographical variables such as topography and distance from an
ocean. The GAM acts as a transfer function, capturing non-linear
relationships between variables at different spatial scales and
correcting for the GCM biases. We tested three different techniques
for the first interpolation of GCM output: bilinear, bicubic and
kriging. The resulting SDMs were evaluated by comparing downscaled
temperature and precipitation at local sites with paleoclimate
reconstructions based on paleoclimate archives (archaeozoological
and palynological data) and the impact of the interpolation
technique on patterns of variability was explored. The SDM based on
kriging interpolation, providing the best accuracy, was then
validated on present-day data outside of the calibration period. Our
results show that the downscaled temperature and precipitation
values are in good agreement with paleoclimate reconstructions at
local sites, and that our method for producing fine-grained
paleoclimate simulations is therefore suitable for conducting
paleo-anthropological research. It is nonetheless important to
calibrate the GAM on a range of data encompassing the data to be
downscaled. Otherwise, the SDM is likely to overcorrect the
coarse-grain data. In addition, the bilinear and bicubic
interpolation techniques were shown to distort either the temporal
variability or the values of the response variables, while the
kriging method offered the best compromise. Since climate
variability is an aspect of the environment to which human
populations may have responded in the past, the choice of interpolation technique is an important consideration.
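The two-stage scheme described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the library choices (scipy for the first interpolation, pyGAM for the transfer function) and the predictor names are assumptions, the interpolation method shown is bilinear, and kriging would replace that step in the best-performing SDM.

```python
# Illustrative two-stage statistical downscaling: (1) interpolate the coarse GCM
# field onto the fine grid, (2) correct it with a GAM calibrated on present-day
# climatology and fine-resolution geography. Names and libraries are assumptions.
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from pygam import LinearGAM, s

def interpolate_gcm(coarse_lat, coarse_lon, coarse_field, fine_lat, fine_lon):
    """Step 1: bilinear interpolation of the coarse GCM output to the fine grid
    (kriging or bicubic interpolation would be swapped in here)."""
    interp = RegularGridInterpolator((coarse_lat, coarse_lon), coarse_field, method="linear")
    grid = np.array(np.meshgrid(fine_lat, fine_lon, indexing="ij"))
    points = grid.reshape(2, -1).T
    return interp(points).reshape(len(fine_lat), len(fine_lon))

def fit_transfer_gam(interp_temp, elevation, dist_ocean, observed_temp):
    """Step 2: calibrate the GAM on present-day data. Predictors are the interpolated
    coarse field plus fine-scale geography (elevation, distance to the ocean)."""
    X = np.column_stack([interp_temp.ravel(), elevation.ravel(), dist_ocean.ravel()])
    y = observed_temp.ravel()
    # one smooth term per predictor captures the non-linear relationships
    return LinearGAM(s(0) + s(1) + s(2)).fit(X, y)

def downscale(gam, interp_lgm_temp, elevation, dist_ocean):
    """Step 3: apply the calibrated GAM to interpolated LGM output to obtain the
    downscaled, bias-corrected surface field."""
    X = np.column_stack([interp_lgm_temp.ravel(), elevation.ravel(), dist_ocean.ravel()])
    return gam.predict(X).reshape(interp_lgm_temp.shape)
```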
Solving the Direction Field for Discrete Agent Motion
Models for pedestrian dynamics are often based on microscopic approaches
allowing for individual agent navigation. To reach a given destination, the
agent has to consider environmental obstacles. We propose a direction field
calculated on a regular grid with a Moore neighborhood, where obstacles are
represented by occupied cells. Our developed algorithm exactly reproduces the
shortest path with regard to the Euclidean metric.
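For orientation, the sketch below builds a distance and direction field on a regular grid with a Moore (8-cell) neighbourhood, with obstacles as occupied cells. It is a standard Dijkstra flood fill from the destination and only approximates Euclidean shortest paths (diagonal moves cost sqrt(2)); the paper's algorithm, which reproduces them exactly, is not reproduced here.

```python
# Distance field from the goal over free cells, then a direction field pointing
# each cell towards its cheapest Moore neighbour.
import heapq
import math
import numpy as np

def direction_field(occupied, goal):
    """occupied: 2-D bool array (True = obstacle); goal: (row, col) destination."""
    rows, cols = occupied.shape
    dist = np.full((rows, cols), np.inf)
    dist[goal] = 0.0
    heap = [(0.0, goal)]
    moves = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not occupied[nr, nc]:
                nd = d + math.hypot(dr, dc)
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    direction = {}
    for r in range(rows):
        for c in range(cols):
            if occupied[r, c] or not np.isfinite(dist[r, c]) or (r, c) == goal:
                continue
            # agent at (r, c) moves towards the neighbour with the smallest distance
            best = min(((dist[r + dr, c + dc], (dr, dc)) for dr, dc in moves
                        if 0 <= r + dr < rows and 0 <= c + dc < cols), key=lambda t: t[0])
            direction[(r, c)] = best[1]
    return dist, direction
```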
Multiple detection using the eigenvalues of the spectral matrix
In this study we treat the problem of detecting, from multidimensional data, the number of uncorrelated signals in passive array processing, as is the case in underwater acoustics, array processing and seismology.
We use four detection criteria. Some of them are known, such as the AIC and MDL criteria, in which the direct Kullback divergence is the information measure; we extend them using the inverse Kullback divergence. We also adapt a new criterion using the logarithm of the likelihood ratio, which has a chi-square distribution, and we suggest a simplified threshold criterion that uses the eigenvalues of the spectral matrix of the data.
We study and compare the performances of these criteria in realistic simulations. The first one is inspired by array processing problems and the second one by seismic problems.
Finally, we study the robustness of these criteria when the classical hypothesis of uncorrelated noises with equal variances is not fulfilled, and we thus outline some limits of application of these criteria.
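As context for the eigenvalue-based criteria, the sketch below implements the classical AIC and MDL detectors of Wax and Kailath applied to the eigenvalues of a spectral (covariance) matrix estimate; the paper's additional criteria (inverse Kullback divergence, likelihood-ratio and simplified threshold tests) are not reproduced here.

```python
# Estimate the number of sources from the sorted eigenvalues of the spectral matrix.
import numpy as np

def estimate_num_sources(R, n_snapshots):
    """R: p x p Hermitian spectral matrix estimate; n_snapshots: number of samples."""
    p = R.shape[0]
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]    # eigenvalues in decreasing order
    aic, mdl = [], []
    for k in range(p):
        tail = eig[k:]                             # the p - k smallest eigenvalues
        # log of (geometric mean / arithmetic mean) of the presumed noise eigenvalues
        log_ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))
        n_free = k * (2 * p - k)                   # number of free model parameters
        aic.append(-2 * n_snapshots * (p - k) * log_ratio + 2 * n_free)
        mdl.append(-n_snapshots * (p - k) * log_ratio + 0.5 * n_free * np.log(n_snapshots))
    return int(np.argmin(aic)), int(np.argmin(mdl))
```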
Improvement of passive array treatment by estimation of the spectral matrix of noises
Array processing aims to characterize impinging sources from recorded data; a model of the noise spectral matrix is necessary for the processing. One usually supposes either that this matrix is known or that the noises are uncorrelated and have equal variances on each sensor.
We present here an algorithm to estimate the noise spectral matrix when the noises are uncorrelated and have different variances on each sensor. It relies on principal component analysis techniques and thus uses the eigensystem of the spectral matrix of the received signals (the number of impinging signals is assumed known).
We show in simulations that, if the spectral matrix of the noises is estimated with this algorithm, the subsequent array processing treatments give improved results.
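One plausible realization of this idea, not necessarily the authors' exact algorithm, is sketched below: with the number of sources q known, alternate between removing the current diagonal noise estimate, reconstructing the signal part from the q principal components, and re-reading the per-sensor noise variances from the residual diagonal.

```python
# Iterative PCA-style estimation of a diagonal noise spectral matrix with
# unequal variances per sensor (illustrative sketch only).
import numpy as np

def estimate_noise_spectral_matrix(R, q, n_iter=50):
    """R: p x p spectral matrix of the received signals; q: known number of sources."""
    p = R.shape[0]
    noise_diag = np.full(p, np.real(np.trace(R)) / p)   # start from equal variances
    for _ in range(n_iter):
        # signal part: rank-q principal-component reconstruction of R minus the noise
        vals, vecs = np.linalg.eigh(R - np.diag(noise_diag))
        idx = np.argsort(vals)[::-1][:q]
        S = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].conj().T
        # updated noise variances: what the rank-q signal model leaves on each sensor
        noise_diag = np.clip(np.real(np.diag(R - S)), 1e-12, None)
    return np.diag(noise_diag)
```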
Decomposition-based mission planning for fixed-wing UAVs surveying in wind
This paper presents a new method for planning fixed-wing aerial survey paths that ensures efficient image coverage of a large complex agricultural field in the presence of wind. By decomposing any complex polygonal field into multiple convex polygons, the traditional back-and-forth boustrophedon paths can be used to ensure coverage of these decomposed regions. To decompose a complex field in an efficient and fast manner, a top-down recursive greedy approach is used to traverse the search space in order to minimise the flight time of the survey. This optimisation can be computed fast enough for use in the field. As wind can severely affect flight time, it is included in the flight time calculation in a systematic way using a verified cost function that offers greatly reduced survey times in wind. Other improved cost functions have been developed to take into account real-world constraints, e.g. No Fly Zones, in addition to flight time. A number of real surveys are performed in order to show that the flight-time-in-wind model is accurate, to make further comparisons to previous techniques, and to show that the proposed method works in real-world conditions, providing total image coverage. A number of missions are generated and flown for real complex agricultural fields. In addition, the wind field around a survey area is measured from a multi-rotor carrying an ultrasonic wind speed sensor. This shows that the assumption of steady uniform wind holds true for the small areas and time scales of an Unmanned Aerial Vehicle (UAV) aerial survey.
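A hedged sketch of the flight-time-in-wind part of such a cost function is given below. For each straight survey leg the aircraft crabs into the wind, so ground speed along the track is the along-track wind component plus the airspeed remaining after cancelling the cross-track component. The decomposition and optimisation machinery of the paper, and the additional cost terms such as no-fly zones, are not reproduced; function names are illustrative.

```python
# Total flight time for a candidate boustrophedon waypoint sequence in a
# uniform steady wind.
import math

def leg_time(p0, p1, airspeed, wind):
    """Time to fly from p0 to p1 (2-D points) at fixed airspeed in a uniform wind."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return 0.0
    tx, ty = dx / dist, dy / dist                   # unit track direction
    w_par = wind[0] * tx + wind[1] * ty             # along-track wind component
    w_perp2 = wind[0] ** 2 + wind[1] ** 2 - w_par ** 2
    if airspeed ** 2 <= w_perp2:
        return float("inf")                         # cannot hold the track in this wind
    ground_speed = w_par + math.sqrt(airspeed ** 2 - w_perp2)
    return float("inf") if ground_speed <= 0 else dist / ground_speed

def survey_flight_time(waypoints, airspeed, wind):
    """Flight-time cost of one candidate decomposition's waypoint sequence."""
    return sum(leg_time(a, b, airspeed, wind) for a, b in zip(waypoints, waypoints[1:]))
```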
Adaptive Path Planning for Depth Constrained Bathymetric Mapping with an Autonomous Surface Vessel
This paper describes the design, implementation and testing of a suite of
algorithms to enable depth constrained autonomous bathymetric (underwater
topography) mapping by an Autonomous Surface Vessel (ASV). Given a target depth
and a bounding polygon, the ASV will find and follow the intersection of the
bounding polygon and the depth contour as modeled online with a Gaussian
Process (GP). This intersection, once mapped, will then be used as a boundary
within which a path will be planned for coverage to build a map of the
bathymetry. Methods for sequential updates to GPs are described, allowing
online fitting, prediction and hyper-parameter optimisation on a small embedded
PC. New algorithms are introduced for the partitioning of convex polygons to
allow efficient path planning for coverage. These algorithms are tested both in
simulation and in the field with a small twin hull differential thrust vessel
built for the task.
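The depth-contour step can be illustrated as follows. This is a simplified sketch: scikit-learn refits the GP in batch, whereas the paper describes sequential updates suited to a small embedded PC, and the kernel choice and tolerance are assumptions.

```python
# Fit a GP to depth soundings and flag query points near the target depth contour.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_depth_gp(positions, depths):
    """positions: (n, 2) east/north coordinates; depths: (n,) sounder measurements."""
    kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    return gp.fit(positions, depths)

def near_target_contour(gp, query_points, target_depth, tol=0.25):
    """Return query points whose predicted depth lies within tol of the target contour."""
    mean, std = gp.predict(query_points, return_std=True)
    mask = np.abs(mean - target_depth) < tol
    return query_points[mask], std[mask]   # predictive std can guide where to sample next
```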
Localization of correlated sources by array processing using spatial smoothing
In this paper, the classical array processing methods are separated into two classes: uncoupled solutions and global solutions. We present the method that uses spatial smoothing to decorrelate the received signals. We then apply these array processing methods to signals recorded during an underwater acoustics experiment in which a monochromatic wave was emitted in various geometric and meteorological configurations; in this situation, spatial smoothing must be used to decorrelate the multiple paths. Results are discussed.
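The decorrelation step rests on classical forward spatial smoothing: the spectral matrices of overlapping subarrays of a uniform linear array are averaged, restoring the rank lost when sources (or multipaths) are coherent before a subspace method is applied. A minimal sketch:

```python
# Forward spatial smoothing of a uniform-linear-array spectral matrix.
import numpy as np

def spatial_smoothing(R, subarray_size):
    """R: p x p spectral matrix; returns the subarray_size x subarray_size matrix
    averaged over all overlapping forward subarrays."""
    p = R.shape[0]
    m = subarray_size
    n_sub = p - m + 1
    R_smooth = np.zeros((m, m), dtype=complex)
    for k in range(n_sub):
        R_smooth += R[k:k + m, k:k + m]
    return R_smooth / n_sub
```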
Multi-site generalised dissimilarity modelling: using zeta diversity to differentiate drivers of turnover in rare and widespread species
1. Generalised dissimilarity modelling (GDM) applies pairwise beta diversity as a measure of species turnover with the purpose of explaining changes in species composition under changing environments or along environmental gradients. Beta diversity only captures turnover across pairs of sites and, therefore, disproportionately represents turnover in rare species across communities. By contrast, zeta diversity, the average number of shared species across multiple sites, captures the full spectrum of rare, intermediate and widespread species as they contribute differently to compositional turnover.
2. We show how integrating zeta diversity into GDMs (which we term multi-site generalised dissimilarity modelling, MS-GDM) provides a more information-rich approach to modelling how communities respond to environmental variation and change. We demonstrate the value of including zeta diversity in biodiversity assessment and modelling using BirdLife Australia Atlas data. Zeta diversity values for different numbers of sites (the order of zeta) are regressed against environmental differences and distance using two kinds of regressions: shape-constrained additive models and a combination of I-splines and generalised linear models.
3. Applying MS-GDM to different orders of zeta revealed shifts in the importance of environmental variables in explaining species turnover, varying with the order of zeta and thus with the level of co-occurrence of the species and, by extension, their commonness and rarity. In particular, precipitation gradients emerged as drivers of the turnover of rare species, whereas temperature gradients were more important drivers of turnover in widespread species.
4. Appreciation of the factors that drive compositional turnover across multiple sites is necessary for accommodating the full spectrum of compositional turnover across rare to common species. This extends beyond understanding drivers for pairwise beta diversity only. MS-GDM provides a valuable addition to the toolkit of GDM, with further potential for survey gap analysis and prediction of species composition in unsampled sites.
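The zeta diversity of order i underlying these regressions is simply the mean number of species shared by i sites. A minimal sketch of its computation from a presence-absence matrix is given below; the site-by-species matrix and the subsampling of combinations for large orders are illustrative, not the MS-GDM fitting code itself.

```python
# Mean number of species shared by 'order' sites, averaged over site combinations.
import itertools
import numpy as np

def zeta_diversity(presence, order, max_combos=10000, rng=None):
    """presence: (n_sites, n_species) boolean matrix; order: number of sites i."""
    rng = rng or np.random.default_rng(0)
    n_sites = presence.shape[0]
    combos = list(itertools.combinations(range(n_sites), order))
    if len(combos) > max_combos:
        # too many exact combinations: fall back to random subsampling
        combos = [tuple(rng.choice(n_sites, order, replace=False)) for _ in range(max_combos)]
    shared = [np.all(presence[list(c)], axis=0).sum() for c in combos]
    return float(np.mean(shared))
```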
The minimum energy expenditure shortest path method
This article discusses the addition of an energy parameter to the shortest path execution process; namely, the energy expended by a character during execution of the path. Given a simple environment in which a character has the ability to perform actions related to locomotion, such as walking and stair stepping, current techniques execute the shortest path based on the length of the extracted root trajectory. However, actual humans acting in constrained environments do not plan only according to the shortest-path criterion; they conceptually seek the path that minimizes the amount of energy expenditure. On this basis, it seems that virtual characters should also execute their paths according to the minimization of actual energy expenditure. In this article, a simple method is presented that uses a formula for estimating oxygen consumption (VO2) levels, a proxy for the energy expended by humans during various activities. The presented solution could be beneficial in any situation requiring a sophisticated perspective of the path-execution process. Moreover, it can be implemented in almost every path-planning method that has the ability to measure stepping actions or other actions of a virtual character.
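To illustrate how an energy term can replace pure path length, the sketch below scores a path with a VO2 estimate from the ACSM walking equation (VO2 in ml/kg/min = 3.5 + 0.1·speed + 1.8·speed·grade, speed in m/min). The article's exact VO2 formula and action set may differ; the waypoint format and default speed are assumptions.

```python
# Energy-aware path cost: integrate an estimated VO2 rate over each path segment.
import math

def segment_energy_cost(p0, p1, speed_m_per_min=80.0):
    """p0, p1: (x, y, z) waypoints in metres; returns VO2-minutes per kg for the segment."""
    dx, dy, dz = (b - a for a, b in zip(p0, p1))
    horizontal = math.hypot(dx, dy)
    grade = dz / horizontal if horizontal > 0 else 0.0
    distance = math.hypot(horizontal, dz)
    # ACSM walking equation; downhill grades are clamped to 0 in this simple form
    vo2 = 3.5 + 0.1 * speed_m_per_min + 1.8 * speed_m_per_min * max(grade, 0.0)
    minutes = distance / speed_m_per_min
    return vo2 * minutes

def path_energy_cost(waypoints, speed_m_per_min=80.0):
    """Total energy proxy along a polyline of (x, y, z) waypoints."""
    return sum(segment_energy_cost(a, b, speed_m_per_min)
               for a, b in zip(waypoints, waypoints[1:]))
```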