Where It Is Better to Live: In a "European" or an "American" City?
We start from the well-known fact that, in most European cities, central locations are occupied by rich households, while in American cities they are occupied by poor households. This paper tries to answer the question: which type of urban structure is better for households, a European or an American one? We use a dynamic residential model in which the spatial distribution of amenities is endogenously modified by the spatial distribution of social groups. At every period, the equilibrium spatial structure of the city is determined by transport costs and by the spatial distribution of amenities; between periods, however, the spatial distribution of amenities changes, as rich households generate local amenities in the locations they occupy, and the spatial structure of the city then changes as well. For every utility level or population size, the city may have several long-term equilibria. We explicitly analyse two of them: an "American equilibrium", with the poor living in the centre and the rich in the periphery, and a "European equilibrium", with the rich living in the centre and the poor in the periphery. We analyse these equilibria in two situations (open city and closed city) and, in both cases, we compare the two equilibria from an efficiency point of view. The results show that, in both cases, an American structure is more efficient.
Simulation of Rapidly-Exploring Random Trees in Membrane Computing with P-Lingua and Automatic Programming
Methods based on Rapidly-exploring Random Trees (RRTs) have been
widely used in robotics to solve motion planning problems. On the other hand, in the
membrane computing framework, models based on Enzymatic Numerical P systems
(ENPS) have been applied to robot controllers, but today there is a lack of planning
algorithms based on membrane computing for robotics. With this motivation, we
provide a variant of ENPS called Random Enzymatic Numerical P systems with
Proteins and Shared Memory (RENPSM), designed to implement RRT algorithms,
and we illustrate it by simulating the bidirectional RRT algorithm. This paper is an
extension of [21]. The software presented in [21] was an ad hoc simulator, i.e., a
tool for simulating the computations of one, and only one, hard-coded model.
The main contribution of this paper with respect to [21] is the introduction of a novel
solution for membrane computing simulators based on automatic programming. First,
we have extended the P-Lingua syntax (a language for defining membrane computing
models) to write RENPSM models. Second, we have implemented a new parser based
on Flex and Bison to read RENPSM models and produce source code in C language
for multicore processors with OpenMP. Finally, additional experiments are presented.
Ministerio de Economía, Industria y Competitividad TIN2017-89842-
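As a point of reference for the planning side, a minimal Python sketch of a (simplified) bidirectional RRT is given below; the 2-D configuration space, the `STEP` and `GOAL_TOL` constants and the `collision_free` predicate are illustrative assumptions, not part of the RENPSM model or the P-Lingua simulator described above.

```python
import math
import random

STEP = 0.5       # maximum extension length per iteration (assumed)
GOAL_TOL = 0.5   # distance at which the two trees are considered connected (assumed)


def collision_free(p, q):
    """Placeholder obstacle check; replace with the actual environment model."""
    return True


def nearest(tree, q):
    """Index of the tree node whose configuration is closest to q."""
    return min(range(len(tree)), key=lambda i: math.dist(tree[i][0], q))


def steer(p, q):
    """Move from p toward q by at most STEP (2-D Euclidean steering)."""
    d = math.dist(p, q)
    if d <= STEP:
        return q
    t = STEP / d
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))


def bidirectional_rrt(start, goal, bounds, iters=5000):
    """Grow one tree from start and one from goal; stop when they meet.

    Each tree stores (configuration, parent index) pairs, so a path can be
    recovered by following parent indices from the meeting nodes.
    """
    tree_a, tree_b = [(start, None)], [(goal, None)]
    for _ in range(iters):
        q_rand = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        i = nearest(tree_a, q_rand)
        q_new = steer(tree_a[i][0], q_rand)
        if collision_free(tree_a[i][0], q_new):
            tree_a.append((q_new, i))
            # Try to extend the other tree toward the newly added node.
            j = nearest(tree_b, q_new)
            q_conn = steer(tree_b[j][0], q_new)
            if collision_free(tree_b[j][0], q_conn):
                tree_b.append((q_conn, j))
                if math.dist(q_conn, q_new) < GOAL_TOL:
                    return tree_a, tree_b  # the trees meet: a path exists
        tree_a, tree_b = tree_b, tree_a    # swap roles each iteration
    return None  # no connection found within the iteration budget
```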
Robustness of nuclear core activity reconstruction by data assimilation
We apply a data assimilation technique, inspired by meteorological
applications, to perform an optimal reconstruction of the neutronic activity
field in a nuclear core. Both measurements and information coming from a
numerical model are used. We first study the robustness of the method when the
amount of measured information decreases. We then study the influence of the
nature of the instruments and of their spatial distribution on the efficiency
of the field reconstruction.
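For illustration, the optimal reconstruction referred to here is typically a BLUE (Best Linear Unbiased Estimator) analysis; a minimal sketch, assuming illustrative background, observation-operator and covariance matrices rather than the actual core model and instrumentation, is:

```python
import numpy as np

def blue_analysis(xb, y, H, B, R):
    """BLUE update: combine a background field xb with measurements y.

    xb : (n,)   background field from the numerical model
    y  : (p,)   instrument measurements
    H  : (p, n) observation operator mapping the field to the instruments
    B  : (n, n) background error covariance
    R  : (p, p) measurement error covariance
    """
    S = H @ B @ H.T + R                  # innovation covariance
    K = np.linalg.solve(S, H @ B).T      # gain K = B H^T S^{-1} (S is symmetric)
    xa = xb + K @ (y - H @ xb)           # analysed (reconstructed) field
    A = (np.eye(len(xb)) - K @ H) @ B    # analysis error covariance
    return xa, A

# Toy example: a 3-point field observed by 2 instruments (purely illustrative).
xb = np.array([1.0, 2.0, 3.0])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
B = 0.5 * np.eye(3)
R = 0.1 * np.eye(2)
y = np.array([1.2, 2.7])
xa, A = blue_analysis(xb, y, H, B, R)
```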
Virtual Environments for Training: From Individual Learning to Collaboration with Humanoids
The next generation of virtual environments for training is oriented towards
collaborative aspects. Therefore, we have decided to enhance our platform for
virtual training environments, adding collaboration opportunities and
integrating humanoids. In this paper we put forward a humanoid model that
suits both virtual humans and representations of real users in
collaborative training activities. We suggest adaptations to the scenario model
of our platform making it possible to write collaborative procedures. We
introduce an action-selection mechanism made up of a global allocation and
an individual choice. These models are currently being integrated and validated
in GVT, a virtual training tool for the maintenance of military equipment,
developed in collaboration with the French company NEXTER-Group.
Electronics Cooling Fan Noise Prediction
Using the finite volume CFD software FLUENT, one fan was studied at a given
flow rate (1.5 m3/min) for three different operational rotating speeds: 2,000,
2,350 and 2,700 rpm. The turbulent air flow analysis predicts the acoustic
behavior of the fan. The best fan operating window, i.e. the one giving the
best ratio between noise emissions and cooling performance, can then be
determined. The broadband noise acoustic model is used. As the computation is
steady state, a simple Multiple Reference Frame model (MRF, also known as
stationary rotor approach) is used to represent the fan. This approach is able
to capture the effects of the flow non-uniformity at the fan inlet together
with their impact on the fan performance. Furthermore, it does not require a
fan curve as an input to the model. When compared to the available catalog
data, the simulation results show promising qualitative agreement that may be
used for fan design and selection purposes.
Comment: Submitted on behalf of TIMA Editions
(http://irevues.inist.fr/tima-editions)
Who produces for whom in the world economy?
For nearly two decades, the share of trade in inputs, also called vertical trade, has dramatically increased. This paper suggests a new measure of international trade: “value-added trade”. Like many existing estimates, “value-added trade” is net of double-counted vertical trade. It also reallocates trade flows to their original input-producing industries and countries, which makes it possible to answer the question “who produces for whom”. In 2004, 27% of international trade was "only" vertical specialization trade. The sectoral composition of value-added trade is very different from that of standard trade. Value-added trade is also less regionalized than standard trade.
Globalization, Vertical trade, Regionalisation
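As an illustration of the kind of computation involved, value-added trade measures are usually derived from a multi-region input-output table via the Leontief inverse; a minimal sketch, with hypothetical matrices `A`, `v` and `F` standing in for the paper's actual data, is:

```python
import numpy as np

def value_added_trade(A, v, F):
    """Value added by each producing country-sector absorbed in each market.

    A : (n, n) global input coefficient matrix (n = countries x sectors)
    v : (n,)   value-added share of gross output in each country-sector
    F : (n, m) final demand of each of the m importing countries for each
               of the n producing country-sectors
    Returns an (n, m) matrix answering "who produces for whom".
    """
    L = np.linalg.inv(np.eye(A.shape[0]) - A)  # Leontief inverse (I - A)^{-1}
    return np.diag(v) @ L @ F
```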
The Weight Function in the Subtree Kernel is Decisive
Tree data are ubiquitous because they model a large variety of situations,
e.g., the architecture of plants, the secondary structure of RNA, or the
hierarchy of XML files. Nevertheless, the analysis of these non-Euclidean data
is difficult per se. In this paper, we focus on the subtree kernel that is a
convolution kernel for tree data introduced by Vishwanathan and Smola in the
early 2000s. More precisely, we investigate the influence of the weight
function from a theoretical perspective and in real data applications. We
establish, on a two-class stochastic model, that the performance of the subtree
kernel is improved when the weight of leaves vanishes, which motivates the
definition of a new weight function, learned from the data and not fixed by the
user, as is usually done. To this end, we define a unified framework for
computing the subtree kernel from ordered or unordered trees that is
particularly suitable for parameter tuning. We show, through eight real-data
classification problems, the great efficiency of our approach, in particular
for small datasets, which also demonstrates the high importance of the weight
function. Finally, a visualization tool for the significant features is derived.
Comment: 36 pages
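For context, the subtree kernel sums, over all complete subtrees shared by two trees, a weight depending on the subtree times its occurrence counts; a minimal sketch for ordered trees encoded as nested `(label, children)` tuples (a hypothetical encoding, not the paper's implementation) is:

```python
from collections import Counter

def collect_subtrees(tree, bag):
    """Count every complete subtree of an ordered tree given as (label, children)."""
    bag[tree] += 1
    for child in tree[1]:
        collect_subtrees(child, bag)

def size(tree):
    """Number of nodes in the subtree."""
    return 1 + sum(size(c) for c in tree[1])

def subtree_kernel(t1, t2, weight=lambda n: 1.0):
    """K(t1, t2) = sum over shared complete subtrees t of w(|t|) * N_t(t1) * N_t(t2)."""
    b1, b2 = Counter(), Counter()
    collect_subtrees(t1, b1)
    collect_subtrees(t2, b2)
    return sum(weight(size(t)) * b1[t] * b2[t] for t in b1.keys() & b2.keys())

# Toy example: the only shared complete subtree is the leaf "b".
t1 = ("a", (("b", ()), ("c", ())))
t2 = ("a", (("b", ()), ("d", ())))
print(subtree_kernel(t1, t2))                                           # 1.0 with unit weights
print(subtree_kernel(t1, t2, weight=lambda n: 0.0 if n == 1 else 1.0))  # 0.0: leaf weight vanishes
```

Making the leaf weight vanish, as in the last line, mirrors the behaviour the abstract reports as beneficial.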
Optimal design of measurement network for neutronic activity field reconstruction by data assimilation
Using a data assimilation framework to merge information from a model and
measurements, an optimal reconstruction of the neutronic activity field can be
determined for a nuclear reactor core. In this paper, we focus on solving the
inverse problem of determining an optimal placement of the measuring
instruments within the core, so as to get the best possible results from the
data assimilation reconstruction procedure. The position optimisation is
carried out with a Simulated Annealing algorithm based on the
Metropolis-Hastings algorithm. Moreover, in order to address the computational
challenge of the optimisation, algebraic improvements of the data assimilation
procedure have been developed and are presented here.
Comment: 24 pages, 10 figures
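As an illustration of the optimisation loop described above, a minimal simulated annealing sketch over candidate instrument positions is given below; the `reconstruction_error` objective is a hypothetical placeholder for the data assimilation quality criterion, and the cooling schedule and neighbourhood move are illustrative choices.

```python
import math
import random

def reconstruction_error(positions):
    """Hypothetical objective: run the data assimilation reconstruction with
    instruments at `positions` and return a criterion to minimise."""
    return float(sum(positions))  # dummy stand-in for illustration only

def anneal_positions(candidates, n_instruments, iters=10000, t0=1.0, alpha=0.999):
    """Simulated annealing (Metropolis rule) over subsets of candidate positions."""
    current = random.sample(candidates, n_instruments)
    cost = reconstruction_error(current)
    best, best_cost = list(current), cost
    temp = t0
    for _ in range(iters):
        # Neighbour move: relocate one instrument to an unused candidate position.
        proposal = list(current)
        i = random.randrange(n_instruments)
        proposal[i] = random.choice([c for c in candidates if c not in current])
        new_cost = reconstruction_error(proposal)
        # Metropolis acceptance: always keep improvements, sometimes keep worse ones.
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / temp):
            current, cost = proposal, new_cost
            if cost < best_cost:
                best, best_cost = list(current), cost
        temp *= alpha  # geometric cooling schedule
    return best, best_cost

# Example: pick 5 instrument positions out of 50 candidate core locations.
print(anneal_positions(list(range(50)), 5))
```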
- …