Modelling indoor air carbon dioxide concentration using grey-box models
Predictive control is the strategy with the greatest reported benefits when implemented in a building energy management system. Predictive control requires low-order models to assess different scenarios and determine which strategy should be implemented to achieve a good compromise between comfort, energy consumption and energy cost. Usually, a deterministic approach is used to create low-order models that estimate the indoor CO2 concentration using the differential equation of the tracer-gas mass balance, whereas the use of stochastic differential equations based on the tracer-gas mass balance is not common. The objective of this paper is to assess the potential of creating predictive models for a specific room using, for the first time, a stochastic grey-box modelling approach to estimate future CO2 concentrations. First, a set of stochastic differential equations is defined. Then, the model parameters are estimated using a maximum likelihood method. Different models are defined and tested using a set of statistical methods. The approach combines physical knowledge with information embedded in the monitored data to identify a suitable parametrization for a simple model that is more accurate than commonly used deterministic approaches. As a consequence, predictive control can be easily implemented in energy management systems.
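A minimal sketch of such a stochastic tracer-gas mass balance, simulated with an Euler-Maruyama scheme. The room volume, air-change rate, outdoor concentration and per-person CO2 generation below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def simulate_co2(c0, n_occupants, hours, dt=1.0 / 60.0,
                 volume=100.0, ach=1.0, c_out=420.0,
                 gen_per_person=18.0, sigma=5.0, seed=0):
    """Euler-Maruyama simulation of a one-state stochastic tracer-gas
    mass balance for indoor CO2 (concentration C in ppm):

        dC = [ACH * (C_out - C) + G / V] dt + sigma dW

    gen_per_person is CO2 generation in L/h per occupant; dividing by the
    room volume (m^3 = 1000 L) and scaling to ppm gives G/V in ppm/h.
    All numeric defaults are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(hours / dt))
    c = np.empty(n_steps + 1)
    c[0] = c0
    # (L/h) / (m^3 * 1000 L/m^3) * 1e6 ppm  ->  gen / volume * 1000 ppm/h
    gen_ppm_per_h = n_occupants * gen_per_person / volume * 1000.0
    for k in range(n_steps):
        drift = ach * (c_out - c[k]) + gen_ppm_per_h
        c[k + 1] = c[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return c
```

With one occupant the concentration relaxes toward c_out + G/(V*ACH), about 600 ppm here; fitting ACH, G and sigma to monitored data by maximum likelihood is the grey-box step the paper describes.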
Probably Unknown: Deep Inverse Sensor Modelling In Radar
Radar presents a promising alternative to lidar and vision in autonomous
vehicle applications, able to detect objects at long range under a variety of
weather conditions. However, distinguishing between occupied and free space
from raw radar power returns is challenging due to complex interactions between
sensor noise and occlusion.
To counter this we propose to learn an Inverse Sensor Model (ISM) converting
a raw radar scan to a grid map of occupancy probabilities using a deep neural
network. Our network is self-supervised using partial occupancy labels
generated by lidar, allowing a robot to learn about world occupancy from past
experience without human supervision. We evaluate our approach on five hours of
data recorded in a dynamic urban environment. By accounting for the scene
context of each grid cell our model is able to successfully segment the world
into occupied and free space, outperforming standard CFAR filtering approaches.
Additionally, by incorporating heteroscedastic uncertainty into our model
formulation, we are able to quantify the variance in the uncertainty throughout
the sensor observation. Through this mechanism we are able to successfully
identify regions of space that are likely to be occluded.
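The heteroscedastic formulation can be sketched as a loss in which the network predicts, per grid cell, both a logit and a log-variance. The Monte Carlo estimate below, in the style of Kendall and Gal (2017), is an assumed stand-in rather than the paper's exact loss:

```python
import numpy as np

def heteroscedastic_nll(logit_mean, logit_logvar, label, n_samples=1000, seed=0):
    """Monte Carlo negative log-likelihood for a binary occupancy label when
    the network predicts a logit *distribution* (mean and log-variance) for
    a grid cell: sample noisy logits, average the resulting likelihoods,
    then take -log. Inflating the predicted variance can lower the loss on
    cells the mean gets wrong, which is what lets the model flag ambiguous
    (e.g. occluded) regions instead of being penalized for them."""
    rng = np.random.default_rng(seed)
    std = np.exp(0.5 * logit_logvar)
    logits = logit_mean + std * rng.standard_normal(n_samples)
    p = 1.0 / (1.0 + np.exp(-logits))          # P(occupied) per sample
    likelihood = p if label == 1 else 1.0 - p
    return -np.log(likelihood.mean() + 1e-12)
```

A confidently wrong cell incurs a large loss, but the same wrong mean with a large predicted variance incurs a smaller one, so high variance becomes the network's signal for occlusion.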
Multi-market minority game: breaking the symmetry of choice
Generalization of the minority game to more than one market is considered. At
each time step every agent chooses one of its strategies and acts on the market
related to this strategy. If the payoff function allows for strong fluctuations
of utility, then market occupancies become inhomogeneous, with preference given
to the market where the fluctuation occurred first. There exists a critical
size of the agent population above which agents on the bigger market behave
collectively. In this regime there always exists a history of decisions for
which all agents on the bigger market react identically. Accepted to 'Advances in Complex Systems'.
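The setup described above can be sketched as a toy simulation; the scoring rule and shared-history update here are simplifying assumptions, not the paper's exact payoff function:

```python
import numpy as np

def multimarket_minority_game(n_agents=101, n_markets=2, memory=3,
                              n_strategies=2, steps=200, seed=0):
    """Toy multi-market minority game. Each agent holds n_strategies random
    lookup-table strategies, each tied to one market; every step the agent
    plays its highest-scoring strategy on that strategy's market, and every
    strategy that would have matched the minority side of its market earns
    a virtual point. Returns per-market attendance over time, whose
    asymmetry is the symmetry breaking discussed in the abstract."""
    rng = np.random.default_rng(seed)
    n_hist = 2 ** memory
    strat = rng.choice([-1, 1], size=(n_agents, n_strategies, n_hist))
    market_of = rng.integers(n_markets, size=(n_agents, n_strategies))
    scores = np.zeros((n_agents, n_strategies))
    history = int(rng.integers(n_hist))            # shared memory of outcomes
    attendance = np.zeros((steps, n_markets), dtype=int)
    for t in range(steps):
        best = scores.argmax(axis=1)               # each agent's active strategy
        acts = strat[np.arange(n_agents), best, history]
        mkts = market_of[np.arange(n_agents), best]
        for m in range(n_markets):
            on_market = mkts == m
            attendance[t, m] = on_market.sum()
            if not on_market.any():
                continue
            total = acts[on_market].sum()
            minority = -np.sign(total) if total != 0 else rng.choice([-1, 1])
            # virtual update: score every strategy tied to market m
            tied = market_of == m
            scores[tied] += (strat[:, :, history][tied] == minority)
        bit = int(acts.sum() > 0)                  # crude shared-history update
        history = ((history << 1) | bit) % n_hist
    return attendance
```

Tracking `attendance` over long runs is one way to observe whether occupancies stay balanced or break toward one market.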
Electrode level Monte Carlo model of radiation damage effects on astronomical CCDs
Current optical space telescopes rely upon silicon Charge Coupled Devices
(CCDs) to detect and image the incoming photons. The performance of a CCD
detector depends on its ability to transfer electrons through the silicon
efficiently, so that the signal from every pixel may be read out through a
single amplifier. This process of electron transfer is highly susceptible to
the effects of solar proton damage (or non-ionizing radiation damage). This is
because charged particles passing through the CCD displace silicon atoms,
introducing energy levels into the semi-conductor bandgap which act as
localized electron traps. The reduction in Charge Transfer Efficiency (CTE)
leads to signal loss and image smearing. The European Space Agency's
astrometric Gaia mission will make extensive use of CCDs to create the most
complete and accurate stereoscopic map to date of the Milky Way. In the context
of the Gaia mission CTE is referred to with the complementary quantity Charge
Transfer Inefficiency (CTI = 1-CTE). CTI is an extremely important issue that
threatens Gaia's performance. We present here a detailed Monte Carlo model
which has been developed to simulate the operation of a damaged CCD at the
pixel electrode level. This model implements a new approach to both the charge
density distribution within a pixel and the charge capture and release
probabilities, which allows the reproduction of CTI effects on a variety of
measurements over a large signal-level range, in particular for signals of the
order of a few electrons. A running version of the model as well as a brief
documentation and a few examples are readily available at
http://www.strw.leidenuniv.nl/~prodhomme/cemga.php as part of the CEMGA java
package (CTI Effects Models for Gaia). Accepted by MNRAS on 13 February 2011.
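The charge capture and release probabilities can be sketched with the standard exponential (Shockley-Read-Hall) kinetics; the single-packet transfer below is a simplified illustration with assumed parameters, not the CEMGA electrode-level model itself:

```python
import numpy as np

def transfer_packet(n_e, n_pixels, traps_per_pixel, dt, tau_c, tau_e, seed=0):
    """Monte Carlo transfer of one charge packet through a damaged CCD
    column. Trap kinetics are assumed exponential (Shockley-Read-Hall):
    during the dwell time dt an empty trap captures an electron with
    probability 1 - exp(-dt/tau_c) and a filled trap releases one with
    probability 1 - exp(-dt/tau_e). Returns the electrons left in the
    packet and the trap occupancy; the deficit is the CTI signal loss.
    All numeric parameters are illustrative."""
    rng = np.random.default_rng(seed)
    p_capture = 1.0 - np.exp(-dt / tau_c)
    p_release = 1.0 - np.exp(-dt / tau_e)
    filled = np.zeros((n_pixels, traps_per_pixel), dtype=bool)
    packet = int(n_e)
    for px in range(n_pixels):
        # traps filled by earlier packets may release into this one
        releasing = filled[px] & (rng.random(traps_per_pixel) < p_release)
        packet += int(releasing.sum())
        filled[px, releasing] = False
        # empty traps may capture, limited by the electrons available
        wanting = ~filled[px] & (rng.random(traps_per_pixel) < p_capture)
        n_captured = min(int(wanting.sum()), packet)
        filled[px, np.flatnonzero(wanting)[:n_captured]] = True
        packet -= n_captured
    return packet, filled
```

For few-electron signals the `min(..., packet)` bound matters: the packet can lose a large fraction of its charge, which is why faint Gaia signals are the hardest regime.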
The Catalytic and Non-catalytic Functions of the Brahma Chromatin-Remodeling Protein Collaborate to Fine-Tune Circadian Transcription in Drosophila.
Daily rhythms in gene expression play a critical role in the progression of circadian clocks, and are regulated by transcription factor binding, histone modifications, RNA polymerase II (RNAPII) recruitment and elongation, and post-transcriptional mechanisms. Although previous studies have shown that clock-controlled genes exhibit rhythmic chromatin modifications, less is known about the functions performed by chromatin remodelers in animal clockwork. Here we have identified the Brahma (Brm) complex as a regulator of the Drosophila clock. In Drosophila, CLOCK (CLK) is the master transcriptional activator driving cyclical gene expression by participating in an auto-inhibitory feedback loop that involves stimulating the expression of the main negative regulators, period (per) and timeless (tim). BRM functions catalytically to increase nucleosome density at the promoters of per and tim, creating an overall restrictive chromatin landscape to limit transcriptional output during the active phase of cycling gene expression. In addition, the non-catalytic function of BRM regulates the level and binding of CLK to target promoters and maintains transient RNAPII stalling at the per promoter, likely by recruiting repressive and pausing factors. By disentangling its catalytic versus non-catalytic functions at the promoters of CLK target genes, we uncovered a multi-leveled mechanism in which BRM fine-tunes circadian transcription.
Activities recognition and worker profiling in the intelligent office environment using a fuzzy finite state machine
Analysis of the office workers' activities of daily working in an intelligent office environment can be used to optimize energy consumption and also office workers' comfort. To achieve this end, it is essential to recognise office workers' activities, including short breaks, meetings and non-computer activities, to allow an optimum control strategy to be implemented. In this paper, fuzzy finite state machines are used to model an office worker's behaviour. The model incorporates sensory data collected from the environment as the input, and some pre-defined fuzzy states are used to develop the model. Experimental results are presented to illustrate the effectiveness of this approach. The activity models of different individual workers, as inferred from the sensory devices, can be distinguished. However, further investigation is required to create a more complete model.
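A fuzzy finite state machine of this kind can be sketched as follows; the state names and firing degrees are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

class FuzzyFSM:
    """Minimal fuzzy finite state machine sketch: instead of occupying one
    crisp state, the machine holds a degree of membership in every state,
    updated by max-min composition with input-conditioned transition
    strengths (how strongly the current sensory reading fires each
    transition)."""

    def __init__(self, states):
        self.states = list(states)
        self.mu = np.zeros(len(self.states))
        self.mu[0] = 1.0                      # start fully in the first state

    def step(self, firing):
        """firing[i][j]: degree (0..1) to which the current sensory input
        activates the transition from state i to state j."""
        f = np.asarray(firing, dtype=float)
        new = np.max(np.minimum(self.mu[:, None], f), axis=0)
        total = new.sum()
        self.mu = new / total if total > 0 else new
        return dict(zip(self.states, self.mu))
```

For instance, low keyboard activity together with chair-pressure dropout might fire the "working to short_break" transition strongly while leaving other transitions weak, shifting most of the membership onto the break state without a hard switch.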
Techniques to Understand Computer Simulations: Markov Chain Analysis
The aim of this paper is to assist researchers in understanding the dynamics of simulation models that have been implemented and can be run in a computer, i.e. computer models. To do that, we start by explaining (a) that computer models are just input-output functions, (b) that every computer model can be re-implemented in many different formalisms (in particular in most programming languages), leading to alternative representations of the same input-output relation, and (c) that many computer models in the social simulation literature can be usefully represented as time-homogeneous Markov chains. Then we argue that analysing a computer model as a Markov chain can make apparent many features of the model that were not so evident before conducting such analysis. To prove this point, we present the main concepts needed to conduct a formal analysis of any time-homogeneous Markov chain, and we illustrate the usefulness of these concepts by analysing 10 well-known models in the social simulation literature as Markov chains. These models are:
• Schelling's (1971) model of spatial segregation
• Epstein and Axtell's (1996) Sugarscape
• Miller and Page's (2004) standing ovation model
• Arthur's (1989) model of competing technologies
• Axelrod's (1986) metanorms models
• Takahashi's (2000) model of generalized exchange
• Axelrod's (1997) model of dissemination of culture
• Kinnaird's (1946) truels
• Axelrod and Bennett's (1993) model of competing bimodal coalitions
• Joyce et al.'s (2006) model of conditional association
In particular, we explain how to characterise the transient and the asymptotic dynamics of these computer models and, where appropriate, how to assess the stochastic stability of their absorbing states.
In all cases, the analysis conducted using the theory of Markov chains has yielded useful insights about the dynamics of the computer model under study.
Keywords: Computer Modelling, Simulation, Markov, Stochastic Processes, Analysis, Re-Implementation
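The transient/absorbing decomposition the paper applies can be sketched on a tiny chain via the fundamental matrix; the example chain below is a textbook gambler's-ruin walk, not one of the ten models analysed:

```python
import numpy as np

def absorption_probabilities(P):
    """Classic absorbing-Markov-chain analysis for a transition matrix P:
    split states into absorbing (P[i, i] == 1) and transient, form the
    fundamental matrix N = (I - Q)^-1, and return B = N R, the probability
    of ending in each absorbing state from each transient state."""
    n = P.shape[0]
    absorbing = [i for i in range(n) if P[i, i] == 1.0]
    transient = [i for i in range(n) if i not in absorbing]
    Q = P[np.ix_(transient, transient)]   # transient -> transient block
    R = P[np.ix_(transient, absorbing)]   # transient -> absorbing block
    N = np.linalg.inv(np.eye(len(transient)) - Q)
    return transient, absorbing, N @ R

# Example: a fair random walk on {0..4} where 0 and 4 are absorbing.
P = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
transient, absorbing, B = absorption_probabilities(P)
```

Row i of B answers the asymptotic question directly: starting from transient state `transient[i]`, where does the model end up, and with what probability? This is the kind of feature the paper argues becomes apparent only once a simulation is viewed as a Markov chain.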