Network Topology as a Driver of Bistability in the lac Operon
The lac operon in Escherichia coli has been studied extensively and is one of
the earliest gene systems found to undergo both positive and negative control.
The lac operon is known to exhibit bistability, in the sense that the operon is
either induced or uninduced. Many dynamical models have been proposed to
capture this phenomenon. While most are based on complex mathematical
formulations, it has been suggested that for other gene systems network
topology is sufficient to produce the desired dynamical behavior.
We present a Boolean network as a discrete model for the lac operon. We
include the two main glucose control mechanisms of catabolite repression and
inducer exclusion in the model and show that it exhibits bistability. Further
we present a reduced model which shows that lac mRNA and lactose form the core
of the lac operon, and that this reduced model also exhibits the same dynamics.
This work corroborates the claim that the key to dynamical properties is the
topology of the network and the signs of its interactions.

Comment: 15 pages, 13 figures, supplemental information included
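The claim that topology alone can produce bistability can be illustrated with a minimal synchronous Boolean network over the reduced two-node core (lac mRNA and lactose). The update rules below are an illustrative sketch of a mutually activating pair, not the paper's exact model:

```python
from itertools import product

# Minimal synchronous Boolean network (illustrative rules, not the paper's
# exact model): M = lac mRNA present, L = intracellular lactose present.
# Positive feedback loop: mRNA enables lactose uptake; lactose induces
# transcription.
def update(state):
    M, L = state
    return (L, M)  # M(t+1) = L(t), L(t+1) = M(t)

# Enumerate the state space and collect fixed points.
fixed_points = [s for s in product((0, 1), repeat=2) if update(s) == s]
print(fixed_points)  # two stable steady states -> bistability
```

The two fixed points, fully off (0, 0) and fully on (1, 1), are the discrete analogue of the uninduced and induced states of the operon.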
Removing opportunities to calculate improves students' performance on subsequent word problems.
Background: In two studies we investigated whether removing opportunities to calculate could improve students' subsequent ability to solve similar word problems. Students were first asked to write explanations for three word problems that they thought would help another student understand the problems. Half of the participants explained typical word problems (i.e., problems with enough information to make calculating an answer possible), while the other half explained the same problems with the numbers removed, making calculation impossible. We hypothesized that removing opportunities to calculate would induce students to think relationally about the word problems, which would result in higher levels of performance on subsequent transfer problems.

Results: In both studies, participants who explained the non-calculable problems performed significantly better on the transfer test than participants who explained the typical (i.e., calculable) problems. This was so even though the manipulation did not fully suppress students' desire to calculate: many students in the non-calculable group explicitly stated that they needed numbers in order to answer the question, or made up numbers with which to calculate. There was a significant, positive relationship between the frequency with which students made up numbers and their self-reported mathematics anxiety.

Conclusions: We hypothesized that the mechanism at play was a reduction in instrumental thinking (and an increase in relational thinking). Interventions designed to help students remediate prior mathematical failure should perhaps focus less on the specific skills students are lacking, and more on the dispositions they bring to the task of "doing mathematics."
Value-added Teacher Estimates as Part of Teacher Evaluations: Exploring the Effects of Data and Model Specifications on the Stability of Teacher Value-added Scores
In this study we explored the effects of statistical controls, single- versus multiple-cohort models, and student sample size on the stability of teacher value-added estimates (VAEs). We estimated VAEs for all 5th grade mathematics teachers in a large urban district by fitting two-level mixed models using four cohorts of student data. We found that student sample size had the largest effect on changes in teachers' relative standing and designation into performance groups, while control variables affected VAEs only minimally. However, we also found that teacher VAEs showed a fair degree of stability; year-to-year correlations ranged between .62 and .66, and changes in teacher effectiveness systematically varied by teacher experience, with beginning teachers showing the largest improvements over the four years under study. Our results suggest that some model specifications are likely to produce teacher value-added scores that reflect meaningful differences among teachers, while other specifications might produce unreliable VAEs.
Inventory Existing Risk Scenarios
This report provides an inventory of existing hazard data, spatial data sets and socioeconomic projections used to process scenario information and future risk projections for the ENHANCE case studies. As a basis for this inventory, we conducted a small survey across the ENHANCE case studies on their data needs. Table 1.1 provides a preliminary overview of the hazard and socioeconomic data and scenarios required within the different case studies. This overview of the case study data needs, together with the data availability among the case study partners, was discussed during the project meetings in Venice (May 2013) and Ispra (September 2013).
During the meeting in Ispra, the case studies were offered a two-day hands-on workshop, run by IVM and JRC, on how to use scenario and risk data for their case studies.
Since the ENHANCE project follows a risk-based approach, we have likewise focused this report on (1) data and projections for different types of natural hazards (Chapter 2) and (2) trends in socioeconomic factors that influence exposure and vulnerability to natural hazards (Chapter 3). In addition, we have specifically outlined methods to process socioeconomic scenarios (Chapter 4) and probabilistic methods to describe extreme events with very low probability (Chapter 5).
The main objectives of this report are to:
- Provide an inventory of dynamic hazard scenarios at the pan-European scale, based on existing information at JRC or other institutes;
- Provide an inventory of socioeconomic data and projections in Europe as well as some global outlook projections, possibly relevant for ENHANCE;
- Develop a probabilistic risk framework for identifying probabilities of extreme events in the case studies.
Grouping-based feature attribution in metacontrast masking
The visibility of a target can be strongly suppressed by metacontrast masking.
Still, some features of the target can be perceived within the mask. Usually,
these rare cases of feature mis-localizations are assumed to reflect errors of
the visual system. On the contrary, I will show that feature
"mis-localizations" in metacontrast masking follow the rules of motion
grouping and, hence, should be viewed as part of a systematic feature
attribution process.
Emergence of long memory in stock volatility from a modified Mike-Farmer model
The Mike-Farmer (MF) model was constructed empirically based on the
continuous double auction mechanism in an order-driven market, which can
successfully reproduce the cubic law of returns and the diffusive behavior of
stock prices at the transaction level. However, the volatility (defined as the
absolute return) in the MF model does not exhibit significant long memory. We propose a
modified version of the MF model by including a new ingredient, that is, long
memory in the aggressiveness (quantified by the relative prices) of incoming
orders, which is an important stylized fact identified by analyzing the order
flows of 23 liquid Chinese stocks. Long memory emerges in the volatility
synthesized from the modified MF model with the DFA scaling exponent close to
0.76, and the cubic law of returns and the diffusive behavior of prices are
also produced at the same time. We also find that the long memory of order
signs has no impact on the long memory property of volatility, and the memory
effect of order aggressiveness has little impact on the diffusiveness of stock
prices.

Comment: 6 pages, 6 figures and 1 table
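The long memory of volatility is quantified above by a DFA scaling exponent (values above 0.5 indicate long memory). A minimal sketch of detrended fluctuation analysis, shown here on synthetic white noise (which has an exponent near 0.5), illustrates the estimator; the scale choices are arbitrary assumptions:

```python
import numpy as np

def dfa_exponent(x, scales):
    # Detrended fluctuation analysis: integrate the series, split into
    # windows of each scale, remove a linear trend per window, and measure
    # the RMS residual fluctuation F(s).
    y = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        n = len(y) // s
        segments = y[: n * s].reshape(n, s)
        t = np.arange(s)
        msq = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)          # linear detrend
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    # The DFA exponent is the slope of log F(s) versus log s.
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(0)
# White noise should give an exponent near 0.5; a long-memory series
# (like the modified-MF volatility, ~0.76) would give a larger value.
alpha = dfa_exponent(rng.standard_normal(10000), [16, 32, 64, 128, 256])
print(round(alpha, 2))
```

Applied to the absolute-return series of the modified MF model, the same estimator would be expected to yield an exponent close to the reported 0.76.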
Moderation of antipsychotic-induced weight gain by energy balance gene variants in the RUPP autism network risperidone studies.
Second-generation antipsychotic exposure, in both children and adults, carries significant risk for excessive weight gain that varies widely across individuals. We queried common variation in key energy balance genes (FTO, MC4R, LEP, CNR1, FAAH) for association with weight gain during the initial 8 weeks of the two NIMH Research Units on Pediatric Psychopharmacology Autism Network trials (N=225) of risperidone for treatment of irritability in children/adolescents aged 4-17 years with autism spectrum disorders. Variants in the cannabinoid receptor 1 (CNR1) promoter (P=1.0 × 10^-6), CNR1 (P=9.6 × 10^-5) and the leptin (LEP) promoter (P=1.4 × 10^-4) conferred robust, independent risks for weight gain. A model combining these three variants was highly significant (P=1.3 × 10^-9), with an effect size of 0.85 between the lowest- and highest-risk groups. All results survived correction for multiple testing and were not dependent on dose, plasma level or ethnicity. We found no evidence for association with a reported functional variant in the endocannabinoid metabolic enzyme fatty acid amide hydrolase, whereas body mass index-associated single-nucleotide polymorphisms in FTO and MC4R showed only trend associations. These data suggest a substantial genetic contribution of common variants in energy balance regulatory genes to individual antipsychotic-associated weight gain in children and adolescents, which supersedes findings from prior adult studies. The effects are robust enough to be detected after only 8 weeks and are more prominent in this largely treatment-naive population. This study highlights compelling directions for further exploration of the pharmacogenetic basis of this concerning multifactorial adverse event.
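The abstract notes that all three variant associations survived correction for multiple testing. As a generic illustration (the study's specific correction procedure is not stated here), Holm's step-down adjustment applied to the three reported P-values can be sketched as:

```python
# Holm's step-down multiple-testing adjustment (generic technique, shown
# for illustration; not necessarily the correction used in the study).
def holm_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending P-values
    adjusted = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        # Multiply the k-th smallest P by (m - k + 1), enforce monotonicity.
        running = max(running, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running)
    return adjusted

# The three promoter/gene P-values reported in the abstract.
adj = holm_adjust([1.0e-6, 9.6e-5, 1.4e-4])
print(adj)
```

Even after adjustment, all three values remain far below conventional significance thresholds, consistent with the abstract's statement.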
On the future of astrostatistics: statistical foundations and statistical practice
This paper summarizes a presentation for a panel discussion on "The Future of
Astrostatistics" held at the Statistical Challenges in Modern Astronomy V
conference at Pennsylvania State University in June 2011. I argue that the
emerging needs of astrostatistics may both motivate and benefit from
fundamental developments in statistics. I highlight some recent work within
statistics on fundamental topics relevant to astrostatistical practice,
including the Bayesian/frequentist debate (and ideas for a synthesis),
multilevel models, and multiple testing. As an important direction for future
work in statistics, I emphasize that astronomers need a statistical framework
that explicitly supports unfolding chains of discovery, with acquisition,
cataloging, and modeling of data not seen as isolated tasks, but rather as
parts of an ongoing, integrated sequence of analyses, with information and
uncertainty propagating forward and backward through the chain. A prototypical
example is surveying of astronomical populations, where source detection,
demographic modeling, and the design of survey instruments and strategies all
interact.

Comment: 8 pp, 2 figures. To appear in "Statistical Challenges in Modern
Astronomy V" (Lecture Notes in Statistics, Vol. 209), ed. Eric D. Feigelson
and G. Jogesh Babu; publication planned for Sep 2012; see
http://www.springer.com/statistics/book/978-1-4614-3519-
Studies of the limit order book around large price changes
We study the dynamics of the limit order book of liquid stocks after
experiencing large intra-day price changes. In the data we find large
variations in several microscopic measures, e.g., the volatility, the bid-ask
spread, the bid-ask imbalance, the number of queuing limit orders, the activity
(number and volume) of limit orders placed and canceled, etc. The relaxation of
these quantities is generally very slow and can be described by a power law.
We introduce a numerical model to better understand the empirical results. We
find that a zero-intelligence deposition model of the order flow can
qualitatively reproduce the empirical results. This suggests that the slow
relaxations might not be the result of agents' strategic behaviour. Studying
the difference between the exponents found empirically and numerically helps
us to better identify the role of strategic behaviour in these phenomena.

Comment: 19 pages, 7 figures
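The power-law relaxation described above is typically quantified by fitting an exponent on log-log axes. A minimal sketch, using synthetic data with a hypothetical exponent (the empirical exponent is not given in this abstract):

```python
import numpy as np

# Sketch: estimate a power-law relaxation exponent beta for a quantity
# g(t) ~ A * t**(-beta) (e.g. excess bid-ask spread after a large price
# change) via a linear fit in log-log coordinates.
t = np.arange(1, 201)                 # time since the large price change
beta_true = 0.4                       # hypothetical exponent, for illustration
g = 2.0 * t ** (-beta_true)           # noiseless synthetic relaxation curve

# Slope of log g versus log t gives -beta.
slope, intercept = np.polyfit(np.log(t), np.log(g), 1)
beta_hat = -slope
print(round(beta_hat, 2))
```

With empirical data, the same fit on the measured relaxation of, say, the bid-ask spread would recover the exponent discussed in the abstract.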