2,782 research outputs found

    Physiological Environment Induces Quick Response – Slow Exhaustion Reactions

    In vivo environments are highly crowded and inhomogeneous, which may affect reaction processes in cells. In this study we examined the effects of intracellular crowding and inhomogeneity on the behavior of in vivo reactions by calculating the spectral dimension (ds), which can be translated into the reaction rate function. We compared estimates of anomaly parameters obtained from fluorescence correlation spectroscopy (FCS) data with fractal dimensions derived from transmission electron microscopy (TEM) image analysis. FCS analysis indicated that the anomalous property was linked to physiological structure. Subsequent TEM analysis provided an in vivo illustration: soluble molecules likely percolate between intracellular clusters, which are constructed in a self-organizing manner. We estimated the cytoplasmic spectral dimension to be ds = 1.39 ± 0.084. This result suggests that in vivo reactions initially run faster than the same reactions in a homogeneous space, a conclusion consistent with the anomalous character indicated by FCS analysis. We further showed that these results were compatible with our Monte Carlo simulation, in which the anomalous behavior of mobile molecules correlates with an intracellular environment described as a percolation cluster, as demonstrated by TEM analysis. The simulation confirmed that these in vivo-like properties differ from those of homogeneously concentrated environments. In addition, the simulation results indicated that the crowding level of an environment might affect the diffusion rate of reactants. Such spatial information enables us to construct realistic models for in vivo diffusion and reaction systems.
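
    The percolation picture lends itself to a quick numerical illustration. Below is a minimal sketch (not the authors' code) of how a spectral dimension can be read off from random walks on a 2-D site-percolation lattice: the return probability of a walker confined to occupied sites decays as P(t) ~ t^(-ds/2), so the log-log slope gives an estimate of ds. Lattice size, occupation probability, and walk length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, p = 200, 0.60                 # 2-D site percolation, close to the threshold (~0.593)
occupied = rng.random((L, L)) < p

steps, walkers = 1000, 300
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

# start each walker on a randomly chosen occupied site
sites = np.argwhere(occupied)
starts = sites[rng.choice(len(sites), size=walkers)]

returns = np.zeros(steps)        # how often a walker is back at its origin after t steps
for origin in starts:
    pos = origin.copy()
    for t in range(steps):
        step = moves[rng.integers(4)]
        trial = (pos + step) % L          # periodic boundaries, for simplicity
        if occupied[trial[0], trial[1]]:  # "blind ant": move only onto occupied sites
            pos = trial
        if np.array_equal(pos, origin):
            returns[t] += 1

# Return probability on a fractal cluster scales as P(t) ~ t^(-ds/2),
# so the log-log slope gives an estimate of the spectral dimension ds.
t = np.arange(10, steps)
P = returns[10:] / walkers
mask = P > 0
slope = np.polyfit(np.log(t[mask]), np.log(P[mask]), 1)[0]
print("estimated spectral dimension ds ~", round(-2 * slope, 2))
```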

    Automated Dynamic Error Analysis Methods for Optimization of Computer Arithmetic Systems

    Computer arithmetic is one of the more important topics within computer science and engineering. The earliest computer systems were designed to perform arithmetic operations, and most if not all digital systems are required to perform some sort of arithmetic as part of their normal operation. This reliance on arithmetic means that the accurate representation of real numbers within digital systems is vital, and an understanding of how these systems are implemented and of their possible drawbacks is essential in order to design and implement modern high-performance systems. At present the most widely implemented system for computer arithmetic is the IEEE754 Floating Point system; while this system is deemed to be the best available implementation, it has several features that can result in serious errors of computation if not handled correctly. Lack of understanding of these errors and their effects has led to real-world disasters on several occasions. Systems for detecting these errors are therefore highly important, and fast, efficient, and easy-to-use implementations of such detection systems are a high priority. Detection of floating-point rounding errors normally requires run-time analysis in order to be effective. Several systems have been proposed for the analysis of floating-point arithmetic, including Interval Arithmetic, Affine Arithmetic and Monte Carlo Arithmetic. While these systems have been well studied using theoretical and software-based approaches, implementations that can be applied to real-world situations have been limited by issues with implementation, performance and scalability. The majority of implementations have been software based and have not taken advantage of the performance gains associated with hardware-accelerated computer arithmetic; this is especially problematic given that systems requiring high accuracy will often also require high performance. The aim of this thesis and the associated research is to increase understanding of error and error analysis methods through the development of easy-to-use and easy-to-understand implementations of these techniques.
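
    As a concrete illustration of the Monte Carlo Arithmetic approach mentioned above, the following sketch (a toy under stated assumptions, not code from the thesis) reruns a computation many times while injecting a random relative perturbation of the order of the unit roundoff after each operation; the scatter of the results then indicates how many significant digits survive, exposing catastrophic cancellation. The perturbation model and the test expression are illustrative choices.

```python
import math
import random

EPS = 2.0 ** -53          # double-precision unit roundoff

def inexact(x):
    """Return x with a random relative perturbation of the order of the rounding error."""
    return x * (1.0 + random.uniform(-EPS, EPS))

def mca_add(a, b):
    return inexact(a + b)

def mca_sub(a, b):
    return inexact(a - b)

def cancellation_example():
    # (1 + 1e-15) - 1 suffers catastrophic cancellation
    return mca_sub(mca_add(1.0, 1e-15), 1.0)

samples = [cancellation_example() for _ in range(1000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))

# Estimated surviving significant decimal digits: -log10 of the relative spread
digits = -math.log10(std / abs(mean)) if std > 0 else 15.9
print(f"mean = {mean:.3e}, surviving significant digits ~ {digits:.1f}")
```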

    Market Stability vs. Market Resilience: Regulatory Policies Experiments in an Agent-Based Model with Low- and High- Frequency Trading

    We investigate the effects of different regulatory policies directed towards high-frequency trading (HFT) through an agent-based model of a limit order book able to generate flash crashes as the result of the interactions between low- and high-frequency (HF) traders. We analyze the impact of the imposition of minimum resting times, of circuit breakers (both ex-post and ex-ante types), of cancellation fees, and of transaction taxes on asset price volatility and on the occurrence and duration of flash crashes. In the model, low-frequency agents adopt trading rules based on chronological time and can switch between fundamentalist and chartist strategies. In contrast, high-frequency traders' activation is event-driven and depends on price fluctuations. In addition, high-frequency traders employ low-latency directional strategies that exploit market information, and they can cancel their orders depending on expected profits. Monte Carlo simulations reveal that reducing HF order cancellation, via minimum resting times or cancellation fees, or discouraging HFT via financial transaction taxes, reduces market volatility and the frequency of flash crashes. However, these policies also imply a longer duration of flash crashes. Furthermore, the introduction of an ex-ante circuit breaker markedly reduces price volatility and removes flash crashes. In contrast, ex-post circuit breakers do not affect market volatility and they increase the duration of flash crashes. Our results show that HFT-targeted policies face a trade-off between market stability and resilience: policies that reduce volatility and the incidence of flash crashes also imply a reduced ability of the market to quickly recover from a crash. The dual role of HFT, as both a cause of the flash crash and a fundamental actor in the post-crash recovery, underlies this trade-off.
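
    The following toy sketch (illustrative parameter values throughout, not the authors' model) wires together the ingredients described above: low-frequency traders acting in chronological time and switching between fundamentalist and chartist rules, event-driven high-frequency traders activated by price fluctuations, a minimum resting time gating HF cancellations, and a crude order-imbalance price impact.

```python
import random

random.seed(1)

FUNDAMENTAL = 100.0
MIN_RESTING = 5            # sweeps an HF order must rest before it may be cancelled
LF_LIFETIME = 50           # sweeps after which a stale LF order expires
HF_TRIGGER = 0.002         # relative price move that activates HF traders

price = FUNDAMENTAL
history = [price]
book = []                  # open orders: (side, limit_price, placed_at, is_hf)

for t in range(1, 2000):
    # Low-frequency trader: chronological activation, switching between a
    # fundamentalist rule and a chartist (trend-extrapolating) rule.
    if random.random() < 0.5:
        target = FUNDAMENTAL                      # fundamentalist: revert toward the fundamental
    else:
        target = price + (price - history[-1])    # chartist: extrapolate the last price change
    side = "buy" if target > price else "sell"
    book.append((side, price * (1 + random.gauss(0, 0.001)), t, False))

    # High-frequency trader: event-driven activation on a large enough price move,
    # trading against the move with a low-latency directional rule.
    last_move = (price - history[-1]) / history[-1]
    if abs(last_move) > HF_TRIGGER:
        book.append(("sell" if last_move > 0 else "buy", price, t, True))

    # Order-book maintenance: HF cancellations are allowed only once the minimum
    # resting time has elapsed; stale LF orders simply expire.
    book = [o for o in book
            if (o[3] and not (t - o[2] >= MIN_RESTING and random.random() < 0.5))
            or (not o[3] and t - o[2] < LF_LIFETIME)]

    # Crude price impact: the buy/sell imbalance of resting orders moves the price.
    imbalance = sum(+1 if o[0] == "buy" else -1 for o in book)
    history.append(price)
    price *= 1 + 0.0001 * imbalance + random.gauss(0, 0.001)

print(f"final price {price:.2f}, resting orders {len(book)}")
```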

    Rugged free-energy landscapes in disordered spin systems

    This thesis is an attempt to provide a new outlook on complex systems, as well as some physical answers for certain models, taking a computational approach. We have focused on disordered systems, addressing two traditional problems in three spatial dimensions: the Edwards-Anderson spin glass and the Diluted Antiferromagnet in a Field (the physical realisation of the random-field Ising model). These systems have been studied by means of large-scale Monte Carlo simulations, exploiting a variety of platforms, which include the Janus special-purpose supercomputer. Two main themes are explored throughout: a) the relationship between the (experimentally unreachable) equilibrium phase and the non-equilibrium evolution, and b) the computation and efficient treatment of rugged free-energy landscapes. We perform a thorough study of the low-temperature phase of the D=3 Edwards-Anderson spin glass, where we establish a time-length dictionary and a finite-time scaling formalism to link, in a quantitative way, the experimental non-equilibrium regime and the finite-size equilibrium phase. At the experimentally relevant scales, the replica symmetry breaking theory emerges as the appropriate theoretical picture. We also introduce Tethered Monte Carlo, a general strategy for the study of systems with rugged free-energy landscapes. This formalism provides a general method to guide the exploration of the configuration space by constraining one or more reaction coordinates. From these tethered simulations, the Helmholtz potential associated with the reaction coordinates is reconstructed, yielding all the information about the system. We use this method to provide a comprehensive picture of the critical behaviour in the Diluted Antiferromagnet in a Field. Comment: PhD thesis, defended at the Universidad Complutense de Madrid on October 21, 201
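
    As a rough illustration of the constrained-reaction-coordinate idea behind Tethered Monte Carlo, the sketch below runs a small 2-D Ising model with a stiff harmonic tether on the magnetization (closer in spirit to umbrella-style thermodynamic integration than to the exact tethered formalism of the thesis): at each tether point the mean constraint force approximates the derivative of the effective potential, which is then reconstructed by integrating along the reaction coordinate. Lattice size, temperature, tether stiffness, and run lengths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
L, T, k = 16, 2.5, 50.0          # lattice side, temperature, tether stiffness (per spin)
N = L * L
spins = rng.choice([-1, 1], size=(L, L))
M = spins.sum()                   # running total magnetization

def sweep(m_hat):
    """One Metropolis sweep under  H = -sum s_i s_j + (k*N/2) * (M/N - m_hat)^2."""
    global M
    for _ in range(N):
        i, j = rng.integers(L), rng.integers(L)
        s = spins[i, j]
        nn = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
        m_old, m_new = M / N, (M - 2 * s) / N
        dE = 2 * s * nn + 0.5 * k * N * ((m_new - m_hat) ** 2 - (m_old - m_hat) ** 2)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -s
            M -= 2 * s

targets = np.linspace(-0.9, 0.9, 19)
mean_force = []
for m_hat in targets:
    for _ in range(100):                       # short equilibration at this tether point
        sweep(m_hat)
    est = []
    for _ in range(200):                       # measurement sweeps
        sweep(m_hat)
        est.append(k * N * (m_hat - M / N))    # ~ dF/dm at m_hat for a stiff tether
    mean_force.append(np.mean(est))

# Integrate the mean force along the reaction coordinate to get the potential profile
potential = np.concatenate(([0.0], np.cumsum(
    0.5 * (np.array(mean_force[:-1]) + np.array(mean_force[1:])) * np.diff(targets))))
for m_hat, F in zip(targets, potential):
    print(f"m_hat = {m_hat:+.2f}   F ~ {F:+.2f}")
```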

    Ground-level atmospheric gamma-ray flux measurements in the 1-6 MeV range

    This thesis deals with the measurement of the atmospheric gamma-ray flux in the 1-6 MeV range at ground level. The measurements were carried out using a Compton gamma-ray telescope developed at the University of New Hampshire, which utilizes the Compton scattering principle to detect and image gamma-ray sources. The telescope was used to measure ground-level atmospheric gamma rays at four locations (Leadville (10200 ft), Boulder (5430 ft), Mt. Washington (6072 ft) and Durham (80 ft)), which ranged in atmospheric depth from 720 to 1033 g/cm² and in local cutoff rigidity from 1.4 to 2.9 GV. Data were collected over a two-week period at each location during 1987. The results yielded, for the first time, statistically significant atmospheric gamma-ray flux values at large depths in the atmosphere. The analysis provided the differential energy flux (photons/cm²-s-sr-MeV) at various zenith angles (10°-40°) in the 1-6 MeV energy range. The zenith-angle dependence of the differential energy flux followed cos^n(θ), with n ≈ 2.8 at higher altitudes (Leadville and Mt. Washington) and n ≈ 2.0 deeper in the atmosphere (Boulder and Durham). The vertical intensity fitted a power-law spectrum of index ≈ 1.2, with the spectrum softening at large atmospheric depths. The atmospheric depth dependence shows an e-folding depth of 153 g/cm². Using this depth dependence, all existing measurements below 700 g/cm² were normalized to sea level, and good agreement is seen among the normalized sea-level fluxes from different experiments. Compared with existing theoretical and Monte Carlo calculations in the 1-10 MeV range, the measurements indicate a softer power-law spectrum, pointing to the need to further examine the calculations. Combining the UNH results with the University of California, Riverside measurements indicates a weak rigidity dependence in the vertical atmospheric gamma-ray intensity.
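
    Two of the quantitative steps described above, the cos^n(θ) zenith-angle fit and the normalization of fluxes measured at different atmospheric depths to sea level via the 153 g/cm² e-folding depth, are easy to sketch. The data points below are made-up illustrative numbers, not the thesis measurements.

```python
import numpy as np

E_FOLD = 153.0          # e-folding depth from the abstract, in g/cm^2
SEA_LEVEL = 1033.0      # atmospheric depth at sea level, in g/cm^2

def normalize_to_sea_level(flux, depth_g_cm2):
    """Scale a flux measured at some atmospheric depth to its sea-level equivalent."""
    return flux * np.exp(-(SEA_LEVEL - depth_g_cm2) / E_FOLD)

def fit_zenith_index(theta_deg, flux):
    """Fit flux ~ cos^n(theta); n is the log-log slope of flux against cos(theta)."""
    x = np.log(np.cos(np.radians(theta_deg)))
    n, _ = np.polyfit(x, np.log(flux), 1)
    return n

# Hypothetical zenith-angle data roughly following cos^2.8, with a little noise
theta = np.array([10.0, 20.0, 30.0, 40.0])
flux = 1e-2 * np.cos(np.radians(theta)) ** 2.8 \
       * (1 + 0.02 * np.random.default_rng(7).standard_normal(theta.size))

print("fitted zenith index n ~", round(fit_zenith_index(theta, flux), 2))
print("flux measured at 720 g/cm^2, scaled to sea level:",
      normalize_to_sea_level(1e-2, 720.0))
```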