
    Interpretation of the OGLE Q2237+0305 microlensing light-curve

    The four bright images of the gravitationally lensed quasar Q2237+0305 are being monitored from the ground (e.g. by the OGLE collaboration and Apache Point Observatory) in the hope of observing a high magnification event (HME). Over the past three seasons (1997-1999) the OGLE collaboration has produced microlensing light-curves with unprecedented coverage. These demonstrate smooth variability that is independent between the images, and therefore due to microlensing (Wozniak et al. 2000a,b; OGLE web page). We have retrospectively compared probability functions for high-magnification event parameters with several observed light-curve features. We conclude that the 1999 image C peak was due to the source having passed outside a cusp rather than to a caustic crossing. In addition, we find that the image C light-curve shows evidence for a caustic crossing between the 1997 and 1998 observing seasons involving the appearance of new critical images. Our models predict that the next image C event is most likely to arrive 500 days after the 1999 peak, but with a large uncertainty (100-2000 days). Finally, given the image A light-curve derivative at the end of the 1999 observing season, our modelling suggests that a caustic crossing will occur between the 1999 and 2000 observing seasons, implying a minimum for the image A light-curve ~1-1.5 magnitudes fainter than the November 1999 level.
    Comment: 11 pages, 15 figures. Accepted for publication in M.N.R.A.
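The magnification behaviour the abstract relies on, where a source inside a fold caustic gains an extra image pair whose flux diverges as the caustic is approached, can be sketched with the standard point-source fold approximation. Everything here (the baseline magnification, the caustic strength `k`, the crossing epoch `t_c`) is an illustrative parameter choice, not a fit to the Q2237+0305 data.

```python
import numpy as np

def fold_caustic_lightcurve(t, t_c, mu0=1.0, k=5.0):
    """Point-source magnification near a fold caustic crossing.

    Inside the caustic (t < t_c) the extra image pair contributes
    mu ~ k / sqrt(t_c - t); outside it only the baseline mu0 remains.
    t_c, mu0 and k are illustrative values, not fitted parameters.
    """
    t = np.asarray(t, dtype=float)
    mu = np.full_like(t, mu0)
    inside = t < t_c
    mu[inside] += k / np.sqrt(t_c - t[inside])
    return mu

# synthetic season approaching a crossing at day 100
days = np.linspace(0.0, 99.0, 100)
curve = fold_caustic_lightcurve(days, t_c=100.0)
mags = -2.5 * np.log10(curve)  # magnification -> relative magnitudes (brighter = more negative)
```

The characteristic inverse-square-root rise is what distinguishes a caustic crossing from the gentler, symmetric peak produced when the source passes outside a cusp, which is the discrimination the abstract applies to the 1999 image C peak.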

    Predicting caustic crossing high magnification events in Q2237+0305

    The central regions of the gravitationally lensed quasar Q2237+0305 can be indirectly resolved on nano-arcsecond scales if viewed spectrophotometrically during a microlensing high magnification event (HME). Q2237+0305 is currently being monitored from the ground (e.g. by the OGLE collaboration and Apache Point Observatory), with the goal, among others, of triggering ground- and spacecraft-based target of opportunity (TOO) observations of an HME. In this work we investigate the rate of change (trigger) in image brightness that signals an imminent HME and, importantly, the separation between the trigger and the event peak. In addition, we produce colour-dependent model light-curves by combining high-resolution microlensing simulations with a realistic model for a thermal accretion disc source. We make hypothetical target of opportunity spectroscopic observations using our determination of the appropriate trigger as a guide. We find that if the source spectrum varies with source radius, a three-observation TOO program should be able to observe a microlensing change in the continuum slope following a light-curve trigger with a success rate of >80%.
    Comment: 17 pages, 16 figures, accepted for publication in M.N.R.A.
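A trigger of the kind described, a threshold on the observed rate of brightening, reduces to a simple sliding-window slope test on the monitoring photometry. The threshold and window length below are illustrative placeholders, not the calibrated values the paper derives from its simulations.

```python
import numpy as np

def detect_trigger(times, mags, slope_threshold=-0.01, window=3):
    """Return the epoch at which the light-curve first brightens faster
    than `slope_threshold` mag/day, estimated by a linear fit over
    `window`+1 consecutive points. Brightening means the magnitude
    decreases, hence the negative threshold. Both parameters are
    illustrative, not the paper's calibrated trigger."""
    for i in range(len(times) - window):
        slope = np.polyfit(times[i:i + window + 1], mags[i:i + window + 1], 1)[0]
        if slope < slope_threshold:
            return times[i + window]
    return None  # no trigger this season

# synthetic season: flat at 17.0 mag, then brightening 0.05 mag/day after day 10
times = np.arange(0.0, 20.0)
mags = 17.0 - 0.05 * np.clip(times - 10.0, 0.0, None)
trigger_day = detect_trigger(times, mags)
```

The paper's key extra ingredient is the statistical separation between such a trigger and the event peak, which is what makes a three-observation TOO follow-up schedulable in advance.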

    High-resolution distributed sampling of bandlimited fields with low-precision sensors

    The problem of sampling a discrete-time sequence of spatially bandlimited fields with a bounded dynamic range, in a distributed, communication-constrained processing environment is addressed. A central unit, having access to the data gathered by a dense network of fixed-precision sensors, operating under stringent inter-node communication constraints, is required to reconstruct the field snapshots to maximum accuracy. Both deterministic and stochastic field models are considered. For stochastic fields, results are established in the almost-sure sense. The feasibility of having a flexible tradeoff between the oversampling rate (sensor density) and the analog-to-digital converter (ADC) precision, while achieving an exponential accuracy in the number of bits per Nyquist-interval per snapshot, is demonstrated. This exposes an underlying ``conservation of bits'' principle: the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed along the amplitude axis (sensor precision) and space (sensor density) in an almost arbitrary discrete-valued manner, while retaining the same (exponential) distortion-rate characteristics. Achievable information scaling laws for field reconstruction over a bounded region are also derived: with $N$ one-bit sensors per Nyquist-interval, $\Theta(\log N)$ Nyquist-intervals, and total network bitrate $R_{net} = \Theta((\log N)^2)$ (per-sensor bitrate $\Theta((\log N)/N)$), the maximum pointwise distortion goes to zero as $D = O((\log N)^2/N)$ or $D = O(R_{net} 2^{-\beta \sqrt{R_{net}}})$. This is shown to be possible with only nearest-neighbor communication, distributed coding, and appropriate interpolation algorithms. For a fixed, nonzero target distortion, the number of fixed-precision sensors and the network rate needed is always finite.
    Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal Processing and re-submitted to the IEEE Transactions on Information Theor
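The amplitude-axis half of the "conservation of bits" principle, distortion decaying exponentially in the per-sample bit budget, is just the classical rate-distortion behaviour of a uniform quantizer, which a few lines can verify numerically. This sketch shows only the precision axis; realizing the same rate with dense one-bit sensors requires the dithering and distributed coding developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 10000)  # snapshots of a bounded-amplitude field

def quantize(x, bits):
    """Uniform mid-rise quantizer on [-1, 1) with 2**bits levels.
    Its distortion is ~ step**2 / 12 = (1/3) * 2**(-2*bits),
    i.e. exponentially small in the number of bits."""
    step = 2.0 / (2 ** bits)
    idx = np.clip(np.floor((x + 1.0) / step), 0, 2 ** bits - 1)
    return -1.0 + (idx + 0.5) * step

mse4 = np.mean((quantize(x, 4) - x) ** 2)
mse8 = np.mean((quantize(x, 8) - x) ** 2)
# adding 4 bits of ADC precision shrinks the distortion by roughly 2**8
```

The paper's contribution is that the same exponential distortion-rate curve survives when those bits are instead spent on sensor density, in an almost arbitrary split between the two axes.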

    Efficient Dynamic Importance Sampling of Rare Events in One Dimension

    Exploiting stochastic path integral theory, we obtain \emph{by simulation} substantial gains in efficiency for the computation of reaction rates in one-dimensional, bistable, overdamped stochastic systems. Using a well-defined measure of efficiency, we compare implementations of ``Dynamic Importance Sampling'' (DIMS) methods to unbiased simulation. The best DIMS algorithms are shown to increase efficiency by factors of approximately 20 for a $5 k_B T$ barrier height and 300 for $9 k_B T$, compared to unbiased simulation. The gains result from close emulation of natural (unbiased), instanton-like crossing events with artificially decreased waiting times between events that are corrected for in rate calculations. The artificial crossing events are generated using the closed-form solution to the most probable crossing event described by the Onsager-Machlup action. While the best biasing methods require the second derivative of the potential (resulting from the ``Jacobian'' term in the action, which is discussed at length), algorithms employing solely the first derivative do nearly as well. We discuss the importance of one-dimensional models to larger systems, and suggest extensions to higher-dimensional systems.
    Comment: version to be published in Phys. Rev.
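The unbiased baseline against which DIMS is measured is plain overdamped (Brownian) dynamics in a bistable potential, where most of the simulation time is spent waiting in a well rather than crossing. A minimal Euler-Maruyama sketch of one such rare-event waiting time, with illustrative parameters and no importance-sampling bias:

```python
import numpy as np

def first_passage_time(kT=0.5, dt=1e-3, seed=0, max_steps=10_000_000):
    """Unbiased overdamped dynamics in the bistable potential
    U(x) = (x**2 - 1)**2, which has wells at x = -1, +1 and a barrier
    of height 1 at x = 0. Integrated with Euler-Maruyama:
    x += -U'(x) dt + sqrt(2 kT dt) * N(0, 1).
    Returns the waiting time for a -1 -> +1 crossing. This is the
    baseline DIMS accelerates; no biasing is implemented here, and
    the parameter values are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    x = -1.0
    sigma = np.sqrt(2.0 * kT * dt)
    for step in range(max_steps):
        x += -4.0 * x * (x * x - 1.0) * dt + sigma * rng.standard_normal()
        if x >= 1.0:
            return (step + 1) * dt
    return float("inf")  # no crossing within the step budget
```

Because the mean waiting time grows like $e^{\Delta U / k_B T}$ (Kramers scaling), the cost of the unbiased approach explodes with barrier height, which is exactly why the reported efficiency gains grow from ~20 at $5 k_B T$ to ~300 at $9 k_B T$.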

    Dual-Topology Hamiltonian-Replica-Exchange Overlap Histogramming Method to Calculate Relative Free Energy Difference in Rough Energy Landscape

    A novel overlap histogramming method based on the Dual-Topology Hamiltonian-Replica-Exchange (DT-HERM) simulation technique is presented to efficiently calculate the relative free energy difference in a rough energy landscape, in which multiple conformers coexist and are separated by large energy barriers. The proposed method is based on the realization that both DT-HERM exchange efficiency and the confidence of free energy determination in the overlap histogramming method depend on the same criterion: the overlap of neighboring states' energy-derivative distributions. In this paper, we demonstrate this new methodology by calculating the free energy difference between the amino acids leucine and asparagine, an identified challenging system for free energy simulations.
    Comment: 14 pages with 4 figure
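The role of distribution overlap can be seen in the simplest free-energy estimator of this family, one-sided free-energy perturbation (Zwanzig's formula), which is reliable only when the sampled energy-difference distribution overlaps the target one. This is a simpler estimator than the paper's DT-HERM overlap histogramming, shown here only to make the overlap criterion concrete; the Gaussian samples are synthetic.

```python
import numpy as np

def zwanzig_delta_f(dU, kT=1.0):
    """One-sided free-energy perturbation: dF = -kT ln <exp(-dU/kT)>_0.
    The estimate is dominated by the low-dU tail, so it converges only
    when the distributions of dU in the two states overlap, the same
    overlap criterion the histogramming method monitors explicitly."""
    return -kT * np.log(np.mean(np.exp(-dU / kT)))

# synthetic Gaussian energy differences dU ~ N(mu=2.0, sigma=0.5)
rng = np.random.default_rng(1)
dU = rng.normal(2.0, 0.5, 200000)
est = zwanzig_delta_f(dU)
# analytic result for Gaussian dU: dF = mu - sigma**2 / (2 kT) = 1.875
```

As sigma grows relative to kT, the same estimator's variance blows up, which is why methods like the one in the abstract insert intermediate states until neighbors overlap.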

    Pulse processing routines for neutron time-of-flight data

    A pulse shape analysis framework is described, which was developed for n_TOF-Phase3, the third phase in the operation of the n_TOF facility at CERN. The most notable feature of this new framework is the adoption of generic pulse shape analysis routines, characterized by a minimal number of explicit assumptions about the nature of the pulses. The aim of these routines is to be applicable to a wide variety of detectors, thus facilitating the introduction of new detectors or detector types into the analysis framework. The operational details of the routines are tailored to the specific requirements of particular detectors by adjusting a set of external input parameters. Pulse recognition, baseline calculation and the pulse shape fitting procedure are described. Special emphasis is put on their computational efficiency, since the most basic implementations of these conceptually simple methods are often computationally inefficient.
    Comment: 13 pages, 10 figures, 5 table
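A shape-agnostic pulse finder of the kind the abstract describes can be reduced to two steps: a robust baseline/noise estimate, and grouping of above-threshold excursions into pulse candidates. The parameter names and values below are illustrative stand-ins for the framework's external input parameters, not its actual interface.

```python
import numpy as np

def find_pulses(waveform, threshold=5.0, baseline_window=100):
    """Generic pulse recognition with minimal assumptions about pulse shape:
    the baseline is the median of the leading samples, the noise level is
    the scaled median absolute deviation (robust to a stray pulse in the
    window), and any contiguous run of samples deviating by more than
    threshold * noise is reported as a (start, stop) pulse candidate.
    `threshold` and `baseline_window` play the role of external input
    parameters; the defaults are illustrative."""
    base = np.median(waveform[:baseline_window])
    mad = np.median(np.abs(waveform[:baseline_window] - base))
    noise = max(1.4826 * mad, 1e-12)  # MAD -> Gaussian-equivalent sigma
    above = np.abs(waveform - base) > threshold * noise
    idx = np.flatnonzero(above)
    if idx.size == 0:
        return []
    groups = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
    return [(int(g[0]), int(g[-1])) for g in groups]

# synthetic digitizer record: Gaussian noise plus one rectangular pulse
rng = np.random.default_rng(2)
wf = rng.normal(0.0, 0.1, 1000)
wf[500:510] += 5.0
pulses = find_pulses(wf)
```

The vectorized grouping (`flatnonzero`/`split` instead of a per-sample Python loop) reflects the abstract's point that naive implementations of these conceptually simple steps are often the computational bottleneck.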

    Modeling and Propagation of Noisy Waveforms in Static Timing Analysis

    A technique based on the sensitivity of the output to the input waveform is presented for accurate propagation of delay information through a gate for the purpose of static timing analysis (STA) in the presence of noise. Conventional STA tools represent a waveform by its arrival time and slope. However, this is not an accurate way of modeling the waveform for the purpose of noise analysis. The key contribution of our work is the development of a method that allows efficient propagation of equivalent waveforms throughout the circuit. Experimental results demonstrate the higher accuracy of the proposed sensitivity-based gate delay propagation technique, SGDP, compared to the best of existing approaches. SGDP is compatible with the current level of gate characterization in conventional ASIC cell libraries, and as a result it can be easily incorporated into commercial STA tools to improve their accuracy.
    Comment: Submitted on behalf of EDAA (http://www.edaa.com/
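The conventional abstraction the paper improves on reduces every waveform to an (arrival time, slew) pair and propagates it through linear gate characterizations looked up from the cell library. A toy version of that baseline, with hypothetical coefficients (all times in ps, not from any real library):

```python
def propagate_gate(arrival, slew, d0=50.0, k_d=0.2, s0=15.0, k_s=0.1):
    """Conventional STA gate propagation: the waveform is reduced to
    (arrival, slew), and the gate is a linear characterization
    delay = d0 + k_d * slew, output slew = s0 + k_s * slew.
    All coefficients are hypothetical illustration values in ps.
    A noisy waveform and a clean ramp with the same (arrival, slew)
    propagate identically here, which is precisely the inaccuracy
    the sensitivity-based SGDP approach addresses."""
    out_arrival = arrival + d0 + k_d * slew
    out_slew = s0 + k_s * slew
    return out_arrival, out_slew

a, s = propagate_gate(100.0, 20.0)
```

Because SGDP stays within this table-lookup style of characterization while adding waveform-shape sensitivities on top, it can reuse existing ASIC cell libraries, which is the compatibility claim in the abstract.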