
    Regulation, valuation and systemic liquidity.

    It is a commonly held view that International Financial Reporting Standards (IFRSs), adopted by the European Union in 2005 and by other jurisdictions, compounded the recent financial crisis. Application of the IAS 39 rule that governs loan-loss provisions and extends mark-to-market valuation of assets meant that when credit prices fell sharply in 2007 and assets were revalued at the new lower prices, institutions needed to raise capital by selling assets, which pushed prices down further, causing more revaluations and more selling in a vicious circle. Mark-to-market volatility added to this unstable dynamic by keeping new buyers away. Fair value accounting rules are pro-cyclical and can contribute to the systemic disappearance of liquidity. The price of assets if they were to be sold immediately fell substantially below the price of the same assets if they were to be held to maturity or for some period beyond the crisis. This liquidity premium was no longer a fraction of a percentage point, but tens of percentage points. A number of observers have concluded that mark-to-market accounting should be suspended during a crisis. On its own, I believe this initiative would further weaken incentives for responsible lending in the good times. Nor would it solve the problem in bad times. The pro-cyclical use of market prices is not the preserve of accounting standards; it also lies at the heart of modern financial regulation. Financial crashes are not random. They always follow booms. First, offering forbearance from mark-to-market accounting or other rules during a crisis, yet applying these rules at other times, such as during the preceding boom, would promote excessive lending and leverage in the good times. This asymmetry would contribute to more frequent and severe crashes. Second, crises are a time when a rumour becomes a self-fulfilling prophecy, as panic and fear spread. It is, arguably, not the time to generate a rise in uncertainty by changing accounting standards. There is room for a revision to the application of mark-to-market rules, but not a revision based on relying on the messenger's every last word in good times and shooting him in the bad times. But the mechanisms that lead market participants to greet price declines with sell orders do not all stem from fair value accounting. Current prices, including spot and forward prices, play an important role in the market risk and credit risk management systems approved by financial regulators. Risk limits and sell orders are triggered in response to a rise in price volatility and/or a fall in price. The very philosophy of current banking regulation (risk sensitivity) is about incorporating market prices into the assessment of and response to risk. It should be no surprise that if prices, both for current and future delivery, are pro-cyclical, then placing increasing emphasis on price in the management and regulation of risk will lead us to systemic collapse. This article examines the role of valuation and systemic liquidity and argues that an approach to the application of mark-to-market accounting and market prices of risk that is driven more by an economic view can improve the systemic resilience of the financial system.
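
    The feedback loop described above (price fall, revaluation, forced sale, further price fall) can be made concrete with a few lines of code. Below is a minimal toy simulation of such a fire-sale spiral; the balance-sheet numbers, price-impact coefficient and capital requirement are illustrative assumptions, not figures from the article.

    def fire_sale_spiral(units=1000.0, price=1.0, debt=900.0,
                         min_ratio=0.08, impact=0.3, shock=0.05,
                         max_rounds=30):
        """Toy mark-to-market fire-sale loop; all parameters are illustrative."""
        price *= (1.0 - shock)              # exogenous fall in credit prices
        history = []
        for _ in range(max_rounds):
            assets = units * price          # mark-to-market revaluation
            equity = assets - debt
            if equity <= 0:                 # the spiral ran all the way to insolvency
                history.append(("insolvent", round(price, 4)))
                break
            ratio = equity / assets
            if ratio >= min_ratio:          # capital is adequate again; loop stops
                break
            # Sell just enough assets (repaying debt) to restore the minimum ratio.
            sale = assets - equity / min_ratio
            units -= sale / price
            debt -= sale
            # The forced sale itself depresses the market price further.
            price *= (1.0 - impact * sale / assets)
            history.append((round(price, 4), round(ratio, 4)))
        return history

    print(fire_sale_spiral())

    With a modest price-impact coefficient the loop converges after a round or two; with a larger one, as above, a 5% initial shock is enough to drive the toy balance sheet to insolvency, which is the pro-cyclicality the article describes.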

    Laser cooling of new atomic and molecular species with ultrafast pulses

    We propose a new laser cooling method for atomic species whose level structure makes traditional laser cooling difficult. For instance, laser cooling of hydrogen requires single-frequency vacuum-ultraviolet light, while multielectron atoms need single-frequency light at many widely separated frequencies. These restrictions can be eased by laser cooling on two-photon transitions with ultrafast pulse trains. Laser cooling of hydrogen, antihydrogen, and many other species appears feasible, and extension of the technique to molecules may be possible.
    Comment: revision of quant-ph/0306099, submitted to PR
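
    For a rough sense of scale in the hydrogen case, the sketch below computes the recoil from a single two-photon absorption on the 1S-2S transition. The 243 nm per-photon wavelength is standard background knowledge about 1S-2S, not a figure quoted in the abstract, and the co-propagating-beam geometry is an assumption.

    import math

    hbar = 1.054571e-34    # J s
    kB   = 1.380649e-23    # J/K
    m_H  = 1.6735e-27      # kg, hydrogen atom mass
    lam  = 243e-9          # m, per-photon wavelength of the 1S-2S two-photon drive

    k = 2 * math.pi / lam
    v_recoil = 2 * hbar * k / m_H                    # kick from one co-propagating two-photon absorption
    E_kick_K = (2 * hbar * k) ** 2 / (2 * m_H * kB)  # kinetic energy of that kick, in kelvin

    print(f"two-photon recoil velocity: {v_recoil:.2f} m/s")   # ~3.3 m/s
    print(f"energy of one kick: {E_kick_K * 1e3:.2f} mK")      # sub-mK scale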

    Processing Issues in Top-Down Approaches to Quantum Computer Development in Silicon

    We describe critical processing issues in our development of single atom devices for solid-state quantum information processing. Integration of single 31P atoms with control gates and single electron transistor (SET) readout structures is addressed in a silicon-based approach. Results on electrical activation of low energy (15 keV) P implants in silicon show a strong dose effect on the electrical activation fractions. We identify dopant segregation to the SiO2/Si interface during rapid thermal annealing as a dopant loss channel and discuss measures for minimizing it. Silicon nanowire SET pairs with nanowire widths of 10 to 20 nm are formed by electron beam lithography in silicon-on-insulator (SOI). We present first results from Coulomb blockade experiments and discuss issues of control gate integration for sub-40 nm gate pitch levels.
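
    For orientation, the charging energy of an SET island in the quoted 10 to 20 nm size range can be estimated with a simple self-capacitance model. The sphere-in-silicon geometry below is a crude assumption; only the island sizes come from the abstract.

    import math

    e    = 1.602176e-19   # C
    eps0 = 8.854188e-12   # F/m
    epsr = 11.7           # relative permittivity of silicon

    for d_nm in (10, 20):
        r = d_nm * 1e-9 / 2
        C = 4 * math.pi * eps0 * epsr * r        # self-capacitance of a sphere in Si
        Ec_meV = e**2 / (2 * C) / e * 1e3        # charging energy e^2/2C, in meV
        print(f"{d_nm} nm island: C ~ {C*1e18:.1f} aF, Ec ~ {Ec_meV:.0f} meV")

    The resulting few-to-ten meV charging energies set the temperature scale at which Coulomb blockade is resolvable, which is why gate pitch and island size matter for the readout structures discussed above.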

    Short-Pulse, Compressed Ion Beams at the Neutralized Drift Compression Experiment

    We have commenced experiments with intense short pulses of ion beams on the Neutralized Drift Compression Experiment (NDCX-II) at Lawrence Berkeley National Laboratory, with a 1-mm beam spot size and a pulse duration of 2.5 ns full width at half maximum. The ion kinetic energy is 1.2 MeV. To enable the short pulse duration and mm-scale focal spot radius, the beam is neutralized in a 1.5-meter-long drift compression section following the last accelerator cell. A short-focal-length solenoid focuses the beam in the presence of the volumetric plasma near the target. In the accelerator, the line-charge density increases due to the velocity ramp imparted to the beam bunch. The scientific topics to be explored are warm dense matter, the dynamics of radiation damage in materials, and intense beam and beam-plasma physics, including select topics of relevance to the development of heavy-ion drivers for inertial fusion energy. Below the transition to melting, the short beam pulses offer an opportunity to study the multi-scale dynamics of radiation-induced damage in materials with pump-probe experiments, and to stabilize novel metastable phases of materials when short-pulse heating is followed by rapid quenching. First experiments used a lithium ion source; a new plasma-based helium ion source shows much greater charge delivered to the target.
    Comment: 4 pages, 2 figures, 1 table. Submitted to the proceedings for the Ninth International Conference on Inertial Fusion Sciences and Applications, IFSA 201
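
    The kinematics of the velocity-ramp compression mentioned above can be sketched in a few lines. In the toy calculation below, the 1.2 MeV kinetic energy and 1.5 m drift length come from the abstract; the 7Li ion mass (the text mentions a lithium source), the incoming pulse duration, and the perfectly linear ramp are illustrative assumptions.

    import math

    amu  = 1.66054e-27
    m    = 7 * amu                        # kg, 7Li ion (assumed species)
    E_k  = 1.2e6 * 1.602e-19              # J, 1.2 MeV
    v0   = math.sqrt(2 * E_k / m)         # mean velocity, non-relativistic

    L_drift = 1.5                          # m, neutralized drift section
    tau0    = 30e-9                        # s, assumed incoming pulse duration
    L0      = v0 * tau0                    # initial bunch length

    # Ramp needed for the tail to just catch the head at the end of the drift:
    dv = L0 * v0 / L_drift
    t_drift = L_drift / v0
    L_final = L0 - dv * t_drift            # -> 0 for a perfect linear ramp

    print(f"mean velocity: {v0:.3e} m/s")
    print(f"required head-to-tail ramp: {dv / v0 * 100:.1f} %")
    print(f"ideal residual bunch length: {L_final:.2e} m")

    In practice space charge, plasma neutralization quality and ramp imperfections limit the compression, which is why the experiment reports 2.5 ns rather than the idealized zero.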

    Detection of low energy single ion impacts in micron scale transistors at room temperature

    We report the detection of single ion impacts through monitoring of changes in the source-drain currents of field effect transistors (FETs) at room temperature. Implant apertures are formed in the interlayer dielectrics and gate electrodes of planar, micro-scale FETs by electron beam assisted etching. FET currents increase due to the generation of positively charged defects in the gate oxides when ions (121Sb 12+ and 14+, Xe 6+; 50 to 70 keV) impinge on the channel regions. Implant damage is repaired by rapid thermal annealing, enabling iterative cycles of device doping and electrical characterization for the development of single atom devices and studies of dopant fluctuation effects.
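
    The size of signal such a scheme can expect is easy to bound with a parallel-plate estimate of the threshold shift produced by one positive oxide charge. The oxide thickness, gate area and transconductance below are illustrative assumptions, not values from the paper; a single impact generates several such defects, so the real signal is correspondingly larger.

    e      = 1.602e-19      # C
    eps0   = 8.854e-12      # F/m
    eps_ox = 3.9            # SiO2 relative permittivity

    t_ox = 10e-9            # m, assumed gate oxide thickness
    area = 1e-12            # m^2, assumed 1 um x 1 um channel
    C_gate = eps0 * eps_ox / t_ox * area

    dVth = e / C_gate       # threshold shift per elementary charge at the interface
    gm   = 1e-5             # A/V, assumed transconductance at the bias point
    dI   = gm * dVth        # resulting source-drain current step

    print(f"gate capacitance: {C_gate * 1e15:.2f} fF")
    print(f"threshold shift per charge: {dVth * 1e6:.0f} uV -> dI ~ {dI * 1e12:.0f} pA")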

    Congested Traffic States in Empirical Observations and Microscopic Simulations

    We present data from several German freeways showing different kinds of congested traffic forming near road inhomogeneities, specifically lane closings, intersections, or uphill gradients. The states are localized or extended, homogeneous or oscillating. Combined states are observed as well, like the coexistence of moving localized clusters and clusters pinned at road inhomogeneities, or regions of oscillating congested traffic upstream of nearly homogeneous congested traffic. The experimental findings are consistent with a recently proposed theoretical phase diagram for traffic near on-ramps [D. Helbing, A. Hennecke, and M. Treiber, Phys. Rev. Lett. 82, 4360 (1999)]. We simulate these situations with a novel continuous microscopic single-lane model, the "intelligent driver model" (IDM), using the empirical boundary conditions. All observations, including the coexistence of states, are qualitatively reproduced by describing inhomogeneities with local variations of one model parameter. We show that the results of the microscopic model can be understood by formulating the theoretical phase diagram for bottlenecks in a more general way. In particular, a local drop of the road capacity induced by parameter variations has practically the same effect as an on-ramp.
    Comment: Now published in Phys. Rev. E. Minor changes suggested by a referee are incorporated; full bibliographic info added. For related work see http://www.mtreiber.de/ and http://www.helbing.org
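
    The IDM acceleration law itself is compact enough to state as code. The functional form below is the standard published IDM; the parameter values are typical choices from the literature, not necessarily the ones calibrated in this article.

    import math

    def idm_acceleration(v, v_lead, gap,
                         v0=33.3,   # m/s, desired speed (~120 km/h)
                         T=1.6,     # s, safe time headway
                         a=0.73,    # m/s^2, maximum acceleration
                         b=1.67,    # m/s^2, comfortable deceleration
                         delta=4.0, # acceleration exponent
                         s0=2.0):   # m, minimum jam distance
        """Return dv/dt for a follower at speed v behind a leader at v_lead."""
        dv = v - v_lead                               # approach rate
        s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
        return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

    # Example: closing fast on a slower leader triggers strong braking.
    print(idm_acceleration(v=30.0, v_lead=20.0, gap=50.0))   # about -9.8 m/s^2

    A bottleneck in the sense of the abstract can then be modeled by locally varying one parameter (e.g. raising T or lowering v0 over a stretch of road), which locally drops the capacity just as an on-ramp would.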

    Low dose CT vs plain abdominal radiography for the investigation of the acute abdomen

    Background: To compare low-dose abdominal computed tomography (LDCT) with plain abdominal radiography (AR) in the primary investigation of acute abdominal pain to determine if there is a difference in diagnostic yield, the number of additional investigations required and hospital length of stay (LOS). Methods: This randomized controlled trial was approved by the institutional review board, and informed consent was obtained. Patients presenting to the emergency department with an acute abdomen and who would normally be investigated with AR were randomized to either AR or LDCT. The estimated radiation dose of the LDCT protocol was 2–3 mSv compared to 1.1 mSv for AR. Pearson's chi-square and the independent samples t-test were used for the statistical analysis. Results: A total of 142 patients were eligible, and after exclusions and omitting those with incomplete data, 55 patients remained for analysis in the AR arm and 53 in the LDCT arm. A diagnosis could be obtained in 12 (21.8%) patients investigated with AR compared to 34 (64.2%) for LDCT (P < 0.001). Twenty-eight (50.9%) patients in the AR group required further imaging during their admission compared to 14 (26.4%) in the LDCT group (P = 0.009). There was no difference in the median hospital LOS (3.84 days for AR versus 4.24 days for LDCT, P = 0.83). Conclusion: LDCT demonstrates a superior diagnostic yield over AR and reduces the number of subsequent imaging tests for a minimal cost in radiation exposure. However, there is no difference in the overall hospital LOS between the two imaging strategies.
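
    The headline diagnostic-yield comparison can be checked directly from the counts reported in the abstract (12/55 diagnosed with AR vs 34/53 with LDCT). A minimal sketch with scipy, assuming a standard 2x2 Pearson chi-square, which is the test the authors name:

    from scipy.stats import chi2_contingency

    table = [[12, 55 - 12],    # AR: diagnosis obtained / not obtained
             [34, 53 - 34]]    # LDCT
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # p < 0.001, matching the abstract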

    Cellular automata approach to three-phase traffic theory

    The cellular automata (CA) approach to traffic modeling is extended to allow for spatially homogeneous steady state solutions that cover a two-dimensional region in the flow-density plane. Hence these models fulfill a basic postulate of the three-phase traffic theory proposed by Kerner. This is achieved by a synchronization distance, within which a vehicle always tries to adjust its speed to that of the vehicle in front. In the CA models presented, the modeling of the free and safe speeds, the slow-to-start rules, as well as some contributions to noise, are based on the ideas of Nagel-Schreckenberg type modeling. It is shown that the proposed CA models can be very transparent and still reproduce the two main types of congested patterns (the general pattern and the synchronized flow pattern) as well as their dependence on the flows near an on-ramp, in qualitative agreement with the recently developed continuum version of the three-phase traffic theory [B. S. Kerner and S. L. Klenov. 2002. J. Phys. A: Math. Gen. 35, L31]. These features are qualitatively different from those in previously considered CA traffic models. The probability of the breakdown phenomenon (i.e., of the phase transition from free flow to synchronized flow) as a function of the flow rate to the on-ramp and of the flow rate on the road upstream of the on-ramp is investigated. The capacity drops at the on-ramp that occur due to the formation of different congested patterns are calculated.
    Comment: 55 pages, 24 figures
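
    The synchronization-distance idea stated above can be sketched as a single-vehicle update rule layered on the Nagel-Schreckenberg scheme the abstract references: within a distance D the car adjusts toward the leader's speed instead of accelerating freely. The form of D(v) and all parameter values below are simplifying assumptions, not the paper's calibrated rules.

    import random

    V_MAX = 5       # maximum speed, in cells per step
    P_NOISE = 0.2   # random-slowdown probability (NaSch noise)
    K_SYNC = 2.5    # synchronization distance coefficient (assumed)

    def update_speed(v, v_lead, gap):
        """One CA speed update for a follower; gap is in cells to the leader."""
        D = K_SYNC * v + 1                 # assumed synchronization distance D(v)
        if gap > D:
            v = min(v + 1, V_MAX)          # free acceleration (NaSch rule)
        else:
            # inside the sync distance: adjust toward the leader's speed
            if v > v_lead:
                v -= 1
            elif v < v_lead:
                v = min(v + 1, V_MAX)
        v = min(v, gap)                    # safety: never move into the leader
        if v > 0 and random.random() < P_NOISE:
            v -= 1                         # random slowdown (NaSch noise)
        return v

    Because speed is held near the leader's over a whole range of gaps, steady states fill a two-dimensional region of the flow-density plane rather than a single curve, which is the three-phase postulate the abstract highlights.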

    Probabilistic Description of Traffic Breakdowns

    We analyze the characteristic features of traffic breakdown. To describe this phenomenon we apply a probabilistic model that regards jam emergence as the formation of a large car cluster on the highway. In these terms the breakdown occurs through the formation of a certain critical nucleus in the metastable vehicle flow, which enables us to confine ourselves to a one-cluster model. We assume, first, that the growth of the car cluster is governed by the attachment of cars to the cluster at a rate mainly determined by the mean headway distance between cars in the vehicle flow and, possibly, also by the headway distance inside the cluster. Second, the cluster dissolution is determined by cars escaping from the cluster at a rate that depends directly on the cluster size. The latter assumption is justified using the available experimental data on the correlation properties of the synchronized mode. We write the appropriate master equation, convert it into the Fokker-Planck equation for the cluster distribution function, and analyze the formation of the critical car cluster as a climb over a certain potential barrier. Further cluster growth irreversibly gives rise to jam formation. Numerical estimates of the obtained characteristics are compared with the experimental data on traffic breakdown. In particular, we conclude that the characteristic intrinsic time scale of the breakdown phenomenon should be about one minute, and we explain why the traffic-volume interval inside which traffic breakdown is observed is fairly wide.
    Comment: RevTeX 4, 14 pages, 10 figures
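
    The master-equation-to-Fokker-Planck step mentioned above is the standard one-step (birth-death) reduction. A minimal sketch in generic notation, with the attachment rate w_+ and escape rate w_- left unspecified since the abstract does not give their functional form:

    % One-step master equation for the cluster-size distribution P(n,t):
    \frac{\partial P(n,t)}{\partial t}
      = w_+(n-1)\,P(n-1,t) + w_-(n+1)\,P(n+1,t)
      - \bigl[ w_+(n) + w_-(n) \bigr] P(n,t)

    % Treating n as continuous (Kramers-Moyal expansion to second order)
    % gives the Fokker-Planck equation for P(n,t):
    \frac{\partial P}{\partial t}
      = -\frac{\partial}{\partial n}\bigl[(w_+ - w_-)\,P\bigr]
      + \frac{1}{2}\,\frac{\partial^2}{\partial n^2}\bigl[(w_+ + w_-)\,P\bigr]

    The drift term w_+ - w_- changes sign at the critical cluster size, which defines the potential barrier whose crossing constitutes the breakdown.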