7,638 research outputs found

    Phase diagram of a solution undergoing inverse melting

    The phase diagram of α-cyclodextrin/water/4-methylpyridine solutions, a system undergoing inverse melting, has been studied by differential scanning calorimetry, rheological methods, and X-ray diffraction. Two different fluid phases separated by a solid region have been observed in the high α-cyclodextrin concentration range (c ≄ 150 mg/ml). As c decreases, the temperature interval in which the solid phase exists shrinks and eventually disappears, and a first-order phase transition is observed between the two different fluid phases. Comment: 4 pages, 5 figures, accepted in Physical Review E (Rapid Communications)

    Pole Dancing: 3D Morphs for Tree Drawings

    We study whether a crossing-free 3D morph between two straight-line drawings of an n-vertex tree can be constructed from a small number of linear morphing steps. We consider both the case in which the two given drawings are two-dimensional and the case in which they are three-dimensional. In the former setting we prove that a crossing-free 3D morph always exists with O(log n) steps, while in the latter Θ(n) steps are always sufficient and sometimes necessary. Comment: Appears in the Proceedings of the 26th International Symposium on Graph Drawing and Network Visualization (GD 2018)
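    As an illustration of the primitive these bounds count, here is a minimal Python sketch (with a hypothetical `linear_morph` helper, not from the paper) of a single linear morphing step: every vertex travels along a straight segment from its current position to its target position at uniform speed.

```python
import numpy as np

def linear_morph(start, end, steps):
    """Yield intermediate drawings of one linear morphing step.

    start, end: dicts mapping vertex id -> 3D position (np.array).
    Each vertex moves along a straight segment at uniform speed,
    which is the step the O(log n) / Theta(n) bounds count.
    """
    for t in np.linspace(0.0, 1.0, steps):
        yield {v: (1 - t) * start[v] + t * end[v] for v in start}

# Example: morph a 3-vertex path between two straight-line drawings.
src = {0: np.array([0., 0., 0.]), 1: np.array([1., 0., 0.]), 2: np.array([2., 0., 0.])}
dst = {0: np.array([0., 0., 0.]), 1: np.array([1., 1., 1.]), 2: np.array([0., 2., 0.])}
for frame in linear_morph(src, dst, steps=5):
    print({v: p.round(2).tolist() for v, p in frame.items()})
```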

    Which diagnostic tests are most useful in a chest pain unit protocol?

    Background: The chest pain unit (CPU) provides rapid diagnostic assessment for patients with acute, undifferentiated chest pain, using a combination of electrocardiographic (ECG) recording, biochemical markers, and provocative cardiac testing. We aimed to identify which elements of a CPU protocol were most diagnostically and prognostically useful.
    Methods: The Northern General Hospital CPU uses 2–6 hours of serial ECG / ST-segment monitoring, CK-MB(mass) on arrival and at least two hours later, troponin T at least six hours after worst pain, and exercise treadmill testing. Data were collected prospectively over an eighteen-month period from patients managed on the CPU. Patients discharged after CPU assessment were invited to attend a follow-up appointment 72 hours later for ECG and troponin T measurement. Hospital records of all patients were reviewed to identify adverse cardiac events over the subsequent six months. Diagnostic accuracy of each test was estimated by calculating sensitivity and specificity for 1) acute coronary syndrome (ACS) with clinical myocardial infarction and 2) ACS with myocyte necrosis. Prognostic value was estimated by calculating the relative risk of an adverse cardiac event following a positive result.
    Results: Of the 706 patients, 30 (4.2%) were diagnosed as ACS with myocardial infarction, 30 (4.2%) as ACS with myocyte necrosis, and 32 (4.5%) suffered an adverse cardiac event. Sensitivities for ACS with myocardial infarction and myocyte necrosis, respectively, were: serial ECG / ST-segment monitoring 33% and 23%; CK-MB(mass) 96% and 63%; troponin T (using a 0.03 ng/ml threshold) 96% and 90%. The only test that added useful prognostic information was exercise treadmill testing (relative risk 6 for cardiac death, non-fatal myocardial infarction, or arrhythmia over six months).
    Conclusion: Serial ECG / ST monitoring, as used in our protocol, adds little diagnostic or prognostic value in patients with a normal or non-diagnostic initial ECG. CK-MB(mass) can rule out ACS with clinical myocardial infarction but not myocyte necrosis (defined as a troponin elevation without myocardial infarction). Using a low threshold for positivity improves the sensitivity of troponin T for myocardial infarction and myocyte necrosis. Exercise treadmill testing predicts subsequent adverse cardiac events.
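    For reference, the summary statistics named above are computed as follows; the counts in the usage example are illustrative only, not the study's full 2×2 tables (which the abstract does not report).

```python
def sensitivity(tp, fn):
    """Proportion of true cases the test flags positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-cases the test correctly flags negative."""
    return tn / (tn + fp)

def relative_risk(events_pos, n_pos, events_neg, n_neg):
    """Risk of an adverse event after a positive result,
    relative to the risk after a negative result."""
    return (events_pos / n_pos) / (events_neg / n_neg)

# Hypothetical counts for illustration.
print(f"sensitivity   = {sensitivity(tp=27, fn=3):.0%}")
print(f"specificity   = {specificity(tn=600, fp=76):.0%}")
print(f"relative risk = {relative_risk(6, 100, 6, 600):.1f}")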

    Natural clustering: the modularity approach

    We show that modularity, a quantity introduced in the study of networked systems, can be generalized and used in the clustering problem as an indicator of the quality of a solution. The introduction of this measure arises very naturally for clustering algorithms that are rooted in statistical mechanics and use the analogy with a physical system. Comment: 11 pages, 5 figures, enlarged version
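    As a concrete reference for the quality indicator mentioned here, the sketch below computes the standard Newman-Girvan modularity of a partition of a weighted graph; the paper's generalized variant may differ in detail.

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity of a partition: the fraction of edge
    weight inside modules minus the fraction expected at random."""
    m2 = A.sum()                      # twice the total edge weight
    deg = A.sum(axis=1)
    Q = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        Q += A[np.ix_(idx, idx)].sum() / m2 - (deg[idx].sum() / m2) ** 2
    return Q

# Two triangles joined by a single edge.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))   # ≈ 0.357
```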

    Delta hedging in discrete time under stochastic interest rate

    We propose a methodology based on the Laplace transform to compute the variance of the hedging error due to time discretization for financial derivatives when the interest rate is stochastic. Our approach can be applied to any affine model for asset prices and to a very general class of hedging strategies, including Delta hedging. We apply it in a two-dimensional market model, obtained by combining the Black-Scholes and Vasicek models, where we compare a strategy that correctly accounts for the variability of interest rates with one that erroneously assumes they are deterministic. We show that the differences between the two strategies can be very significant. The most influential factors are the ratio between the standard deviation of the equity and that of the interest rate, and their correlation. The methodology is also applied to study the Delta hedging strategy for an interest rate option in the Cox-Ingersoll-Ross model, measuring the variance of the hedging error as a function of the rebalancing frequency. We compare the results with those obtained from a classical Monte Carlo simulation.
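    The paper's Laplace-transform method is analytic; the sketch below instead estimates the variance of the discrete-time Delta-hedging error by plain Monte Carlo, in the deterministic-rate Black-Scholes special case, just to make the quantity being studied concrete. Function names and parameter defaults are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price and Delta of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2), norm.cdf(d1)

def hedging_error_var(n_rebal, S0=100.0, K=100.0, r=0.03, sigma=0.2,
                      T=1.0, n_paths=20000, seed=0):
    """Variance of the terminal error of a short-call Delta hedge
    rebalanced n_rebal times (deterministic rate)."""
    rng = np.random.default_rng(seed)
    dt = T / n_rebal
    S = np.full(n_paths, S0)
    price0, delta = bs_call(S, K, r, sigma, T)
    cash = price0 - delta * S             # sale proceeds minus share cost
    for i in range(1, n_rebal + 1):
        Z = rng.standard_normal(n_paths)
        S = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z)
        cash = cash * np.exp(r * dt)
        if i < n_rebal:                   # self-financing rebalancing
            new_delta = bs_call(S, K, r, sigma, T - i * dt)[1]
            cash -= (new_delta - delta) * S
            delta = new_delta
    error = delta * S + cash - np.maximum(S - K, 0.0)
    return error.var()

for n in (4, 12, 52, 252):                # variance shrinks as n grows
    print(n, hedging_error_var(n))
```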

    An Interactive Tool to Explore and Improve the Ply Number of Drawings

    Given a straight-line drawing Γ of a graph G=(V,E), for every vertex v the ply disk D_v is defined as a disk centered at v whose radius is half the length of the longest edge incident to v. The ply number of a given drawing is the maximum number of disks overlapping at some point in ℝ². Here we present a tool to explore and evaluate the ply number of graph drawings with instant visual feedback for the user. We evaluate our methods against an existing ply computation by De Luca et al. [WALCOM'17], reducing the computation time from seconds to milliseconds for given drawings. We thereby contribute to further research on ply by providing an efficient tool for examining graphs extensively through user interaction, along with some automatic features for reducing the ply number. Comment: Appears in the Proceedings of the 25th International Symposium on Graph Drawing and Network Visualization (GD 2017)
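    A naive way to evaluate the ply number directly from its definition: the maximum disk coverage is attained either at a disk center or at an intersection point of two disk boundaries, so it suffices to check those candidate points. A minimal Python sketch, not the tool's (much faster) implementation:

```python
import itertools, math

def ply_number(pos, edges):
    """Naive ply number of a 2D straight-line drawing.

    pos:   dict vertex -> (x, y)
    edges: list of (u, v) pairs
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # radius of the ply disk: half the longest incident edge
    radius = {v: 0.0 for v in pos}
    for u, v in edges:
        d = dist(pos[u], pos[v])
        radius[u] = max(radius[u], d / 2)
        radius[v] = max(radius[v], d / 2)

    # candidate points: disk centers and pairwise circle intersections
    candidates = list(pos.values())
    for u, v in itertools.combinations(pos, 2):
        (x1, y1), (x2, y2) = pos[u], pos[v]
        r1, r2, d = radius[u], radius[v], dist(pos[u], pos[v])
        if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
            continue                       # boundaries do not intersect
        a = (r1**2 - r2**2 + d**2) / (2 * d)
        h = math.sqrt(max(r1**2 - a**2, 0.0))
        mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
        for s in (1, -1):
            candidates.append((mx + s * h * (y2 - y1) / d,
                               my - s * h * (x2 - x1) / d))

    eps = 1e-9
    return max(sum(dist(c, pos[v]) <= radius[v] + eps for v in pos)
               for c in candidates)

# Example: a path on three vertices has ply number 2.
print(ply_number({0: (0, 0), 1: (1, 0), 2: (2, 0)}, [(0, 1), (1, 2)]))
```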

    The discovery of 12-min X-ray pulsations from 1WGA J1958.2+3232

    During a systematic search for periodic signals in a sample of ROSAT PSPC (0.1-2.4 keV) light curves, we discovered 12-min, large-amplitude X-ray pulsations in 1WGA J1958.2+3232, an X-ray source that lies close to the Galactic plane. The energy spectrum is well fit by a power law with a photon index of 0.8, corresponding to an X-ray flux of about 10^-12 erg cm^-2 s^-1. The source is probably a long-period, low-luminosity X-ray pulsar, similar to X Per, or an intermediate polar. Comment: 5 pages (figures included). Accepted for publication in MNRAS
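    A standard technique for this kind of systematic period search is chi-square epoch folding; the sketch below is a generic illustration on simulated data, not the authors' pipeline.

```python
import numpy as np

def epoch_folding_chi2(t, rate, period, n_bins=16):
    """Chi-square of the light curve folded at a trial period against a
    constant source; a strong peak over trial periods flags a pulsation."""
    bins = ((t % period) / period * n_bins).astype(int)
    mean, var = rate.mean(), rate.var()
    chi2 = 0.0
    for b in range(n_bins):
        sel = rate[bins == b]
        if sel.size:
            chi2 += sel.size * (sel.mean() - mean) ** 2 / var
    return chi2

# Simulated light curve with a 720 s (12-min) sinusoidal pulsation.
rng = np.random.default_rng(1)
t = np.arange(0, 20000, 10.0)                    # 10 s time bins
rate = 5 + 2 * np.sin(2 * np.pi * t / 720) + rng.normal(0, 1, t.size)
trials = np.linspace(600, 900, 301)
best = max(trials, key=lambda P: epoch_folding_chi2(t, rate, P))
print(f"best trial period: {best:.1f} s")        # ~720 s
```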

    Identification of network modules by optimization of ratio association

    We introduce a novel method for identifying the modular structure of a network based on the maximization of an objective function: the ratio association. This cost function arises when the community detection problem is described in the probabilistic autoencoder framework. An analogy with kernel k-means methods allows us to develop an efficient optimization algorithm based on the deterministic annealing scheme. The performance of the proposed method is demonstrated on a real data set and on simulated networks.
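    The ratio association objective itself is simple to state: the within-module link weight of each module divided by its size, summed over modules. A minimal sketch of the objective (the deterministic-annealing optimization is not shown):

```python
import numpy as np

def ratio_association(A, labels):
    """Ratio association of a partition: sum over modules of the
    within-module link weight divided by the module size."""
    score = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        score += A[np.ix_(idx, idx)].sum() / idx.size
    return score

# Two triangles joined by one edge: the natural split scores highest.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
print(ratio_association(A, np.array([0, 0, 0, 1, 1, 1])))   # 4.0
print(ratio_association(A, np.array([0, 0, 0, 0, 1, 1])))   # 3.0
```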

    Cost functions for pairwise data clustering

    Cost functions for non-hierarchical pairwise clustering are introduced, in the probabilistic autoencoder framework, by requiring maximal average similarity between the input and the output of the autoencoder. The partition provided by these cost functions identifies clusters with dense connected regions in data space; differences and similarities with respect to a well-known cost function for pairwise clustering are outlined. Comment: 5 pages, 4 figures
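    The abstract does not spell out the exact functional form of the cost; one hedged reading of "average similarity between the input and the output of the autoencoder" is the average similarity of each point to its cluster's representative, as in this illustrative sketch (the paper's cost may differ):

```python
import numpy as np

def autoencoder_similarity(X, labels):
    """Average cosine similarity between each input and its
    'reconstruction', taken here as its cluster centroid -- an
    assumed form, for illustration only."""
    sim = 0.0
    for c in np.unique(labels):
        idx = labels == c
        centroid = X[idx].mean(axis=0)
        sim += (X[idx] @ centroid /
                (np.linalg.norm(X[idx], axis=1) * np.linalg.norm(centroid))).sum()
    return sim / len(X)

X = np.array([[1., 0.], [0.9, 0.1], [0., 1.], [0.1, 0.9]])
print(autoencoder_similarity(X, np.array([0, 0, 1, 1])))   # near 1
print(autoencoder_similarity(X, np.array([0, 1, 0, 1])))   # lower
```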