501 research outputs found

    Finger patterns produced by thermomagnetic instability in superconductors

    A linear analysis of thermal diffusion and Maxwell equations is applied to study the thermomagnetic instability in a type-II superconducting slab. It is shown that the instability can lead to the formation of spatially nonuniform distributions of magnetic field and temperature. The distributions acquire a finger structure, with fingers perpendicular to the screening current direction. We derive the criterion for the instability, and estimate its build-up time and characteristic finger width. The fingering instability emerges when the background electric field is larger than a threshold field, $E > E_c$, and the applied magnetic field exceeds a value $H_{fing} \propto 1/\sqrt{E}$. Numerical simulations support the analytical results, and allow us to follow the development of the fingering instability beyond the linear regime. The fingering instability may be responsible for the nucleation of dendritic flux patterns observed in superconducting films using magneto-optical imaging. Comment: 8 pages, 6 figures, accepted to Phys. Rev. B (new version: minor changes).
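    For reference, the two threshold conditions quoted above can be collected into a single criterion (the abstract gives only the proportionality for $H_{fing}$, not its prefactor):
    $$ E > E_c \qquad \text{and} \qquad H > H_{fing}(E) \propto \frac{1}{\sqrt{E}}. $$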

    An algorithmic approach to the existence of ideal objects in commutative algebra

    The existence of ideal objects, such as maximal ideals in nonzero rings, plays a crucial role in commutative algebra. These are typically justified using Zorn's lemma, and thus pose a challenge from a computational point of view. Giving a constructive meaning to ideal objects is a problem which dates back to Hilbert's program, and today is still a central theme in the area of dynamical algebra, which focuses on the elimination of ideal objects via syntactic methods. In this paper, we take an alternative approach based on Kreisel's no-counterexample interpretation and sequential algorithms. We first give a computational interpretation to an abstract maximality principle in the countable setting via an intuitive, state-based algorithm. We then carry out a concrete case study, in which we give an algorithmic account of the result that in any commutative ring, the intersection of all prime ideals is contained in its nilradical.
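    A minimal sketch of the state-based maximality idea in the countable setting: enumerate the ring elements and greedily grow a finite list of generators, adding an element only if the generated ideal stays proper. This illustrates the general idea, not the paper's algorithm; contains_one is a hypothetical oracle assumed to decide whether 1 lies in a finitely generated ideal (decidable, e.g., over the integers via gcd).

    from math import gcd
    from functools import reduce

    def approximate_maximal_ideal(elements, contains_one, steps=100):
        """Greedily build a finite approximation to a maximal ideal.

        elements     -- generator enumerating the (countably many) ring elements
        contains_one -- oracle: does the ideal generated by a finite list contain 1?
        """
        generators = []                                  # current "state" of the approximation
        for _, a in zip(range(steps), elements()):
            if not contains_one(generators + [a]):       # adding a keeps the ideal proper
                generators.append(a)                     # commit the extension
        return generators                                # no counterexample found within `steps`

    # Example over the integers Z: the ideal <g1,...,gk> contains 1 iff gcd(g1,...,gk) = 1.
    def z_contains_one(gens):
        return bool(gens) and reduce(gcd, (abs(g) for g in gens)) == 1

    def z_elements():
        n = 2
        while True:
            yield n
            n += 1

    print(approximate_maximal_ideal(z_elements, z_contains_one, steps=20))
    # prints the even numbers 2, 4, ..., a finite approximation to the maximal ideal (2)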

    Stress-dependent elastic properties of shales: measurement and modeling

    Despite decades of research, current understanding of the elastic properties of shales is insufficient: it rests on a limited number of observations, because the low permeability of shales makes laboratory testing time-consuming. Though it is well known that shales are highly anisotropic and are commonly assumed to be transversely isotropic (TI) media, few laboratory experiments have been carried out on well-preserved shales to measure the five elastic constants that define a TI medium. Many previous measurements were made without control of pore pressure, which is crucial for the determination of shale elastic properties.
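    As background (standard elasticity, not restated in the abstract): for a transversely isotropic medium with symmetry axis $x_3$, the stiffness matrix in Voigt notation is determined by the five constants $C_{11}$, $C_{13}$, $C_{33}$, $C_{44}$, $C_{66}$ (with $C_{12} = C_{11} - 2C_{66}$):
    $$ C = \begin{pmatrix}
    C_{11} & C_{11}-2C_{66} & C_{13} & 0 & 0 & 0 \\
    C_{11}-2C_{66} & C_{11} & C_{13} & 0 & 0 & 0 \\
    C_{13} & C_{13} & C_{33} & 0 & 0 & 0 \\
    0 & 0 & 0 & C_{44} & 0 & 0 \\
    0 & 0 & 0 & 0 & C_{44} & 0 \\
    0 & 0 & 0 & 0 & 0 & C_{66}
    \end{pmatrix}. $$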

    Dynamics of vortex penetration, jumpwise instabilities and nonlinear surface resistance of type-II superconductors in strong rf fields

    We consider the nonlinear dynamics of a single vortex in a superconductor in a strong rf magnetic field $B_0\sin\omega t$. Using the London theory, we calculate the dissipated power $Q(B_0,\omega)$ and the transient time scales of vortex motion for the linear Bardeen-Stephen viscous drag force, which results in unphysically high vortex velocities during vortex penetration through the oscillating surface barrier. It is shown that penetration of a single vortex through the ac surface barrier always involves penetration of an antivortex and the subsequent annihilation of the vortex-antivortex pairs. Using the nonlinear Larkin-Ovchinnikov (LO) viscous drag force at higher vortex velocities $v(t)$ results in a jump-wise vortex penetration through the surface barrier and a significant increase of the dissipated power. We calculate the effect of dissipation on the nonlinear vortex viscosity $\eta(v)$ and the rf vortex dynamics and show that it can also result in LO-type behavior, instabilities, and thermal localization of penetrating vortex channels. We propose a thermal feedback model of $\eta(v)$, which not only reproduces the LO dependence of $\eta(v)$ for steady-state motion, but also takes into account retardation of the temperature field around a rapidly accelerating vortex and a long-range interaction with the surface. We also address the effect of pinning on the nonlinear rf vortex dynamics and the effect of trapped magnetic flux on the surface resistance $R_s$, calculated as a function of rf frequency and field. It is shown that trapped flux can result in a temperature-independent residual resistance $R_i$ at low $T$, and a hysteretic low-field dependence of $R_i(B_0)$, which can {\it decrease} as $B_0$ is increased, reaching a minimum at $B_0$ much smaller than the thermodynamic critical field $B_c$. Comment: 18 figures.
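    The two drag laws referred to above have the standard forms (the abstract itself does not quote them; $\eta_0$ and the LO critical velocity $v_0$ are material parameters):
    $$ F_{BS} = -\eta_0 v, \qquad F_{LO}(v) = -\frac{\eta_0 v}{1 + v^2/v_0^2}, $$
    so the LO viscosity $\eta(v) = \eta_0/(1 + v^2/v_0^2)$ drops at high vortex velocities, which is what drives the jump-wise penetration described in the abstract.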

    Vortex avalanches and magnetic flux fragmentation in superconductors

    We report results of numerical simulations of non-isothermal dendritic flux penetration in type-II superconductors. We propose a generic mechanism of dynamic branching of a propagating hotspot of a flux-flow/normal state triggered by a local heat pulse. The branching occurs when the flux hotspot reflects from inhomogeneities or from a boundary at which the magnetization currents either vanish or change direction. The hotspot then undergoes a cascade of successive splittings, giving rise to a dissipative dendritic-type flux structure. This dynamic state eventually cools down, turning into a frozen multi-filamentary pattern of magnetization currents. Comment: 4 pages, 4 figures, accepted to Phys. Rev. Lett.

    Effects of Baseline Left Ventricular Hypertrophy and Decreased Renal Function on Cardiovascular and Renal Outcomes in Patients with Fabry Disease Treated with Agalsidase Alfa: A Fabry Outcome Survey Study

    PURPOSE: The initiation of enzyme-replacement therapy prior to the occurrence of substantial and irreversible organ damage in patients with Fabry disease is of critical importance. The Fabry Outcome Survey is an international disease registry of patients with a confirmed diagnosis of Fabry disease. In this study, data from the Fabry Outcome Survey were used to assess the risks for cardiovascular and renal events in patients who received agalsidase alfa treatment. METHODS: Eligible patients were males and females aged ≥18 years with Fabry disease treated with agalsidase alfa. Cardiovascular events included myocardial infarction, left ventricular hypertrophy (LVH), heart failure, arrhythmia, conduction abnormality, and cardiac surgery. Renal events included dialysis, transplantation, and renal failure. Kaplan-Meier curves and log-rank tests were used to compare event-free probabilities and time to first cardiovascular or renal event, from agalsidase alfa initiation to a maximum of 120 months, in patients with LVH versus normal left ventricular mass index (LVMI; ≤50 g/m^2.7 in males and ≤48 g/m^2.7 in females) at treatment initiation (baseline), and in patients with a low estimated glomerular filtration rate (eGFR; <90 mL/min/1.73 m^2) versus normal eGFR at baseline. Multivariate Cox regression analysis was used to examine the association between key study variables and the risks for cardiovascular and renal events. FINDINGS: Among the 560 patients (269 males; 291 females) with available LVMI data, 306 (55%) had LVH and 254 (45%) had normal LVMI at baseline. The risk for a cardiovascular event was higher in the subgroup with LVH versus normal LVMI at baseline (hazard ratio [HR] = 1.57; 95% CI, 1.21-2.05; P < 0.001), whereas the risk for a renal event did not differ significantly between the 2 subgroups (HR = 1.90; 95% CI, 0.94-3.85; P = 0.074). Among the 1093 patients (551 males; 542 females) with available eGFR data, 433 (40%) had a low eGFR and 660 (60%) had a normal eGFR at baseline. The subgroup with a low eGFR at baseline had a significantly higher risk for a cardiovascular event (HR = 1.33; 95% CI, 1.04-1.70; P = 0.021) or a renal event (HR = 5.88; 95% CI, 2.73-12.68; P < 0.001) compared with patients with a normal eGFR at baseline. IMPLICATIONS: In the present study, the presence of LVH and/or reduced renal function at agalsidase alfa initiation was associated with a significantly higher risk for a cardiovascular or renal event, indicating that cardiovascular and renal pathologies in Fabry disease may be inter-related. Early initiation of agalsidase alfa treatment, prior to the onset of severe organ damage, may improve outcomes. ClinicalTrials.gov identifier: NCT03289065.
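    A minimal sketch of the kind of survival analysis described above, using the open-source lifelines package (the study does not name its software; the file and column names below are hypothetical placeholders):

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    # One row per patient; hypothetical columns: follow-up time in months, event
    # indicator (1 = first cardiovascular event), and baseline covariates
    # (lvh, low_egfr, sex encoded as 0/1).
    df = pd.read_csv("registry_export.csv")
    lvh = df[df["lvh"] == 1]
    normal = df[df["lvh"] == 0]

    # Kaplan-Meier event-free curves by baseline LVH status
    kmf_lvh = KaplanMeierFitter().fit(lvh["months"], event_observed=lvh["event"], label="LVH at baseline")
    kmf_ref = KaplanMeierFitter().fit(normal["months"], event_observed=normal["event"], label="normal LVMI")
    kmf_lvh.plot_survival_function()
    kmf_ref.plot_survival_function()

    # Log-rank test comparing time to first event between the two subgroups
    res = logrank_test(lvh["months"], normal["months"],
                       event_observed_A=lvh["event"],
                       event_observed_B=normal["event"])
    print("log-rank p-value:", res.p_value)

    # Multivariate Cox regression: hazard ratios (with 95% CIs) for the covariates
    cph = CoxPHFitter()
    cph.fit(df[["months", "event", "lvh", "low_egfr", "age", "sex"]],
            duration_col="months", event_col="event")
    cph.print_summary()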

    Towards an Axiomatization of Simple Analog Algorithms

    We propose a formalization of analog algorithms, extending the framework of abstract state machines to continuous-time models of computation.

    Interaction of quasilocal harmonic modes and boson peak in glasses

    A direct proportionality relation between the boson peak maximum in glasses, $\omega_b$, and the Ioffe-Regel crossover frequency for phonons, $\omega_d$, is established. For several investigated materials $\omega_b = (1.5\pm 0.1)\,\omega_d$. At the frequency $\omega_d$ the mean free path of the phonons $l$ becomes equal to their wavelength because of strong resonant scattering on quasilocal harmonic oscillators. Above this frequency phonons cease to exist. We prove that the established correlation between $\omega_b$ and $\omega_d$ holds in the general case and is a direct consequence of the bilinear coupling of quasilocal oscillators with the strain field. Comment: RevTex, 4 pages, 1 figure.
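    Written out, the two relations stated above are the Ioffe-Regel condition at the crossover frequency and the observed proportionality to the boson peak position:
    $$ l(\omega_d) = \lambda(\omega_d), \qquad \omega_b = (1.5 \pm 0.1)\,\omega_d. $$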

    Smoothed Complexity Theory

    Smoothed analysis is a new way of analyzing algorithms introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, such as P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems. Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success for the analysis of single algorithms and problems, there is no embedding of smoothed analysis into computational complexity theory, which is necessary to classify problems according to their intrinsic difficulty. We propose a framework for smoothed complexity theory, define the relevant classes, and prove some first hardness results (for bounded halting and tiling) and tractability results (for binary optimization problems, graph coloring, and satisfiability). Furthermore, we discuss extensions and shortcomings of our model and relate it to semi-random models. Comment: to be presented at MFCS 2012.
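    As background (this is Spielman and Teng's definition, not spelled out in the abstract): the smoothed running time of an algorithm $A$ at input length $n$ and perturbation magnitude $\sigma$ is
    $$ T_A^{smooth}(n, \sigma) = \max_{x:\,|x|=n} \; \mathbb{E}_{y \sim \mathrm{pert}_\sigma(x)} \big[ T_A(y) \big], $$
    i.e. a worst case over inputs of the average running time over random perturbations $y$ of $x$.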