
    The Power of Jets: New Clues from Radio Circular Polarization and X-rays

    Jets are ubiquitous in accreting black holes. Often ignored, they may be a major contributor to the emitted spectral energy distribution of sub-Eddington black holes. For example, recent observations of radio-to-X-ray correlations and broadband spectra of X-ray binaries in the low/hard state can be explained by a significant synchrotron contribution from jets to their IR-to-X-ray spectra, as proposed by Markoff, Falcke & Fender (2001). This model can also explain state transitions from low/hard to high/soft states. Relativistic beaming of the jet X-ray emission could lead to the appearance of seemingly super-Eddington X-ray sources in other galaxies. We show that a simple population-synthesis model of X-ray binaries with relativistic beaming can explain the observed distribution of off-nucleus X-ray point sources in nearby galaxies. Specifically, we suggest that the so-called ultra-luminous X-ray sources (ULXs, also IXOs) could well be relativistically beamed microblazars. The same model that explains X-ray binaries also fits low-luminosity AGN (LLAGN), and especially Sgr A* in the Galactic Center. The recent detection of significant circular polarization in AGN radio cores, ranging from bright quasars down to low-luminosity AGN like M81* and Sgr A*, and even in X-ray binaries, now places new constraints on the matter content of such jets. The emerging picture is of powerful jets with a mix of hot and cold matter, a net magnetic flux, and a stable magnetic north pole.
    Comment: to appear in "Lighthouses of the Universe", Springer Verlag, ESO Astrophysics Symposia, Eds: R. Sunyaev, M. Gilfanov, E. Churazov; LaTeX, 8 pages, 5 figures; also available at http://www.mpifr-bonn.mpg.de/staff/hfalcke/publications.html#lighthouse
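    The relativistic beaming invoked for the microblazar interpretation can be made explicit. For a jet component moving with bulk Lorentz factor Γ (speed β) at angle θ to the line of sight, standard beaming theory (a textbook relation, not specific to this paper) gives a Doppler factor δ and a boosted observed flux:

```latex
\delta = \frac{1}{\Gamma\,(1-\beta\cos\theta)},
\qquad
S_{\mathrm{obs}} = \delta^{\,3-\alpha}\, S_{\mathrm{emit}}
```

    For spectral index α ≲ 0 and δ of a few, the observed flux can exceed the intrinsic flux by factors of tens to hundreds, which is how a sub-Eddington X-ray binary viewed close to its jet axis can masquerade as a super-Eddington source.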

    Empirical Risk Minimization for Dynamic Data Structures

    We consider the task of finding a functional relationship in a set of data. Given an appropriate set of functions to choose from, this leads to the minimization of an expected loss, i.e. a risk, with respect to a suitable measure. When the underlying probability distribution is unknown, the empirical risk is an obvious estimator that can be employed for the minimization problem. This empirical risk minimization principle (ERM principle) has good consistency properties in the case of independent and identically distributed observations under only mild assumptions. The theory, together with the structural risk minimization built on it, is the basis for various methods of statistical learning theory, such as Support Vector Machines (SVM). The limiting assumptions of the ERM principle do not permit applying SVMs to data with dependence structures. However, the analysis of dynamical, usually temporal, structures occupies an ever larger place in modern data analysis, so an application of empirical risk minimization to such data is desirable. The extension of the principle to time-dependent data has to incorporate the dynamic structure: the dynamics are not represented directly, but by modeling the errors in the data as a dynamical stochastic process. In this work, the consistency of empirical risk minimization under these more general assumptions is proven using consistency theorems from martingale and, above all, mixingale theory, so that many different dependence structures in the errors are admissible. In addition, an exponential convergence rate is of crucial importance for the application of the ERM principle and for the development of suitable algorithms. Appropriate exponential bounds are proven for both martingale and mixingale structures in the data, guaranteeing fast convergence. Empirical risk minimization thus constitutes a generally valid principle for mixingale and martingale structures as well, and the conceptual, theoretical part of Vapnik's statistical learning theory can be used for dynamic data structures in addition to independent data.
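    The ERM principle described above is simple to state in code. The sketch below is a toy illustration (all names and data here are invented for illustration, not taken from the thesis): over a finite class of threshold classifiers, pick the one minimizing the average 0-1 loss on the sample.

```python
# Toy illustration of the empirical risk minimization (ERM) principle:
# choose, from a hypothesis class, the function minimizing the average
# loss over the observed sample. Data and hypothesis class are invented;
# an SVM would use a richer function class plus regularization.

def empirical_risk(f, data):
    """Average 0-1 loss of classifier f over the sample."""
    return sum(f(x) != y for x, y in data) / len(data)

def erm(hypotheses, data):
    """Return the hypothesis with minimal empirical risk."""
    return min(hypotheses, key=lambda f: empirical_risk(f, data))

# Toy sample of (x, label) pairs: the label is 1 for larger x.
data = [(0.1, 0), (0.3, 0), (0.45, 1), (0.6, 1), (0.8, 1), (0.9, 1)]

# Hypothesis class: threshold classifiers f_t(x) = 1 if x > t else 0.
hypotheses = [lambda x, t=t: int(x > t) for t in (0.2, 0.4, 0.5, 0.7)]

best = erm(hypotheses, data)
best_risk = empirical_risk(best, data)
```

    For i.i.d. samples, the risk of the ERM minimizer converges to the best achievable within the class; the thesis extends exactly this kind of guarantee to errors with martingale or mixingale dependence.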

    Radio Emission as a Test of the Existence of Intermediate Mass Black Holes in Globular Clusters and Dwarf Spheroidal Galaxies

    We take the established relation between black hole mass, X-ray luminosity, and radio luminosity and show that intermediate-mass black holes, such as those predicted to exist at the centers of globular clusters, will be easily identifiable objects in deep radio observations. We show that radio observations will be far more sensitive than any possible X-ray observations. We also discuss the likely optical photometric and spectroscopic appearance of such systems in the event that radio detections are made.
    Comment: 6 pages, no figures, accepted to MNRAS
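    The mass-luminosity relation the argument rests on is commonly written as the "fundamental plane of black hole activity". One frequently quoted form (coefficients taken from the general literature, shown here only for orientation, with luminosities in erg/s and mass in solar masses) is:

```latex
\log L_{\mathrm{R}} \;\approx\; 0.60\,\log L_{\mathrm{X}}
  \;+\; 0.78\,\log M_{\mathrm{BH}} \;+\; \mathrm{const}
```

    Because the predicted radio luminosity rises with black hole mass at fixed X-ray luminosity, an intermediate-mass black hole is comparatively radio-bright, which is why deep radio observations are the more sensitive probe.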

    Millimetre observations of a sub-arcsecond jet from Circinus X-1

    We present results from the first successful millimetre (combined 33 GHz and 35 GHz) observations of the neutron-star X-ray binary Circinus X-1, using the Australia Telescope Compact Array. The source was clearly detected in all three observing epochs. We see strong evidence for a periastron flare beginning at MJD 55519.9 ± 0.04, with estimated peak flux densities of up to 50 mJy, which proceeds to decline over the following four days. We directly resolve jet structures on sub-arcsecond scales. Flux density variability and the distance of nearby components from the core suggest recent shock re-energisation, though we are unable to directly connect this with the observed flare. We suggest that, if the emission is powered by an unseen outflow, then a phase delay exists between flare onset and the subsequent brightening of nearby components, with flows reaching mildly relativistic velocities. Given the positions of the resolved structures, in comparison to past observations of Cir X-1, we find evidence that the jet direction may vary with distance from the core, or that the source's precession parameters have changed.
    Comment: Accepted for publication in MNRAS

    Centronuclear myopathy in Labrador retrievers: a recent founder mutation in the PTPLA gene has rapidly disseminated worldwide

    Centronuclear myopathies (CNM) are inherited congenital disorders characterized by an excessive number of internalized nuclei. In humans, CNM results from ~70 mutations in three major genes from the myotubularin, dynamin and amphiphysin families. Analysis of animal models with altered expression of these genes revealed common defects in all forms of CNM, paving the way for unified pathogenic and therapeutic mechanisms. Despite these efforts, some CNM cases remain genetically unresolved. We previously identified an autosomal recessive form of CNM in French Labrador retrievers from an experimental pedigree, and showed that a loss-of-function mutation in the protein tyrosine phosphatase-like A (PTPLA) gene segregated with CNM. Around the world, client-owned Labrador retrievers with a similar clinical presentation and histopathological changes in muscle biopsies have been described. We hypothesized that these Labradors share the same PTPLA^cnm mutation. Genotyping of an international panel of 7,426 Labradors led to the identification of PTPLA^cnm carriers in 13 countries. Haplotype analysis demonstrated that the PTPLA^cnm allele resulted from a single and recent mutational event that may have rapidly disseminated through the extensive use of popular sires. PTPLA-deficient Labradors will help define the integrated role of PTPLA in the existing CNM gene network. They will be valuable complementary large animal models for testing innovative therapies in CNM.