
    The eROSITA Final Equatorial-Depth Survey (eFEDS) -- Splashback radius of X-ray galaxy clusters using galaxies from HSC survey

    We present splashback radius measurements around the SRG/eROSITA eFEDS X-ray selected galaxy clusters by cross-correlating them with HSC S19A photometric galaxies. The X-ray selection is expected to be less affected by the projection-related systematics that affect optical cluster finder algorithms. We use a nearly volume-limited sample of 109 galaxy clusters selected in the 0.5-2.0 keV band with luminosity $L_X > 10^{43.5}\,{\rm erg\,s^{-1}\,h^{-2}}$ within redshift $z < 0.75$ and obtain measurements of the projected cross-correlation with a signal-to-noise of $17.43$. We model our measurements to infer a three-dimensional profile, find that the steepest slope is sharper than $-3$, and associate this location with the splashback radius. We infer a 3D splashback radius $r_{\rm sp} = 1.45^{+0.30}_{-0.26}\,h^{-1}\,{\rm Mpc}$. We also measure the weak lensing signal of the galaxy clusters and obtain a halo mass $\log[M_{\rm 200m}/h^{-1}M_\odot] = 14.52 \pm 0.06$ using the HSC-S16A shape catalogue data at the median redshift $z = 0.46$ of our cluster sample. We compare our $r_{\rm sp}$ value with the spherical overdensity boundary $r_{\rm 200m} = 1.75 \pm 0.08\,h^{-1}\,{\rm Mpc}$ based on the halo mass, which is consistent within $1.2\sigma$ with the $\Lambda$CDM predictions. Our constraints on the splashback radius, although broad, are the best measurements thus far obtained for an X-ray selected galaxy cluster sample. Comment: 15 pages, 10 figures.
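
    The steepest-slope definition used above can be illustrated numerically. The sketch below is not the authors' measurement pipeline: it evaluates a toy 3D profile, computes its logarithmic slope $d\ln\rho/d\ln r$, and reports the radius of the most negative slope as the splashback radius. The profile form and every parameter value are illustrative assumptions rather than quantities fitted in the paper.

```python
# Minimal sketch (not the authors' pipeline): locate the splashback radius as the
# radius where the logarithmic slope d ln(rho)/d ln(r) of a 3D profile is steepest.
# The toy profile and all parameter values below are illustrative assumptions.
import numpy as np

def toy_profile(r, rho_s=1.0, r_s=0.3, alpha=0.2, r_t=1.4, beta=4.0, rho_o=0.02, s_e=1.5):
    """Einasto-like inner profile times a steep truncation, plus a shallow outer term."""
    einasto = rho_s * np.exp(-(2.0 / alpha) * ((r / r_s) ** alpha - 1.0))
    truncation = 1.0 / (1.0 + (r / r_t) ** beta)
    outer = rho_o * r ** (-s_e)
    return einasto * truncation + outer

r = np.logspace(-1, 1, 500)            # radii in h^-1 Mpc (illustrative range)
log_r, log_rho = np.log(r), np.log(toy_profile(r))
slope = np.gradient(log_rho, log_r)    # d ln(rho) / d ln(r)

r_sp = r[np.argmin(slope)]             # splashback radius: most negative slope
print(f"steepest slope = {slope.min():.2f} at r_sp = {r_sp:.2f} h^-1 Mpc")
```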

    Molecular Hydrogen and Global Star Formation Relations in Galaxies

    (ABRIDGED) We use hydrodynamical simulations of disk galaxies to study relations between star formation and properties of the molecular interstellar medium (ISM). We implement a model for the ISM that includes low-temperature ($T < 10^4\,$K) cooling, directly ties the star formation rate to the molecular gas density, and accounts for the destruction of H2 by an interstellar radiation field from young stars. We demonstrate that the ISM and star formation model simultaneously produces a spatially-resolved molecular-gas surface density Schmidt-Kennicutt relation of the form $\Sigma_{\rm SFR} \propto \Sigma_{\rm Hmol}^{n_{\rm mol}}$ with $n_{\rm mol} \sim 1.4$ independent of galaxy mass, and a total gas surface density -- star formation rate relation $\Sigma_{\rm SFR} \propto \Sigma_{\rm gas}^{n_{\rm tot}}$ with a power-law index that steepens from $n_{\rm tot} \sim 2$ for large galaxies to $n_{\rm tot} \gtrsim 4$ for small dwarf galaxies. We show that deviations from the disk-averaged $\Sigma_{\rm SFR} \propto \Sigma_{\rm gas}^{1.4}$ correlation determined by Kennicutt (1998) owe primarily to spatial trends in the molecular fraction $f_{\rm H_2}$ and may explain observed deviations from the global Schmidt-Kennicutt relation. Comment: Version accepted by ApJ, high-res version available at http://kicp.uchicago.edu/~brant/astro-ph/molecular_ism/rk2007.pd
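
    A power-law index such as $n_{\rm mol} \sim 1.4$ is commonly estimated as the slope of a straight-line fit in log-log space. The sketch below illustrates that step on mock surface densities; the mock data, scatter, and input index are assumptions and do not reproduce the simulation analysis of the paper.

```python
# Minimal sketch (not the simulation analysis): estimate the power-law index n in
# Sigma_SFR ~ Sigma_gas^n by a linear fit in log-log space. The mock surface
# densities, the input index of 1.4, and the scatter are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
log_sigma_gas = rng.uniform(0.5, 2.5, size=400)           # log10 Sigma_gas [Msun / pc^2]
log_sigma_sfr = -3.5 + 1.4 * log_sigma_gas + rng.normal(0.0, 0.2, size=400)

# A degree-1 polynomial fit in log space returns (slope, intercept) = (n, log amplitude).
n_fit, amp_fit = np.polyfit(log_sigma_gas, log_sigma_sfr, deg=1)
print(f"fitted index n = {n_fit:.2f} (input 1.4), normalization = {amp_fit:.2f}")
```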

    Developing Data Stories as Enhanced Publications in Digital Humanities

    This paper discusses the development of data-driven stories and the editorial processes underlying their production. Such ‘data stories’ have proliferated in journalism but are also increasingly developed within academia. Although ‘data stories’ lack a clear definition, there are similarities between the processes that underlie journalistic and academic data stories. However, there are also differences, specifically when it comes to epistemological claims. In this paper, data stories as a phenomenon and their use in journalism and in the Humanities form the context for the editorial protocol developed for CLARIAH Media Suite Data Stories.

    Developing Data Stories in Digital Humanities: Challenges and Protocol

    This article discusses the development of data-driven stories and the editorial processes underlying their production. Such ‘data stories’ have proliferated in journalism but are also increasingly developed within academia. Within CLARIAH, the Common Lab Infrastructure for the Arts and Humanities, we are developing data stories based on analyses of data and metadata available via the Media Suite, an online resource providing access to a wide range of multimedia collections. Although ‘data stories’ lack a clear definition, there are similarities between the processes that underlie journalistic and academic data stories. However, there are also differences, specifically when it comes to epistemological claims. In this article we discuss data stories as a phenomenon and their use in journalism and in the Humanities, based on the three main elements of data stories: data, visualisation, and narration. This provides the context in which we developed an editorial protocol for the development of CLARIAH Media Suite Data Stories, which includes four phases: exploration, research, review, and publication. While exploration focuses on data selection, research focuses on narration. Visualisation plays a role in both of these phases. Review is geared towards quality control, and in the publication phase the data story is published and monitored. By discussing our editorial protocol, we hope to contribute to the debate about how to develop and account for academic data stories.

    Data Stories in CLARIAH: Developing a Research Infrastructure for Storytelling with Heritage and Culture Data

    Online stories, from blog posts to journalistic articles to scientific publications, are commonly illustrated with media (e.g. images, audio clips) or statistical summaries (e.g. tables and graphs). Such “illustrations” are the result of a process of acquiring, parsing, filtering, mining, representing, refining and interacting with data [3]. Unfortunately, such processes are typically taken for granted and seldom mentioned in the story itself. Although a wide variety of interactive data visualisation techniques have recently been developed (see e.g., [6]), in many cases the illustrations in such publications are static; this prevents different audiences from engaging with the data and analyses as they desire. In this paper, we share our experiences with the concept of “data stories”, which tackles both issues, enhancing opportunities for outreach, reporting on scientific inquiry, and FAIR data representation [9]. In journalism, data stories are becoming widely accepted as the output of a process that is in many respects similar to that of a computational scholar: gaining insights by analyzing data sets using (semi-)automated methods and presenting these insights using (interactive) visualisations and other textual outputs based on data [4] [7] [5] [6]. In the context of scientific output, data stories can be regarded as digital “publications enriched with or linking to related research results, such as research data, workflows, software, and possibly connections among them” [1]. However, as infrastructure for (peer-reviewed) enhanced publications is in an early stage of development (see e.g., [2]), scholarly data stories are currently often produced as blog posts discussing a relevant topic. These may be accompanied by illustrations not limited to a single graph or image but characterized by different forms of interactivity: readers can, for instance, change the perspective or zoom level of graphs, or cycle through images or audio clips. Having experimented successfully with various types and uses of data stories in the CLARIAH project, we are working towards a more generic, stable and sustainable infrastructure to create, publish, and archive data stories. This includes providing environments for the reproduction of data stories and the verification of data via “close reading”. From an infrastructure perspective, this involves the provisioning of services for persistent storage of data (e.g. triple stores), data registration and search (registries), data publication (SPARQL endpoints, search APIs), data visualisation, and (versioned) query creation. These services can be used by environments to develop data stories, whether or not they facilitate additional data analysis steps. For data stories that make use of data analysis, for example via Jupyter Notebooks [8], the infrastructure also needs to take computational requirements (load balancing) and restrictions (security) into account. Also, when data sets are restricted for copyright or privacy reasons, authentication and authorization infrastructure (AAI) is required. The large and rich data sets in (European) heritage archives, which are increasingly made interoperable using FAIR principles, are eminently suited as fertile ground for data stories. We therefore hope to present our experiences with data stories, share our strategy for a more generic solution, and receive feedback on shared challenges.
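
    As a rough illustration of the query step described above, the sketch below retrieves data from a SPARQL endpoint inside a Python/Jupyter environment using the SPARQLWrapper library. The endpoint URL, vocabulary, and query are hypothetical placeholders and do not describe the Media Suite's actual services.

```python
# Illustrative sketch only: the kind of versioned query a data story notebook could
# run against a SPARQL endpoint of an infrastructure like the one described above.
# The endpoint URL, graph vocabulary, and query are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON   # pip install sparqlwrapper

endpoint = SPARQLWrapper("https://example.org/sparql")   # hypothetical endpoint
endpoint.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?item ?date
    WHERE {
        ?item dct:date ?date .
    }
    LIMIT 100
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    # Each binding maps the selected variables to {"type": ..., "value": ...} dicts.
    print(binding["item"]["value"], binding["date"]["value"])
```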

    Adipose tissue hyaluronan production improves systemic glucose homeostasis and primes adipocytes for CL 316,243-stimulated lipolysis

    Plasma hyaluronan (HA) increases systemically in type 2 diabetes (T2D), and the HA synthesis inhibitor 4-Methylumbelliferone has been proposed to treat the disease. However, HA is also implicated in normal physiology. Therefore, we generated a Hyaluronan Synthase 2 transgenic mouse line, driven by a tet-response element promoter, to understand the role of HA in systemic metabolism. To our surprise, adipocyte-specific overproduction of HA leads to smaller adipocytes and protects mice from high-fat-high-sucrose-diet-induced obesity and glucose intolerance. Adipocytes also have more free glycerol that can be released upon beta3-adrenergic stimulation. Improvements in glucose tolerance were not linked to increased plasma HA. Instead, an HA-driven systemic substrate redistribution and adipose tissue-liver crosstalk contribute to the systemic glucose improvements. In summary, we demonstrate an unexpected improvement in glucose metabolism as a consequence of HA overproduction in adipose tissue, which argues against the use of systemic HA synthesis inhibitors to treat obesity and T2D.