7,002 research outputs found

    Radiation from the first forming stars

    The evolution of radiation emitted during the dynamical collapse of metal-free protostellar clouds is investigated within a spherically symmetric hydrodynamical scheme that includes the transfer of radiation and the chemistry of the primordial gas. The cloud centre collapses on a time scale of about 10^5-10^6 years, thanks to line cooling from molecular hydrogen (H2). For most of the collapse time, when the evolution proceeds self-similarly, the luminosity slowly rises to about 10^36 erg/s and is essentially due to H2 IR line emission. Later, continuum IR radiation provides an additional contribution, mostly due to the accretion of an infalling envelope onto a small hydrostatic protostellar core that develops in the centre. We follow the beginning of the accretion phase, when the enormous accretion rate (~ 0.1 Msun/yr) produces a very high continuum luminosity of about 10^36 erg/s. Despite the high luminosities, the radiation field is unable to affect the gas dynamics during the collapse and the first phases of accretion, because the opacity of the infalling gas is too small; this is quite different from present-day star formation. We also find that the protostellar evolution is similar among clouds with different initial configurations, including those resulting from 3D cosmological simulations of primordial objects; in particular, the shape of the molecular spectra is quite universal. Finally, we briefly discuss the detectability of this pristine cosmic star formation activity. Comment: 39 pages, 12 figures; revised version with major changes (including title) to appear in MNRAS
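    As a rough order-of-magnitude check on the quoted accretion luminosity (the core mass and radius below are illustrative assumptions, not values from the paper), the standard accretion-luminosity scaling gives the right ballpark:

```latex
L_{\rm acc} \simeq \frac{G M_* \dot{M}}{R_*}
\approx 4\times10^{36}\,\mathrm{erg\,s^{-1}}
\left(\frac{M_*}{5\times10^{-3}\,M_\odot}\right)
\left(\frac{\dot{M}}{0.1\,M_\odot\,\mathrm{yr^{-1}}}\right)
\left(\frac{R_*}{10^{12}\,\mathrm{cm}}\right)^{-1}
```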

    Observing exoplanets from the planet Earth: how our revolution around the Sun affects the detection of 1-year periods

    We analysed a selected sample of exoplanets with orbital periods close to 1 year to study the effects of the spectral window on the data, which are affected by the 1 cycle/year aliasing due to the Earth's motion around the Sun. We pointed out a few cases where a further observational effort would largely improve the reliability of the orbital solutions. Comment: Contribution to the Focus Point on "Highlights of Planetary Science in Italy" edited by P. Cerroni, E. Dotto, P. Paolicchi
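    To illustrate the 1 cycle/year alias discussed here (a minimal sketch with a made-up example period, not the paper's sample), the spectral window mixes a true orbital frequency f with f ± n/365.25 d^-1:

```python
# Minimal illustration of 1 cycle/year aliasing in radial-velocity data.
# The example period is made up; this is not the paper's sample.

F_YEAR = 1.0 / 365.25  # yearly sampling frequency, cycles/day

def alias_periods(true_period_days, n_max=2):
    """Return alias periods (days) of a true orbital period
    for alias orders n = 1..n_max of the yearly sampling pattern."""
    f = 1.0 / true_period_days
    aliases = []
    for n in range(1, n_max + 1):
        for f_alias in (f + n * F_YEAR, abs(f - n * F_YEAR)):
            if f_alias > 0:
                aliases.append(1.0 / f_alias)
    return sorted(aliases)

# A 380-day planet produces aliases near these periods:
print(alias_periods(380.0))  # ~[123, 186, 352, 9410] days
```

    For a true period close to one year, the n = 1 alias falls at a very long period, which is why such orbits are particularly hard to constrain.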

    Massive black hole and gas dynamics in mergers of galaxy nuclei - II. Black hole sinking in star-forming nuclear discs

    Mergers of gas-rich galaxies are key events in the hierarchical build-up of cosmic structures, and can lead to the formation of massive black hole binaries. By means of high-resolution hydrodynamical simulations we consider the late stages of a gas-rich major merger, detailing the dynamics of two circumnuclear discs, and of the hosted massive black holes during their pairing phase. During the merger, gas clumps with masses of a fraction of the black hole mass form because of fragmentation. Such high-density gas is very effective in forming stars, and the most massive clumps can substantially perturb the black hole orbits. Around 10 Myr after the start of the merger a gravitationally bound black hole binary forms at a separation of a few parsecs, and soon after, the separation falls below our resolution limit of 0.39 pc. At the time of binary formation the original discs are almost completely disrupted by SNa feedback, while on pc scales the residual gas settles in a circumbinary disc with mass ~10^5 Msun. We also verify that the binary dynamics is robust against the details of the SNa feedback employed in the simulations, while the gas dynamics is not. We finally highlight the importance of the SNa time-scale for our results. Comment: 10 pages, 11 figures; MNRAS, in press
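    For reference (a standard criterion in this literature, not spelled out in the abstract), the pair is called a gravitationally bound binary once its relative orbital energy turns negative:

```latex
E = \tfrac{1}{2}\,\mu\,v_{\rm rel}^{2} - \frac{G M_{1} M_{2}}{\Delta r} < 0,
\qquad \mu \equiv \frac{M_{1} M_{2}}{M_{1} + M_{2}}
```

    Roughly, this condition is met once the gas and stellar mass enclosed within the separation Δr drops below the combined black hole mass, broadly consistent with the few-parsec binary-formation separation quoted above.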

    Multianalyte LC-MS-based methods in doping control: what are the implications for doping athletes?

    Over the last 50 years, the list of doping substances and methods has been progressively expanding, being regularly reviewed by the international antidoping authorities (formerly the Medical Commission of the International Olympic Committee and, following its constitution in 1999, the World Anti-Doping Agency [WADA]). New substances/classes of substances have been periodically included in the list, keeping pace with more advanced and sophisticated doping trends. At present, and apart from the prohibited performance-enhancing and masking methods (e.g., blood transfusions and tampering strategies), the list comprises several hundred biologically active substances, with broad differences in their physicochemical properties (i.e., molecular weight, polarity and acid-base properties) [1].

    As a consequence, the 'one class - one procedure' approach, which had been followed by nearly all accredited antidoping laboratories worldwide until the turn of the millennium, is no longer sustainable. The need to minimize the overall number of independent analytical procedures and, in parallel, to reduce the analytical costs stimulated the development of multitargeted methods, aimed at increasing the overall ratio of 'target analytes : procedure' [2-6]. This evolution has not always been a straightforward process. The need to comply with the WADA technical requirements (both in terms of identification criteria and of minimum required performance limits [7,8]) and with the reduction of the reporting time (a constraint that becomes even more critical during international sport events, when the daily workload also drastically increases) has imposed a thorough re-planning of the analytical procedures. The development of an antidoping analytical method requires appropriate knowledge not only of the biophysicochemical properties of the target analyte, but also of its pharmacokinetic (PK) profile.

    Historically, immunological methods and GC-based techniques were applied in antidoping science as the preferred screening methods for the detection of prohibited substances, which were originally limited to nonendogenous stimulants and narcotics. In the 1980s, GC-MS became the reference analytical platform for the detection and quantification of the majority of the low molecular weight doping substances [3-6]. In the following two decades, with the inclusion in the Prohibited List of new classes of low molecular weight, hydrophilic, thermolabile, nonvolatile analytes (including, but not limited to, glucocorticoids and designer steroids) and, simultaneously, of peptide hormones, scientists were obliged to design, develop, validate and apply techniques based on LC-MS/MS

    A tabu search heuristic based on k-diamonds for the weighted feedback vertex set problem

    Given an undirected and vertex-weighted graph G = (V, E, w), the Weighted Feedback Vertex Problem (WFVP) consists of finding a subset F ⊆ V of vertices of minimum weight such that each cycle in G contains at least one vertex in F. The WFVP on general graphs is known to be NP-hard, and to be polynomially solvable on some special classes of graphs (e.g., interval graphs, co-comparability graphs, diamond graphs). In this paper we introduce an extension of diamond graphs, namely the k-diamond graphs, and give a dynamic programming algorithm to solve the WFVP in linear time on this class of graphs. Besides settling an open question, this algorithm allows an efficient exploration of a neighbourhood structure that can be defined by using such a class of graphs. We use this neighbourhood structure inside our Iterated Tabu Search heuristic. Our extensive experiments show the effectiveness of this heuristic in improving the solution provided by a 2-approximate algorithm for the WFVP on general graphs.
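    As a concrete illustration of the problem definition (a minimal sketch, not the authors' k-diamond dynamic program or tabu search), F is a feasible feedback vertex set exactly when G - F is acyclic, which for an undirected graph can be checked with one union-find pass over the surviving edges:

```python
# Feasibility test for the Weighted Feedback Vertex (Set) Problem:
# F is feasible iff deleting F from G leaves an acyclic graph (a forest).
# Minimal sketch; not the k-diamond dynamic program from the paper.

def is_feedback_vertex_set(n, edges, F):
    """n: number of vertices (0..n-1); edges: list of (u, v) pairs;
    F: set of vertices proposed for removal."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in F or v in F:
            continue  # edge removed together with its endpoint
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge closes a cycle in G - F
        parent[ru] = rv
    return True

def weight(F, w):
    """Objective value of a candidate solution."""
    return sum(w[v] for v in F)

# Tiny example: a triangle plus a pendant vertex.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
w = [4.0, 1.0, 3.0, 2.0]
print(is_feedback_vertex_set(4, edges, {1}), weight({1}, w))  # True 1.0
```

    A tabu search such as the one proposed would explore neighbourhoods over candidate sets like F, accepting the best non-tabu feasible move at each iteration.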

    Short term ozone effects on morbidity for the city of Milano, Italy, 1996-2003.

    In this paper, we explore a range of concerns that arise in measuring short-term ozone effects on health. In particular, we tackle the problem of measuring exposure using alternative daily measures of ozone derived from hourly concentrations. We adopt the exposure paradigm of Chiogna and Bellini (2002), and we compare its performance with respect to traditional exposure measures by exploiting model selection. To investigate the stability of the model selection, we then apply the idea of bootstrapping the modelling process.
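    A minimal sketch of the "bootstrapping the modelling process" idea (synthetic data, simplified exposure measures and a plain resampling scheme assumed here; not the study's models, which would also need to respect temporal dependence): resample the daily series, refit a count regression for each candidate exposure metric, and record how often each metric is selected.

```python
# Sketch: bootstrap the model-selection step by resampling days, refitting a
# Poisson regression per candidate ozone exposure measure, and counting which
# measure wins by AIC. Synthetic data; not the study's actual models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_days = 1000
o3_max = rng.gamma(4.0, 20.0, n_days)              # daily 1-h maximum (ug/m3)
o3_8h = 0.8 * o3_max + rng.normal(0, 8, n_days)    # daily max 8-h mean
counts = rng.poisson(np.exp(1.0 + 0.004 * o3_8h))  # daily hospital admissions

def aic_for(exposure, y):
    X = sm.add_constant(exposure)
    return sm.GLM(y, X, family=sm.families.Poisson()).fit().aic

wins = {"1h_max": 0, "8h_mean": 0}
for _ in range(200):
    idx = rng.integers(0, n_days, n_days)  # naive bootstrap resample of days
    if aic_for(o3_max[idx], counts[idx]) < aic_for(o3_8h[idx], counts[idx]):
        wins["1h_max"] += 1
    else:
        wins["8h_mean"] += 1
print(wins)  # selection frequencies indicate the stability of the choice
```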

    Transcriptomic effects of the non-steroidal anti-inflammatory drug Ibuprofen in the marine bivalve Mytilus galloprovincialis Lam.

    The transcriptomic effects of Ibuprofen (IBU) in the digestive gland tissue of Mytilus galloprovincialis Lam. specimens exposed to low environmental concentrations (250 ng L-1) are presented. Using a 1.7 K feature cDNA microarray along with linear models and empirical Bayes statistical methods, 225 differentially expressed genes were identified in mussels treated with IBU across a 15-day period. Transcriptional dynamics were typical of an adaptive response, with a peak of gene expression change at day 7 (177 features, representing about 11% of the sequences available for analysis) and an almost full recovery at the end of the exposure period. Functional genomics by means of Gene Ontology term analysis unravelled typical mussel stress responses, i.e. aminoglycan (chitin) metabolic processes, but also more specific effects such as the regulation of NF-kappa B transcription factor activity.
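    As a simplified analogue of the differential-expression step (ordinary per-gene t-tests with Benjamini-Hochberg correction on synthetic data; the paper itself used linear models with empirical Bayes moderation, in the style of limma):

```python
# Simplified differential-expression screen: per-gene t-tests between control
# and IBU-exposed arrays, with Benjamini-Hochberg FDR control.
# Synthetic data; the study used moderated (empirical Bayes) statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_ctrl, n_trt = 1700, 5, 5              # ~1.7 K features
ctrl = rng.normal(0.0, 1.0, (n_genes, n_ctrl))   # log2 ratios, controls
trt = rng.normal(0.0, 1.0, (n_genes, n_trt))
trt[:100] += 1.5                                 # 100 truly responsive genes

t, p = stats.ttest_ind(trt, ctrl, axis=1)

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of discoveries at FDR level alpha."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = pvals[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

print("differentially expressed:", benjamini_hochberg(p).sum())
```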

    Differential neural dynamics underlying pragmatic and semantic affordance processing in macaque ventral premotor cortex

    Premotor neurons play a fundamental role in transforming physical properties of observed objects, such as size and shape, into motor plans for grasping them, hence contributing to "pragmatic" affordance processing. Premotor neurons can also contribute to "semantic" affordance processing, as they can discharge differently even to pragmatically identical objects depending on their behavioural relevance for the observer (e.g., edible vs. inedible objects). Here, we compared the responses of monkey ventral premotor area F5 neurons tested during pragmatic (PT) or semantic (ST) visuomotor tasks. Object presentation responses in ST showed shorter latency and lower object selectivity than in PT. Furthermore, we found a difference between a transient representation of semantic affordances and a sustained representation of pragmatic affordances at both the single-neuron and population levels. Indeed, responses in ST returned to baseline within 0.5 s, whereas in PT they showed the typical sustained visual-to-motor activity during Go trials. In contrast, during No-go trials, the time course of pragmatic and semantic information processing was similar. These findings suggest that the premotor cortex generates different dynamics depending on the pragmatic and semantic information provided by the context in which the to-be-grasped object is presented.
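    One conventional way to quantify a latency difference like the one reported here (an illustrative sketch on synthetic spike counts; not the authors' analysis pipeline) is to find the first post-stimulus time bin where the trial-averaged firing rate exceeds the baseline mean by a fixed number of standard deviations:

```python
# Illustrative response-latency estimate from a trial-averaged PSTH:
# first post-stimulus bin exceeding baseline mean + 3 SD.
# Synthetic spike counts; not the study's actual analysis.
import numpy as np

rng = np.random.default_rng(1)
bin_ms, n_trials, n_bins = 10, 40, 120           # 1.2 s of 10-ms bins
stim_bin = 30                                    # stimulus onset at 300 ms
rate = np.full(n_bins, 5.0)                      # baseline: 5 spikes/s
rate[stim_bin + 8:] = 25.0                       # response from ~80 ms after onset
counts = rng.poisson(rate * bin_ms / 1000.0, (n_trials, n_bins))

psth = counts.mean(axis=0) / (bin_ms / 1000.0)   # spikes/s per bin
base_mu = psth[:stim_bin].mean()
base_sd = psth[:stim_bin].std(ddof=1)
threshold = base_mu + 3.0 * base_sd

above = np.nonzero(psth[stim_bin:] > threshold)[0]
latency_ms = above[0] * bin_ms if above.size else None
print("estimated latency:", latency_ms, "ms after stimulus onset")
```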

    The co-evolutionary relationship between digitalization and organizational agility: Ongoing debates, theoretical developments and future research perspectives

    This study is the first to provide a systematic review of the literature focused on the relationship between digitalization and organizational agility (OA). It applies the bibliographic coupling method to 171 peer-reviewed contributions published by 30 June 2021. It uses the digitalization perspective to investigate the enablers, barriers and benefits of processes aimed at providing firms with the agility required to effectively face increasingly turbulent environments. Three different, though interconnected, thematic clusters are discovered and analysed, respectively focusing on big-data analytic capabilities as crucial drivers of OA, the relationship between digitalization and agility at the supply chain level, and the role of information technology capabilities in improving OA. By adopting a dynamic capabilities perspective, this study overcomes the traditional view, which mainly considers digital capabilities as enablers of OA rather than as possible outcomes. Our findings reveal that, in addition to being complex, the relationship between digitalization and OA is bidirectional. This study also identifies extant research gaps and develops 13 original research propositions on possible future research pathways and new managerial solutions.
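    The bibliographic coupling step mentioned here can be sketched as follows (a minimal illustration with made-up records; not the authors' dataset or software): two papers are coupled when their reference lists overlap, and the overlap size gives the edge weight used to cluster the field.

```python
# Bibliographic coupling: papers citing the same references are linked,
# with coupling strength = size of the shared-reference set.
# Made-up records for illustration only.
from itertools import combinations

references = {
    "paper_A": {"ref1", "ref2", "ref3", "ref4"},
    "paper_B": {"ref2", "ref3", "ref5"},
    "paper_C": {"ref6", "ref7"},
}

coupling = {
    (p, q): len(references[p] & references[q])
    for p, q in combinations(references, 2)
    if references[p] & references[q]
}
print(coupling)  # {('paper_A', 'paper_B'): 2}
```

    Clustering the resulting weighted graph is what yields thematic clusters like the three this study analyses.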

    Constitutional Opportunities and Risks of AI in the law-making process

    This paper explores, from an interdisciplinary perspective, the constitutional opportunities and risks linked to the use of AI in the legislative process. The analysis distinguishes three possible uses of AI by the legislator (assistive, human-enhancing and decision-making), highlighting their different constitutional implications. Opportunities and risks are then discussed with reference to specific use cases, identifying possible safeguards for the constitutional principles at stake. The Authors argue that assistive AI, provided the necessary risk-mitigation measures are adopted, is less problematic than commonly believed, whereas AI augmenting MPs' abilities or making decisions tends to be genuinely incompatible with the fundamental constitutional principles of the legislative process.