
    The Dawes Review 8: Measuring the Stellar Initial Mass Function

    The birth of stars and the formation of galaxies are cornerstones of modern astrophysics. While much is known about how galaxies globally, and their stars individually, form and evolve, one fundamental property that affects both remains elusive: the birth mass distribution of stars, referred to as the stellar initial mass function (IMF). This is problematic because the IMF is a key tracer of the physics of star formation, underpinning almost all of the unknowns in galaxy and stellar evolution, and it is perhaps the greatest source of systematic uncertainty in both fields. The past decade has seen a growing number and variety of methods for measuring or inferring the shape of the IMF, along with progressively more detailed simulations, paralleled by refinements in the way the concept of the IMF is applied or conceptualised on different physical scales. This range of approaches and evolving definitions of the quantity being measured has in turn led to conflicting conclusions regarding whether or not the IMF is universal. Here I review and compare the growing wealth of approaches to understanding this fundamental property that defines so much of astrophysics. I summarise the observational measurements from stellar analyses, extragalactic studies and cosmic constraints, and highlight the importance of considering potential IMF variations, reinforcing the need for measurements to quantify their scope and uncertainties carefully in order for this field to progress. I present a new framework to aid the discussion of the IMF and to promote clarity in the further development of this fundamental field.
    Comment: Accepted for publication in PASA; 52 pages, 10 figures. A bug in pasa-mnras.bst causes references beginning with lower-case letters (e.g., "de", "van") to be placed at the end of the reference list rather than alphabetically; kind and skilled people are encouraged to correct this and share the fix with the PASA editor.
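
    As a concrete illustration of the "canonical" IMF the review refers to (a sketch under stated assumptions, not material from the review itself), the snippet below evaluates the widely used Kroupa (2001) broken power-law form and computes the number fraction of stars born above 8 M_sun; the break masses and slopes are the standard published values, and the normalisation is arbitrary.

        import numpy as np

        # Kroupa (2001) broken power law xi(m) ~ m^(-alpha): slopes 0.3, 1.3 and 2.3
        # below 0.08 M_sun, between 0.08 and 0.5 M_sun, and above 0.5 M_sun.
        # The prefactors keep xi(m) continuous across the break masses.
        def kroupa_xi(m):
            """Un-normalised number of stars per unit mass interval at mass m [M_sun]."""
            m = np.asarray(m, dtype=float)
            return np.where(m < 0.08, m ** -0.3,
                   np.where(m < 0.5, 0.08 * m ** -1.3,      # matches segment 1 at 0.08
                            0.08 * 0.5 * m ** -2.3))        # matches segment 2 at 0.5

        # Number fraction of stars born massive enough (>= 8 M_sun) to leave
        # neutron stars or black holes, integrating over 0.01-120 M_sun.
        m = np.logspace(np.log10(0.01), np.log10(120.0), 20000)
        xi = kroupa_xi(m)
        frac = np.trapz(xi[m >= 8.0], m[m >= 8.0]) / np.trapz(xi, m)
        print(f"number fraction of stars with m >= 8 M_sun: {frac:.4%}")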

    Engineering simulations for cancer systems biology

    Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated, and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process that clearly separates scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models as a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures that are affected by tumours, and bridging this gap requires substantial computational resources. We present concurrent programming as a technology to link scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support the development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures, modelling cell behaviour, interactions and response to therapeutic interventions.
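
    A minimal, illustrative sketch of the scale-linking concurrency idea described above (the cell model, parameters and update rule are all hypothetical, not the authors' framework): each cell updates concurrently against a frozen tissue-scale signal, and the tissue scale is re-aggregated from the cells between synchronised time steps.

        from concurrent.futures import ThreadPoolExecutor
        from dataclasses import dataclass
        import random

        @dataclass
        class Cell:
            signal: float = 0.0  # internal signalling state (arbitrary units)

            def step(self, tissue_signal: float) -> float:
                """Relax toward the tissue-level signal with cell-intrinsic noise."""
                self.signal += 0.1 * (tissue_signal - self.signal)
                self.signal += random.gauss(0.0, 0.01)
                return self.signal

        def simulate(n_cells: int = 1000, n_steps: int = 50) -> float:
            cells = [Cell() for _ in range(n_cells)]
            tissue_signal = 1.0  # tissue-scale boundary condition
            with ThreadPoolExecutor() as pool:
                for _ in range(n_steps):
                    # cells update concurrently against a frozen tissue state ...
                    outputs = list(pool.map(lambda c: c.step(tissue_signal), cells))
                    # ... then the tissue scale is updated from the aggregate
                    tissue_signal = 0.5 * tissue_signal + 0.5 * sum(outputs) / n_cells
            return tissue_signal

        print(f"final tissue-scale signal: {simulate():.3f}")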

    Testing the Universality of the Stellar IMF with Chandra and HST

    The stellar initial mass function (IMF), which is often assumed to be universal across unresolved stellar populations, has recently been suggested to be "bottom-heavy" for massive ellipticals. In these galaxies, the prevalence of gravity-sensitive absorption lines (e.g. Na I and Ca II) in their near-IR spectra implies an excess of low-mass ($m \le 0.5\,M_\odot$) stars over that expected from a canonical IMF observed in low-mass ellipticals. A direct extrapolation of such a bottom-heavy IMF to high stellar masses ($m \ge 8\,M_\odot$) would lead to a corresponding deficit of neutron stars and black holes, and therefore of low-mass X-ray binaries (LMXBs), per unit near-IR luminosity in these galaxies. Peacock et al. (2014) searched for evidence of this trend and found that the observed number of LMXBs per unit $K$-band luminosity ($N/L_K$) was nearly constant. We extend this work using new and archival Chandra X-ray Observatory (Chandra) and Hubble Space Telescope (HST) observations of seven low-mass ellipticals where $N/L_K$ is expected to be the largest, and compare these data with a variety of IMF models to test which are consistent with the observed $N/L_K$. We reproduce the result of Peacock et al. (2014), strengthening the constraint that the slope of the IMF at $m \ge 8\,M_\odot$ must be consistent with a Kroupa-like IMF. We construct an IMF model that is a linear combination of a Milky Way-like IMF and a broken power-law IMF, with a steep slope ($\alpha_1 = 3.84$) for stars $< 0.5\,M_\odot$ (as suggested by near-IR indices) that flattens out ($\alpha_2 = 2.14$) for stars $> 0.5\,M_\odot$, and discuss its wider ramifications and limitations.
    Comment: Accepted for publication in ApJ; 7 pages, 2 figures, 1 table.
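
    The composite model described in the last sentence can be sketched as follows (a hedged illustration: the quoted slopes $\alpha_1 = 3.84$ and $\alpha_2 = 2.14$ come from the abstract, but the mixing weights and normalisation scheme are illustrative choices, not the paper's fitted values).

        import numpy as np

        def broken_power_law(m, alphas, breaks):
            """Un-normalised xi(m) for a piecewise power law, continuous at the breaks."""
            coeffs = [1.0]
            for mb, a_lo, a_hi in zip(breaks, alphas[:-1], alphas[1:]):
                coeffs.append(coeffs[-1] * mb ** (a_hi - a_lo))
            idx = np.searchsorted(breaks, m)  # which segment each mass falls in
            return np.take(coeffs, idx) * m ** (-np.take(alphas, idx))

        m = np.logspace(np.log10(0.08), np.log10(100.0), 20000)
        kroupa = broken_power_law(m, (1.3, 2.3), (0.5,))          # Milky Way-like
        bottom_heavy = broken_power_law(m, (3.84, 2.14), (0.5,))  # slopes quoted above

        def mix(f):
            """Linear combination, each component first normalised to unit number."""
            k = kroupa / np.trapz(kroupa, m)
            b = bottom_heavy / np.trapz(bottom_heavy, m)
            return (1.0 - f) * k + f * b

        # A steeper low-mass segment shrinks the m >= 8 M_sun tail that supplies
        # the neutron stars and black holes powering the LMXBs discussed above.
        for f in (0.0, 0.5, 1.0):
            xi = mix(f)
            print(f"f = {f:.1f}: N(m >= 8 M_sun) fraction = "
                  f"{np.trapz(xi[m >= 8.0], m[m >= 8.0]):.5f}")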

    A critical investigation of the Osterwalder business model canvas: an in-depth case study

    Although the Osterwalder business model canvas (BMC) is used by professionals worldwide, it has not yet been subject to a thorough investigation in the academic literature. In this first contribution we present the results of an intensive, interactive process of data analysis, visual synthesis and textual rephrasing to gain insight into the business model of a single case (health television). The textual and visual representation of a business model needs to be consistent and powerful. Therefore, we start from the total value per customer segment. Besides the offer (or core value), additional value is created through customer-related activities. Understanding activities on both the strategic and tactical levels reveals more insight into the total value creation. Moreover, value elements for one customer segment can induce value for others. The interaction between value for customer segments and activities results in a powerful, customer-value-centred business model representation. Total value to customers generates activities and costs on the one hand and a revenue model on the other. Gross margins and sales volumes explain how value for customers contributes to profit. Another main challenge in business model mapping is denominating the critical resources behind the activities. The Osterwalder business model canvas lacks consistency and power owing to many overlaps, which in turn are caused by its fixed architecture; the latter too easily turns mapping into a filling-in exercise. Through its business model representation a company should first of all gain a thorough understanding of the model; only then can it evaluate the model and consider adaptations.

    UK energy in a global context: synthesis report

    No description supplied.

    Designing algorithms to aid discovery by chemical robots

    Recently, automated robotic systems have become very efficient, thanks to improved coupling between sensor systems and algorithms, the latter gaining significance with the increase in computing power over the past few decades. However, intelligent automated chemistry platforms for discovery-orientated tasks need to be able to cope with the unknown, which is a profoundly hard problem. In this Outlook, we describe how recent advances in the design and application of algorithms, coupled with the increased amount of chemical data available and with automation and control systems, may allow more productive chemical research and the development of chemical robots able to target discovery. This is shown through examples of workflow and data processing with automation and control, and through the use of both well-established and cutting-edge algorithms, illustrated with recent studies in chemistry. Finally, several algorithms are presented in relation to chemical robots and chemical intelligence for knowledge discovery.
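
    As a toy example of the explore/exploit logic behind such discovery-orientated platforms (entirely hypothetical: a standard UCB1 bandit choosing which candidate reaction condition a robot should run next, with made-up yield distributions standing in for real experiments):

        import math
        import random

        TRUE_MEAN_YIELD = [0.20, 0.35, 0.50, 0.45]  # hidden ground truth per condition

        def run_experiment(condition):
            """Stand-in for dispatching one robotic experiment and measuring yield."""
            return min(1.0, max(0.0, random.gauss(TRUE_MEAN_YIELD[condition], 0.1)))

        def ucb_campaign(n_runs=100):
            counts = [0] * len(TRUE_MEAN_YIELD)
            sums = [0.0] * len(TRUE_MEAN_YIELD)
            for t in range(1, n_runs + 1):
                if t <= len(TRUE_MEAN_YIELD):
                    choice = t - 1  # try every condition once first
                else:
                    # UCB1 score: empirical mean yield + exploration bonus
                    choice = max(range(len(counts)),
                                 key=lambda i: sums[i] / counts[i]
                                 + math.sqrt(2.0 * math.log(t) / counts[i]))
                measured = run_experiment(choice)
                counts[choice] += 1
                sums[choice] += measured
            return counts

        # Most runs should concentrate on the best conditions (indices 2 and 3).
        print("experiments allocated per condition:", ucb_campaign())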

    Detection of Signals from Cosmic Reionization using Radio Interferometric Signal Processing

    Observations of the H I 21 cm transition line promise to be an important probe into the cosmic dark ages and the epoch of reionization. One of the challenges for the detection of this signal is the accuracy of foreground source removal. This paper investigates the extragalactic point source contamination and how accurately the bright sources ($\gtrsim 1$ Jy) must be removed in order to reach the desired RMS noise and detect the 21 cm transition line. Here, we consider position and flux errors in the global sky model for these bright sources, as well as frequency-independent residual calibration errors; the synthesized beam is the only frequency-dependent term included. This work determines the level of accuracy required of the calibration and source removal schemes, and puts forward constraints for the design of cosmic reionization data reduction schemes for upcoming low-frequency arrays like the MWA, PAPER, etc. We show that in order to detect the reionization signal, the bright sources need to be removed from the data sets with a positional accuracy of $\sim 0.1$ arcsecond. Our results also demonstrate that efficient foreground source removal strategies can only tolerate a frequency-independent, antenna-based mean residual calibration error of $\lesssim 0.2$% in amplitude or $\lesssim 0.2$ degree in phase, if these errors are constant over each day of observation (6 hours). In future papers we will extend this analysis to the power-spectral domain and include frequency-dependent calibration errors and direction-dependent errors (ionosphere, primary beam, etc.).
    Comment: Accepted by ApJ; 12 pages, 10 figures.
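
    A back-of-the-envelope sketch of the positional-accuracy requirement quoted above: subtracting a point source of flux S at a position off by $\Delta\theta$ leaves a residual visibility of amplitude $S\,|1 - e^{2\pi i u \Delta\theta}|$ on a baseline of length $u$ wavelengths. The baseline length and frequency below are illustrative values, not the paper's specific configuration.

        import numpy as np

        ARCSEC = np.pi / (180.0 * 3600.0)  # radians per arcsecond

        def residual_fraction(baseline_m, freq_hz, dtheta_arcsec):
            """Fractional flux left after subtracting a mispositioned point source."""
            lam = 3.0e8 / freq_hz                            # wavelength [m]
            u = baseline_m / lam                             # baseline [wavelengths]
            dphi = 2.0 * np.pi * u * dtheta_arcsec * ARCSEC  # phase error [rad]
            return abs(1.0 - np.exp(1j * dphi))

        # e.g. a 3 km baseline at 150 MHz; compare the ~0.1 arcsec accuracy above
        for dtheta in (1.0, 0.1, 0.01):
            r = residual_fraction(3000.0, 150e6, dtheta)
            print(f"offset {dtheta:5.2f} arcsec -> residual flux fraction {r:.2e}")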