
    Substellar Objects in Nearby Young Clusters VII: The substellar mass function revisited

    The abundance of brown dwarfs (BDs) in young clusters is a diagnostic of star formation theory. Here we revisit the determination of the substellar initial mass function (IMF), based on a comparison between NGC1333 and IC348, two clusters in the Perseus star-forming region. We derive their mass distributions for a range of model isochrones, varying distances, extinction laws, and ages, with comprehensive assessments of the uncertainties. We find that the choice of isochrone and of the other parameters has significant effects on the results; we therefore caution against comparing IMFs obtained using different approaches. For NGC1333, we find that the star/BD ratio R lies between 1.9 and 2.4 for all plausible scenarios, consistent with our previous work. For IC348, R lies between 2.9 and 4.0, suggesting that previous studies have overestimated this value. Thus, the star-forming process generates about 2.5-5 substellar objects per 10 stars. The derived star/BD ratios correspond to a slope of the power-law mass function of alpha = 0.7-1.0 over the 0.03-1.0 Msol mass range. The median mass in these clusters - the typical stellar mass - is between 0.13 and 0.30 Msol. Assuming that NGC1333 is at a shorter distance than IC348, we find a significant difference in the cumulative distribution of masses between the two clusters, resulting from an overabundance of very low mass objects in NGC1333. Gaia astrometry will constrain the cluster distances better and should lead to a more definitive conclusion. Furthermore, the ratio R is somewhat larger in IC348 than in NGC1333, although this difference is still within the margins of error. Our results indicate that environments with higher object density may produce a larger fraction of very low mass objects, in line with predictions for brown dwarf formation through gravitational fragmentation of filaments falling into a cluster potential.
    Comment: 16 pages, 4 figures, accepted for publication in Ap
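    A rough way to see how the quoted star/BD ratio R maps onto the power-law slope alpha is to integrate dN/dm ~ m**(-alpha) above and below the hydrogen-burning limit. The sketch below does this in Python; the limit of 0.075 Msol and the 0.03-1.0 Msol bounds are assumptions for illustration, not the authors' exact procedure.

        import numpy as np
        from scipy.integrate import quad

        def star_bd_ratio(alpha, m_min=0.03, m_hbl=0.075, m_max=1.0):
            """Star/BD ratio R for a single power-law IMF dN/dm ~ m**(-alpha)."""
            imf = lambda m: m ** (-alpha)
            n_bd, _ = quad(imf, m_min, m_hbl)    # substellar objects
            n_star, _ = quad(imf, m_hbl, m_max)  # stars
            return n_star / n_bd

        for alpha in (0.7, 1.0):
            print(f"alpha = {alpha}: R = {star_bd_ratio(alpha):.2f}")

    Steeper slopes (larger alpha) place relatively more objects below the hydrogen-burning limit and hence give smaller R.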

    Sustainable approaches for stormwater quality improvements with experimental geothermal paving systems

    This research assesses the next generation of permeable pavement systems (PPS) incorporating ground source heat pumps (geothermal paving systems). Twelve experimental pilot-scale pavement systems were assessed for their stormwater treatability in Edinburgh, UK. The relatively high variability of temperatures during the heating and cooling cycles of a ground source heat pump system embedded into the pavement structure did not permit pathogenic microbes to proliferate and survive. Carbon dioxide monitoring indicated relatively high microbial activity on the geotextile layer and within the pavement structure. Anaerobic degradation processes were concentrated around the geotextile zone, where carbon dioxide concentrations reached up to 2000 ppm. The overall water treatment potential was high, with up to 99% biochemical oxygen demand (BOD) removal. The pervious pavement systems reduced the ecological risk of stormwater discharges and presented a low risk of pathogen growth.
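    The quoted treatment figure is a removal efficiency, i.e. the relative drop in pollutant concentration between influent and effluent. A minimal sketch with hypothetical concentrations (the article reports percentages, not these raw values):

        # removal efficiency from influent/effluent concentrations (hypothetical values)
        bod_in, bod_out = 120.0, 1.2   # mg/L, influent and effluent BOD
        removal = 100 * (bod_in - bod_out) / bod_in
        print(f"BOD removal: {removal:.1f}%")   # -> 99.0%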

    Posterior accuracy and calibration under misspecification in Bayesian generalized linear models

    Generalized linear models (GLMs) are popular for data analysis in almost all quantitative sciences, but the choice of likelihood family and link function is often difficult. This motivates the search for likelihoods and links that minimize the impact of potential misspecification. We perform a large-scale simulation study on double-bounded and lower-bounded response data in which we systematically vary both the true and the assumed likelihoods and links. In contrast to previous studies, we examine posterior calibration and uncertainty metrics in addition to point-estimate accuracy. Our results indicate that certain likelihoods and links can be remarkably robust to misspecification, performing almost on par with their respective true counterparts. Additionally, normal likelihood models with identity link (i.e., linear regression) often achieve calibration comparable to that of the more structurally faithful alternatives, at least in the studied scenarios. On the basis of our findings, we provide practical suggestions for robust likelihood and link choices in GLMs.
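    A toy, frequentist stand-in for one cell of such a simulation grid: generate a double-bounded response from a "true" beta likelihood with logit link, then fit both a structurally faithful logit-link model and a normal/identity model and compare how well the fitted means recover the truth. Everything here (sample size, coefficients, precision) is an assumed illustration, not the study's actual setup.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        x = rng.normal(size=n)
        mu = 1 / (1 + np.exp(-(0.4 + 0.9 * x)))  # true mean, logit link
        y = rng.beta(mu * 20, (1 - mu) * 20)     # true beta likelihood, phi = 20
        X = sm.add_constant(x)

        # structurally faithful mean/link (fractional-logit GLM)
        glm = sm.GLM(y, X, family=sm.families.Binomial()).fit()
        # misspecified: normal likelihood, identity link (linear regression)
        ols = sm.OLS(y, X).fit()

        for name, fit in [("logit-link GLM", glm), ("linear regression", ols)]:
            rmse = np.sqrt(np.mean((fit.predict(X) - mu) ** 2))
            print(f"{name}: RMSE of fitted mean = {rmse:.4f}")

    The paper's point is that the misspecified model can come surprisingly close on such metrics, including calibration, which this point-estimate sketch does not show.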

    Prediction can be safely used as a proxy for explanation in causally consistent Bayesian generalized linear models

    Bayesian modeling provides a principled approach to quantifying uncertainty in model parameters and model structure and has seen a surge of applications in recent years. Within the context of a Bayesian workflow, we are concerned with model selection for the purpose of finding models that best explain the data, that is, that help us understand the underlying data generating process. Since we rarely have access to the true process, all we are left with during real-world analyses is incomplete causal knowledge from sources outside the current data, together with model predictions of said data. This leads to the important question of when the use of prediction as a proxy for explanation in model selection is valid. We approach this question by means of large-scale simulations of Bayesian generalized linear models in which we investigate various causal and statistical misspecifications. Our results indicate that the use of prediction as a proxy for explanation is valid and safe only when the models under consideration are sufficiently consistent with the underlying causal structure of the true data generating process.
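    A compact way to see why causal consistency matters: condition on a collider and predictive fit improves while the causal estimate breaks. This is an illustrative simulation in the spirit of the paper's setup, not the authors' code; OLS stands in for the Bayesian GLMs.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 2000
        x = rng.normal(size=n)
        y = 0.5 * x + rng.normal(size=n)   # true causal effect of x on y: 0.5
        c = x + y + rng.normal(size=n)     # collider: caused by both x and y

        m1 = sm.OLS(y, sm.add_constant(x)).fit()                        # causally consistent
        m2 = sm.OLS(y, sm.add_constant(np.column_stack([x, c]))).fit()  # conditions on collider

        print(f"R^2: x only = {m1.rsquared:.2f}, x + collider = {m2.rsquared:.2f}")
        print(f"x coef: x only = {m1.params[1]:.2f}, x + collider = {m2.params[1]:.2f}")

    The collider model predicts better (R^2 roughly 0.6 vs 0.2 here) yet its x coefficient lands far from 0.5, so selecting on prediction alone would pick the causally wrong model.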

    Identification and Analysis of Patterns of Machine Learning Systems in Connected, Adaptive Production

    Over the past six decades, many companies have discovered the potential of computer-controlled systems in the manufacturing industry. Overall, digitization can be identified as one of the main drivers of cost reduction in manufacturing. However, recent advances in artificial intelligence indicate that there is still untapped potential in the use and analysis of industrial data. Many reports and surveys indicate that machine learning solutions are adopted slowly and that the implementation process is decelerated by inefficiencies. The goal of this paper is the systematic analysis of successfully implemented machine learning solutions in manufacturing, as well as the derivation of a more efficient implementation approach. To this end, three use cases were identified for in-depth analysis, and a framework for systematic comparisons between differently implemented solutions was developed. In all three use cases it is possible to derive implementation patterns and to identify key variables that determine the success of implementation. The identified patterns show that similar machine learning problems within the same use case can be solved with similar solutions. The results provide a heuristic for future implementation attempts tackling problems of a similar nature.

    Enhanced light emission from top-emitting organic light-emitting diodes by optimizing surface plasmon polariton losses

    We demonstrate enhanced light extraction for monochrome top-emitting organic light-emitting diodes (OLEDs). The enhancement, by a factor of 1.2 compared to a reference sample, is achieved by using a hole transport layer (HTL) material with a low refractive index (1.52). The low refractive index reduces the in-plane wave vector of the surface plasmon polariton (SPP) excited at the interface between the opaque bottom metallic electrode (anode) and the HTL. The shift of the SPP dispersion relation decreases the power dissipated into lost evanescent excitations and thus increases the outcoupling efficiency, although the SPP itself remains constant in intensity. The proposed method is suitable for emitter materials with isotropic orientation of the transition dipole moments as well as with anisotropic, preferentially horizontal orientation, resulting in comparable enhancement factors. Furthermore, for sufficiently low refractive indices of the HTL material, the SPP can be modeled as a propagating plane wave within the other organic materials of the optical microcavity. Thus, by applying further extraction methods, such as microlenses or Bragg gratings, it should become feasible to obtain even higher light extraction enhancements.
    Comment: 11 pages, 6 figures, will be submitted to PR
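    The mechanism can be checked against the textbook SPP dispersion relation, k_SPP = k0 * sqrt(eps_m * eps_d / (eps_m + eps_d)). A minimal sketch, assuming a green emission wavelength and an approximate silver permittivity (both are assumptions; the abstract does not fix these values):

        import numpy as np

        lam = 530e-9                  # vacuum wavelength in m (assumed green emitter)
        k0 = 2 * np.pi / lam
        eps_m = -11.7 + 0.37j         # approximate Ag permittivity near 530 nm (assumption)

        for n_htl in (1.52, 1.80):    # low-index HTL vs a conventional one
            eps_d = n_htl ** 2
            k_spp = k0 * np.sqrt(eps_m * eps_d / (eps_m + eps_d))
            print(f"n_HTL = {n_htl}: Re(k_spp)/k0 = {k_spp.real / k0:.3f}")

    The lower HTL index pulls Re(k_spp)/k0 down (about 1.70 vs 2.12 here), i.e. the SPP dispersion shifts toward the light line, which is the wave-vector reduction the abstract describes.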

    Some models are useful, but how do we know which ones? Towards a unified Bayesian model taxonomy

    Probabilistic (Bayesian) modeling has experienced a surge of applications in almost all quantitative sciences and industrial areas. This development is driven by a combination of several factors, including better probabilistic estimation algorithms, flexible software, increased computing power, and a growing awareness of the benefits of probabilistic learning. However, a principled Bayesian model-building workflow is far from complete and many challenges remain. To aid future research and applications of a principled Bayesian workflow, we ask, and provide answers for, what we perceive as the two fundamental questions of Bayesian modeling: (a) "What actually is a Bayesian model?" and (b) "What makes a good Bayesian model?". In answer to the first question, we propose the PAD model taxonomy, which defines four basic kinds of Bayesian models, each representing some combination of the assumed joint distribution of all (known or unknown) variables (P), a posterior approximator (A), and training data (D). In answer to the second question, we propose ten utility dimensions along which Bayesian models can be evaluated holistically, namely (1) causal consistency, (2) parameter recoverability, (3) predictive performance, (4) fairness, (5) structural faithfulness, (6) parsimony, (7) interpretability, (8) convergence, (9) estimation speed, and (10) robustness. Further, we propose two example utility decision trees that describe hierarchies and trade-offs between utilities depending on the inferential goals that drive model building and testing.
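    One plausible reading of the PAD taxonomy as a data structure: a model is identified by which of the three components it fixes. The sketch below is an interpretive illustration, not code from the paper, and the four kinds shown are an assumption based on the abstract's wording.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class BayesianModel:
            P: str                   # assumed joint distribution of all variables
            A: Optional[str] = None  # posterior approximator, e.g. "MCMC" or "VI"
            D: Optional[str] = None  # training data the model is conditioned on

            @property
            def kind(self) -> str:
                """Which components are specified, e.g. 'P', 'PA', 'PD', 'PAD'."""
                return "P" + ("A" if self.A else "") + ("D" if self.D else "")

        print(BayesianModel(P="GLM joint").kind)                             # P
        print(BayesianModel(P="GLM joint", A="MCMC", D="survey data").kind)  # PAD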