1,382 research outputs found

    Feature importance for machine learning redshifts applied to SDSS galaxies

    We present an analysis of feature importance selection applied to photometric redshift estimation using the machine learning architecture of Decision Trees with the ensemble learning routine Adaboost (hereafter RDF). We select a list of 85 easily measured (or derived) photometric quantities (or 'features') and spectroscopic redshifts for almost two million galaxies from the Sloan Digital Sky Survey Data Release 10. After identifying which features have the most predictive power, we use standard artificial Neural Networks (aNNs) to show that the addition of these features, in combination with the standard magnitudes and colours, improves the machine learning redshift estimate by 18% and decreases the catastrophic outlier rate by 32%. We further compare the redshift estimates from the RDF with those from two different aNNs, and with the photometric redshifts available from the SDSS. We find that the RDF requires orders of magnitude less computation time than the aNNs to obtain a machine learning redshift, while reducing both the catastrophic outlier rate (by up to 43%) and the redshift error (by up to 25%). When compared to the SDSS photometric redshifts, the RDF machine learning redshifts both decrease the standard deviation of residuals scaled by 1/(1+z) by 36%, from 0.066 to 0.041, and decrease the fraction of catastrophic outliers by 57%, from 2.32% to 0.99%. Comment: 10 pages, 4 figures, updated to match version accepted in MNRAS.
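
    To make the ranking step concrete, here is a minimal sketch using scikit-learn's AdaBoost over decision trees in place of the authors' pipeline; the file names, tree depth and top-10 cut are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal sketch (not the paper's code): rank photometric features by their
    # importance under AdaBoost-on-decision-trees, then retrain on the best ones.
    import numpy as np
    from sklearn.ensemble import AdaBoostRegressor
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import train_test_split

    X = np.load("photometry.npy")  # (n_galaxies, 85) features -- hypothetical file
    z = np.load("spec_z.npy")      # spectroscopic redshifts   -- hypothetical file
    X_tr, X_te, z_tr, z_te = train_test_split(X, z, test_size=0.3, random_state=0)

    forest = AdaBoostRegressor(DecisionTreeRegressor(max_depth=8),
                               n_estimators=100, random_state=0).fit(X_tr, z_tr)

    # Rank features by their summed importance across the boosted trees,
    # then retrain using only the ten most informative ones.
    top = np.argsort(forest.feature_importances_)[::-1][:10]
    slim = AdaBoostRegressor(DecisionTreeRegressor(max_depth=8),
                             n_estimators=100, random_state=0).fit(X_tr[:, top], z_tr)

    # Standard photo-z quality metric: std of residuals scaled by 1/(1+z).
    resid = (slim.predict(X_te[:, top]) - z_te) / (1 + z_te)
    print("sigma_z =", resid.std())
    ```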

    Techno-economic heat transfer optimization of large scale latent heat energy storage systems in solar thermal power plants

    Concentrated solar power plants with integrated storage systems are key technologies for sustainable energy supply and reduced anthropogenic CO2 emissions. Developing technologies include direct steam generation in parabolic trough systems, which offers benefits due to higher steam temperatures and, thus, higher electrical efficiencies. However, no large-scale energy storage technology is available for it yet. A promising option is a combined system consisting of a state-of-the-art sensible molten salt storage system and a high temperature latent heat thermal energy storage system (LHTESS). This paper discusses the systematic development and optimization of heat transfer structures in LHTESS from a technological and economic point of view. Two evaluation parameters are developed in order to minimize the specific investment costs. The first, the specific product cost, determines the optimum equipment of the latent heat storage module, i.e. the finned tube. The second parameter reflects the interacting behavior of the LHTESS and the steam turbine during discharge; this behavior is described with a simplified power block model that couples both components.
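
    As a purely illustrative reading of the first evaluation parameter, a specific investment cost can be compared across storage-module designs as capital expenditure per unit of dischargeable thermal energy; every number below is invented for the sketch, none is taken from the paper.

    ```python
    # Illustrative only: rank finned-tube LHTESS designs by specific investment
    # cost (EUR per kWh_th of dischargeable thermal energy). All figures invented.
    designs = [
        # (label, investment in EUR, dischargeable thermal energy in kWh_th)
        ("few large fins",  1.8e6,  9.0e3),
        ("many small fins", 2.1e6, 11.5e3),
    ]
    for label, capex, energy in designs:
        print(f"{label}: {capex / energy:.0f} EUR/kWh_th")
    ```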

    Tuning target selection algorithms to improve galaxy redshift estimates

    We showcase machine learning (ML) inspired target selection algorithms that determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties, as calculated on an independent sample, while requiring fewer targets to be observed. We compare the ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order, and with a random targeting algorithm. The ML-inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the machine learning methods to accurately estimate redshifts for, using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30% of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and find that some of the ML targeting algorithms are able to obtain the same redshift predictive power with 2-3 times less observing time than the SDSS or random target selection algorithms. The reduction in the required follow-up resources could allow for a change to the follow-up strategy, for example by obtaining deeper spectroscopy, which could improve ML redshift estimates for deeper test data. Comment: 16 pages, 9 figures, updated to match MNRAS accepted version. Minor text changes, results unchanged.
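
    The iterative loop can be sketched as follows: train on the observed set, predict each remaining target's redshift error from its photometry, and observe the predicted-hardest galaxies first. The model choices and function names here are illustrative assumptions, not the paper's algorithm.

    ```python
    # Schematic active target selection: rank pool galaxies by how badly the
    # current model is predicted to estimate their redshift. Illustrative only.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def rank_targets(X_obs, z_obs, X_pool):
        """Return indices into X_pool, hardest-to-predict first."""
        zmodel = RandomForestRegressor(n_estimators=100, random_state=0)
        zmodel.fit(X_obs, z_obs)
        # Second model: predict |z_ML - z_spec| from photometry alone.
        err = np.abs(zmodel.predict(X_obs) - z_obs)
        emodel = RandomForestRegressor(n_estimators=100, random_state=0)
        emodel.fit(X_obs, err)
        return np.argsort(emodel.predict(X_pool))[::-1]

    # In a survey this runs iteratively: observe the top-ranked batch, append
    # the new spectra to (X_obs, z_obs), and re-rank the remaining pool.
    ```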

    Money In Modern Macro Models: A Review of the Arguments

    This paper provides an overview of the role of money in modern macro models. In particular, we focus on New Keynesian and New Monetarist models to investigate their main findings and most significant shortcomings in considering money properly. As a further step, we ask about the role of financial intermediaries in this respect. In dealing with these issues, we distinguish between narrow and broad monetary aggregates. We conclude that for theoretical as well as practical reasons a periodic review of the definition of monetary aggregates is advisable. Despite the criticism brought forward by the recent New Keynesian literature, we argue that keeping an eye on money is important to monetary policy decision-makers in order to safeguard price stability and, as a side benefit, to ensure financial market stability. In a nutshell: money still matters.

    Anomaly detection for machine learning redshifts applied to SDSS galaxies

    We present an analysis of anomaly detection for machine learning redshift estimation. Anomaly detection allows the removal of poor training examples, which can adversely influence redshift estimates. Anomalous training examples may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies with one or more poorly measured photometric quantities. We select 2.5 million 'clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730 'anomalous' galaxies with spectroscopic redshift measurements that are flagged as unreliable. We contaminate the clean base galaxy sample with the galaxies with unreliable redshifts and attempt to recover the contaminating galaxies using the Elliptical Envelope technique. We then train four machine learning architectures for redshift analysis on both the contaminated sample and on the preprocessed 'anomaly-removed' sample, and measure redshift statistics on a clean validation sample generated without any preprocessing. We find an improvement of up to 80% in all measured statistics when training on the anomaly-removed sample as compared with training on the contaminated sample, for each of the machine learning routines explored. We further describe a method to estimate the contamination fraction of a base data sample. Comment: 13 pages, 8 figures, 1 table, minor text updates to match MNRAS accepted version.
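
    The preprocessing step maps naturally onto scikit-learn's EllipticEnvelope; a minimal sketch follows, with placeholder input files and an assumed contamination fraction.

    ```python
    # Minimal sketch: flag likely-anomalous training galaxies with an elliptical
    # envelope in feature space, keep only the inliers for redshift training.
    import numpy as np
    from sklearn.covariance import EllipticEnvelope

    X = np.load("train_features.npy")  # photometric features -- hypothetical file
    z = np.load("train_spec_z.npy")    # spectroscopic redshifts -- hypothetical

    # Append redshift as a column so spectroscopic outliers are also caught.
    F = np.column_stack([X, z])
    env = EllipticEnvelope(contamination=0.003)  # assumed contamination fraction
    keep = env.fit_predict(F) == 1               # +1 = inlier, -1 = anomaly

    X_clean, z_clean = X[keep], z[keep]
    print(f"removed {np.sum(~keep)} of {len(z)} training galaxies")
    ```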

    Stacking for machine learning redshifts applied to SDSS galaxies

    We present an analysis of a general machine learning technique called 'stacking' for the estimation of photometric redshifts. Stacking techniques can feed the photometric redshift estimate, as output by a base algorithm, back into the same algorithm as an additional input feature in a subsequent learning round. We show that all tested base algorithms benefit from at least one additional stacking round (or layer). To demonstrate the benefit of stacking, we apply the method to both unsupervised machine learning techniques based on self-organising maps (SOMs), and supervised machine learning methods based on decision trees. We explore a range of stacking architectures, varying the number of layers and the number of base learners per layer. Finally, we explore the effectiveness of stacking even when using a successful algorithm such as AdaBoost. We observe a significant improvement of between 1.9% and 21% in all computed metrics when stacking is applied to weak learners (such as SOMs and decision trees). When applied to strong learning algorithms (such as AdaBoost) the relative improvement shrinks, but remains positive, between 0.4% and 2.5% for the explored metrics, and comes at almost no additional computational cost. Comment: 13 pages, 3 tables, 7 figures, version accepted by MNRAS, minor text updates. Results and conclusions unchanged.
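
    A single stacking round can be sketched with a decision tree as the base learner (the SOM and AdaBoost variants work the same way): the base estimate becomes an extra input feature for the next layer. A careful implementation would use out-of-fold predictions on the training set to avoid leakage; that refinement is omitted from this sketch.

    ```python
    # One stacking round (sketch): feed the base learner's redshift estimate
    # back in as an additional input feature for a second-layer learner.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def stack_once(X_tr, z_tr, X_te):
        base = DecisionTreeRegressor(max_depth=10).fit(X_tr, z_tr)
        # Augment both sets with the layer-one prediction ...
        X_tr2 = np.column_stack([X_tr, base.predict(X_tr)])
        X_te2 = np.column_stack([X_te, base.predict(X_te)])
        # ... and train the layer-two learner on the augmented features.
        top = DecisionTreeRegressor(max_depth=10).fit(X_tr2, z_tr)
        return top.predict(X_te2)
    ```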

    PYRO-NN: Python Reconstruction Operators in Neural Networks

    Purpose: Recently, several attempts have been made to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding the CT reconstruction as a known operator into a neural network. However, most of the approaches presented lack an efficient CT reconstruction framework fully integrated into deep learning environments. As a result, many approaches are forced to use workarounds for mathematically unambiguously solvable problems. Methods: PYRO-NN is a generalized framework to embed known operators into the prevalent deep learning framework Tensorflow. The current status includes state-of-the-art parallel-, fan- and cone-beam projectors and back-projectors accelerated with CUDA and provided as Tensorflow layers. On top, the framework provides a high-level Python API to conduct FBP and iterative reconstruction experiments with data from real CT systems. Results: The framework provides all necessary algorithms and tools to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows the layers to be used as simply as standard Tensorflow layers. To demonstrate the capabilities of the layers, the framework comes with three baseline experiments: a cone-beam short-scan FDK reconstruction, a CT reconstruction filter learning setup, and a TV-regularized iterative reconstruction. All algorithms and tools are referenced to a scientific publication and are compared to existing non-deep-learning reconstruction frameworks. The framework is available as open-source software at https://github.com/csyben/PYRO-NN. Conclusions: PYRO-NN integrates with the prevalent deep learning framework Tensorflow and allows end-to-end trainable neural networks to be set up in the medical image reconstruction context. We believe that the framework will be a step towards reproducible research. Comment: V1: Submitted to Medical Physics, 11 pages, 7 figures.
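
    The "known operator" idea itself is easy to illustrate in plain Tensorflow: a fixed linear operator (e.g. a discretised projector) is embedded as a non-trainable layer so that gradients flow through it to the trainable layers around it. The sketch below is generic and deliberately does not use the PYRO-NN API; see the linked repository for the real CUDA-backed layers.

    ```python
    # Generic known-operator illustration (NOT the PYRO-NN API): embed a fixed
    # matrix A as a non-trainable Tensorflow layer between trainable layers.
    import numpy as np
    import tensorflow as tf

    A = np.load("system_matrix.npy").astype("float32")  # hypothetical operator

    class KnownOperator(tf.keras.layers.Layer):
        def __init__(self, matrix):
            super().__init__(trainable=False)
            self.A = tf.constant(matrix)

        def call(self, x):
            return tf.linalg.matvec(self.A, x)  # apply the fixed operator

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(A.shape[1], activation="relu"),  # trainable
        KnownOperator(A),                                      # fixed physics
        tf.keras.layers.Dense(A.shape[0]),                     # trainable
    ])
    ```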

    Does chemical cross-linking with NHS esters reflect the chemical equilibrium of protein-protein noncovalent interactions in solution?

    Chemical cross-linking in combination with mass spectrometry has emerged as a powerful tool to study noncovalent protein complexes. Nevertheless, there are still many questions to answer. Does the amount of detected cross-linked complex correlate with the amount of protein complex in solution? In which concentration and affinity range is specific cross-linking possible? To answer these questions, we performed systematic cross-linking studies with two complexes, using the N-hydroxysuccinimidyl ester disuccinimidyl suberate (DSS): (1) NCoA-1 and mutants of the interacting peptide STAT6Y, covering a KD range of 30 nM to >25 μM, and (2) α-thrombin and basic pancreatic trypsin inhibitor (BPTI), a system that shows a buffer-dependent KD value between 100 and 320 μM. Samples were analyzed by matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). For NCoA-1·STAT6Y, a good correlation between the amount of cross-linked species and the calculated fraction of complex present in solution was observed. Thus, chemical cross-linking in combination with MALDI-MS can be used to rank binding affinities. For the mid-affinity range, up to about KD ≈ 25 μM, experiments with a nonbinding peptide and studies of the concentration dependence showed that only specific complexes undergo cross-linking with DSS. To study in which affinity range specific cross-linking can be applied, the weak α-thrombin·BPTI complex was investigated. We found that the detected complex is a nonspecifically cross-linked species. Consequently, based on the experimental approach used in this study, chemical cross-linking is not suitable for studying low-affinity complexes with KD ≫ 25 μM.
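
    The "calculated fraction of complex present in solution" is the standard 1:1 equilibrium result; for reference, a minimal computation from the total concentrations and KD (exact quadratic solution for A + B ⇌ AB; the example concentrations are arbitrary).

    ```python
    # Fraction of protein A bound in a 1:1 complex A + B <=> AB at equilibrium,
    # from total concentrations and K_D (all in the same units, here uM).
    import math

    def complex_fraction(a_tot, b_tot, kd):
        s = a_tot + b_tot + kd
        ab = (s - math.sqrt(s * s - 4.0 * a_tot * b_tot)) / 2.0
        return ab / a_tot

    # Arbitrary example: 10 uM of each binding partner.
    print(complex_fraction(10.0, 10.0, 0.03))  # K_D = 30 nM -> ~0.95 bound
    print(complex_fraction(10.0, 10.0, 25.0))  # K_D = 25 uM -> ~0.23 bound
    ```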

    Near-inertial wave scattering by random flows

    The impact of a turbulent flow on wind-driven oceanic near-inertial waves is examined using a linearised shallow-water model of the mixed layer. Modelling the flow as a homogeneous and stationary random process with spatial scales comparable to the wavelengths, we derive a transport (or kinetic) equation governing wave-energy transfers in both physical and spectral space. This equation describes the scattering of the waves by the flow, which results in a redistribution of energy between waves with the same frequency (or, equivalently, with the same wavenumber) and, for isotropic flows, in the isotropisation of the wave field. The time scales for the scattering and isotropisation are obtained explicitly and found to be of the order of tens of days for typical oceanic parameters. The predictions inferred from the transport equation are confirmed by a series of numerical simulations. Two situations in which near-inertial waves are strongly influenced by flow scattering are investigated through dedicated nonlinear shallow-water simulations. In the first, a wavepacket propagating equatorwards as a result of the β-effect is shown to be slowed down and dispersed both zonally and meridionally by scattering. In the second, waves generated by moving cyclones are shown to be strongly disturbed by scattering, leading again to increased dispersion. Comment: Accepted for publication in Phys. Rev. Fluids.
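
    In schematic form, a transport equation with the stated properties (energy advected at the group velocity in physical space, and scattering that exchanges energy only between wavevectors of equal magnitude) can be written as below; the notation is assumed for illustration and is not quoted from the paper.

    ```latex
    % Schematic only: e(x,k,t) is the wave-energy density, omega(k) the wave
    % frequency and sigma a flow-dependent scattering cross-section; the delta
    % function restricts exchanges to wavevectors of equal magnitude.
    \frac{\partial e}{\partial t}
      + \nabla_{\boldsymbol{k}}\,\omega \cdot \nabla_{\boldsymbol{x}} e
      = \int \sigma(\boldsymbol{k},\boldsymbol{k}')\,
          \bigl[ e(\boldsymbol{x},\boldsymbol{k}',t) - e(\boldsymbol{x},\boldsymbol{k},t) \bigr]\,
          \delta\bigl(|\boldsymbol{k}'| - |\boldsymbol{k}|\bigr)\,
          \mathrm{d}\boldsymbol{k}'
    ```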

    Literature survey on recent progress in inter-vehicle communication simulations

    The vehicular ad hoc network (VANET) technology based on the approved IEEE 802.11p standard and the associated inter-vehicle communication (IVC) has the potential to dramatically change the way transportation systems work. The fundamental idea is to change the individual behavior of each vehicle by exchanging information among traffic participants, realizing a cooperative and more efficient transportation system. Evaluating such systems in a real-world test bed is a comprehensive and challenging task; simulation frameworks are therefore a key tool for analyzing IVC. Several models are needed to emulate the real behavior of a VANET in all aspects as realistically as necessary. The intention of this survey is to provide a comprehensive overview of publications concerning IVC simulations in the year 2013 and to see how IVC simulation has changed since 2009. Based on this analysis, we answer the following questions: Which simulation techniques are applied to IVC? Which aspects of IVC have been evaluated? What has changed within five years of IVC simulations? We also take a closer look at commonly used software tools and discuss their functionality and drawbacks. Finally, we present open questions concerning IVC simulations.