Anomaly detection for machine learning redshifts applied to SDSS galaxies
We present an analysis of anomaly detection for machine learning redshift
estimation. Anomaly detection allows the removal of poor training examples,
which can adversely influence redshift estimates. Anomalous training examples
may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies
with one or more poorly measured photometric quantities. We select 2.5 million
'clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730
'anomalous' galaxies with spectroscopic redshift measurements which are flagged
as unreliable. We contaminate the clean base galaxy sample with galaxies with
unreliable redshifts and attempt to recover the contaminating galaxies using
the Elliptical Envelope technique. We then train four machine learning
architectures for redshift analysis on both the contaminated sample and on the
preprocessed 'anomaly-removed' sample and measure redshift statistics on a
clean validation sample generated without any preprocessing. We find an
improvement of up to 80% in all measured statistics when training on the
anomaly-removed sample as compared with training on the contaminated sample for
each of the machine learning routines explored. We further describe a method to
estimate the contamination fraction of a base data sample. Comment: 13 pages, 8 figures, 1 table, minor text updates to match MNRAS accepted version
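The preprocessing step described here maps naturally onto scikit-learn's EllipticEnvelope estimator. The sketch below is illustrative only: the synthetic features, the assumed 1% contamination fraction, and the random-forest regressor are stand-ins, not the paper's photometric feature set or the four architectures it explores.

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import RandomForestRegressor

# Placeholder data: photometric features and spectroscopic redshifts.
# The paper's actual feature set (SDSS magnitudes/colours) may differ.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
z_spec = rng.uniform(0.0, 0.8, size=10_000)

# Fit a robust elliptical envelope and flag an assumed 1% of the training
# sample as anomalous (the true contamination fraction is unknown a priori).
envelope = EllipticEnvelope(contamination=0.01, random_state=0)
inlier_mask = envelope.fit_predict(X) == 1   # +1 = inlier, -1 = anomaly

# Train a redshift regressor on the anomaly-removed sample only.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[inlier_mask], z_spec[inlier_mask])
z_photo = model.predict(X)
```

Redshift statistics would then be measured on a separate validation sample generated without any preprocessing, as in the paper.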
How Protostellar Outflows Help Massive Stars Form
We consider the effects of an outflow on radiation escaping from the
infalling envelope around a massive protostar. Using numerical radiative
transfer calculations, we show that outflows with properties comparable to
those observed around massive stars lead to significant anisotropy in the
stellar radiation field, which greatly reduces the radiation pressure
experienced by gas in the infalling envelope. This means that radiation
pressure is a much less significant barrier to massive star formation than has
previously been thought. Comment: 4 pages, 2 figures, emulateapj, accepted for publication in ApJ Letters
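For orientation, the isotropic (no-outflow) comparison of radiation pressure and gravity on envelope gas is the standard estimate below; this is only a back-of-the-envelope baseline, not the paper's radiative transfer calculation.

```latex
% Radiative vs. gravitational acceleration at radius r from a star of
% luminosity L_* and mass M_*, for envelope gas with flux-mean opacity \kappa:
f_{\mathrm{rad}} = \frac{\kappa L_*}{4\pi r^2 c}, \qquad
f_{\mathrm{grav}} = \frac{G M_*}{r^2}, \qquad
\frac{f_{\mathrm{rad}}}{f_{\mathrm{grav}}} = \frac{\kappa L_*}{4\pi G c M_*}
```

If outflow cavities channel a large fraction of the stellar flux along the poles, the flux reaching equatorial infalling gas, and with it f_rad, is reduced accordingly; this anisotropy is what the radiative transfer calculations quantify.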
Bondi Accretion in the Presence of Vorticity
The classical Bondi-Hoyle formula gives the accretion rate onto a point
particle of a gas with a uniform density and velocity. However, the Bondi-Hoyle
problem considers only gas with no net vorticity, while in a real astrophysical
situation accreting gas invariably has at least a small amount of vorticity. We
therefore consider the related case of accretion of gas with constant
vorticity, in the limits of both small and large vorticity. We confirm the
findings of earlier two-dimensional simulations that even a small amount of
vorticity can substantially change both the accretion rate and the morphology
of the gas flow lines. We show that in three dimensions the resulting flow
field is non-axisymmetric and time dependent. The reduction in accretion rate
is due to an accumulation of circulation near the accreting particle. Using a
combination of simulations and analytic treatment, we provide an approximate
formula for the accretion rate of gas onto a point particle as a function of
the vorticity of the surrounding gas. Comment: 34 pages, 10 figures, accepted for publication in ApJ
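For reference, the zero-vorticity baseline that the vorticity-dependent formula modifies is the classical interpolated Bondi-Hoyle rate (order-unity prefactor omitted):

```latex
% Accretion onto a point mass M moving at speed v_\infty through gas of
% ambient density \rho_\infty and sound speed c_s:
\dot{M}_{\mathrm{BH}} \simeq \frac{4\pi \rho_\infty G^2 M^2}{\left(c_s^2 + v_\infty^2\right)^{3/2}}
```

The paper's approximate formula replaces this velocity dependence with a dependence on the ambient vorticity; the specific fit is given in the article itself.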
Chemical and physical influences on aerosol activation in liquid clouds: a study based on observations from the Jungfraujoch, Switzerland
A simple statistical model to predict the number of aerosols which activate to form cloud droplets in warm clouds has been established, based on regression analysis of data from four summertime Cloud and Aerosol Characterisation Experiments (CLACE) at the high-altitude site Jungfraujoch (JFJ). It is shown that 79 % of the observed variance in droplet numbers can be represented by a model accounting only for the number of potential cloud condensation nuclei (defined as number of particles larger than 80 nm in diameter), while the mean errors in the model representation may be reduced by the addition of further explanatory variables, such as the mixing ratios of O3, CO, and the height of the measurements above cloud base. The statistical model has a similar ability to represent the observed droplet numbers in each of the individual years, as well as for the two predominant local wind directions at the JFJ (northwest and southeast). Given the central European location of the JFJ, with air masses in summer being representative of the free troposphere with regular boundary layer in-mixing via convection, we expect that this statistical model is generally applicable to warm clouds under conditions where droplet formation is aerosol limited (i.e. at relatively high updraught velocities and/or relatively low aerosol number concentrations). A comparison between the statistical model and an established microphysical parametrization shows good agreement between the two and supports the conclusion that cloud droplet formation at the JFJ is predominantly controlled by the number concentration of aerosol particles.
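As a schematic of the kind of regression model described, the sketch below first fits droplet number against potential CCN number alone and then against an extended predictor set. All arrays are synthetic placeholders, not CLACE observations, and the variable names and coefficients are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical observations (placeholders for the CLACE data):
# n80 - number of particles larger than 80 nm (potential CCN)
# o3, co - trace-gas mixing ratios; dz - height above cloud base
rng = np.random.default_rng(1)
n80 = rng.uniform(50, 1500, size=500)
o3 = rng.uniform(30, 70, size=500)
co = rng.uniform(80, 160, size=500)
dz = rng.uniform(0, 400, size=500)
n_drop = 0.6 * n80 + rng.normal(0, 60, size=500)  # synthetic droplet numbers

# Single-predictor model: potential CCN only.
base = LinearRegression().fit(n80.reshape(-1, 1), n_drop)
print("R^2, CCN only:", r2_score(n_drop, base.predict(n80.reshape(-1, 1))))

# Extended model with further explanatory variables.
X = np.column_stack([n80, o3, co, dz])
full = LinearRegression().fit(X, n_drop)
print("R^2, extended:", r2_score(n_drop, full.predict(X)))
```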
Ice nucleation efficiency of natural dust samples in the immersion mode
A total of 12 natural surface dust samples, which were surface-collected on four continents, most of them in dust source regions, were investigated with respect to their ice nucleation activity. Dust collection sites were distributed across Africa, South America, the Middle East, and Antarctica. Mineralogical composition has been determined by means of X-ray diffraction. All samples proved to be mixtures of minerals, with major contributions from quartz, calcite, clay minerals, K-feldspars, and (Na, Ca)-feldspars. Reference samples of these minerals were investigated with the same methods as the natural dust samples. Furthermore, Arizona test dust (ATD) was re-evaluated as a benchmark. Immersion freezing of emulsion and bulk samples was investigated by differential scanning calorimetry. For emulsion measurements, water droplets with a size distribution peaking at about 2 µm, containing different amounts of dust between 0.5 and 50 wt %, were cooled until all droplets were frozen. These measurements characterize the average freezing behaviour of particles, as they are sensitive to the average active sites present in a dust sample. In addition, bulk measurements were conducted with one single 2 mg droplet consisting of a 5 wt % aqueous suspension of the dusts/minerals. These measurements allow the investigation of the best ice-nucleating particles/sites available in a dust sample. All natural dusts, except for the Antarctica and ATD samples, froze in a remarkably narrow temperature range with the heterogeneously frozen fraction reaching 10 % between 244 and 250 K, 25 % between 242 and 246 K, and 50 % between 239 and 244 K. Bulk freezing occurred between 255 and 265 K. In contrast to the natural dusts, the reference minerals revealed ice nucleation temperatures with 2–3 times larger scatter. Calcite, dolomite, dolostone, and muscovite can be considered ice nucleation inactive. For microcline samples, a 50 % heterogeneously frozen fraction occurred above 245 K for all tested suspension concentrations, and a microcline mineral showed bulk freezing temperatures even above 270 K. This makes microcline (KAlSi3O8) an exceptionally good ice-nucleating mineral, superior to all other analysed K-feldspars, (Na, Ca)-feldspars, and the clay minerals. In summary, the mineralogical composition can fully explain the observed freezing behaviour of 5 of the 12 investigated natural dust samples, and partly explain that of 6 samples, leaving the freezing efficiency of only 1 sample not easily explained in terms of its mineral reference components. While this suggests that mineralogical composition is a major determinant of ice-nucleating ability, in practice, most natural samples consist of a mixture of minerals, and this mixture seems to lead to remarkably similar ice nucleation abilities, regardless of their exact composition, so that global models, in a first approximation, may represent mineral dust as a single species with respect to ice nucleation activity. However, more sophisticated representations of ice nucleation by mineral dusts should rely on the mineralogical composition based on a source scheme of dust emissions.
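As a minimal illustration of how the quoted threshold temperatures can be read off a measured freezing curve, the sketch below finds the temperatures at which the heterogeneously frozen fraction first reaches 10 %, 25 %, and 50 % on cooling; the curve itself is a synthetic placeholder, not DSC data from the study.

```python
import numpy as np

# Synthetic frozen-fraction curve (placeholder for a DSC-derived curve):
# temperatures in K along a cooling scan and the cumulative frozen fraction.
T = np.linspace(255.0, 235.0, 200)
frozen_frac = 1.0 / (1.0 + np.exp(T - 244.0))   # synthetic sigmoid

def threshold_temperature(temps, fractions, level):
    """Temperature at which the frozen fraction first reaches `level` on cooling."""
    idx = np.argmax(fractions >= level)
    return temps[idx]

for level in (0.10, 0.25, 0.50):
    t_level = threshold_temperature(T, frozen_frac, level)
    print(f"T({level:.0%} frozen) = {t_level:.1f} K")
```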
Thiol-yne 'Click' Chemistry As a Route to Functional Lipid Mimetics
Thiol-alkyne 'click' chemistry is a modular, efficient mechanism to synthesize complex A2B 3-arm star polymers. This general motif is similar to a phospholipid where the A blocks correspond to lipophilic chains and the B block represents the polar head group. In this communication we employ thiol-yne chemistry to produce polypeptide-based A2B lipid mimetics. The utility of the thiol-yne reaction is demonstrated by using a divergent and a convergent approach in the synthesis. These polymers self-assemble in aqueous solution into spherical vesicles with a relatively narrow size distribution independent of block composition over the range studied. Using the thiol-yne convergent synthesis, we envision a modular approach to functionalize proteins or oligopeptides with lipophilic chains that can embed seamlessly into a cell membrane.
Modeling multidisciplinary design with multiagent learning
Complex engineered systems design is a collaborative activity. To design a system, experts from the relevant disciplines must work together to create the best overall system from their individual components. This situation is analogous to a multiagent system in which agents solve individual parts of a larger problem in a coordinated way. Current multiagent models of design teams, however, do not capture this distributed aspect of design teams, instead representing designers as agents which control all variables, measuring organizational outcomes instead of design outcomes, or representing different aspects of distributed design, such as negotiation. This paper presents a new model which captures the distributed nature of complex systems design by decomposing the ability to control design variables to individual computational designers acting on a problem with shared constraints. These designers are represented as a multiagent learning system which is shown to perform similarly to a centralized optimization algorithm on the same domain. When used as a model, this multiagent system is shown to perform better when the level of designer exploration is not decayed but is instead controlled based on the increase of design knowledge, suggesting that designers in multidisciplinary teams should not simply reduce the scope of design exploration over time, but should adapt based on changes in their collective knowledge of the design space. This multiagent system is further shown to produce better-performing designs when computational designers design collaboratively as opposed to independently, confirming the importance of collaboration in complex systems design.
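A minimal sketch of the distributed set-up described above, under assumed simplifications (a toy quadratic objective and an ad hoc adaptation rule, neither taken from the paper): each computational designer controls only its own design variable of a shared problem, and its exploration scale is adjusted according to whether the shared design knowledge is still improving rather than decayed on a fixed schedule.

```python
import random

def objective(x):
    """Shared design objective (hypothetical coupled quadratic); lower is better."""
    return sum((xi - 0.3 * i) ** 2 for i, xi in enumerate(x))

n_agents, n_steps = 4, 2000
design = [random.uniform(-1.0, 1.0) for _ in range(n_agents)]
best = objective(design)
epsilon = [0.5] * n_agents            # per-designer exploration scale

for _ in range(n_steps):
    for i in range(n_agents):
        trial = list(design)
        trial[i] += random.gauss(0.0, epsilon[i])     # designer i perturbs only its own variable
        value = objective(trial)
        if value < best:
            design, best = trial, value
            epsilon[i] = min(epsilon[i] * 1.05, 1.0)  # knowledge still growing: keep exploring
        else:
            epsilon[i] *= 0.999                       # adapt slowly, not a fixed time decay

print("best objective found:", round(best, 4))
```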
Quantifying the Resilience-Informed Scenario Cost Sum: A Value-Driven Design Approach for Functional Hazard Assessment
Complex engineered systems can carry the risk of high failure consequences, and as a result, resilience, the ability to avoid or quickly recover from faults, is desirable. Ideally, resilience should be designed-in as early in the design process as possible so that designers can best leverage the ability to explore the design space. Toward this end, previous work has developed functional modeling languages which represent the functions that must be performed by a system, and function-based fault modeling frameworks which predict the resulting fault propagation behavior of a given functional model. However, little has been done to formally optimize or compare designs based on these predictions, partially because the effects of these models have not been quantified into an objective function to optimize. The work described herein closes this gap by introducing the resilience-informed scenario cost sum (RISCS), a scoring function which integrates with a fault scenario-based simulation, to enable the optimization and evaluation of functional model resilience. The scoring function accomplishes this by quantifying the expected cost of a design's fault response using probability information, and combining this cost with design and operational costs such that it may be parameterized in terms of designer-specified resilient features. The usefulness and limitations of this approach in a general optimization and concept selection framework are discussed and demonstrated on a monopropellant system design problem. Using RISCS as an objective for optimization, the algorithm selects the set of resilient features which provides the optimal trade-off between design cost and risk. For concept selection, RISCS is used to judge whether resilient concept variants justify their design costs and to make direct comparisons between different model structures.
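A minimal sketch of a RISCS-style score, assuming the structure implied above (expected fault-response cost combined with design and operational costs); the function name, cost figures, and scenario probabilities are hypothetical, not values from the paper.

```python
def riscs(design_cost, operational_cost, scenarios):
    """Resilience-informed scenario cost sum (assumed form): design and operating
    costs plus the probability-weighted cost of simulated fault scenarios."""
    expected_fault_cost = sum(p * cost for p, cost in scenarios)
    return design_cost + operational_cost + expected_fault_cost

# Hypothetical comparison of two monopropellant-system concept variants.
baseline = riscs(design_cost=1.0e5, operational_cost=2.0e5,
                 scenarios=[(1e-3, 5.0e7), (1e-4, 2.0e8)])
with_isolation_valve = riscs(design_cost=1.2e5, operational_cost=2.1e5,
                             scenarios=[(1e-3, 5.0e6), (1e-4, 2.0e7)])
print(baseline, with_isolation_valve)   # lower score = better cost/risk trade-off
```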
Failure Analysis in Conceptual Phase toward a Robust Design: Case Study in Monopropellant Propulsion System
As a system becomes more complex, the uncertainty in the operating conditions increases. In such a system, implementing a precise failure analysis in the early design stage is vital. However, there is a lack of applicable methodology showing how to implement failure analysis in the early design phase to achieve a robust design. The main purpose of this paper is to present a framework to design a complex engineered system resistant to various factors that may cause failures when the design process is in the conceptual phase and detailed system and component information is unavailable. Within this framework, we generate a population of feasible designs from a seed functional model, and simulate and classify failure scenarios. We also develop a design selection function to compare robustness scores for candidate designs and produce a preference ranking. We implement the proposed method on the design of an aerospace monopropellant propulsion system.
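A minimal sketch of the selection step only, under assumed severity classes and weights (not the paper's formulation): candidate designs are ranked by a robustness score aggregated over their classified failure-scenario outcomes.

```python
# Assumed severity weights for classified end states of simulated failure scenarios.
SEVERITY = {"nominal": 0.0, "degraded": 1.0, "lost": 10.0}

def robustness_score(scenario_outcomes):
    """Higher is better: penalise severe classified outcomes across all scenarios."""
    return -sum(SEVERITY[outcome] for outcome in scenario_outcomes)

# Hypothetical candidate designs generated from a seed functional model.
candidates = {
    "design_A": ["nominal", "degraded", "nominal", "lost"],
    "design_B": ["nominal", "nominal", "degraded", "degraded"],
}
ranking = sorted(candidates, key=lambda d: robustness_score(candidates[d]), reverse=True)
print(ranking)   # preference ranking, best first
```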