
    Simple trees in complex forests: Growing Take The Best by Approximate Bayesian Computation

    How can heuristic strategies emerge from smaller building blocks? We propose Approximate Bayesian Computation as a computational solution to this problem. As a first proof of concept, we demonstrate how a heuristic decision strategy such as Take The Best (TTB) can be learned from smaller, probabilistically updated building blocks. Based on a self-reinforcing sampling scheme, different building blocks are combined and, over time, tree-like non-compensatory heuristics emerge. This new algorithm, coined Approximately Bayesian Computed Take The Best (ABC-TTB), can recover a data set that was generated by TTB, leads to sensible inferences about cue importance and cue directions, can outperform traditional TTB, and allows one to trade off performance and computational effort explicitly.
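
    As a concrete reference point, here is a minimal sketch of the Take The Best heuristic that ABC-TTB is built to learn; the cue ordering, cue directions, and example data below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the Take The Best (TTB) heuristic; cue order, directions,
# and the example data are assumptions for illustration.

def take_the_best(cues_a, cues_b, validity_order, directions):
    """Compare two alternatives cue by cue, in order of cue validity.

    cues_a, cues_b : binary cue vectors for the two alternatives
    validity_order : cue indices sorted from most to least valid
    directions     : +1 if a cue's presence signals a higher criterion, -1 otherwise
    Returns 'A', 'B', or 'guess' based on the first discriminating cue.
    """
    for i in validity_order:
        diff = directions[i] * (cues_a[i] - cues_b[i])
        if diff > 0:
            return "A"
        if diff < 0:
            return "B"
    return "guess"  # no cue discriminates

# Hypothetical example: three cues, ordered by validity as [2, 0, 1]
print(take_the_best([1, 0, 1], [1, 1, 0], validity_order=[2, 0, 1],
                    directions=[+1, +1, +1]))  # -> 'A' (cue 2 discriminates first)
```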

    New spectral classification technique for X-ray sources: quantile analysis

    We present a new technique called "quantile analysis" to classify spectral properties of X-ray sources with limited statistics. Quantile analysis is superior to conventional approaches such as X-ray hardness ratios or X-ray color analysis for studying relatively faint sources, or for investigating a particular phase or state of a source in detail, where poor statistics do not allow spectral fitting with a model. Instead of working with predetermined energy bands, we determine the energy values that divide the detected photons into predetermined fractions of the total counts, such as the median (50%), terciles (33% & 67%), and quartiles (25% & 75%). We use these quantiles as indicators of the X-ray hardness or color of the source. We show that the median is an improved substitute for the conventional X-ray hardness ratio. The median and other quantiles form a phase space, similar to conventional X-ray color-color diagrams. The quantile-based phase space is more evenly sensitive across various spectral shapes than conventional color-color diagrams, and it is naturally arranged to properly represent the statistical similarity of various spectral shapes. We demonstrate the new technique in the 0.3-8 keV energy range using the Chandra ACIS-S detector response function and typical aperture photometry involving background subtraction. The technique can be applied in any energy band, provided the energy distribution of photons can be obtained.
    Comment: 11 pages, 9 figures, accepted for publication in ApJ
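
    The core computation is straightforward to sketch: rather than summing counts in fixed bands, find the energies below which given fractions of the detected photons fall. A minimal illustration, with simulated photon energies standing in for a real (background-subtracted) event list:

```python
# A minimal sketch of the quantile computation: find the energies below which
# 25%, 33%, 50%, 67%, and 75% of the detected photons fall. The photon list
# here is simulated and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
photon_energies_kev = rng.gamma(shape=2.0, scale=1.2, size=500)  # fake event energies
photon_energies_kev = photon_energies_kev[(photon_energies_kev > 0.3)
                                          & (photon_energies_kev < 8.0)]  # 0.3-8 keV cut

fractions = [0.25, 1/3, 0.50, 2/3, 0.75]          # quartiles, terciles, median
quantiles = np.quantile(photon_energies_kev, fractions)
for f, q in zip(fractions, quantiles):
    print(f"E_{100*f:.0f} = {q:.2f} keV")         # e.g. E_50 is the median energy
```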

    Progress toward a Soft X-ray Polarimeter

    We are developing instrumentation for a telescope design capable of measuring linear X-ray polarization over a broad band using conventional spectroscopic optics. Multilayer-coated mirrors are key to this approach, being used as Bragg reflectors at the Brewster angle. By laterally grading the multilayer mirrors and matching them to the dispersion of a spectrometer, one may take advantage of high multilayer reflectivities and achieve modulation factors over 50% across the entire 0.2-0.8 keV band. We present progress on laboratory work to demonstrate the capabilities of an existing laterally graded multilayer-coated mirror pair. We also present plans for a suborbital rocket experiment designed to detect a polarization level of 12-17% for an active galactic nucleus in the 0.1-1.0 keV band.
    Comment: 11 pages, 12 figures, to appear in the proceedings of the SPIE, volume 8861, on Optics for EUV, X-Ray, and Gamma-Ray Astronomy
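
    To make the design idea concrete: the first-order Bragg condition λ = 2d sin(θ) at the X-ray Brewster angle (close to 45° graze, since the refractive index is near unity) fixes the multilayer period needed at each photon energy, and lateral grading supplies the right period at each position along the dispersed spectrum. A back-of-the-envelope sketch with purely illustrative numbers:

```python
# Back-of-the-envelope sketch: at a ~45 deg graze angle, the first-order Bragg
# condition lambda = 2*d*sin(theta) gives the multilayer period d required at
# each photon energy. Numbers are illustrative, not the instrument's design values.
import numpy as np

HC_KEV_NM = 1.23984                          # h*c in keV*nm
theta = np.deg2rad(45.0)                     # approximate X-ray Brewster angle

for energy_kev in [0.2, 0.4, 0.6, 0.8]:      # spanning the 0.2-0.8 keV band quoted above
    wavelength_nm = HC_KEV_NM / energy_kev
    d_nm = wavelength_nm / (2.0 * np.sin(theta))  # required multilayer period
    print(f"{energy_kev:.1f} keV -> lambda = {wavelength_nm:.2f} nm, d = {d_nm:.2f} nm")
```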

    Better safe than sorry: Risky function exploitation through safe optimization

    Exploration-exploitation of functions, that is, learning and optimizing a mapping between inputs and expected outputs, is ubiquitous in many real-world situations. These situations sometimes require us to avoid certain outcomes at all costs, for example because they are poisonous, harmful, or otherwise dangerous. We test participants' behavior in scenarios in which they have to find the optimum of a function while avoiding outputs below a certain threshold. In two experiments, we find that Safe-Optimization, a Gaussian Process-based exploration-exploitation algorithm, describes participants' behavior well, and that participants seem to first assess whether a point is safe and then try to pick the optimal point from all such safe points. This means that their trade-off between exploration and exploitation can be seen as an intelligent, approximate, and homeostasis-driven strategy.
    Comment: 6 pages, submitted to the Cognitive Science Conference
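
    A minimal sketch of the Safe-Optimization idea as described above: fit a Gaussian Process to the observations so far, call a point safe when its lower confidence bound clears the threshold, and then choose the safe point with the highest upper confidence bound. The kernel, confidence width, and data below are assumptions for illustration:

```python
# Sketch of a safe-optimization step: rule out points whose GP lower confidence
# bound falls below the safety threshold, then maximize the upper confidence
# bound among the remaining safe points. All data and parameters are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X_obs = np.array([[0.1], [0.4], [0.8]])      # inputs tried so far
y_obs = np.array([0.3, 0.9, 0.5])            # outputs observed there
threshold, beta = 0.2, 2.0                   # safety threshold, confidence width

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3).fit(X_obs, y_obs)

X_cand = np.linspace(0, 1, 101).reshape(-1, 1)
mu, sd = gp.predict(X_cand, return_std=True)
safe = (mu - beta * sd) >= threshold         # first: which points are safe?
ucb = mu + beta * sd
next_x = X_cand[safe][np.argmax(ucb[safe])]  # then: best-looking safe point
print(f"next query: x = {next_x[0]:.2f}")
```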

    Isolation of three novel rat and mouse papillomaviruses and their genomic characterization.

    Despite growing knowledge about the biological diversity of papillomaviruses (PV), little is known about non-human PV in general and about mouse models of PV in particular. We cloned and sequenced the complete genomes of two novel PV types from the Norway rat (Rattus norvegicus; RnPV2) and the wood mouse (Apodemus sylvaticus; AsPV1), as well as a novel variant of the recently described MmuPV1 (originally designated MusPV) from a house mouse (Mus musculus; MmuPV1 variant). In addition, we conducted phylogenetic analyses using a systematically representative set of 79 PV types, including the novel sequences. As inferred from concatenated amino acid sequences of six proteins, the MmuPV1 variant and AsPV1 nested within the Beta+Xi-PV supertaxon as members of the Pi-PV. RnPV2 is a member of the Iota-PV, which occupies a phylogenetic position distant from the Pi-PV. The phylogenetic results support a complex scenario of PV diversification driven by different evolutionary forces, including co-divergence with hosts and adaptive radiation into new environments. The PV types isolated from mice and rats in particular form the basis for new animal models, which are valuable for studying PV-induced tumors and new treatment options.

    Where do hypotheses come from?

    Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks where the candidate hypotheses are explicitly available result in close-to-rational inference over the hypothesis space, whereas tasks requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes' rule. Specifically, in our account, hypotheses are generated stochastically from a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior. While this approximation converges to the true posterior in the limit of infinitely many samples, we assume a small number of samples, as the number of samples humans take is likely limited by time pressure and cognitive resource constraints. We show that this model recreates several well-documented experimental findings, such as anchoring and adjustment, subadditivity, superadditivity, the crowd within, the self-generation effect, the weak-evidence effect, and the dud-alternative effect. Additionally, in two experiments we confirm the model's prediction that superadditivity and subadditivity can be induced within the same paradigm by manipulating the unpacking and typicality of hypotheses.
    This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
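
    The sampling account lends itself to a compact sketch: probability judgments become sample frequencies under a small Monte Carlo budget. The posterior values and sample count below are illustrative assumptions:

```python
# Sketch of the sampling account: approximate a posterior over hypotheses with a
# small number of samples, so a probability judgment is a sample frequency.
# The posterior and sample budget are assumptions, not fitted model values.
import numpy as np

rng = np.random.default_rng(1)
hypotheses = ["h1", "h2", "h3", "h4"]
posterior = np.array([0.4, 0.3, 0.2, 0.1])   # true (normative) posterior

def judged_probability(target, n_samples=3):
    """Monte Carlo estimate of P(target): its frequency among a few samples."""
    samples = rng.choice(hypotheses, size=n_samples, p=posterior)
    return np.mean(samples == target)

# As n_samples grows the estimate converges to the true posterior;
# with n_samples = 3 judgments are noisy and systematically distorted.
print([judged_probability("h1") for _ in range(5)])
```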

    Surface heat budget in the Southern Ocean from 42 degrees S to the Antarctic marginal ice zone: Four atmospheric reanalyses versus icebreaker Aurora Australis measurements

    © The Author(s), 2019. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Yu, L., Jin, X., & Schulz, E. W. Surface heat budget in the Southern Ocean from 42 degrees S to the Antarctic marginal ice zone: Four atmospheric reanalyses versus icebreaker Aurora Australis measurements. Polar Research, 38, (2019): 3349, doi:10.33265/polar.v38.3349.
    Surface heat fluxes from four atmospheric reanalyses in the Southern Ocean are evaluated using air–sea measurements obtained from the Aurora Australis during off-winter seasons in 2010–12. The icebreaker tracked between Hobart, Tasmania (ca. 42°S), and the Antarctic continent, providing in situ benchmarks for the surface energy budget change in the Subantarctic Southern Ocean (58–42°S) and the eastern Antarctic marginal ice zone (MIZ, 68–58°S). We find that the reanalyses show a high level of agreement among themselves, but this agreement reflects a universal bias, not a "truth." Downward shortwave radiation (SW↓) is overestimated (warm biased) and downward longwave radiation (LW↓) is underestimated (cold biased), an indication that the cloud amount in all models is too low. The ocean surface in both regimes shows a heat gain from the atmosphere when averaged over the seven months (October–April). However, the ocean heat gain in the reanalyses is overestimated by 10–36 W m−2 (80–220%) in the MIZ but underestimated by 6–20 W m−2 (7–25%) in the Subantarctic. The biases in SW↓ and LW↓ cancel each other out in the MIZ, causing the surface heat budget to be dictated by the underestimation bias in sensible heat loss. These reanalysis biases meaningfully affect the surface energy budget in the Southern Ocean, shifting the timing of the seasonal transition from net heat gain to net heat loss at the surface and the relative strength of SW↓ in different regimes in summer, when the length-of-day effect can lead to increased SW↓ at high latitudes.
    The study is supported by the NOAA Climate Observation Division grant NA14OAR4320158 and NOAA Modeling, Analysis, Predictions, and Projections Program's Climate Reanalysis Task Force through grant no. NA13OAR4310106.
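
    For orientation, the surface heat budget being compared here is, schematically, absorbed shortwave plus net longwave minus the turbulent (sensible and latent) heat losses. A minimal sketch with purely illustrative numbers, not values from the paper or the reanalyses:

```python
# Sketch of the net surface heat budget: absorbed shortwave plus net longwave
# minus turbulent losses. All numbers are illustrative, not observed values.
def net_surface_heat_flux(sw_down, lw_down, lw_up, sensible_loss, latent_loss,
                          albedo=0.06):
    """Net heat flux into the ocean (W m^-2, positive = ocean heat gain)."""
    return sw_down * (1.0 - albedo) + lw_down - lw_up - sensible_loss - latent_loss

# Hypothetical summertime marginal-ice-zone values (W m^-2)
print(net_surface_heat_flux(sw_down=250.0, lw_down=280.0, lw_up=320.0,
                            sensible_loss=15.0, latent_loss=40.0))
```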

    Material Response Analysis of a Titan Entry Heatshield

    Accurate calculation of thermal protection material response is critical to vehicle design for missions to the Saturn moon Titan. In this study, Icarus, a three-dimensional, unstructured, finite-volume material response solver under active development at NASA Ames Research Center, is used to compute the in-depth material response of the Huygens spacecraft along its November 11 entry trajectory. The heatshield analyzed in this study consists of a five-layer stack-up of Phenolic Impregnated Carbon Ablator (PICA), aluminum honeycomb, adhesive, and face sheet materials. During planetary entry, the PICA outer layer is expected to undergo pyrolysis. A surface energy balance boundary condition that captures both temporal and spatial variation of surface properties during entry is used in the simulation.
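
    As orientation for what a surface energy balance boundary condition balances, here is a generic textbook sketch: convective and radiative heating in, reradiation and energy carried away with ablated mass out, with the residual conducted in-depth. This is not Icarus's actual formulation, and all numbers are illustrative:

```python
# Generic surface energy balance sketch (not Icarus's formulation): heating in,
# reradiation and ablation energy out, residual conducted into the material.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def conducted_heat(q_conv, q_rad, t_wall, emissivity=0.9, absorptivity=0.9,
                   mdot_ablation=0.0, h_ablation=0.0):
    """Heat conducted in-depth (W m^-2) once the surface terms are balanced."""
    reradiation = emissivity * SIGMA * t_wall**4
    ablation_sink = mdot_ablation * h_ablation   # energy leaving with ablated mass
    return q_conv + absorptivity * q_rad - reradiation - ablation_sink

# Hypothetical entry-like numbers: 40 W/cm^2 convective, 5 W/cm^2 radiative heating
print(conducted_heat(q_conv=4.0e5, q_rad=5.0e4, t_wall=1500.0))  # W m^-2
```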

    Towards a unifying theory of generalization

    How do humans generalize from observed to unobserved data? How does generalization support inference, prediction, and decision making? I propose that a large part of human generalization can be explained by a powerful mechanism of function learning. I put forward and assess Gaussian Process regression as a model of human function learning that can unify several psychological theories of generalization. Across 14 experiments and using extensive computational modeling, I show that this model generates testable predictions about human preferences over different levels of complexity, provides a window into compositional inductive biases, and, combined with an optimistic yet efficient sampling strategy, guides human decision making through complex spaces. Chapters 1 and 2 propose that, from a psychological and mathematical perspective, function learning and generalization are close kin. Chapter 3 derives and tests theoretical predictions about participants' preferences over functions of different complexity. Chapter 4 develops a compositional theory of generalization and extensively probes this theory using 8 experimental paradigms. In the second half of the thesis, I investigate how function learning guides decision making in complex tasks. In particular, Chapter 5 looks at how people search for rewards in various grid worlds where a spatial correlation of rewards provides a context supporting generalization and decision making. Chapter 6 gauges human behavior in contextual multi-armed bandit problems where a function maps features onto expected rewards. In both Chapter 5 and Chapter 6, I find that the vast majority of subjects are best predicted by a Gaussian Process function learning model combined with an upper confidence bound sampling strategy. Chapter 7 formally assesses the adaptiveness of human generalization in complex decision making tasks using mismatched Bayesian optimization simulations, and finds that the empirically observed phenomenon of undergeneralization may be a feature rather than a bug of human behavior. Finally, Chapter 8 summarizes the empirical and theoretical lessons learned and lays out a roadmap for future research on generalization.
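
    The workhorse model of the thesis, Gaussian Process regression paired with upper confidence bound (UCB) sampling, can be sketched in a few lines; the grid, kernel, and exploration weight below are illustrative assumptions:

```python
# Sketch of GP regression + UCB choice: generalize observed rewards to
# unobserved options via a smooth kernel, then pick the option with the
# highest mean-plus-uncertainty score. All data and parameters are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

options = np.arange(10).reshape(-1, 1)       # a 1-D "grid world" of 10 options
X_obs = np.array([[2], [7]])                 # options already tried
y_obs = np.array([0.4, 0.8])                 # rewards observed there

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-2).fit(X_obs, y_obs)
mu, sd = gp.predict(options, return_std=True)

beta = 1.0                                   # exploration bonus weight
choice = int(np.argmax(mu + beta * sd))      # UCB: value plus uncertainty bonus
print(f"GP-UCB chooses option {choice}")
```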