4,877 research outputs found

    A Neural Networks Committee for the Contextual Bandit Problem

    This paper presents a new contextual bandit algorithm, NeuralBandit, which does not require any stationarity assumption on contexts or rewards. Several neural networks are trained to model the value of rewards given the context. Two variants, based on a multi-expert approach, are proposed to choose the parameters of the multi-layer perceptrons online. The proposed algorithms are successfully tested on a large dataset with and without stationarity of rewards.
    Comment: 21st International Conference on Neural Information Processing
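    As a rough illustration of the idea, the sketch below trains one small multi-layer perceptron per arm to estimate the expected reward of a context and plays epsilon-greedily on the predictions. This is a minimal reading of the abstract, not the authors' NeuralBandit implementation; the network size, learning rate and exploration rate are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class ArmMLP:
    """One-hidden-layer perceptron estimating E[reward | context] for one arm."""
    def __init__(self, d, h=8, lr=0.05):
        self.W1 = rng.normal(0.0, 0.5, (h, d))
        self.b1 = np.zeros(h)
        self.w2 = rng.normal(0.0, 0.5, h)
        self.b2 = 0.0
        self.lr = lr

    def predict(self, x):
        self.z = np.tanh(self.W1 @ x + self.b1)   # hidden activations (cached)
        return float(self.w2 @ self.z + self.b2)

    def update(self, x, r):
        err = self.predict(x) - r                 # gradient of 0.5 * (pred - r)^2
        gz = err * self.w2 * (1.0 - self.z ** 2)  # backprop through tanh
        self.w2 -= self.lr * err * self.z
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(gz, x)
        self.b1 -= self.lr * gz

def neural_bandit(contexts, reward_fn, n_arms, eps=0.1):
    """Play each context: explore with probability eps, otherwise pick the arm
    whose network predicts the highest reward; train only the played arm."""
    nets = [ArmMLP(contexts.shape[1]) for _ in range(n_arms)]
    total = 0.0
    for x in contexts:
        if rng.random() < eps:
            a = int(rng.integers(n_arms))
        else:
            a = int(np.argmax([m.predict(x) for m in nets]))
        r = reward_fn(x, a)
        nets[a].update(x, r)
        total += r
    return total
```

    Because only the played arm's network is updated, exploration is what keeps the reward estimates of the other arms from going stale.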

    Current status of models of Jupiter's magnetosphere in the light of Pioneer data

    The salient features of the various models of Jupiter's magnetosphere are compared with each other and with the major findings of Pioneer 10 and 11. No single model explains all the major phenomena detected by the Pioneers. A unified model of Jupiter's magnetosphere is proposed.

    Learning Contextual Bandits in a Non-stationary Environment

    Multi-armed bandit algorithms have become a reference solution for handling the explore/exploit dilemma in recommender systems and in many other important real-world problems, such as display advertising. However, such algorithms usually assume a stationary reward distribution, which hardly holds in practice as users' preferences are dynamic. This inevitably leads to consistently suboptimal recommendations. In this paper, we consider the situation where the underlying reward distribution remains unchanged over (possibly short) epochs and shifts at unknown time instants. Accordingly, we propose a contextual bandit algorithm that detects possible changes of environment based on its reward estimation confidence and updates its arm selection strategy in response. A rigorous upper regret bound analysis of the proposed algorithm demonstrates its learning effectiveness in such a non-trivial environment. Extensive empirical evaluations on both synthetic and real-world recommendation datasets confirm its practical utility in a changing environment.
    Comment: 10 pages, 13 figures. To appear at the ACM Special Interest Group on Information Retrieval conference (SIGIR) 201
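    The detect-and-reset mechanism this abstract describes can be sketched generically: keep a long-run mean-reward estimate, compare it against a sliding-window mean, and declare a change when the window mean leaves a Hoeffding-style confidence band. This is a simplified illustration assuming rewards in [0, 1], not the paper's full algorithm; the window size and confidence level are invented.

```python
import math
from collections import deque

class ChangeDetector:
    """Flags a reward-distribution shift when the recent empirical mean
    leaves the confidence band around the long-run estimate."""
    def __init__(self, window=50, delta=0.05):
        self.recent = deque(maxlen=window)
        self.n, self.mean = 0, 0.0
        self.delta = delta

    def update(self, r):
        self.n += 1
        self.mean += (r - self.mean) / self.n   # running long-run mean
        self.recent.append(r)
        if len(self.recent) < self.recent.maxlen:
            return False                        # window not yet full
        recent_mean = sum(self.recent) / len(self.recent)
        # Hoeffding-style radius for rewards bounded in [0, 1]
        rad = math.sqrt(math.log(2 / self.delta) / (2 * len(self.recent)))
        if abs(recent_mean - self.mean) > rad:
            # Shift detected: discard the stale estimate and start over.
            self.n, self.mean = 0, 0.0
            self.recent.clear()
            return True
        return False
```

    On detection the estimator is reset, which is the crude analogue of the paper's strategy of rebuilding its arm-selection model after a change.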

    Revisiting the Core Ontology and Problem in Requirements Engineering

    In their seminal paper in the ACM Transactions on Software Engineering and Methodology, Zave and Jackson established a core ontology for Requirements Engineering (RE) and used it to formulate the "requirements problem", thereby defining what it means to successfully complete RE. Given that stakeholders of the system-to-be communicate the information needed to perform RE, we show that Zave and Jackson's ontology is incomplete: it does not cover all types of basic concerns that stakeholders communicate, including beliefs, desires, intentions, and attitudes. In response, we propose a core ontology that covers these concerns and is grounded in sound conceptual foundations resting on a foundational ontology. The new core ontology for RE leads to a new formulation of the requirements problem that extends Zave and Jackson's formulation. We thereby establish new standards for what minimum information should be represented in RE languages and new criteria for determining whether RE has been successfully completed.
    Comment: Appears in the proceedings of the 16th IEEE International Requirements Engineering Conference, 2008 (RE'08). Best paper award

    Diffusion Tensor Imaging: on the assessment of data quality - a preliminary bootstrap analysis

    In the field of nuclear magnetic resonance imaging, diffusion tensor imaging (DTI) has proven an important method for the characterisation of ultrastructural tissue properties. Yet various technical and biological sources of signal uncertainty may propagate into variables derived from diffusion-weighted images and thus compromise data validity and reliability. To obtain an objective quality rating of real raw data, we aimed at implementing the previously described bootstrap methodology (Efron, 1979) and investigating its sensitivity to a selection of extraneous influencing factors. We applied the bootstrap method to real DTI data volumes of six volunteers, which were varied by different acquisition conditions, smoothing and artificial noising. In addition, a clinical sample of 46 Multiple Sclerosis patients and 24 healthy controls was investigated. The response variables (RV) extracted from the histogram of the confidence intervals of fractional anisotropy were mean width, peak position and peak height. Added noise showed a significant effect once it exceeded about 130% of the original background noise. The application of an edge-preserving smoothing algorithm resulted in an inverse alteration of the RV. Subject motion was also clearly depicted, whereas its prevention by use of a vacuum device resulted in only a marginal improvement. We also observed a marked gender-specific effect in the sample of 24 healthy control subjects, the causes of which remained unclear. In contrast, the mere effect of a different signal intensity distribution due to illness (MS) did not alter the response variables.
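    The bootstrap idea underlying this quality rating can be illustrated on a generic statistic: resample the measurements with replacement, recompute the statistic on each resample, and read a percentile confidence interval off the resampled distribution; the interval width then serves as an uncertainty measure. A minimal sketch with hypothetical repeated measurements (the arrays and noise levels below are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(samples, stat=np.mean, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval (Efron, 1979) for a statistic."""
    samples = np.asarray(samples)
    # Resample indices with replacement, n_boot times.
    idx = rng.integers(0, len(samples), size=(n_boot, len(samples)))
    boot_stats = np.array([stat(samples[row]) for row in idx])
    lo, hi = np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# A wider interval flags a noisier, lower-quality measurement series.
clean = rng.normal(0.5, 0.01, 30)   # hypothetical repeated FA readings, low noise
noisy = rng.normal(0.5, 0.05, 30)   # same quantity, higher noise
```

    In the study's setting this is done voxel-wise on repeated diffusion-weighted acquisitions, and the histogram of the resulting interval widths is what yields the response variables.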

    Dynamic models in fMRI

    Most statistical methods for assessing activated voxels in fMRI experiments are based on correlation or regression analysis. In this context, the main assumptions are that the baseline can be described by a few known basis functions or variables and that the effect of the stimulus, i.e. the activation, stays constant over time. As these assumptions are in many cases neither necessary nor correct, a new dynamic approach that does not depend on them is presented. It allows for simultaneous nonparametric estimation of the baseline and of the time-varying effect of stimulation. This method of estimating the stimulus-related areas of the brain furthermore makes it possible to analyse the temporal and spatial development of activation within an fMRI experiment.

    Intensity Segmentation of the Human Brain with Tissue dependent Homogenization

    High-precision segmentation of the human cerebral cortex based on T1-weighted MRI is still a challenging task. When opting for an intensity-based approach, careful data processing is mandatory to overcome inaccuracies caused by noise, partial volume effects and systematic signal intensity variations imposed by the limited homogeneity of the acquisition hardware. We propose an intensity segmentation that is free from any shape prior and, for the first time, uses either grey matter (GM) or white matter (WM) based homogenization. This tissue dependency was introduced after an analysis of 60 high-resolution MRI datasets revealed appreciable differences in the axial bias field corrections depending on whether they are based on GM or WM. Homogenization starts with an axial bias correction, followed by a spatially irregular distortion correction and finally a noise reduction. The axial bias correction is constructed from partitions of a depth histogram. The irregular bias is modelled by Moody-Darken radial basis functions. Noise is eliminated by nonlinear edge-preserving and homogenizing filters. A critical point is the estimation of the training set for the irregular bias correction in the GM approach; owing to intensity edges between CSF (the cerebrospinal fluid surrounding the brain and within the ventricles), GM and WM, this estimate shows acceptable stability. This supervised approach yields high flexibility and precision for the segmentation of normal and pathologic brains. Its precision is demonstrated on the Montreal brain phantom, and real data applications exemplify the advantage of the GM-based approach over the usual WM homogenization, allowing improved cortex segmentation.

    Bandit Models of Human Behavior: Reward Processing in Mental Disorders

    Drawing inspiration from behavioral studies of human decision making, we propose a general parametric framework for the multi-armed bandit problem which extends the standard Thompson Sampling approach to incorporate reward processing biases associated with several neurological and psychiatric conditions, including Parkinson's and Alzheimer's diseases, attention-deficit/hyperactivity disorder (ADHD), addiction, and chronic pain. We demonstrate empirically that the proposed parametric approach can often outperform the baseline Thompson Sampling on a variety of datasets. Moreover, from the behavioral modeling perspective, our parametric framework can be viewed as a first step towards a unifying computational model capturing reward processing abnormalities across multiple mental conditions.
    Comment: Conference on Artificial General Intelligence, AGI-1
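    One minimal way to read "Thompson Sampling with reward processing biases" is to weight posterior updates for gains and losses asymmetrically, so that wins and losses do not count equally; setting both weights to 1 recovers standard Bernoulli Thompson Sampling. This sketch is illustrative only, not the paper's actual parametrisation, and the parameter names `w_pos` and `w_neg` are invented here.

```python
import random

def biased_thompson(n_arms, true_probs, horizon, w_pos=1.0, w_neg=1.0, seed=0):
    """Bernoulli Thompson Sampling with asymmetric reward weighting:
    w_pos scales learning from rewards, w_neg from losses.
    (w_pos, w_neg) = (1, 1) is standard Thompson Sampling."""
    rng = random.Random(seed)
    alpha = [1.0] * n_arms   # Beta posterior pseudo-counts of successes
    beta = [1.0] * n_arms    # ...and of failures
    total = 0
    for _ in range(horizon):
        # Sample a plausible success rate per arm, play the best-looking one.
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        a = max(range(n_arms), key=samples.__getitem__)
        r = 1 if rng.random() < true_probs[a] else 0
        if r:
            alpha[a] += w_pos   # biased update on a win
        else:
            beta[a] += w_neg    # biased update on a loss
        total += r
    return total
```

    For instance, w_pos > 1 with w_neg < 1 makes the agent overweight rewards relative to punishments, one crude way to mimic a reward-processing bias.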

    Analysis and design of a flat central finned-tube radiator

    A computer program based on a fixed conductance parameter yields a minimum-weight design. A second program employs a variable conductance parameter and a variable ratio of fin length to tube outside radius, and is used for radiator designs with geometric limitations. The major outputs of the two programs are given.