
    A Stochastic Search on the Line-Based Solution to Discretized Estimation

    Recently, Oommen and Rueda [11] presented a strategy by which the parameters of a binomial/multinomial distribution can be estimated when the underlying distribution is non-stationary. The method has been referred to as the Stochastic Learning Weak Estimator (SLWE), and is based on the principles of continuous stochastic Learning Automata (LA). In this paper, we consider a new family of stochastic discretized weak estimators pertinent to tracking time-varying binomial distributions. As opposed to the SLWE, our proposed estimator is discretized, i.e., the estimate can assume only a finite number of values. It is well known in the field of LA that discretized schemes achieve faster convergence than their continuous counterparts. By virtue of discretization, our estimator realizes extremely fast adjustments of the running estimates by jumps, and is thus able to robustly, and very quickly, track changes in the parameters of the distribution after a switch has occurred in the environment. The design principle of our strategy is based on a solution, pioneered by Oommen [7], for the Stochastic Search on the Line (SSL) problem. The SSL solution proposed in [7] assumes the existence of an Oracle which informs the LA whether to go “right” or “left”. In our application domain, in order to achieve efficient estimation, we have to first infer (or rather simulate) such an Oracle. To overcome this difficulty, we intelligently construct an “Artificial Oracle” that suggests whether we are to increase the current estimate or to decrease it. The paper briefly reports conclusive experimental results that demonstrate the ability of the proposed estimator to cope with non-stationary environments with a high adaptation rate, and with an accuracy that depends on its resolution. The results we present are, to the best of our knowledge, the first reported results that resolve the problem of discretized weak estimation using an SSL-based solution.
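    A minimal Python sketch of the flavour of such a scheme (not the authors' exact update rule): the estimate is confined to the grid {0, 1/N, ..., 1}, an artificial oracle suggests moving right on observing a 1 and left on observing a 0, and the suggested jump is accepted with a probability chosen so that the expected drift per step is (p - p_hat)/N, making the estimator unbiased in steady state. The converged accuracy depends on the resolution N, as the abstract describes.

        import numpy as np

        def track_binomial(xs, N=32, seed=0):
            """Discretized weak estimator (sketch). The estimate lives on the
            grid {0, 1/N, ..., 1}; an 'artificial oracle' suggests a jump right
            on x=1 and left on x=0, accepted with a probability that makes the
            expected drift per step equal to (p - p_hat)/N."""
            rng = np.random.default_rng(seed)
            step = 1.0 / N
            p_hat, trace = 0.5, []
            for x in xs:
                if x == 1:
                    if rng.random() < 1.0 - p_hat:   # oracle: move right
                        p_hat = min(1.0, p_hat + step)
                else:
                    if rng.random() < p_hat:         # oracle: move left
                        p_hat = max(0.0, p_hat - step)
                trace.append(p_hat)
            return np.array(trace)

        # Non-stationary source: the parameter switches from 0.8 to 0.2.
        rng = np.random.default_rng(1)
        xs = np.concatenate([rng.random(500) < 0.8,
                             rng.random(500) < 0.2]).astype(int)
        est = track_binomial(xs)
        print(est[499], est[999])  # near 0.8 before the switch, near 0.2 after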

    Modeling a teacher in a tutorial-like system using Learning Automata

    The goal of this paper is to present a novel approach to modeling the behavior of a Teacher in a Tutorial-like system. In this model, the Teacher is capable of presenting teaching material from a Socratic-type Domain model via multiple-choice questions. Since this knowledge is stored in the Domain model in chapters with different levels of complexity, the Teacher is able to present learning material of varying degrees of difficulty to the Students. In our model, we propose that the Teacher will be able to assist the Students to learn the more difficult material. In order to achieve this, he provides them with hints that are relative to the difficulty of the learning material presented. This enables the Students to cope with the process of handling more complex knowledge, and to learn it appropriately. To our knowledge, the findings of this study are novel to the field of intelligent adaptation using Learning Automata (LA). The novelty lies in the fact that the learning system has a strategy by which it can deal with increasingly complex/difficult Environments (or domains from which the learning has to be achieved). In our approach, the convergence of the Student models (represented by LA) is driven not only by the response of the Environment (Teacher), but also by the hints that the latter provides. Our proposed Teacher model has been tested against different benchmark Environments, and the results of these simulations have demonstrated the salient aspects of our model. The main conclusion is that Normal and Below-Normal learners benefited significantly from the hints provided by the Teacher, while the benefits to (brilliant) Fast learners were marginal. This seems to be in line with our subjective understanding of the behavior of real-life Students.
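    As an illustration of how hints might enter an LA update, here is a hedged Python sketch: a Student is a linear reward-inaction (L_RI) automaton choosing among multiple-choice answers, and a hint is modelled, purely as an assumption, as a reduction of the penalty probability of the best answer. The paper's actual Teacher model is more elaborate; this only shows the mechanism by which hints can accelerate convergence.

        import numpy as np

        def simulate_student(penalties, hint_boost=0.0, theta=0.05,
                             steps=4000, seed=0):
            """L_RI Student automaton (sketch) choosing among multiple-choice
            answers. penalties[i] is the chance the Teacher marks answer i
            wrong; a hint is modelled (assumption) as lowering the penalty of
            the best answer by hint_boost."""
            rng = np.random.default_rng(seed)
            c = np.array(penalties, dtype=float)
            best = int(np.argmin(c))
            c[best] = max(0.0, c[best] - hint_boost)
            p = np.full(len(c), 1.0 / len(c))        # uniform initial beliefs
            for _ in range(steps):
                a = rng.choice(len(c), p=p)
                if rng.random() >= c[a]:             # reward: reinforce answer a
                    p = (1.0 - theta) * p
                    p[a] += theta
                # penalty: no update (reward-inaction)
            return p

        print(simulate_student([0.4, 0.2, 0.6, 0.7]))                   # no hints
        print(simulate_student([0.4, 0.2, 0.6, 0.7], hint_boost=0.15))  # hints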

    A Learning Automata Based Solution to Service Selection in Stochastic Environments

    With the abundance of services available in today’s world, identifying those of high quality is becoming increasingly difficult. Reputation systems can offer generic recommendations by aggregating user-provided opinions about service quality; however, they are prone to ballot stuffing and badmouthing. In general, unfair ratings may degrade the trustworthiness of reputation systems, and changes in service quality over time render previous ratings unreliable. In this paper, we provide a novel solution to the above problems based on Learning Automata (LA), which can learn the optimal action when operating in unknown stochastic environments. Furthermore, they combine rapid and accurate convergence with low computational complexity. In addition to its computational simplicity, unlike most reported approaches, our scheme does not require prior knowledge of the degree of any of the above-mentioned problems with reputation systems. Instead, it gradually learns which users provide fair ratings and which users provide unfair ratings, even when users unintentionally make mistakes. Comprehensive empirical results show that our LA-based scheme efficiently handles any degree of unfair ratings (as long as ratings are binary). Furthermore, if the quality of services and/or the trustworthiness of users change, our scheme is able to robustly track such changes over time. Finally, the scheme is ideal for decentralized processing. Accordingly, we believe that our LA-based scheme forms a promising basis for improving the performance of reputation systems in general.
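    A toy Python sketch of LA-based service selection under unfair ratings (illustrative only; the paper's scheme additionally learns per-user trustworthiness): an L_RI automaton picks a service and receives a binary rating that an unfair rater inverts. With inversion probability f, the effective reward probability for service s becomes f + (1 - 2f) * q_s, so the ranking of services is preserved whenever f < 0.5 and the automaton still converges to the best service.

        import numpy as np

        def select_service(qualities, frac_unfair=0.4, theta=0.02,
                           steps=20000, seed=0):
            """L_RI automaton (sketch) selecting among services from binary
            ratings. Each rating is an honest experience of the chosen
            service, inverted with probability frac_unfair by an unfair
            rater; all numbers are illustrative."""
            rng = np.random.default_rng(seed)
            p = np.full(len(qualities), 1.0 / len(qualities))
            for _ in range(steps):
                s = rng.choice(len(qualities), p=p)
                rating = rng.random() < qualities[s]   # honest experience
                if rng.random() < frac_unfair:         # unfair rater inverts
                    rating = not rating
                if rating:                             # reward-inaction update
                    p = (1.0 - theta) * p
                    p[s] += theta
            return p

        print(select_service([0.6, 0.8, 0.5, 0.7]))  # mass concentrates on 0.8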

    Posterior Reconstruction Before Anastomosis Improves the Anastomosis Time During Robot-Assisted Radical Prostatectomy

    Posterior reconstruction prior to anastomosis decreased anastomotic time for robotic surgeons in training.

    Identifying unreliable sensors without a knowledge of the ground truth in deceptive environments

    This paper deals with the extremely fascinating area of “fusing” the outputs of sensors without any knowledge of the ground truth. In an earlier paper, the present authors pioneered a solution by mapping the problem onto the fascinating paradox of trying to identify stochastic liars without any additional information about the truth. Even though that work was significant, it was constrained by the model that we are living in a world where “the truth prevails over lying”. Couched in the terminology of Learning Automata (LA), this corresponds to the Environment (since the Environment is treated as an entity in its own right, we choose to capitalize it, rather than refer to it as an “environment”, i.e., as an abstract concept) being “Stochastically Informative”. However, as explained in the paper, solving the problem under the condition that the Environment is “Stochastically Deceptive”, as opposed to informative, is far from trivial. In this paper, we provide a solution to the problem where the Environment is deceptive (we are not aware of any other solution to this problem within this setting, and so we believe that our solution is both pioneering and novel), i.e., when we are living in a world where “lying prevails over the truth”.
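    To make the informative-versus-deceptive distinction concrete, here is a toy Python sketch (not the authors' LA-based algorithm): sensors split into two internally consistent camps by pairwise agreement, and without ground truth the camps are symmetric, so only the modelling assumption breaks the tie. In an informative world the larger camp would be labelled honest, whereas under the deceptive assumption, where lying prevails, the larger camp is labelled the liars.

        import numpy as np

        def split_camps(readings):
            """Split sensors into two camps by agreement with sensor 0 on a
            stream of binary readings. Toy illustration of the setting only;
            the paper's LA-based scheme is different."""
            agreement = (readings == readings[0]).mean(axis=1)
            return agreement > 0.5                   # camp containing sensor 0

        rng = np.random.default_rng(0)
        truth = rng.random(2000) < 0.5               # hidden ground truth
        acc = np.array([0.8, 0.8, 0.8, 0.2, 0.2, 0.2, 0.2])  # 4 of 7 sensors lie
        readings = np.array([np.where(rng.random(truth.size) < a, truth, ~truth)
                             for a in acc])
        camp = split_camps(readings)
        # Deceptive assumption ("lying prevails"): the larger camp is the liars.
        liars = camp if camp.sum() > (~camp).sum() else ~camp
        print(np.nonzero(liars)[0])                  # indices 3..6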

    Learning automaton based on-line discovery and tracking of spatio-temporal event patterns

    Discovering and tracking spatio-temporal patterns in noisy sequences of events is a difficult task that has become increasingly pertinent due to recent advances in ubiquitous computing, such as community-based social networking applications. The core activities for applications of this class include the sharing and notification of events, and the importance and usefulness of these functionalities increase as event-sharing expands into larger areas of one’s life. Ironically…

    Predicting disease risk areas through co-production of spatial models: the example of Kyasanur Forest Disease in India’s forest landscapes

    Zoonotic diseases affect resource-poor tropical communities disproportionately, and are linked to human use and modification of ecosystems. Disentangling the socio-ecological mechanisms by which ecosystem change precipitates impacts of pathogens is critical for predicting disease risk and designing effective intervention strategies. Despite the global “One Health” initiative, predictive models for tropical zoonotic diseases often focus on narrow ranges of risk factors and are rarely scaled to intervention programs and ecosystem use. This study uses a participatory, co-production approach to address this disconnect between science, policy and implementation, by developing more informative disease models for a fatal tick-borne viral haemorrhagic disease, Kyasanur Forest Disease (KFD), that is spreading across degraded forest ecosystems in India. We integrated knowledge across disciplines to identify key risk factors and needs with actors and beneficiaries across the relevant policy sectors, to understand disease patterns and develop decision support tools. Human case locations (2014–2018) and spatial machine learning quantified the relative role of risk factors, including forest cover and loss, host densities and public health access, in driving landscape-scale disease patterns in a long-affected district (Shivamogga, Karnataka State). Models combining forest metrics, livestock densities and elevation accurately predicted spatial patterns in human KFD cases (2014–2018). Consistent with suggestions that KFD is an “ecotonal” disease, landscapes at higher risk for human KFD contained diverse forest-plantation mosaics with high coverage of moist evergreen forest and plantation, high indigenous cattle density, and low coverage of dry deciduous forest. Models predicted new hotspots of outbreaks in 2019, indicating their value for spatial targeting of intervention. Co-production was vital for: gathering outbreak data that reflected locations of exposure in the landscape; better understanding contextual socio-ecological risk factors; and tailoring the spatial grain and outputs to the scale of forest use and of public health interventions. We argue that this inter-disciplinary approach to risk prediction is applicable across zoonotic diseases in tropical settings.
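    A hedged Python sketch of the modelling step, using scikit-learn on synthetic data (the covariate names follow the abstract, but the data, effect directions and model settings are illustrative and not the study's):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 3000
        # Covariates per landscape cell (all synthetic, scaled to [0, 1]):
        # evergreen forest, plantation, dry deciduous forest, cattle, elevation.
        X = rng.random((n, 5))
        # Toy risk rule echoing the reported pattern: evergreen/plantation
        # mosaics with high cattle density raise risk; dry deciduous lowers it.
        logit = 3 * X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] + 2 * X[:, 3] - 3
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(Xtr, ytr)
        print("held-out accuracy:", rf.score(Xte, yte))
        print("feature importances:", rf.feature_importances_.round(2))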

    Partitioning of Minimotifs Based on Function with Improved Prediction Accuracy

    Background: Minimotifs are short contiguous peptide sequences in proteins that are known to have a function in at least one other protein. One of the principal limitations in minimotif prediction is that false positives limit the usefulness of this approach. As a step toward resolving this problem, we have built, implemented, and tested a new data-driven algorithm that reduces false-positive predictions. Methodology/Principal Findings: Certain domains and minimotifs are known to be strongly associated with a known cellular process or molecular function. Therefore, we hypothesized that by restricting minimotif predictions to those where the minimotif-containing protein and target protein have a related cellular or molecular function, the prediction is more likely to be accurate. This filter was implemented in Minimotif Miner using function annotations from the Gene Ontology. We have also combined two filters that are based on entirely different principles, and this combined filter has a better predictability than the individual components. Conclusions/Significance: Testing these functional filters on known and random minimotifs has revealed that they are capable of separating true motifs from false positives. In particular, for the cellular function filter, the percentage of known minimotifs that are not removed by the filter is approximately 4.6 times that of random minimotifs. For the molecular function filter this ratio is approximately 2.9. These results, together with the comparison with the published frequency score filter, strongly suggest that…
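    A minimal Python sketch of the functional-filter idea (protein names and GO terms are made up for illustration): a predicted minimotif pair is retained only if the source and target proteins share at least one Gene Ontology annotation.

        # Toy GO-based functional filter: keep a predicted minimotif pair
        # (source protein, target protein) only when the two proteins share
        # at least one GO annotation. Proteins and terms are made up.
        go = {
            "P1": {"GO:0006915", "GO:0008283"},
            "P2": {"GO:0006915"},
            "P3": {"GO:0007049"},
        }
        predictions = [("P1", "P2"), ("P1", "P3"), ("P2", "P3")]

        filtered = [(a, b) for a, b in predictions if go[a] & go[b]]
        print(filtered)  # [('P1', 'P2')] -- the only pair with shared function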

    Standard Anatomical and Visual Space for the Mouse Retina: Computational Reconstruction and Transformation of Flattened Retinae with the Retistruct Package

    The concept of topographic mapping is central to the understanding of the visual system at many levels, from the developmental to the computational. It is important to be able to relate different coordinate systems, e.g. maps of the visual field and maps of the retina. Retinal maps are frequently based on flat-mount preparations. These use dissection and relaxing cuts to render the quasi-spherical retina into a 2D preparation. The variable nature of relaxing cuts and associated tears limits quantitative cross-animal comparisons. We present an algorithm, "Retistruct," that reconstructs retinal flat-mounts by mapping them into a standard, spherical retinal space. This is achieved by: stitching the marked-up cuts of the flat-mount outline; dividing the stitched outline into a mesh whose vertices are then mapped onto a curtailed sphere; and finally moving the vertices so as to minimise a physically-inspired deformation energy function. Our validation studies indicate that the algorithm can estimate the position of a point on the intact adult retina to within 8° of arc (3.6% of the nasotemporal axis). The coordinates in reconstructed retinae can be transformed to visuotopic coordinates. Retistruct is used to investigate the organisation of the adult mouse visual system. We orient the retina relative to the nictitating membrane and compare this to eye muscle insertions. To align the retinotopic and visuotopic coordinate systems in the mouse, we utilised the geometry of binocular vision. In standard retinal space, the composite decussation line for the uncrossed retinal projection is located 64° away from the retinal pole. Projecting anatomically defined uncrossed retinal projections into visual space gives binocular congruence if the optical axis of the mouse eye is oriented at 64° azimuth and 22° elevation, in concordance with previous results. Moreover, using these coordinates, the dorsoventral boundary for S-opsin expressing cones closely matches the horizontal meridian.
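    A simplified Python sketch of the final reconstruction step, assuming a toy mesh and scipy (the energy below is a stand-in for Retistruct's physically-inspired deformation energy, not its actual implementation): vertices parameterised by spherical coordinates are relaxed so that great-circle edge lengths match the flat-mount edge lengths.

        import numpy as np
        from scipy.optimize import minimize

        def sph_to_cart(theta, phi):
            """theta = polar angle from the retinal pole, phi = azimuth."""
            return np.stack([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)], axis=-1)

        def deformation_energy(params, edges, flat_len):
            """Squared mismatch between great-circle edge lengths on the unit
            sphere and the flat-mount edge lengths (a simplified stand-in for
            Retistruct's energy)."""
            n = params.size // 2
            xyz = sph_to_cart(params[:n], params[n:])
            i, j = edges[:, 0], edges[:, 1]
            cosang = np.clip((xyz[i] * xyz[j]).sum(axis=1), -1.0, 1.0)
            return np.sum((np.arccos(cosang) - flat_len) ** 2)

        # Toy mesh: a square of four vertices, its four sides and one diagonal.
        edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])
        flat_len = np.array([0.5, 0.5, 0.5, 0.5, 0.5 * np.sqrt(2)])

        # Rough initial guess on a spherical cap, then relax the vertices.
        x0 = np.concatenate([np.full(4, 0.4), np.array([0.0, 1.5, 3.0, 4.5])])
        res = minimize(deformation_energy, x0, args=(edges, flat_len))
        print("residual energy:", res.fun)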
