
    Interactive retrieval of video using pre-computed shot-shot similarities

    A probabilistic framework for content-based interactive video retrieval is described. The indexing of video fragments is derived from the probability of a user's positive judgment about the key-frames of video shots. Initial estimates of these probabilities are obtained from low-level feature representations. Only statistically significant estimates are retained; the rest are replaced by an appropriate constant, allowing efficient access at search time without loss of search quality and leading to improvements in most experiments. Over time, the probability estimates are updated from the relevance judgments of users performing searches, resulting in further substantial increases in mean average precision.
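    A minimal sketch of the update scheme the abstract describes may help: per-shot probability estimates that fall back to a constant until enough judgments accumulate, then track observed user feedback. The class name, the fallback constant, and the significance threshold below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (assumed names/values, not the paper's code): a
# per-shot relevance probability that is replaced by a constant until
# it is backed by enough user judgments to be statistically meaningful.
from dataclasses import dataclass

DEFAULT_PROB = 0.1        # constant substituted for insignificant estimates
SIGNIFICANCE_COUNT = 20   # judgments required before trusting the estimate

@dataclass
class ShotIndexEntry:
    positives: int = 0    # positive relevance judgments so far
    judgments: int = 0    # total judgments so far

    def positive_probability(self) -> float:
        """P(user judges this shot's key-frame positively)."""
        if self.judgments < SIGNIFICANCE_COUNT:
            return DEFAULT_PROB  # keeps the index sparse and fast to access
        return self.positives / self.judgments

    def record_judgment(self, positive: bool) -> None:
        """Fold one searcher's relevance judgment into the estimate."""
        self.judgments += 1
        if positive:
            self.positives += 1
```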

    Controlling Fairness and Bias in Dynamic Learning-to-Rank

    Rankings are the primary interface through which many online platforms match users to items (e.g. news, products, music, video). In these two-sided markets, not only do the users draw utility from the rankings, but the rankings also determine the utility (e.g. exposure, revenue) for the item providers (e.g. publishers, sellers, artists, studios). It has already been noted that myopically optimizing utility to the users, as done by virtually all learning-to-rank algorithms, can be unfair to the item providers. We therefore present a learning-to-rank approach for explicitly enforcing merit-based fairness guarantees to groups of items (e.g. articles by the same publisher, tracks by the same artist). In particular, we propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data. The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility, dynamically adapting both as more data becomes available. In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
    Comment: First two authors contributed equally. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2020.
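    The controller view lends itself to a short sketch: rank by estimated relevance plus a correction proportional to how far each group's accumulated exposure lags behind a merit-proportional target. The gain LAMBDA and the exact form of the error term are assumptions for illustration; the paper's controller and estimators are more elaborate.

```python
# Illustrative sketch (LAMBDA and the error term are assumptions): score
# items by estimated relevance plus a boost for items whose group has
# received less exposure than its merit would warrant.
import numpy as np

LAMBDA = 0.01  # controller gain (assumed)

def fairness_controlled_scores(relevance, exposure, merit, groups):
    """Per-item ranking scores; sort descending to get the next ranking.

    relevance : (n_items,) unbiased relevance estimates
    exposure  : (n_groups,) exposure accumulated by each group so far
    merit     : (n_groups,) average merit of each group
    groups    : (n_items,) group index of each item
    """
    # Exposure each group would have received if allocated in
    # proportion to merit (the amortized fairness target).
    target = merit / merit.sum() * exposure.sum()
    error = target - exposure  # positive where a group is under-exposed
    return relevance + LAMBDA * error[groups]
```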

    Value and prediction error in medial frontal cortex: integrating the single-unit and systems levels of analysis

    The role of the anterior cingulate cortex (ACC) in cognition has been extensively investigated with several techniques, including single-unit recordings in rodents and monkeys and EEG and fMRI in humans. This has generated a rich set of data and points of view. Important theoretical functions proposed for ACC are value estimation, error detection, error-likelihood estimation, conflict monitoring, and estimation of reward volatility. A unified view is lacking at this time, however. Here we propose that online value estimation could be the key function underlying these diverse data. This is instantiated in the reward value and prediction model (RVPM). The model contains units coding for the value of cues (stimuli or actions) and units coding for the differences between such values and the actual reward (prediction errors). We exposed the model to typical experimental paradigms from single-unit, EEG, and fMRI research to compare its overall behavior with the data from these studies. The model reproduced the ACC behavior of previous single-unit, EEG, and fMRI studies on reward processing, error processing, conflict monitoring, error-likelihood estimation, and volatility estimation, unifying the interpretations of the role performed by the ACC in some aspects of cognition.
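    The core computation of value and prediction-error units can be sketched as a delta-rule update, under the assumption (ours, for illustration) that positive and negative prediction errors are carried by separate rectified units, as the verbal description suggests.

```python
# Illustrative sketch (learning rate and exact form assumed): a value
# unit predicts reward for a cue; two rectified prediction-error units
# code better-than-expected and worse-than-expected outcomes, and the
# value is updated by their difference.
ALPHA = 0.1  # learning rate (assumed)

def rvpm_trial(value, reward):
    """One trial: return the updated value and both prediction errors."""
    pe_pos = max(reward - value, 0.0)   # outcome better than predicted
    pe_neg = max(value - reward, 0.0)   # outcome worse than predicted
    value += ALPHA * (pe_pos - pe_neg)  # delta-rule value update
    return value, pe_pos, pe_neg

# Example: a cue rewarded on 70% of trials converges toward value ~0.7.
import random
v = 0.0
for _ in range(2000):
    v, _, _ = rvpm_trial(v, 1.0 if random.random() < 0.7 else 0.0)
```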

    FTT:Power: A global model of the power sector with induced technological change and natural resource depletion

    This work introduces a model of Future Technology Transformations for the power sector (FTT:Power), a representation of global power systems based on market competition, induced technological change (ITC) and natural resource use and depletion. It is the first component of a family of sectoral bottom-up models of future technology transformations, designed to be integrated into the global macroeconometric model E3MG. ITC occurs as a result of technological learning as given by cumulative investment and leads to highly nonlinear, irreversible and path-dependent technological transitions. The model makes use of a dynamic coupled set of logistic differential equations. As opposed to traditional bottom-up energy models based on systems optimisation, logistic equations offer an appropriate treatment of the times and rates of change involved in sectoral technology transformations. Resource use and depletion are represented by local cost-supply curves, which give rise to different regional energy landscapes. The model is explored using two simple scenarios: a baseline and a mitigation case where the price of carbon is gradually increased. While a constant price of carbon leads to a stagnant system, mitigation produces successive technology transitions leading towards the gradual decarbonisation of the global power sector.
    This work was supported by the Three Guineas Trust. Submitted for publication to Energy Policy.
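    A hedged sketch of the coupled logistic share dynamics may clarify the mechanism: each technology's market share grows at other technologies' expense in proportion to both shares and to cost-based investor preferences. The preference matrix F and substitution rates A below are illustrative placeholders, not FTT:Power's calibrated quantities.

```python
# Illustrative sketch (A and F are placeholders, not FTT:Power's
# calibration): coupled logistic dynamics in which share flows from
# technology j to i in proportion to both shares, the substitution
# rate, and investor preference.
import numpy as np

def share_dynamics(S, A, F):
    """dS/dt for technology market shares S (non-negative, sums to 1).

    A[i, j] : rate at which capacity of j can be replaced by i
    F[i, j] : fraction of investors preferring i over j (cost-based)
    """
    dS = np.zeros_like(S)
    n = len(S)
    for i in range(n):
        for j in range(n):
            # Net flow between i and j; the S_i * S_j coupling yields
            # the nonlinear, path-dependent logistic transitions.
            dS[i] += S[i] * S[j] * (A[i, j] * F[i, j] - A[j, i] * F[j, i])
    return dS

# Forward-Euler time stepping of dS/dt then traces a technology transition.
```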

    The Neural Particle Filter

    The robust estimation of dynamically changing features, such as the position of prey, is one of the hallmarks of perception. On an abstract, algorithmic level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing signals based on the history of observations, provides a mathematical framework for dynamic perception in real time. Since the general, nonlinear filtering problem is analytically intractable, particle filters are considered among the most powerful approaches to approximating the solution numerically. Yet these algorithms predominantly rely on importance weights, and it thus remains an unresolved question how the brain could implement such an inference strategy with a neuronal population. Here, we propose the Neural Particle Filter (NPF), a weight-less particle filter that can be interpreted as the neuronal dynamics of a recurrently connected neural network that receives feed-forward input from sensory neurons and represents the posterior probability distribution in terms of samples. Specifically, this algorithm bridges the gap between the computational task of online state estimation and an implementation that allows networks of neurons in the brain to perform nonlinear Bayesian filtering. The model captures not only the properties of temporal and multisensory integration according to Bayesian statistics, but also allows online learning with a maximum likelihood approach. With an example from multisensory integration, we demonstrate that the numerical performance of the model is adequate to account for both filtering and identification problems. Due to the weight-less approach, our algorithm alleviates the 'curse of dimensionality' and thus outperforms conventional, weighted particle filters in higher dimensions for a limited number of particles.
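    The weight-less update can be sketched as follows: equally weighted particles follow the prior dynamics plus a feedback term that nudges each particle toward the current observation, removing the need for importance weights. The drift f, observation function g, gain W, and noise scale below are assumptions for illustration, not the paper's exact equations.

```python
# Illustrative sketch (f, g, W, and the noise scale are assumptions):
# a weight-less particle update in which equally weighted samples are
# driven by the prior dynamics plus an observation-feedback term, so no
# importance weights or resampling are required.
import numpy as np

def npf_step(particles, y, f, g, W, dt, sigma):
    """Advance (n_particles, dim) posterior samples by one time step."""
    drift = f(particles)            # prior (hidden-state) dynamics
    innovation = y - g(particles)   # mismatch with the observation
    noise = sigma * np.sqrt(dt) * np.random.randn(*particles.shape)
    # The feedback term plays the role importance weights play in a
    # conventional particle filter: it pulls particles toward states
    # consistent with the current observation.
    return particles + (drift + innovation @ W.T) * dt + noise
```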