
    End-to-End Learning of Video Super-Resolution with Motion Compensation

    Learning approaches have shown great success in the task of super-resolving an image given a low-resolution input. Video super-resolution aims to additionally exploit the information from multiple images. Typically, the images are related via optical flow and consecutive image warping. In this paper, we provide an end-to-end video super-resolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture. We analyze the usage of optical flow for video super-resolution and find that common off-the-shelf image warping does not allow video super-resolution to benefit much from optical flow. We instead propose an operation for motion compensation that performs warping from low to high resolution directly. We show that with this network configuration, video super-resolution can benefit from optical flow, and we obtain state-of-the-art results on the popular test sets. We also show that the processing of whole images rather than independent patches is responsible for a large increase in accuracy. Comment: Accepted to GCPR201
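    The core idea of warping directly from low to high resolution can be illustrated with a minimal PyTorch sketch (PyTorch, the function name, and the coordinate conventions are illustrative assumptions, not the paper's implementation): the low-resolution flow is upsampled to the high-resolution grid, and the low-resolution frame is then sampled at the displaced high-resolution positions in a single bilinear step.

```python
import torch
import torch.nn.functional as F

def warp_lr_to_hr(lr_frame, lr_flow, scale):
    """Hypothetical sketch: motion-compensate a low-resolution frame
    directly onto the high-resolution grid, in the spirit of the joint
    warping-and-upsampling operation described in the abstract above."""
    b, c, h, w = lr_frame.shape
    H, W = h * scale, w * scale
    # Upsample the optical flow to HR size and rescale its magnitude
    flow = F.interpolate(lr_flow, size=(H, W), mode="bilinear",
                         align_corners=False) * scale
    # Build the HR sampling grid in pixel coordinates
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float().unsqueeze(0)  # (1, H, W, 2)
    grid = grid + flow.permute(0, 2, 3, 1)                     # displace by flow
    # Normalize to [-1, 1] as expected by grid_sample
    grid_x = 2 * grid[..., 0] / (W - 1) - 1
    grid_y = 2 * grid[..., 1] / (H - 1) - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)
    # Sample the LR frame at the HR positions (bilinear interpolation)
    return F.grid_sample(lr_frame, grid, mode="bilinear", align_corners=True)
```

    Because warping and upsampling happen in one sampling step, no information is lost to an intermediate warped low-resolution image, which is the benefit the abstract attributes to this operation.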

    Prelimbic cortex maintains attention to category-relevant information and flexibly updates category representations

    Category learning groups stimuli according to similarity or function. This involves finding and attending to stimulus features that reliably inform category membership. Although many of the neural mechanisms underlying categorization remain elusive, models of human category learning posit that prefrontal cortex plays a substantial role. Here, we investigated the role of the prelimbic cortex (PL) in rat visual category learning by administering excitotoxic lesions before category training and then evaluating the effects of the lesions with computational modeling. Using a touchscreen apparatus, rats (female and male) learned to categorize distributions of category stimuli that varied along two continuous dimensions. For some rats, categorizing the stimuli encouraged selective attention towards a single stimulus dimension (i.e., 1D tasks). For other rats, categorizing the stimuli required divided attention towards both stimulus dimensions (i.e., 2D tasks). Testing sessions then examined generalization to novel exemplars. PL lesions impaired learning and generalization for the 1D tasks, but not the 2D tasks. Then, a neural network was fit to the behavioral data to examine how the lesions affected categorization. The results suggest that the PL facilitates category learning by maintaining attention to category-relevant information and updating category representations.

    Accounting for centre-effects in multicentre trials with a binary outcome - when, why, and how?

    BACKGROUND: It is often desirable to account for centre-effects in the analysis of multicentre randomised trials; however, it is unclear which analysis methods are best in trials with a binary outcome. METHODS: We compared the performance of four methods of analysis (fixed-effects models, random-effects models, generalised estimating equations (GEE), and Mantel-Haenszel) using a re-analysis of a previously reported randomised trial (MIST2) and a large simulation study. RESULTS: The re-analysis of MIST2 found that fixed-effects and Mantel-Haenszel led to many patients being dropped from the analysis due to over-stratification (up to 69% dropped for Mantel-Haenszel, and up to 33% dropped for fixed-effects). Conversely, random-effects and GEE included all patients in the analysis; however, GEE did not reach convergence. Estimated treatment effects and p-values were highly variable across different analysis methods. The simulation study found that most methods of analysis performed well with a small number of centres. With a large number of centres, fixed-effects led to biased estimates and inflated type I error rates in many situations, and Mantel-Haenszel lost power compared to other analysis methods in some situations. Conversely, both random-effects and GEE gave nominal type I error rates and good power across all scenarios, and were usually as good as or better than either fixed-effects or Mantel-Haenszel. However, this was only true for GEEs with non-robust standard errors (SEs); using a robust ‘sandwich’ estimator led to inflated type I error rates across most scenarios. CONCLUSIONS: With a small number of centres, we recommend the use of fixed-effects, random-effects, or GEE with non-robust SEs. Random-effects and GEE with non-robust SEs should be used with a moderate or large number of centres.

    Selective attention in rat visual category learning

    A prominent theory of category learning, COVIS, posits that new categories are learned with either a declarative or procedural system, depending on the task. The declarative system uses the prefrontal cortex (PFC) to learn rule-based (RB) category tasks in which there is one relevant sensory dimension that can be used to establish a rule for solving the task, whereas the procedural system uses corticostriatal circuits for information integration (II) tasks in which there are multiple relevant dimensions, precluding use of explicit rules. Previous studies have found faster learning of RB versus II tasks in humans and monkeys but not in pigeons. The absence of a learning rate difference in pigeons has been attributed to their lacking a PFC. A major gap in this comparative analysis, however, is the lack of data from a nonprimate mammalian species, such as rats, that have a PFC but a less differentiated PFC than primates. Here, we investigated RB and II category learning in rats. Similar to pigeons, RB and II tasks were learned at the same rate. After reaching a learning criterion, wider distributions of stimuli were presented to examine generalization. A second experiment found equivalent RB and II learning with wider category distributions. Computational modeling revealed that rats extract and selectively attend to category-relevant information but do not consistently use rules to solve the RB task. These findings suggest rats are on a continuum of PFC function between birds and primates, with selective attention but limited ability to utilize rules relative to primates.

    A re-randomisation design for clinical trials

    Background: Recruitment to clinical trials is often problematic, with many trials failing to recruit to their target sample size. As a result, patient care may be based on suboptimal evidence from underpowered trials or non-randomised studies. Methods: For many conditions patients will require treatment on several occasions, for example, to treat symptoms of an underlying chronic condition (such as migraines, where treatment is required each time a new episode occurs), or until they achieve treatment success (such as fertility, where patients undergo treatment on multiple occasions until they become pregnant). We describe a re-randomisation design for these scenarios, which allows each patient to be independently randomised on multiple occasions. We discuss the circumstances in which this design can be used. Results: The re-randomisation design will give asymptotically unbiased estimates of treatment effect and correct type I error rates under the following conditions: (a) patients are only re-randomised after the follow-up period from their previous randomisation is complete; (b) randomisations for the same patient are performed independently; and (c) the treatment effect is constant across all randomisations. Provided the analysis accounts for correlation between observations from the same patient, this design will typically have higher power than a parallel group trial with an equivalent number of observations. Conclusions: If used appropriately, the re-randomisation design can increase the recruitment rate for clinical trials while still providing an unbiased estimate of treatment effect and correct type I error rates. In many situations, it can increase the power compared to a parallel group design with an equivalent number of observations.

    A comparison of methods to adjust for continuous covariates in the analysis of randomised trials

    BACKGROUND: Although covariate adjustment in the analysis of randomised trials can be beneficial, adjustment for continuous covariates is complicated by the fact that the association between covariate and outcome must be specified. Misspecification of this association can lead to reduced power, and potentially incorrect conclusions regarding treatment efficacy. METHODS: We compared several methods of adjustment to determine which is best when the association between covariate and outcome is unknown. We assessed (a) dichotomisation or categorisation; (b) assuming a linear association with outcome; (c) using fractional polynomials with one (FP1) or two (FP2) polynomial terms; and (d) using restricted cubic splines with 3 or 5 knots. We evaluated each method using simulation and through a re-analysis of trial datasets. RESULTS: Methods which kept covariates as continuous typically had higher power than methods which used categorisation. Dichotomisation, categorisation, and assuming a linear association all led to large reductions in power when the true association was non-linear. FP2 models and restricted cubic splines with 3 or 5 knots performed best overall. CONCLUSIONS: For the analysis of randomised trials we recommend (1) adjusting for continuous covariates even if their association with outcome is unknown; (2) keeping covariates as continuous; and (3) using fractional polynomials with two polynomial terms or restricted cubic splines with 3 to 5 knots when a linear association is in doubt.
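    The contrast between dichotomisation, linear adjustment, and spline adjustment can be sketched as follows (a minimal sketch using statsmodels and patsy's `cr()` natural cubic spline basis; the quadratic true association, sample size, and seed are illustrative assumptions, and fractional polynomials are not shown since they have no off-the-shelf statsmodels implementation):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                         # continuous baseline covariate
treat = rng.integers(0, 2, n)
y = 0.5 * treat + x**2 + rng.normal(size=n)    # non-linear covariate effect
df = pd.DataFrame({"y": y, "x": x, "treat": treat})

linear = smf.ols("y ~ treat + x", data=df).fit()          # assumes linearity
spline = smf.ols("y ~ treat + cr(x, df=4)", data=df).fit()  # cubic spline
dichot = smf.ols("y ~ treat + I(x > 0)", data=df).fit()   # dichotomised

# The spline recovers the curvature that the linear and dichotomised
# adjustments miss, shrinking the residual variance and hence the
# standard error of the treatment effect.
print(linear.bse["treat"], spline.bse["treat"], dichot.bse["treat"])
```

    Here both the linear and dichotomised adjustments explain almost none of the U-shaped covariate effect, so the spline-adjusted analysis estimates the treatment effect with a markedly smaller standard error, which is the power advantage the abstract reports.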

    Science Models as Value-Added Services for Scholarly Information Systems

    The paper introduces scholarly Information Retrieval (IR) as a further dimension that should be considered in the science modeling debate. The IR use case is seen as a validation model of the adequacy of science models in representing and predicting structure and dynamics in science. Particular conceptualizations of scholarly activity and structures in science are used as value-added search services to improve retrieval quality: a co-word model depicting the cognitive structure of a field (used for query expansion), the Bradford law of information concentration, and a model of co-authorship networks (both used for re-ranking search results). An evaluation of retrieval quality when science-model-driven services are used showed that the proposed models do provide beneficial effects on retrieval quality. From an IR perspective, the models studied are therefore verified as expressive conceptualizations of central phenomena in science. Thus, it could be shown that the IR perspective can significantly contribute to a better understanding of scholarly structures and activities. Comment: 26 pages, to appear in Scientometric

    Identification of Colletotrichum species associated with anthracnose disease of coffee in Vietnam

    Colletotrichum gloeosporioides, C. acutatum, C. capsici and C. boninense associated with anthracnose disease on coffee (Coffea spp.) in Vietnam were identified based on morphology and DNA analysis. Phylogenetic analyses of DNA sequences from the internal transcribed spacer region of nuclear rDNA and a portion of the mitochondrial small subunit rRNA were concordant and allowed good separation of the taxa. We found several Colletotrichum isolates of unknown species, and their taxonomic position remains unresolved. The majority of Vietnamese isolates belonged to C. gloeosporioides, and they grouped together with the coffee berry disease (CBD) fungus, C. kahawae. However, C. kahawae could be distinguished from the Vietnamese C. gloeosporioides isolates based on ammonium tartrate utilization, growth rate and pathogenicity. C. gloeosporioides isolates were more pathogenic on detached green berries than isolates of the other species, i.e. C. acutatum, C. capsici and C. boninense. Some of the C. gloeosporioides isolates produced slightly sunken lesions on green berries resembling CBD symptoms, but these did not destroy the beans. We did not find any evidence of the presence of C. kahawae in Vietnam.

    Choosing sensitivity analyses for randomised trials: principles

    Background Sensitivity analyses are an important tool for understanding the extent to which the results of randomised trials depend upon the assumptions of the analysis. There is currently no guidance governing the choice of sensitivity analyses. Discussion We provide a principled approach to choosing sensitivity analyses through the consideration of the following questions: 1) Does the proposed sensitivity analysis address the same question as the primary analysis? 2) Is it possible for the proposed sensitivity analysis to return a different result to the primary analysis? 3) If the results do differ, is there any uncertainty as to which will be believed? Answering all of these questions in the affirmative will help researchers to identify relevant sensitivity analyses. Treating analyses as sensitivity analyses when one or more of the answers are negative can be misleading and confuse the interpretation of studies. The value of these questions is illustrated with several examples. Summary By removing unreasonable analyses that might have been performed, these questions will lead to relevant sensitivity analyses, which help to assess the robustness of trial results.