
    Large-eddy simulation of a particle-laden turbulent channel flow

    Large-eddy simulations of a vertical turbulent channel flow with 420,000 solid particles are performed in order to gain insight into fundamental aspects of a riser flow. The question is addressed whether collisions between particles are important for the flow statistics. The turbulent channel flow corresponds to a particle volume fraction of 0.013 and a mass load ratio of 18, values that are relatively high compared to recent literature on large-eddy simulation of two-phase flows. In order to simulate this flow, we present a formulation of the equations for compressible flow in a porous medium including particle forces. These equations are solved with LES using a Taylor approximation of the dynamic subgrid model. The results show that due to particle-fluid interactions the boundary layer becomes thinner, leading to a higher skin-friction coefficient. Important effects of the particle collisions are also observed on the mean fluid profile, but even more so on particle properties. The collisions cause a less uniform particle concentration and considerably flatten the mean solids velocity profile.
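    As a quick consistency check, the quoted volume fraction and mass load ratio together imply a particle-to-fluid density ratio. A minimal sketch; the uniform-mixture relation m = rho_p*phi / (rho_f*(1 - phi)) used below is our assumption and is not stated in the abstract:

```python
# Back-of-the-envelope check: mass load ratio of a uniform particle-fluid
# mixture, m = (rho_p * phi) / (rho_f * (1 - phi)), solved for rho_p / rho_f.
phi = 0.013        # particle volume fraction (from the abstract)
mass_load = 18.0   # particle-to-fluid mass load ratio (from the abstract)

density_ratio = mass_load * (1.0 - phi) / phi  # implied rho_p / rho_f
print(f"implied particle/fluid density ratio: {density_ratio:.0f}")
```

    A ratio on the order of a thousand is plausible for solid particles in a gas, which is consistent with the riser-flow setting the abstract describes.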

    Information-Theoretic Active Learning for Content-Based Image Retrieval

    We propose Information-Theoretic Active Learning (ITAL), a novel batch-mode active learning method for binary classification, and apply it to acquiring meaningful user feedback in the context of content-based image retrieval. Instead of combining different heuristics such as uncertainty, diversity, or density, our method is based on maximizing the mutual information between the predicted relevance of the images and the expected user feedback regarding the selected batch. We propose suitable approximations to this computationally demanding problem and also integrate an explicit model of user behavior that accounts for possible incorrect labels and unnameable instances. Furthermore, our approach takes into account not only the structure of the data but also the expected change in model output caused by the user feedback. In contrast to other methods, ITAL turns out to be highly flexible and provides state-of-the-art performance across various datasets, such as MIRFLICKR and ImageNet. Comment: GCPR 2018 paper (14 pages text + 2 pages references + 6 pages appendix).
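    A minimal sketch of the quantity being maximized: the mutual information I(R; F) between predicted relevance R and user feedback F, here reduced to a single image with a toy 2x2 joint distribution. The numbers and the single-image reduction are our assumptions; the actual method works over batches with further approximations.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def mutual_information(joint):
    """I(R; F) = H(R) + H(F) - H(R, F) for a 2x2 joint table over
    relevance R (rows) and user feedback F (columns)."""
    p_r = [sum(row) for row in joint]          # marginal over relevance
    p_f = [sum(col) for col in zip(*joint)]    # marginal over feedback
    p_rf = [p for row in joint for p in row]   # flattened joint
    return entropy(p_r) + entropy(p_f) - entropy(p_rf)

# Toy joint distribution P(R, F): feedback mostly agrees with relevance,
# with some mass on disagreement modelling incorrect labels (illustrative).
joint = [[0.40, 0.10],   # R = irrelevant
         [0.05, 0.45]]   # R = relevant
print(f"I(R; F) = {mutual_information(joint):.3f} bits")
```

    Batches whose expected feedback is most informative about the relevance predictions score highest under this criterion.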

    Towards a Robuster Interpretive Parsing

    The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this problem. We generalize RIP and suggest replacing the winner candidate with a weighted mean violation of the potential winner candidates. A Boltzmann distribution is introduced on the winner set, and the distribution's parameter T is gradually decreased. Finally, we show that GRIP, the Generalized Robust Interpretive Parsing Algorithm, significantly improves the learning success rate in a model with standard constraints for metrical stress assignment.
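    A minimal sketch of the Boltzmann-weighted mean violation described above: each potential winner candidate's violation profile is weighted by exp(-E/T), and as T is decreased the mean collapses onto the lowest-energy candidate. Taking a candidate's energy to be its total violation count is our simplification; GRIP derives candidate weights from the current grammar.

```python
import math

def boltzmann_mean_violations(candidates, T):
    """Weighted mean violation profile over potential winner candidates.

    candidates: list of violation vectors (one count per constraint).
    Energy of a candidate is taken as its total violation count
    (a simplifying assumption for this sketch).
    """
    energies = [sum(v) for v in candidates]
    weights = [math.exp(-e / T) for e in energies]
    z = sum(weights)
    n = len(candidates[0])
    return [sum(w * v[i] for w, v in zip(weights, candidates)) / z
            for i in range(n)]

candidates = [[0, 2], [1, 0], [3, 1]]  # toy violation profiles

# High T: all candidates contribute to the mean.
print(boltzmann_mean_violations(candidates, T=10.0))
# Low T: the lowest-energy candidate [1, 0] dominates.
print(boltzmann_mean_violations(candidates, T=0.05))
```

    Gradually lowering T thus interpolates between averaging over all interpretations and committing to a single winner, which is the annealing idea behind GRIP.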

    Long-Term Visual Object Tracking Benchmark

    We propose a new long video dataset (called Track Long and Prosper - TLP) and benchmark for single object tracking. The dataset consists of 50 HD videos from real-world scenarios, encompassing a duration of over 400 minutes (676K frames), making it more than 20-fold larger in average duration per sequence and more than 8-fold larger in total covered duration, as compared to existing generic datasets for visual tracking. The proposed dataset paves the way to suitably assess long-term tracking performance and to train better deep learning architectures (avoiding/reducing augmentation, which may not reflect real-world behaviour). We benchmark the dataset on 17 state-of-the-art trackers and rank them according to tracking accuracy and runtime speed. We further present a thorough qualitative and quantitative evaluation highlighting the importance of the long-term aspect of tracking. Our most interesting observations are (a) existing short-sequence benchmarks fail to bring out the inherent differences between tracking algorithms, which widen while tracking on long sequences, and (b) the accuracy of trackers drops abruptly on challenging long sequences, suggesting the need for research efforts in the direction of long-term tracking. Comment: ACCV 2018 (Oral).

    Using Regular Languages to Explore the Representational Capacity of Recurrent Neural Architectures

    The presence of Long Distance Dependencies (LDDs) in sequential data poses significant challenges for computational models. Various recurrent neural architectures have been designed to mitigate this issue. In order to test these state-of-the-art architectures, there is a growing need for rich benchmarking datasets. However, one of the drawbacks of existing datasets is the lack of experimental control with regard to the presence and/or degree of LDDs. This lack of control limits the analysis of model performance in relation to the specific challenge posed by LDDs. One way to address this is to use synthetic data having the properties of subregular languages. The degree of LDDs within the generated data can be controlled through the k parameter, the length of the generated strings, and the choice of appropriate forbidden strings. In this paper, we explore the capacity of different RNN extensions to model LDDs by evaluating these models on a sequence of SPk synthesized datasets, where each subsequent dataset exhibits a greater degree of LDD. Even though SPk languages are simple, the presence of LDDs has a significant impact on the performance of recurrent neural architectures, making them prime candidates for benchmarking tasks. Comment: International Conference of Artificial Neural Networks (ICANN) 2018.
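    A strictly piecewise (SP_k) language is defined by a set of forbidden subsequences of length k: a string is grammatical iff it contains none of them, which is what makes the dependencies long-distance. A minimal membership check; the alphabet and forbidden set below are illustrative, not taken from the paper's datasets.

```python
from itertools import combinations

def subsequences_k(s, k):
    """All length-k subsequences (not substrings) of s."""
    return {''.join(c) for c in combinations(s, k)}

def in_spk(s, forbidden, k):
    """Membership in an SP_k language: s is grammatical iff it
    contains no forbidden length-k subsequence."""
    return not (subsequences_k(s, k) & set(forbidden))

# Toy SP_2 language over {a, b, c}: the subsequence 'ab' is forbidden,
# so no 'a' may ever be followed by a 'b', at any distance.
forbidden = {'ab'}
print(in_spk('acca', forbidden, k=2))   # → True: no a...b anywhere
print(in_spk('acb', forbidden, k=2))    # → False: 'a' later followed by 'b'
```

    Increasing k and the string length stretches the distance over which a model must remember earlier symbols, which is how the datasets control the degree of LDD.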

    Dusty star forming galaxies at high redshift

    The global star formation rate in high-redshift galaxies, based on optical surveys, shows a strong peak at a redshift of z=1.5, which implies that we have already seen most of the formation. High-redshift galaxies may, however, emit most of their energy at submillimeter wavelengths if they contain substantial amounts of dust. The dust would absorb the starlight and reradiate it as far-infrared light, which would be redshifted into the submillimeter range. Here we report a deep survey of two blank regions of sky performed at submillimeter wavelengths (450 and 850 microns). If the sources we detect in the 850-micron band are powered by star formation, then each must be converting more than 100 solar masses of gas per year into stars, which is larger than the maximum star formation rates inferred for most optically selected galaxies. The total amount of high-redshift star formation is essentially fixed by the level of background light, but where the peak occurs in redshift for the submillimeter sources is not yet established. However, the background light contribution from only the sources detected at 850 microns is already comparable to that from the optically selected sources. Establishing the main epoch of star formation will therefore require a combination of optical and submillimeter studies. Comment: 10 pages + 2 Postscript figures, under embargo at Nature.

    Telomere length regulation: coupling DNA end processing to feedback regulation of telomerase

    The conventional DNA polymerase machinery is unable to fully replicate the ends of linear chromosomes. To surmount this problem, nearly all eukaryotes use the telomerase enzyme, a specialized reverse transcriptase that utilizes its own RNA template to add short TG-rich repeats to chromosome ends, thus reversing the gradual erosion that occurs at each round of replication. This unique, non-DNA-templated mode of telomere replication requires a regulatory mechanism to ensure that telomerase acts at telomeres whose TG tracts are too short, but not at those with long tracts, thus maintaining the protective TG repeat cap at an appropriate average length. The prevailing notion in the field is that telomere length regulation is brought about through a negative feedback mechanism that counts TG repeat-bound protein complexes to generate a signal that regulates telomerase action. This review summarizes the experiments leading up to this model and then focuses on more recent experiments, primarily from yeast, that begin to suggest how this counting mechanism might work. The emerging picture is that of a complex interplay between the conventional DNA replication machinery, DNA damage response factors, and a specialized set of proteins that help to recruit and regulate the telomerase enzyme.

    Fostering collective intelligence education

    New educational models are necessary to update learning environments for digitally shared communication and information. Collective intelligence is an emerging field that already has a significant impact in many areas and will have great implications for education, not only in terms of new methodologies but also as a challenge for education. This paper proposes an approach to a collective intelligence model of teaching that uses the Internet to combine two strategies: idea management and real-time assessment in class. A digital tool named Fabricius has been created to support these two elements and foster the collaboration and engagement of students in the learning process. As a result of the research, we propose a list of KPIs intended to measure individual and collective performance. We are conscious that this is just a first approach to defining which aspects of a class can be qualified and quantified. Postprint (published version).

    Linear-T resistivity and change in Fermi surface at the pseudogap critical point of a high-Tc superconductor

    A fundamental question of high-temperature superconductors is the nature of the pseudogap phase which lies between the Mott insulator at zero doping and the Fermi liquid at high doping p. Here we report on the behaviour of charge carriers near the zero-temperature onset of that phase, namely at the critical doping p* where the pseudogap temperature T* goes to zero, accessed by investigating a material in which superconductivity can be fully suppressed by a steady magnetic field. Just below p*, the normal-state resistivity and Hall coefficient of La1.6-xNd0.4SrxCuO4 are found to rise simultaneously as the temperature drops below T*, revealing a change in the Fermi surface with a large associated drop in conductivity. At p*, the resistivity shows a linear temperature dependence as T goes to zero, a typical signature of a quantum critical point. These findings impose new constraints on the mechanisms responsible for inelastic scattering and Fermi surface transformation in theories of the pseudogap phase.Comment: 24 pages, 6 figures. Published in Nature Physics. Online at http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys1109.htm

    A characteristic particle method for traffic flow simulations on highway networks

    A characteristic particle method for the simulation of first-order macroscopic traffic models on road networks is presented. The approach is based on the method "particleclaw", which solves scalar one-dimensional hyperbolic conservation laws exactly, except for a small error right around shocks. The method is generalized to nonlinear network flows, where particle approximations on the edges are suitably coupled together at the network nodes. It is demonstrated in numerical examples that the resulting particle method can approximate traffic jams accurately while devoting only a few degrees of freedom to each edge of the network. Comment: 15 pages, 5 figures. Accepted to the proceedings of the Sixth International Workshop Meshfree Methods for PDE 2011.
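    A minimal sketch of the characteristic idea on a single road: in a first-order (LWR-type) model with flux f(rho), each particle carries a density value along at the characteristic speed f'(rho). The Greenshields flux f(rho) = rho*(1 - rho) is our illustrative choice, and the shock detection/correction that particleclaw adds is omitted here.

```python
def advance_characteristics(particles, dt):
    """Move characteristic particles (x, rho) of the LWR model
    rho_t + f(rho)_x = 0 with Greenshields flux f(rho) = rho * (1 - rho).
    Each particle keeps its density and moves at f'(rho) = 1 - 2 * rho.
    (Shock handling, as in particleclaw, is omitted in this sketch.)"""
    return [(x + (1.0 - 2.0 * rho) * dt, rho) for x, rho in particles]

# Heavy traffic behind light traffic: the characteristics spread apart
# (a rarefaction); the reverse ordering would make them cross (a shock),
# which is where particleclaw's correction step would intervene.
particles = [(0.0, 0.8), (1.0, 0.2)]
print(advance_characteristics(particles, dt=0.5))
```

    Because each particle is an exact solution of the PDE away from shocks, very few particles per edge suffice, which is the "few degrees of freedom" property the abstract highlights.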