
    F1000 recommendations as a new data source for research evaluation: A comparison with citations

    F1000 is a post-publication peer review service for biological and medical research. F1000 aims to recommend important publications in the biomedical literature, and from this perspective F1000 could be an interesting tool for research evaluation. By linking the complete database of F1000 recommendations to the Web of Science bibliographic database, we are able to make a comprehensive comparison between F1000 recommendations and citations. We find that about 2% of the publications in the biomedical literature receive at least one F1000 recommendation. Recommended publications on average receive 1.30 recommendations, and over 90% of the recommendations are given within half a year after a publication has appeared. There turns out to be a clear correlation between F1000 recommendations and citations. However, the correlation is relatively weak, at least weaker than the correlation between journal impact and citations. More research is needed to identify the main reasons for the differences between recommendations and citations in assessing the impact of publications.
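
    As a rough illustration of the kind of comparison described above, the sketch below computes a rank correlation between recommendation and citation counts. The file name and the column names (`recommendations`, `citations`) are hypothetical; the paper's actual matching of F1000 records to Web of Science is not reproduced here.

```python
# Hypothetical sketch: rank correlation between F1000 recommendations
# and citations. The input file and column names are assumptions.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("f1000_wos_merged.csv")  # hypothetical merged dataset

share_recommended = (df["recommendations"] > 0).mean()
mean_recs = df.loc[df["recommendations"] > 0, "recommendations"].mean()

# Spearman's rho is the usual choice for heavily skewed count data.
rho, p = spearmanr(df["recommendations"], df["citations"])
print(f"share recommended: {share_recommended:.1%}")
print(f"mean recommendations per recommended paper: {mean_recs:.2f}")
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```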

    The Global Media and Information Literacy Week: Moving Towards MIL Cities

    The Global Media and Information Literacy Week commemorates the progress in achieving “MIL for all” by aggregating various MIL-related local and international events and actions across different disciplines around the world. The MIL Global Week 2018, held from 24 to 31 October, was marked by the United Nations Educational, Scientific and Cultural Organization in collaboration with various organizations, including the UN Alliance of Civilizations, the Global Alliance for Partnership on MIL, the International Federation of Library Associations, the International Association of School Libraries, and the UNESCO-UNAOC University Cooperation Programme on Media and Information Literacy and Intercultural Dialogue.

    Flexural buckling of structural glass columns. Initial geometrical imperfection as a base for Monte Carlo simulation

    In this paper, Monte Carlo simulations of structural glass columns are presented. The simulations were performed according to the analytical second-order theory of compressed elastic rods. Previous research on the shape and size of initial geometrical imperfections is briefly summarized. An experimental analysis of glass columns, performed to evaluate equivalent geometrical imperfections, is also mentioned.
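
    A minimal sketch of such a simulation is given below, assuming a pin-ended column with a sinusoidal initial bow and the classical second-order amplification factor 1/(1 − N/N_cr). All section dimensions, loads and distribution parameters are illustrative assumptions, not the imperfection statistics from the cited research.

```python
# Sketch: Monte Carlo on a pin-ended glass column with a sinusoidal
# initial bow e0, amplified by the second-order factor 1/(1 - N/N_cr).
# All numerical values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

E = 70e9             # Young's modulus of glass [Pa]
L = 2.0              # column length [m]
b, t = 0.20, 0.03    # cross-section width and (laminated) thickness [m]
A = b * t
I = b * t**3 / 12
W = b * t**2 / 6     # elastic section modulus
N = 40e3             # applied axial force [N]

N_cr = np.pi**2 * E * I / L**2   # Euler buckling load

# Assumed imperfection model: amplitude ~ |Normal|, mean L/1000.
e0 = np.abs(rng.normal(loc=L / 1000, scale=L / 2500, size=n_sim))

amplification = 1.0 / (1.0 - N / N_cr)
sigma_max = N / A + N * e0 * amplification / W   # [Pa]

print(f"N/N_cr = {N / N_cr:.2f}")
print(f"sigma_max: mean = {sigma_max.mean() / 1e6:.1f} MPa, "
      f"99.9% fractile = {np.percentile(sigma_max, 99.9) / 1e6:.1f} MPa")
```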

    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: if there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification of problems in various classes, such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
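
    The notion of direct product feasibility lends itself to a small interface. As a toy illustration only (not necessarily the paper's construction), the sketch below aggregates Max-Clique instances via the complete join of graphs: a maximum clique of the join restricts to a maximum clique of every part, which is exactly the efficient decodability the amplification theorem requires.

```python
# Toy illustration of "direct product feasibility" for Max-Clique:
# aggregate k graphs by a complete join, i.e. keep all original edges
# and connect every pair of vertices from different graphs. A maximum
# clique of the join is a union of maximum cliques of the parts.

def complete_join(graphs):
    """graphs: list of (n_vertices, set of edges over 0..n-1).
    Returns (total_vertices, edges, per-part vertex offsets)."""
    offsets, edges, total = [], set(), 0
    for n, e in graphs:
        offsets.append(total)
        edges |= {(u + total, v + total) for u, v in e}
        total += n
    # join step: every cross-part pair of vertices becomes an edge
    for i in range(len(graphs)):
        for j in range(i + 1, len(graphs)):
            for u in range(offsets[i], offsets[i] + graphs[i][0]):
                for v in range(offsets[j], offsets[j] + graphs[j][0]):
                    edges.add((u, v))
    return total, edges, offsets

def split_solution(clique, graphs, offsets):
    """Decode: project an optimal clique of the join onto each part."""
    return [{v - off for v in clique if off <= v < off + n}
            for (n, _), off in zip(graphs, offsets)]
```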

    Active classification with comparison queries

    We study an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation system application (say, for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query), or which of two restaurants she liked more (a comparison query). We focus on the class of half spaces, and show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries. This implies an exponential improvement over classical active learning, where only label queries are allowed. We complement these results by showing that if any of these assumptions is removed then, in the worst case, Ω(n) queries are required. Our results follow from a new general framework of active learning with additional queries. We identify a combinatorial dimension, called the inference dimension, that captures the query complexity when each additional query is determined by O(1) examples (such as comparison queries, each of which is determined by the two compared examples). Our results for half spaces follow by bounding the inference dimension in the cases discussed above.
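
    To make the query model concrete, here is a toy sketch (a simplification for intuition, not the paper's algorithm). It assumes a comparison oracle that reports which of two examples lies further toward the positive side of the unknown halfspace; sorting by comparisons and then binary-searching for the sign change labels all n points with only O(log n) label queries.

```python
# Toy query-model demo: sort by a comparison oracle, then binary
# search for the sign change with label queries. Illustrative only.
import functools
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200
w = rng.normal(size=d)              # hidden halfspace (assumption)
X = rng.normal(size=(n, d))

label_queries = 0

def label(i):                       # label oracle: sign(w . x_i)
    global label_queries
    label_queries += 1
    return 1 if X[i] @ w >= 0 else -1

def compare(i, j):                  # comparison oracle (simplified)
    return -1 if X[i] @ w < X[j] @ w else 1

order = sorted(range(n), key=functools.cmp_to_key(compare))

# Binary search for the first positively labeled point in `order`.
lo, hi = 0, n
while lo < hi:
    mid = (lo + hi) // 2
    if label(order[mid]) == 1:
        hi = mid
    else:
        lo = mid + 1

inferred = {idx: (1 if pos >= lo else -1) for pos, idx in enumerate(order)}
assert all(inferred[i] == (1 if X[i] @ w >= 0 else -1) for i in range(n))
print(f"labeled {n} points with {label_queries} label queries")
```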

    Computing a Nonnegative Matrix Factorization -- Provably

    In the Nonnegative Matrix Factorization (NMF) problem we are given an n × m nonnegative matrix M and an integer r > 0. Our goal is to express M as AW, where A and W are nonnegative matrices of size n × r and r × m respectively. In some applications, it makes sense to ask instead for the product AW to approximate M, i.e. to (approximately) minimize ‖M − AW‖_F, where ‖·‖_F denotes the Frobenius norm; we refer to this as Approximate NMF. This problem has a rich history spanning quantum mechanics, probability theory, data analysis, polyhedral combinatorics, communication complexity, demography, chemometrics, etc. In the past decade NMF has become enormously popular in machine learning, where A and W are computed using a variety of local search heuristics. Vavasis proved that this problem is NP-complete. We initiate a study of when this problem is solvable in polynomial time: 1. We give a polynomial-time algorithm for exact and approximate NMF for every constant r. Indeed, NMF is most interesting in applications precisely when r is small. 2. We complement this with a hardness result: if exact NMF can be solved in time (nm)^{o(r)}, then 3-SAT has a sub-exponential time algorithm. This rules out substantial improvements to the above algorithm. 3. We give an algorithm that runs in time polynomial in n, m and r under the separability condition identified by Donoho and Stodden in 2003. The algorithm may be practical, since it is simple and noise tolerant (under benign assumptions). Separability is believed to hold in many practical settings. To the best of our knowledge, this last result is the first example of a polynomial-time algorithm that provably works under a non-trivial condition on the input, and we believe that this will be an interesting and important direction for future work.
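
    For contrast with the provable results, the sketch below shows the kind of local-search heuristic the abstract refers to: Lee-Seung multiplicative updates minimizing ‖M − AW‖_F. This is the common heuristic baseline, not the paper's algorithm for constant r or for separable instances.

```python
# Lee-Seung multiplicative updates for approximate NMF: a standard
# local-search heuristic, shown here as the baseline the paper's
# provable algorithms improve upon.
import numpy as np

def nmf_multiplicative(M, r, n_iter=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = M.shape
    A = rng.random((n, r))
    W = rng.random((r, m))
    for _ in range(n_iter):
        # elementwise updates; eps guards against division by zero
        W *= (A.T @ M) / (A.T @ A @ W + eps)
        A *= (M @ W.T) / (A @ W @ W.T + eps)
    return A, W

M = np.random.default_rng(1).random((40, 30))   # toy nonnegative matrix
A, W = nmf_multiplicative(M, r=5)
print("Frobenius residual:", np.linalg.norm(M - A @ W))
```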

    Permanent and live load model for probabilistic structural fire analysis: a review

    Probabilistic analysis is receiving increased attention from fire engineers, assessment bodies and researchers. It is, however, often unclear which probabilistic models are appropriate for the analysis. For example, in probabilistic structural fire engineering, the models used to describe the permanent and live load differ widely between studies. Through a literature review, it is observed that these diverging load models largely derive from the same underlying datasets and basic methodologies, while the differences can be attributed mainly to specific assumptions in different background papers which have become consolidated through repeated use in application studies by different researchers. Taking into account the uncovered background information, consolidated probabilistic load models are proposed.
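
    As a purely illustrative sketch of such a load model, the snippet below samples a permanent load as Normal and a sustained live load as Gamma. All parameter values are typical JCSS-style assumptions, not the consolidated values proposed in the paper.

```python
# Illustrative probabilistic load model: permanent load ~ Normal,
# sustained live load ~ Gamma. Parameters are assumptions only.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

G_nom, Q_nom = 5.0, 3.0      # nominal loads [kN/m^2], illustrative

# Permanent load: Normal, mean = nominal, CoV = 0.10 (assumption).
G = rng.normal(loc=G_nom, scale=0.10 * G_nom, size=n)

# Sustained live load: Gamma with mean 0.2 * Q_nom, CoV = 0.95
# (assumption). For Gamma: mean = k * theta, CoV = 1 / sqrt(k).
mean_q, cov_q = 0.2 * Q_nom, 0.95
k = 1.0 / cov_q**2
Q = rng.gamma(shape=k, scale=mean_q / k, size=n)

total = G + Q
print(f"mean total = {total.mean():.2f}, "
      f"95% fractile = {np.percentile(total, 95):.2f} kN/m^2")
```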

    Reliability and risk acceptance criteria for civil engineering structures

    The specification of risk and reliability acceptance criteria is a key issue in reliability verifications of new and existing structures. Current target reliability levels in standards show considerable scatter. A critical review of risk acceptance approaches to societal, economic and environmental risk indicates that an optimal design strategy is mostly dominated by economic aspects, while human safety aspects need to be verified only in special cases. It is recommended to specify the target levels considering economic optimisation and the marginal life-saving costs principle, as both approaches take into account the failure consequences and the costs of safety measures.
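
    A minimal sketch of the economic-optimisation idea follows, with an entirely illustrative cost model: choose the reliability index β that minimizes the sum of safety investment and expected failure cost. The marginal life-saving cost check would enter as a separate constraint, omitted here; the cost figures are assumptions, not values from the paper or any standard.

```python
# Economic optimisation of a target reliability index (illustrative).
from scipy.optimize import minimize_scalar
from scipy.stats import norm

C1 = 0.05   # marginal cost of raising beta (relative units, assumed)
Cf = 100.0  # failure consequences (relative units, assumed)

def total_cost(beta):
    pf = norm.cdf(-beta)          # failure probability for index beta
    return C1 * beta + Cf * pf    # safety cost + expected failure cost

res = minimize_scalar(total_cost, bounds=(1.0, 6.0), method="bounded")
print(f"economically optimal beta ~ {res.x:.2f}")
```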

    Probabilistic Modeling of Structural Forces

    Since forces acting on structures fluctuate widely in time and space during the lifetime of a structure, their variations should be described by probability distributions. The probabilistic definition of forces is expressed by random field variables with stochastic parameters. Structural forces are simulated by adopting Normal and Gamma probability distribution functions. The basic model given by the JCSS (Joint Committee on Structural Safety) code principles is used to take these variations into account. In the simulation of live loads, composed of sustained and intermittent loads, the occurrence of load events is assumed to follow a Poisson process, so the time intervals between events are exponentially distributed. The simulated loads are evaluated in terms of percentiles, correlation effects, reduction factors and extreme values, and the results are compared with those of a deterministic model. It is observed that the probabilistic model is more realistic, and the results can be used to calculate specific fractiles, as in load and resistance factor design.
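
    A condensed sketch of such a simulation is given below: Poisson arrivals for sustained-load renewals and intermittent events, Gamma-distributed intensities, and extraction of the maximum per reference period. The rates and parameters are illustrative assumptions in the spirit of the JCSS model, and the max-plus-max combination is a deliberately crude simplification.

```python
# Illustrative live-load simulation: sustained + intermittent loads
# with Poisson event counts and Gamma intensities; lifetime maxima
# are collected over many Monte Carlo runs. Parameters are assumed.
import numpy as np

rng = np.random.default_rng(11)
T = 50.0            # reference period [years]
lam_sus = 1 / 7.0   # sustained-load renewals per year (assumption)
lam_int = 1.0       # intermittent events per year (assumption)

def gamma_from(mean, cov, size):
    k = 1.0 / cov**2               # Gamma: CoV = 1 / sqrt(k)
    return rng.gamma(shape=k, scale=mean / k, size=size)

def lifetime_max():
    n_sus = rng.poisson(lam_sus * T) + 1   # at least one sustained period
    n_int = rng.poisson(lam_int * T)
    q_sus = gamma_from(0.6, 0.8, n_sus)    # [kN/m^2], assumption
    q_int = gamma_from(0.3, 1.2, max(n_int, 1))
    # crude combination: worst intermittent spike on worst sustained load
    return q_sus.max() + (q_int.max() if n_int else 0.0)

maxima = np.array([lifetime_max() for _ in range(20_000)])
print(f"mean 50-yr max = {maxima.mean():.2f}, "
      f"98% fractile = {np.percentile(maxima, 98):.2f} kN/m^2")
```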