
    F1000 recommendations as a new data source for research evaluation: A comparison with citations

    F1000 is a post-publication peer review service for biological and medical research. F1000 aims to recommend important publications in the biomedical literature, and from this perspective F1000 could be an interesting tool for research evaluation. By linking the complete database of F1000 recommendations to the Web of Science bibliographic database, we are able to make a comprehensive comparison between F1000 recommendations and citations. We find that about 2% of the publications in the biomedical literature receive at least one F1000 recommendation. Recommended publications on average receive 1.30 recommendations, and over 90% of the recommendations are given within half a year after a publication has appeared. There turns out to be a clear correlation between F1000 recommendations and citations. However, the correlation is relatively weak, at least weaker than the correlation between journal impact and citations. More research is needed to identify the main reasons for the differences between recommendations and citations in assessing the impact of publications.
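
    As a rough, self-contained illustration of the kind of comparison described above, the sketch below computes a Spearman rank correlation between hypothetical per-publication recommendation and citation counts. The toy data and the choice of rank correlation are assumptions made here for illustration; they are not the paper's dataset or its exact statistical method.

```python
# Illustrative only: rank correlation between hypothetical recommendation
# and citation counts for a handful of publications (made-up numbers).

def ranks(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

recommendations = [1, 0, 2, 0, 0, 3, 1, 0]      # hypothetical F1000 recommendation counts
citations       = [12, 3, 40, 1, 7, 55, 9, 2]   # hypothetical citation counts
print(f"Spearman rho = {spearman(recommendations, citations):.2f}")
```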

    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
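
    As a hedged illustration of the direct-product-feasibility notion described above (not the paper's construction), the sketch below aggregates several Max-SAT instances over disjoint variable sets into one larger instance. Because the clause sets do not share variables, an optimal assignment for the aggregated instance restricts to an optimal assignment for each small instance, which is exactly the aggregation-and-recovery property the definition asks for. The instance encoding and the brute-force solver are assumptions chosen to keep the example tiny.

```python
from itertools import product

# Illustrative sketch: Max-SAT as an example of a "direct product feasible" problem.
# An instance is (num_vars, clauses); a clause is a list of non-zero ints,
# where literal +i / -i means variable i (1-based) positive / negated.

def num_satisfied(clauses, assignment):
    """Count clauses satisfied by a dict {var: bool}."""
    return sum(
        any(assignment[abs(l)] == (l > 0) for l in clause)
        for clause in clauses
    )

def solve_opt(num_vars, clauses):
    """Brute-force optimal assignment (for tiny illustrative instances only)."""
    best, best_assign = -1, None
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: bits[i] for i in range(num_vars)}
        val = num_satisfied(clauses, assign)
        if val > best:
            best, best_assign = val, assign
    return best, best_assign

def aggregate(instances):
    """Disjoint union of k instances: shift variable indices so they do not clash."""
    clauses, offset, offsets = [], 0, []
    for n, cs in instances:
        offsets.append(offset)
        clauses += [[l + offset if l > 0 else l - offset for l in c] for c in cs]
        offset += n
    return (offset, clauses), offsets

instances = [
    (2, [[1], [-1, 2], [-2]]),        # small instance 1
    (2, [[1, 2], [-1], [-2], [2]]),   # small instance 2
]
big, offsets = aggregate(instances)
_, big_assign = solve_opt(*big)

# Restricting the optimal assignment of the big instance recovers an optimal
# solution of each small instance, because the variable sets are disjoint.
for (n, cs), off in zip(instances, offsets):
    restricted = {i: big_assign[i + off] for i in range(1, n + 1)}
    assert num_satisfied(cs, restricted) == solve_opt(n, cs)[0]
print("restrictions of the aggregated optimum are optimal for each small instance")
```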

    Active classification with comparison queries

    We study an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation system application (say for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query); or which one of two restaurants she liked more (a comparison query). We focus on the class of half spaces, and show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries. This implies an exponential improvement over classical active learning, where only label queries are allowed. We complement these results by showing that if any of these assumptions is removed then, in the worst case, Ω(n) queries are required. Our results follow from a new general framework of active learning with additional queries. We identify a combinatorial dimension, called the inference dimension, that captures the query complexity when each additional query is determined by O(1) examples (such as comparison queries, each of which is determined by the two compared examples). Our results for half spaces follow by bounding the inference dimension in the cases discussed above. Comment: 23 pages (not including references), 1 figure. The new version contains a minor fix in the proof of Lemma 4.
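
    To make the two query types concrete, here is a minimal sketch of the oracle interface the abstract describes for half spaces. The class name LinearAnnotator, the 2-D data and the way margins are compared are illustrative assumptions; this is only the query interface, not the paper's learning algorithm or its inference-dimension analysis.

```python
import random

# Illustrative oracle for active learning over half spaces:
# a label query returns the side of the hyperplane, and a comparison query
# says which of two points the annotator "prefers" (larger signed margin).

class LinearAnnotator:
    """Hypothetical annotator for the half space sign(w.x - b)."""

    def __init__(self, w, b):
        self.w, self.b = w, b
        self.queries = 0  # count every query issued

    def _margin(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) - self.b

    def label_query(self, x):
        """'Did she like the restaurant?' -- the label of a single example."""
        self.queries += 1
        return 1 if self._margin(x) >= 0 else -1

    def comparison_query(self, x, y):
        """'Which restaurant did she like more?' -- the example with the
        larger signed margin with respect to the hidden half space."""
        self.queries += 1
        return x if self._margin(x) >= self._margin(y) else y

random.seed(0)
annotator = LinearAnnotator(w=[1.0, -2.0], b=0.5)
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]

print("labels:", [annotator.label_query(p) for p in points])
best = points[0]
for p in points[1:]:
    best = annotator.comparison_query(best, p)
print("most preferred example:", best, "after", annotator.queries, "queries")
```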

    Permanent and live load model for probabilistic structural fire analysis : a review

    Probabilistic analysis is receiving increased attention from fire engineers, assessment bodies and researchers. It is, however, often unclear which probabilistic models are appropriate for the analysis. For example, in probabilistic structural fire engineering, the models used to describe the permanent and live load differ widely between studies. Through a literature review, it is observed that these diverging load models largely relate to the same underlying datasets and basic methodologies, while the differences can be attributed largely to specific assumptions in different background papers which have become consolidated through repeated use in application studies by different researchers. Taking into account the uncovered background information, consolidated probabilistic load models are proposed.
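
    As a hedged illustration of what a probabilistic load model looks like in practice, the sketch below samples a permanent load as a normal variable and a live load as a Gumbel variable. The distribution families are typical choices in the structural reliability literature, but the mean-to-nominal ratios and coefficients of variation used here are placeholder assumptions, not the consolidated values proposed in the paper.

```python
import math
import random

# Illustrative probabilistic load model (placeholder parameters, not the
# paper's consolidated values): permanent load ~ Normal, live load ~ Gumbel.

def sample_permanent(g_nominal, mean_ratio=1.0, cov=0.10):
    """Permanent load modelled as Normal with mean = mean_ratio * nominal."""
    mean = mean_ratio * g_nominal
    return random.gauss(mean, cov * mean)

def sample_live(q_nominal, mean_ratio=0.6, cov=0.35):
    """Live load modelled as a Gumbel (type I extreme value) variable."""
    mean = mean_ratio * q_nominal
    std = cov * mean
    beta = std * math.sqrt(6) / math.pi          # Gumbel scale parameter
    mu = mean - 0.5772156649 * beta              # Gumbel location (Euler-Mascheroni)
    u = random.random()
    return mu - beta * math.log(-math.log(u))    # inverse-CDF sampling

random.seed(1)
g_nom, q_nom = 5.0, 3.0   # nominal loads in kN/m^2 (illustrative)
totals = [sample_permanent(g_nom) + sample_live(q_nom) for _ in range(100_000)]
totals.sort()
print("mean total load :", sum(totals) / len(totals))
print("95th percentile :", totals[int(0.95 * len(totals))])
```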

    Reliability and risk acceptance criteria for civil engineering structures

    The specification of risk and reliability acceptance criteria is a key issue in reliability verifications of new and existing structures. Current target reliability levels in standards appear to show considerable scatter. A critical review of risk acceptance approaches to societal, economic and environmental risk indicates that an optimal design strategy is mostly dominated by economic aspects, while human safety aspects need to be verified only in special cases. It is recommended to specify the target levels considering economic optimisation and the marginal life-saving costs principle, as both these approaches take into account the failure consequences and the costs of safety measures.
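
    A minimal sketch of the economic-optimisation idea mentioned above, under assumed numbers: the total expected cost is a construction cost that grows with the target reliability index plus the expected failure cost, and the cost-optimal target reliability minimises the sum. The cost model and all figures are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative economic optimisation of a target reliability level.
# Assumed cost model: construction cost grows linearly with the reliability
# index beta, expected failure cost = failure probability * failure consequences.

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def total_expected_cost(beta, c0=1.0, marginal_safety_cost=0.05, failure_cost=1000.0):
    """Construction cost (assumed linear in beta) + expected failure cost."""
    p_f = std_normal_cdf(-beta)              # failure probability for index beta
    return c0 + marginal_safety_cost * beta + p_f * failure_cost

# Coarse search over candidate target reliability indices 2.0 .. 5.0.
betas = [round(2.0 + 0.05 * i, 2) for i in range(61)]
best_beta = min(betas, key=total_expected_cost)
print(f"cost-optimal target reliability index ~ {best_beta}")
print(f"corresponding failure probability     ~ {std_normal_cdf(-best_beta):.1e}")
```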

    Computing a Nonnegative Matrix Factorization -- Provably

    In the Nonnegative Matrix Factorization (NMF) problem we are given an n × m nonnegative matrix M and an integer r > 0. Our goal is to express M as AW where A and W are nonnegative matrices of size n × r and r × m respectively. In some applications, it makes sense to ask instead for the product AW to approximate M -- i.e. (approximately) minimize ‖M - AW‖_F, where ‖·‖_F denotes the Frobenius norm; we refer to this as Approximate NMF. This problem has a rich history spanning quantum mechanics, probability theory, data analysis, polyhedral combinatorics, communication complexity, demography, chemometrics, etc. In the past decade NMF has become enormously popular in machine learning, where A and W are computed using a variety of local search heuristics. Vavasis proved that this problem is NP-complete. We initiate a study of when this problem is solvable in polynomial time: 1. We give a polynomial-time algorithm for exact and approximate NMF for every constant r. Indeed NMF is most interesting in applications precisely when r is small. 2. We complement this with a hardness result: if exact NMF can be solved in time (nm)^{o(r)}, then 3-SAT has a sub-exponential time algorithm. This rules out substantial improvements to the above algorithm. 3. We give an algorithm that runs in time polynomial in n, m and r under the separability condition identified by Donoho and Stodden in 2003. The algorithm may be practical since it is simple and noise tolerant (under benign assumptions). Separability is believed to hold in many practical settings. To the best of our knowledge, this last result is the first example of a polynomial-time algorithm that provably works under a non-trivial condition on the input, and we believe that this will be an interesting and important direction for future work. Comment: 29 pages, 3 figures.
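
    For context, the sketch below implements the classical Lee-Seung multiplicative-update heuristic for approximate NMF, i.e. one instance of the "local search heuristics" the abstract alludes to; it is deliberately not the paper's provable algorithm for constant r or for separable instances.

```python
import numpy as np

# Illustrative local-search heuristic for approximate NMF (Lee-Seung
# multiplicative updates), NOT the provable algorithm from the paper.

def nmf_multiplicative(M, r, iters=500, eps=1e-9, seed=0):
    """Find nonnegative A (n x r) and W (r x m) with M ~ A @ W."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    A = rng.random((n, r)) + eps
    W = rng.random((r, m)) + eps
    for _ in range(iters):
        # Multiplicative updates keep A and W entrywise nonnegative.
        W *= (A.T @ M) / (A.T @ A @ W + eps)
        A *= (M @ W.T) / (A @ W @ W.T + eps)
    return A, W

rng = np.random.default_rng(1)
# Synthetic instance with an exact rank-3 nonnegative factorization.
A_true, W_true = rng.random((20, 3)), rng.random((3, 15))
M = A_true @ W_true

A, W = nmf_multiplicative(M, r=3)
print("relative Frobenius error:", np.linalg.norm(M - A @ W) / np.linalg.norm(M))
```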

    The Power and Politics of Media and Information Literacy

    We are living in a media-saturated world. Not only do we receive information, we have become prosumers and are able to communicate with the ‘world.’ This has been widely reflected in academic texts. But is there a dark side to this ‘age of information freedom’? My argument in this paper is that although we have gotten rid of one sort of tyranny and can speak up more freely, a more suppressive and widespread process of control and surveillance is underway. Worse than that, we the users seem to be comfortable with it.

    Samplers and Extractors for Unbounded Functions

    Blasiok (SODA'18) recently introduced the notion of a subgaussian sampler, defined as an averaging sampler for approximating the mean of functions f from {0,1}^m to the real numbers such that f(U_m) has subgaussian tails, and asked for explicit constructions. In this work, we give the first explicit constructions of subgaussian samplers (and in fact averaging samplers for the broader class of subexponential functions) that match the best known constructions of averaging samplers for [0,1]-bounded functions in the regime of parameters where the approximation error epsilon and failure probability delta are subconstant. Our constructions are established via an extension of the standard notion of randomness extractor (Nisan and Zuckerman, JCSS'96) where the error is measured by an arbitrary divergence rather than total variation distance, and a generalization of Zuckerman's equivalence (Random Struct. Alg.'97) between extractors and samplers. We believe that the framework we develop, and specifically the notion of an extractor for the Kullback-Leibler (KL) divergence, are of independent interest. In particular, KL-extractors are stronger than both standard extractors and subgaussian samplers, but we show that they exist with essentially the same parameters (constructively and non-constructively) as standard extractors.
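
    For reference, the standard notion being generalised here can be stated as follows. This is the textbook definition of an averaging sampler for [0,1]-bounded functions, written out as a reminder under the usual conventions; the paper's subgaussian and subexponential samplers relax the boundedness requirement on f.

```latex
% Textbook definition of an averaging sampler for [0,1]-bounded functions;
% the paper's subgaussian samplers relax the boundedness requirement on f.
\[
  \mathrm{Samp}\colon \{0,1\}^n \to \bigl(\{0,1\}^m\bigr)^t
  \ \text{is an}\ (\varepsilon,\delta)\text{-averaging sampler if for every}\
  f\colon \{0,1\}^m \to [0,1]:
\]
\[
  \Pr_{(z_1,\dots,z_t)\,\leftarrow\,\mathrm{Samp}(U_n)}
  \left[\ \Bigl|\,\frac{1}{t}\sum_{i=1}^{t} f(z_i) \;-\; \mathbb{E}\bigl[f(U_m)\bigr]\,\Bigr| > \varepsilon\ \right]
  \;\le\; \delta .
\]
```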

    Reliability-Based Design of Reinforced Concrete Raft Footings Using Finite Element Method

    In this study, a FORTRAN-based reliability-based design program was developed for the design of raft footings based on the ultimate and serviceability design requirements of BS8110 (1997). The well-known analysis of a plate on an elastic foundation using the displacement method of analysis was used in conjunction with the design point method. The design point method was adopted for designing to a pre-determined safety level, T. An example of the design of a raft footing is included to demonstrate the simplicity of the procedure. Among other findings, it was found that there is a saving of about 64% in the longitudinal reinforcement applied at the column face using the proposed method as compared with the BS8110 design method. Also, the depth of footing required using the proposed procedure was found to be 47% lower than in the deterministic method using BS8110. In addition, considering a target safety index of 3.0 was found to be cheaper than considering a target safety index of 4.0 for the same loading, material and geometrical properties of the footing. It is therefore concluded that the proposed procedure is quite suitable for application.
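
    A minimal sketch of the design point (FORM) idea referenced above, for the simple linear limit state g = R - S with independent normal resistance R and load effect S. The limit state and the numbers are illustrative assumptions, not the raft-footing model or the BS8110 checks used in the study.

```python
import math

# Illustrative first-order reliability computation for the linear limit
# state g = R - S with independent normal R (resistance) and S (load effect).
# For this special case the reliability index has a closed form, and the
# design point is the point on g = 0 closest to the origin in standard
# normal space.

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def form_linear(mu_R, sigma_R, mu_S, sigma_S):
    """Reliability index beta and design point (R*, S*) for g = R - S."""
    denom = math.sqrt(sigma_R**2 + sigma_S**2)
    beta = (mu_R - mu_S) / denom
    # Sensitivity factors (direction cosines) defining the design point.
    alpha_R, alpha_S = sigma_R / denom, -sigma_S / denom
    R_star = mu_R - alpha_R * beta * sigma_R
    S_star = mu_S - alpha_S * beta * sigma_S
    return beta, (R_star, S_star)

# Placeholder statistics (e.g. bending resistance vs. load effect, kN*m).
beta, (R_star, S_star) = form_linear(mu_R=250.0, sigma_R=25.0, mu_S=150.0, sigma_S=30.0)
print(f"reliability index beta = {beta:.2f}")
print(f"failure probability    = {std_normal_cdf(-beta):.2e}")
print(f"design point: R* = {R_star:.1f}, S* = {S_star:.1f}  (R* equals S* on g = 0)")
```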