
    Online peer reviews: A lasting innovation from the COVID pandemic?

    The COVID pandemic forced a transition to online learning as schools were closed to stop the spread of the disease. Schools and teachers coped by adapting face-to-face activities to the online environment in innovative ways. This study investigates the effectiveness of conducting peer reviews online and considers whether this innovation should be retained after the pandemic ends. It was conducted in a small, private university in a developing country. Of 130 students, 34 surveys were collected (26%) that contained useful quantitative and qualitative data. The checklist forms used in the peer reviews were compared with the subsequent draft to gauge uptake of the feedback. The results showed that students incorporated 72% of the peer suggestions into the next versions of their reports. Overall, the peer reviews were found to be effective and motivating, and to increase students' confidence as writers. Students considered a combination of peer and teacher-given feedback the most effective way to improve writing. Interestingly, no clear preference for face-to-face peer reviews was indicated. Conducting peer reviews online therefore appears to be a valid option for teachers who want to save valuable class time. To enhance the effectiveness of online peer reviews, it is suggested that teachers give substantial learner training prior to the peer review, provide structure such as checklists or guidelines, and increase accountability by giving students a chance to meaningfully evaluate the comments and participation of their peer reviewers.

    The Impact of Exempting the Pharmaceutical Industry from Patent Reviews

    This paper analyzes the impact of an amendment to Senate Bill 1137, offered by Senator Thomas Tillis, which would exempt patents related to pharmaceuticals and biological products from the Inter Partes Review (IPR) process. The IPR process was established by the America Invents Act, which was passed and signed into law in 2011. The process is intended to provide a quick, low-cost way for those who might be affected by dubious patent claims to challenge them. In the first two years in which it was in place, almost one-third of challenged claims were canceled or removed, according to data from the United States Patent and Trademark Office (USPTO). Based on these data, the paper argues that the IPR process appears to be an effective mechanism for quickly removing dubious patent claims before they impose major costs on the economy.

    NRPA: Neural Recommendation with Personalized Attention

    Existing review-based recommendation methods usually use the same model to learn the representations of all users/items from reviews posted by users about items. However, different users have different preferences and different items have different characteristics, so the same word or similar reviews may carry different informativeness for different users and items. In this paper, we propose a neural recommendation approach with personalized attention to learn personalized representations of users and items from reviews. We use a review encoder to learn representations of reviews from words, and a user/item encoder to learn representations of users or items from reviews. We propose a personalized attention model and apply it to both the review and user/item encoders to select different important words and reviews for different users/items. Experiments on five datasets validate that our approach can effectively improve the performance of neural recommendation.
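
    A minimal sketch of the personalized-attention idea described above, assuming a user-specific query vector scores each word; the vectors, dimensions, and weighting here are illustrative assumptions, not the authors' exact NRPA architecture:

```python
# Toy personalized attention: each user has their own query vector, so the
# same words in a review receive different weights for different users.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def personalized_attention(word_vecs, user_query):
    """Weight word vectors by their dot product with a user-specific query,
    then return the weighted sum as the personalized review representation."""
    scores = [sum(w * q for w, q in zip(vec, user_query)) for vec in word_vecs]
    weights = softmax(scores)
    dim = len(word_vecs[0])
    return [sum(weights[i] * word_vecs[i][d] for i in range(len(word_vecs)))
            for d in range(dim)]

# Two users attend to the same two-word review differently.
review = [[1.0, 0.0], [0.0, 1.0]]          # two word vectors
alice_q, bob_q = [5.0, 0.0], [0.0, 5.0]    # hypothetical user queries
rep_alice = personalized_attention(review, alice_q)
rep_bob = personalized_attention(review, bob_q)
```

    Because the query differs per user, Alice's representation is dominated by the first word and Bob's by the second, which is the effect the personalized attention model aims for.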

    The voice of the child: learning lessons from serious case reviews

    This report provides an analysis of 67 serious case reviews that Ofsted evaluated between 1 April and 30 September 2010. The main focus of the report is on the importance of listening to the voice of the child. Previous Ofsted reports have analysed serious case reviews and identified this as a recurrent theme, which is considered in greater detail here.

    Are systematic reviews up-to-date at the time of publication?

    BACKGROUND: Systematic reviews provide a synthesis of evidence for practitioners, for clinical practice guideline developers, and for those designing and justifying primary research. Having an up-to-date and comprehensive review is therefore important. Our main objective was to determine the recency of systematic reviews at the time of their publication, as measured by the time from last search date to publication. We also wanted to study the time from search date to acceptance and from acceptance to publication, and to measure the proportion of systematic reviews with recorded information on search dates and information sources in the abstract and full text of the review. METHODS: A descriptive analysis of published systematic reviews indexed in Medline in 2009, 2010 and 2011, with three reviewers independently extracting data. RESULTS: Of the 300 systematic reviews included, 271 (90%) provided the date of search in the full-text article, but only 141 (47%) stated this in the abstract. The median (standard error; minimum to maximum) survival time was 5.1 (0.58; 0 to 43.8) months (95% confidence interval = 3.9 to 6.2) from last search to acceptance, and 8.0 (0.35; 0 to 46.7) months (95% confidence interval = 7.3 to 8.7) from last search to first publication. Of the 300 reviews, 295 (98%) stated which databases had been searched, but only 181 (60%) stated the databases in the abstract. Most researchers searched three (35%) or four (21%) databases. The three most used databases were MEDLINE (79%), the Cochrane Library (76%), and EMBASE (64%). CONCLUSIONS: Being able to identify comprehensive, up-to-date reviews is important to clinicians, guideline groups, and those designing clinical trials. This study demonstrates that some reviews have a considerable delay between search and publication, yet only 47% of systematic review abstracts stated the last search date and only 60% stated the databases that had been searched. Improvements in the quality of abstracts of systematic reviews, and ways to shorten the review and revision processes, are needed to make review publication more rapid.
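
    The recency measure used above, months from last search date to publication, amounts to a simple date difference; a sketch with illustrative dates (not the study's data):

```python
# Compute search-to-publication lag in whole months and summarize by the
# median, as in the study's recency measure. Dates below are made up.
from datetime import date
from statistics import median

def months_between(start, end):
    """Whole-month difference between two dates, ignoring day-of-month."""
    return (end.year - start.year) * 12 + (end.month - start.month)

pairs = [
    (date(2009, 1, 1), date(2009, 9, 1)),   # last search -> publication
    (date(2010, 3, 1), date(2010, 11, 1)),
    (date(2010, 6, 1), date(2011, 4, 1)),
]
lags = [months_between(s, p) for s, p in pairs]
median_lag = median(lags)  # months from last search to publication
```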

    Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews

    This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., "subtle nuances") and a negative semantic orientation when it has bad associations (e.g., "very cavalier"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word "excellent" minus the mutual information between the given phrase and the word "poor". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.
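
    The PMI-based orientation score described above can be sketched directly; the co-occurrence counts below are made up, standing in for the search-engine hit counts the method actually uses:

```python
# Semantic orientation: SO(phrase) = PMI(phrase, "excellent") - PMI(phrase, "poor"),
# where PMI is pointwise mutual information estimated from co-occurrence counts.
import math

def pmi(hits_near, hits_phrase, hits_anchor, total):
    # PMI(phrase, anchor) = log2( p(phrase NEAR anchor) / (p(phrase) * p(anchor)) )
    return math.log2((hits_near / total) /
                     ((hits_phrase / total) * (hits_anchor / total)))

def semantic_orientation(counts, total):
    """counts = (near 'excellent', near 'poor', phrase hits,
                 'excellent' hits, 'poor' hits) -- all hypothetical here."""
    near_exc, near_poor, hits_phrase, hits_exc, hits_poor = counts
    return (pmi(near_exc, hits_phrase, hits_exc, total)
            - pmi(near_poor, hits_phrase, hits_poor, total))

TOTAL = 1_000_000
so_good = semantic_orientation((80, 5, 1000, 20000, 20000), TOTAL)  # "subtle nuances"-like
so_bad = semantic_orientation((5, 80, 1000, 20000, 20000), TOTAL)   # "very cavalier"-like
# A review is classified "recommended" if the average SO of its phrases is positive.
```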

    From the Reviews

    From the Reviews: a bibliography of writings on international law: World Military Confrontations / Law, Policy and War / Law and the Maintenance of Peace / International Trade and Finance / European Economic Community / International Organizations / Conflicts / Private International Law

    Detecting Singleton Review Spammers Using Semantic Similarity

    Online reviews have increasingly become a very important resource for consumers when making purchases, yet it is becoming more and more difficult for people to make well-informed buying decisions without being deceived by fake reviews. Prior work on the opinion spam problem mostly considered classifying fake reviews using behavioral user patterns, focusing on prolific users who write more than a couple of reviews and discarding one-time reviewers. The number of singleton reviewers, however, is expected to be high for many review websites. While behavioral patterns are effective when dealing with elite users, for one-time reviewers the review text needs to be exploited. In this paper we tackle the problem of detecting fake reviews written by the same person using multiple names, posting each review under a different name. We propose two methods to detect similar reviews and show that the results generally outperform the vectorial similarity measures used in prior work. The first method extends the semantic similarity between words to the review level. The second method is based on topic modeling and exploits the similarity of the reviews' topic distributions using two models: bag-of-words and bag-of-opinion-phrases. The experiments were conducted on reviews from three different datasets: Yelp (57K reviews), Trustpilot (9K reviews), and the Ott dataset (800 reviews).
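
    As a point of comparison, the vectorial similarity baseline the paper says its methods outperform looks roughly like this sketch: flag a pair of reviews as a likely same-author duplicate when their bag-of-words cosine similarity crosses a threshold. The reviews and threshold are illustrative; the paper's own methods go further, using word-level semantic similarity and topic distributions.

```python
# Baseline duplicate signal: cosine similarity over bag-of-words term counts.
import math
from collections import Counter

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def likely_same_author(r1, r2, threshold=0.8):
    """Hypothetical decision rule: high lexical overlap -> likely duplicate."""
    return cosine(r1, r2) >= threshold

a = "great hotel friendly staff clean rooms highly recommend"
b = "great hotel friendly staff clean rooms recommend highly"  # reordered copy
c = "terrible food slow service will not return"
```

    A bag-of-words cosine catches word-for-word near-copies like `a` and `b`, but misses paraphrases that share meaning rather than vocabulary, which is the gap the semantic and topic-based methods above target.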

    Identifying leading indicators of product recalls from online reviews using positive unlabeled learning and domain adaptation

    Consumer protection agencies are charged with safeguarding the public from hazardous products, but the thousands of products under their jurisdiction make it challenging to identify and respond to consumer complaints quickly. From the consumer's perspective, online reviews can provide evidence of product defects, but manually sifting through hundreds of reviews is not always feasible. In this paper, we propose a system to mine Amazon.com reviews to identify products that may pose safety or health hazards. Since labeled data for this task are scarce, our approach combines positive unlabeled learning with domain adaptation to train a classifier from consumer complaints submitted to the U.S. Consumer Product Safety Commission. On a validation set of manually annotated Amazon product reviews, we find that our approach results in an absolute F1 score improvement of 8% over the best competing baseline. Furthermore, we apply the classifier to Amazon reviews of known recalled products; the classifier identifies reviews reporting safety hazards prior to the recall date for 45% of the products. This suggests that the system may be able to provide an early warning system to alert consumers to hazardous products before an official recall is announced.
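
    A toy sketch of the positive-unlabeled starting point (a simplified assumption, not the paper's system): train a word model on labeled complaints as positives against an unlabeled review pool treated as provisionally negative, then surface incoming reviews the model still scores as complaint-like. The paper's actual approach adds domain adaptation, which is omitted here, and all texts below are made up.

```python
# Positive-unlabeled heuristic with a smoothed unigram Naive Bayes log-odds score.
import math
from collections import Counter

def train_counts(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.lower().split())
    return counts

def log_score(doc, pos_counts, neg_counts):
    """Log-odds that doc is complaint-like, with add-one smoothing."""
    pos_total, neg_total = sum(pos_counts.values()), sum(neg_counts.values())
    vocab = len(set(pos_counts) | set(neg_counts))
    score = 0.0
    for w in doc.lower().split():
        p_pos = (pos_counts[w] + 1) / (pos_total + vocab)
        p_neg = (neg_counts[w] + 1) / (neg_total + vocab)
        score += math.log(p_pos / p_neg)
    return score

complaints = ["battery overheated and caught fire",          # labeled positives
              "sharp edge cut my hand badly",
              "toy broke and child choked on small part"]
unlabeled = ["fast shipping works great love it",            # treated as negative
             "device overheated and burned my desk",
             "nice color easy to use"]
pos_counts, neg_counts = train_counts(complaints), train_counts(unlabeled)

new_reviews = ["screen overheated and caught fire twice", "love the color"]
flagged = [d for d in new_reviews if log_score(d, pos_counts, neg_counts) > 0]
```

    Treating the unlabeled pool as negative biases the model, since some unlabeled reviews are真 complaints; PU learning techniques exist precisely to correct for that bias, and the domain-adaptation step handles the shift from CPSC complaint language to Amazon review language.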