25,651 research outputs found

    Voice and speech functions (B310-B340)

    The International Classification of Functioning, Disability and Health for Children and Youth (ICF-CY) domain ‘voice and speech functions’ (b3) includes production and quality of voice (b310), articulation functions (b320), fluency and rhythm of speech (b330) and alternative vocalizations (b340, such as making musical sounds and crying, which are not reviewed here).

    A probabilistic threshold model: Analyzing semantic categorization data with the Rasch model

    According to the Threshold Theory (Hampton, 1995, 2007), semantic categorization decisions come about through the placement of a threshold criterion along a dimension that represents items' similarity to the category representation. The adequacy of this theory is assessed by applying a formalization of the theory, known as the Rasch model (Rasch, 1960; Thissen & Steinberg, 1986), to categorization data for eight natural language categories and subjecting it to a formal test. In validating the model, special care is given to its ability to account for inter- and intra-individual differences in categorization and their relationship with item typicality. Extensions of the Rasch model that can be used to uncover the nature of category representations and the sources of categorization differences are discussed.
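    The Rasch model named in the abstract has a standard logistic form. As a minimal illustrative sketch (the function name, symbols, and values below are our own, not taken from the paper), the probability that a respondent endorses an item as a category member rises as the item's similarity to the category exceeds that respondent's threshold:

```python
import math

def p_member(similarity: float, threshold: float) -> float:
    """Rasch-style endorsement probability: an item is judged a category
    member with probability sigma(similarity - threshold), i.e. a logistic
    function of how far the item's similarity to the category lies above
    the respondent's threshold criterion (illustrative sketch)."""
    return 1.0 / (1.0 + math.exp(-(similarity - threshold)))

# An item exactly at the threshold is endorsed half the time;
# more typical (more similar) items are endorsed more often.
print(p_member(1.0, 1.0))   # 0.5
print(p_member(2.0, 0.0))   # ~0.88
```

    Inter-individual differences correspond to different threshold placements, while item typicality corresponds to the item's position on the similarity dimension.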

    Please, talk about it! When hotel popularity boosts preferences

    Many consumers post on-line reviews, affecting the average evaluation of products and services. Yet, little is known about the importance of the number of reviews for consumer decision making. We conducted an on-line experiment (n = 168) to assess the joint impact of the average evaluation, a measure of quality, and the number of reviews, a measure of popularity, on hotel preference. The results show that consumers' preference increases with the number of reviews, independently of the average evaluation being high or low. This is not what one would expect from an informational point of view, and review websites fail to take this pattern into account. This novel result is mediated by demographics: young people, and in particular young males, are less affected by popularity, relying more on quality. We suggest the adoption of appropriate ranking mechanisms to fit consumer preferences. © 2014 Elsevier Ltd
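    One ranking mechanism of the kind the authors call for is a damped (Bayesian) average, which trades popularity off against quality explicitly by shrinking a hotel's mean rating toward a site-wide prior when it has few reviews. The function and its prior values below are an illustrative assumption, not the paper's proposal:

```python
def bayesian_average(avg_rating: float, n_reviews: int,
                     prior_mean: float = 3.5, prior_weight: int = 20) -> float:
    """Damped average: a weighted blend of a site-wide prior rating and the
    hotel's own mean, where the hotel's mean dominates only once it has
    accumulated enough reviews (prior values are illustrative)."""
    return (prior_mean * prior_weight + avg_rating * n_reviews) / (prior_weight + n_reviews)

# A 4.8-star hotel with 5 reviews ranks below a 4.4-star hotel with 400:
print(bayesian_average(4.8, 5))    # 3.76
print(bayesian_average(4.4, 400))  # ~4.36
```

    Such a score counteracts the popularity bias the experiment documents, since a handful of glowing reviews no longer outranks a large, consistent body of evidence.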

    Engaging Qualities: factors affecting learner attention in online design studios

    This study looks at the qualities of learner-generated online content, as rated by experts, and how these relate to learners’ engagement through comments and conversations around this content. The work uploaded to an Online Design Studio by students across a Design and Innovation Qualification was rated and analysed quantitatively using the Consensual Assessment Technique (CAT). Correlations of qualities to comments made on this content were considered, and a qualitative analysis of the comments was carried out. It was observed that design students do not necessarily pay attention to the same qualities in learner-generated content that experts rate highly, except for a particular quality at the first level of study. The content that students do engage with also changes with increasing levels of study. These findings have implications for the learning design of online design courses and qualifications, as well as for design institutions seeking to supplement proximate design studios with Online Social Network Services.

    To Disclose or Not to Disclose, That Is the Question: Evidence from TripAdvisor

    Online consumers may be hesitant to disclose personal information due to potential threats, which can affect their content generation. This, in turn, poses a challenge to the credibility and sustainability of online reviews on digital platforms. To address this issue, our research examines how consumers' self-disclosure affects their rating behaviors and whether a positive-negative asymmetry exists, based on negativity bias. Utilizing data from TripAdvisor, our analysis demonstrated that consumers' self-disclosure reduced rating inconsistency and strengthened herding behavior for those submitting ratings lower than the hotel’s average ratings. Additionally, we found that certain factors, such as more peer disclosure, longer time intervals between check-in and review posting, and greater expertise, can mitigate the negative impact of self-disclosure on rating behavior. Our findings make critical contributions to the extant literature and provide significant managerial implications for participants in digital platforms.
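    For illustration only (the paper's exact measure may differ), rating inconsistency of the kind studied here can be operationalized as the deviation of a review's rating from the hotel's prevailing average, with herding corresponding to small deviations:

```python
def rating_inconsistency(rating: float, hotel_avg: float) -> float:
    """Hypothetical operationalization: absolute deviation of a consumer's
    submitted rating from the hotel's current average rating. Smaller
    values indicate ratings that herd toward the consensus."""
    return abs(rating - hotel_avg)

# A self-disclosing reviewer who rates below the average but stays close
# to it exhibits lower inconsistency than one who deviates sharply.
print(rating_inconsistency(4.0, 4.2))
print(rating_inconsistency(1.0, 4.2))
```

    Under this reading, the finding is that self-disclosure pushes this deviation down, especially for below-average ratings.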

    User evaluation of a pilot terminologies server for a distributed multi-scheme environment

    The present paper reports on a user-centred evaluation of a pilot terminology service developed as part of the High Level Thesaurus (HILT) project at the Centre for Digital Library Research (CDLR) in the University of Strathclyde in Glasgow. The pilot terminology service was developed as an experimental platform to investigate issues relating to mapping between various subject schemes, namely Dewey Decimal Classification (DDC), Library of Congress Subject Headings (LCSH), the Unesco thesaurus, and the MeSH thesaurus, in order to cater for cross-browsing and cross-searching across distributed digital collections and services. The aim of the evaluation reported here was to investigate users' thought processes, perceptions, and attitudes towards the pilot terminology service and to identify user requirements for developing a full-blown terminology service.
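    Cross-browsing and cross-searching of this kind hinge on expanding a query term via mappings between subject schemes before the distributed search is issued. A toy sketch of the idea (the mapping table, function names, and returned headings are invented for illustration and are not HILT's actual data or API):

```python
# Hypothetical cross-scheme mapping table: a notation in one scheme is
# linked to its closest headings in the other schemes.
MAPPINGS: dict = {
    ("DDC", "616.89"): {"LCSH": "Psychiatry", "MeSH": "Mental Disorders"},
}

def expand_query(scheme: str, notation: str) -> dict:
    """Return the mapped headings in other schemes for a given term,
    or an empty dict when no mapping is known (illustrative data only)."""
    return MAPPINGS.get((scheme, notation), {})

# A DDC-classified query can then also be run against LCSH- and
# MeSH-indexed collections using the mapped headings.
print(expand_query("DDC", "616.89"))
```

    A real terminology server would also handle partial and one-to-many mappings, which is precisely the kind of issue the HILT pilot was built to surface.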

    What impacts the helpfulness of online multidimensional reviews? A perspective from cross-attribute rating and ranking inconsistency

    This paper investigates the effects of information inconsistency, particularly ranking inconsistency, on review helpfulness in a multidimensional rating system, drawing on information diagnosticity and attribution theory. The key findings are: (a) a product's cross-attribute dispersion has a significant negative impact on review helpfulness, while the overall attribute ranking consistency and the ranking consistency of the product's best prominent attribute positively impact review helpfulness. (b) The product's cross-attribute dispersion negatively impacts review helpfulness for non-luxury products but positively impacts it for luxury products, while the cross-attribute rating difference of a single review positively impacts its helpfulness only if the product is non-luxury. (c) The overall attribute ranking consistency significantly impacts review helpfulness only for luxury products, whereas the ranking consistency of the product's best and worst prominent attributes impacts review helpfulness only for non-luxury products.
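    One plausible operationalization of cross-attribute dispersion, offered purely as an illustrative assumption (the paper's exact definition may differ), is the spread of a product's mean ratings across its review dimensions:

```python
import statistics

def cross_attribute_dispersion(attr_ratings: dict) -> float:
    """Hypothetical measure: the population standard deviation of a
    product's mean ratings across attributes (e.g. location, service).
    Higher values mean the attributes tell a more conflicting story."""
    return statistics.pstdev(attr_ratings.values())

# A hotel rated unevenly across dimensions has higher dispersion than
# one rated uniformly, which (per finding (a)) tends to hurt helpfulness.
print(cross_attribute_dispersion(
    {"location": 4.6, "cleanliness": 4.1, "service": 3.2, "value": 3.7}))
print(cross_attribute_dispersion(
    {"location": 4.0, "cleanliness": 4.0, "service": 4.0, "value": 4.0}))
```

    The attribute names here are illustrative; any multidimensional rating system with per-attribute scores supports the same computation.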

    Disparity between the Programmatic Views and the User Perceptions of Mobile Apps

    User perception in any mobile-app ecosystem is represented as user ratings of apps. Unfortunately, user ratings are often biased and do not reflect the actual usability of an app. To address the challenges associated with selecting and ranking apps, we need a comprehensive and holistic view of an app's behavior. In this paper, we present and evaluate the Trust-based Rating and Ranking (TRR) approach. It relies solely on an app's internal view, which uses programmatic artifacts. We compute a trust tuple (Belief, Disbelief, Uncertainty - B, D, U) for each app based on this internal view and use it to rank-order apps offering similar functionality. The apps used to empirically evaluate the TRR approach were collected from the Google Play Store. Our experiments compare the TRR ranking with the user review-based ranking in the Google Play Store. Although there are disparities between the two rankings, a slightly deeper investigation indicates an underlying similarity between the two alternatives.
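    The abstract does not give the (B, D, U) computation. One common formulation for such tuples, from subjective logic, derives them from counts of positive and negative evidence; it is used below purely as an assumption about what a tuple like TRR's could look like, not as the paper's actual method:

```python
def trust_tuple(positive: int, negative: int) -> tuple:
    """Subjective-logic-style opinion (illustrative, not TRR's definition):
    belief grows with positive evidence, disbelief with negative evidence,
    and uncertainty shrinks as total evidence accumulates, with
    B + D + U = 1 by construction."""
    k = positive + negative + 2
    return positive / k, negative / k, 2 / k

# An app with mostly positive programmatic evidence earns high belief
# and low uncertainty; apps can then be rank-ordered by belief.
b, d, u = trust_tuple(8, 2)
print(b, d, u)
```

    Ranking apps of similar functionality then reduces to sorting their tuples, e.g. by descending belief with uncertainty as a tie-breaker.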

    Average Scores Integration in Official Star Rating Scheme

    Purpose: Evidence suggests that electronic word-of-mouth (eWOM) plays a highly influential role in decision-making when booking hotel rooms. The number of online sources where consumers can obtain hotel ratings has grown exponentially. Hence, a number of companies have developed average scores to summarize this information and make it more easily available to consumers. Furthermore, official star rating schemes are starting to provide these commercially developed average scores to complement the information their schemes offer. The purpose of this paper is to examine the robustness of these systems. Design/methodology/approach: Average scores from different systems, and the scores provided by one rating site, were collected for 200 hotels and compared. Findings: Findings suggested important differences in the ratings and assigned descriptive words across websites. Research limitations/implications: The results imply that the application of average scores by official organizations is not legitimate and identify a research gap in the area of consumer and star rating standardization. Originality/value: The paper is of value to industry and academia through its examination of the rating scales adopted by major online review tourism providers. Evidence of malpractice has been identified, and the adoption of this type of scale by official star rating schemes is questioned. Peer reviewed