Medical Crowdsourcing: Harnessing the “Wisdom of the Crowd” to Solve Medical Mysteries
Medical crowdsourcing offers hope to patients who suffer from complex health conditions that are difficult to diagnose. Such crowdsourcing platforms empower patients to harness the “wisdom of the crowd” by providing access to a vast pool of diverse medical knowledge. Greater participation in crowdsourcing increases the likelihood of encountering a correct solution. However, more participation also leads to increased “noise,” which makes identifying the most likely solution from a broader pool of recommendations (i.e., diagnostic suggestions) difficult. The challenge for medical crowdsourcing platforms is to increase participation of both patients and solution providers, while simultaneously increasing the efficacy and accuracy of solutions. The primary objectives of this study are: (1) to investigate means to enhance the solution pool by increasing participation of solution providers referred to as “medical detectives” or “detectives,” and (2) to explore ways of selecting the most likely diagnosis from a set of alternative possibilities recommended by medical detectives. Our results suggest that our strategy of using multiple methods for evaluating recommendations by detectives leads to better predictions. Furthermore, cases with higher perceived quality and more negative emotional tones (e.g., sadness, fear, and anger) attract more detectives. Our findings have strong implications for research and practice.
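The abstract does not detail how alternative diagnoses are scored, so the following is only an illustrative sketch of one plausible aggregation step: candidate diagnoses ranked by (optionally reputation-weighted) votes from detectives. The names `rank_diagnoses` and `detective_weights` are hypothetical and not taken from the study.

```python
from collections import defaultdict

def rank_diagnoses(recommendations, detective_weights=None):
    """Rank candidate diagnoses by summing votes from medical detectives.

    recommendations:   list of (detective_id, diagnosis) pairs
    detective_weights: optional dict detective_id -> weight in [0, 1];
                       unknown detectives default to 0.5
    """
    scores = defaultdict(float)
    for detective_id, diagnosis in recommendations:
        weight = 1.0 if detective_weights is None else detective_weights.get(detective_id, 0.5)
        scores[diagnosis] += weight
    # Highest-scoring diagnosis first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: three detectives, two distinct suggestions
recs = [("d1", "diagnosis A"), ("d2", "diagnosis A"), ("d3", "diagnosis B")]
print(rank_diagnoses(recs, {"d1": 0.9, "d2": 0.6, "d3": 0.8}))
# [('diagnosis A', 1.5), ('diagnosis B', 0.8)]
```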
Innovative business plan: a crowdsourcing medical data annotation platform company
This business plan proposes a new crowdsourcing medical data annotation platform company. Its purpose is to help companies and research institutions that are developing medical artificial intelligence outsource medical data annotation work to professional workers.
A crowdsourcing platform, as a type of intermediary platform, is a new Internet business model that provides services to large-scale enterprises. There is great demand for crowdsourcing services in the medical AI field in China, yet no company in China currently offers professional medical data annotation services. Since the emergence of medical artificial intelligence in China, most companies engaged in its research and development have had to rely on recruiting annotators themselves or abandon development, and the cost in workforce and material resources is very high.
The proposed company's services can address these issues. On the one hand, the company can deliver more cost-effective and accurate annotation data quickly through outsourcing. On the other hand, it can offer medical professionals part-time opportunities to increase their income and reduce unemployment. The analysis in this plan predicts that the proposed company can earn a stable profit by collecting commissions and selling advertising, enabling medical AI companies, the platform, and medical professionals to achieve a win-win situation. It is therefore an attractive niche market for Chinese start-ups to develop and fill.
A Review on the Applications of Crowdsourcing in Human Pathology
The advent of digital pathology has introduced new avenues of diagnostic medicine. Among them, crowdsourcing has attracted researchers' attention in recent years, allowing them to engage thousands of untrained individuals in research and diagnosis. While several articles exist on this topic, prior works have not collectively documented them. We therefore aim to review the applications of crowdsourcing in human pathology in a semi-systematic manner. We first introduce a novel method for systematically searching the literature. Utilizing this method, we then collect hundreds of articles and screen them against a pre-defined set of criteria. Furthermore, we crowdsource part of the screening process to examine another potential application of crowdsourcing. Finally, we review the selected articles and characterize the prior uses of crowdsourcing in pathology.
Empirical Methodology for Crowdsourcing Ground Truth
The process of gathering ground truth data through human annotation is a
major bottleneck in the use of information extraction methods for populating
the Semantic Web. Crowdsourcing-based approaches are gaining popularity in the
attempt to solve the issues related to volume of data and lack of annotators.
Typically these practices use inter-annotator agreement as a measure of
quality. However, in many domains, such as event detection, there is ambiguity
in the data, as well as a multitude of perspectives on the information
examples. We present an empirically derived methodology for efficiently
gathering ground truth data in a diverse set of use cases covering a variety
of domains and annotation tasks. Central to our approach is the use of
CrowdTruth metrics that capture inter-annotator disagreement. We show that
measuring disagreement is essential for acquiring a high quality ground truth.
We achieve this by comparing the quality of the data aggregated with CrowdTruth
metrics with majority vote, over a set of diverse crowdsourcing tasks: Medical
Relation Extraction, Twitter Event Identification, News Event Extraction and
Sound Interpretation. We also show that an increased number of crowd workers
leads to growth and stabilization in the quality of annotations, going against
the usual practice of employing a small number of annotators.
Comment: In publication at the Semantic Web Journal.
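The CrowdTruth software itself is not reproduced here; the following is a minimal sketch, in the spirit of the vector-based disagreement measures the abstract describes, contrasting plain majority vote with a disagreement-aware unit-quality score (mean cosine similarity between each worker's annotation vector and the aggregate of the remaining workers). The function names and toy data are assumptions for illustration only.

```python
import numpy as np

def majority_vote(annotation_vectors):
    """Keep a label only if more than half of the workers selected it."""
    vectors = np.asarray(annotation_vectors)
    return (vectors.sum(axis=0) > len(vectors) / 2).astype(int)

def unit_quality(annotation_vectors):
    """Disagreement-aware quality of a media unit: mean cosine similarity
    between each worker's annotation vector and the sum of the others."""
    vectors = np.asarray(annotation_vectors, dtype=float)
    sims = []
    for i in range(len(vectors)):
        others = np.delete(vectors, i, axis=0).sum(axis=0)
        denom = np.linalg.norm(vectors[i]) * np.linalg.norm(others)
        sims.append(float(vectors[i] @ others / denom) if denom > 0 else 0.0)
    return float(np.mean(sims))

# Each row: one worker's binary choices over four candidate labels for one unit
unit = [[1, 0, 0, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 1, 0]]
print(majority_vote(unit))  # array([1, 0, 0, 0])
print(unit_quality(unit))   # low values flag ambiguous units instead of hiding them
```

A low unit-quality score signals genuine ambiguity in the example, which the abstract argues should be measured rather than averaged away by majority vote.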
Crowdsourcing contests to facilitate community engagement in HIV cure research: a qualitative evaluation of facilitators and barriers of participation.
BACKGROUND: As HIV cure research advances, there is an increasing need for community engagement in health research, especially in low- and middle-income countries with ongoing clinical trials. Crowdsourcing contests provide an innovative bottom-up way to solicit community feedback on clinical trials in order to enhance community engagement. The objective of this study was to identify facilitators and barriers to participating in crowdsourcing contests about HIV cure research in a city with ongoing HIV cure clinical trials. METHODS: We conducted in-depth interviews to evaluate facilitators and barriers to participating in crowdsourcing contests in Guangzhou, China. Contests included the following activities: organizing a call for entries, promoting the call, evaluating entries, celebrating exceptional entries, and sharing entries. We interviewed 31 individuals, including nine HIV cure clinical trial participants, 17 contest participants, and five contest organizers. Our sample included men who have sex with men (20), people living with HIV (14), and people who inject drugs (5). We audio-recorded, transcribed, and thematically analyzed the data using inductive and deductive coding techniques. RESULTS: Facilitators of crowdsourcing contest participation included responsiveness to lived experiences, strong community interest in HIV research, and community trust in medical professionals and related groups. Contests had more participants if they responded to the lived experiences, challenges, and opportunities of living with HIV in China. Strong community interest in HIV research helped to drive the formulation and execution of HIV cure contests, building support and momentum for these activities. Finally, participant trust in medical professionals and related groups (community-based organizations and contest organizers) further strengthened the ties between community members and researchers. Barriers to participating in crowdsourcing contests included persistent HIV stigma and myths about HIV. Stigma associated with discussing HIV made promotion difficult in certain contexts (e.g., city squares and schools). Myths and misperceptions about HIV science confused participants. CONCLUSIONS: Our data identified facilitators and barriers of participation in HIV cure crowdsourcing contests in China. Our findings could complement existing HIV community engagement strategies and help to design HIV contests for community engagement in other settings, particularly in low- and middle-income countries.
Learning under Distributed Weak Supervision
The availability of training data for supervision is a frequently encountered bottleneck of medical image analysis methods. While such data are typically established by a clinical expert rater, the increase in acquired imaging data renders traditional pixel-wise segmentation less feasible. In this paper, we examine the use of a crowdsourcing platform for the distribution of super-pixel weak annotation tasks and collect such annotations from a crowd of non-expert raters. The crowd annotations are subsequently used for training a fully convolutional neural network to address the problem of fetal brain segmentation in T2-weighted MR images. Using this approach, we report encouraging results compared to highly targeted, fully supervised methods and potentially address a frequent problem impeding image analysis research.
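The paper's exact label-fusion step is not given in the abstract; as an illustration of how super-pixel weak annotations from multiple non-expert raters could be turned into a pixel-wise mask for training a fully convolutional network, here is a minimal sketch using a per-super-pixel majority vote. The function name and toy data are hypothetical.

```python
import numpy as np

def fuse_superpixel_annotations(superpixel_map, crowd_votes):
    """Fuse crowd-sourced super-pixel labels into a pixel-wise binary mask.

    superpixel_map: 2-D int array assigning every pixel to a super-pixel id
    crowd_votes:    dict super-pixel id -> list of 0/1 votes from raters
                    (1 = the rater marked the super-pixel as foreground)
    A super-pixel becomes foreground if the majority of raters marked it so.
    """
    mask = np.zeros_like(superpixel_map, dtype=np.uint8)
    for sp_id, votes in crowd_votes.items():
        if np.mean(votes) > 0.5:
            mask[superpixel_map == sp_id] = 1
    return mask

# Toy example: a 4x4 image split into four 2x2 super-pixels (ids 0..3)
sp_map = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
votes = {0: [1, 1, 0], 1: [0, 0, 1], 2: [1, 1, 1], 3: [0, 0, 0]}
print(fuse_superpixel_annotations(sp_map, votes))
```

The resulting mask can then serve as (noisy) pixel-wise supervision for a segmentation network in place of expert delineations.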
FEMwiki: crowdsourcing semantic taxonomy and wiki input to domain experts while keeping editorial control: Mission Possible!
Highly specialized professional communities of practice (CoP) inevitably need to operate across geographically dispersed areas: members frequently need to interact and share professional content. Crowdsourcing using wiki platforms provides a novel way for a professional community to share ideas and collaborate on content creation, curation, maintenance and sharing. This is the aim of the Field Epidemiological Manual wiki (FEMwiki) project, which enables online collaborative content sharing and interaction for field epidemiologists around a growing training wiki resource. However, while user contributions are the driving force for content creation, any medical information resource needs to keep editorial control and quality assurance, a requirement typically in conflict with community-driven Web 2.0 content creation. To maximize the opportunities for the network of epidemiologists actively editing the wiki content while keeping quality and editorial control, a novel structure was developed to encourage crowdsourcing: support for dual versioning of each wiki page, enabling maintenance of expert-reviewed pages in parallel with user-updated versions, with clear navigation between the related versions. Secondly, the training wiki content needs to be organized in a semantically enhanced taxonomical navigation structure enabling domain experts to find information easily on a growing site. This also provides an ideal opportunity for crowdsourcing. We developed a user-editable collaborative interface that crowdsources live maintenance of the taxonomy to the community of field epidemiologists by embedding the taxonomy in a training wiki platform and generating the semantic navigation hierarchy on the fly. Launched in 2010, FEMwiki is a real-world service supporting field epidemiologists in Europe and worldwide. The success of crowdsourcing was evaluated by assessing the number and type of changes made by the professional network of epidemiologists over several months, demonstrating that crowdsourcing encourages users to edit existing content and create new content, and also leads to expansion of the domain taxonomy.
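The FEMwiki implementation itself runs on a wiki platform and is not reproduced here; the following is only a schematic sketch of the dual-versioning idea the abstract describes, with an expert-reviewed version kept in parallel with the community-edited draft. Class and method names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DualVersionPage:
    """Schematic model of a FEMwiki-style page with dual versioning."""
    title: str
    taxonomy_path: List[str] = field(default_factory=list)  # navigation hierarchy, e.g. ["Surveillance"]
    expert_version: Optional[str] = None      # last editorially approved text
    community_version: Optional[str] = None   # latest crowd-edited draft

    def submit_edit(self, text: str) -> None:
        # Community edits never overwrite the reviewed version directly
        self.community_version = text

    def approve(self) -> None:
        # An editor promotes the community draft, keeping editorial control
        if self.community_version is not None:
            self.expert_version = self.community_version

page = DualVersionPage("Case definition", ["Surveillance"], expert_version="Reviewed text v1")
page.submit_edit("Community-updated text v2")
page.approve()
print(page.expert_version)  # Community-updated text v2
```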