Econometrics meets sentiment: an overview of methodology and applications
The advent of massive amounts of textual, audio, and visual data has spurred the development of econometric methodology to transform qualitative sentiment data into quantitative sentiment variables, and to use those variables in econometric analyses of the relationships between sentiment and other variables. We survey this emerging research field, which we call sentometrics, a portmanteau of sentiment and econometrics. We provide a synthesis of the relevant methodological approaches, illustrate them with empirical results, and discuss useful software.
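The core sentometrics step described above, turning qualitative text into a quantitative sentiment variable suitable for regression, can be sketched as a simple lexicon-based pipeline. This is a minimal illustration, not the survey's methodology; the tiny lexicon and sample headlines are hypothetical.

```python
# Minimal sketch: score each document with a polarity lexicon, then
# aggregate document scores into a per-period sentiment index.
from statistics import mean

# Hypothetical polarity lexicon (word -> sentiment weight).
LEXICON = {"growth": 1, "strong": 1, "recovery": 1,
           "recession": -1, "weak": -1, "losses": -1}

def sentiment_score(text):
    """Average lexicon polarity of the words in one document (0 if no hits)."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return mean(hits) if hits else 0.0

# Hypothetical documents grouped by time period.
docs_by_period = {
    "2020Q1": ["strong growth expected", "recovery continues"],
    "2020Q2": ["recession fears and heavy losses", "weak demand"],
}

# The resulting index is the quantitative sentiment variable that could
# enter an econometric model alongside other regressors.
index = {t: mean(sentiment_score(d) for d in docs)
         for t, docs in docs_by_period.items()}
print(index)  # -> {'2020Q1': 1.0, '2020Q2': -1.0}
```

Real sentometric applications replace the toy lexicon with richer text, audio, or visual sentiment measures and weight the aggregation over documents and time, but the text-to-variable structure is the same.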
Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation
Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an effective remedy, as it provides a measure of confidence in the segmentation results. Current uncertainty estimation methods based on quantile regression, Bayesian neural networks, ensembles, and Monte Carlo dropout are limited by high computational cost and inconsistency. To overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work, but primarily for natural image classification, and it showed inferior segmentation results. In this paper, we propose a region-based EDL segmentation framework that generates reliable uncertainty maps and accurate segmentation results and is robust to noise and image corruption. We use the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, evidence is parameterized as a Dirichlet distribution, and predicted probabilities are treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrate the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, the proposed framework retains the advantages of low computational cost and easy implementation, and shows potential for clinical application.
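The evidence-to-Dirichlet step that EDL abstracts like this one refer to can be sketched in a few lines: non-negative per-class evidence e_k yields Dirichlet parameters alpha_k = e_k + 1, expected class probabilities alpha_k / S, and a vacuity-style uncertainty K / S (with S the sum of the alphas and K the number of classes). The toy per-voxel logits below are illustrative; the network architecture and the paper's region-based loss are omitted.

```python
# Sketch of the Subjective Logic / EDL output head, assuming a ReLU
# evidence function (a common choice; the paper's exact variant may differ).
import numpy as np

def edl_outputs(logits):
    evidence = np.maximum(logits, 0.0)        # non-negative evidence per class
    alpha = evidence + 1.0                    # Dirichlet parameters
    S = alpha.sum(axis=-1, keepdims=True)     # Dirichlet strength
    prob = alpha / S                          # expected class probabilities
    uncertainty = logits.shape[-1] / S        # vacuity u = K / S, in (0, 1]
    return prob, uncertainty.squeeze(-1)

# Two hypothetical voxels over 3 classes:
logits = np.array([[9.0, 1.0, 0.0],    # strong evidence -> low uncertainty
                   [0.0, 0.0, 0.0]])   # no evidence -> maximal uncertainty
prob, u = edl_outputs(logits)
```

With zero evidence the Dirichlet collapses to the uniform prior (probabilities 1/3 each, u = 1), which is exactly the behaviour that lets EDL flag unreliable regions in a single forward pass, without the ensembles or repeated dropout samples mentioned above.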
Applying concepts of fuzzy cognitive mapping to model IT/IS investment evaluation factors
The justification process is a major concern for many organisations considering the adoption of Information Technology (IT) and Information Systems (IS), and is a barrier to their implementation. As a result, the competitive advantage of many companies is being put at risk by management's inability to evaluate the holistic implications of adopting new technology, in terms of both the benefit and cost portfolios. This paper identifies a number of well-known project appraisal techniques used in IT/IS investment justification. Furthermore, the concept of multivalent, or fuzzy, logic is used to demonstrate how inter-relationships can be modelled between key dimensions identified in the proposed conceptual evaluation model. This is highlighted using fuzzy cognitive mapping (FCM) as a technique to model each IT/IS evaluation factor (integrating strategic, tactical, operational and investment considerations). The use of an FCM is then shown to be a complementary tool that can highlight interdependencies between contributory justification factors.
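The inter-relationship modelling that FCM provides can be sketched as a simple iteration: concept activations are updated as A(t+1) = f(A(t) W), where W holds the signed causal weights between factors and f is a sigmoid squashing function, and the map is run until the activations settle. The three factors and the weight matrix below are illustrative placeholders, not taken from the paper.

```python
# Minimal fuzzy-cognitive-map sketch, assuming a sigmoid threshold function.
import numpy as np

def fcm_run(A, W, steps=50):
    """Iterate concept activations A under causal weight matrix W."""
    for _ in range(steps):
        A = 1.0 / (1.0 + np.exp(-(A @ W)))   # sigmoid keeps activations in (0, 1)
    return A

# Hypothetical factors: [strategic benefit, operational cost, investment decision]
W = np.array([[0.0, 0.0,  0.8],    # strategic benefit promotes investment
              [0.0, 0.0, -0.6],    # operational cost inhibits investment
              [0.0, 0.0,  0.0]])   # the decision node feeds nothing back
A0 = np.array([1.0, 0.5, 0.0])     # initial activation of each concept
state = fcm_run(A0, W)
print(state)
```

Reading off the converged activation of the "investment decision" node shows the net effect of the interacting benefit and cost factors, which is the kind of interdependency the paper's evaluation model aims to surface.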
Representing archaeological uncertainty in cultural informatics
This thesis sets out to explore, describe, quantify, and visualise uncertainty in a cultural informatics context, with a focus on archaeological reconstructions. For quite some time, archaeologists and heritage experts have been criticising the often too-realistic appearance of three-dimensional reconstructions. They have been highlighting one of the unique features of archaeology: the information we have on our heritage will always be incomplete. This incompleteness should be reflected in digitised reconstructions of the past.
This criticism is the driving force behind this thesis. The research examines archaeological theory and inferential process and provides insight into computer visualisation. It describes how these two areas, of archaeology and computer graphics, have formed a useful, but often tumultuous, relationship through the years.
By examining the uncertainty background of disciplines such as GIS, medicine, and law, the thesis postulates that archaeological visualisation, in order to mature, must move towards archaeological knowledge visualisation. Three sequential areas are proposed through this thesis for the initial exploration of archaeological uncertainty: identification, quantification and modelling. The main contributions of the thesis lie in those three areas.
Firstly, through the innovative design, distribution, and analysis of a questionnaire, the thesis identifies the importance of uncertainty in archaeological interpretation and discovers potential preferences among different evidence types.
Secondly, the thesis analyses and evaluates, in relation to archaeological uncertainty, three different belief quantification models. The varying ways in which these mathematical models work are also evaluated through simulated experiments. Comparison of the results indicates significant convergence between the models.
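The abstract does not name the three belief quantification models, but Dempster-Shafer theory is a representative approach to quantifying belief from multiple evidence sources, and it also produces the conflict mass that the thesis's conflict visualisation concerns. A minimal, illustrative sketch of Dempster's rule of combination (the hypotheses and mass values are invented):

```python
# Dempster's rule: fuse two mass functions over the same frame of discernment,
# redistributing mass from conflicting (empty-intersection) pairs.
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)."""
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2            # mass falling on the empty set
    norm = 1.0 - conflict                      # renormalise over agreeing mass
    return {h: v / norm for h, v in combined.items()}, conflict

# Two hypothetical evidence sources about the date of a reconstructed feature.
m1 = {frozenset({"roman"}): 0.6, frozenset({"roman", "saxon"}): 0.4}
m2 = {frozenset({"saxon"}): 0.5, frozenset({"roman", "saxon"}): 0.5}
fused, K = dempster_combine(m1, m2)
print(fused, K)   # K is the conflict mass between the two sources
```

The conflict mass K is a natural quantity to drive an evidence-conflict visualisation: the higher it is, the more the sources disagree about the reconstruction.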
Thirdly, a novel approach to archaeological uncertainty and evidence conflict visualisation is presented, influenced by information visualisation schemes. Lastly, suggestions for future semantic extensions to this research are presented through the design and development of new plugins to a search engine.
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), and major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.