
    Formulation of linguistic regression model based on natural words

    When human experts express their ideas and thoughts, they basically employ natural words. That is, experts with much professional experience are capable of making assessments using their intuition and experience. Measurements and interpretations of characteristics are subject to uncertainty, because most measured characteristics, analytical results, and field data can be interpreted only intuitively by experts. In such cases, experts may express their judgments using linguistic terms. The difficulty of directly measuring certain characteristics makes their estimation imprecise. Such measurements may be dealt with using fuzzy set theory. As Professor L. A. Zadeh has stressed the importance of computing with words, fuzzy sets can take a central role in handling words [12, 13]. In this perspective, the fuzzy logic approach is often thought of as the main, and only useful, tool for dealing with human words. In this paper we present another approach to handling human words instead of fuzzy reasoning: fuzzy regression analysis enables us to treat the computation with words. In order to process linguistic variables, we define vocabulary translation and vocabulary matching, which convert linguistic expressions into membership functions on the interval [0, 1] on the basis of a linguistic dictionary, and vice versa. We employ fuzzy regression analysis to model the assessment process of experts, from linguistic variables describing features and characteristics of an object to the linguistic expression of the total assessment. The presented process consists of four portions: (1) vocabulary translation, (2) estimation, (3) vocabulary matching, and (4) dictionary. We employed fuzzy quantification theory type 2 for estimating the total assessment in terms of linguistic structural attributes obtained from an expert.
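As a rough illustration of the vocabulary translation and matching steps, the sketch below maps words to triangular membership functions on [0, 1] and back. The three-term dictionary and the triangular shapes are illustrative assumptions, not the paper's actual linguistic dictionary.

```python
# Hypothetical three-term linguistic dictionary; the paper's actual
# dictionary and membership shapes are not specified here.
def triangular(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

DICTIONARY = {
    "low":    triangular(0.00, 0.25, 0.50),
    "medium": triangular(0.25, 0.50, 0.75),
    "high":   triangular(0.50, 0.75, 1.00),
}

def translate(term):
    """Vocabulary translation: linguistic term -> membership function."""
    return DICTIONARY[term]

def match(mu):
    """Vocabulary matching: membership function -> closest dictionary term,
    compared by squared distance on a discrete grid over [0, 1]."""
    grid = [i / 20 for i in range(21)]
    return min(DICTIONARY,
               key=lambda term: sum((mu(x) - DICTIONARY[term](x)) ** 2
                                    for x in grid))
```

In this scheme an estimation step would operate on the membership functions themselves, and `match` would turn its fuzzy output back into a word.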

    Contingent Valuation Method: Valuing Cultural Heritage

    Cultural heritage is not easy to value in a market because it is a unique product that gives communities and nations an identity and a sense of belonging. Debate on the valuation of cultural heritage continues despite growing attention from economists and policy makers. The estimation of economic values for cultural goods and services has received great attention from economists over the past two decades (Choi et al., 2009; Kaminski, McLoughlin, & Sodagar, 2007; Navrud & Ready, 2002; Noonan, 2003; Venkatachalam, 2004). Two stated preference methods are commonly used in valuing non-use goods: the contingent valuation method and choice modelling. Each of these two valuation methods has its own strengths and weaknesses, and they may even complement each other depending on the parameters of the study. However, according to Kaminski et al. (2007) and Noonan (2003), the usage of choice modelling to estimate cultural values has been limited owing to the growing usage of contingent valuation. Therefore, this paper discusses the contingent valuation method in valuing amenities and aims to contribute to the knowledge on contingent valuation for nonmarket goods. (Abstract by author)

    Non-visualization of lung markings below hemidiaphragm in subtle subpulmonic effusion: an old sign resuscitated

    To assess the lack of visibility of vascular markings under the hemidiaphragm on a frontal chest radiograph as a sign of pleural effusion, fifteen patients showing this sign were collected. Pleural effusion was diagnosed by ultrasound, by comparison with previous or subsequent chest x-rays, or by computed tomography. Patients in the study group exhibited this sign in the absence of the classical signs of pleural effusion. In the control group, lack of visibility of blood vessels was observed in only 4.2% of cases. Non-visualization of vascular markings below the hemidiaphragm should alert the interpreter to the possible presence of pleural effusion, and a lateral or decubitus view or an ultrasound examination may be carried out to rule out effusion.

    Bandgap narrowing in Mn doped GaAs probed by room-temperature photoluminescence

    The electronic band structure of the (Ga,Mn)As system has been one of the most intriguing problems in solid state physics over the past two decades. Determining how the band structure evolves with increasing Mn concentration is a key issue for understanding the origin of its ferromagnetism. Here we present room-temperature photoluminescence and ellipsometry measurements of the Ga_{100%-x}Mn_{x}As alloy. The up-shift of the valence band is proven by the red shift of the room-temperature near-band-gap emission from the Ga_{100%-x}Mn_{x}As alloy with increasing Mn content. It is shown that doping by even 0.02 at.% of Mn affects the valence-band edge, and that it merges with the impurity band at a Mn concentration as low as 0.6 at.%. Both X-ray diffraction patterns and high-resolution cross-sectional TEM images confirmed full recrystallization of the implanted layer and formation of the GaMnAs alloy. Comment: 24 pages, 7 figures, accepted at Phys. Rev. B 201

    Coexistent tuberculosis and carcinoma of the colon


    Relativistic Quantum Games in Noninertial Frames

    We study the influence of the Unruh effect on quantum non-zero-sum games. In particular, we investigate the quantum Prisoners' Dilemma for both entangled and unentangled initial states and show that the acceleration of the noninertial frame disturbs the symmetry of the game. It is shown that for a maximally entangled initial state, the classical strategy C (cooperation) becomes the dominant strategy. Our investigation shows that no quantum strategy does better for any player against the classical strategies. The miracle move of Eisert et al (1999 Phys. Rev. Lett. 83 3077) is no longer a superior move. We show that the dilemma-like situation is resolved in favor of one player or the other. Comment: 8 pages, 2 figures, 2 tables

    Robust Online Hamiltonian Learning

    In this work we combine two distinct machine learning methodologies, sequential Monte Carlo and Bayesian experimental design, and apply them to the problem of inferring the dynamical parameters of a quantum system. We design the algorithm with practicality in mind by including parameters that control trade-offs between the requirements on computational and experimental resources. The algorithm can be implemented online (during experimental data collection), avoiding the need for storage and post-processing. Most importantly, our algorithm is capable of learning Hamiltonian parameters even when the parameters change from experiment to experiment, and also when additional noise processes are present and unknown. The algorithm also numerically estimates the Cramer-Rao lower bound, certifying its own performance. Comment: 24 pages, 12 figures; to appear in New Journal of Physics
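The sequential Monte Carlo update at the heart of such an algorithm can be sketched for a toy single-parameter model with likelihood P(1 | omega, t) = sin^2(omega*t/2). The likelihood model, the fixed (non-adaptive) experiment times, and the plain bootstrap resampling below are simplifying assumptions standing in for the paper's full algorithm.

```python
import math
import random

def smc_estimate(true_omega, times, n_particles=2000, seed=0):
    """Infer a precession frequency omega from simulated two-outcome
    experiments with likelihood P(1 | omega, t) = sin^2(omega * t / 2)."""
    rng = random.Random(seed)
    # Prior: particles drawn uniformly from [0, 2] with equal weights.
    particles = [rng.uniform(0.0, 2.0) for _ in range(n_particles)]
    weights = [1.0 / n_particles] * n_particles
    for t in times:
        # Simulate one experimental outcome from the true parameter.
        outcome = 1 if rng.random() < math.sin(true_omega * t / 2) ** 2 else 0
        # Bayes update: reweight each particle by its likelihood.
        for i, omega in enumerate(particles):
            p1 = math.sin(omega * t / 2) ** 2
            weights[i] *= p1 if outcome == 1 else 1.0 - p1
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample when the effective sample size collapses.
        if 1.0 / sum(w * w for w in weights) < n_particles / 2:
            particles = rng.choices(particles, weights=weights, k=n_particles)
            weights = [1.0 / n_particles] * n_particles
    # Report the posterior mean as the point estimate.
    return sum(w * x for w, x in zip(weights, particles))
```

Mixing several experiment times breaks the aliasing of the sinusoidal likelihood, so the posterior concentrates near the true parameter even though each experiment yields only one bit.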

    A Deep Learning Framework for the Detection and Quantification of Reticular Pseudodrusen and Drusen on Optical Coherence Tomography

    PURPOSE: The purpose of this study was to develop and validate a deep learning (DL) framework for the detection and quantification of reticular pseudodrusen (RPD) and drusen on optical coherence tomography (OCT) scans. METHODS: A DL framework was developed consisting of a classification model and an out-of-distribution (OOD) detection model for the identification of ungradable scans; a classification model to identify scans with drusen or RPD; and an image segmentation model to independently segment lesions as RPD or drusen. Data were obtained from 1284 participants in the UK Biobank (UKBB) with a self-reported diagnosis of age-related macular degeneration (AMD) and 250 UKBB controls. Drusen and RPD were manually delineated by five retina specialists. The main outcome measures were sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), kappa, accuracy, intraclass correlation coefficient (ICC), and free-response receiver operating characteristic (FROC) curves. RESULTS: The classification models performed strongly at their respective tasks (AUCs of 0.95, 0.93, and 0.99 for the ungradable-scan classifier, the OOD model, and the drusen and RPD classification model, respectively). The mean ICC for the drusen and RPD area versus graders was 0.74 and 0.61, respectively, compared with 0.69 and 0.68 for intergrader agreement. FROC curves showed that the model's sensitivity was close to human performance. CONCLUSIONS: The models achieved high classification and segmentation performance, similar to human performance. TRANSLATIONAL RELEVANCE: Application of this robust framework will further our understanding of RPD as a separate entity from drusen in both research and clinical settings.
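For reference, the AUC values reported here can be computed directly from raw scores with the rank-based (Mann-Whitney) formulation; this is a generic sketch, not the study's evaluation code.

```python
def auc(labels, scores):
    """AUC as the probability that a random positive example receives a
    higher score than a random negative one (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```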

    Validation and Clinical Applicability of Whole-Volume Automated Segmentation of Optical Coherence Tomography in Retinal Disease Using Deep Learning.

    IMPORTANCE: Quantitative volumetric measures of retinal disease in optical coherence tomography (OCT) scans are infeasible to perform owing to the time required for manual grading. Expert-level deep learning systems for automatic OCT segmentation have recently been developed. However, the potential clinical applicability of these systems is largely unknown. OBJECTIVE: To evaluate a deep learning model for whole-volume segmentation of 4 clinically important pathological features and assess clinical applicability. DESIGN, SETTING, AND PARTICIPANTS: This diagnostic study used OCT data from 173 patients with a total of 15 558 B-scans, treated at Moorfields Eye Hospital. The data set included 2 common OCT devices and 2 macular conditions: wet age-related macular degeneration (107 scans) and diabetic macular edema (66 scans), covering the full range of severity, and from 3 points during treatment. Two expert graders performed pixel-level segmentations of intraretinal fluid, subretinal fluid, subretinal hyperreflective material, and pigment epithelial detachment, including all B-scans in each OCT volume, taking as long as 50 hours per scan. Quantitative evaluation of whole-volume model segmentations was performed. Qualitative evaluation of clinical applicability by 3 retinal experts was also conducted. Data were collected from June 1, 2012, to January 31, 2017, for set 1 and from January 1 to December 31, 2017, for set 2; graded between November 2018 and January 2020; and analyzed from February 2020 to November 2020. MAIN OUTCOMES AND MEASURES: Rating and stack ranking for clinical applicability by retinal specialists, model-grader agreement for voxelwise segmentations, and total volume evaluated using Dice similarity coefficients, Bland-Altman plots, and intraclass correlation coefficients. 
RESULTS: Among the 173 patients included in the analysis (92 [53%] women), qualitative assessment found that automated whole-volume segmentation ranked better than or comparable to at least 1 expert grader in 127 scans (73%; 95% CI, 66%-79%). A neutral or positive rating was given to 135 model segmentations (78%; 95% CI, 71%-84%) and 309 expert gradings (2 per scan) (89%; 95% CI, 86%-92%). The model was rated neutrally or positively in 86% to 92% of diabetic macular edema scans and 53% to 87% of age-related macular degeneration scans. Intraclass correlations ranged from 0.33 (95% CI, 0.08-0.96) to 0.96 (95% CI, 0.90-0.99). Dice similarity coefficients ranged from 0.43 (95% CI, 0.29-0.66) to 0.78 (95% CI, 0.57-0.85). CONCLUSIONS AND RELEVANCE: This deep learning-based segmentation tool provided clinically useful measures of retinal disease that would otherwise be infeasible to obtain. Qualitative evaluation was additionally important to reveal clinical applicability for both care management and research.
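The Dice similarity coefficient used above to compare model and grader segmentations can be sketched for flat binary masks; this is the standard definition in illustrative form, not the study's code.

```python
def dice(mask_a, mask_b):
    """Dice = 2|A intersect B| / (|A| + |B|) for flat binary masks."""
    assert len(mask_a) == len(mask_b)
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    if size == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / size
```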

    A Resource Aware MapReduce Based Parallel SVM for Large Scale Image Classifications

    Machine learning techniques have facilitated image retrieval by automatically classifying and annotating images with keywords. Among them, support vector machines (SVMs) are used extensively due to their generalization properties. However, SVM training is a notably computationally intensive process, especially when the training dataset is large. This paper presents RASMO, a resource-aware MapReduce-based parallel SVM algorithm for large-scale image classification, which partitions the training data set into smaller subsets and optimizes SVM training in parallel using a cluster of computers. A genetic-algorithm-based load balancing scheme is designed to optimize the performance of RASMO in heterogeneous computing environments. RASMO is evaluated in both experimental and simulation environments. The results show that the parallel SVM algorithm reduces the training time significantly compared with the sequential SMO algorithm while maintaining a high level of accuracy in classification.
    National Basic Research Program (973) of China under Grant 2014CB34040
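The partition-train-combine pattern described above can be sketched as below. The sub-gradient (Pegasos-style) trainer is a simple stand-in for the paper's SMO solver, the even split stands in for the genetic-algorithm load balancing, and the margin threshold 1.5 in the reduce phase is an arbitrary illustrative choice.

```python
import random

def train_linear_svm(data, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training; data = [(features, label +/-1)]."""
    rng = random.Random(seed)
    w = [0.0] * len(data[0][0])
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):  # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1 - eta * lam) * wi for wi in w]  # regularization shrink
            if margin < 1:  # hinge-loss sub-gradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def map_phase(data, n_partitions):
    """Map: even split of the training set, one model per partition."""
    parts = [data[i::n_partitions] for i in range(n_partitions)]
    return [train_linear_svm(p) for p in parts]

def reduce_phase(data, models):
    """Reduce: keep points near any partition's margin, retrain on them."""
    keep = [(x, y) for x, y in data
            if any(y * sum(wi * xi for wi, xi in zip(w, x)) < 1.5
                   for w in models)]
    return train_linear_svm(keep or data)

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```

In a real MapReduce deployment the map phase runs on separate workers; here it is sequential purely for illustration.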