
    Probabilities and health risks: a qualitative approach

    Health risks, defined in terms of the probability that an individual will suffer a particular type of adverse health event within a given time period, can be understood as referencing either natural entities or complex patterns of belief which incorporate the observer's values and knowledge; the latter is the position adopted in the present paper. The subjectivity inherent in judgements about adversity and time frames is easily recognised, but social scientists have tended to accept uncritically the objectivity of probability. Most commonly in health risk analysis, the term probability refers to rates established by induction, which require the definition of a numerator and a denominator. Depending upon their specification, many probabilities may reasonably be postulated for the same event, and individuals may change their risks by deciding to seek or avoid information. These apparent absurdities can be understood if probability is conceptualised as the projection of expectation onto the external world. Probabilities based on induction from observed frequencies provide glimpses of the future at the price of accepting the simplifying heuristic that statistics derived from aggregate groups can be validly attributed to individuals within them. The paper illustrates four implications of this conceptualisation of probability with qualitative data from a variety of sources, particularly a study of genetic counselling for pregnant women in a U.K. hospital. Firstly, the official selection of a specific probability heuristic reflects organisational constraints and values as well as predictive optimisation. Secondly, professionals and service users must work to maintain the facticity of an established heuristic in the face of alternatives. Thirdly, individuals, both lay and professional, manage probabilistic information in ways which support their strategic objectives. Fourthly, predictively sub-optimal schemata, for example the idea of AIDS as a gay plague, may be selected because they match prevailing social value systems.
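    To make the reference-class point concrete, here is a minimal sketch (in Python, with hypothetical counts rather than data from the paper) of how the "same" risk changes with the choice of denominator:

    ```python
    # Minimal sketch of the reference-class problem: probability as an induced
    # rate (numerator / denominator) depends on which aggregate group the
    # individual is assigned to. All counts below are hypothetical.

    def rate(adverse_events: int, group_size: int) -> float:
        """Probability as an induced rate: numerator over denominator."""
        return adverse_events / group_size

    # Hypothetical reference classes for one pregnant woman aged 38:
    estimates = {
        "all pregnancies":          rate(50, 10_000),
        "pregnancies, mother 35+":  rate(30, 2_000),
        "pregnancies, mother 38":   rate(12, 500),
    }

    for reference_class, p in estimates.items():
        print(f"{reference_class:26s} -> risk {p:.2%}")

    # Each figure is a defensible probability for the same event; selecting
    # among them is the heuristic choice the paper analyses.
    ```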

    Finding patterns in student and medical office data using rough sets

    Data have been obtained from King Khaled General Hospital in Saudi Arabia. In this project, I attempt to discover patterns in these data using algorithms implemented in an experimental tool called the Rough Set Graphic User Interface (RSGUI). Several algorithms are available in RSGUI, each based on rough set theory. My objective is to find short, meaningful predictive rules. First, we need to find a minimum set of attributes that fully characterises the data. Some of the rules generated from this minimum set will be obvious, and therefore uninteresting; others will be surprising, and therefore interesting. The usual measures of the strength of a rule, such as its length, certainty, and coverage, were considered. In addition, a measure of the interestingness of the rules was developed based on questionnaires administered to human subjects. The RSGUI Java code contained bugs; in particular, the Inductive Learning Algorithm (ILA) missed some cases, a defect subsequently resolved in ILA2 but never carried back into RSGUI. I fixed the ILA implementation in RSGUI, so it now runs correctly and gives good results for all cases encountered in the hospital administration and student records data. (Master's thesis)
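    For concreteness, a minimal sketch of the two rule-strength measures mentioned above, certainty and coverage, computed over a toy decision table (hypothetical rows, not the hospital or student data):

    ```python
    # certainty(rule) = |cond ∩ dec| / |cond|  (how reliable the rule is)
    # coverage(rule)  = |cond ∩ dec| / |dec|   (how much of the decision it explains)

    records = [
        {"ward": "A", "stay": "long",  "outcome": "readmit"},
        {"ward": "A", "stay": "long",  "outcome": "readmit"},
        {"ward": "A", "stay": "short", "outcome": "ok"},
        {"ward": "B", "stay": "long",  "outcome": "ok"},
        {"ward": "B", "stay": "short", "outcome": "ok"},
    ]

    def rule_strength(records, condition, decision):
        cond = [r for r in records if all(r[a] == v for a, v in condition.items())]
        dec  = [r for r in records if all(r[a] == v for a, v in decision.items())]
        both = [r for r in cond if r in dec]
        certainty = len(both) / len(cond) if cond else 0.0
        coverage  = len(both) / len(dec) if dec else 0.0
        return certainty, coverage

    # Rule: ward = A AND stay = long  =>  outcome = readmit
    c, v = rule_strength(records, {"ward": "A", "stay": "long"}, {"outcome": "readmit"})
    print(f"certainty = {c:.2f}, coverage = {v:.2f}")
    ```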

    Three Remarks on “Reflective Equilibrium”

    John Rawls’ “reflective equilibrium” ranks amongst the most popular conceptions in contemporary ethics when it comes to the basic methodological question of how to justify and trade off different normative positions and attitudes. Even where Rawls’ specific contractualist account is not adhered to, “reflective equilibrium” is readily adopted as the guiding idea of coherentist approaches, which seek moral justification not in a purely deductive or inductive manner, but in some balancing procedure that will eventually procure a stable adjustment of the relevant doctrines and standpoints. However, it appears that the widespread use of this idea has led to considerable deviations from its meaning within Rawls’ original framework and to a critical loss of conceptual cogency as an ethico-hermeneutical tool. This contribution identifies three kinds of “balancing” constellations that are frequently, but inadequately, brought forth under the heading of Rawlsian “reflective equilibrium”: balancing theoretical accounts against intuitive convictions; balancing general principles against particular judgements; and balancing opposite ethical conceptions or divergent moral statements, respectively. It is argued that each of these applications departs from Rawls’ original construction of “reflective equilibrium” and deprives the idea of its reliability in clarifying and weighing moral stances.

    Preface


    Survey: Data Mining Techniques in Medical Data Field

    Nowadays, much research in the medical data field focuses on data mining techniques. Knowledge discovery and data mining have found numerous applications in business and scientific domains, and valuable knowledge can be discovered by applying data mining techniques to healthcare systems. In this study, we briefly examine the potential use of classification-based data mining techniques such as rule-based methods and decision trees, machine learning algorithms such as Support Vector Machines and Principal Component Analysis, rough set theory, and fuzzy logic. In particular, we consider a case study using classification techniques on a medical data set of diabetic patients.
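    As an illustration of the kind of case study described, a minimal sketch of one of the named classifiers (a Support Vector Machine) applied to diabetic-patient data; the file name and its Outcome column are assumptions modelled on the public Pima Indians diabetes set, not the survey's actual data:

    ```python
    # Minimal sketch: SVM classification of a diabetic-patient table.
    # "diabetes.csv" and its "Outcome" label column are assumed, not
    # the data set used in the survey's case study.
    import pandas as pd
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    df = pd.read_csv("diabetes.csv")
    X, y = df.drop(columns="Outcome"), df["Outcome"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y
    )

    # SVMs are scale-sensitive, so standardise features inside the pipeline.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
    ```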

    Rough Set Granularity in Mobile Web Pre-Caching

    Mobile Web pre-caching (Web prefetching and caching) is an approach to enhancing performance within the storage limitations of mobile devices.
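    The abstract is truncated, but the underlying mechanism is easy to sketch: a size-bounded cache that also prefetches the page a user is predicted to request next. In this minimal sketch the prediction table is a placeholder, not the paper's rough-set granulation:

    ```python
    # Minimal sketch of Web pre-caching: LRU eviction bounds storage (the
    # mobile-device constraint) while a prediction table drives prefetching.
    from collections import OrderedDict

    class PreCache:
        def __init__(self, capacity, next_page):
            self.capacity = capacity
            self.next_page = next_page                  # url -> likely next url
            self.store = OrderedDict()

        def _put(self, url, body):
            self.store[url] = body
            self.store.move_to_end(url)
            while len(self.store) > self.capacity:      # evict least recently used
                self.store.popitem(last=False)

        def get(self, url, fetch):
            if url not in self.store:                   # miss: fetch and cache
                self._put(url, fetch(url))
            self.store.move_to_end(url)
            nxt = self.next_page.get(url)               # prefetch predicted page
            if nxt and nxt not in self.store:
                self._put(nxt, fetch(nxt))
            return self.store[url]

    cache = PreCache(capacity=2, next_page={"/home": "/news"})
    print(cache.get("/home", lambda u: f"<page {u}>"))  # also prefetches /news
    ```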

    Encapsulation of Soft Computing Approaches within Itemset Mining: A Survey

    Data mining discovers patterns and trends by extracting knowledge from large databases. Soft computing techniques such as fuzzy logic, neural networks, genetic algorithms, and rough sets aim to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness, and low-cost solutions. Fuzzy logic and rough sets are suitable for handling different types of uncertainty. Neural networks provide good learning and generalisation. Genetic algorithms provide efficient search for selecting a model from mixed-media data. Data mining refers to information extraction, while soft computing is used for information processing. For effective knowledge discovery from large databases, soft computing and data mining can be merged. Association rule mining (ARM) and itemset mining focus on finding the most frequent itemsets and their corresponding association rules, and on extracting rare itemsets, including temporal and fuzzy concepts, in the discovered patterns. This survey paper explores the usage of soft computing approaches in itemset utility mining.
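    For reference on the itemset-mining side, a minimal sketch of crisp, Apriori-style frequent itemset mining on toy transactions; the surveyed soft computing variants would replace the exact counts with fuzzy memberships or GA-driven search:

    ```python
    # Minimal sketch of level-wise (Apriori-style) frequent itemset mining.
    transactions = [
        {"bread", "milk"},
        {"bread", "diapers", "beer"},
        {"milk", "diapers", "beer"},
        {"bread", "milk", "diapers"},
        {"bread", "milk", "beer"},
    ]
    min_support = 3  # absolute count

    def frequent_itemsets(transactions, min_support):
        items = {i for t in transactions for i in t}
        frequent, size = {}, 1
        candidates = [frozenset([i]) for i in items]
        while candidates:
            counts = {c: sum(c <= t for t in transactions) for c in candidates}
            level = {c: n for c, n in counts.items() if n >= min_support}
            frequent.update(level)
            size += 1  # next level: unions of survivors that grow by one item
            candidates = list({a | b for a in level for b in level
                               if len(a | b) == size})
        return frequent

    for itemset, count in frequent_itemsets(transactions, min_support).items():
        print(set(itemset), count)
    ```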

    A Rough Set Approach to Dimensionality Reduction for Performance Enhancement in Machine Learning

    Machine learning uses complex mathematical algorithms to turn a data set into a model of a problem domain. Analysing high-dimensional data in its raw form usually causes computational overhead: the larger the data, the longer it takes to process. There is therefore a need for a more robust dimensionality reduction approach, among other existing methods, for feature projection (extraction) and selection, whose output can be passed to a machine learning algorithm for optimal performance. This paper presents a generic mathematical approach for transforming data from a high-dimensional space to a low-dimensional space in such a manner that the intrinsic dimension of the original data is preserved, using the concepts of indiscernibility, reducts, and the core from rough set theory. The flu detection dataset available on the Kaggle website was used in this research for demonstration purposes. The original and reduced datasets were tested using a logistic regression algorithm, both yielding an accuracy of 97%, with training times of 25 min and 11 min, respectively.
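    A minimal sketch of the rough set machinery named above, indiscernibility and a smallest-first reduct search, on a toy decision table (hypothetical rows, not the Kaggle flu data, and a brute-force search rather than the paper's exact procedure):

    ```python
    # Minimal sketch: find a reduct, i.e. a minimal set of condition
    # attributes under which indiscernible rows never disagree on the decision.
    from itertools import combinations

    # rows: (fever, cough, age), decision
    table = [
        (("high", "yes", "young"), "flu"),
        (("high", "no",  "young"), "flu"),
        (("low",  "yes", "old"),   "healthy"),
        (("low",  "no",  "old"),   "healthy"),
        (("high", "yes", "old"),   "flu"),
    ]
    attrs = range(3)

    def consistent(subset):
        """True if rows indiscernible on `subset` always share a decision."""
        seen = {}
        for cond, dec in table:
            key = tuple(cond[a] for a in subset)
            if seen.setdefault(key, dec) != dec:
                return False
        return True

    # Searching smallest subsets first makes the first consistent hit minimal.
    reduct = next(s for k in range(1, len(attrs) + 1)
                  for s in combinations(attrs, k) if consistent(s))
    print("reduct:", reduct)  # (0,): fever alone determines the decision here
    ```

    In the paper's pipeline, the columns outside such a reduct would be dropped before fitting the logistic regression model.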