469 research outputs found

    ClustOfVar-based approach for unsupervised learning: Reading of synthetic variables with sociological data

    This paper proposes an original data mining method for unsupervised learning, replacing traditional factor analysis with a system of variable clustering. Clustering of variables aims to group together variables that are strongly related to each other, i.e. variables that carry the same information. We recently proposed the ClustOfVar method, specifically devoted to variable clustering, regardless of whether the variables are numeric or categorical in nature. It simultaneously provides homogeneous clusters of variables and their corresponding synthetic variables, which can be read as a kind of gradient. In this algorithm, the homogeneity criterion of a cluster is defined by the squared Pearson correlation for the numeric variables and by the correlation ratio for the categorical variables. The method was tested on categorical data relating to French farmers and their perception of the environment. The use of synthetic variables provided us with an original way of identifying how farmers reconfigured the questions put to them.
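    The homogeneity criterion can be made concrete with a small sketch. The following is not the ClustOfVar implementation (which is published as an R package); it is an assumed Python illustration in which `cluster` holds the variables of one cluster and `synthetic` stands for that cluster's synthetic variable.

```python
import numpy as np
import pandas as pd

def correlation_ratio(categories: pd.Series, y: np.ndarray) -> float:
    """Correlation ratio eta^2: share of y's variance explained by the category labels."""
    y = np.asarray(y, dtype=float)
    grand_mean = y.mean()
    between = sum(
        len(idx) * (y[idx].mean() - grand_mean) ** 2
        for idx in (np.flatnonzero(categories.values == level) for level in categories.unique())
    )
    total = ((y - grand_mean) ** 2).sum()
    return between / total if total > 0 else 0.0

def cluster_homogeneity(cluster: pd.DataFrame, synthetic: np.ndarray) -> float:
    """Sum of squared Pearson correlations (numeric columns) and
    correlation ratios (categorical columns) with the synthetic variable."""
    h = 0.0
    for col in cluster.columns:
        if pd.api.types.is_numeric_dtype(cluster[col]):
            r = np.corrcoef(cluster[col].astype(float), synthetic)[0, 1]
            h += r ** 2
        else:
            h += correlation_ratio(cluster[col], synthetic)
    return h
```

    Roughly speaking, the method seeks a partition of the variables, together with one synthetic variable per cluster, that maximises the sum of such cluster homogeneities.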

    New Fundamental Technologies in Data Mining

    The progress of data mining technology and its growing public popularity establish a need for a comprehensive text on the subject. The book series entitled "Data Mining" addresses this need by presenting in-depth descriptions of novel mining algorithms and many useful applications. In addition to helping readers understand each topic in depth, the two books present useful hints and strategies for solving the problems discussed in the following chapters. The contributing authors have highlighted many future research directions that will foster multi-disciplinary collaborations and hence lead to significant developments in the field of data mining.

    Knowledge visualization: From theory to practice

    Visualizations have long been known as efficient tools that can help users analyze complex data. However, understanding the displayed data and finding the underlying knowledge is still difficult. In this work, a new approach is proposed based on understanding the definition of knowledge. Although there are many definitions in use across different areas, this work focuses on representing knowledge as a part of a visualization and showing the benefit of adopting knowledge representation. Specifically, this work begins with understanding interaction and reasoning in visual analytics systems; then a new definition of knowledge visualization and its underlying knowledge conversion processes are proposed. Knowledge is differentiated as either explicit or tacit. Instead of directly representing data, the value of the explicit knowledge associated with the data is determined based on a cost/benefit analysis. In accordance with its importance, the knowledge is displayed to help the user understand the complex data through visual analytical reasoning and discovery.
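    The cost/benefit idea can be loosely illustrated in code. The item names, scores, and greedy selection below are assumptions made purely for illustration, not the model actually proposed in the thesis.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeItem:
    label: str      # e.g. an annotation or summary derived from the data
    benefit: float  # assumed estimate of analytic value to the user
    cost: float     # assumed estimate of display/attention cost

def select_for_display(items, budget: float):
    """Greedy sketch: order items by benefit/cost and keep them while the
    cumulative display cost stays within the attention budget."""
    ranked = sorted(items, key=lambda it: it.benefit / it.cost, reverse=True)
    shown, spent = [], 0.0
    for it in ranked:
        if spent + it.cost <= budget:
            shown.append(it)
            spent += it.cost
    return shown

items = [
    KnowledgeItem("outlier annotation", benefit=0.9, cost=0.2),
    KnowledgeItem("full correlation matrix", benefit=0.6, cost=0.7),
    KnowledgeItem("trend summary", benefit=0.5, cost=0.1),
]
print([it.label for it in select_for_display(items, budget=0.5)])
```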

    Information Visualization Design for Multidimensional Data: Integrating the Rank-by-Feature Framework with Hierarchical Clustering

    Interactive exploration of multidimensional data sets is challenging because (1) it is difficult to comprehend patterns in more than three dimensions, and (2) current systems are often a patchwork of graphical and statistical methods, leaving many researchers uncertain about how to explore their data in an orderly manner. This dissertation offers a set of principles and a novel rank-by-feature framework that could enable users to better understand multidimensional and multivariate data by systematically studying distributions in one (1D) or two dimensions (2D), and then discovering relationships, clusters, gaps, outliers, and other features. Users of the rank-by-feature framework can view graphical presentations (histograms, boxplots, and scatterplots), and then choose a feature detection criterion to rank 1D or 2D axis-parallel projections. By combining information visualization techniques (overview, coordination, and dynamic query) with summaries and statistical methods, users can systematically examine the most important 1D and 2D axis-parallel projections. This research provides a number of valuable contributions. Graphics, Ranking, and Interaction for Discovery (GRID) principles: a set of principles for exploratory analysis of multidimensional data, summarized as (1) study 1D, study 2D, then find features; (2) ranking guides insight, statistics confirm. The GRID principles help users organize their discovery process in an orderly manner so as to produce more thorough analyses and extract deeper insights in any multidimensional data application. Rank-by-feature framework: a user interface framework based on the GRID principles, in which interactive information visualization techniques are combined with statistical methods and data mining algorithms to enable users to examine multidimensional data sets in an orderly manner using 1D and 2D projections. The design and implementation of the Hierarchical Clustering Explorer (HCE): an information visualization tool available at www.cs.umd.edu/hcil/hce. HCE implements the rank-by-feature framework and supports interactive exploration of hierarchical clustering results to reveal one of the important features, clusters. Validation through case studies and user surveys: case studies with motivated experts in three research fields and an email survey of a wide range of HCE users demonstrated the efficacy of HCE and the rank-by-feature framework. These studies also revealed potential opportunities for improvement in design and implementation.
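    The core of the rank-by-feature idea, scoring every axis-parallel projection with a user-chosen criterion and sorting the results, can be sketched in a few lines. This is an assumed Python illustration, not HCE's implementation; absolute Pearson correlation stands in for the many 1D and 2D ranking criteria the tool offers.

```python
import itertools
import numpy as np
import pandas as pd

def rank_2d_projections(df: pd.DataFrame, criterion=None):
    """Rank all 2D axis-parallel projections (column pairs) of a numeric
    data set by a feature-detection criterion, highest score first."""
    if criterion is None:
        # Default criterion (an assumption): absolute Pearson correlation.
        criterion = lambda x, y: abs(np.corrcoef(x, y)[0, 1])
    scores = [((a, b), criterion(df[a].values, df[b].values))
              for a, b in itertools.combinations(df.columns, 2)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy data with one planted relationship, for illustration only.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(100, 4)), columns=list("ABCD"))
data["E"] = 2 * data["A"] + rng.normal(scale=0.1, size=100)
for (a, b), score in rank_2d_projections(data)[:3]:
    print(f"{a}-{b}: {score:.2f}")
```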

    Use of statistical analysis, data mining, decision analysis and cost effectiveness analysis to analyze medical data: application to comparative effectiveness of lumpectomy and mastectomy for breast cancer.

    Statistical models have been the first choice for comparative effectiveness in clinical research. Though effective, these models are limited when the data to be analyzed do not fit the assumed distributions, which is mostly the case when the study is not a clinical trial. In this project, data mining, decision analysis and cost-effectiveness analysis methods were used to supplement statistical models in comparing lumpectomy to mastectomy for surgical treatment of breast cancer. Mastectomy has been the gold standard for breast cancer treatment since the 1800s. In the 20th century, the equivalence of mastectomy and lumpectomy was established in terms of long-term survival and disease-free survival. However, short-term comparative effectiveness in post-operative outcomes has not been fully explored. Studies using administrative data are lacking, and no study has used new technologies of self-expression, particularly internet discussion boards. In this study, the data used were from the Nationwide Inpatient Sample (NIS) 2005, the Thomson Reuters MarketScan 2000-2001, the medical literature on clinical trials, and individuals' online posts in discussion boards on breastcancer.org. The NIS was used to compare lumpectomy to mastectomy in terms of hospital length of stay, total charges and in-hospital death at the time of surgery. MarketScan data were used to evaluate comparative follow-up outcomes in terms of risk of repeat hospitalization, risk of repeat operation, number of outpatient services, number of prescribed medications, length of stay, and total charges per post-operative hospital admission over a period of eight months on average. The MarketScan data were also used to construct a simple predictive model of post-operative hospital admission and to perform short-term cost-effectiveness analysis. The medical literature was used to analyze long-term (10-year) mortality and recurrence for both treatments. The web postings were used to evaluate the comparative cost of improving quality of life in terms of patient satisfaction. In the NIS and MarketScan data, International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM) diagnosis codes were used to extract cases of breast cancer, and ICD-9-CM procedure codes and Current Procedural Terminology, 4th edition procedure codes were used to form treatment groups. Data were pre-processed and prepared for analysis using data mining techniques such as clustering, sampling and text mining. To clean the data for statistical models, some continuous variables were normalized using methods such as logarithmic transformation. Statistical models such as linear regression, generalized linear models, logistic regression and proportional hazards (Cox) regression were used to compare post-operative outcomes of lumpectomy versus mastectomy. Neural network, decision tree and logistic regression predictive modeling techniques were compared to create a simple model predicting 90-day post-operative hospital re-admission. Cost and effectiveness were compared with the Incremental Cost-Effectiveness Ratio (ICER). A simple method to process and analyze online postings was created and used to bring patients' input into the comparison of lumpectomy to mastectomy. All statistical analyses were performed in SAS 9.2. Data mining was performed in SAS Enterprise Miner (EM) 6.1 and SAS Text Miner. Decision analysis and cost-effectiveness analysis were performed in TreeAge Pro 2011.
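    As a rough, hedged sketch of the predictive-modelling step just described (the study's results follow below), logistic regression, one of the three techniques compared, might be fitted along these lines; the features and data here are invented placeholders, not MarketScan fields.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic placeholder data (not MarketScan): surgery type and age only.
rng = np.random.default_rng(42)
n = 2000
lumpectomy = rng.integers(0, 2, n)            # 1 = lumpectomy, 0 = mastectomy
age = rng.normal(60, 10, n)
logit = -1.5 + 0.7 * lumpectomy + 0.02 * (age - 60)
readmit_90d = rng.random(n) < 1 / (1 + np.exp(-logit))   # 90-day readmission flag

X = np.column_stack([lumpectomy, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, readmit_90d, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("odds ratio for lumpectomy:", np.exp(model.coef_[0][0]))
```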
A simple comparison of the two procedures using the NIS 2005, a discharge-level data set, showed that, in general, lumpectomy surgery is associated with a significantly longer stay and higher charges on average. From the MarketScan data, a person-level data set in which a patient can be followed longitudinally, it was found that, for the initial hospitalization, patients who underwent mastectomy had a non-significantly longer hospital stay and significantly lower charges. Differences in the post-operative number of outpatient services and prescribed medications, as well as in length of stay and charges for post-operative hospital admissions, were not statistically significant. Using the MarketScan data, it was also found that the best model for predicting 90-day post-operative hospital admission was logistic regression. The logistic regression revealed that the risk of hospital re-admission within 90 days after surgery was 65% for a patient who underwent lumpectomy and 48% for a patient who underwent mastectomy. A cost-effectiveness analysis using Markov models for up to 100 days after surgery showed that having a lumpectomy saved hospital-related costs every day, with a minimum saving of $33 on day 10. In terms of long-term outcomes, the use of decision analysis methods on the literature review data revealed that, 10 years after surgery, 739 recurrences and 84 deaths were prevented among 10,000 women who had mastectomy instead of lumpectomy. Factoring patients' preferences into the comparison of the two procedures, it was found that patients who undergo lumpectomy are non-significantly more satisfied than their peers who undergo mastectomy. In terms of cost, it was found that lumpectomy saves $517 for each satisfied individual in comparison to mastectomy. In conclusion, the current project showed how to use data mining, decision analysis and cost-effectiveness methods to supplement statistical analysis when using real-world, non-clinical-trial data for a more complete analysis. The application of this combination of methods to the comparative effectiveness of lumpectomy and mastectomy showed that, in terms of cost and patients' quality of life measured as satisfaction, lumpectomy was the better choice.
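    The incremental cost-effectiveness ratio used in the short-term analysis above is simple enough to show directly. The numbers below are placeholders for illustration, not figures from the study.

```python
def icer(cost_a: float, effect_a: float, cost_b: float, effect_b: float) -> float:
    """Incremental cost-effectiveness ratio of treatment A versus B:
    extra cost paid per extra unit of effectiveness gained."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Placeholder inputs: cost per patient and effectiveness as, e.g., the
# proportion of satisfied patients. A negative ICER here means treatment A
# is both cheaper and more effective, i.e. it dominates treatment B.
print(icer(cost_a=12_000, effect_a=0.52, cost_b=14_500, effect_b=0.35))
```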

    On Practical Machine Learning and Data Analysis

    This thesis discusses and addresses some of the difficulties associated with practical machine learning and data analysis. Introducing data-driven methods in, e.g., industrial and business applications can lead to large gains in productivity and efficiency, but the cost and complexity are often overwhelming. Creating machine learning applications in practice often involves a large amount of manual labour, which typically needs to be performed by an experienced analyst who may lack significant experience with the application area. Here we discuss some of the hurdles faced in a typical analysis project and suggest measures and methods to simplify the process. One of the most important issues when applying machine learning methods to complex data, such as data from industrial applications, is that the processes generating the data are modelled in an appropriate way. Relevant aspects have to be formalised and represented in a way that allows us to perform our calculations in an efficient manner. We present a statistical modelling framework, Hierarchical Graph Mixtures, based on a combination of graphical models and mixture models. It allows us to create consistent, expressive statistical models that simplify the modelling of complex systems. Using a Bayesian approach, we allow for encoding of prior knowledge and make the models applicable in situations where relatively little data are available. Detecting structures in data, such as clusters and dependency structure, is very important both for understanding an application area and for specifying the structure of, e.g., a hierarchical graph mixture. We discuss how this structure can be extracted for sequential data. By using the inherent dependency structure of sequential data, we construct an information-theoretic measure of correlation that does not suffer from the problems most common correlation measures have with this type of data. In many diagnosis situations it is desirable to perform classification in an iterative and interactive manner. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. We describe how to create an incremental classification system based on a statistical model that is trained from empirical data, and show how the limited available background information can still be used initially for a functioning diagnosis system. To minimise the effort with which results are achieved within data analysis projects, we need to address not only the models used, but also the methodology and applications that can help simplify the process. We present a methodology for data preparation and a software library intended for rapid analysis, prototyping, and deployment. Finally, we study a few example applications, presenting tasks within classification, prediction and anomaly detection. The examples include demand prediction for supply chain management, approximating complex simulators for increased speed in parameter optimisation, and fraud detection and classification within a media-on-demand system.
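    The information-theoretic correlation measure for sequential data is described only at a high level here. As an assumed sketch of the general idea (not the thesis's actual measure), one can estimate the mutual information between one series and a lagged copy of another:

```python
import numpy as np
from collections import Counter

def mutual_information(x, y, bins: int = 8) -> float:
    """Plug-in estimate of mutual information (in nats) between two
    discretised series of equal length."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    n = len(xd)
    pxy = Counter(zip(xd, yd))
    px, py = Counter(xd), Counter(yd)
    return sum(
        (c / n) * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
        for (a, b), c in pxy.items()
    )

def lagged_dependency(x, y, lag: int = 1) -> float:
    """Dependency between x at time t and y at time t + lag."""
    return mutual_information(x[:-lag], y[lag:])

# Example: y is a noisy, delayed copy of x, so a lag-1 dependency should show up.
rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = np.roll(x, 1) + 0.3 * rng.normal(size=1000)
print(lagged_dependency(x, y, lag=1))
```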