
    Data Analytics and Techniques: A Review

    Big data of different types, such as text and images, is rapidly generated by the internet and other applications. Handling this data with traditional methods is impractical because it varies widely in size, type, and processing-speed requirements. Data analytics has therefore become an essential tool for big data applications, since it analyzes the data and extracts only the meaningful information. This paper presents several innovative methods that use data analytics techniques to improve the analysis process and data management. It also discusses how a revolution in data analytics based on artificial intelligence algorithms might benefit many applications. In addition, critical challenges and open research issues are identified from the limitations reported in published papers, to help researchers distinguish between analytics techniques and develop highly consistent, logical, and information-rich analyses based on valuable features. Finally, the findings of this paper may be used to identify the best methods in each sector covered by these publications, to assist future researchers in conducting more systematic and comprehensive analyses, and to identify areas for developing novel or hybrid data-analysis techniques.

    The State of the Art in Cartograms

    Cartograms combine statistical and geographical information in thematic maps, where the areas of geographical regions (e.g., countries, states) are scaled in proportion to some statistic (e.g., population, income). Cartograms make it possible to gain insight into patterns and trends in the world around us and have been popular visualizations for geo-referenced data for over a century. This work surveys cartogram research in visualization, cartography, and geometry, covering a broad spectrum of cartogram types: from traditional rectangular and table cartograms to Dorling and diffusion cartograms. A particular focus is the study of the major cartogram dimensions: statistical accuracy, geographical accuracy, and topological accuracy. We review the history of cartograms, describe the algorithms for generating them, and consider task taxonomies. We also review quantitative and qualitative evaluations, and we use these to arrive at design guidelines and research challenges.
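
The area-scaling idea behind Dorling cartograms, mentioned in the abstract above, can be sketched in a few lines: each region is replaced by a circle whose area is proportional to its statistic, so the radius scales with the square root of the value. This is a minimal illustration, not code from the survey; the function name `dorling_radii` and the population figures are invented for the example.

```python
import math

def dorling_radii(stats, max_radius=50.0):
    """Map each region to a circle radius such that circle AREA is
    proportional to the region's statistic (radius ~ sqrt(value))."""
    peak = max(stats.values())
    return {
        region: max_radius * math.sqrt(value / peak)
        for region, value in stats.items()
    }

# Hypothetical population figures (millions), for illustration only.
populations = {"A": 80.0, "B": 20.0, "C": 5.0}
radii = dorling_radii(populations)
# Region B has a quarter of A's population, so half A's radius.
```

Placing the circles so they do not overlap while staying near their true geographic positions is the hard part of the actual Dorling algorithm, which the survey covers.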

    Doctor of Philosophy

    Clinical decision support systems (CDSS) and electronic health records (EHR) have been widely adopted but do not support a high level of reasoning for the clinician. As a result, workflow incongruity and provider frustration lead to more errors in reasoning. Other successful fields, such as defense, aviation, and the military, have used task complexity as a key factor in decision support system development. Task complexity arises from the interaction between the user and the task. In this dissertation, I therefore applied several human factors methods to explore task complexity factors and their utility in health information technology system design. The first study addresses the question of generalizing complexity through a clinical complexity model: we integrated and validated a patient complexity model and a task complexity model into a clinical complexity model tailored to healthcare, which served as the initial framework for data analysis in the subsequent studies. The second study examines the coping strategies of infectious disease (ID) clinicians dealing with complex decision tasks; it concluded that clinicians use multiple cognitive strategies that help them switch between automatic and analytical cognitive processes. The third study identified complexity-contributing factors from transcripts of the observations conducted in the ID domain; the clinical complexity model developed in the first study guided the identification of the prominent complexity factors and informed recommendations for innovative healthcare technology system design. The fourth study, a pilot exploratory study, demonstrated the feasibility of building a population information display by querying real, complex patient information from an actual clinical database, and identified the ideal features of such a display.
In summary, this dissertation adds to the knowledge of how clinicians adapt their information environment to deal with complexity. First, it contributes a clinical complexity model that integrates both patient and task complexity. Second, it provides specific design recommendations for future innovative health information technology systems. Last, it suggests that understanding task complexity in the healthcare team domain may support better design of interface systems.

    A Survey on Visual Analytics of Social Media Data

    The unprecedented availability of social media data offers substantial opportunities for data owners, system operators, solution providers, and end users to explore and understand social dynamics. However, the exponential growth in the volume, velocity, and variability of social media data prevents people from fully utilizing such data. Visual analytics, which is an emerging research direction, ha..

    Improving process algebra model structure and parameters in infectious disease epidemiology through data mining

    Computational models are increasingly used to assist decision-making in public health epidemiology, but achieving the best model is a complex task due to the interaction of many components and variability of parameter values causing radically different dynamics. The modelling process can be enhanced through the use of data mining techniques. Here, we demonstrate this by applying association rules and clustering techniques to two stages of modelling: identifying pertinent structures in the initial model creation stage, and choosing optimal parameters to match that model to observed data. This is illustrated through application to the study of the circulating mumps virus in Scotland, 2004-2015.
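
The second stage described above, choosing parameters so the model matches observed data, can be illustrated with a toy sketch. This is not the paper's association-rule or clustering method: it simply ranks candidate parameter sets by sum of squared errors against an observed series, and the `simulate` model and all numbers are invented for the example.

```python
def sse(simulated, observed):
    """Sum of squared errors between simulated and observed series."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

def rank_parameter_sets(candidates, simulate, observed):
    """Rank candidate parameter sets by how closely the model's
    output matches the observed data (lower SSE is better)."""
    scored = [(sse(simulate(p), observed), p) for p in candidates]
    scored.sort(key=lambda pair: pair[0])
    return scored

# Toy 'model': weekly case counts decay geometrically from an initial count.
def simulate(params):
    initial, decay = params
    return [initial * decay ** week for week in range(4)]

observed = [100, 80, 64, 51.2]  # synthetic data generated with decay 0.8
ranked = rank_parameter_sets([(100, 0.5), (100, 0.8), (90, 0.8)],
                             simulate, observed)
best = ranked[0][1]  # the parameter set with the lowest error
```

In practice the parameter space is far larger, which is why the paper turns to data mining techniques rather than exhaustive scoring.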

    Common Limitations of Image Processing Metrics: A Picture Story

    While the importance of automatic image analysis is continuously increasing, recent meta-research has revealed major flaws with respect to algorithm validation. Performance metrics are particularly key for meaningful, objective, and transparent performance assessment and validation of automatic algorithms, but relatively little attention has been given to the practical pitfalls of using specific metrics for a given image analysis task. These are typically related to (1) disregard of inherent metric properties, such as behaviour in the presence of class imbalance or small target structures, (2) disregard of inherent data set properties, such as the non-independence of the test cases, and (3) disregard of the actual biomedical domain interest that the metrics should reflect. This living, dynamically updated document aims to illustrate important limitations of performance metrics commonly applied in the field of image analysis. It focuses on biomedical image analysis problems that can be phrased as image-level classification, semantic segmentation, instance segmentation, or object detection tasks. The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts from more than 60 institutions worldwide.
    Comment: This is a dynamic paper on limitations of commonly used metrics. The current version discusses metrics for image-level classification, semantic segmentation, object detection, and instance segmentation. For missing use cases, comments, or questions, please contact [email protected] or [email protected]. Substantial contributions to this document will be acknowledged with a co-authorship.
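
The class-imbalance pitfall in point (1) can be made concrete with a small sketch (the data below are synthetic, not from the paper): on a test set that is 95% negatives, a trivial classifier that always predicts the majority class achieves a misleadingly high accuracy, while balanced accuracy, the mean of the per-class recalls, exposes the failure.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall; insensitive to class imbalance."""
    recalls = []
    for c in set(y_true):
        indices = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in indices) / len(indices))
    return sum(recalls) / len(recalls)

# 95 negatives, 5 positives; the classifier ignores the minority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

acc = accuracy(y_true, y_pred)            # 0.95: looks excellent
bal = balanced_accuracy(y_true, y_pred)   # 0.5: no better than chance
```

The same degenerate classifier scores 0.95 on accuracy but only 0.5 on balanced accuracy, which is exactly the kind of metric-property pitfall the document catalogues.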