5,671 research outputs found

    Space-Time Diffusion Visualization using Bayesian Inference

    Retail marketing geography has traditionally employed static gravity models for location analytics based on probabilistic locational consumer demand. However, such retail trade area models provide little insight into the dynamic space-time hierarchical diffusionary processes that aggregate to the eventual market structure equilibrium (Mason et al., 1994) which gravity models attempt to predict for retail trade areas. In addition, most attempts to display these aggregating dynamic space-time hierarchical diffusionary processes of space, time and attributes of interest in a geographical information system (GIS) produce visualizations that are overly complex and typically displayed using unfamiliar paradigms. Further, these attempts fail to take into account the extensive body of literature in psychology and brain science that stresses the importance of perceptual elements and design in achieving optimum visualization comprehension; in other words, simplicity (three-way factor analysis) and visual familiarity (cognitive fit theory (Vessey, 2006); the mere-exposure effect in psychology (Zajonc, 1968)), which provide faster perception and better visuospatial and temporal understanding of objects and trends. In this study we incorporate these elements in a visualization object that we refer to as the “Avatar”. A Huff-inspired Bayesian framework of inference for spatial allocation and hypothesis testing allows the Avatar object to display the spatial allocation of the Bass model’s innovators and imitators for sales forecasts of new product diffusion (i.e. a mathematical version of Everett Rogers’s adoption concept), thus enabling and supporting faster and improved visuospatial understanding of very large data repositories of unbounded and/or “countably infinite” sized geo-big-data (referred to throughout the rest of this paper as GBD). We then introduce the three steps necessary to create an Avatar object (i.e. a 3-D semaphoric, space-time diffusion visualization object). The Avatar object is designed specifically to visualize determinant attributes (e.g. demographics) for the Bass, Bayes, Berry and Huff integrated ensemble model that forms part of an ancillary paper to this study. In this way we display the timed hierarchical diffusion of new innovative products throughout store trade areas and across the ensuing and evolving store networks. In addition, by calculating Bayesian conjugate priors and posterior spatial allocation probabilities for the “smallest units of human settlement” (Christaller, 1966), in our case statistical demographic units (i.e. Census Blocks), we establish customer (innovator and imitator) spatial distributions for the temporal-only Bass model in the aggregating store-level trade area (SLTA) scenario. Our approach is empirically supported by five years of geocoded new product diffusion panel data from the Southern California market. We conclude that our cognitive-fit-theory-validated Avatar space-time diffusion visualization strengthens “location analytics” and “location intelligence” and provides a simple and familiar tool for displaying GBD across a growing domain of varying applications and end-user knowledge and needs.
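    As a rough illustration of how the Bass temporal model and Huff-style spatial allocation mentioned above can be combined, the sketch below forecasts innovator and imitator counts and spreads them across census blocks with Huff probabilities. This is a minimal sketch, not the paper's actual implementation; the Bass coefficients, market size, store attractiveness, distances, and distance-decay exponent are all hypothetical.

```python
# Illustrative sketch only: a discrete-time Bass diffusion forecast whose adopters
# are spread over census blocks with Huff-style probabilities. All parameter values
# (p, q, m, attractiveness, distances, beta) are hypothetical.
import numpy as np

def bass_adopters(p, q, m, periods):
    """Return per-period innovator and imitator counts for the Bass model."""
    cumulative = 0.0
    innovators, imitators = [], []
    for _ in range(periods):
        innov = p * (m - cumulative)                     # external-influence adoptions
        imit = q * (cumulative / m) * (m - cumulative)   # word-of-mouth adoptions
        innovators.append(innov)
        imitators.append(imit)
        cumulative += innov + imit
    return np.array(innovators), np.array(imitators)

def huff_probabilities(attractiveness, distances, beta=2.0):
    """Huff-style probability that demand in each block is captured by the store."""
    utility = attractiveness / np.power(distances, beta)
    return utility / utility.sum()

# Hypothetical inputs: one store, five census blocks, twelve periods.
innov, imit = bass_adopters(p=0.03, q=0.38, m=10_000, periods=12)
probs = huff_probabilities(attractiveness=np.array([5, 3, 8, 2, 6], dtype=float),
                           distances=np.array([1.2, 0.8, 2.5, 0.5, 3.1]))

# Allocate each period's adopters across blocks (rows = periods, columns = blocks).
block_innovators = np.outer(innov, probs)
block_imitators = np.outer(imit, probs)
print(block_innovators[0].round(1), block_imitators[-1].round(1))
```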

    Why Adverse Outcome Pathways Need to be FAIR

    Adverse outcome pathways (AOPs) provide evidence for demonstrating and assessing causality between measurable toxicological mechanisms and human or environmental adverse effects. AOPs have gained increasing attention over the past decade and are believed to provide the necessary stepping stone for more effective risk assessment of chemicals and materials and for moving beyond the need for animal testing. However, as with all types of data and knowledge today, AOPs need to be reusable by machines, i.e., machine-actionable, in order to reach their full impact potential. Machine-actionability is supported by the FAIR principles, which guide findability, accessibility, interoperability, and reusability of data and knowledge. Here, we describe why AOPs need to be FAIR and touch on aspects such as the improved visibility and the increased trust that FAIRification of AOPs provides.

    Statistical analysis of high-dimensional biomedical data: a gentle introduction to analytical goals, common approaches and challenges

    Background: In high-dimensional data (HDD) settings, the number of variables associated with each observation is very large. Prominent examples of HDD in biomedical research include omics data with a large number of variables such as many measurements across the genome, proteome, or metabolome, as well as electronic health records data that have large numbers of variables recorded for each patient. The statistical analysis of such data requires knowledge and experience, sometimes of complex methods adapted to the respective research questions. Methods: Advances in statistical methodology and machine learning methods offer new opportunities for innovative analyses of HDD, but at the same time require a deeper understanding of some fundamental statistical concepts. Topic group TG9 “High-dimensional data” of the STRATOS (STRengthening Analytical Thinking for Observational Studies) initiative provides guidance for the analysis of observational studies, addressing particular statistical challenges and opportunities for the analysis of studies involving HDD. In this overview, we discuss key aspects of HDD analysis to provide a gentle introduction for non-statisticians and for classically trained statisticians with little experience specific to HDD. Results: The paper is organized with respect to subtopics that are most relevant for the analysis of HDD, in particular initial data analysis, exploratory data analysis, multiple testing, and prediction. For each subtopic, main analytical goals in HDD settings are outlined. For each of these goals, basic explanations for some commonly used analysis methods are provided. Situations are identified where traditional statistical methods cannot, or should not, be used in the HDD setting, or where adequate analytic tools are still lacking. Many key references are provided. Conclusions: This review aims to provide a solid statistical foundation for researchers, including statisticians and non-statisticians, who are new to research with HDD or simply want to better evaluate and understand the results of HDD analyses.
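    As a concrete illustration of one commonly used HDD tool named above, multiple testing correction, the sketch below simulates a many-features, few-samples setting and applies the Benjamini-Hochberg false discovery rate procedure. The simulated data, effect sizes, and FDR level are purely illustrative and are not taken from the paper.

```python
# Minimal sketch: per-feature t-tests on a simulated "omics" matrix, followed by
# Benjamini-Hochberg FDR control. All data and parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples, n_features = 40, 5_000            # far more variables than observations
X = rng.normal(size=(n_samples, n_features))
group = np.repeat([0, 1], n_samples // 2)
X[group == 1, :50] += 0.8                    # 50 features carry a true group difference

# One t-test per feature yields 5,000 p-values.
_, pvals = stats.ttest_ind(X[group == 0], X[group == 1], axis=0)

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of discoveries at FDR level alpha."""
    order = np.argsort(pvals)
    ranked = pvals[order]
    thresholds = alpha * np.arange(1, len(pvals) + 1) / len(pvals)
    below = np.nonzero(ranked <= thresholds)[0]
    k = below.max() + 1 if below.size else 0
    discoveries = np.zeros_like(pvals, dtype=bool)
    discoveries[order[:k]] = True
    return discoveries

print("discoveries:", benjamini_hochberg(pvals).sum())
```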

    Sustainable planning of cross-border cooperation: a strategy for alliances in border cities

    In recent years, cooperation among nations has become a critical issue for the sustainable development of neighbouring cities in border areas. In this regard, sustainable common planning approaches and policies are an increasing reality, particularly in European territories. Considering the significant number of cross-border cooperation (CBC) projects and strategies within Europe, it is crucial to promote research that can identify the most effective approaches to establishing alliances in border territories, serving as pivotal methodologies for achieving success. Contextually, the present study considered direct and indirect research methods and tools, literature reviews, data collection, computer-assisted telephone interviewing (CATI) and computer-assisted web interviewing (CAWI), all applied to two European border cities: Cieszyn (Poland) and Cesky Tesin (Czech Republic). These methods enabled the assembly of perspectives of local authorities, public and private institutions, non-governmental organizations, and entrepreneurs from the cities under study. Through the analysis of the collected data, five conditions have been identified for the success of strategic alliances in CBC projects: (i) defining the alliance goals well; (ii) ensuring the participation of various stakeholder groups in the alliance; (iii) involving partners that both have extensive experience in CBC; (iv) ensuring the coherence of the key objective; and (v) guaranteeing that the alliance benefits both sides. These conditions might effectively contribute to achieving more successful outputs in CBC projects, highlighting the relevance of previously developed strategies for the definition of future approaches.

    Statistical Graph Quality Analysis of Utah State University Master of Science Thesis Reports

    Graphical software packages have become increasingly popular in our modern world, but there are concerns within the statistical visualization field about the default settings provided by these packages, which can make it challenging to create good-quality graphs that align with standard graph principles. In this thesis, we investigate whether the quality of graphs from Utah State University (USU) Plan A Master of Science (MS) thesis reports from the years 1930 to 2019 was affected by the rise of graphical software packages. We collected all data stored on the USU Digital Commons website as of November 2021 to determine the specific group of graphs we wanted to investigate and developed a sampling process to obtain a sample size of 90 graphs evenly distributed over the time range. To accurately judge graph quality, we compiled and condensed good graphic standards from the statistical literature and developed our own set of graph quality criteria, grouped within four distinct categories: Labeling, Clear Understanding, Meaningful, and Scaling and Gridlines. We constructed a scoring system to rate the quality of graphs against these criteria and explored the results by constructing several visualizations and performing various statistical analyses. Our analysis assessed whether the rise of graphical software packages impacted the quality of graphs within the USU Plan A MS thesis reports.
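    The described sampling design (90 graphs spread evenly over 1930 to 2019) could, for example, be approximated by decade-stratified random sampling; the sketch below is an assumption-laden illustration of that idea, not the thesis's actual procedure, and the catalogue of graphs is invented.

```python
# Sketch of decade-stratified sampling: 9 decades (1930-2019) x 10 graphs = 90 graphs.
# The catalogue, strata, and per-stratum quota are hypothetical.
import random

def stratified_sample(records, start=1930, end=2019, bin_width=10, per_bin=10, seed=42):
    """records: list of (year, graph_id). Returns roughly per_bin graphs per decade."""
    random.seed(seed)
    sample = []
    for lower in range(start, end + 1, bin_width):
        upper = min(lower + bin_width - 1, end)
        stratum = [r for r in records if lower <= r[0] <= upper]
        sample.extend(random.sample(stratum, min(per_bin, len(stratum))))
    return sample

# Hypothetical catalogue of thesis graphs: (publication year, identifier).
catalogue = [(random.randint(1930, 2019), f"graph_{i}") for i in range(2_000)]
print(len(stratified_sample(catalogue)), "graphs sampled")
```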

    Data Analytics of University Student Records

    Understanding the proper navigation of a college curriculum is a daunting task for students, faculty, and staff. Collegiate courses offer enough intellectual challenge without the unnecessary confusion caused by course scheduling issues. Administrative faculty who execute curriculum changes need both quantitative data and empirical evidence to support their notions about which courses are cornerstones. Students require a clear understanding of paths through their courses and majors that give them the optimal chance of success. In this work, we re-envision the analysis of student records from several decades by opening these datasets up to new forms of interactivity. We represent curricula through a graph of interconnected courses, studying correlations between student grades. This opens up possibilities for discovering intellectual prerequisites not shown in the course catalog. Extending this, we define a similarity metric for majors within the university, performing hierarchical clustering to reveal structure within this graph of majors that is not present in the catalog. Lastly, we seek to show the temporal development of majors as the network grows through time. Through these approaches, our work provides improvements to current methods of viewing and interacting with student records.
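    To make the described pipeline concrete, the sketch below builds a small course graph weighted by grade correlations and hierarchically clusters majors from a pairwise similarity matrix. The grade matrix, course names, majors, and similarity values are invented for illustration and are not taken from the study.

```python
# Illustrative sketch: (1) edge weights of a course graph from grade correlations,
# (2) hierarchical clustering of majors from a hypothetical similarity matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
courses = ["CALC1", "CALC2", "PHYS1", "CHEM1", "STAT1"]
grades = rng.normal(3.0, 0.5, size=(500, len(courses)))   # 500 students x 5 courses

# Edge weights of the course graph: pairwise grade correlations.
corr = np.corrcoef(grades, rowvar=False)
edges = [(courses[i], courses[j], round(corr[i, j], 2))
         for i in range(len(courses)) for j in range(i + 1, len(courses))]

# Majors clustered from a (hypothetical) pairwise similarity matrix.
majors = ["Math", "Physics", "Chemistry", "Statistics"]
similarity = np.array([[1.0, 0.8, 0.4, 0.7],
                       [0.8, 1.0, 0.5, 0.6],
                       [0.4, 0.5, 1.0, 0.3],
                       [0.7, 0.6, 0.3, 1.0]])
distance = squareform(1.0 - similarity, checks=False)      # condensed distance form
clusters = fcluster(linkage(distance, method="average"), t=2, criterion="maxclust")
print(edges[:3], dict(zip(majors, clusters)))
```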

    Driving performance: International studies on performance management of hospital services and health care systems in times of the COVID-19 pandemic and lessons learnt for its aftermath

    This thesis explored the field of healthcare performance management and provided insights into key drivers contributing to better performance. Performance management supports reaching individual, organisational, or system goals through setting objectives, developing strategies, organising work, monitoring progress, providing feedback, taking corrective actions, and evaluating outcomes. It is inextricably linked to data and involves managerial and collaborative efforts of healthcare workers in the context of healthcare systems and services. Measurement and management policies and practices across diverse healthcare settings were researched at both the hospital and the healthcare system level. With an international scope, most of this work was done during the COVID-19 pandemic, which influenced the research focus and methods. The crucial role of data, people and collaboration in organisational and system-level decision-making has been identified and highlighted. Contrasting approaches to utilising performance data were revealed between hospital managers in different geographies. Additionally, the research uncovered innovative, collaborative tools and practices emerging during the pandemic alongside persistent challenges such as data silos and governance issues. This work provides insights into healthcare performance management, guiding future research and policy development toward patient-centric, resilient healthcare. Policy implications stress the importance of investing in managerial training, aligning metrics with desired outcomes, and fostering collaboration for resilient healthcare systems.