2018 SDSU Data Science Symposium Program
Table of Contents:
- Letter from SDSU President
- Letter from SDSU Department of Mathematics and Statistics Dept. Head
- Sponsors
- General Information
- Keynote Speakers
- Invited Speakers
- Sunday Schedule
- Workshop Information
- Monday Schedule
- Abstracts | Invited Speakers
- Abstracts | Oral Presentations
- Poster Presentations
- Committee and Volunteers
Modeling Faceted Browsing with Category Theory for Reuse and Interoperability
Faceted browsing (also called faceted search or faceted navigation) is an exploratory search model in which facets assist in the interactive navigation of search results. Facets are attributes assigned to describe the resources being explored; a faceted taxonomy is the collection of facets provided by the interface and is often organized as sets, hierarchies, or graphs. Faceted browsing has become ubiquitous in modern digital libraries and online search engines, yet the process is still difficult to model abstractly in a manner that supports the development of interoperable and reusable interfaces. We propose category theory as a theoretical foundation for faceted browsing and demonstrate how the interactive process can be mathematically abstracted to support the development of reusable and interoperable faceted systems.
Existing efforts in facet modeling are based upon set theory, formal concept analysis, and lightweight ontologies, but in many regards they are implementations of faceted browsing rather than specifications of the basic underlying structures and interactions. We will demonstrate that category theory allows us to specify faceted objects and to study the relationships and interactions within a faceted browsing system. Implementations can then be constructed through a category-theoretic lens using these models, allowing abstract comparison and communication that naturally support interoperability and reuse.
In this context, reuse and interoperability are at two levels: between discrete systems and within a single system. Our model works at both levels by leveraging category theory as a common language for representation and computation. We will establish facets and faceted taxonomies as categories and will demonstrate how the computational elements of category theory, including products, merges, pushouts, and pullbacks, extend the usefulness of our model. More specifically, we demonstrate that categorical constructions such as the pullback and pushout operations can help organize and reorganize facets; these operations in particular can produce faceted views containing relationships not found in the original source taxonomy. We show how our category-theoretic model of facets relates to database schemas and discuss how this relationship assists in implementing the abstractions presented.
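As a concrete illustration of the pullback construction described above, the following sketch treats two facets as maps from resources to facet values and computes their pullback, pairing resources from two taxonomies that agree on a shared facet value. All names and data here are hypothetical and are not part of the authors' system:

```python
# Minimal sketch: a facet as a map from resources to facet values,
# and a pullback that pairs resources agreeing on a shared facet.
# Illustrative only; names and data are hypothetical.

def pullback(f, g):
    """Pullback of f: A -> C and g: B -> C over the shared codomain C.

    Returns the pairs (a, b) with f[a] == g[b]: resources from two
    faceted taxonomies that agree on a common facet value.
    """
    return {(a, b) for a, fa in f.items() for b, gb in g.items() if fa == gb}

# Two toy taxonomies classifying documents by topic.
by_topic_a = {"doc1": "genomics", "doc2": "imaging"}
by_topic_b = {"paper9": "genomics", "paper7": "pathology"}

print(pullback(by_topic_a, by_topic_b))  # {('doc1', 'paper9')}
```

The pullback here relates items across two source taxonomies, producing a faceted view (topic-matched document pairs) that exists in neither taxonomy alone, which is the sense in which these operations "produce relationships not found in the original source taxonomy."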
We give examples of interactive interfaces from the biomedical domain to help illustrate how our abstractions relate to real-world requirements while enabling systematic reuse and interoperability. We introduce DELVE (Document ExpLoration and Visualization Engine), our framework for developing interactive visualizations as modular Web-applications in order to assist researchers with exploratory literature search. We show how facets relate to and control visualizations; we give three examples of text visualizations that either contain or interact with facets. We show how each of these visualizations can be represented with our model and demonstrate how our model directly informs implementation.
With our general framework for communicating consistently about facets at a high level of abstraction, we enable the construction of interoperable interfaces and the intelligent reuse of both existing and future efforts.
Front-Line Physicians' Satisfaction with Information Systems in Hospitals
Day-to-day operations management in hospital units is difficult due to continuously varying situations, the many actors involved, and the vast number of information systems in use. The aim of this study was to describe front-line physicians' satisfaction with the existing information systems that support day-to-day operations management in hospitals. A cross-sectional survey was used, and data were collected in nine hospitals from a stratified random sample. Data were analyzed with descriptive and inferential statistical methods. The response rate was 65% (n = 111). The physicians reported that information systems support their decision making to some extent, but that the systems neither improve access to information nor are tailored for physicians. The respondents also reported that they must use several information systems to support decision making and would prefer a single system for accessing important information. Improved information access would better support physicians' decision making, with the potential to improve the quality of decisions and speed up the decision-making process. Peer reviewed.
Informatics for Health 2017: advancing both science and practice
Conference report: the Informatics for Health congress, 24-26 April 2017, in Manchester, UK. Introduction: The congress brought together the Medical Informatics Europe (MIE) conference and the Farr Institute International Conference. This special issue of the Journal of Innovation in Health Informatics contains 113 presentation abstracts and 149 poster abstracts from the congress. Discussion: The twin programmes of "Big Data" and "Digital Health" are not always joined up by coherent policy and investment priorities. Substantial global investment in health IT and data science has led to sound progress but highly variable outcomes. Society needs an approach that brings together the science and the practice of health informatics. The goal is multi-level Learning Health Systems that consume and intelligently act upon both patient data and organizational intervention outcomes. Conclusions: Informatics for Health demonstrated the art of the possible, seen in the breadth and depth of our contributions. We call upon policy makers, research funders and programme leaders to learn from this joined-up approach. Publisher PDF.
An Evaluation of Computational Methods to Support the Clinical Management of Chronic Disease Populations
Innovative primary care models that deliver comprehensive primary care to address medical and social needs are an established means of improving health outcomes and reducing healthcare costs among persons living with chronic disease. Care management is one such approach; it requires providers to monitor their respective patient panels and intervene on patients requiring care. Health information technology (IT) has been established as a critical component of care management and similar care models. While a plethora of health IT systems exists for facilitating primary care, there is limited research on their ability to support care management and its emphasis on monitoring panels of patients with complex needs. In this dissertation, I advance the understanding of how computational methods can better support clinicians delivering care management, using the management of human immunodeficiency virus (HIV) as an example scenario of use.
The research described herein is segmented into three aims. The first was to understand the processes and barriers associated with care management and to assess whether existing IT can support clinicians in this domain. The second and third aims focused on informing potential solutions to the technological shortcomings identified in the first aim. In the studies of the first aim, I conducted interviews and observations in two HIV primary care programs and analyzed the resulting data to create a conceptual framework of population monitoring and to identify challenges faced by clinicians in delivering care management. In the studies of the second aim, I used computational methods to advance the science of extracting social and behavioral determinants of health (SBDH) from the patient record; SBDH are not easily accessible to clinicians and represent an important barrier to care management. In the third aim, I conducted a controlled experimental evaluation to assess whether data visualization can improve clinicians' ability to maintain awareness of their patient panels.
Simulating drug responses in laboratory test time series with deep generative modeling
Drug effects can be unpredictable and vary widely among patients depending on environmental, genetic, and clinical factors. Randomized controlled trials (RCTs) are not sufficient to identify adverse drug reactions (ADRs), and the electronic health record (EHR), along with medical claims, has become an important resource for pharmacovigilance. Among all the data collected in hospitals, laboratory tests represent the most documented and reliable data type in the EHR. Laboratory tests are at the core of the clinical decision process and are used by physicians for diagnosis, monitoring, screening, and research. They can be linked to drug effects either directly, with therapeutic drug monitoring (TDM), or indirectly through drug laboratory effects (DLEs) that affect surrogate tests. Unfortunately, very few automated methods use laboratory tests to inform clinical decision making and predict drug effects, partly because these time series are complex: irregularly sampled, highly dependent on other clinical covariates, and non-stationary.
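The irregular sampling noted above is typically handled by a regularization step before modeling. The following is a minimal, purely illustrative sketch (the dissertation's actual pre-processing is not reproduced here) of mapping an irregularly sampled lab-test series onto a fixed daily grid with last-observation-carried-forward (LOCF) imputation:

```python
# Illustrative sketch: regularize an irregularly sampled lab-test
# series onto a fixed daily grid with last-observation-carried-forward
# (LOCF), a common preprocessing step before time-series modeling.

def to_regular_grid(observations, n_days):
    """observations: list of (day, value) pairs, possibly irregular.

    Returns a list of length n_days with LOCF imputation
    (None before the first observation).
    """
    obs = dict(observations)
    grid, last = [], None
    for day in range(n_days):
        if day in obs:
            last = obs[day]
        grid.append(last)
    return grid

# Hypothetical creatinine values measured on days 0, 3, and 4
# of a 6-day window.
series = [(0, 1.1), (3, 1.4), (4, 1.2)]
print(to_regular_grid(series, 6))  # [1.1, 1.1, 1.1, 1.4, 1.2, 1.2]
```

LOCF is only one of several imputation strategies; its main appeal is that it mirrors how clinicians read a chart (the last known value stands until a new one arrives), but it can mask the non-stationarity the passage above describes.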
Deep learning, the branch of machine learning that relies on high-capacity artificial neural networks, has enjoyed renewed popularity this past decade and has transformed fields such as computer vision and natural language processing. Deep learning holds the promise of better performance than established machine learning models, although it requires larger training datasets because of the models' higher degrees of freedom. These models are more flexible with multi-modal inputs and can make sense of large numbers of features without extensive engineering. Both qualities make deep learning models ideal candidates for complex, multi-modal, noisy healthcare datasets.
With the development of novel deep learning methods such as generative adversarial networks (GANs), there is an unprecedented opportunity to learn how to augment existing clinical datasets with realistic synthetic data and increase predictive performance. Moreover, GANs have the potential to simulate the effects of individual covariates, such as drug exposures, by leveraging the properties of implicit generative models.
In this dissertation, I present a body of work that aims at paving the way for the next generation of laboratory test-based clinical decision support systems powered by deep learning. To this end, I organized my experiments around three building blocks: (1) the evaluation of various deep learning architectures on laboratory test time series and their covariates with a forecasting task; (2) the development of implicit generative models of laboratory test time series using the Wasserstein GAN framework; (3) the inference properties of these models for the simulation of drug effects in laboratory test time series, and their application to data augmentation. Each component has its own evaluation: the forecasting task enabled me to explore the properties and performance of different learning architectures; the Wasserstein GAN models are evaluated with both intrinsic metrics and extrinsic tasks; and I always set baselines to avoid reporting results in a "neural-network-only" frame of reference. Applied machine learning, and deep learning even more so, is an empirical science. While the datasets used in this dissertation are not publicly available due to patient privacy regulations, I describe pre-processing steps, hyper-parameter selection, and training processes with reproducibility and transparency in mind.
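The Wasserstein GAN framework mentioned in building block (2) trains a critic to estimate the Wasserstein-1 distance between the real and generated distributions: the critic maximizes E[f(x_real)] - E[f(x_fake)] subject to f being 1-Lipschitz (enforced in practice by weight clipping or a gradient penalty). A toy sketch of that objective, with scalar "lab values" and a hand-picked Lipschitz critic; this is purely illustrative and does not reproduce the dissertation's models or data:

```python
# Toy sketch of the WGAN critic objective. The critic is trained to
# maximize the mean score on real samples minus the mean score on
# generated (fake) samples; when the critic is 1-Lipschitz, this
# difference estimates the Wasserstein-1 distance. Illustrative only.

def critic_objective(critic, real_batch, fake_batch):
    """Mean critic score on real samples minus mean score on fakes."""
    score = lambda batch: sum(map(critic, batch)) / len(batch)
    return score(real_batch) - score(fake_batch)

# Hand-picked 1-Lipschitz critic on scalar values (|slope| <= 1).
critic = lambda x: 0.5 * x

real = [1.0, 2.0, 3.0]  # e.g., observed lab-test values
fake = [0.0, 1.0, 2.0]  # e.g., generator output

print(critic_objective(critic, real, fake))  # 0.5
```

In an actual WGAN, the critic is a neural network updated by gradient ascent on this objective while the generator is updated to decrease it; the sketch isolates only the quantity being optimized.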
In the specific context of these studies involving laboratory test time series and their clinical covariates, I found that for supervised tasks, classical machine learning holds up well against deep learning methods. Complex recurrent architectures such as long short-term memory (LSTM) networks do not perform well on these short time series, while convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs) provide the best performance, at the cost of extensive hyper-parameter tuning. Generative adversarial networks, enabled by deep learning models, were able to generate high-fidelity laboratory test time series, and the quality of the generated samples improved with conditional models using drug exposures as auxiliary information. Interestingly, forecasting models trained exclusively on synthetic data still retained good performance, confirming the potential of GANs in privacy-oriented applications.
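The "forecasting models trained on synthetic data" result above is an instance of the train-on-synthetic, test-on-real (TSTR) evaluation: fit a model on generated data only, then measure its error on held-out real data. A toy sketch with a one-parameter forecaster; all values are hypothetical and the actual models are far richer:

```python
# Toy sketch of train-on-synthetic, test-on-real (TSTR): fit a simple
# forecaster on synthetic (x, y) pairs only, then evaluate its error on
# held-out real pairs. If the generator is faithful, the real-data error
# stays low. Illustrative only; all data here is made up.

def fit_slope(xs, ys):
    """Least-squares fit of y = w * x (closed form, no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mse(w, xs, ys):
    """Mean squared error of the fitted forecaster on (xs, ys)."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Synthetic pairs (generator output) follow y ~= 2x.
synth_x, synth_y = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
# Held-out real pairs, close to the same relationship.
real_x, real_y = [1.5, 2.5], [3.1, 4.9]

w = fit_slope(synth_x, synth_y)  # trained on synthetic data only
print(round(w, 2), round(mse(w, real_x, real_y), 3))  # 2.0 0.01
```

A low error under TSTR is an extrinsic check on generator quality: it shows the synthetic data preserves the predictive structure of the real data, which is what makes synthetic-only training viable for privacy-oriented applications.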
Finally, conditional GANs demonstrated an ability to interpolate samples from drug exposure combinations not seen during training, opening the way for laboratory test simulation with larger auxiliary information spaces. In specific cases, augmenting real training sets with synthetic data improved performance on the forecasting tasks, an approach that could be extended to other applications where rare cases carry high prediction error.