
    A Computer-Based Approach For Identifying Student Conceptual Change

    Misconceptions are commonly encountered in many areas of science and engineering where a to-be-learned concept conflicts with prior knowledge. Conceptual change is an approach for identifying and repairing such misconceptions, and one way to promote it is to provide students with ontological schema training. Assessment of conceptual change, however, relies on qualitative analysis of student responses. With the exponential growth of qualitative data in the form of graphical representations and written responses, analysis by human experts has become time-consuming and costly. This study took advantage of natural language processing and machine learning techniques to analyze the responses effectively. In addition, we identified how students described complex phenomena in thermal and transport science, and compared the descriptions of students who received ontological schema training designed to address misconceptions with those of students who took a different course about the nature of science. After comparing the effectiveness of three text classification methods for identifying conceptual change (a query-based approach, a Naive Bayes classifier, and a support vector machine (SVM)), the SVM classifier was chosen to assess student responses from a corpus collected by Streveler and her research group in previous studies. Based on this automatic assessment of student conceptual change, the research found that training students with an appropriate ontological schema promotes conceptual change.
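
    As a concrete illustration of the chosen approach, here is a minimal Python sketch of an SVM text classifier built with scikit-learn. The TF-IDF features and linear kernel are assumptions (the abstract says only "SVM"), and the example responses and ontological labels are invented, not drawn from the Streveler corpus.

        # Hypothetical sketch: classifying student responses by ontological
        # category with a TF-IDF + linear-SVM pipeline.
        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC

        # Toy stand-ins for the corpus; real labels came from expert coding.
        responses = [
            "heat flows because of a temperature difference",
            "heat is a substance stored inside the object",
            "current is the rate of charge flow through the wire",
            "electricity is used up as it moves around the circuit",
        ]
        labels = ["process", "matter", "process", "matter"]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        clf.fit(responses, labels)

        # An unseen response is assigned the nearest ontological category.
        print(clf.predict(["energy moves from hot to cold regions"]))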

    EXPLOITING KASPAROV'S LAW: ENHANCED INFORMATION SYSTEMS INTEGRATION IN DOD SIMULATION-BASED TRAINING ENVIRONMENTS

    Despite recent advances in the representation of logistics considerations in DOD staff training and wargaming simulations, logistics information systems (IS) remain underrepresented. Unlike many command and control (C2) systems, which can be integrated with simulations through common protocols (e.g., OTH-Gold), many logistics ISs require manpower-intensive human-in-the-loop (HitL) processes for simulation-IS (sim-IS) integration. Where automated sim-IS integration has been achieved, it often does not simulate important sociotechnical system (STS) dynamics, such as information latency and human error, presenting decision-makers with an unrealistic representation of logistics C2 capabilities in context. This research seeks to overcome the limitations of conventional sim-IS interoperability approaches by developing and validating a new approach for sim-IS information exchange through robotic process automation (RPA). RPA software supports the automation of IS information exchange through ISs' existing graphical user interfaces. This "outside-in" approach to IS integration mitigates the need for engineering changes in ISs (or simulations) for automated information exchange. In addition to validating the potential for an RPA-based approach to sim-IS integration, this research presents recommendations for a Distributed Simulation Engineering and Execution Process (DSEEP) overlay to guide the engineering and execution of sim-IS environments. Major, United States Marine Corps. Approved for public release; distribution is unlimited.
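
    To make the "outside-in" idea concrete, the following is a hypothetical Python sketch of RPA-style information exchange driven through an IS's existing GUI, using the pyautogui library. The screen coordinates, form fields, and requisition values are all invented for the sketch; production RPA platforms add element recognition, error handling, and auditing that this toy omits.

        # Hypothetical illustration of "outside-in" RPA: pushing a simulated
        # supply requisition into a logistics IS through its GUI, rather
        # than through an engineering-level integration interface.
        import time
        import pyautogui

        requisition = {"nsn": "8105-00-290-0340", "qty": "24", "priority": "06"}

        def enter_field(value: str) -> None:
            """Type a value into the focused form field, then tab onward."""
            pyautogui.write(value, interval=0.05)  # human-like typing cadence
            pyautogui.press("tab")

        # Focus the (already open) IS entry form, then fill it field by field.
        pyautogui.click(400, 300)          # invented coordinates of first input
        for value in requisition.values():
            enter_field(value)
        pyautogui.press("enter")           # submit the form

        time.sleep(1.0)  # crude stand-in for latency; real RPA polls the UI

    The deliberate typing cadence and wait also hint at how an RPA bridge could reproduce STS dynamics such as information latency, rather than exchanging data instantaneously.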

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. This remains a significant challenge, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nonetheless garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, since autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning toward recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates (sketched below); and (3) guidance of task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
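
    A minimal Python sketch of what feature (2), task specification as an unordered list of goal predicates, could look like in a Blocks World. The predicate and object names are assumptions for illustration, not the authors' implementation.

        # Specifying a pick-and-place task as an unordered set of goal
        # predicates over a Blocks World scene; names are invented.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class On:
            obj: str
            loc: str

        # World state: the set of predicates currently true.
        state = {On("red_block", "table"), On("blue_block", "table")}

        # Goal: an unordered set, so a planner may satisfy it in any order.
        goal = {On("red_block", "bin"), On("blue_block", "red_block")}

        def unsatisfied(goal: set, state: set) -> set:
            """Predicates the robot still has to achieve."""
            return goal - state

        print(unsatisfied(goal, state))

    Because the goal is a set rather than a sequence, a user revising the task (or assisting recovery) only asserts what must end up true, leaving ordering decisions to the autonomous planner.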

    Semantic querying of relational data for clinical intelligence: a semantic web services-based approach

    A study assessing the characteristics of big data environments that predict high research impact: application of qualitative and quantitative methods

    BACKGROUND: Big data offers new opportunities to enhance healthcare practice. While researchers have shown increasing interest in using such data, little is known about what drives research impact. We explored predictors of research impact across three major sources of healthcare big data derived from the government and the private sector. METHODS: This study was based on a mixed-methods approach. Using quantitative analysis, we first applied social network analysis to cluster peer-reviewed original research that used data from a government source, the Veterans Health Administration (VHA), and from the private sources IBM MarketScan and Optum. We analyzed a battery of research impact measures as a function of the data sources. Other main predictors were topic clusters and authors' social influence. Additionally, we conducted key informant interviews (KIIs) with a purposive sample of high-impact researchers who have knowledge of the data. We then compiled the findings of the KIIs into two case studies to provide a rich understanding of the drivers of research impact. RESULTS: Analysis of 1,907 peer-reviewed publications using VHA, IBM MarketScan, and Optum found that the overall research enterprise was highly dynamic and growing over time. Within less than 4 years of observation, research productivity, use of machine learning (ML) and natural language processing (NLP), and the Journal Impact Factor showed substantial growth. Studies that used ML and NLP, however, showed limited visibility. After adjustments, VHA studies had generally higher impact (10% and 27% higher annualized Google citation rates) than MarketScan and Optum studies, respectively (p<0.001 for both). Analysis of co-authorship networks showed that no single social actor, whether a community of scientists or an institution, was dominant. Other key opportunities to achieve high impact identified in the KIIs include methodological innovations, under-studied populations, and predictive modeling based on rich clinical data. CONCLUSIONS: Big data for purposes of research analytics grew within the three data sources studied between 2013 and 2016. Despite important challenges, the research community is reacting favorably to the opportunities offered both by big data and by advanced analytic methods. Big data may be a logical and cost-efficient choice for emulating research initiatives where randomized controlled trials (RCTs) are not possible.
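
    A small Python sketch of the co-authorship analysis idea using networkx: authors become nodes, shared papers become edges, and centrality and community structure stand in for social influence. The papers and author names are toy stand-ins, not the study's 1,907-publication corpus.

        # Build a co-authorship graph and look for dominant social actors.
        from itertools import combinations
        import networkx as nx
        from networkx.algorithms import community

        papers = [
            ["Alice", "Bob", "Carol"],   # authors of paper 1
            ["Bob", "Dan"],              # authors of paper 2
            ["Carol", "Dan", "Erin"],    # authors of paper 3
        ]

        G = nx.Graph()
        for authors in papers:
            for a, b in combinations(authors, 2):
                G.add_edge(a, b)

        # Centrality as a rough proxy for an author's social influence.
        centrality = nx.degree_centrality(G)
        print(sorted(centrality.items(), key=lambda kv: -kv[1]))

        # Community structure: is any cluster of scientists dominating?
        for c in community.greedy_modularity_communities(G):
            print(sorted(c))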

    Systematic identification of pharmacogenomics information from clinical trials

    Recent progress in high-throughput genomic technologies has shifted pharmacogenomic research from candidate-gene pharmacogenetics to clinical pharmacogenomics (PGx). Many clinically relevant questions may be asked, such as 'What drug should be prescribed for a patient with mutant alleles?' Typically, answers to such questions can be found in publications mentioning the gene–drug–disease relationships of interest. In this work, we hypothesize that ClinicalTrials.gov is a comparable source rich in PGx-related information. To that end, we developed a systematic approach to automatically identify PGx relationships between genes, drugs, and diseases from trial records in ClinicalTrials.gov. In our evaluation, we found that our extracted relationships overlap significantly with the factual knowledge curated from the literature in a PGx database, and that most relationships appear on average 5 years earlier in clinical trials than in their corresponding publications, suggesting that clinical trials may be valuable both for validating known PGx information and for capturing new PGx information in a more timely manner. Furthermore, two human reviewers judged a portion of the computer-generated relationships and found an overall accuracy of 74% for our text-mining approach. This work has practical implications in enriching our existing knowledge of PGx gene–drug–disease relationships as well as suggesting crosslinks between ClinicalTrials.gov and other PGx knowledge bases.
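
    One plausible reading of the extraction step, sketched in Python: flag a candidate gene–drug–disease triple whenever terms from curated gene, drug, and disease dictionaries co-occur in a trial record. The record text and dictionaries below are invented, and the paper's actual pipeline is not reproduced here.

        # Dictionary-based co-occurrence extraction over one trial record.
        import re
        from itertools import product

        genes    = {"CYP2C19", "VKORC1"}
        drugs    = {"clopidogrel", "warfarin"}
        diseases = {"acute coronary syndrome", "atrial fibrillation"}

        record = ("This trial evaluates whether CYP2C19 loss-of-function "
                  "alleles alter clopidogrel response in patients with "
                  "acute coronary syndrome.")

        def mentions(terms: set, text: str) -> set:
            """Dictionary terms found in the text, case-insensitively."""
            return {t for t in terms if re.search(re.escape(t), text, re.I)}

        g, d, s = (mentions(t, record) for t in (genes, drugs, diseases))

        # Every co-occurring combination is a candidate PGx relationship,
        # to be validated against a curated database or by human review.
        for triple in product(g, d, s):
            print(triple)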

    Opening Books and the National Corpus of Graduate Research

    Virginia Tech University Libraries, in collaboration with the Virginia Tech Department of Computer Science and the Old Dominion University Department of Computer Science, request $505,214 in grant funding for a 3-year project, the goal of which is to bring computational access to book-length documents, demonstrating this with Electronic Theses and Dissertations (ETDs). The project is motivated by the following library and community needs. (1) Despite huge volumes of book-length documents in digital libraries, there is a lack of models offering effective and efficient computational access to these long documents. (2) Nationwide open access services for ETDs generally function at the metadata level. Much important knowledge and scientific data lie hidden in ETDs, and we need better tools to mine the content and facilitate the identification, discovery, and reuse of these important components. (3) A wide range of audiences can potentially benefit from this research, including but not limited to librarians, students, authors, educators, researchers, and other interested readers.

    We will answer the following key research questions: (1) How can we effectively identify and extract key parts (chapters, sections, tables, figures, citations), in both born-digital and page-image formats? (2) How can we develop effective automatic classification as well as chapter summarization techniques? (3) How can our ETD digital library most effectively serve stakeholders? In response to these questions, we plan to first compile an ETD corpus consisting of at least 50,000 documents from multiple institutional repositories. We will make the corpus inclusive and diverse, covering a range of degrees (master's and doctoral), years, graduate programs (STEM and non-STEM), and authors (from HBCUs and non-HBCUs). Testing first with this sample, we will investigate three major research areas (RAs), outlined below.

    RA 1: Document analysis and extraction, in which we experiment with machine/deep learning models for effective ETD segmentation and subsequent information extraction. Anticipated results of this research include new software tools that can be used and adapted by libraries for automatic extraction of structural metadata and document components (chapters, sections, figures, tables, citations, bibliographies) from ETDs, applied to both page-image and born-digital documents.

    RA 2: Adding value, in which we investigate techniques and build machine/deep learning models to automatically summarize and classify ETD chapters. Anticipated results of this research include software implementations of a chapter-level text summarizer that generates paragraph-length summaries of ETD chapters, and a multi-label classifier that assigns subject categories to ETD chapters. Our aim is to develop software that can be adapted or replicated by libraries to add value to their existing ETD services.

    RA 3: User services, in which we study users to identify and understand their information needs and information-seeking behaviors, so that we may establish corresponding requirements for the user interface and service components most useful for interacting with ETD content. Basing our design decisions on empirical evidence obtained from user analysis, we will construct a prototype system to demonstrate how these components can improve the user experience with ETD collections, and ultimately increase the capacity of libraries to provide access to ETDs and other long-form document content.

    Our project brings to bear cutting-edge computer science and machine/deep learning technologies to advance discovery, use, and potential for reuse of the knowledge hidden in the text of books and book-length documents. In addition, by focusing on libraries' ETD collections (where legal restrictions from book publishers generally are not applicable), our research will open this rich corpus of graduate research and scholarship, leverage ETDs to advance further research and education, and allow libraries to achieve greater impact.
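
    To illustrate one small piece of RA 1, here is a minimal Python sketch that segments a born-digital ETD, already converted to plain text, into chapters by matching heading lines. The heading pattern and sample text are assumptions for the sketch, not the project's actual tooling, which targets page images as well.

        # Split an ETD's plain text into chapters at heading lines.
        import re

        etd_text = """\
        CHAPTER 1 INTRODUCTION
        Electronic theses are long...
        CHAPTER 2 RELATED WORK
        Prior segmentation systems...
        """

        # Assumed heading shape: the word CHAPTER, a number, then a title.
        heading = re.compile(r"^\s*CHAPTER\s+(\d+)\s+(.*)$", re.M)

        matches = list(heading.finditer(etd_text))
        chapters = []
        for m, nxt in zip(matches, matches[1:] + [None]):
            end = nxt.start() if nxt else len(etd_text)
            chapters.append({
                "number": int(m.group(1)),
                "title": m.group(2).strip(),
                "body": etd_text[m.end():end].strip(),
            })

        for ch in chapters:
            print(ch["number"], ch["title"], "-", len(ch["body"]), "chars")

    Each extracted chapter body would then feed the RA 2 models, e.g., the chapter-level summarizer and the multi-label subject classifier.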

    A Process Model for Crowdsourcing: Insights from the Literature on Implementation

    The purpose of the current study is to systematically review the crowdsourcing literature, extract the activities it cites, and synthesise these activities into a general process model. To this end, we reviewed the literature on crowdsourcing methods, along with relevant case studies, and extracted the activities they describe as parts of crowdsourcing projects. The systematic review of the related literature and an in-depth analysis of the steps in those papers were followed by a synthesis of the extracted activities, resulting in an eleven-phase process model that covers all of the activities suggested by the literature. The paper then briefly discusses the activities in each phase and concludes with a number of implications for both academics and practitioners.