
    The why, when, and how of computing in biology classrooms [version 1; peer review: 2 approved]

    Many biologists are interested in teaching computing skills or using computing in the classroom, despite not being formally trained in these skills themselves. As a result, many individuals are independently searching for resources and methods to teach them. Recent years have seen an expansion of new technologies for delivering course content interactively, and educational research provides insights into how learners absorb and process information during interactive learning. In this review, we discuss the value of teaching foundational computing skills to biologists, along with strategies and tools for doing so. We also review the literature on teaching practices that support the development of these skills, paying special attention to the needs of diverse learners and considering how different ways of delivering course content can be leveraged to provide a more inclusive classroom experience. Our goal is to enable biologists to teach computational skills and use computing in the classroom successfully.

    An Open Web-Based Module Developed to Advance Data-Driven Hydrologic Process Learning

    The era of ‘big data’ promises to provide new hydrologic insights, and open web-based platforms are being developed and adopted by the hydrologic science community to harness these datasets and data services. This shift accompanies advances in hydrology education and the growth of web-based hydrology learning modules, but their capacity to utilize emerging open platforms and data services to enhance student learning through data-driven activities remains largely untapped. Given that generic equations may not easily translate into local or regional solutions, teaching students to explore how well models or equations work in particular settings, or to answer specific problems using real data, is essential. This article introduces an open web-based module developed to advance data-driven hydrologic process learning, targeting upper-level undergraduate and early graduate students in hydrology and engineering. The module was developed and deployed on the HydroLearn open educational platform, which provides a formal pedagogical structure for developing effective problem-based learning activities. We found that data-driven learning activities utilizing collaborative open web platforms like CUAHSI HydroShare and JupyterHub to store and run computational notebooks allowed students to access and work with datasets for systems of personal interest and promoted critical evaluation of results and assumptions. Initial student feedback was generally positive, but also highlighted challenges, including troubleshooting and future-proofing difficulties and some resistance to programming and new software. Opportunities to further enhance hydrology learning include better articulating the benefits of coding and open web platforms upfront, incorporating additional user-support tools, and focusing methods and questions on implementing and adapting notebooks to explore fundamental processes rather than tools and syntax.
The profound shift in the field of hydrology toward big data, open data services, and reproducible research practices requires hydrology instructors to rethink traditional content delivery and focus instruction on harnessing these datasets and practices in the preparation of future hydrologists and engineers.
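The kind of data-driven notebook activity described above can be illustrated with a minimal sketch: computing a flow-duration curve from a daily streamflow record. The data here are synthetic stand-ins; in the module itself, observations would come from a service such as CUAHSI HydroShare.

```python
# Minimal sketch of a data-driven notebook exercise: a flow-duration
# curve from daily streamflow values (synthetic data used here).
import numpy as np

def flow_duration_curve(flows):
    """Return exceedance probabilities (%) and flows sorted descending."""
    q = np.sort(np.asarray(flows, dtype=float))[::-1]   # descending
    ranks = np.arange(1, len(q) + 1)
    exceedance = 100.0 * ranks / (len(q) + 1)           # Weibull plotting position
    return exceedance, q

# Synthetic stand-in for one year of observed daily flows
rng = np.random.default_rng(0)
flows = rng.lognormal(mean=2.0, sigma=0.8, size=365)
p, q = flow_duration_curve(flows)
q50 = np.interp(50.0, p, q)   # flow exceeded 50% of the time
print(f"Median (Q50) flow: {q50:.1f} m^3/s")
```

Students could swap the synthetic record for a gauge of personal interest and compare the resulting curve against a regional regression equation, which is the kind of critical evaluation the module aims to promote.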

    Deriving statistical inference from the application of artificial neural networks to clinical metabolomics data

    Metabolomics data are complex, with a high degree of multicollinearity. As such, multivariate linear projection methods such as partial least squares discriminant analysis (PLS-DA) have become standard. Non-linear projection methods, typified by Artificial Neural Networks (ANNs), may be more appropriate for modelling potential non-linear latent covariance; however, they are not widely used due to the difficulty of deriving statistical inference, and thus biological interpretation, from them. Herein, we illustrate the utility of ANNs for clinical metabolomics using publicly available data sets and develop an open framework for deriving and visualising statistical inference from ANNs equivalent to standard PLS-DA methods.
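One common route from an ANN to interpretable statistics, sketched below on synthetic data, is to pair a small classifier with permutation-based feature importance. This is a generic illustration of the idea, not the authors' published framework, and the data and network size are arbitrary.

```python
# Sketch: fit a small ANN classifier, then use permutation importance
# to rank features, recovering interpretable statistics from a
# non-linear model (synthetic data; not the authors' framework).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n_samples, n_features = 120, 10
X = rng.normal(size=(n_samples, n_features))
# Class label depends non-linearly on the first two "metabolites"
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 2).astype(int)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
result = permutation_importance(ann, X, y, n_repeats=30, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("Features ranked by permutation importance:", ranking.tolist())
```

A linear method like PLS-DA would struggle with this purely quadratic class boundary, which is the motivation for non-linear models given in the abstract.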

    Faculty Of Education UNHI


    Neutron-Induced Scintillation in Organics

    Neutrons are widely used as probes of matter to study materials in a broad range of fields from physics, chemistry and medicine to material sciences. Any application utilizing neutrons needs to employ a well-understood and optimized neutron-detector system. This thesis is centered on fundamental aspects of neutron-detector development, including the establishment of the Source Testing Facility at Lund University, experimental methods for the in-depth characterization of scintillator-based neutron detectors and analytical and computational methods for the precise interpretation of results. It focuses on the response of liquid organic scintillators to fast-neutron and gamma-ray irradiations, specifically for NE 213A, EJ 305, EJ 331 and EJ 321P. A simulation-based method for detector calibration was developed which allowed for the use of polyenergetic gamma-ray sources in this low energy-resolution environment. With an actinide/beryllium neutron source and a time-of-flight setup, beams of energy-tagged neutrons were used to study the energy-dependent behaviour of the intrinsic pulse-shape of NE 213A and EJ 305 scintillators. The results demonstrated the advantages of the neutron-tagging method and how the combination of neutron tagging and pulse-shape discrimination can give deeper insight into backgrounds resulting from inelastic neutron scattering. A comprehensive characterization of the neutron scintillation-light yield for NE 213A, EJ 305, EJ 331 and EJ 321P was also performed. It employed the simulation-based calibrations to confirm existing light-yield parametrizations for NE 213A and EJ 305, and resulted in light-yield parametrizations for EJ 331 and EJ 321P extracted for the first time from data. 
In addition to the development of a simulation-based framework for the study of neutron-induced scintillation in organic scintillators, the methods and results presented in this thesis lay the foundation for future source-based neutron-tagging efforts and scintillator-detector research and development.
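The pulse-shape discrimination mentioned above exploits the larger slow scintillation component of neutron-induced pulses. A simplified charge-comparison sketch follows; the decay constants and gate boundary are illustrative values, not measured NE 213A parameters.

```python
# Simplified charge-comparison pulse-shape discrimination (PSD): the
# tail-to-total charge ratio separates neutron pulses (rich in the
# slow component) from gamma-ray pulses. Illustrative shapes only.
import numpy as np

def psd_ratio(pulse, tail_start):
    """Tail charge divided by total charge for a digitized pulse."""
    pulse = np.asarray(pulse, dtype=float)
    return pulse[tail_start:].sum() / pulse.sum()

t = np.arange(200)  # sample index
gamma = np.exp(-t / 5.0)                                     # fast decay only
neutron = 0.8 * np.exp(-t / 5.0) + 0.2 * np.exp(-t / 60.0)   # added slow component
r_gamma = psd_ratio(gamma, tail_start=25)
r_neutron = psd_ratio(neutron, tail_start=25)
print(f"PSD ratio  gamma: {r_gamma:.3f}  neutron: {r_neutron:.3f}")
```

In a real measurement the ratio is computed event by event, and, as the thesis notes, combining it with neutron tagging helps separate genuine fast-neutron events from inelastic-scattering backgrounds.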

    Scholarly Communication Librarianship and Open Knowledge

    The intersection of scholarly communication librarianship and open education offers a unique opportunity to expand knowledge of scholarly communication topics in both education and practice. Open resources can address the gap in teaching timely and critical scholarly communication topics—copyright in teaching and research environments, academic publishing, emerging modes of scholarship, impact measurement—while increasing access to resources and equitable participation in education and scholarly communication. Scholarly Communication Librarianship and Open Knowledge is an open textbook and practitioner’s guide that collects theory, practice, and case studies from nearly 80 experts in scholarly communication and open education. It is divided into three parts:
    * What is Scholarly Communication?
    * Scholarly Communication and Open Culture
    * Voices from the Field: Perspectives, Intersections, and Case Studies
    The book delves into the economic, social, policy, and legal aspects of scholarly communication, as well as open access, open data, open education, and open science and infrastructure. Practitioners provide insight into the relationship between university presses and academic libraries, collection development as operational scholarly communication, and the challenge that promotion and tenure pose for open access. Scholarly Communication Librarianship and Open Knowledge is a thorough guide meant to increase instruction on scholarly communication and open education issues and practices so library workers can continue to meet the changing needs of students and faculty. It is also a political statement about the future to which we aspire and a challenge to the industrial, commercial, capitalistic tendencies encroaching on higher education. Students, readers, educators, and adaptors of this resource can find and embrace these themes throughout the text and embody them in their work.

    Send frequency prediction on email marketing

    Email Marketing is a form of direct marketing that uses email as a means of commercial communication. In a broader perspective, any email sent to a potential or current subscriber can also be considered email marketing. Subscribers therefore receive several communications throughout the day; newly arriving messages reduce the visibility of older emails and, consequently, open rates. Considering that some subscribers prefer to open and read their communications in the morning, others in the afternoon, and some at night, it is necessary to send each communication at a time that maximizes its visibility, leading to higher open rates and greater subscriber engagement with the sending entity. This thesis presents a solution for sending marketing communications to subscribers or potential subscribers at the right time. Its contribution consists of a segmented model that applies a traditional clustering algorithm to the information exchanged between companies and their subscribers. The model then implements a parallel ensemble approach, using simple averaging and stacking techniques with trained regression algorithms (RF, Linear Regression, KNN, and SVR) and a deep learning algorithm (RNNs), to determine the best time to send email communications. The implementation uses a dataset provided by the company E-goi to train and test this approach. The results indicate that, of the trained ML algorithms, KNN is best suited to predicting the best time to send email communications, and of the two parallel ensemble techniques, stacking is the more suitable.
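The stacking side of the ensemble can be sketched with the same four base learners named in the abstract. The data below are synthetic placeholders; the E-goi dataset, the RNN learner, and the clustering-based segmentation step are not reproduced here.

```python
# Sketch of a stacking ensemble over RF, Linear Regression, KNN, and
# SVR, as in the thesis, on synthetic "send hour" data (placeholder
# features; not the E-goi dataset).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = rng.uniform(0, 24, size=(300, 4))          # e.g. past open-time features (hours)
y = X.mean(axis=1) + rng.normal(0, 0.5, 300)   # target "best send hour"

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("lr", LinearRegression()),
                ("knn", KNeighborsRegressor(n_neighbors=5)),
                ("svr", SVR())],
    final_estimator=LinearRegression(),
)
stack.fit(X, y)
print("Predicted send hour:", stack.predict(X[:1]).round(2))
```

Stacking trains the final estimator on out-of-fold predictions of the base learners, which is what lets it outperform simple averaging when one base model (here, potentially KNN) dominates in some regions of the feature space.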

    Qualitative Analyse und Modellierung des wissenschaftlichen Arbeitens

    This Master's thesis provides an overview of the existing literature on the state of digitalization in humanities research and on the role of excerpting and note-taking during research. The findings from the literature are confirmed by a series of interviews analyzed on the basis of Grounded Theory. Drawing on eleven interviews with doctoral and Master's students, an informal activity model of (humanities) scholarly work is constructed. Incorporating the state of research in Personal Information Management, a Concurrent Task Tree model for digital assistance in humanities research is then presented. Building on this, a prototype for evaluating a silent execution and translation assistant was developed and tested in the laboratory. Contrary to expectations, using the prototype did not increase efficiency when summarizing a text source. At the same time, it was confirmed that using an eye tracker and a webcam makes it possible to locate paper notes within the digital source text. The interview analysis also revealed two types of reference management, which underline the importance of excerpts and should inform the future development of reference-management software for humanities scholars.