325 research outputs found

    Ethical, economic and efficient sector : is it a gamble? The case of New Zealand Health

    Full text link
    The New Zealand public sector has gone through major reform as a result of the fiscal deficit of 1984 (Kettl, 1997; Schwartz, 1997), resulting in a shift of emphasis from quality service provision to establishing financial supremacy (Kettl, 1997). This raises the question of how public sector employees balance their service objectives with financial ones and how ethics is negotiated in this process. Following this concern, this paper focuses on determining the effect of organisational variables, consisting of organisational policies in the District Health Boards (DHBs) and hospitals of New Zealand, on the ethical behaviour of managers and the ethical climate of these departments. The aim of this study is to increase our understanding of the ethical climate of the public health sector. Our findings suggest that little emphasis has been placed on ethics in the New Zealand health sector: there is no reward for employees who exhibit exemplary ethical behaviour, no hotline for consulting on or reporting ethical concerns, no detailed guidelines and policies, and not enough ethics-related training provided to staff.

    The Molecular Basis of Peanut Allergy

    Get PDF
    Peanut allergens can trigger a potent and sometimes dangerous immune response in an increasing number of people. The molecular structures of these allergens form the basis for understanding this response. This review describes the currently known peanut allergen structures and discusses how modifications, both enzymatic and non-enzymatic, affect digestion, innate immune recognition, and IgE interactions. The allergen structures help explain cross-reactivity among allergens from different sources, which is useful in improving patient diagnostics. Surprisingly, it was recently noted that similar short peptide sequences among unrelated peanut allergens could also be a source of cross-reactivity. The molecular features of peanut allergens continue to inform predictions and provide new research directions in the study of allergic disease.

    Dynamic inter-treatment information sharing for heterogeneous treatment effects estimation

    Get PDF
    Existing heterogeneous treatment effects learners, also known as conditional average treatment effects (CATE) learners, lack a general mechanism for end-to-end inter-treatment information sharing, and data must be split among potential outcome functions to train CATE learners, which can lead to biased estimates when observational datasets are limited. To address this issue, we propose a novel deep learning-based framework to train CATE learners that facilitates dynamic end-to-end information sharing among treatment groups. The framework is based on soft weight sharing of hypernetworks, which offers advantages such as parameter efficiency, faster training, and improved results. The proposed framework complements existing CATE learners and introduces a new class of uncertainty-aware CATE learners that we refer to as HyperCATE. We develop HyperCATE versions of commonly used CATE learners and evaluate them on the IHDP, ACIC-2016, and Twins benchmarks. Our experimental results show that the proposed framework reduces the CATE estimation error in counterfactual inference, with increasing effectiveness for smaller datasets.
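
    As a rough illustration of the soft weight sharing idea described in this abstract, the following sketch (in PyTorch; all class, layer, and variable names are hypothetical and not taken from the paper) uses a small hypernetwork that maps a learned treatment embedding to the parameters of an outcome head, so that both potential-outcome functions are trained end-to-end on the full dataset rather than on disjoint per-treatment splits.

```python
import torch
import torch.nn as nn

class HyperHead(nn.Module):
    """Minimal sketch: a hypernetwork generates the parameters of a linear
    outcome head from a learned treatment embedding (soft weight sharing)."""

    def __init__(self, x_dim: int, emb_dim: int = 8, n_treatments: int = 2):
        super().__init__()
        self.treat_emb = nn.Embedding(n_treatments, emb_dim)
        # Hypernetwork: treatment embedding -> (weights, bias) of the head.
        self.hyper = nn.Linear(emb_dim, x_dim + 1)
        self.body = nn.Sequential(nn.Linear(x_dim, x_dim), nn.ReLU())

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        h = self.body(x)                        # shared representation
        params = self.hyper(self.treat_emb(t))  # per-treatment head parameters
        w, b = params[:, :-1], params[:, -1:]   # split into weight and bias
        return (h * w).sum(dim=1, keepdim=True) + b  # batched linear head

# Usage: predict both potential outcomes for the same covariates.
model = HyperHead(x_dim=10)
x = torch.randn(32, 10)
y0 = model(x, torch.zeros(32, dtype=torch.long))  # outcome under control
y1 = model(x, torch.ones(32, dtype=torch.long))   # outcome under treatment
cate_hat = (y1 - y0).mean()                       # naive plug-in CATE estimate
```
    Because every prediction passes through the same body and the same hypernetwork, gradients from both treatment groups update shared parameters, which is the kind of inter-treatment information sharing the abstract refers to.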

    A brief review of hypernetworks in deep learning

    Get PDF
    Hypernetworks, or hypernets for short, are neural networks that generate weights for another neural network, known as the target network. They have emerged as a powerful deep learning technique that allows for greater flexibility, adaptability, dynamism, faster training, information sharing, and model compression. Hypernets have shown promising results in a variety of deep learning problems, including continual learning, causal inference, transfer learning, weight pruning, uncertainty quantification, zero-shot learning, natural language processing, and reinforcement learning. Despite their success across different problem settings, there is currently no comprehensive review available to inform researchers about the latest developments and to assist in utilizing hypernets. To fill this gap, we review the progress in hypernets. We present an illustrative example of training deep neural networks using hypernets and propose categorizing hypernets based on five design criteria: inputs, outputs, variability of inputs and outputs, and the architecture of hypernets. We also review applications of hypernets across different deep learning problem settings, followed by a discussion of general scenarios where hypernets can be effectively employed. Finally, we discuss the challenges and future directions that remain underexplored in the field of hypernets. We believe that hypernetworks have the potential to revolutionize the field of deep learning. They offer a new way to design and train neural networks, and they have the potential to improve the performance of deep learning models on a variety of tasks. Through this review, we aim to inspire further advancements in deep learning through hypernetworks.
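
    To make the definition of a hypernetwork concrete, here is a minimal self-contained sketch (separate from the review's own illustrative example; all names and sizes are assumptions) in which a hypernetwork maps a conditioning embedding to the full weight matrix and bias of a target linear layer, which is then applied functionally to the input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    """A hypernetwork produces the parameters of a target linear layer
    from a conditioning vector z (e.g. a task or layer embedding)."""

    def __init__(self, z_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Hypernet: z -> flattened weight matrix plus bias of the target layer.
        self.hyper = nn.Sequential(
            nn.Linear(z_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim * in_dim + out_dim),
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        params = self.hyper(z)                  # generated target parameters
        w = params[: self.out_dim * self.in_dim].view(self.out_dim, self.in_dim)
        b = params[self.out_dim * self.in_dim :]
        return F.linear(x, w, b)                # target layer forward pass

# Usage: the same target layer behaves differently for different embeddings z.
layer = HyperLinear(z_dim=4, in_dim=16, out_dim=8)
x = torch.randn(32, 16)
out_task_a = layer(x, torch.randn(4))
out_task_b = layer(x, torch.randn(4))
```
    Only the hypernetwork holds trainable parameters here; the target layer's weights are generated on the fly from z, which is the property behind the flexibility and information sharing discussed above.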

    Enhancing the Performance of Automated Grade Prediction in MOOC using Graph Representation Learning

    Full text link
    In recent years, Massive Open Online Courses (MOOCs) have gained significant traction as a rapidly growing phenomenon in online learning. Unlike traditional classrooms, MOOCs offer a unique opportunity to cater to a diverse audience from different backgrounds and geographical locations. Renowned universities and MOOC-specific providers, such as Coursera, offer MOOC courses on various subjects. Automated assessment tasks such as grade and early dropout prediction are necessary due to the high enrollment and the limited direct interaction between teachers and learners. However, current automated assessment approaches overlook the structural links between the different entities involved in the downstream tasks, such as students and courses. We hypothesize that these structural relationships, manifested through an interaction graph, contain valuable information that can enhance the performance of the task at hand. To validate this, we construct a unique knowledge graph for a large MOOC dataset, which will be publicly available to the research community. Furthermore, we utilize graph embedding techniques to extract latent structural information encoded in the interactions between entities in the dataset. These techniques do not require ground truth labels and can be utilized for various tasks. Finally, by combining entity-specific features, behavioral features, and the extracted structural features, we enhance the performance of predictive machine learning models in student assignment grade prediction. Our experiments demonstrate that structural features can significantly improve the predictive performance of downstream assessment tasks. The code and data are available at https://github.com/DSAatUSU/MOOPer_grade_prediction.
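
    The pipeline described above could look roughly like the following sketch (library choices, feature names, and toy values are assumptions, not taken from the paper's released code): learn unsupervised node embeddings from a student-course interaction graph with a simple DeepWalk-style procedure, then concatenate them with behavioral features before fitting a standard regressor for grade prediction.

```python
import networkx as nx
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestRegressor

# 1) Interaction graph: students and courses are nodes, enrolments are edges.
G = nx.Graph()
G.add_edges_from([("student_1", "course_a"), ("student_1", "course_b"),
                  ("student_2", "course_a"), ("student_3", "course_b")])

# 2) Unsupervised structural features via short uniform random walks embedded
#    with skip-gram (a simple stand-in for the graph embedding techniques
#    mentioned in the abstract; no ground-truth labels are needed).
def random_walks(graph, walks_per_node=10, walk_length=5, seed=0):
    rng = np.random.default_rng(seed)
    walks = []
    for node in graph.nodes:
        for _ in range(walks_per_node):
            walk = [node]
            for _ in range(walk_length - 1):
                walk.append(str(rng.choice(list(graph.neighbors(walk[-1])))))
            walks.append(walk)
    return walks

emb = Word2Vec(random_walks(G), vector_size=16, window=3, min_count=0, sg=1)

# 3) Concatenate structural embeddings with behavioral features, then fit a model.
students = ["student_1", "student_2", "student_3"]
behavioral = np.array([[12, 0.8], [3, 0.4], [7, 0.6]])   # e.g. logins, completion rate
X = np.hstack([np.stack([emb.wv[s] for s in students]), behavioral])
y = np.array([85.0, 60.0, 72.0])                          # toy assignment grades
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
```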

    Incremental Fermi Large Area Telescope fourth source catalog

    Full text link
    Article written by a large number of authors; only the first author, the name of the collaboration group (if any), and the authors affiliated with the UAM are referenced. We present an incremental version (4FGL-DR3, for Data Release 3) of the fourth Fermi Large Area Telescope (LAT) catalog of γ-ray sources. Based on the first 12 years of science data in the energy range from 50 MeV to 1 TeV, it contains 6658 sources. The analysis improves on that used for the 4FGL catalog over eight years of data: more sources are fit with curved spectra, we introduce a more robust spectral parameterization for pulsars, and we extend the spectral points to 1 TeV. The spectral parameters, spectral energy distributions, and associations are updated for all sources. Light curves are rebuilt for all sources with 1 yr intervals (not 2 month intervals). Among the 5064 original 4FGL sources, 16 were deleted, 112 are formally below the detection threshold over 12 yr (but are kept in the list), while 74 are newly associated, 10 have an improved association, and seven associations were withdrawn. Pulsars are split explicitly between young and millisecond pulsars. Pulsars and binaries newly detected in LAT sources, as well as more than 100 newly classified blazars, are reported. We add three extended sources and 1607 new point sources, mostly just above the detection threshold, among which eight are considered identified, and 699 have a plausible counterpart at other wavelengths. We discuss the degree-scale residuals to the global sky model and clusters of soft unassociated point sources close to the Galactic plane, which are possibly related to limitations of the interstellar emission model and missing extended sources.

    Most published meta-regression analyses based on aggregate data suffer from methodological pitfalls: a meta-epidemiological study.

    Get PDF
    BACKGROUND Due to clinical and methodological diversity, clinical studies included in meta-analyses often differ in ways that lead to differences in treatment effects across studies. Meta-regression analysis is generally recommended to explore associations between study-level characteristics and treatment effect; however, three key pitfalls of meta-regression may lead to invalid conclusions. Our aims were to determine the frequency of these three pitfalls of meta-regression analyses, examine characteristics associated with the occurrence of these pitfalls, and explore changes between 2002 and 2012. METHODS A meta-epidemiological study of studies that included aggregate data meta-regression analyses in the years 2002 and 2012. We assessed the prevalence of meta-regression analyses with at least 1 of 3 pitfalls: ecological fallacy, overfitting, and inappropriate methods to regress treatment effects against the risk of the analysed outcome. We used logistic regression to investigate study characteristics associated with pitfalls and examined differences between 2002 and 2012. RESULTS Our search yielded 580 studies with meta-analyses, of which 81 included meta-regression analyses with aggregated data. Fifty-seven meta-regression analyses were found to contain at least one pitfall (70%): 53 were susceptible to ecological fallacy (65%), 14 had a risk of overfitting (17%), and 5 inappropriately regressed treatment effects against the risk of the analysed outcome (6%). We found no difference in the prevalence of meta-regression analyses with methodological pitfalls between 2002 and 2012, nor any study-level characteristic that was clearly associated with the occurrence of any of the pitfalls. CONCLUSION The majority of meta-regression analyses based on aggregate data contain methodological pitfalls that may result in misleading findings.
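
    For readers unfamiliar with the setup, a standard random-effects meta-regression model on aggregate data (generic notation, not taken from the paper) is sketched below. The ecological fallacy arises because the covariate x_k is a study-level summary (for example, mean patient age in study k), so the coefficient describes an across-study association that need not hold at the patient level; overfitting arises when several such covariates are fitted to a small number of studies.

```latex
% Random-effects meta-regression on aggregate (study-level) data:
%   \hat{\theta}_k : estimated treatment effect in study k
%   x_k            : study-level covariate (e.g. mean age, publication year)
%   u_k            : between-study heterogeneity
%   \varepsilon_k  : within-study sampling error with known variance v_k
\[
\hat{\theta}_k = \beta_0 + \beta_1 x_k + u_k + \varepsilon_k,
\qquad u_k \sim \mathcal{N}(0, \tau^2),
\qquad \varepsilon_k \sim \mathcal{N}(0, v_k)
\]
```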