
    Maintaining good working experiences in the context of NCEA changes: Enablers and influences

    Based on findings from The National Survey of Schools project, this study aimed to examine the interactions between schools' professional learning and development (PLD) cultures, teachers' general attitudes towards NCEA changes, their equity-related attitudes towards NCEA changes, and their working experiences (morale and workload views). The participants were 749 teachers from Years 9-13 and Years 7-13 English-medium secondary schools who completed our national surveys. Data were analysed quantitatively through descriptive and exploratory techniques. Results suggested a positive association between a perceived culture of ongoing PLD in schools and teachers' general attitudes towards NCEA changes. Teachers who reported positive attitudes towards the NCEA changes in general were more likely to understand how these changes can improve outcomes for Māori learners, Pacific learners, and those with disabilities and those who need learning support. In addition, a strong culture of ongoing PLD was also positively associated with teachers' morale and workload views. The study has practical implications by indicating how teachers can be better supported to enact educational changes in Aotearoa New Zealand.

    Mixed displacement-pressure-phase field framework for finite strain fracture of nearly incompressible hyperelastic materials

    The widely favored phase field method (PFM) has encountered challenges in finite strain fracture modeling of nearly or truly incompressible hyperelastic materials. We identified that the underlying cause lies in the inherent contradiction between incompressibility and smeared crack opening. Drawing on the stiffness-degradation idea in PFM, we resolved this contradiction by loosening the incompressibility constraint of the damaged phase without affecting the incompressibility of the intact material. By modifying the perturbed Lagrangian approach, we derived a novel mixed formulation. For the finite element discretization, both the classical Q1/P0 and the high-order P2/P1 schemes are used. To ease mesh distortion at large strains, an adaptive mesh deletion technique is also developed. The validity and robustness of the proposed mixed framework are corroborated by four representative numerical examples. By comparing the performance of Q1/P0 and P2/P1, we conclude that the Q1/P0 formulation is a better choice for finite strain fracture in nearly incompressible cases. Moreover, the numerical examples also show that the combination of the proposed framework and methodology has great potential for simulating complex peeling and tearing problems.
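
    As a rough illustration of the "loosened constraint" idea (a sketch of how mixed perturbed-Lagrangian phase-field functionals are commonly written, not necessarily the exact functional of this paper; the notation g(d), \kappa, G_c, \ell, w(d), c_w is assumed), one may let the same degradation function g(d) that degrades the isochoric stiffness also degrade the pressure coupling:

    \Pi(\mathbf{u},p,d) = \int_\Omega \Big[ g(d)\,\psi_{\mathrm{iso}}(\bar{\mathbf{C}}) + g(d)\,p\,(J-1) - \tfrac{p^2}{2\kappa} \Big]\,\mathrm{d}\Omega + \int_\Omega \tfrac{G_c}{c_w}\Big( \tfrac{w(d)}{\ell} + \ell\,|\nabla d|^2 \Big)\,\mathrm{d}\Omega, \qquad g(d) = (1-d)^2.

    Stationarity with respect to p gives p = g(d)\,\kappa\,(J-1): where the material is intact (g \approx 1, \kappa large) the volumetric constraint J \approx 1 is enforced, while in the fully damaged phase (g \to 0) the pressure vanishes and crack opening is no longer penalized.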

    FAIRER: Fairness as Decision Rationale Alignment

    Deep neural networks (DNNs) have made significant progress, but often suffer from fairness issues, as deep models typically show distinct accuracy differences among certain subgroups (e.g., males and females). Existing research addresses this critical issue by employing fairness-aware loss functions to constrain the last-layer outputs and directly regularize DNNs. Although the fairness of DNNs is improved, it remains unclear how the trained network makes a fair prediction, which limits future fairness improvements. In this paper, we investigate fairness from the perspective of decision rationale and define the parameter parity score to characterize the fair decision process of networks by analyzing neuron influence in various subgroups. Extensive empirical studies show that unfairness can arise from unaligned decision rationales across subgroups. Existing fairness regularization terms fail to achieve decision rationale alignment because they only constrain last-layer outputs while ignoring intermediate neuron alignment. To address this issue, we formulate fairness as a new task, i.e., decision rationale alignment, which requires DNNs' neurons to have consistent responses across subgroups at both intermediate processes and the final prediction. To make this idea practical during optimization, we relax the naive objective function and propose gradient-guided parity alignment, which encourages gradient-weighted consistency of neurons across subgroups. Extensive experiments on a variety of datasets show that our method can significantly enhance fairness while sustaining a high level of accuracy, outperforming other approaches by a wide margin.
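
    A minimal sketch of what a gradient-weighted parity penalty of this kind could look like for a small fully connected network (hypothetical helper names, written in PyTorch for illustration; the paper's released implementation may differ):

    # Illustrative sketch: gradient-weighted neuron-parity penalty across two
    # subgroups (not the authors' released code).
    import torch
    import torch.nn as nn

    def hidden_activations(model: nn.Sequential, x: torch.Tensor):
        # Collect the output of every layer, including the final logits.
        acts, h = [], x
        for layer in model:
            h = layer(h)
            acts.append(h)
        return acts, h

    def parity_alignment_loss(model, x_a, y_a, x_b, y_b):
        # Task loss on both subgroups plus a per-layer alignment penalty.
        criterion = nn.CrossEntropyLoss()
        acts_a, out_a = hidden_activations(model, x_a)
        acts_b, out_b = hidden_activations(model, x_b)
        task_loss = criterion(out_a, y_a) + criterion(out_b, y_b)

        align = 0.0
        for h_a, h_b in zip(acts_a, acts_b):
            # Gradients of the task loss w.r.t. each subgroup's activations act
            # as per-neuron importance weights (create_graph keeps the penalty
            # differentiable so it can be trained).
            g_a, = torch.autograd.grad(task_loss, h_a, retain_graph=True, create_graph=True)
            g_b, = torch.autograd.grad(task_loss, h_b, retain_graph=True, create_graph=True)
            w_a = (g_a * h_a).mean(dim=0)  # mean gradient-weighted response per neuron
            w_b = (g_b * h_b).mean(dim=0)
            align = align + (w_a - w_b).pow(2).mean()
        return task_loss, align

    In training, the two terms would be combined as total = task_loss + lambda_align * align before calling total.backward(), with lambda_align a tunable weight.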

    A Survey on Fairness in Large Language Models

    Large language models (LLMs) have shown powerful performance and strong development prospects and are widely deployed in the real world. However, LLMs can capture social biases from unprocessed training data and propagate those biases to downstream tasks. Unfair LLM systems have undesirable social impacts and potential harms. In this paper, we provide a comprehensive review of related research on fairness in LLMs. First, for medium-scale LLMs, we introduce evaluation metrics and debiasing methods from the perspectives of intrinsic bias and extrinsic bias, respectively. Then, for large-scale LLMs, we introduce recent fairness research, including fairness evaluation, causes of bias, and debiasing methods. Finally, we discuss and provide insights on the challenges and future directions for the development of fairness in LLMs.
    Comment: 12 pages, 2 figures, 101 references
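
    As one concrete example of the intrinsic-bias metrics such surveys cover, a WEAT-style association test on pre-computed embeddings can be sketched as follows (illustrative only; the function names and use of NumPy are assumptions, not code from the survey):

    import numpy as np

    def _cos(u, v):
        # cosine similarity between two embedding vectors
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    def _assoc(w, A, B):
        # s(w, A, B): mean similarity of w to attribute set A minus attribute set B
        return np.mean([_cos(w, a) for a in A]) - np.mean([_cos(w, b) for b in B])

    def weat_effect_size(X, Y, A, B):
        # Effect size of the differential association of target sets X, Y with
        # attribute sets A, B; each argument is a list of embedding vectors.
        s_x = np.array([_assoc(x, A, B) for x in X])
        s_y = np.array([_assoc(y, A, B) for y in Y])
        pooled = np.concatenate([s_x, s_y])
        return (s_x.mean() - s_y.mean()) / pooled.std(ddof=1)

    Values near zero suggest little measured association between target and attribute sets; larger magnitudes suggest stronger bias in the embedding space.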