    Measuring Moral Reasoning using Moral Dilemmas: Evaluating Reliability, Validity, and Differential Item Functioning of the Behavioral Defining Issues Test (bDIT)

    We evaluated the reliability, validity, and differential item functioning (DIF) of a shorter version of the Defining Issues Test-1 (DIT-1), the behavioral DIT (bDIT), which measures the development of moral reasoning. A total of 353 college students (81 males, 271 females, 1 not reported; age M = 18.64 years, SD = 1.20 years) taking introductory psychology classes at a public university in a suburban area of the Southern United States participated in the present study. First, we examined the reliability of the bDIT using Cronbach’s α and its concurrent validity with the original DIT-1 using the disattenuated correlation. Second, we compared the test duration of the two measures. Third, we tested each question for DIF between males and females. The findings showed that, first, the bDIT had acceptable reliability and good concurrent validity; second, the test duration could be significantly shortened by employing the bDIT; and third, the DIF results indicated that the bDIT items did not favour either gender. Practical implications of these findings are discussed.
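
    The two statistics named above are simple to compute. Below is a minimal Python sketch (not the authors' code) of Cronbach's α and the disattenuated correlation; the array and variable names are hypothetical placeholders.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

def disattenuated_correlation(x, y, reliability_x, reliability_y):
    """Observed correlation corrected for measurement error in both instruments."""
    r_xy = np.corrcoef(x, y)[0, 1]
    return r_xy / np.sqrt(reliability_x * reliability_y)

# Hypothetical usage: bdit_items is a respondents-by-items matrix of scored answers,
# bdit_total and dit1_total are total scores on the two instruments.
# alpha_bdit = cronbach_alpha(bdit_items)
# r_corrected = disattenuated_correlation(bdit_total, dit1_total, alpha_bdit, alpha_dit1)
```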

    Validity study using factor analyses on the Defining Issues Test-2 in undergraduate populations

    Introduction: The Defining Issues Test (DIT) aimed to measure one’s moral judgment development in terms of moral reasoning. The Neo-Kohlbergian approach, an elaboration of Kohlbergian theory, focuses on the continuous development of postconventional moral reasoning and constitutes the theoretical basis of the DIT. However, very few studies have directly tested the internal structure of the DIT, which would indicate its construct validity. Objectives: Using the DIT-2, a later revision of the DIT, we examined whether a bi-factor model or a 3-factor CFA model showed a better model fit. The Neo-Kohlbergian theory of moral judgment development, which constitutes the theoretical basis for the DIT-2, proposes that moral judgment development occurs continuously and that it can be better explained with a soft-stage model. Given these assertions, we assumed that the bi-factor model, which includes the Schema-General Moral Judgment (SGMJ) factor, might be more consistent with Neo-Kohlbergian theory. Methods: We analyzed a large dataset collected from undergraduate students. We performed confirmatory factor analysis (CFA) via weighted least squares estimation. A 3-factor CFA model based on the DIT-2 manual and a bi-factor model were compared for model fit. The three factors in the 3-factor CFA were labeled according to the moral development schemas in Neo-Kohlbergian theory (i.e., personal interests, maintaining norms, and postconventional schemas). The bi-factor model included the SGMJ factor in addition to the three factors. Results: In general, the bi-factor model showed a better fit than the 3-factor CFA model, although both models reported acceptable fit indices. Conclusion: We found that the DIT-2 is a valid measure of the internal structure of moral reasoning development using both the 3-factor CFA and bi-factor models. In addition, we conclude that the soft-stage model posited by the Neo-Kohlbergian approach to moral judgment development is better supported by the bi-factor model tested in the present study.
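
    For readers unfamiliar with the two competing structures, the sketch below contrasts them using lavaan-style model syntax as supported by the Python package semopy. The item names (pi1 through pc4), the data frame df, and the estimator are hypothetical placeholders, not the authors' actual specification.

```python
# A minimal sketch contrasting a correlated 3-factor model with a bi-factor model,
# assuming hypothetical item names grouped by schema (pi*, mn*, pc*).
import semopy

three_factor = """
PI =~ pi1 + pi2 + pi3 + pi4
MN =~ mn1 + mn2 + mn3 + mn4
PC =~ pc1 + pc2 + pc3 + pc4
"""

bifactor = """
SGMJ =~ pi1 + pi2 + pi3 + pi4 + mn1 + mn2 + mn3 + mn4 + pc1 + pc2 + pc3 + pc4
PI =~ pi1 + pi2 + pi3 + pi4
MN =~ mn1 + mn2 + mn3 + mn4
PC =~ pc1 + pc2 + pc3 + pc4
SGMJ ~~ 0*PI
SGMJ ~~ 0*MN
SGMJ ~~ 0*PC
"""

# df is a pandas DataFrame of item responses (hypothetical).
# m1 = semopy.Model(three_factor); m1.fit(df)
# m2 = semopy.Model(bifactor);     m2.fit(df)
# semopy.calc_stats(m1); semopy.calc_stats(m2)   # compare CFI, RMSEA, etc.
```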

    ZeroQuant-HERO: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers

    Quantization techniques are pivotal in reducing the memory and computational demands of deep neural network inference. Existing solutions, such as ZeroQuant, offer dynamic quantization for models like BERT and GPT but overlook crucial memory-bound operators and the complexities of per-token quantization. Addressing these gaps, we present ZeroQuant-HERO, a novel, fully hardware-enhanced, robust, optimized post-training W8A8 quantization framework. The framework integrates both memory-bandwidth-bound and compute-intensive operators, aiming for optimal hardware performance. Additionally, it offers flexibility by allowing specific INT8 modules to switch to FP16/BF16 mode, enhancing accuracy.
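
    For context on what W8A8 means in practice, here is a minimal PyTorch sketch (not the ZeroQuant-HERO kernels) of symmetric per-tensor INT8 quantization of both weights and activations, with the INT8 matmul emulated in floating point.

```python
import torch

def quantize_symmetric_int8(x: torch.Tensor):
    """Symmetric per-tensor INT8 quantization: returns the int8 tensor and its scale."""
    x = x.float()
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

def w8a8_linear(x, w):
    """Emulated W8A8 linear layer: quantize activations and weights, multiply, dequantize."""
    qx, sx = quantize_symmetric_int8(x)
    qw, sw = quantize_symmetric_int8(w)
    # Real kernels accumulate in INT32 on tensor cores; float32 emulates that here.
    acc = qx.float() @ qw.float().t()
    return (acc * (sx * sw)).to(x.dtype)

# Hypothetical usage: a 768-to-3072 projection with FP16 inputs and weights.
# x = torch.randn(4, 768, dtype=torch.float16)
# w = torch.randn(3072, 768, dtype=torch.float16)
# y = w8a8_linear(x, w)   # approximates (x @ w.t()) with INT8 storage and compute
```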

    Translocation of phospholipase A2α to apoplasts is modulated by developmental stages and bacterial infection in Arabidopsis

    Phospholipase A2 (PLA2) hydrolyzes phospholipids at the sn-2 position to yield lysophospholipids and free fatty acids. Of the four paralogs expressed in Arabidopsis, the cellular functions of PLA2α in planta are poorly understood. The present study shows that PLA2α possesses unique characteristics in terms of spatiotemporal subcellular localization, as compared with the other paralogs that remain in the ER and/or Golgi apparatus during secretory processes. Only PLA2α is secreted out to extracellular spaces, and its secretion to apoplasts is modulated according to the developmental stages of plant tissues. Observation of PLA2α-RFP transgenic plants suggests that PLA2α localizes mostly at the Golgi bodies in actively growing leaf tissues, but is gradually translocated to apoplasts as the leaves become mature. When Pseudomonas syringae pv. tomato DC3000 carrying the avirulent factor avrRpm1 infects the apoplasts of host plants, PLA2α rapidly translocates to the apoplasts where bacteria attempt to become established. PLA2α promoter::GUS assays show that PLA2α gene expression is controlled in a developmental stage- and tissue-specific manner. It would be interesting to investigate whether PLA2α functions in plant defense responses at apoplasts, where secreted PLA2α confronts invading pathogens.

    Acceptability and feasibility of a multidomain harmonized data collection protocol in youth mental health

    Objective: To develop targeted treatment for young people experiencing mental illness, a better understanding of the biological, psychological, and social changes is required, particularly during the early stages of illness. To do this, large datasets need to be collected using standardized methods. A harmonized data collection protocol was tested in a youth mental health research setting to determine its acceptability and feasibility. Method: Eighteen participants completed the harmonization protocol, including a clinical interview, self-report measures, neurocognitive measures, and mock assessments of magnetic resonance imaging (MRI) and blood. The feasibility of the protocol was assessed by recording recruitment rates, study withdrawals, missing data, and protocol deviations. Subjective responses from participant surveys and focus groups were used to examine the acceptability of the protocol. Results: Twenty-eight young people were approached, 18 consented, and four did not complete the study. Most participants reported positive subjective impressions of the protocol as a whole and showed interest in participating in the study again, if given the opportunity. Participants generally perceived the MRI and neurocognitive tasks as interesting and suggested that the assessment of clinical presentation could be shortened. Conclusion: Overall, the harmonized data collection protocol appeared to be feasible and generally well-accepted by participants. With a majority of participants finding the assessment of clinical presentation too long and repetitive, the authors have made suggestions to shorten the self-reports. The broader implementation of this protocol could allow researchers to create large datasets and better understand how psychopathological and neurobiological changes occur in young people with mental ill-health.

    Evaluation of the cobas Cdiff Test for Detection of Toxigenic Clostridium difficile in Stool Samples

    Nucleic acid amplification tests (NAATs) are reliable tools for the detection of toxigenic Clostridium difficile from unformed (liquid or soft) stool samples. The objective of this study was to evaluate the performance of the cobas Cdiff test on the cobas 4800 system using prospectively collected stool specimens from patients suspected of having C. difficile infection (CDI). The performance of the cobas Cdiff test was compared to the results of combined direct and broth-enriched toxigenic culture methods in a large, multicenter clinical trial. Additional discrepancy analysis was performed by using the Xpert C. difficile Epi test. Sample storage was evaluated by using contrived and fresh samples before and after storage at -20°C. Testing was performed on samples from 683 subjects (306 males and 377 females); 113 (16.5%) of 683 subjects were positive for toxigenic C. difficile by direct toxigenic culture, and 141 of 682 subjects were positive by the combined direct and enriched toxigenic culture method (reference method), for a prevalence rate of 20.7%. The sensitivity and specificity of the cobas Cdiff test compared to the combined direct and enriched culture method were 92.9% (131/141; 95% confidence interval [CI], 87.4% to 96.1%) and 98.7% (534/541; 95% CI, 97.4% to 99.4%), respectively. Discrepancy analysis using results for retested samples from a second NAAT (Xpert C. difficile/Epi test; Cepheid, Sunnyvale, CA) found no false-negative and 4 false-positive cobas Cdiff test results. There was no difference in positive and negative results between fresh and stored samples. These results support the use of the cobas Cdiff test as a robust aid in the diagnosis of CDI.
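
    The headline accuracy figures follow directly from the reported counts. The short Python sketch below recomputes sensitivity (131/141) and specificity (534/541) with Wilson score intervals; the interval method is an assumption, since the abstract does not name it, but it reproduces the quoted 95% CIs.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

sens, sens_ci = 131 / 141, wilson_ci(131, 141)   # ~92.9%, CI ~87.4% to 96.1%
spec, spec_ci = 534 / 541, wilson_ci(534, 541)   # ~98.7%, CI ~97.4% to 99.4%
print(f"sensitivity {sens:.1%}, 95% CI {sens_ci[0]:.1%} to {sens_ci[1]:.1%}")
print(f"specificity {spec:.1%}, 95% CI {spec_ci[0]:.1%} to {spec_ci[1]:.1%}")
```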

    ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks

    This study examines 4-bit quantization methods such as GPTQ in large language models (LLMs), highlighting GPTQ's overfitting and limited enhancement in zero-shot tasks. While prior works focus mainly on zero-shot measurement, we extend the task scope to more generative categories, such as code generation and abstractive summarization, in which we found that INT4 quantization can significantly underperform. However, simply shifting to higher-precision formats like FP6 has been particularly challenging, and thus overlooked, due to poor performance caused by the lack of sophisticated integration and system acceleration strategies on current AI hardware. Our results show that FP6, even with a coarse-grained quantization scheme, performs robustly across various algorithms and tasks, demonstrating its superiority in accuracy and versatility. Notably, with FP6 quantization, the StarCoder-15B model performs comparably to its FP16 counterpart in code generation, and smaller models such as the 406M model closely match their FP16 baselines in summarization; neither can be achieved with INT4. To better accommodate various AI hardware and achieve the best system performance, we propose a novel 4+2 design for FP6 that achieves latency similar to state-of-the-art INT4 fine-grained quantization. With our design, FP6 can become a promising alternative to the current 4-bit quantization methods used in LLMs.
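
    FP6 itself is simply a narrower floating-point layout. The sketch below illustrates coarse-grained (per-tensor) "fake quantization" to a sign + 3-exponent + 2-mantissa (E3M2) grid; the E3M2 layout, the exponent bias, and the per-tensor scaling are assumptions for illustration, not the paper's exact recipe.

```python
import torch

def fp6_e3m2_grid(bias: int = 3) -> torch.Tensor:
    """All non-negative magnitudes of a sign + 3-exponent + 2-mantissa layout."""
    vals = {0.0}
    for e in range(8):                       # 3 exponent bits
        for m in range(4):                   # 2 mantissa bits
            if e == 0:                       # subnormal values
                vals.add((m / 4) * 2.0 ** (1 - bias))
            else:                            # normal values
                vals.add((1 + m / 4) * 2.0 ** (e - bias))
    return torch.tensor(sorted(vals))

def fake_quant_fp6(x: torch.Tensor) -> torch.Tensor:
    """Coarse-grained (per-tensor) fake quantization of x onto the FP6 grid."""
    grid = fp6_e3m2_grid().to(x.dtype)
    scale = (x.abs().max() / grid.max()).clamp(min=1e-12)   # map tensor range onto grid
    scaled = (x / scale).unsqueeze(-1)
    idx = (scaled.abs() - grid).abs().argmin(dim=-1)        # nearest representable magnitude
    return torch.sign(x) * grid[idx] * scale

# w = torch.randn(256, 256)
# w_q = fake_quant_fp6(w)   # FP6 "fake-quantized" weights for accuracy studies
```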

    High-resolution temporal profiling of transcripts during Arabidopsis leaf senescence reveals a distinct chronology of processes and regulation

    Leaf senescence is an essential developmental process that has a dramatic impact on crop yields and involves altered regulation of thousands of genes and many metabolic and signaling pathways, resulting in major changes in the leaf. The regulation of senescence is complex, and although senescence regulatory genes have been characterized, there is little information on how these function in the global control of the process. We used microarray analysis to obtain a high-resolution time-course profile of gene expression during development of a single leaf over a 3-week period to senescence. A complex experimental design approach and a combination of methods were used to extract high-quality replicated data and to identify differentially expressed genes. The multiple time points enable the use of highly informative clustering to reveal the distinct stages at which signaling and metabolic pathways change. Analysis of motif enrichment, as well as comparison of transcription factor (TF) families showing altered expression over the time course, identifies clear groups of TFs active at different stages of leaf development and senescence. These data enable connection of metabolic processes, signaling pathways, and specific TF activity, which will underpin the development of network models to elucidate the process of senescence.
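
    The clustering step described above can be illustrated with a generic sketch: z-score each gene's temporal profile and group the profiles with k-means, so that each cluster corresponds to a characteristic timing of up- or down-regulation. The data shapes, cluster count, and use of scikit-learn are assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# expr: genes x time points (e.g., log2 expression at each sampling day) -- hypothetical
rng = np.random.default_rng(0)
expr = rng.normal(size=(5000, 22))

# z-score each gene's profile so clustering reflects the shape of the time course,
# not the absolute expression level
profiles = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(profiles)
labels = km.labels_            # cluster id per gene
centres = km.cluster_centers_  # mean temporal profile of each cluster
# Inspecting `centres` shows at which time points each group of genes switches on or off.
```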

    Uncertainty, evidence and irrecoverable costs: informing approval, pricing and research decisions for health technologies

    The general issue of balancing the value of evidence about the performance of a technology against the value of access to a technology is central to a number of policy questions. Establishing the key principles of what assessments are needed, as well as how they should be made, will enable them to be addressed in an explicit and transparent manner. This report presents the key findings from MRC- and NIHR-funded research which aimed to: i) establish the key principles of what assessments are needed to inform an Only in Research (OIR) or Approval with Research (AWR) recommendation; ii) evaluate previous NICE guidance where OIR or AWR recommendations were made or considered; iii) evaluate a range of alternative options to establish the criteria, additional information, and/or analysis which could be made available to support the assessments needed to inform an OIR or AWR recommendation; and iv) provide a series of final recommendations, with the involvement of key stakeholders, establishing both the key principles and associated criteria that might guide OIR and AWR recommendations, identifying what, if any, additional information or analysis might be included in the Technology Appraisal process, and how such recommendations might be more likely to be implemented through publicly funded and sponsored research. The key principles and the assessments and judgments required are discussed in Section 2. The sequence of assessment and judgment is represented as an algorithm, which can also be summarised as a simple set of explicit criteria or a seven-point checklist of assessments. The application of this checklist to a series of four case studies in Section 3 informs consideration of whether such assessments can be made from existing information and analysis in current NICE appraisals, and of the circumstances in which additional information and/or analysis could be useful. In Section 4, some of the implications that this more explicit assessment of OIR and AWR might have for policy (e.g., NICE guidance and drug pricing), the process of appraisal (e.g., greater involvement of research commissioners), and methods of appraisal (e.g., whether additional information, evidence, and analysis should be required) are drawn together.