
    Is deck C an advantageous deck in the Iowa Gambling Task?

    Abstract
    Background: Dunn et al. performed a critical review identifying several problems with the Somatic Marker Hypothesis (SMH). Most of the arguments presented by Dunn focused on failures to replicate the skin conductance responses and somatic brain loops, but the review did not carefully reassess the core task of the SMH. In a related study, Lin, Chiu, and colleagues identified a serious problem, namely the "prominent deck B phenomenon" in the original IGT. Building on this observation, Lin and Chiu also posited that deck C, rather than deck A, was preferred by normal decision makers because of its good gain-loss frequency rather than its good final outcome. To verify this hypothesis, a modified IGT was designed with a high contrast of gain-loss value in each trial, with the aim of balancing decks A and C in terms of gain-loss frequency. Under the basic assumption of the IGT, participants should prefer deck C to deck A based on consideration of the final outcome. In contrast, under the gain-loss frequency prediction, participants should have roughly equal preferences for decks A and C.
    Methods: This investigation recruited 48 college students (24 males and 24 females) as participants. A two-stage IGT with high-contrast gain-loss values was used to examine the deck C argument. Each participant completed the modified IGT twice and immediately afterwards was administered a questionnaire assessing their conscious knowledge of the decks and their final preferences following the game.
    Results: The experimental results supported the gain-loss frequency predictions: participants chose deck C with nearly the same frequency as deck A, despite deck C having a better final outcome than deck A. A "sunken deck C" phenomenon is clearly identified in this version of the IGT, which balances gain-loss frequency, and it appears not only during the first stage but also during the second stage of the game. In addition, the questionnaires indicated that normal decision makers disliked deck C at the conscious (explicit) level.
    Conclusion: In the modified version of the IGT, deck C was no longer preferred by normal decision makers, despite having a better long-term outcome than deck A. This study identified two problems with the original IGT. First, the gain-loss frequency between decks A and C is only pseudo-balanced. Second, this hidden confound has led most IGT-related studies to misinterpret the effect of gain-loss frequency as sensitivity to long-term outcomes, and even to overstate the foresight of normal decision makers.

    Is deck B a disadvantageous deck in the Iowa Gambling Task?

    BACKGROUND: The Iowa gambling task (IGT) is a popular test for examining monetary decision behavior under uncertainty. The review article by Dunn et al. highlighted the difficult-to-explain "prominent deck B" phenomenon, namely that normal decision makers prefer the bad final-outcome deck B to the good final-outcome decks C or D. This phenomenon was demonstrated especially clearly by Wilder et al. and Toplak et al. The "prominent deck B" phenomenon is inconsistent with the basic assumption of the IGT; however, most IGT-related studies report the "summation" of bad decks A and B when presenting their data, thereby sidestepping the problem posed by deck B. METHODS: To verify the "prominent deck B" phenomenon, this study used a two-stage simple version of the IGT, namely an AACC and BBDD version, which possesses a balanced gain-loss structure between advantageous and disadvantageous decks and facilitates monitoring of participant preferences after the first 100 trials. RESULTS: The experimental results suggested that the "prominent deck B" phenomenon exists in the IGT. Moreover, participants could not suppress their preference for deck B under the uncertain condition, even during the second stage of the game. Although this result is incongruent with the basic assumption of the IGT, an increasing number of studies are finding similar results. The results of the AACC and BBDD versions are congruent with the decision-making literature on gain-loss frequency. CONCLUSION: The experimental findings indicate that participants apply a "gain-stay, loss-shift" strategy to cope with situations involving uncertainty. This investigation found that even the largest loss in the IGT did not inspire decision makers to avoid choosing bad deck B.
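    The "gain-stay, loss-shift" account lends itself to a simple simulation. The sketch below (Python) is illustrative only: the payoff schedules are assumptions loosely modeled on the commonly cited IGT schedule, not the schedules used in this study, and the agent is a pure frequency follower that ignores final outcomes entirely.

```python
import random

# Hypothetical per-pick payoff schedules, loosely based on the commonly cited
# IGT decks: A and B lose 250 net per 10 picks, C and D gain 250 net. These
# numbers are illustrative assumptions, not the schedules used in the study.
DECKS = {
    "A": {"gain": 100, "losses": [150, 200, 250, 300, 350], "loss_prob": 0.5},
    "B": {"gain": 100, "losses": [1250],                    "loss_prob": 0.1},
    "C": {"gain": 50,  "losses": [25, 50, 75],              "loss_prob": 0.5},
    "D": {"gain": 50,  "losses": [250],                     "loss_prob": 0.1},
}

def draw(deck: str) -> tuple[int, bool]:
    """One pick: return (net payoff, whether a loss event occurred)."""
    d = DECKS[deck]
    if random.random() < d["loss_prob"]:
        return d["gain"] - random.choice(d["losses"]), True
    return d["gain"], False

def gain_stay_loss_shift(trials: int = 1000) -> dict[str, int]:
    """Stay on a deck after a pure gain; shift to a random other deck after
    any loss event, regardless of loss size or net outcome."""
    counts = {k: 0 for k in DECKS}
    deck = random.choice(list(DECKS))
    for _ in range(trials):
        counts[deck] += 1
        _, loss_event = draw(deck)
        if loss_event:
            deck = random.choice([k for k in DECKS if k != deck])
    return counts

random.seed(1)
print(gain_stay_loss_shift())
# The low-loss-frequency decks B and D dominate the choices, even though deck
# B has the worst long-term outcome: loss frequency, not final outcome, drives
# this agent's preference.
```

    Under this heuristic the expected run length on a deck is the reciprocal of its loss frequency, so the rare-loss decks B and D each attract roughly five times as many picks as A or C; preference for bad deck B falls out of loss frequency alone.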

    Managing cardiac arrest with refractory ventricular fibrillation in the emergency department: Conventional cardiopulmonary resuscitation versus extracorporeal cardiopulmonary resuscitation

    Abstract
    Aim: Refractory ventricular fibrillation, resistant to conventional cardiopulmonary resuscitation (CPR), is a life-threatening rhythm encountered in the emergency department. Although previous reports suggest that extracorporeal CPR can improve clinical outcomes in patients with prolonged cardiac arrest, the effectiveness of this novel strategy for refractory ventricular fibrillation is not known. We aimed to compare the clinical outcomes of patients with refractory ventricular fibrillation managed with conventional CPR or extracorporeal CPR in our institution.
    Method: This is a retrospective chart review from an emergency department in a tertiary referral medical center. We identified 209 patients presenting with cardiac arrest due to ventricular fibrillation between September 2011 and September 2013. Of these, 60 patients with ventricular fibrillation refractory to resuscitation for more than 10 min were enrolled. The clinical outcomes of patients who received either conventional CPR, including defibrillation, chest compression, and resuscitative medication (C-CPR, n = 40), or conventional CPR plus extracorporeal CPR (E-CPR, n = 20) were compared.
    Results: The overall survival rate was 35%, and 18.3% of patients were discharged with good neurological function. The mean duration of CPR was longer in the E-CPR group than in the C-CPR group (69.9 ± 49.6 min vs 34.3 ± 17.7 min, p = 0.0001). Patients receiving E-CPR had significantly higher rates of sustained return of spontaneous circulation (95.0% vs 47.5%, p = 0.0009) and of good neurological function at discharge (40.0% vs 7.5%, p = 0.0067). The survival rate in the E-CPR group was higher at discharge (50% vs 27.5%, p = 0.1512) and at 1 year after discharge (50% vs 20%, p = 0.0998), although these differences did not reach statistical significance.
    Conclusions: The management of refractory ventricular fibrillation in the emergency department remains challenging, as evidenced by the overall survival rate of 35% in this study. Patients with refractory ventricular fibrillation receiving E-CPR showed a trend toward higher survival rates and had significantly better neurological outcomes than those receiving C-CPR.

    Subspace decomposition and critical phase selection based cumulative quality analysis for multiphase batch processes

    Quality analysis and prediction are of great significance for ensuring consistent, high product quality in chemical engineering processes. However, previous methods have rarely analyzed the cumulative quality effect that is typical of batch processes: as time develops, the process variation determines the final product quality in a cumulative manner. Moreover, these methods cannot get an early sense of whether the quality is cumulative in nature. In this paper, a quantitative index is defined that can check ahead of time whether the product quality results from the accumulation of successive process variations, so that the cumulative quality effect can be addressed in quality analysis and prediction for batch processes. Several crucial issues are solved to explore the cumulative quality effect. First, a quality-relevant sequential phase partition method is proposed to separate the multiple phases of a batch process using the fast search and find of density peaks (FSFDP) clustering algorithm. Second, after phase partition, a phase-wise cumulative quality analysis method is proposed based on subspace decomposition, which extracts the non-repetitive quality-relevant information (NRQRI) from the process variation at each time within each phase. NRQRI refers to the quality-relevant process variation at each time that is orthogonal to that of previous times; it thus represents complementary quality information and is the key index for explaining quality variations cumulatively over time. Third, process-wise cumulative quality analysis is conducted, in which a critical phase selection strategy identifies critical-to-cumulative-quality phases, and quality predictions from the critical phases are integrated to exclude the influence of uncritical phases. Through this two-level cumulative quality analysis (phase-wise and process-wise), it is feasible to judge in advance whether the quality has a cumulative effect, and a proper quality prediction model can then be developed by identifying the critical-to-cumulative-quality phases. The feasibility and performance of the proposed algorithm are illustrated with a typical chemical engineering process, injection molding.
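    The core of the NRQRI idea, keeping at each time step only the quality-relevant variation that is orthogonal to what earlier steps already explained, can be sketched with ordinary least squares and Gram-Schmidt-style deflation. This is a hedged illustration of the general principle under simplifying assumptions (a single quality variable, a regression-based quality-relevant direction); it is not the paper's exact algorithm, and all names are made up.

```python
import numpy as np

def quality_relevant_vector(X_t: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares coefficient vector: the quality-relevant variation at time t."""
    w, *_ = np.linalg.lstsq(X_t, y, rcond=None)
    return w

def nrqri_strengths(X: list[np.ndarray], y: np.ndarray) -> list[float]:
    """For each time t, the size of the quality-relevant component orthogonal
    to what previous times already explained (the 'non-repetitive' part)."""
    basis: list[np.ndarray] = []      # orthonormal basis of explained directions
    strengths = []
    for X_t in X:
        w = quality_relevant_vector(X_t, y)
        for b in basis:               # deflate repetitive (already-seen) information
            w = w - (w @ b) * b
        s = float(np.linalg.norm(w))
        strengths.append(s)           # ~0 means time t adds no new quality information
        if s > 1e-8:
            basis.append(w / s)
    return strengths

# Toy data: 200 batches, 4 process variables, 3 time steps; quality depends on
# variable 0 at t=0 and variable 1 at t=1, and not at all on t=2.
rng = np.random.default_rng(0)
X = [rng.standard_normal((200, 4)) for _ in range(3)]
y = X[0][:, 0] + 0.5 * X[1][:, 1]
print(nrqri_strengths(X, y))          # roughly [1.0, 0.5, ~0.1]: t=2 adds little
```

    A vanishing strength at some time step signals that the step only repeats quality information already accumulated, which is exactly the cue used to separate cumulative from non-cumulative quality behavior.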

    Protective Effect of Caffeic Acid on Paclitaxel Induced Anti-Proliferation and Apoptosis of Lung Cancer Cells Involves NF-κB Pathway

    Caffeic acid (CA), a natural phenolic compound, is abundant in medicinal plants. CA possesses multiple biological effects, such as antibacterial and anticancer activities, but it has also been reported to induce forestomach and kidney tumors in a mouse model. Here we used two human lung cancer cell lines, A549 and H1299, to clarify the role of CA in cancer cell proliferation. Growth assays showed that CA moderately promoted the proliferation of the lung cancer cells. Furthermore, pre-treatment with CA rescued the proliferation inhibition induced by a sub-IC50 dose of paclitaxel (PTX), an anticancer drug. Western blotting showed that CA up-regulated the pro-survival proteins survivin and Bcl-2, downstream targets of NF-κB, consistent with the observation that CA induced nuclear translocation of NF-κB p65. Our study suggests that the pro-survival effect of CA on PTX-treated lung cancer cells is mediated through the NF-κB signaling pathway. This may provide mechanistic insight into the chemoresistance of cancer cells.

    Is the Clinical Version of the Iowa Gambling Task Relevant for Assessing Choice Behavior in Cases of Internet Addiction?

    Objective: A critical issue in research related to the Iowa gambling task (IGT) is the use of the alternative factors expected value and gain–loss frequency to distinguish between clinical cases and control groups. When the IGT has been used to examine cases of Internet addiction (IA), the literature reveals inconsistencies in the results. However, few studies have utilized the clinical version of the IGT (cIGT) to examine IA cases. The present study aims to resolve previous inconsistencies and to examine the validity of the cIGT by comparing the performance of controls with that of cases of Internet gaming disorder (IGD), a subtype of IA defined by the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders.
    Methods: The study recruited 23 participants with clinically diagnosed IGD and 38 age-matched control participants. Dependent variables were analyzed based on the basic assumptions of the IGT and the gain–loss frequency viewpoint.
    Results: The results showed no statistical difference between the two groups on most performance indices and therefore support the findings of most IGT-IA studies; in particular, expected value and gain–loss frequency did not distinguish the IGD cases from controls. However, participants in both groups were influenced by gain–loss frequency, revealing the existence of the prominent deck B phenomenon.
    Conclusion: The findings allow two interpretations. The first is that choice-behavior deficits do not constitute a characteristic feature of individuals diagnosed with IGD/IA. The second is that, as the cIGT was unable to distinguish the choice behavior of the IGD/IA participants from that of controls, the cIGT may not be relevant for assessing IGD based on the indices provided by the expected value and gain–loss frequency perspectives in the standard administration of the IGT.

    Theta dependence of SU(N) gauge theories in the presence of a topological term

    We review results concerning the theta dependence of 4D SU(N) gauge theories and QCD, where theta is the coefficient of the CP-violating topological term in the Lagrangian. In particular, we discuss the theta dependence in the large-N limit. Most results have been obtained within the lattice formulation of the theory via numerical simulations, which allow one to investigate the theta dependence of the ground-state energy and of the spectrum around theta = 0 by determining the moments of the topological charge distribution and their correlations with other observables. We discuss the various methods that have been employed to determine the topological susceptibility and the higher-order terms of the theta expansion. We review results at zero and finite temperature. We show that the results support the scenario obtained from general large-N scaling arguments, and in particular the Witten-Veneziano mechanism for explaining the U(1)_A problem. We also compare with results obtained by other approaches, especially in the large-N limit, where the issue has also been addressed using, for example, the AdS/CFT correspondence. We discuss issues related to the theta dependence in full QCD: the neutron electric dipole moment, the dependence of the topological susceptibility on the quark masses, and the U(1)_A symmetry breaking at finite temperature. We also consider the 2D CP(N-1) model, which is an interesting theoretical laboratory for studying issues related to topology. We review analytical results in the large-N limit and numerical results within its lattice formulation. Finally, we discuss the main features of the two-point correlation function of the topological charge density.
    Comment: A typo in Eq. (3.9) has been corrected. An additional subsection (5.2) has been inserted to demonstrate the nonrenormalizability of the relevant theta parameter in the presence of massive fermions, which implies that the continuum (a -> 0) limit must be taken keeping theta fixed.
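    For orientation, the central quantities of such reviews take a standard form. The display below gives the textbook θ expansion of the ground-state energy density and the Witten-Veneziano relation, in commonly used conventions that may differ in detail from those of this review:

```latex
% Theta expansion of the ground-state energy density around theta = 0, in
% terms of the topological susceptibility chi and the coefficients b_{2j}:
\[
  f(\theta) - f(0) = \frac{1}{2}\,\chi\,\theta^{2}
  \left( 1 + b_{2}\,\theta^{2} + b_{4}\,\theta^{4} + \cdots \right),
  \qquad
  \chi = \left.\frac{\langle Q^{2}\rangle}{V}\right|_{\theta=0} ,
\]
% where Q is the topological charge and V the spacetime volume. Large-N
% scaling, f(theta) = N^2 fbar(theta/N), implies chi = O(1) while
% b_{2j} = O(N^{-2j}). The Witten-Veneziano relation ties the pure-gauge
% susceptibility to the eta' mass (f_pi ~ 92 MeV convention):
\[
  \chi_{\mathrm{YM}} \simeq \frac{f_{\pi}^{2}}{2 N_{f}}
  \left( m_{\eta'}^{2} + m_{\eta}^{2} - 2 m_{K}^{2} \right)
  \approx (180~\mathrm{MeV})^{4} .
\]
```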

    DeepDyve: Dynamic Verification for Deep Neural Networks

    Deep neural networks (DNNs) have become one of the enabling technologies in many safety-critical applications, e.g., autonomous driving and medical image analysis. DNN systems, however, suffer from various kinds of threats, such as adversarial example attacks and fault injection attacks. While many defense methods have been proposed against maliciously crafted inputs, solutions against faults present in the DNN system itself (e.g., in parameters and calculations) are far less explored. In this paper, we develop a novel lightweight fault-tolerant solution for DNN-based systems, namely DeepDyve, which employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification. The key to enabling such lightweight checking is that the smaller neural network only needs to produce approximate results for the original task without sacrificing much fault coverage. We develop efficient and effective architecture and task exploration techniques to achieve an optimized risk/overhead trade-off in DeepDyve. Experimental results show that DeepDyve can reduce 90% of the risks at around 10% overhead.
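    The dynamic-verification idea is easy to state in code: run a much smaller checker model alongside the deployed network and treat disagreement as a symptom of a fault. The sketch below is a generic illustration with a toy linear "network" and a quantized copy standing in for the smaller checker; it is not the authors' implementation, which additionally explores checker architectures and task granularity to optimize the risk/overhead trade-off.

```python
import numpy as np

rng = np.random.default_rng(7)

# A one-layer stand-in for the deployed DNN; random weights keep the sketch
# self-contained (a real system would load a trained model).
W_MAIN = rng.standard_normal((256, 10))
# A coarsely quantized copy stands in for the far smaller checker network:
# the checker only needs to approximate the main model, not match it exactly.
W_CHECK = np.round(W_MAIN * 4) / 4

def predict(x: np.ndarray, weights: np.ndarray) -> int:
    return int(np.argmax(x @ weights))

def dynamic_verify(x: np.ndarray) -> tuple[int, bool]:
    """Run the cheap checker alongside the main model; disagreement is treated
    as a symptom of a fault and would trigger recovery (re-run, fail-over,
    alert) in a deployed system."""
    y_main = predict(x, W_MAIN)
    return y_main, y_main == predict(x, W_CHECK)

# On the clean model, the checker agrees with the main model most of the time.
X = rng.standard_normal((1000, 256))
print("clean agreement:", np.mean([dynamic_verify(x)[1] for x in X]))

# Inject a fault into the deployed model (e.g., one corrupted weight in
# memory): agreement drops sharply, so many faulty predictions get flagged.
W_MAIN[0, 0] = 1000.0
print("faulty agreement:", np.mean([dynamic_verify(x)[1] for x in X]))
```

    The design point this illustrates is that the checker trades accuracy for cost: it may occasionally disagree on a clean model (a false alarm), but a cheap approximate model is enough to expose most fault-induced output changes.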

    Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb^-1 of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with pT > 120 GeV and no leptons. Nine signal regions are considered, with increasing missing transverse momentum requirements between E_T^miss > 150 GeV and E_T^miss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar final-state topologies are presented.
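    For readers outside the field, the central kinematic variable is defined generically as follows (a standard definition, not the ATLAS-specific reconstruction details):

```latex
% Missing transverse momentum: the magnitude of the negative vector sum of
% the transverse momenta of all reconstructed objects in the event,
\[
  E_{\mathrm{T}}^{\mathrm{miss}}
  = \Bigl|\, - \sum_{i \in \mathrm{objects}} \vec{p}_{\mathrm{T},i} \,\Bigr| ,
\]
% so undetected particles (gravitons, dark-matter pairs, gravitinos) recoiling
% against an energetic jet appear as large missing transverse momentum.
```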