
    Is deck C an advantageous deck in the Iowa Gambling Task?

    Background: Dunn et al. performed a critical review identifying problems with the Somatic Marker Hypothesis (SMH). Most of their arguments focused on failures to replicate skin conductance responses and somatic brain loops, but the review did not carefully reassess the core task of the SMH, the Iowa Gambling Task (IGT). In a related study, Lin, Chiu, and colleagues identified a serious problem in the original IGT, namely the "prominent deck B" phenomenon. Building on this observation, they also posited that deck C, rather than deck A, is preferred by normal decision makers because of its good gain-loss frequency rather than its good final outcome. To verify this hypothesis, a modified IGT was designed with a high contrast of gain-loss values on each trial, with the aim of balancing decks A and C in gain-loss frequency. Under the basic assumption of the IGT, participants should prefer deck C to deck A on the basis of final outcome; under the gain-loss frequency prediction, they should show roughly equal preference for decks A and C.
    Methods: This investigation recruited 48 college students (24 males and 24 females) as participants. A two-stage IGT with high-contrast gain-loss values was used to examine the deck C argument. Each participant completed the modified IGT twice and immediately afterwards answered a questionnaire assessing their conscious knowledge of the game and their final deck preferences.
    Results: The experimental results supported the gain-loss frequency prediction: participants chose deck C with nearly the same frequency as deck A, despite deck C having a better final outcome. This "sunken deck C" phenomenon is clearly identified in this version of the IGT, which balances gain-loss frequency. Moreover, the phenomenon appears not only during the first stage but also during the second stage of the IGT. In addition, the questionnaires indicated that normal decision makers disliked deck C at the conscious (explicit) level.
    Conclusion: In the modified version of the IGT, deck C was no longer preferred by normal decision makers, despite having a better long-term outcome than deck A. This study identified two problems in the original IGT. First, the gain-loss frequencies of decks A and C are pseudo-balanced. Second, this hidden imbalance has led most IGT-related studies to misinterpret the effect of gain-loss frequency in situations involving long-term outcomes, and even to overstate the foresight of normal decision makers.
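The final-outcome/frequency distinction at issue can be made concrete with a short sketch. The payoff values below follow the commonly cited per-10-card schedule of the original IGT; they are an assumption here, since the abstract does not list them. Under that schedule, decks A and C nominally lose on five of every ten cards, yet their net outcomes differ in sign:

```python
# Sketch: final outcome vs. gain-loss frequency per 10-card block.
# Payoffs are the commonly cited original-IGT values (assumed, not
# taken from this abstract).
DECKS = {
    "A": {"gain": 100, "losses": [150, 200, 250, 300, 350]},  # 5 losses / 10 cards
    "B": {"gain": 100, "losses": [1250]},                     # 1 loss  / 10 cards
    "C": {"gain": 50,  "losses": [25, 50, 50, 50, 75]},       # 5 losses / 10 cards
    "D": {"gain": 50,  "losses": [250]},                      # 1 loss  / 10 cards
}

for name, deck in DECKS.items():
    net = 10 * deck["gain"] - sum(deck["losses"])  # final outcome per block
    loss_freq = len(deck["losses"]) / 10           # gain-loss frequency
    print(f"deck {name}: net per 10 cards = {net:+d}, loss frequency = {loss_freq:.1f}")
```

The modified IGT described above raises the contrast of gain-loss values so that this nominal A/C frequency balance becomes a real one, isolating final outcome as the only remaining difference.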

    On the Impossibility of General Parallel Fast-Forwarding of Hamiltonian Simulation

    Hamiltonian simulation is one of the most important problems in quantum computing. There have been extensive efforts to design faster simulation algorithms, and, as expected, the evolution time T greatly affects algorithm runtime. While some specific types of Hamiltonians can be fast-forwarded, i.e., simulated in time o(T), for some large classes of Hamiltonians (e.g., all local/sparse Hamiltonians) existing simulation algorithms require running time at least linear in the evolution time T. On the other hand, while there exist Ω(T) circuit-size lower bounds for some large classes of Hamiltonians, these lower bounds do not rule out simulating a Hamiltonian with a large but "low-depth" circuit by running things in parallel. As a result, physical systems whose size scales with T could potentially perform a fast-forwarded simulation. It is therefore intriguing to ask whether fast Hamiltonian simulation is achievable with the power of parallelism. In this work, we give a negative answer to this open problem in various settings. In the oracle model, we prove that there are time-independent sparse Hamiltonians that cannot be simulated by any oracle circuit of depth o(T). In the plain model, relying on the random oracle heuristic, we show that there exist time-independent local Hamiltonians and time-dependent geometrically local Hamiltonians on n qubits that cannot be simulated by circuits of depth o(T/n^c), where c is a constant. Lastly, we generalize these results and show that any simulator that is itself a geometrically local Hamiltonian cannot perform the simulation much faster than parallel quantum algorithms.

    Is deck B a disadvantageous deck in the Iowa Gambling Task?

    BACKGROUND: The Iowa Gambling Task (IGT) is a popular test for examining monetary decision behavior under uncertainty. Dunn et al.'s review article revealed the difficult-to-explain "prominent deck B" phenomenon: normal decision makers prefer the bad final-outcome deck B to the good final-outcome decks C or D. This phenomenon was demonstrated especially clearly by Wilder et al. and Toplak et al. The "prominent deck B" phenomenon is inconsistent with the basic assumption of the IGT; however, most IGT-related studies present their data as the "summation" of bad decks A and B, thereby masking the problem with deck B. METHODS: To verify the "prominent deck B" phenomenon, this study used a two-stage simple version of the IGT, namely AACC and BBDD versions, which possess a balanced gain-loss structure between advantageous and disadvantageous decks and facilitate monitoring of participant preferences after the first 100 trials. RESULTS: The experimental results suggest that the "prominent deck B" phenomenon exists in the IGT. Moreover, participants could not suppress their preference for deck B under uncertainty, even during the second stage of the game. Although this result is incongruent with the basic assumption of the IGT, an increasing number of studies report similar findings. The results of the AACC and BBDD versions are congruent with the decision-making literature on gain-loss frequency. CONCLUSION: Based on the experimental findings, participants can apply a "gain-stay, loss-shift" strategy to cope with situations involving uncertainty. Even the largest loss in the IGT did not prompt decision makers to avoid the bad deck B.
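The "gain-stay, loss-shift" choice rule mentioned in the conclusion can be sketched in a few lines. The payoff probabilities below are hypothetical stand-ins (not the study's schedule), chosen only so that one pair of decks loses frequently and the other rarely:

```python
import random

rng = random.Random(42)

def demo_payoff(deck):
    """Hypothetical payoff: decks B and D lose rarely, A and C lose often.
    These probabilities are illustrative, not the task's actual schedule."""
    p_loss = {"A": 0.5, "B": 0.1, "C": 0.5, "D": 0.1}[deck]
    return -250 if rng.random() < p_loss else 100

def gain_stay_loss_shift(n_trials=100, decks="ABCD"):
    """'Gain-stay, loss-shift': repeat the same deck after a gain,
    switch to a different deck after a loss."""
    choice = rng.choice(decks)
    counts = {d: 0 for d in decks}
    for _ in range(n_trials):
        counts[choice] += 1
        if demo_payoff(choice) < 0:  # loss -> shift to another deck
            choice = rng.choice([d for d in decks if d != choice])
        # gain -> stay on the current deck
    return counts

print(gain_stay_loss_shift())
```

Because the rule reacts only to the sign of each outcome, it tends to dwell on low-loss-frequency decks regardless of their long-term expected value, which is the frequency-driven behavior the abstract describes.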

    Attribute-Based Encryption for Circuits of Unbounded Depth from Lattices: Garbled Circuits of Optimal Size, Laconic Functional Evaluation, and More

    Although we have known about fully homomorphic encryption (FHE) from circular security assumptions for over a decade [Gentry, STOC '09; Brakerski–Vaikuntanathan, FOCS '11], there is still a significant gap in understanding related homomorphic primitives supporting all *unrestricted* polynomial-size computations. One prominent example is attribute-based encryption (ABE). The state-of-the-art constructions, relying on the hardness of learning with errors (LWE) [Gorbunov–Vaikuntanathan–Wee, STOC '13; Boneh et al., Eurocrypt '14], only accommodate circuits up to a *predetermined* depth, akin to leveled homomorphic encryption. In addition, their components (master public key, secret keys, and ciphertexts) have sizes polynomial in the maximum circuit depth. Even in the simpler setting where a single key is published (or a single circuit is involved), the depth dependency persists, showing up in constructions of 1-key ABE and related primitives, including laconic function evaluation (LFE), 1-key functional encryption (FE), and reusable garbling schemes. So far, the only approach to eliminating depth dependency relies on indistinguishability obfuscation. An interesting question that has remained open for over a decade is whether the circular security assumptions enabling FHE can similarly benefit ABE. In this work, we introduce new lattice-based techniques to overcome the depth-dependency limitations: - Relying on a circular security assumption, we construct LFE, 1-key FE, 1-key ABE, and reusable garbling schemes capable of evaluating circuits of unbounded depth and size. - Based on the *evasive circular* LWE assumption, a stronger variant of the recently proposed *evasive* LWE assumption [Wee, Eurocrypt '22; Tsabary, Crypto '22], we construct a full-fledged ABE scheme for circuits of unbounded depth and size. Our LFE, 1-key FE, and reusable garbling schemes achieve optimal succinctness (up to polynomial factors in the security parameter). Their ciphertexts and input encodings have sizes linear in the input length, while function digests, secret keys, and garbled circuits have constant sizes independent of circuit parameters (for Boolean outputs). In fact, this gives the first constant-size garbled circuits without relying on indistinguishability obfuscation. Our ABE schemes offer short components, with master public key and ciphertext sizes linear in the attribute length and constant-size secret keys.

    A General Framework for Lattice-Based ABE Using Evasive Inner-Product Functional Encryption

    We present a general framework for constructing attribute-based encryption (ABE) schemes for arbitrary function classes based on lattices from two ingredients: (i) a noisy linear secret sharing scheme for the class, and (ii) a new type of inner-product functional encryption (IPFE) scheme, termed *evasive* IPFE, which we introduce in this work. We propose lattice-based evasive IPFE schemes and establish their security under simple conditions based on variants of the evasive learning with errors (LWE) assumption recently proposed by Wee [EUROCRYPT '22] and Tsabary [CRYPTO '22]. Our general framework is modular and conceptually simple, reducing the task of constructing ABE to that of constructing noisy linear secret sharing schemes, a more lightweight primitive. The versatility of our framework is demonstrated by three new ABE schemes based on variants of the evasive LWE assumption. - We obtain two ciphertext-policy ABE schemes for all polynomial-size circuits with a predetermined depth bound. One of these schemes has *succinct* ciphertexts and secret keys, of size polynomial in the depth bound rather than the circuit size. This eliminates the need for tensor LWE, another new assumption, from the previous state-of-the-art construction by Wee [EUROCRYPT '22]. - We develop ciphertext-policy and key-policy ABE schemes for deterministic finite automata (DFA) and logspace Turing machines (L). They are the first lattice-based public-key ABE schemes supporting uniform models of computation. Previous lattice-based schemes for uniform computation were limited to the secret-key setting or offered only weaker security against bounded collusion. Lastly, the new primitive of evasive IPFE serves as the lattice-based counterpart of pairing-based IPFE, enabling the application of techniques developed in pairing-based ABE constructions to lattice-based constructions. We believe it is of independent interest and may find other applications.

    A Virtual Environment System for the Comparison of Dome and HMD Systems

    For effective astronaut training applications, choosing the right display devices to present images is crucial. To assess which devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system for the comparison of dome and head-mounted display (HMD) systems. In particular, we address interaction techniques and playback environments.

    Effectiveness of influenza vaccination in patients with end-stage renal disease receiving hemodialysis: a population-based study.

    Background: Little is known about the effectiveness of influenza vaccination in end-stage renal disease (ESRD) patients. This study compared the incidence of hospitalization, morbidity, and mortality in ESRD patients undergoing hemodialysis (HD) between cohorts with and without influenza vaccination. Methods: We used insurance claims data from 1998 to 2009 in Taiwan to determine the incidence of these events within one year after influenza vaccination in the vaccine (N = 831) and non-vaccine (N = 3187) cohorts. The vaccine-to-non-vaccine incidence rate ratio and hazard ratio (HR) of morbidities and mortality were measured. Results: The age-specific analysis showed that the elderly in the vaccine cohort had a lower hospitalization rate (100.8 vs. 133.9 per 100 person-years), contributing to an overall HR of 0.81 (95% confidence interval (CI) 0.72-0.90). The vaccine cohort also had an adjusted HR of 0.85 (95% CI 0.75-0.96) for heart disease. The corresponding incidence of pneumonia and influenza was 22.4 versus 17.2 per 100 person-years, with an adjusted HR of 0.80 (95% CI 0.64-1.02). The vaccine cohort had lower risks than the non-vaccine cohort for intensive care unit (ICU) admission (adjusted HR 0.20, 95% CI 0.12-0.33) and mortality (adjusted HR 0.50, 95% CI 0.41-0.60). A time-dependent Cox model accounting for vaccination across multiple years yielded an overall adjusted HR for mortality of 0.30 (95% CI 0.26-0.35). Conclusions: ESRD patients on HD who receive influenza vaccination may have reduced risks of pneumonia/influenza and other morbidities, ICU stay, hospitalization, and death, particularly among the elderly.
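As a quick check on the reported elderly hospitalization rates, the crude incidence rate ratio implied by 100.8 vs. 133.9 events per 100 person-years is about 0.75. This crude ratio differs from the abstract's HR of 0.81 because the latter is covariate-adjusted; the snippet below only reproduces the arithmetic on the two reported rates:

```python
# Crude incidence rate ratio from the reported elderly hospitalization
# rates (per 100 person-years). The abstract's HR of 0.81 is adjusted
# for covariates, so it need not equal this crude ratio.
rate_vaccinated = 100.8
rate_unvaccinated = 133.9

irr = rate_vaccinated / rate_unvaccinated
print(f"crude incidence rate ratio: {irr:.2f}")  # ≈ 0.75
```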