
    Human brain evolution and the "Neuroevolutionary Time-depth Principle": Implications for the Reclassification of fear-circuitry-related traits in DSM-V and for studying resilience to warzone-related posttraumatic stress disorder.

    The DSM-III, DSM-IV, DSM-IV-TR and ICD-10 have judiciously minimized discussion of etiologies to distance clinical psychiatry from Freudian psychoanalysis. With this goal mostly achieved, discussion of etiological factors should be reintroduced into the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V). A research agenda for the DSM-V advocated the "development of a pathophysiologically based classification system". The author critically reviews the neuroevolutionary literature on stress-induced and fear-circuitry disorders and related amygdala-driven, species-atypical fear behaviors of clinical severity in adult humans. Over 30 empirically testable/falsifiable predictions are presented. It is noted that in DSM-IV-TR and ICD-10, the classification of stress and fear-circuitry disorders is neither mode-of-acquisition-based nor brain-evolution-based. For example, snake phobia (innate) and dog phobia (overconsolidational) are clustered together. Similarly, research on blood-injection-injury-type specific phobia clusters together two fears that differ in their innateness: 1) an arguably ontogenetic, memory-trace-overconsolidation-based fear (hospital phobia) and 2) a hardwired (innate) fear of the sight of one's blood or of a sharp object penetrating one's skin. Genetic architecture-charting of fear-circuitry-related traits has been challenging. Various non-phenotype-based architectures can serve as targets for research. In this article, the author proposes one such alternative genetic architecture. This article was inspired by the following: A) Nesse's "Smoke-Detector Principle", B) the increasing suspicion that the "smooth" rather than "lumpy" distribution of complex psychiatric phenotypes (including fear-circuitry disorders) may in some cases be accounted for by oligogenic (and not necessarily polygenic) transmission, and C) insights from the initial sequence of the chimpanzee genome and its comparison with the human genome, published by the Chimpanzee Sequencing and Analysis Consortium in late 2005. Neuroevolutionary insights relevant to fear-circuitry symptoms that primarily emerge overconsolidationally (especially combat-related posttraumatic stress disorder, PTSD) are presented. Also introduced is a human-evolution-based principle for clustering innate fear traits. The "Neuroevolutionary Time-depth Principle" of innate fears proposed in this article may be useful in the development of a neuroevolution-based taxonomic re-clustering of stress-triggered and fear-circuitry disorders in DSM-V. Four broad clusters of evolved fear circuits are proposed based on their time-depths: 1) Mesozoic (mammalian-wide) circuits hardwired by wild-type alleles driven to fixation by Mesozoic selective sweeps; 2) Cenozoic (simian-wide) circuits relevant to many specific phobias; 3) Middle and Upper Paleolithic (Homo sapiens-specific) circuits (arguably resulting mostly from mate-choice-driven stabilizing selection); 4) Neolithic circuits (arguably mostly related to stabilizing selection driven by gene-culture co-evolution). More importantly, the author presents evolutionary perspectives on warzone-related PTSD, Combat Stress Reaction, combat-related stress, operational stress, and other deployment-stress-induced symptoms. The Neuroevolutionary Time-depth Principle presented in this article may help explain the dissimilar stress-resilience levels that follow different types of acute threat to the survival of oneself or one's progeny (a.k.a. DSM-III and DSM-V PTSD Criterion-A events).
PTSD rates following exposure to lethal inter-group violence (combat, warzone exposure, or intentionally caused disasters such as terrorism) are usually 5-10 times higher than rates following large-scale natural disasters such as forest fires, floods, hurricanes, volcanic eruptions, and earthquakes. The author predicts that both intentionally caused large-scale bioevent disasters and natural bioevents such as SARS and avian flu pandemics will be an exception, and are likely to be followed by PTSD rates approaching those that follow warzone exposure. During bioevents, amygdala-driven and locus-coeruleus-driven epidemic pseudosomatic symptoms may be an order of magnitude more common than infection-caused, cytokine-driven symptoms. Implications for the Red Cross and FEMA are discussed. It is also argued that hospital phobia, dog phobia, bird phobia, and bat phobia require re-taxonomization in DSM-V in a new "overconsolidational disorders" category anchored around PTSD. The overconsolidational spectrum category may be conceptualized as straddling the fear-circuitry spectrum disorders and affective spectrum disorders categories, and may be a category for which Pitman's secondary-prevention propranolol regimen may be specifically indicated as a "morning after pill" intervention. Predictions are presented regarding obsessive-compulsive disorder (OCD) (e.g., female-pattern hoarding vs. male-pattern hoarding) and "culture-bound" acute anxiety symptoms (taijin-kyofusho, koro, shuk yang, shook yong, suo yang, rok-joo, jinjinia-bemar, karoshi, gwarosa, Voodoo death). Also discussed are insights relevant to pseudoneurological symptoms and to the forthcoming Dissociative-Conversive disorders category in DSM-V, including what the author terms fright-triggered acute pseudo-localized symptoms (i.e., pseudoparalysis, pseudocerebellar imbalance, psychogenic blindness, pseudoseizures, and epidemic sociogenic illness). Speculations based on studies of the human abnormal-spindle-like, microcephaly-associated (ASPM) gene, the microcephaly primary autosomal recessive (MCPH) gene, and the forkhead box P2 (FOXP2) gene are made and incorporated into what is termed "The pre-FOXP2 Hypothesis of Blood-Injection-Injury Phobia." Finally, the author argues for a non-reductionistic fusion of "distal (evolutionary) neurobiology" with clinical "proximal neurobiology," utilizing neurological heuristics. It is noted that the value of re-clustering fear traits based on behavioral ethology, human-phylogenomics-derived endophenotypes, and ontogenomics (gene-environment interactions) can be confirmed or disconfirmed using epidemiological or twin studies and psychiatric genomics.

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.

    Custom Integrated Circuits

    Contains reports on ten research projects. Sponsors: Analog Devices, Inc.; IBM Corporation; National Science Foundation/Defense Advanced Research Projects Agency Grant MIP 88-14612; Analog Devices Career Development Assistant Professorship; U.S. Navy - Office of Naval Research Contract N0014-87-K-0825; AT&T; Digital Equipment Corporation; National Science Foundation Grant MIP 88-5876.

    Implementing Grover Oracles for Quantum Key Search on AES and LowMC

    Grover's search algorithm gives a quantum attack against block ciphers by searching for a key that matches a small number of plaintext-ciphertext pairs. This attack uses $O(\sqrt{N})$ calls to the cipher to search a key space of size $N$. Previous work in the specific case of AES derived the full gate cost by analyzing quantum circuits for the cipher, but focused on minimizing the number of qubits. In contrast, we study the cost of quantum key search attacks under a depth restriction and introduce techniques that reduce the oracle depth, even if this requires more qubits. As cases in point, we design quantum circuits for the block ciphers AES and LowMC. Our circuits give a lower overall attack cost in both the gate-count and depth-times-width cost models. In NIST's post-quantum cryptography standardization process, security categories are defined based on the concrete cost of quantum key search against AES. We present new, lower cost estimates for each category, so our work has immediate implications for the security assessment of post-quantum cryptography. As part of this work, we release Q# implementations of the full Grover oracle for AES-128, -192, -256 and for the three LowMC instantiations used in Picnic, including unit tests and code to reproduce our quantum resource estimates. To the best of our knowledge, these are the first two such full implementations and automatic resource estimations. Comment: 36 pages, 8 figures, 14 tables
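    As a rough, back-of-the-envelope illustration of the quantities involved (not taken from the paper), the Python sketch below estimates the Grover iteration count for an exhaustive key search, roughly (pi/4)*sqrt(N), and the blow-up in depth-times-width cost when a depth limit forces the search to be split across parallel instances. The oracle depth and depth limit used here are placeholder values, not the paper's circuit figures.

        import math

        def grover_iterations(keyspace_bits: int) -> int:
            # Approximate Grover iteration count, ~ (pi/4) * sqrt(N) for N = 2**keyspace_bits.
            return math.ceil((math.pi / 4) * math.sqrt(2 ** keyspace_bits))

        def parallel_cost(keyspace_bits: int, oracle_depth: int, max_depth: float):
            # Split the key space over S parallel Grover instances so that each instance's
            # depth (iterations * oracle_depth) fits under max_depth.
            # Returns (S, iterations per instance, total depth-times-width estimate).
            n = 2 ** keyspace_bits
            full_iters = grover_iterations(keyspace_bits)
            if full_iters * oracle_depth <= max_depth:
                return 1, full_iters, full_iters * oracle_depth
            # Each of S machines searches N/S keys, so its depth scales as sqrt(N/S):
            # halving the allowed depth requires four times as many machines.
            s = math.ceil((full_iters * oracle_depth / max_depth) ** 2)
            per_machine_iters = math.ceil((math.pi / 4) * math.sqrt(n / s))
            return s, per_machine_iters, s * per_machine_iters * oracle_depth

        # Hypothetical oracle depth and depth limit, for illustration only.
        print(parallel_cost(128, oracle_depth=2_000, max_depth=2 ** 40))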

    Cost modelling and concurrent engineering for testable design

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. As integrated circuits and printed circuit boards increase in complexity, testing becomes a major cost factor in the design and production of complex devices. Testability has to be considered during the design of complex electronic systems, and automatic test systems have to be used in order to facilitate the test. This fact is now widely accepted in industry. Both design for testability and the use of automatic test systems aim at reducing the cost of production testing or, sometimes, at making it possible at all. Many design-for-testability methods and test systems are available and can be configured into a production test strategy in order to achieve high quality of the final product. The designer has to select from the various options for creating a test strategy, maximising quality while minimising the total cost of the electronic system. This thesis presents a methodology for test strategy generation which is based on consideration of the economics during the life cycle of the electronic system. This methodology is a concurrent engineering approach which takes into account all effects of a test strategy on the electronic system during its life cycle by evaluating its related cost. This objective methodology is used in an original test strategy planning advisory system, which allows for test strategy planning for VLSI circuits as well as for digital electronic systems. The cost models which are used for evaluating the economics of test strategies are described in detail, and the test strategy planning system is presented. A methodology for making decisions based on estimated costing data is presented. Results of using the cost models and the test strategy planning system to evaluate the economics of test strategies for selected industrial designs are presented.
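    The thesis's actual cost models are not reproduced here, but the trade-off it describes, design-for-testability overhead versus test execution cost versus the cost of defective units escaping to the field, can be sketched as a simple life-cycle cost comparison. Every strategy name and figure below is illustrative, not taken from the thesis.

        from dataclasses import dataclass

        @dataclass
        class Strategy:
            name: str
            dft_cost: float           # one-off design-for-testability / tooling cost
            test_cost_per_unit: float
            fault_coverage: float     # fraction of defective units caught, 0..1

        def life_cycle_cost(s: Strategy, volume: int, defect_rate: float, escape_cost: float) -> float:
            # Total cost = DFT cost + production test cost + cost of escapes reaching the field.
            escapes = volume * defect_rate * (1.0 - s.fault_coverage)
            return s.dft_cost + volume * s.test_cost_per_unit + escapes * escape_cost

        candidates = [
            Strategy("functional test only", dft_cost=0.0, test_cost_per_unit=1.50, fault_coverage=0.80),
            Strategy("boundary scan + ATPG", dft_cost=50_000.0, test_cost_per_unit=0.40, fault_coverage=0.98),
        ]
        best = min(candidates, key=lambda s: life_cycle_cost(s, volume=100_000, defect_rate=0.02, escape_cost=200.0))
        print(best.name)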

    The Quantum PCP Conjecture

    The classical PCP theorem is arguably the most important achievement of classical complexity theory in the past quarter century. In recent years, researchers in quantum computational complexity have tried to identify approaches and develop tools that address the question: does a quantum version of the PCP theorem hold? The story of this study starts with classical complexity and takes unexpected turns, providing fascinating vistas on the foundations of quantum mechanics, the global nature of entanglement and its topological properties, quantum error correction, information theory, and much more; it raises questions that touch upon some of the most fundamental issues at the heart of our understanding of quantum mechanics. At this point, the jury is still out as to whether or not such a theorem holds. This survey aims to provide a snapshot of the status of this ongoing story, tailored to a general theory-of-CS audience. Comment: 45 pages, 4 figures, an enhanced version of the SIGACT guest column from Volume 44 Issue 2, June 201

    Streaming Property Testing of Visibly Pushdown Languages

    In the context of language recognition, we demonstrate the superiority of streaming property testers over streaming algorithms and property testers when the two are not combined. Initiated by Feigenbaum et al., a streaming property tester is a streaming algorithm recognizing a language under the property-testing approximation: it must distinguish inputs in the language from those that are $\varepsilon$-far from it, while using the smallest possible memory (rather than limiting its number of input queries). Our main result is a streaming $\varepsilon$-property tester for visibly pushdown languages (VPL) with one-sided error using memory space $\mathrm{poly}((\log n)/\varepsilon)$. This construction relies on a (non-streaming) property tester for weighted regular languages based on a previous tester by Alon et al. We provide a simple application of this tester for the streaming testing of special cases of VPL instances that are already hard for both streaming algorithms and property testers. Our main algorithm combines an original simulation of visibly pushdown automata, using a stack of small height whose items may nonetheless be of linear size, with a second step in which those items are replaced by small sketches. Those sketches rely on a notion of suffix sampling that we introduce; this sampling is the key idea connecting our streaming tester to property testers. Comment: 23 pages. Major modifications in the presentation
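    As a toy illustration of the starting point (not the paper's tester), the sketch below runs the exact stack simulation of a visibly pushdown automaton on a streamed input: the alphabet is partitioned into call, return, and internal symbols, pushes happen only on calls and pops only on returns, so the stack height equals the current nesting depth. The paper's contribution is, roughly, to replace the potentially linear-size stack items with small sketches built by suffix sampling; none of that is shown here.

        # Hypothetical partition of the alphabet; internal symbols only drive the
        # finite-state part, which is omitted in this toy recognizer.
        CALLS, RETURNS = {"("}, {")"}

        def accepts_well_nested(stream) -> bool:
            stack = []
            for symbol in stream:
                if symbol in CALLS:
                    stack.append(symbol)      # stack height = current nesting depth
                elif symbol in RETURNS:
                    if not stack:
                        return False          # unmatched return symbol
                    stack.pop()
            return not stack                  # accept iff every call was matched

        print(accepts_well_nested("(a(b)c)"))  # True
        print(accepts_well_nested("(()"))      # False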

    On the Cryptographic Hardness of Local Search

    We show new hardness results for the class of Polynomial Local Search problems (PLS):
    - Hardness of PLS based on a falsifiable assumption on bilinear groups introduced by Kalai, Paneth, and Yang (STOC 2019), together with the Exponential Time Hypothesis for randomized algorithms. Previous standard-model constructions relied on non-falsifiable and non-standard assumptions.
    - Hardness of PLS relative to random oracles. This construction is essentially different from previous ones and, in particular, is unconditionally secure. It also demonstrates the hardness of parallelizing local search.
    The core observation behind the results is that the unique-proofs property of incrementally verifiable computation, previously used to demonstrate hardness in PLS, can be traded for a simple incremental completeness property.
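    For readers less familiar with PLS, the class is often presented through its canonical complete problem: given a successor function and a value function on n-bit strings, find a point whose successor does not improve the value. The sketch below is a generic illustration of that local-search format, not the paper's construction; the naive "follow improving successors" solver it uses can take exponentially many steps in the worst case, and hardness of PLS means that, under the stated assumptions, no polynomial-time algorithm solves the problem in general.

        from typing import Callable

        def local_opt(successor: Callable[[int], int], value: Callable[[int], int], start: int) -> int:
            # Find a local optimum: a point x with value(successor(x)) <= value(x).
            x = start
            while value(successor(x)) > value(x):   # keep moving while the successor improves
                x = successor(x)
            return x                                 # a local optimum by construction

        # Tiny toy instance over 8-bit values (illustrative only): walk in steps of 3
        # toward the single peak at 100.
        succ = lambda x: (x + 3) % 256
        val = lambda x: -abs(x - 100)
        print(local_opt(succ, val, start=7))         # prints 100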