
    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. During the last decades, the question has therefore been raised of whether we need to consider quantum effects to explain the imagined cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness and cognition. The purpose is to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, like ion ports, synapses, sensors, and enzymes. This might influence the functionality of some nodes and perhaps even the overall intelligence of the brain network, but hardly give it any dramatically enhanced functionality. So, the conclusion is that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to efficiently solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate. Comment: 38 pages
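
    To make the closing claim concrete, here is a purely classical, hedged sketch (added for illustration, not taken from the paper): NP-hard problems such as Minimum Vertex Cover admit fast approximation even though exact solution is believed to be intractable in the worst case. The function below is the standard greedy 2-approximation; the example graph and names are arbitrary.

        def approx_vertex_cover(edges):
            """Return a vertex cover at most twice the size of an optimal one."""
            cover = set()
            for u, v in edges:
                if u not in cover and v not in cover:
                    cover.update((u, v))   # take both endpoints of an uncovered edge
            return cover

        if __name__ == "__main__":
            graph = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
            print(approx_vertex_cover(graph))   # {0, 1, 2, 3}; the optimum {1, 3} has size 2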

    Stochastic thermodynamics of computation

    One of the major resource requirements of computers, ranging from biological cells to human brains to high-performance (engineered) computers, is the energy used to run them. These costs of performing a computation have long been a focus of research in physics, going back to the early work of Landauer. One of the most prominent aspects of computers is that they are inherently nonequilibrium systems. However, the early research was done when nonequilibrium statistical physics was in its infancy, which meant the work was formulated in terms of equilibrium statistical physics. Since then there have been major breakthroughs in nonequilibrium statistical physics, which are allowing us to investigate the myriad aspects of the relationship between statistical physics and computation, extending well beyond the issue of how much work is required to erase a bit. In this paper I review some of this recent work on the 'stochastic thermodynamics of computation'. After reviewing the salient parts of information theory, computer science theory, and stochastic thermodynamics, I summarize what has been learned about the entropic costs of performing a broad range of computations, extending from bit erasure to loop-free circuits to logically reversible circuits to information ratchets to Turing machines. These results reveal new, challenging engineering problems for how to design computers to have minimal thermodynamic costs. They also allow us to start to combine computer science theory and stochastic thermodynamics at a foundational level, thereby expanding both. Comment: 111 pages, no figures. arXiv admin note: text overlap with arXiv:1901.0038
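
    As a hedged numerical companion to the entropic costs discussed above (an illustration, not code from the paper): the Landauer bound states that erasing one bit requires work of at least k_B * T * ln 2; the room-temperature value below is an assumed operating point.

        import math

        k_B = 1.380649e-23      # Boltzmann constant, J/K
        T = 300.0               # assumed room temperature, K

        landauer_bound_J = k_B * T * math.log(2)
        print(f"Minimum work to erase one bit at {T:.0f} K: {landauer_bound_J:.3e} J")
        # ~2.87e-21 J, orders of magnitude below the switching energy of present-day CMOS logic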

    Perceptual lossless medical image coding

    A novel perceptually lossless coder is presented for the compression of medical images. Built on the JPEG 2000 coding framework, the heart of the proposed coder is a visual pruning function, embedded with an advanced human vision model, to identify and remove visually insignificant/irrelevant information. The proposed coder offers the advantages of simplicity and modularity with bit-stream compliance. Current results have shown superior compression ratio gains over those of its information-lossless counterparts, without any visible distortion. In addition, a case study involving 31 medical experts has shown that no statistically significant perceivable difference exists between the original images and the images compressed by the proposed coder.
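
    A minimal sketch of threshold-based coefficient pruning (a generic stand-in for the paper's vision-model-driven pruning; the fixed visibility threshold below is a placeholder assumption, not the authors' human vision model):

        import numpy as np

        def prune_subband(coeffs, visibility_threshold):
            """Zero out wavelet-subband coefficients deemed visually insignificant."""
            pruned = coeffs.copy()
            pruned[np.abs(pruned) < visibility_threshold] = 0.0
            return pruned

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            subband = rng.normal(scale=4.0, size=(8, 8))    # toy wavelet subband
            kept = prune_subband(subband, visibility_threshold=2.0)
            print("coefficients zeroed:", int(np.sum(kept == 0)))

    In the actual coder the threshold would vary per subband and per region according to the vision model; a single scalar is used here only to keep the sketch self-contained.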

    Radiation Hardness Assurance: Evolving for NewSpace

    During the past decade, numerous small satellites have been launched into space, with dramatically expanded dependence on advanced commercial-off-the-shelf (COTS) technologies and systems required for mission success. While the radiation effects vulnerabilities of small satellites are the same as those of their larger, traditional relatives, revised approaches are needed for risk management because of differences in technical requirements and programmatic resources. While moving to COTS components and systems may reduce direct costs and procurement lead times, it undermines many cost-reduction strategies used for conventional radiation hardness assurance (RHA). Limited resources are accompanied by a lack of radiation testing and analysis, which can pose significant risks or, worse, be neglected altogether. Small satellites have benefited from short mission durations in low Earth orbits with respect to their radiation response, but as mission objectives grow and become reliant on advanced technologies operating for longer and in harsher environments, requirements need to reflect the changing scope without hindering the developers who provide new capabilities.

    Thermodynamic Computing

    The hardware and software foundations laid in the first half of the 20th Century enabled the computing technologies that have transformed the world, but these foundations are now under siege. The current computing paradigm, which underpins much of the standard of living that we now enjoy, faces fundamental limitations that are evident from several perspectives. In terms of hardware, devices have become so small that we are struggling to eliminate the effects of thermodynamic fluctuations, which are unavoidable at the nanometer scale. In terms of software, our ability to imagine and program effective computational abstractions and implementations is clearly challenged in complex domains. In terms of systems, currently five percent of the power generated in the US is used to run computing systems; this astonishing figure is neither ecologically sustainable nor economically scalable. Economically, the cost of building next-generation semiconductor fabrication plants has soared past $10 billion. All of these difficulties (device scaling, software complexity, adaptability, energy consumption, and fabrication economics) indicate that the current computing paradigm has matured and that continued improvements along this path will be limited. If technological progress is to continue and the corresponding social and economic benefits are to continue to accrue, computing must become much more capable, energy efficient, and affordable. We propose that progress in computing can continue under a united, physically grounded computational paradigm centered on thermodynamics. Herein we propose a research agenda to extend these thermodynamic foundations into complex, non-equilibrium, self-organizing systems and apply them holistically to future computing systems that will harness nature's innate computational capacity. We call this type of computing "Thermodynamic Computing" or TC. Comment: A Computing Community Consortium (CCC) workshop report, 36 pages
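
    A back-of-the-envelope illustration of why thermodynamic fluctuations dominate at the nanometer scale (the temperature and the 40 k_B*T retention rule of thumb are assumptions added here, not figures from the report):

        import math

        k_B = 1.380649e-23       # Boltzmann constant, J/K
        T = 300.0                # assumed operating temperature, K
        kT = k_B * T

        barrier_multiple = 40.0  # rule-of-thumb barrier for reliable bit retention (assumption)
        barrier_J = barrier_multiple * kT

        print(f"k_B*T at {T:.0f} K: {kT:.3e} J")
        print(f"{barrier_multiple:.0f} k_B*T retention barrier: {barrier_J:.3e} J")
        # Spontaneous flips occur at a rate ~ f0 * exp(-barrier / kT); exp(-40) ~ 4e-18,
        # which is why much smaller barriers leave stored state at the mercy of fluctuations.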

    Detection of amblyopia utilizing generated retinal reflexes

    Investigation confirmed that generated retinal reflex (GRR) images can be consistently obtained and that these images contain the information required to detect the optical inequality of one eye compared to the fellow eye. Digital analyses, electro-optical analyses, and trained observers were used to evaluate the GRR images. Two- and three-dimensional plots were made from the results of the digital analyses. These plotted data greatly enhanced the GRR image content, and it was possible for untrained observers to correctly identify normal vs. abnormal ocular status by viewing the plots. Based upon the criterion of detecting equality or inequality of the ocular status of a person's eyes, the trained observer correctly identified the ocular status of 90% of the 232 persons who participated in this program.

    Physical Foundations of Landauer's Principle

    We review the physical foundations of Landauer's Principle, which relates the loss of information from a computational process to an increase in thermodynamic entropy. Despite the long history of the Principle, its fundamental rationale and proper interpretation remain frequently misunderstood. Contrary to some misinterpretations of the Principle, the mere transfer of entropy between computational and non-computational subsystems can occur in a thermodynamically reversible way without increasing total entropy. However, Landauer's Principle is not about general entropy transfers; rather, it more specifically concerns the ejection of (all or part of) some correlated information from a controlled, digital form (e.g., a computed bit) to an uncontrolled, non-computational form, i.e., as part of a thermal environment. Any uncontrolled thermal system will, by definition, continually re-randomize the physical information in its thermal state, from our perspective as observers who cannot predict the exact dynamical evolution of the microstates of such environments. Thus, any correlations involving information that is ejected into and subsequently thermalized by the environment will be lost from our perspective, resulting directly in an irreversible increase in total entropy. Avoiding the ejection and thermalization of correlated computational information motivates the reversible computing paradigm, although the requirements for computations to be thermodynamically reversible are less restrictive than frequently described, particularly in the case of stochastic computational operations. There remain interesting, not yet fully explored possibilities for designing computational processes that utilize stochastic, many-to-one computational operations while nevertheless avoiding any net increase in entropy. Comment: 42 pages, 15 figures, extended postprint of a paper published in the 10th Conf. on Reversible Computation (RC18), Leicester, UK, Sep. 2018
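
    The bookkeeping behind this argument can be summarized in two lines (a sketch of the standard information-theoretic reasoning; the notation, with X the computational subsystem, E the environment, and I(X:E) their mutual information, is introduced here rather than taken from the abstract):

        % LaTeX sketch; entropies measured in nats.
        \begin{align}
          S_{\mathrm{tot}} &= S(X) + S(E) - I(X\!:\!E), \\
          \Delta S_{\mathrm{tot}} &\geq I(X\!:\!E) \;\geq\; 0
            \quad \text{once the ejected, correlated information is thermalized and } I(X\!:\!E) \to 0 .
        \end{align}

    For a fully correlated ejected bit, I(X:E) = ln 2 (i.e., k_B ln 2 in conventional units), recovering the familiar Landauer cost; a controlled, reversible transfer instead preserves I(X:E) and leaves total entropy unchanged.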

    A walk in the statistical mechanical formulation of neural networks

    Neural networks are nowadays both powerful operational tools (e.g., for pattern recognition, data mining, error correction codes) and complex theoretical models at the focus of scientific investigation. On the research side, neural networks are handled and studied by psychologists, neurobiologists, engineers, mathematicians and theoretical physicists. In particular, in theoretical physics, the key instrument for the quantitative analysis of neural networks is statistical mechanics. From this perspective, here we first review attractor networks: starting from ferromagnets and spin-glass models, we discuss the underlying philosophy and we retrace the path paved by Hopfield and Amit-Gutfreund-Sompolinsky. Going one step further, we highlight the structural equivalence between Hopfield networks (modeling retrieval) and Boltzmann machines (modeling learning), hence realizing a deep bridge linking two inseparable aspects of biological and robotic spontaneous cognition. As a sideline of this walk, we derive two ways, alternative to the original Hebb proposal, of recovering the Hebbian paradigm, stemming from ferromagnets and from spin glasses, respectively. Further, as these notes are intended for an engineering audience, we also highlight the mappings between ferromagnets and operational amplifiers and between antiferromagnets and flip-flops (since neural networks built from op-amps and flip-flops are particular spin glasses, and spin glasses are in turn combinations of ferromagnets and antiferromagnets), hoping that such a bridge serves as a concrete prescription for capturing the beauty of robotics from the statistical mechanical perspective. Comment: Contribution to the proceedings of the NCTA 2014 conference. 12 pages, 7 figures
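
    Since the review centers on attractor networks, a minimal Hopfield sketch with Hebbian storage may help fix ideas (a generic textbook construction, not code from these notes; the network size, number of patterns, and update schedule are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(1)
        N, P = 64, 3                                  # neurons, stored patterns
        patterns = rng.choice([-1, 1], size=(P, N))

        # Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
        J = patterns.T @ patterns / N
        np.fill_diagonal(J, 0.0)

        def retrieve(state, sweeps=10):
            """Asynchronous zero-temperature dynamics: s_i <- sign(sum_j J_ij s_j)."""
            s = state.copy()
            for _ in range(sweeps):
                for i in rng.permutation(N):
                    s[i] = 1 if J[i] @ s >= 0 else -1
            return s

        # Corrupt a stored pattern and check that the dynamics retrieves it.
        noisy = patterns[0] * np.where(rng.random(N) < 0.15, -1, 1)
        print("overlap after retrieval:", retrieve(noisy) @ patterns[0] / N)   # close to 1.0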

    A 3D Framework for Characterizing Microstructure Evolution of Li-Ion Batteries

    Lithium-ion batteries are commonly found in many modern consumer devices, ranging from portable computers and mobile phones to hybrid- and fully-electric vehicles. While improving efficiency and increasing reliability are of critical importance for increasing market adoption of the technology, research on these topics is, to date, largely restricted to empirical observations and computational simulations. In the present study, it is proposed to use the modern technique of X-ray microscopy to characterize a sample of commercial 18650 cylindrical Li-ion batteries in both their pristine and aged states. By coupling this approach with 3D and 4D data analysis techniques, the present study aimed to create a research framework for characterizing the microstructure evolution leading to capacity fade in a commercial battery. The results demonstrated the unique capability of the microscopy technique to observe the evolution of these batteries under aging conditions, and a workflow was successfully developed for future research studies.
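
    One example of the kind of 3D quantitative metric such a workflow can track, sketched with synthetic data (a generic numpy illustration under stated assumptions, not the study's actual pipeline; the voxel labels 0 = pore, 1 = solid are a convention chosen here):

        import numpy as np

        def porosity(segmented_volume):
            """Fraction of pore voxels (label 0) in a segmented 3D tomographic volume."""
            return float(np.mean(segmented_volume == 0))

        if __name__ == "__main__":
            rng = np.random.default_rng(42)
            pristine = (rng.random((64, 64, 64)) > 0.35).astype(np.uint8)   # ~35% pore space
            aged = (rng.random((64, 64, 64)) > 0.28).astype(np.uint8)       # ~28% pore space
            print(f"pristine porosity: {porosity(pristine):.3f}")
            print(f"aged porosity:     {porosity(aged):.3f}")

    Comparing such metrics between registered pristine and aged scans of the same cell is one way the microstructure evolution described above could be quantified.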