
    AI-Completeness: Using Deep Learning to Eliminate the Human Factor

    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine, while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. NP-complete problems are those to which every problem in NP can be reduced, so an efficient solution to one of them would yield an efficient solution to all of them. Their artificial-intelligence analogue is the class of AI-complete problems, for which a complete mathematical formalization still does not exist. In this chapter we focus on analysing computational classes to better understand possible formalizations of AI-complete problems, and to see whether a universal algorithm, such as a Turing test, could exist for all AI-complete problems. To observe how modern computer science deals with computational complexity issues, we present several different deep-learning strategies involving optimization methods, showing that the inability to exactly solve a problem from a higher-order computational class does not mean there is no satisfactory solution using state-of-the-art machine-learning techniques. Such methods are compared to philosophical issues and psychological research regarding human abilities to solve analogous NP-complete problems, to fortify the claim that we do not need an exact and correct way of solving AI-complete problems to nevertheless possibly achieve the notion of strong AI.
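The solve-versus-verify asymmetry described in this abstract can be illustrated with a small sketch (a hypothetical toy example, not taken from the chapter): checking a proposed certificate for the NP-complete subset-sum problem takes polynomial time, while the naive solver enumerates exponentially many subsets.

```python
from itertools import combinations

def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time check: is the certificate a sub-multiset of
    the input that sums exactly to the target?"""
    remaining = list(numbers)
    for x in certificate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(certificate) == target

def solve_subset_sum(numbers, target):
    """Naive exponential-time solver: try every subset."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

cert = solve_subset_sum([3, 34, 4, 12, 5, 2], 9)
print(cert, verify_subset_sum([3, 34, 4, 12, 5, 2], 9, cert))  # [4, 5] True
```

Verification stays cheap even as the instance grows, which is exactly the property that makes NP membership (and, by analogy, the proposed AI-completeness formalizations) hinge on checkability rather than solvability.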

    Human Computation and Convergence

    Humans are the most effective integrators and producers of information, directly and through the use of information-processing inventions. As these inventions become increasingly sophisticated, the substantive role of humans in processing information will tend toward capabilities that derive from our most complex cognitive processes, e.g., abstraction, creativity, and applied world knowledge. Through the advancement of human computation - methods that leverage the respective strengths of humans and machines in distributed information-processing systems - formerly discrete processes will combine synergistically into increasingly integrated and complex information-processing systems. These new, collective systems will exhibit an unprecedented degree of predictive accuracy in modeling physical and techno-social processes, and may ultimately coalesce into a single unified predictive organism, with the capacity to address society's most wicked problems and achieve planetary homeostasis. Comment: Pre-publication draft of chapter. 24 pages, 3 figures; added references to pages 1 and 3, and corrected typos.

    The more the merrier? Increasing group size may be detrimental to decision-making performance in nominal groups

    Demonstrability - the extent to which group members can recognize a correct solution to a problem - has a significant effect on group performance. However, the interplay between group size, demonstrability and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors - the difficulty of solving a problem and the difficulty of verifying the correctness of a solution - on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually decrease group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making.
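The NPC-versus-PSC verification gap at the heart of this abstract can be made concrete with a sketch using toy problems (illustrative only; these are not the paper's actual instances). A claimed vertex cover for an NPC instance is checked edge by edge in polynomial time, whereas a claimed answer to a quantified Boolean formula, a canonical PSPACE-complete problem, has no comparably short certificate: checking it essentially means re-running the exponential game-tree evaluation.

```python
def verify_vertex_cover(edges, cover):
    # Polynomial-time NP-style check: every edge must touch the cover.
    c = set(cover)
    return all(u in c or v in c for u, v in edges)

def eval_qbf(clauses, n, assignment=()):
    # PSPACE-style evaluation of "exists x1, forall x2, exists x3, ..."
    # over a CNF formula; literals are +/- 1-based variable indices.
    # There is no short certificate: the whole game tree is explored.
    if len(assignment) == n:
        return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
    branches = [eval_qbf(clauses, n, assignment + (v,)) for v in (True, False)]
    return any(branches) if len(assignment) % 2 == 0 else all(branches)

# Triangle graph: {1, 2} touches all three edges, so the claim is easy to check.
print(verify_vertex_cover([(1, 2), (2, 3), (1, 3)], [1, 2]))  # True
# exists x1, forall x2: (x1 or x2) -- choosing x1 = True wins for any x2.
print(eval_qbf([[1, 2]], 2))                                   # True
```

In the paper's terms, the first check is a high-demonstrability judgment a group member can make quickly, while the second offers no such shortcut, which is the mechanism behind the lower demonstrability of PSC problems.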

    Recycling Gene Carrier with High Efficiency and Low Toxicity Mediated by L-Cystine-Bridged Bis(β-cyclodextrin)s

    Constructing safe and effective gene delivery carriers is becoming highly desirable for gene therapy. Herein, a series of supramolecular crosslinking systems was prepared through host-guest binding of adamantyl-modified low-molecular-weight polyethyleneimine with L-cystine-bridged bis(β-cyclodextrin)s, and characterized by ¹H NMR titration, electron microscopy, zeta potential, dynamic light scattering, gel electrophoresis, flow cytometry and confocal fluorescence microscopy. The results showed that these nanometer-sized supramolecular crosslinking systems exhibited higher DNA transfection efficiencies and lower cytotoxicity than the commercial gold-standard DNA carrier (25 kDa bPEI) for both normal cells and cancer cells, giving a DNA transfection efficiency of up to 54% for 293T cells. Significantly, this type of supramolecular crosslinking system possesses a number of enzyme-responsive disulfide bonds, which can be cleaved by a reductive enzyme to promote DNA release but restored by an oxidative enzyme to make the carrier renewable. These results demonstrate that these supramolecular crosslinking systems can be used as promising gene carriers.