
    Predictive Analysis of Lottery Outcomes Using Deep Learning and Time Series Analysis

    Abstract: Lotteries have long been a source of fascination and intrigue, offering the tantalizing prospect of unexpected fortunes. In this research paper, we delve into the world of lottery predictions, employing cutting-edge AI techniques to unlock the secrets of lottery outcomes. Our dataset, obtained from Kaggle, comprises historical lottery draws, and our goal is to develop predictive models that can anticipate future winning numbers. This study explores the use of deep learning and time series analysis to achieve this elusive feat. Through rigorous experimentation and data-driven approaches, we seek to determine the viability of AI in the realm of lottery predictions. Our findings reveal both the promise and limitations of AI in this context, shedding light on the complexities of lottery data and the potential need for quantum computing as a last resort.
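
    A minimal sketch of the kind of sequence model this abstract describes, assuming each draw is encoded as a fixed-length vector of numbers and an LSTM predicts the next draw from a sliding window of past draws; the dataset shape, window size, and layer sizes below are illustrative assumptions, not the paper's actual configuration.

    # Illustrative sketch only: an LSTM time-series model over historical draws.
    # The study uses a Kaggle lottery dataset; random data stands in here.
    import numpy as np
    from tensorflow import keras

    NUM_BALLS = 6          # numbers per draw (assumed)
    WINDOW = 10            # past draws used as model input (assumed)

    def make_windows(draws, window=WINDOW):
        """Turn an (n_draws, NUM_BALLS) history into supervised (X, y) pairs."""
        X, y = [], []
        for i in range(len(draws) - window):
            X.append(draws[i:i + window])
            y.append(draws[i + window])
        return np.array(X, dtype="float32"), np.array(y, dtype="float32")

    history = np.random.randint(1, 60, size=(500, NUM_BALLS))   # placeholder history
    X, y = make_windows(history / 59.0)                         # scale to [0, 1]

    model = keras.Sequential([
        keras.layers.Input(shape=(WINDOW, NUM_BALLS)),
        keras.layers.LSTM(64),
        keras.layers.Dense(NUM_BALLS, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)

    predicted_draw = model.predict(X[-1:])[0] * 59.0            # rescale to number range
    print(np.rint(predicted_draw))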

    Artificial Stupidity

    Artificial intelligence is everywhere. And yet, the experts tell us, it is not yet actually anywhere. This is because we are yet to achieve artificial general intelligence, or artificially intelligent systems that are capable of thinking for themselves and adapting to their circumstances. Instead, all the AI hype—and it is constant—concerns narrower, weaker forms of artificial intelligence, which are confined to performing specific, narrow tasks. The promise of true artificial general intelligence thus remains elusive. Artificial stupidity reigns supreme. What is the best set of policies to achieve more general, stronger forms of artificial intelligence? Surprisingly, scholars have paid little attention to this question. Scholars have spent considerable time assessing a number of important legal questions relating to artificial intelligence, including privacy, bias, tort, and intellectual property issues. But little effort has been devoted to exploring what set of policies is best suited to helping artificial intelligence developers achieve greater levels of innovation. And examining such issues is not some niche exercise, because artificial intelligence has already affected or soon will affect every sector of society. Hence, the question goes to the heart of future technological innovation policy more broadly. This Article examines this question by exploring how well intellectual property rights promote innovation in artificial intelligence. I focus on intellectual property rights because they are often viewed as the most important piece of United States innovation policy. Overall, I argue that intellectual property rights, particularly patents, are ill-suited to promote more radical forms of artificial intelligence innovation. And even the intellectual property types that are a better fit for artificial intelligence innovators, such as trade secrecy, come with problems of their own. In fact, the poor fit of patents in particular may contribute to heavy industry consolidation in the AI field, and heavy consolidation in an industry is typically associated with lower than ideal levels of innovation. I conclude by arguing, however, that neither strengthening AI patent rights nor looking to other forms of law, such as antitrust, holds much promise in achieving more general forms of artificial intelligence. Instead, as with many earlier radical innovations, significant government backing, coupled with an engaged entrepreneurial sector, is at least one key to avoiding enduring artificial stupidity.

    Nat Chem Biol

    Broad-spectrum antiviral drugs targeting host processes could potentially treat a wide range of viruses while reducing the likelihood of emergent resistance. Despite great promise as therapeutics, such drugs remain largely elusive. Here we used parallel genome-wide high-coverage short hairpin RNA (shRNA) and clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 screens to identify the cellular target and mechanism of action of GSK983, a potent broad-spectrum antiviral with unexplained cytotoxicity. We found that GSK983 blocked cell proliferation and dengue virus replication by inhibiting the pyrimidine biosynthesis enzyme dihydroorotate dehydrogenase (DHODH). Guided by mechanistic insights from both genomic screens, we found that exogenous deoxycytidine markedly reduced GSK983 cytotoxicity but not antiviral activity, providing an attractive new approach to improve the therapeutic window of DHODH inhibitors against RNA viruses. Our results highlight the distinct advantages and limitations of each screening method for identifying drug targets, and demonstrate the utility of parallel knockdown and knockout screens for comprehensive probing of drug activity.

    Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning

    The recent surge of generative AI has been fueled by the generative power of diffusion probabilistic models and the scalable capabilities of large language models. Despite their potential, it remains elusive whether diffusion language models can solve general language tasks comparable to their autoregressive counterparts. This paper demonstrates that scaling diffusion models w.r.t. data, sizes, and tasks can effectively make them strong language learners. We build competent diffusion language models at scale by first acquiring knowledge from massive data via masked language modeling pretraining thanks to their intrinsic connections. We then reprogram pretrained masked language models into diffusion language models via diffusive adaptation, wherein task-specific finetuning and instruction finetuning are explored to unlock their versatility in solving general language tasks. Experiments show that scaling diffusion language models consistently improves performance across downstream language tasks. We further discover that instruction finetuning can elicit zero-shot and few-shot in-context learning abilities that help tackle many unseen tasks by following natural language instructions, and show promise in advanced and challenging abilities such as reasoning.
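
    The abstract itself contains no code; the following is a minimal sketch of the masked-denoising training step that this kind of diffusive adaptation builds on, assuming a model that maps token ids to per-position vocabulary logits. Function and argument names are placeholders, not the paper's implementation.

    # One illustrative training step: sample a corruption level, mask that
    # fraction of tokens, and train the model to recover the originals.
    import torch
    import torch.nn.functional as F

    def masked_denoising_step(model, tokens, mask_id, optimizer):
        """tokens: LongTensor of shape (batch, seq_len)."""
        t = torch.rand(tokens.size(0), 1, device=tokens.device)   # corruption level per sequence
        mask = torch.rand(tokens.shape, device=tokens.device) < t # mask roughly t of the positions
        corrupted = tokens.masked_fill(mask, mask_id)

        logits = model(corrupted)                                 # (batch, seq_len, vocab_size)
        loss = F.cross_entropy(logits[mask],                      # predictions at masked positions
                               tokens[mask])                      # original tokens to recover
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()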

    Globalization, Media Culture and Socio-Economic Security in Nigeria


    SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents

    Humans are social beings; we pursue social goals in our daily interactions, which is a crucial aspect of social intelligence. Yet, AI systems' abilities in this realm remain elusive. We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and evaluate their social intelligence. In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals. We simulate the role-play interaction between LLM-based agents and humans within this task space and evaluate their performance with a holistic evaluation framework called SOTOPIA-Eval. With SOTOPIA, we find significant differences between these models in terms of their social intelligence, and we identify a subset of SOTOPIA scenarios, SOTOPIA-hard, that is generally challenging for all models. We find that on this subset, GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills. These findings demonstrate SOTOPIA's promise as a general platform for research on evaluating and improving social intelligence in artificial agents.
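
    As a concrete illustration of the interaction pattern described above, here is a minimal turn-taking loop between two goal-conditioned agents with a toy judge; the class, method names, and scoring are placeholders for illustration, not the SOTOPIA or SOTOPIA-Eval API.

    # Illustrative role-play loop: two agents alternate turns, then a judge
    # scores goal completion. Real SOTOPIA agents and judges are LLM-backed.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        persona: str
        goal: str

        def act(self, transcript):
            # Placeholder policy; a real agent would prompt an LLM with its
            # persona, private goal, and the dialogue so far.
            return f"{self.name} works toward: {self.goal}"

    def run_episode(agent_a, agent_b, max_turns=4):
        transcript = []
        for turn in range(max_turns):
            speaker = agent_a if turn % 2 == 0 else agent_b
            transcript.append((speaker.name, speaker.act(transcript)))
        return transcript

    def judge_goal_completion(transcript, agent):
        # Toy heuristic; SOTOPIA-Eval instead scores several social
        # dimensions (including goal completion) with an LLM judge.
        return float(any(agent.goal in utterance for _, utterance in transcript))

    alice = Agent("Alice", "a buyer negotiating a price", "agree on a fair price")
    bob = Agent("Bob", "a seller", "maximize the sale price")
    print(judge_goal_completion(run_episode(alice, bob), alice))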

    Computational Complexity in Electronic Structure

    In quantum chemistry, the price paid by all known efficient model chemistries is either the truncation of the Hilbert space or uncontrolled approximations. Theoretical computer science suggests that these restrictions are not mere shortcomings of the algorithm designers and programmers but could stem from the inherent difficulty of simulating quantum systems. Extensions of computer science and information processing exploiting quantum mechanics have led to new ways of understanding the ultimate limitations of computational power. Interestingly, this perspective helps us understand widely used model chemistries in a new light. In this article, the fundamentals of computational complexity will be reviewed and motivated from the vantage point of chemistry. Then recent results from the computational complexity literature regarding common model chemistries, including Hartree-Fock and density functional theory, are discussed.

    Trying again to fail-first

    For constraint satisfaction problems (CSPs), Haralick and Elliott [1] introduced the Fail-First Principle and defined it in terms of minimizing branch depth. By devising a range of variable ordering heuristics, each in turn trying harder to fail first, Smith and Grant [2] showed that adherence to this strategy does not guarantee reduction in search effort. The present work builds on Smith and Grant. It benefits from the development of a new framework for characterizing heuristic performance that defines two policies, one concerned with enhancing the likelihood of correctly extending a partial solution, the other with minimizing the effort to prove insolubility. The Fail-First Principle can be restated as calling for adherence to the second, fail-first policy, while discounting the other, promise policy. Our work corrects some deficiencies in the work of Smith and Grant, and goes on to confirm their finding that the Fail-First Principle, as originally defined, is insufficient. We then show that adherence to the fail-first policy must be measured in terms of size of insoluble subtrees, not branch depth. We also show that for soluble problems, both policies must be considered in evaluating heuristic performance. Hence, even in its proper form the Fail-First Principle is insufficient. We also show that the “FF” series of heuristics devised by Smith and Grant is a powerful tool for evaluating heuristic performance, including the subtle relations between heuristic features and adherence to a policy.
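
    To make the fail-first idea concrete, here is a minimal backtracking solver with a fail-first style variable ordering (smallest remaining domain first), so that likely dead ends are met, and refuted, early. This shows only the basic idea under assumed data structures; the "FF" heuristic series studied in the paper is considerably more refined.

    # Backtracking CSP search with a simple fail-first variable ordering.
    def solve(domains, constraints, assignment=None):
        """domains: {var: set of values}; constraints: [(scope, predicate)]."""
        assignment = assignment or {}
        if len(assignment) == len(domains):
            return assignment

        # Fail-first ordering: choose the unassigned variable with the
        # smallest remaining domain.
        var = min((v for v in domains if v not in assignment),
                  key=lambda v: len(domains[v]))

        for value in domains[var]:
            assignment[var] = value
            ok = all(pred(assignment) for scope, pred in constraints
                     if all(x in assignment for x in scope))
            if ok:
                result = solve(domains, constraints, assignment)
                if result is not None:
                    return result
            del assignment[var]
        return None

    # Toy example: three variables with pairwise "not equal" constraints.
    doms = {"x": {1, 2}, "y": {1, 2, 3}, "z": {1}}
    cons = [(("x", "y"), lambda a: a["x"] != a["y"]),
            (("y", "z"), lambda a: a["y"] != a["z"]),
            (("x", "z"), lambda a: a["x"] != a["z"])]
    print(solve(doms, cons))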