    Parallelism in Constraint Programming

    Writing efficient parallel programs is the biggest challenge facing the software industry for the foreseeable future. We are currently in a time when parallel computers are the norm, not the exception. Soon, parallel processors will be standard even in cell phones. Without drastic changes in hardware development, all software must be parallelized to its fullest extent. Parallelism can increase performance and reduce power consumption at the same time: many programs will execute faster on a dual-core processor than on a single-core processor running at twice the speed, and halving the speed of a processor can reduce its power consumption by up to a factor of four. Hence, parallelism gives efficient programs more performance per unit of power. In order to make use of parallel hardware, we need to overcome the difficulties of parallel programming. For many programmers, it is easier to learn a handful of small domain-specific programming languages than to learn efficient parallel programming; the frameworks for these languages can then parallelize the program automatically, whereas automatically parallelizing traditional programs is usually much more difficult. In this thesis, we study and present parallelism in constraint programming (CP). We have developed the first constraint framework that automatically parallelizes both the consistency and the search parts of the solving process, which allows programmers to avoid the difficult issues of parallel programming. We also study distributed CP with independent agents and propose solutions to this problem. Our results show that automatic parallelism in CP can provide very good performance. Our parallel consistency scales very well for problems with many large constraints, and we also manage to combine parallel consistency and parallel search with a performance increase. The communication and load-balancing schemes we developed increase the scalability of parallel search. Our model for distributed CP is orders of magnitude faster than traditional approaches and, as far as we know, the first to solve standard benchmark scheduling problems.
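    As a concrete illustration of the search side of this parallelism, the sketch below splits a toy search tree across the threads of a Java ForkJoinPool, whose work stealing provides the load balancing. The eight-variable alldifferent model and all class names are illustrative assumptions, not the framework developed in the thesis.

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class ParallelSearchSketch {
        static final int N = 8;   // variables x0..x7
        static final int D = 8;   // shared domain {0..7}

        // Count assignments in which all variables take pairwise distinct values.
        static class Search extends RecursiveTask<Long> {
            final int[] assigned;   // values fixed on the path to this node

            Search(int[] assigned) { this.assigned = assigned; }

            @Override
            protected Long compute() {
                int depth = assigned.length;
                if (depth == N) return 1L;   // complete consistent assignment
                java.util.List<Search> branches = new java.util.ArrayList<>();
                for (int v = 0; v < D; v++) {
                    if (consistent(v)) {
                        int[] ext = java.util.Arrays.copyOf(assigned, depth + 1);
                        ext[depth] = v;
                        branches.add(new Search(ext));   // one subtree per value
                    }
                }
                // Fork all subtrees; idle threads steal them, balancing the load.
                long count = 0;
                for (Search b : branches) b.fork();
                for (Search b : branches) count += b.join();
                return count;
            }

            boolean consistent(int v) {
                for (int u : assigned) if (u == v) return false;   // alldifferent
                return true;
            }
        }

        public static void main(String[] args) {
            long solutions = new ForkJoinPool().invoke(new Search(new int[0]));
            System.out.println("solutions: " + solutions);   // 8! = 40320
        }
    }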

    Distributed constraint programming with agents

    Many combinatorial optimization problems lend themselves to being modeled as distributed constraint optimization problems (DisCOP). Problems such as job shop scheduling have an intuitive matching between agents and machines. In distributed constraint problems, agents control variables and are connected via constraints. We have equipped these agents with a full constraint solver, which makes it possible to use global constraints and advanced search schemes. By empowering the agents with their own solvers, we overcome the low performance that often haunts distributed constraint satisfaction problems (DisCSP). By using global constraints, we achieve far greater pruning than traditional DisCSP models and hence dramatically reduce the communication between agents. Our experiments show that both global constraints and advanced search schemes are necessary to optimize job shop schedules using DisCSP.
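    The sketch below illustrates, under assumed names and with a deliberately trivial "solver", the agent architecture described above: each agent owns its variables and prunes its own bounds, and agents interact only by exchanging bound-update messages over queues. A real implementation would give each agent a full CP solver with global constraints; here the single shared constraint is x < y.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class AgentSketch {
        record Bounds(int min, int max) {}

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Bounds> toX = new ArrayBlockingQueue<>(8);
            BlockingQueue<Bounds> toY = new ArrayBlockingQueue<>(8);

            // Agent X owns x in [0,9]; from x < y it can prune x <= max(y) - 1.
            Thread agentX = new Thread(() -> {
                int min = 0, max = 9;
                try {
                    toY.put(new Bounds(min, max));   // announce x's bounds
                    Bounds y = toX.take();           // receive y's bounds
                    max = Math.min(max, y.max() - 1);
                    System.out.println("x in [" + min + "," + max + "]");
                } catch (InterruptedException ignored) {}
            });

            // Agent Y owns y in [0,5]; from x < y it can prune y >= min(x) + 1.
            Thread agentY = new Thread(() -> {
                int min = 0, max = 5;
                try {
                    Bounds x = toY.take();
                    min = Math.max(min, x.min() + 1);
                    toX.put(new Bounds(min, max));
                    System.out.println("y in [" + min + "," + max + "]");
                } catch (InterruptedException ignored) {}
            });

            agentX.start(); agentY.start();
            agentX.join(); agentY.join();
        }
    }

    Running the sketch prunes x from [0,9] to [0,4] and y from [0,5] to [1,5], the bounds-consistent domains for x < y, after a single message in each direction.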

    Parallel consistency in constraint programming

    Program parallelization becomes increasingly important as new multi-core architectures provide ways to improve performance. One of the greatest challenges of this development lies in programming parallel applications. Using declarative languages, such as constraint programming, can make the transition to parallelism easier by hiding the parallelization details in a framework. Automatic parallelization in constraint programming has previously focused on data parallelism. In this paper, we look at task parallelism, specifically the case of parallel consistency. We have developed two models of parallel consistency: one that shares intermediate results and one that does not. Our experiments evaluate which model performs better. Our results show that parallelizing consistency can provide the programmer with robust scalability for regular problems with global constraints.
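    The sketch below illustrates the model that does not share intermediate results: in each round, every propagator is revised in parallel against the same snapshot of the domains, the pruned copies are intersected, and the loop repeats until a fixpoint is reached. The bitmask domains and the toy not-equal propagators are assumptions made for brevity.

    import java.util.Arrays;
    import java.util.List;
    import java.util.function.UnaryOperator;

    public class ParallelConsistencySketch {
        // Propagator for "x_i != c": remove value c from domain i in a fresh copy.
        static UnaryOperator<int[]> notEqual(int i, int c) {
            return d -> { int[] nd = d.clone(); nd[i] &= ~(1 << c); return nd; };
        }

        public static void main(String[] args) {
            int[] domains = {0b111, 0b111};   // x0, x1 in {0,1,2} as bitmasks
            List<UnaryOperator<int[]>> propagators =
                List.of(notEqual(0, 2), notEqual(1, 0), notEqual(1, 2));

            boolean changed = true;
            while (changed) {
                final int[] snapshot = domains;
                // Revise all propagators in parallel against the same snapshot.
                int[][] pruned = propagators.parallelStream()
                                            .map(p -> p.apply(snapshot))
                                            .toArray(int[][]::new);
                // Merge phase: intersect every pruned copy of the domains.
                int[] merged = snapshot.clone();
                for (int[] r : pruned)
                    for (int v = 0; v < merged.length; v++) merged[v] &= r[v];
                changed = !Arrays.equals(merged, snapshot);
                domains = merged;
            }
            System.out.println("x0 mask: " + Integer.toBinaryString(domains[0]));   // 11 = {0,1}
            System.out.println("x1 mask: " + Integer.toBinaryString(domains[1]));   // 10 = {1}
        }
    }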

    Combining parallel search and parallel consistency in constraint programming

    Program parallelization becomes increasingly important as new multi-core architectures provide ways to improve performance. One of the greatest challenges of this development lies in programming parallel applications. Declarative languages, such as constraint programming, can make the transition to parallelism easier by hiding the parallelization details in a framework. Automatic parallelization in constraint programming has mostly focused on parallel search. While search and consistency are intrinsically linked, the consistency part of the solving process is often the more time-consuming. We have previously looked at parallel consistency and found it quite promising. In this paper, we investigate how to combine parallel search with parallel consistency and evaluate which problems are suitable and which are not. Our results show that parallelizing the entire solving process in constraint programming is a major challenge, as parallel search and parallel consistency typically suit different types of problems.
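    The configuration sketch below shows one way such a combination can be organized: a fixed thread budget is divided between search workers, each exploring a disjoint subtree, and a private propagation pool per worker, so that parallel consistency at one node never blocks another worker's search. The half-and-half split and all names are illustrative assumptions; as the results above suggest, the right balance depends on whether a problem is search-heavy or propagation-heavy.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CombinedParallelismSketch {
        public static void main(String[] args) {
            int budget = Runtime.getRuntime().availableProcessors();
            // e.g. with 8 hardware threads: 4 search workers x 2 propagation threads
            int searchWorkers = Math.max(1, budget / 2);
            int propThreadsPerWorker = Math.max(1, budget / searchWorkers);

            ExecutorService search = Executors.newFixedThreadPool(searchWorkers);
            for (int w = 0; w < searchWorkers; w++) {
                final int worker = w;
                search.submit(() -> {
                    // Each worker owns its propagation pool for parallel consistency.
                    ExecutorService propagation =
                        Executors.newFixedThreadPool(propThreadsPerWorker);
                    System.out.println("worker " + worker + ": exploring a subtree with "
                        + propThreadsPerWorker + " propagation threads");
                    propagation.shutdown();
                });
            }
            search.shutdown();
        }
    }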

    Laboratory instructions as a cause of student dissonance

    Improving the quality of education is the goal of all pedagogical research. By using student surveys and course evaluations, problem areas can be identified in most courses offered by universities. In this paper, we perform a large-scale student survey in order to find the causes of, and remedies for, a widespread student dissonance in a mandatory course with over 100 students at Lund University. Our research shows that aiming for deeper learning, without providing time and a stimulating environment, can be worse than settling for expository learning. This problem has persisted for years despite attempts by the course administrators to solve it. We propose that major improvements can be achieved, both in learning and pass rates, primarily by improving the lab instructions but also by using more intellectually stimulating lab equipment.

    Defining the Critical Hurdles in Cancer Immunotherapy

    Scientific discoveries that provide strong evidence of antitumor effects in preclinical models often encounter significant delays before being tested in patients with cancer. While some of these delays have a scientific basis, others do not. We need to do better. Innovative strategies need to move into early-stage clinical trials as quickly as is safe, and if successful, these therapies should efficiently obtain regulatory approval and widespread clinical application. In late 2009 and 2010, the Society for Immunotherapy of Cancer (SITC) convened an "Immunotherapy Summit" with representatives from immunotherapy organizations representing Europe, Japan, China and North America to discuss collaborations to improve the development and delivery of cancer immunotherapy. One of the concepts raised by SITC and defined as critical by all parties was the need to identify the hurdles that impede effective translation of cancer immunotherapy. With consensus on these hurdles, international working groups could be developed to make recommendations vetted by the participating organizations. These recommendations could then be considered by regulatory bodies, governmental and private funding agencies, pharmaceutical companies and academic institutions to facilitate the changes necessary to accelerate clinical translation of novel immune-based cancer therapies. The critical hurdles identified by representatives of the collaborating organizations, now organized as the World Immunotherapy Council, are presented and discussed in this report. Some of the identified hurdles impede all investigators; others hinder investigators only in certain regions or institutions, or are more relevant to specific types of immunotherapy or first-in-humans studies. Each of these hurdles can significantly delay clinical translation of promising advances in immunotherapy yet, if overcome, has the potential to improve outcomes of patients with cancer.

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb⁻¹ of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.

    TRY plant trait database – enhanced coverage and open access

    Plant traits - the morphological, anatomical, physiological, biochemical and phenological characteristics of plants - determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait-based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. The best species coverage is achieved for categorical traits - almost complete coverage for ‘plant growth form’. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait–environment relationships; these traits have to be measured on individual plants in their respective environments. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We therefore conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.
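    As a small illustration of what such a coverage analysis involves, the sketch below computes, from (species, trait) observation records, the share of species with at least one value per trait, exposing the kind of data gaps discussed above. The inline records and trait names are invented for illustration and are not TRY data.

    import java.util.*;

    public class TraitCoverageSketch {
        public static void main(String[] args) {
            // Each record: species name, trait name (one observed value implied).
            String[][] records = {
                {"Quercus robur", "growth form"}, {"Quercus robur", "leaf area"},
                {"Poa annua",     "growth form"},
                {"Picea abies",   "growth form"}, {"Picea abies",   "seed mass"},
            };
            Set<String> species = new TreeSet<>();
            Map<String, Set<String>> speciesByTrait = new TreeMap<>();
            for (String[] r : records) {
                species.add(r[0]);
                speciesByTrait.computeIfAbsent(r[1], k -> new TreeSet<>()).add(r[0]);
            }
            // Coverage: share of species with at least one record for the trait.
            for (var e : speciesByTrait.entrySet()) {
                double coverage = 100.0 * e.getValue().size() / species.size();
                System.out.printf("%-12s %5.1f%% of %d species%n",
                                  e.getKey(), coverage, species.size());
            }
        }
    }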