
    Categories of insight and their correlates: An exploration of relationships among classic-type insight problems, rebus puzzles, remote associates and esoteric analogies.

    A central question in creativity concerns how insightful ideas emerge. Anecdotal examples of insightful scientific and technical discoveries include Goodyear's discovery of the vulcanization of rubber and Mendeleev's realization, as he tried to arrange the elements into the Periodic Table, that there might be gaps in it. Although most people would regard these discoveries as insightful, cognitive psychologists have had difficulty agreeing on whether such ideas resulted from insight or from conventional problem-solving processes. One area of wide agreement among psychologists is that insight involves a process of restructuring. If this view is correct, then understanding insight and its role in problem solving will depend on a better understanding of restructuring and the characteristics that describe it. This article proposes and tests a preliminary classification of insight problems based on several restructuring characteristics: the need to redefine spatial assumptions, the need to change defined forms, the degree of misdirection involved, the difficulty of visualizing a possible solution, the number of restructuring sequences in the problem, and the requirement for figure-ground-type reversals. A second purpose of the study was to compare performance on classic spatial insight problems with two types of verbal tests that may be related to insight: the Remote Associates Test (RAT) and rebus puzzles. In doing so, we report the results of a survey of 172 business students at the University of Waikato in New Zealand who completed classic-type insight, RAT, and rebus problems.

    solveME: fast and reliable solution of nonlinear ME models.

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than the corresponding metabolic reconstructions (M models), they are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45% faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60× speedup) via solver warm-starts. Finally, we employ the tools developed to investigate growth-coupled succinate overproduction, accounting for proteome constraints. Conclusions: Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the widespread adoption of ME models by researchers in these fields.
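    The binary-search baseline the abstract benchmarks against exploits the fact that, at a fixed growth rate, the dilution constraints become linear, so each probe is an ordinary LP feasibility check. Below is a minimal Python sketch of that baseline, not of solveME itself; the lp_feasible oracle is a hypothetical stand-in for the fixed-rate LP solve.

    # Binary-search baseline for ME-model growth maximization: at a fixed
    # growth rate mu the dilution constraints are linear, so feasibility can
    # be checked with a single LP solve. `lp_feasible` is a hypothetical oracle.
    def maximize_growth(lp_feasible, mu_lo=0.0, mu_hi=2.0, digits=6):
        """Bisect on mu until `digits` significant digits are fixed."""
        tol = 10.0 ** (-digits)
        while mu_hi - mu_lo > tol * max(mu_hi, 1.0):
            mu = 0.5 * (mu_lo + mu_hi)
            if lp_feasible(mu):   # growth at rate mu is sustainable
                mu_lo = mu        # try a faster rate
            else:
                mu_hi = mu        # infeasible: lower the target
        return mu_lo

    # Toy oracle that is feasible up to mu* = 0.7391:
    print(maximize_growth(lambda mu: mu <= 0.7391))  # ~0.7391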

    The Process of Thinking by Prospective Teachers of Mathematics in Making Arguments

    This study aimed to describe the thinking processes of prospective mathematics teachers as they construct arguments. It was a qualitative study involving mathematics students of STKIP PGRI Jombang as subjects. A test and task-based semi-structured interviews were used for data collection. The results showed that 163 of 260 mathematics students argued using inductive and deductive warrants. The prospective teachers' process of constructing an argument began when they formed their first idea by examining some objects in order to draw a conclusion. However, they also found a rebuttal to that conclusion, though they did not describe the rebuttal further. They therefore constructed a second idea in order to verify the first one against definitions.

    Solving Linux Upgradeability Problems Using Boolean Optimization

    Managing the software complexity of package-based systems can be regarded as one of the main challenges in software architectures. Upgrades are required on short notice, and systems are expected to remain reliable and consistent afterwards. For each package in the system, a set of dependencies and a set of conflicts have to be taken into account. Although this problem is computationally hard to solve, efficient tools are required. In the best scenario, the solutions provided should also be optimal, in order to better fulfill users' requirements and expectations. This paper describes two different tools, both based on Boolean satisfiability (SAT), for solving Linux upgradeability problems. The problem instances used in the evaluation of these tools were mainly obtained from real environments, and are subject to two different lexicographic optimization criteria. The developed tools can provide optimal solutions for many of the instances, but a few challenges remain. Moreover, it is our understanding that this problem has many similarities with other configuration problems, and therefore the same techniques can be used in other domains.
    Comment: In Proceedings LoCoCo 2010, arXiv:1007.083
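    To make the encoding concrete, here is a self-contained toy sketch (not the paper's tools): each package is a Boolean variable, a dependency "p needs q or r" corresponds to the clause (~p | q | r), a conflict between p and q to the clause (~p | ~q), and a lexicographic criterion ranks the consistent installations. Real tools hand these clauses to a SAT/PBO solver; the enumeration and package names here are purely illustrative.

    from itertools import product

    packages = ["a", "b1", "b2", "c"]
    installed = {"a", "b1"}
    deps = [("a", {"b1", "b2"})]       # a requires b1 or b2
    conflicts = [("b1", "c")]          # b1 and c cannot coexist
    request = {"c"}                    # the user asks to install c

    def consistent(sel):
        dep_ok = all(p not in sel or (sel & alts) for p, alts in deps)
        conf_ok = all(not {p, q} <= sel for p, q in conflicts)
        return dep_ok and conf_ok and request <= sel

    candidates = []
    for bits in product([False, True], repeat=len(packages)):
        sel = {p for p, on in zip(packages, bits) if on}
        if consistent(sel):
            candidates.append(sel)

    # Lexicographic criterion: minimize removed packages, then total changes.
    best = min(candidates, key=lambda s: (len(installed - s), len(s ^ installed)))
    print(sorted(best))  # ['a', 'b2', 'c'] -- b1 is swapped out for b2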

    Data optimizations for constraint automata

    Constraint automata (CA) constitute a coordination model based on finite automata on infinite words. Originally introduced for modeling coordinators, an interesting new application of CAs is implementing coordinators (i.e., compiling CAs into executable code). Such an approach guarantees correctness-by-construction and can even yield code that outperforms hand-crafted code. The extent to which these two potential advantages materialize depends on the smartness of CA-compilers and the existence of proofs of their correctness. Every transition in a CA is labeled by a "data constraint" that specifies an atomic data-flow between coordinated processes as a first-order formula. At run-time, compiler-generated code must handle data constraints as efficiently as possible. In this paper, we present, and prove the correctness of, two optimization techniques for CA-compilers related to the handling of data constraints: a reduction to eliminate redundant variables, and a translation from (declarative) data constraints to (imperative) data commands expressed in a small sequential language. Through experiments, we show that these optimization techniques can have a positive impact on the performance of generated executable code.
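    As a rough illustration of the second optimization, the sketch below turns a declarative conjunction of equalities into an imperative sequence of assignments by scheduling each assignment only once its right-hand side is defined. The term representation and port names are invented, much simpler than real data constraints, and equality guards between already-defined ports are omitted.

    def compile_equalities(equalities, known):
        """Order x := y assignments so every right-hand side is defined first."""
        pending = list(equalities)          # pairs (lhs, rhs) meaning lhs = rhs
        program, defined = [], set(known)
        while pending:
            progress = False
            for lhs, rhs in list(pending):
                if rhs in defined:          # rhs available: emit the assignment
                    program.append(f"{lhs} := {rhs}")
                    defined.add(lhs)
                    pending.remove((lhs, rhs))
                    progress = True
                elif lhs in defined:        # equality is symmetric: flip it
                    pending.remove((lhs, rhs))
                    pending.append((rhs, lhs))
                    progress = True
            if not progress:
                raise ValueError("constraint is not solvable from known ports")
        return program

    # Data constraint: out = x AND x = in, with port `in` known at run-time.
    print(compile_equalities([("out", "x"), ("x", "in")], known={"in"}))
    # ['x := in', 'out := x']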

    From Gladiators to Problem-Solvers: Connecting Conversations About Women, the Academy, and the Legal Profession

    The UHF band between 470-790 MHz, currently occupied by digital terrestrial TV (DTT) distribution in Europe, is widely regarded as a premium spectrum band for providing mobile coverage. With the exponential increase in wireless data traffic in recent years, there has been growing interest in gaining access to this band for wireless broadband services. Secondary access in TV White Space is considered a cost-effective way to reuse the spectrum left unoccupied by the primary DTT network. At the same time, the declining influence of DTT and the converging trends of video consumption on TV and mobile platforms are new incentives for the regulator to reconsider the optimal utilization of the UHF broadcast band. The proposal to re-farm the UHF band for a converged content distribution network was born under these circumstances.
    This thesis develops a methodology for evaluating the technical performance of these options for utilizing the UHF broadcast band and for quantifying their gains in terms of achievable extra capacity and spectrum savings. For secondary access in TV White Space, our study indicates considerable potential for low-power secondary use, limited mostly by the adjacent-channel interference generated by densely deployed secondary devices, owing to the cumulative effect of multichannel interference. This potential does not, however, translate directly into capacity for a WiFi-like secondary system based on the CSMA/CA protocol, as network congestion and self-interference within the secondary system have a greater impact on network throughput than the primary interference constraint.
    Our study of the cellular content distribution network reveals greater potential benefits from re-farming the UHF broadcast band and reallocating it to a converged platform. This platform is based on cellular infrastructure and can provide TV service with the same quality requirements as DTT by delivering video content via either broadcast or unicast, as the situation dictates. We have developed a resource management framework to minimize the platform's spectrum requirement for providing TV service, and we identify a significant amount of spectrum that it can reuse to provide extra mobile broadband capacity in both urban and sparsely populated rural areas.
    Overall, we conclude that cellular content distribution in a re-farmed UHF band shows a more promising long-run prospect than secondary access in TV white space. Nevertheless, low-power secondary access remains a flexible and low-cost way to exploit the underutilized spectrum in the short term, despite the uncertainty of its future availability. The re-farming of the UHF broadcast band, on the other hand, is a long and difficult regulatory process facing substantial opposition from the incumbent. The results of this study could serve as input for future regulatory decisions on UHF band allocation and for cost-benefit analyses of deploying new systems to access this band.
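    The cumulative-interference effect described above is easy to reproduce numerically: even when each secondary transmitter is individually harmless to a DTT receiver, the aggregate power grows roughly 10 dB per tenfold increase in device density. The sketch below uses a plain log-distance path-loss model with invented parameters (transmit power, path-loss exponent, distances); it is not the propagation model used in the thesis.

    import math, random

    def rx_power_dbm(tx_dbm, dist_m, exponent=3.5, ref_loss_db=40.0):
        """Log-distance path loss: loss = ref_loss + 10*n*log10(d)."""
        return tx_dbm - (ref_loss_db + 10 * exponent * math.log10(max(dist_m, 1)))

    random.seed(0)
    for n_devices in (10, 100, 1000):
        # Secondary devices at 20 dBm scattered 100 m - 2 km from the receiver;
        # aggregate by summing received powers in linear units (mW).
        total_mw = sum(10 ** (rx_power_dbm(20.0, random.uniform(100, 2000)) / 10)
                       for _ in range(n_devices))
        print(f"{n_devices:5d} devices -> aggregate {10 * math.log10(total_mw):6.1f} dBm")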

    Quantifier-Free Interpolation of a Theory of Arrays

    The use of interpolants in model checking is becoming an enabling technology for fast and robust verification of hardware and software. The application of encodings based on the theory of arrays, however, is limited by the impossibility of deriving quantifier-free interpolants in general. In this paper, we show that it is possible to obtain quantifier-free interpolants for a Skolemized version of the extensional theory of arrays. We prove this in two ways: (1) non-constructively, by using the model-theoretic notion of amalgamation, which is known to be equivalent, for universal theories, to admitting quantifier-free interpolation; and (2) constructively, by designing an interpolating procedure based on solving equations between array updates. (Interestingly, rewriting techniques are used in the key steps of the solver and its proof of correctness.) To the best of our knowledge, this is the first successful attempt at computing quantifier-free interpolants for a variant of the theory of arrays with extensionality.
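    Reasoning about equations between array updates rests on McCarthy's read-over-write axiom, read(write(a, i, v), j) = v if i = j and read(a, j) otherwise. The toy sketch below reduces reads over chains of updates under the strong simplifying assumption that syntactically distinct index symbols denote distinct indices, a case the paper's procedure handles properly by case analysis; the term representation is ours, not the paper's.

    def read(array, j):
        """Symbolically reduce read(array, j) over nested writes."""
        while isinstance(array, tuple):    # array = ("write", base, i, v)
            _, base, i, v = array
            if i == j:                     # indices syntactically equal
                return v
            array = base                   # assumed distinct: skip this write
        return ("read", array, j)          # uninterpreted base: stay symbolic

    a = "a"                                # an uninterpreted base array
    t = ("write", ("write", a, "i", "x"), "k", "y")   # write(write(a,i,x),k,y)
    print(read(t, "k"))                    # 'y'
    print(read(t, "i"))                    # 'x'  (assumes i and k are distinct)
    print(read(t, "j"))                    # ('read', 'a', 'j')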