
    Barriers to Scientific Contributions: The Author’s Formula

    Recently I completed a review of the empirical research on scientific journals (Armstrong 1982). This review provided evidence for an “author’s formula,” a set of rules that authors can use to increase the likelihood and speed of acceptance of their manuscripts. Authors should: (1) not pick an important problem, (2) not challenge existing beliefs, (3) not obtain surprising results, (4) not use simple methods, (5) not provide full disclosure, and (6) not write clearly. Peters & Ceci (P&C) are obviously ignorant of the author’s formula. In their extension of the Kosinski study (Ross 1979; 1980), they broke most of the rules. Why, then, is P&C’s paper being published? In my search for an explanation, I learned the following from Peters: (a) After a long delay, the paper was rejected by Science, with advice that it would be appropriate for the American Psychologist. (b) After a long delay, the paper was rejected by the American Psychologist. This history illustrates the predictive power of the author’s formula. Submission was meanwhile encouraged by the editor of the Behavioral and Brain Sciences – a journal specializing in peer interaction on controversial papers – and, after a final round of major revision, the paper was accepted for publication. In this commentary, I describe how P&C violated many rules in the author’s formula. It may be too late to salvage their careers, but the discussion should be instructive to other authors.
    Keywords: barriers, scientific contribution, publication

    A short guidance how to make the 'Communication and visibility plan' operational

    A practical guide for novice press and communication officers on how to use a communication and visibility plan at the operational level, and on how to write press releases and newsletters and formulate smart messages.

    On the synthesis and processing of high quality audio signals by parallel computers

    This work concerns the application of new computer architectures to the creation and manipulation of high-quality audio bandwidth signals. The configuration of both the hardware and software in such systems falls under consideration in the three major sections, which present increasing levels of algorithmic concurrency. In the first section, the programs which are described are distributed in identical copies across an array of processing elements; these programs run autonomously, generating data independently, but with control parameters peculiar to each copy: this type of concurrency is referred to as isonomic.(1) The central section presents a structure which distributes tasks across an arbitrary network of processors; the flow of control in such a program is quasi-indeterminate, and controlled on a demand basis by the rate of completion of the slave tasks and their irregular interaction with the master. Whilst that interaction is, in principle, deterministic, it is also data-dependent; the dynamic nature of task allocation demands that no a priori knowledge of the rate of task completion be required. This type of concurrency is called dianomic.(2) Finally, an architecture is described which will support a very high level of algorithmic concurrency. The programs which make efficient use of such a machine are designed not by considering flow of control, but by considering flow of data. Each atomic algorithmic unit is made as simple as possible, which results in the extensive distribution of a program over very many processing elements. Programs designed by considering only the optimum data exchange routes are said to exhibit systolic(3) concurrency. Often neglected in the study of system design are those provisions necessary for practical implementations.
    It was intended to provide users with useful application programs in fulfilment of this study; the target group is electroacoustic composers, who use digital signal processing techniques in the context of musical composition. Some of the algorithms in use in this field are highly complex, often requiring a quantity of processing for each sample which exceeds that currently available even from very powerful computers. Consequently, applications tend to operate not in 'real-time' (where the output of a system responds to its input apparently instantaneously), but by the manipulation of sounds recorded digitally on a mass storage device. The first two sections adopt existing, public-domain software, and seek to increase its speed of execution significantly by parallel techniques, with the minimum compromise of functionality and ease of use. Those chosen are the general-purpose direct synthesis program CSOUND, from M.I.T., and a stand-alone phase vocoder system from the C.D.P.(4) In each case, the desired aim is achieved: to increase speed of execution by two orders of magnitude over the systems currently in use by composers. This requires substantial restructuring of the programs, and careful consideration of the best computer architectures on which they are to run concurrently. The third section examines the rationale behind the use of computers in music, and begins with the implementation of a sophisticated electronic musical instrument capable of a degree of expression at least equal to its acoustic counterparts. It seems that the flexible control of such an instrument demands a greater computing resource than the sound synthesis part.
    A machine has been constructed with the intention of enabling the 'gestural capture' of performance information in real-time; the structure of this computer, which has one hundred and sixty high-performance microprocessors running in parallel, is expounded; and the systolic programming techniques required to take advantage of such an array are illustrated in the Occam programming language.
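    The "isonomic" pattern described in the first section (identical program copies, each driven by its own control parameters) survives today as the data-parallel map. A minimal sketch in Python, with the synthesis function, parameter values, and sample rate chosen purely for illustration rather than taken from the thesis:

```python
import math
from concurrent.futures import ThreadPoolExecutor

SAMPLE_RATE = 8000  # assumed value; the thesis targets audio-bandwidth signals
N = 1024            # number of samples per copy

def synth(freq_amp):
    """One 'copy' of the program: the same code runs in every worker,
    but with control parameters peculiar to that copy."""
    freq, amp = freq_amp
    return [amp * math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
            for t in range(N)]

# Per-copy control parameters, one (frequency, amplitude) pair per element.
params = [(220.0, 0.5), (440.0, 0.3), (660.0, 0.2)]

# Each copy generates its data independently, as in the isonomic scheme.
with ThreadPoolExecutor(max_workers=len(params)) as pool:
    partials = list(pool.map(synth, params))

# Mix the independently generated partials into a single output signal.
mixed = [sum(vals) for vals in zip(*partials)]
```

Each worker is autonomous until the final mix, which is exactly what makes the scheme easy to distribute over an array of processing elements.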

    How long does it take to generate a group?

    The diameter of a finite group G with respect to a generating set A is the smallest non-negative integer n such that every element of G can be written as a product of at most n elements of A ∪ A⁻¹. We denote this invariant by diam_A(G). It can be interpreted as the diameter of the Cayley graph induced by A on G and arises, for instance, in the context of efficient communication networks. In this paper we study the diameters of a finite abelian group G with respect to its various generating sets A. We determine the maximum possible value of diam_A(G) and classify all generating sets for which this maximum value is attained. Also, we determine the maximum possible cardinality of A subject to the condition that diam_A(G) is "not too small". Connections with caps, sum-free sets, and quasi-perfect codes are discussed.
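    The Cayley-graph interpretation of diam_A(G) can be checked directly by breadth-first search. A small sketch (our own, not from the paper) for the cyclic group Z/nZ, where group elements are residues and each generator g contributes steps +g and -g:

```python
def cayley_diameter(n, gens):
    """diam_A(G) for G = Z/nZ and A = gens: breadth-first search over the
    Cayley graph induced by gens and their inverses."""
    step = {g % n for g in gens} | {(-g) % n for g in gens}
    dist = {0: 0}
    frontier = [0]
    while frontier:
        nxt = []
        for x in frontier:
            for s in step:
                y = (x + s) % n
                if y not in dist:
                    dist[y] = dist[x] + 1
                    nxt.append(y)
        frontier = nxt
    if len(dist) < n:
        raise ValueError("gens does not generate Z/nZ")
    # The diameter is the distance to the farthest group element.
    return max(dist.values())
```

For example, cayley_diameter(10, [1]) is 5: with steps ±1 in Z/10Z, the element 5 is farthest from the identity.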

    From one solution of a 3-satisfiability formula to a solution cluster: Frozen variables and entropy

    A solution to a 3-satisfiability (3-SAT) formula can be expanded into a cluster, all other solutions of which are reachable from this one through a sequence of single-spin flips. Some variables in the solution cluster are frozen to the same spin values by one of two different mechanisms: frozen-core formation and long-range frustrations. While frozen cores are identified by a local whitening algorithm, long-range frustrations are very difficult to trace, and they make an entropic belief-propagation (BP) algorithm fail to converge. For BP to reach a fixed point, the spin values of a tiny fraction of variables (chosen according to the whitening algorithm) are externally fixed during the iteration. From the calculated entropy values, we infer that, for a large random 3-SAT formula with constraint density close to the satisfiability threshold, the solutions obtained by the survey-propagation or the walksat algorithm belong neither to the most dominating clusters of the formula nor to the most abundant clusters. This work indicates that a single solution cluster of a random 3-SAT formula may have further community structures.
    Comment: 13 pages, 6 figures. Final version as published in PR
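    The single-flip cluster and frozen-variable notions can be made concrete on a toy formula. The example below is our own and far below the problem sizes studied in the paper; it enumerates all solutions, grows a cluster by single-variable flips, and then reads off the frozen variables:

```python
from itertools import product

# Toy 3-SAT formula (ours, not from the paper): clauses are tuples of signed
# literals, negative meaning negated. These four clauses force x3 = True.
clauses = [(1, 2, 3), (-1, 2, 3), (1, -2, 3), (-1, -2, 3)]

def satisfies(assign, clauses):
    """True iff every clause contains at least one satisfied literal."""
    return all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

n = 3
solutions = [a for a in product([False, True], repeat=n) if satisfies(a, clauses)]

def cluster_of(sol, solutions):
    """All solutions reachable from `sol` through single-variable flips
    that never leave the solution set (the single-spin-flip walk)."""
    sols = set(solutions)
    seen, frontier = {sol}, [sol]
    while frontier:
        cur = frontier.pop()
        for i in range(len(cur)):
            nb = cur[:i] + (not cur[i],) + cur[i + 1:]
            if nb in sols and nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return seen

cluster = cluster_of(solutions[0], solutions)
# A variable is frozen in the cluster if it takes one value in every member.
frozen = [i for i in range(n) if len({s[i] for s in cluster}) == 1]
```

Here the cluster contains all four solutions and frozen comes out as [2]: the third variable is frozen to True throughout the cluster, while the first two are free.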

    Trinity symmetry and kaleidoscopic regular maps

    A cellular embedding of a connected graph (also known as a map) on an orientable surface has trinity symmetry if it is isomorphic to both its dual and its Petrie dual. A map is regular if for any two incident vertex-edge pairs there is an automorphism of the map sending the first pair onto the second. Given a map M with all vertices of the same degree d, for any e relatively prime to d the power map M^e is formed from M by replacing the cyclic rotation of edges at each vertex on the surface with the e-th power of the rotation. A map is kaleidoscopic if all of its power maps are pairwise isomorphic. In this paper, we present a covering construction that gives infinite families of kaleidoscopic regular maps with trinity symmetry.
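    The condition that e be relatively prime to d is what keeps the powered rotation a genuine rotation (a single d-cycle on the edges around a vertex). A small self-contained check, with function names of our own choosing:

```python
def power_rotation(d, e):
    """e-th power of the cyclic rotation (0 1 ... d-1) of the d edges
    around a vertex: edge position k is sent to (k + e) mod d."""
    return [(k + e) % d for k in range(d)]

def is_single_cycle(perm):
    """True iff the permutation is one cycle through all points, i.e. the
    powered rotation is again a full rotation of the edges at the vertex."""
    seen, k = set(), 0
    while k not in seen:
        seen.add(k)
        k = perm[k]
    return len(seen) == len(perm)

# For d = 5, every e in 1..4 is coprime to d, so every power is a rotation.
d = 5
results = {e: is_single_cycle(power_rotation(d, e)) for e in range(1, d)}
```

By contrast, power_rotation(6, 2) splits the six edges into two 3-cycles, which is why the construction restricts to e relatively prime to d.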

    Contribution of Long Wavelength Gravitational Waves to the CMB Anisotropy

    We present an in-depth discussion of the production of gravitational waves from an inflationary phase that could have occurred in the early universe, giving derivations for the resulting spectrum and energy density. We also consider the large-scale anisotropy in the cosmic microwave background radiation coming from these waves. Assuming that the observed quadrupole anisotropy comes mostly from gravitational waves (consistent with the predictions of a flat spectrum of scalar density perturbations and the measured dipole anisotropy), we describe in detail how to derive a value for the scale of inflation of (1.5-5) × 10^16 GeV, which is at a particularly interesting scale for particle physics. This upper limit corresponds to a 95% confidence level upper limit on the scale of inflation, assuming only that the quadrupole anisotropy from gravitational waves is not cancelled by another source. Direct detection of gravitational waves produced by inflation near this scale will have to wait for the next generation of detectors.
    Comment: (LaTeX 16 pages), 2 figures not included, YCTP-P16-9

    Simulation of packet and cell-based communication networks

    This thesis investigates, using simulation techniques, the practical aspects of implementing a novel mobility protocol on the emerging Broadband Integrated Services Digital Network standard. The increasing expansion of telecommunications networks has meant that the demand for simulation has increased rapidly in recent years; but conventional simulators are slow, and developments in the communications field are outstripping the ability of sequential uni-processor simulators. Newer techniques using distributed simulation on a multi-processor network are investigated in an attempt to make a cell-level simulation of a non-trivial B.-I.S.D.N. network feasible. The current state of development of the Asynchronous Transfer Mode standard, which will be used to implement a B.-I.S.D.N., is reviewed, and simulation studies of the Orwell Slotted Ring protocol were made in an attempt to devise a simpler model for use in the main simulator. The mobility protocol, which uses a footprinting technique to simplify hand-offs by distributing information about a connexion to surrounding base stations, was implemented on the simulator and found to be functional after a few 'special case' scenarios had been catered for.
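    The sequential event-driven core that such simulators are built on (and that distributed simulation parallelises) can be sketched in a few lines. This toy single-link model is our own illustration, not the thesis's B.-I.S.D.N. simulator:

```python
import heapq

def simulate(arrivals, service_time):
    """Minimal sequential cell-level simulation: one link serves fixed-size
    cells in timestamp order; returns each cell's departure time.
    Events are processed from a priority queue keyed on simulated time,
    which is the part distributed simulators must keep consistent."""
    events = [(t, i) for i, t in enumerate(arrivals)]  # (arrival time, cell id)
    heapq.heapify(events)
    link_free = 0.0
    departures = {}
    while events:
        t, i = heapq.heappop(events)
        start = max(t, link_free)      # wait if the link is still busy
        link_free = start + service_time
        departures[i] = link_free
    return departures

# Three cells on a link with a 1.0 time-unit service time per cell:
# cell 1 arrives while cell 0 is in service, so it queues until t = 1.0.
print(simulate([0.0, 0.5, 3.0], 1.0))  # {0: 1.0, 1: 2.0, 2: 4.0}
```

A conventional simulator advances this single queue one event at a time; the distributed techniques investigated in the thesis split the model across processors while preserving the same timestamp order.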