
    Proof of Kaneko--Tsumura Conjecture on Triple T-Values

    Many $\mathbb{Q}$-linear relations exist between multiple zeta values, the most interesting of which are various weighted sum formulas. In this paper, we generalize these to Euler sums and some other variants of multiple zeta values by considering the generating functions of the Euler sums. Through this approach we are able to re-prove a few known formulas, confirm a conjecture of Kaneko and Tsumura on triple $T$-values, and discover many new identities.
    Comment: 11 pages; streamlined the draft so that only results relevant to the proof of the conjecture are presented
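For orientation, an example of the kind of weighted sum identity meant here (a classical result, not one quoted from this paper) is the depth-two weighted sum formula usually attributed to Ohno and Zudilin, together with Euler's identity as its smallest case:

```latex
% Euler's classical identity (weight 3, depth 2):
\zeta(2,1) = \zeta(3).

% Depth-two weighted sum formula: for integer weight $w \ge 3$,
\sum_{\substack{a+b=w \\ a \ge 2,\; b \ge 1}} 2^{a}\,\zeta(a,b) = (w+1)\,\zeta(w).

% Check at $w=3$: the only admissible term is
% $2^{2}\zeta(2,1) = 4\zeta(3) = (3+1)\zeta(3)$.
```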

    Risk of selection bias in randomised trials

    Background: Selection bias occurs when recruiters selectively enrol patients into the trial based on what the next treatment allocation is likely to be. This can occur even if appropriate allocation concealment is used if recruiters can guess the next treatment assignment with some degree of accuracy. This typically occurs in unblinded trials when restricted randomisation is implemented to force the number of patients in each arm or within each centre to be the same. Several methods to reduce the risk of selection bias have been suggested; however, it is unclear how often these techniques are used in practice. Methods: We performed a review of published trials which were not blinded to assess whether they utilised methods for reducing the risk of selection bias. We assessed the following techniques: (a) blinding of recruiters; (b) use of simple randomisation; (c) avoidance of stratification by site when restricted randomisation is used; (d) avoidance of permuted blocks if stratification by site is used; and (e) incorporation of prognostic covariates into the randomisation procedure when restricted randomisation is used. We included parallel group, individually randomised phase III trials published in four general medical journals (BMJ, Journal of the American Medical Association, The Lancet, and New England Journal of Medicine) in 2010. Results: We identified 152 eligible trials. Most trials (98%) provided no information on whether recruiters were blind to previous treatment allocations. Only 3% of trials used simple randomisation; 63% used some form of restricted randomisation, and 35% did not state the method of randomisation. Overall, 44% of trials were stratified by site of recruitment; 27% were not, and 29% did not report this information. Most trials that did stratify by site of recruitment used permuted blocks (58%), and only 15% reported using random block sizes. 
Many trials that used restricted randomisation also included prognostic covariates in the randomisation procedure (56%). Conclusions: The risk of selection bias could not be ascertained for most trials due to poor reporting. Many trials that did provide details on the randomisation procedure were at risk of selection bias due to poorly chosen randomisation methods. Techniques to reduce the risk of selection bias should be more widely implemented.
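To make the contrast between the two allocation schemes concrete, here is a minimal, hypothetical sketch (not taken from the review). With a single fixed block size, an unblinded recruiter who has observed most of a block can predict the remaining allocations with certainty; simple randomisation removes that predictability at the cost of possible imbalance, and random block sizes sit in between.

```python
import random

def simple_randomisation(n, arms=("A", "B"), seed=None):
    """Independent coin flip per patient: allocations are unpredictable,
    but arm sizes may drift apart in small trials."""
    rng = random.Random(seed)
    return [rng.choice(arms) for _ in range(n)]

def permuted_blocks(n, arms=("A", "B"), block_sizes=(2, 4), seed=None):
    """Each block contains equal numbers of every arm, so running totals
    stay balanced; random block sizes make the tail of a block harder to
    guess than a single fixed block size. Block sizes are assumed to be
    multiples of the number of arms."""
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n:
        size = rng.choice(block_sizes)
        block = list(arms) * (size // len(arms))
        rng.shuffle(block)  # random order within the block
        allocations.extend(block)
    return allocations[:n]
```

For example, with `block_sizes=(2,)` the second allocation of every block is fully determined by the first, which is exactly the guessing opportunity the review describes.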

    Whole Exome Sequence Analysis Provides Novel Insights into the Genetic Framework of Childhood-Onset Pulmonary Arterial Hypertension.

    Pulmonary arterial hypertension (PAH) describes a rare, progressive vascular disease caused by the obstruction of pulmonary arterioles, typically resulting in right heart failure. Whilst PAH most often manifests in adulthood, paediatric disease is considered to be a distinct entity with increased morbidity and often an unexplained resistance to current therapies. Recent genetic studies have substantially increased our understanding of PAH pathogenesis, providing opportunities for molecular diagnosis and presymptomatic genetic testing in families. However, the genetic architecture of childhood-onset PAH remains relatively poorly characterised. We sought to investigate a previously unsolved paediatric cohort (n = 18) using whole exome sequencing to improve the molecular diagnosis of childhood-onset PAH. Through a targeted investigation of 26 candidate genes, we applied a rigorous variant filtering methodology to enrich for rare, likely pathogenic variants. This analysis led to the detection of novel PAH risk alleles in five genes, including the first identification of a heterozygous ATP13A3 mutation in childhood-onset disease. In addition, we provide the first independent validation of BMP10 and PDGFD as genetic risk factors for PAH. These data provide a molecular diagnosis in 28% of paediatric cases, reflecting the increased genetic burden in childhood-onset disease and highlighting the importance of next-generation sequencing approaches to diagnostic surveillance.

    Calibration estimation in dual-frame surveys

    Survey statisticians make use of auxiliary information to improve estimates. One important example is calibration estimation, which constructs new weights that match benchmark constraints on auxiliary variables while remaining “close” to the design weights. Multiple-frame surveys are increasingly used by statistical agencies and private organizations to reduce sampling costs and/or avoid frame undercoverage errors. Several ways of combining estimates derived from such frames have been proposed elsewhere; in this paper, we extend the calibration paradigm, previously used for single-frame surveys, to estimate the total of a variable of interest in a dual-frame survey. Calibration is a general tool that allows the inclusion of auxiliary information from two frames. It also incorporates, as a special case, certain dual-frame estimators that have been proposed previously. The theoretical properties of our class of estimators are derived and discussed, and simulation studies are conducted to compare the efficiency of the procedure using different sets of auxiliary variables. Finally, the proposed methodology is applied to real data obtained from the Barometer of Culture of Andalusia survey.
    Funding: Ministerio de Educación y Ciencia; Consejería de Economía, Innovación, Ciencia y Empleo; PRIN-SURWE
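As a sketch of the single-frame building block only (the dual-frame extension developed in the paper is more involved), chi-square-distance calibration has a closed-form Lagrange-multiplier solution. This is a generic illustration assuming NumPy, not code from the paper:

```python
import numpy as np

def linear_calibration(d, X, totals):
    """Chi-square-distance calibration: returns weights w minimising
    sum((w_i - d_i)**2 / d_i) subject to the benchmark constraints
    X.T @ w == totals, where d holds the design weights and each row
    of X holds one unit's auxiliary variables."""
    d = np.asarray(d, dtype=float)
    X = np.asarray(X, dtype=float)
    totals = np.asarray(totals, dtype=float)
    # Lagrange-multiplier solution: w = d * (1 + X @ lam)
    lam = np.linalg.solve(X.T @ (d[:, None] * X), totals - X.T @ d)
    return d * (1.0 + X @ lam)
```

Calibrating on a constant column of ones forces the weights to reproduce the known population size; each additional auxiliary total tightens the estimator further.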

    A survey of performance enhancement of transmission control protocol (TCP) in wireless ad hoc networks

    This article is provided by the Brunel Open Access Publishing Fund. Copyright © 2011 Springer Open.
    Transmission control protocol (TCP), which provides reliable end-to-end data delivery, performs well in traditional wired network environments, but it does not perform well in wireless ad hoc networks. Compared to wired networks, wireless ad hoc networks have some specific characteristics such as node mobility and a shared medium. Owing to these characteristics, TCP faces particular problems with, for example, route failure, channel contention and high bit error rates. These factors are responsible for the performance degradation of TCP in wireless ad hoc networks. The research community has produced a wide range of proposals to improve the performance of TCP in wireless ad hoc networks. This article presents a survey of these proposals (approaches). A classification of TCP improvement proposals for wireless ad hoc networks is presented, which makes it easy to compare the proposals falling under the same category. Tables which summarize the approaches for quick overview are provided. Possible directions for further improvements in this area are suggested in the conclusions. The aim of the article is to enable the reader to quickly acquire an overview of the state of TCP in wireless ad hoc networks.
    This study is partly funded by Kohat University of Science & Technology (KUST), Pakistan, and the Higher Education Commission, Pakistan.

    Neural Network Parameterizations of Electromagnetic Nucleon Form Factors

    The electromagnetic nucleon form-factor data are studied with artificial feed-forward neural networks. As a result, unbiased, model-independent form-factor parametrizations are obtained together with their uncertainties. The Bayesian approach for neural networks is adapted to a chi-squared-like error function and applied to the data analysis. A sequence of feed-forward neural networks with one hidden layer of units is considered, each network representing a particular form-factor parametrization. The so-called evidence (the measure of how much the data favour a given statistical model) is computed within the Bayesian framework and is used to determine the best form-factor parametrization.
    Comment: The revised version is divided into 4 sections. The discussion of the prior assumptions is added. The manuscript contains 4 new figures and 2 new tables (32 pages, 15 figures, 2 tables)
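The evidence-based ranking can be caricatured with a much cruder device than the paper's full Bayesian framework: the BIC, chi2_min + k·ln(n), is a large-sample surrogate for −2·ln(evidence) and penalises flexible parametrizations in the same spirit. A hypothetical sketch with polynomial "parametrizations" standing in for the neural networks, assuming NumPy:

```python
import numpy as np

def chi2(y, y_fit, sigma):
    """Chi-squared error for data y with uncertainties sigma."""
    return float(np.sum(((y - y_fit) / sigma) ** 2))

def rank_by_bic(x, y, sigma, max_params=6):
    """Fit models of increasing flexibility (k free parameters each)
    and score every one with BIC = chi2_min + k * ln(n): the best model
    balances goodness of fit against an Occam penalty, mimicking
    evidence maximisation."""
    sigma = np.broadcast_to(np.asarray(sigma, dtype=float), np.shape(y))
    n = len(x)
    scores = {}
    for k in range(1, max_params + 1):
        # numpy.polyfit expects weights 1/sigma for Gaussian errors
        coef = np.polyfit(x, y, deg=k - 1, w=1.0 / sigma)
        scores[k] = chi2(y, np.polyval(coef, x), sigma) + k * np.log(n)
    return scores
```

On data generated by a quadratic, underfitting models incur a huge chi-squared and lose to the true complexity despite their smaller penalty term.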

    The genomic evolution of human prostate cancer.

    Prostate cancers are highly prevalent in the developed world, with inheritable risk contributing appreciably to tumour development. Genomic heterogeneity within individual prostate glands and between patients derives predominantly from structural variants and copy-number aberrations. Subtypes of prostate cancers are being delineated through the increasing use of next-generation sequencing, but these subtypes are yet to be used to guide prognosis or therapeutic strategy. Herein, we review our current knowledge of the mutational landscape of human prostate cancer, describing what is known of the common mutations underpinning its development. We evaluate recurrent prostate-specific mutations prior to discussing the mutational events that are shared both in prostate cancer and across multiple cancer types. From these data, we construct a putative overview of the genomic evolution of human prostate cancer.

    Sociological and Communication-Theoretical Perspectives on the Commercialization of the Sciences

    Both self-organization and organization are important for the further development of the sciences: the two dynamics condition and enable each other. Commercial and public considerations can interact and "interpenetrate" in historical organization; different codes of communication are then "recombined." However, self-organization in the symbolically generalized codes of communication can be expected to operate at the global level. The Triple Helix model allows for both a neo-institutional appreciation in terms of historical networks of university-industry-government relations and a neo-evolutionary interpretation in terms of three functions: (i) novelty production, (ii) wealth generation, and (iii) political control. Using this model, one can appreciate both subdynamics. The mutual information in three dimensions enables us to measure the trade-off between organization and self-organization as a possible synergy. The question of optimization between commercial and public interests in the different sciences can thus be made empirical.
    Comment: Science & Education (forthcoming)
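The three-dimensional mutual information mentioned above is T(x,y,z) = H(x) + H(y) + H(z) − H(x,y) − H(x,z) − H(y,z) + H(x,y,z), where negative values are conventionally read as synergy among the three dimensions. A minimal, illustrative sketch (not the paper's scientometric pipeline):

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy in bits of an empirical distribution."""
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values() if c)

def mutual_information_3d(triples):
    """T(x,y,z) from a list of (x, y, z) observations; negative values
    indicate a synergy that no pairwise relation captures on its own."""
    def H(key):
        return entropy(Counter(key(t) for t in triples))
    return (H(lambda t: t[0]) + H(lambda t: t[1]) + H(lambda t: t[2])
            - H(lambda t: (t[0], t[1]))
            - H(lambda t: (t[0], t[2]))
            - H(lambda t: (t[1], t[2]))
            + H(lambda t: t))
```

For independent dimensions T is zero; for z = x XOR y it is −1 bit, the textbook case of pure three-way synergy.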

    Genome Sizes and the Benford Distribution

    BACKGROUND: Data on the number of Open Reading Frames (ORFs) coded by genomes from the 3 domains of Life show the presence of some notable general features. These include essential differences between the Prokaryotes and Eukaryotes, with the number of ORFs growing linearly with total genome size for the former, but only logarithmically for the latter. RESULTS: Simply by assuming that the (protein) coding and non-coding fractions of the genome must have different dynamics and that the non-coding fraction must be particularly versatile and therefore be controlled by a variety of (unspecified) probability distribution functions (pdfs), we are able to predict that the number of ORFs for Eukaryotes follows a Benford distribution and must therefore have a specific logarithmic form. Using the data for the 1000+ genomes available to us in early 2010, we find that the Benford distribution provides excellent fits to the data over several orders of magnitude. CONCLUSIONS: In its linear regime the Benford distribution produces excellent fits to the Prokaryote data, while the full non-linear form of the distribution similarly provides an excellent fit to the Eukaryote data. Furthermore, in their region of overlap the salient features are statistically congruent. This allows us to interpret the difference between Prokaryotes and Eukaryotes as the manifestation of the increased demand in the biological functions required for the larger Eukaryotes, to estimate some minimal genome sizes, and to predict a maximal Prokaryote genome size on the order of 8–12 megabase pairs. These results naturally allow a mathematical interpretation in terms of maximal entropy and, therefore, most efficient information transmission.
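For reference (generic code, not from the paper), the Benford probability for leading digit d is log10(1 + 1/d), and an empirical leading-digit histogram can be checked against it:

```python
import math
from collections import Counter

def benford_pmf():
    """P(d) = log10(1 + 1/d) for leading digits d = 1..9."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    """First significant digit of a nonzero number, via scientific notation."""
    return int(f"{abs(x):e}"[0])

def leading_digit_freqs(values):
    """Empirical frequency of each leading digit 1..9."""
    counts = Counter(leading_digit(v) for v in values)
    n = len(values)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}
```

A data set follows Benford's law exactly when the fractional parts of log10 of its values are uniformly distributed, which is why scale-spanning quantities like genome sizes are natural candidates.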