    On the maximal number of cubic subwords in a string

    We investigate the problem of the maximum number of cubic subwords (of the form www) in a given word. We also consider square subwords (of the form ww). The problem of the maximum number of squares in a word is not well understood. Several new results related to this problem are produced in the paper. We consider two simple problems related to the maximum number of subwords which are squares or which are highly repetitive; then we provide a nontrivial estimation for the number of cubes. We show that the maximum number of squares xx such that x is not a primitive word (nonprimitive squares) in a word of length n is exactly ⌊n/2⌋ − 1, and the maximum number of subwords of the form x^k, for k ≄ 3, is exactly n − 2. In particular, the maximum number of cubes in a word is not greater than n − 2 either. Using very technical properties of occurrences of cubes, we improve this bound significantly. We show that the maximum number of cubes in a word of length n is between (1/2)n and (4/5)n. (In particular, we improve the lower bound from the conference version of the paper.)
    Comment: 14 pages
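    The quantities discussed in the abstract can be checked by hand on small words. The following brute-force counter of distinct cubic subwords is an illustrative sketch only (O(n^3) in the worst case, not the paper's method):

```python
def count_distinct_cubes(s: str) -> int:
    """Count distinct subwords of s that are cubes, i.e. of the form xxx
    for a nonempty word x. Brute force over all starting positions i and
    period lengths k; for illustration on small inputs only."""
    cubes = set()
    n = len(s)
    for i in range(n):
        for k in range(1, (n - i) // 3 + 1):
            x = s[i:i + k]
            if x * 3 == s[i:i + 3 * k]:
                cubes.add(x * 3)
    return len(cubes)
```

    For example, "aaaaaa" contains the two distinct cubes "aaa" and "aaaaaa", while "abcabc" is a square but contains no cube.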

    Signal and System Approximation from General Measurements

    In this paper we analyze the behavior of system approximation processes for stable linear time-invariant (LTI) systems and signals in the Paley-Wiener space PW_\pi^1. We consider approximation processes, where the input signal is not directly used to generate the system output, but instead a sequence of numbers is used that is generated from the input signal by measurement functionals. We consider classical sampling which corresponds to a pointwise evaluation of the signal, as well as several more general measurement functionals. We show that a stable system approximation is not possible for pointwise sampling, because there exist signals and systems such that the approximation process diverges. This remains true even with oversampling. However, if more general measurement functionals are considered, a stable approximation is possible if oversampling is used. Further, we show that without oversampling we have divergence for a large class of practically relevant measurement procedures.
    Comment: This paper will be published as part of the book "New Perspectives on Approximation and Sampling Theory - Festschrift in honor of Paul Butzer's 85th birthday" in the Applied and Numerical Harmonic Analysis Series, Birkhauser (Springer-Verlag). Parts of this work have been presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing 2014 (ICASSP 2014).
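    As a concrete point of reference, the classical pointwise-sampling reconstruction mentioned above is the Shannon series f(t) ≈ ÎŁ_{|k|≀N} f(k) sinc(t − k). The sketch below is a numerical toy, not the paper's operator-valued setting; it evaluates a truncated series for one band-limited test signal and shows that truncation error at a fixed point can be made small for this particular signal (the paper's divergence results concern worst-case behavior over the whole space, which no single example can exhibit):

```python
import numpy as np

def shannon_reconstruct(f, t, N=100):
    """Truncated classical sampling series sum_{|k|<=N} f(k) sinc(t - k).
    np.sinc uses the normalized convention sin(pi x) / (pi x), which is
    the correct interpolation kernel for signals band-limited to pi."""
    k = np.arange(-N, N + 1)
    return float(np.sum(f(k) * np.sinc(t - k)))

# A band-limited test signal: a shifted sinc, so f(0.3) = 1 exactly.
f = lambda t: np.sinc(t - 0.3)
approx = shannon_reconstruct(f, 0.3)  # close to 1, up to truncation error
```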

    Model-consistent estimation of the basic reproduction number from the incidence of an emerging infection

    We investigate the merit of deriving an estimate of the basic reproduction number $\mathcal{R}_0$ early in an outbreak of an (emerging) infection from estimates of the incidence and generation interval only. We compare such estimates of $\mathcal{R}_0$ with estimates incorporating additional model assumptions, and determine the circumstances under which the different estimates are consistent. We show that one has to be careful when using observed exponential growth rates to derive an estimate of $\mathcal{R}_0$, and we quantify the discrepancies that arise.
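    A minimal illustration of why the model assumption matters: two standard textbook conversions turn the same observed exponential growth rate r and mean generation interval T into different estimates of the basic reproduction number. These formulas are standard in the epidemiological literature, not necessarily the ones derived in the paper:

```python
import math

def R0_fixed_generation(r, T):
    """R0 assuming every infection has exactly the same generation
    interval T: R0 = exp(r * T)."""
    return math.exp(r * T)

def R0_exponential_generation(r, T):
    """R0 assuming an exponentially distributed generation interval
    with mean T (e.g. a simple SIR model): R0 = 1 + r * T."""
    return 1.0 + r * T

# Hypothetical numbers: growth rate 0.14/day, mean generation interval 5 days.
r, T = 0.14, 5.0
# The fixed-generation assumption always gives the larger estimate,
# so the two models can disagree substantially for the same data.
```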

    Utility of Pathology Imagebase for Standardization of Prostate Cancer Grading

    Aims: Despite efforts to standardise grading of prostate cancer, even among experts there is still considerable variation in grading practices. In this study we describe the use of Pathology Imagebase, a novel reference image library, for setting an international standard in prostate cancer grading. Methods and results: The International Society of Urological Pathology (ISUP) recently launched a reference image database supervised by experts. A panel of 24 international experts in prostate pathology independently reviewed microphotographs of 90 cases of prostate needle biopsies with cancer. A linear weighted kappa of 0.67 (95% confidence interval 0.62-0.72) was achieved, and consensus was reached in 50 of the 90 cases. The interobserver weighted kappa varied from 0.48 to 0.89. The highest level of agreement was seen for Gleason score (GS) 3 + 3 = 6 (ISUP grade 1), while higher grades and particularly GS 4 + 3 = 7 (ISUP grade 3) showed considerable disagreement. Once a two-thirds majority was reached, images were moved automatically into a public database available to all ISUP members at www.isupweb.org. Non-members are able to access a limited number of cases. Conclusions: It is anticipated that the database will assist pathologists to calibrate their grading and, hence, decrease interobserver variability. It will also help to identify instances where definitions of grades need to be clarified.
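    For readers unfamiliar with the agreement statistic quoted above, a linearly weighted Cohen's kappa for a pair of raters can be computed as follows. This is a generic two-rater sketch with made-up labels; the study's panel statistic was of course computed over 24 raters:

```python
import numpy as np

def linear_weighted_kappa(r1, r2, n_cat):
    """Cohen's kappa with linear disagreement weights for two raters.
    r1, r2: sequences of integer category labels in [0, n_cat).
    Weight |i - j| / (n_cat - 1) penalizes disagreements in proportion
    to how many grade steps apart the two ratings are."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    observed = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()                       # joint frequencies
    expected = np.outer(observed.sum(axis=1),        # product of marginals
                        observed.sum(axis=0))
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) / (n_cat - 1)                  # linear disagreement weights
    return 1.0 - (w * observed).sum() / (w * expected).sum()
```

    Perfect agreement yields kappa = 1; chance-level agreement yields kappa near 0, so values such as the reported 0.67 indicate substantial but imperfect concordance.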

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
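    Running the CUDA kernels requires a GPU, but the data-parallel structure that makes this workload GPU-friendly (one independent response calculation per pixel) can be sketched with plain NumPy. The Gaussian response below is a hypothetical toy model, not the detailed induced-current model of the actual simulator; in the real code the equivalent loop body would be a Numba @cuda.jit kernel executed by one GPU thread per pixel:

```python
import numpy as np

def induced_current(pixel_x, charge_x, q=1.0, sigma=0.5):
    """Toy model: the signal a set of drifting point charges induces on a
    row of pixels, taken as a Gaussian in the charge-to-pixel distance.
    Every pixel's value is computed independently, which is exactly the
    structure that maps one-thread-per-pixel onto a GPU."""
    d = pixel_x[:, None] - charge_x[None, :]          # (n_pixels, n_charges)
    return q * np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1)

pixels = np.linspace(0.0, 10.0, 1000)   # 10^3 pixel positions, as in the abstract
charges = np.array([2.0, 5.0, 7.5])     # hypothetical drifting-charge positions
currents = induced_current(pixels, charges)
```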