
    An exploration of the language within Ofsted reports and their influence on primary school performance in mathematics: a mixed methods critical discourse analysis

    This thesis contributes to the understanding of the language of Ofsted reports, their similarity to one another, and associations between different terms used within ‘areas for improvement’ sections and subsequent outcomes for pupils. The research responds to concerns from serving headteachers that Ofsted reports are overly similar, do not capture the unique story of their school, and are unhelpful for improvement. In seeking to answer ‘how similar are Ofsted reports?’, the study uses two tools, plagiarism-detection software (Turnitin) and a discourse-analysis tool (NVivo), to identify trends within and across a large corpus of reports. The approach is based on critical discourse analysis (Van Dijk, 2009; Fairclough, 1989) but shaped in the form of practitioner enquiry, seeking power in the form of impact on pupils and practitioners rather than a more traditional, sociological application of the method. The research found that in 2017, primary school section 5 Ofsted reports had more than half of their content exactly duplicated within other primary school inspection reports published that same year. Discourse analysis showed the quality assurance process overrode variables such as inspector designation, gender, or team size, leading to three distinct patterns of duplication: block duplication, self-referencing, and template writing. The most unique part of a report was found to be the ‘area for improvement’ section, which was tracked to externally verified outcomes for pupils using terms linked to ‘mathematics’. Schools required to improve mathematics in their areas for improvement improved progress and attainment in mathematics significantly more than national rates. These findings indicate that there was a positive correlation between the inspection reporting process and a beneficial impact on pupil outcomes in mathematics, and that the significant similarity of one report to another had no bearing on the usefulness of the report for school improvement purposes within this corpus.
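The duplication measurement behind the ‘how similar are Ofsted reports’ question can be illustrated with a minimal sketch. This is not Turnitin's matching algorithm; it is a hypothetical sentence-level approximation, with made-up report snippets, of what "content exactly duplicated within other reports" means:

```python
# Hypothetical sketch: fraction of a report's sentences that appear verbatim
# in other reports of a corpus (not Turnitin's actual matching algorithm).

def duplication_fraction(report, corpus):
    """Share of sentences in `report` duplicated exactly elsewhere in `corpus`."""
    sentences = [s.strip() for s in report.split(".") if s.strip()]
    other = set()
    for doc in corpus:
        if doc is not report:
            other.update(s.strip() for s in doc.split(".") if s.strip())
    matched = sum(1 for s in sentences if s in other)
    return matched / len(sentences)

reports = [
    "Pupils behave well. Leaders have high expectations. Outcomes are improving.",
    "Pupils behave well. Governors know the school. Outcomes are improving.",
]
print(duplication_fraction(reports[0], reports[1:]))  # 2 of 3 sentences match
```

A corpus-level figure like the thesis's "more than half" would come from averaging this fraction over every report in the year's corpus.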

    An iterative warping and clustering algorithm to estimate multiple wave-shape functions from a nonstationary oscillatory signal

    Nonsinusoidal oscillatory signals are everywhere. In practice, the nonsinusoidal oscillatory pattern, modeled as a 1-periodic wave-shape function (WSF), might vary from cycle to cycle. When there are finitely many different WSFs, $s_1,\ldots,s_K$, such that the WSF jumps suddenly from one to another, the different WSFs and the jumps encode useful information. We present an iterative warping and clustering algorithm to estimate $s_1,\ldots,s_K$ from a nonstationary oscillatory signal with time-varying amplitude and frequency, and hence the change points of the WSFs. The algorithm is a novel combination of time-frequency analysis, singular value decomposition entropy and vector spectral clustering. We demonstrate the efficiency of the proposed algorithm with simulated and real signals, including voice, arterial blood pressure, electrocardiogram and accelerometer signals. Moreover, we provide a mathematical justification of the algorithm under the assumption that the amplitude and frequency of the signal are slowly time-varying and there are finitely many change points that model sudden changes from one wave-shape function to another. Comment: 39 pages, 11 figures.
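The clustering step can be sketched in miniature. This is a hypothetical simplification under strong assumptions: cycles are assumed already segmented, warping is reduced to linear resampling to a common length, and the time-frequency and SVD-entropy machinery of the full algorithm is omitted; only the spectral-clustering-by-shape idea remains:

```python
import numpy as np

# Hypothetical sketch of the clustering step: warp each detected cycle to a
# common length, then spectrally cluster the cycles by shape. The paper's full
# algorithm also uses time-frequency analysis and SVD entropy, omitted here.

def cluster_cycles(cycles, length=64):
    # Crude "warping": resample every cycle onto a common grid
    warped = np.array([np.interp(np.linspace(0, 1, length),
                                 np.linspace(0, 1, len(c)), c) for c in cycles])
    warped -= warped.mean(axis=1, keepdims=True)
    warped /= np.linalg.norm(warped, axis=1, keepdims=True)
    affinity = np.exp(-(1 - warped @ warped.T))   # similarity from correlation
    laplacian = np.diag(affinity.sum(axis=1)) - affinity
    _, vecs = np.linalg.eigh(laplacian)
    fiedler = vecs[:, 1]                          # second-smallest eigenvector
    return (fiedler > 0).astype(int)              # two shape clusters

t = np.linspace(0, 1, 80, endpoint=False)
cycles = [np.sin(2*np.pi*t)]*3 + [np.sign(np.sin(2*np.pi*t))]*3
labels = cluster_cycles(cycles)
print(labels)  # sine cycles land in one cluster, square-wave cycles in the other
```

Splitting on the sign of the Fiedler vector gives exactly two clusters; the paper's vector spectral clustering handles a general number K of wave shapes.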

    Operational meanings of a generalized conditional expectation in quantum metrology

    A unifying formalism of generalized conditional expectations (GCEs) for quantum mechanics has recently emerged, but its physical implications regarding the retrodiction of a quantum observable remain controversial. To address the controversy, here I offer operational meanings for a version of the GCEs in the context of quantum parameter estimation. When a quantum sensor is corrupted by decoherence, the GCE is found to relate the operator-valued optimal estimators before and after the decoherence. Furthermore, the error increase, or regret, caused by the decoherence is shown to be equal to a divergence between the two estimators. The real weak value as a special case of the GCE plays the same role in suboptimal estimation -- its divergence from the optimal estimator is precisely the regret for not using the optimal measurement. For an application of the GCE, I show that it enables the use of dynamic programming for designing a controller that minimizes the estimation error. For the frequentist setting, I show that the GCE leads to a quantum Rao-Blackwell theorem, which offers significant implications for quantum metrology and thermal-light sensing in particular. These results give the GCE and the associated divergence a natural, useful, and incontrovertible role in quantum decision and control theory. Comment: 17 pages, 3 figures. v4: polished everything and added more references.
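The classical counterpart of the Rao-Blackwell theorem that the paper lifts to the quantum setting is easy to demonstrate numerically. The following is a hypothetical classical illustration only (Bernoulli data, a crude estimator conditioned on the sufficient statistic), not the paper's quantum construction:

```python
import numpy as np

# Hypothetical classical analogue of the Rao-Blackwell step: conditioning a
# crude unbiased estimator on a sufficient statistic never increases its MSE.

rng = np.random.default_rng(0)
p, n, trials = 0.3, 20, 20000
X = rng.random((trials, n)) < p          # Bernoulli(p) samples, one row per trial
crude = X[:, 0].astype(float)            # delta = X_1: unbiased but noisy
rb = X.mean(axis=1)                      # E[X_1 | sum(X)] = sample mean
mse_crude = ((crude - p)**2).mean()
mse_rb = ((rb - p)**2).mean()
print(mse_crude, mse_rb)  # the Rao-Blackwellized estimator has much lower MSE
```

In the quantum version the conditioning step is played by the GCE, and the MSE gap is the divergence-valued regret described in the abstract.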

    Time-varying STARMA models by wavelets

    The spatio-temporal autoregressive moving average (STARMA) model is frequently used in studies of multivariate time series data, where the assumption of stationarity is important but not always guaranteed in practice. One way to proceed is to consider locally stationary processes. In this paper we propose a time-varying spatio-temporal autoregressive and moving average (tvSTARMA) model based on the local stationarity assumption. The time-varying parameters are expanded as linear combinations of wavelet bases, and procedures are proposed to estimate the coefficients. Simulations and an application to historical daily precipitation records from Midwestern states of the USA are presented.
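The wavelet-expansion idea can be sketched on the simplest possible case. This is a hypothetical one-dimensional illustration, not the tvSTARMA estimator itself: a single time-varying AR(1) coefficient is expanded in a two-function Haar basis and fitted by least squares, whereas the paper's model adds spatial lags and moving-average terms:

```python
import numpy as np

# Hypothetical sketch: expand a time-varying AR(1) coefficient a(t) in a Haar
# wavelet basis and estimate the expansion coefficients by least squares.

rng = np.random.default_rng(0)
n = 512
t = np.arange(n) / n
a_true = np.where(t < 0.5, 0.2, 0.7)          # piecewise-constant AR coefficient
x = np.zeros(n)
for i in range(1, n):
    x[i] = a_true[i] * x[i-1] + rng.normal()

# Haar design: the father wavelet plus one mother wavelet on [0, 1)
basis = np.column_stack([np.ones(n), np.where(t < 0.5, 1.0, -1.0)])
design = basis[1:] * x[:-1, None]             # regressors basis_k(t_i) * x_{i-1}
coef, *_ = np.linalg.lstsq(design, x[1:], rcond=None)
a_hat = basis @ coef                          # reconstructed a(t)
print(a_hat[100], a_hat[400])                 # close to 0.2 and 0.7
```

Because the time-varying coefficient enters the regression linearly through the basis, the same least-squares device extends to the full spatio-temporal parameter set.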

    Rational-approximation-based model order reduction of Helmholtz frequency response problems with adaptive finite element snapshots

    We introduce several spatially adaptive model order reduction approaches tailored to non-coercive elliptic boundary value problems, specifically, parametric-in-frequency Helmholtz problems. The offline information is computed by means of adaptive finite elements, so that each snapshot lives in a different discrete space that resolves the local singularities of the analytical solution and is adjusted to the considered frequency value. A rational surrogate is then assembled adopting either a least squares or an interpolatory approach, yielding a function-valued version of the standard rational interpolation method (V-SRI) and the minimal rational interpolation method (MRI). In the context of building an approximation for linear or quadratic functionals of the Helmholtz solution, we perform several numerical experiments to compare the proposed methodologies. Our simulations show that, for interior resonant problems (whose singularities are encoded by poles), the V-SRI and MRI methods work comparably well. Instead, when dealing with exterior scattering problems, whose frequency response is mostly smooth, the V-SRI method seems to be the best performing one.
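The least-squares rational surrogate idea can be illustrated on a scalar toy problem. This is a hypothetical sketch, not the paper's V-SRI or MRI implementation: it fits p(z)/q(z) to frequency-response samples via Levy's linearization, minimizing Σ|p(z_i) − H_i q(z_i)|² with the constant term of q fixed to 1:

```python
import numpy as np

# Hypothetical sketch of a least-squares rational surrogate in one parameter:
# fit H(z) ~ p(z)/q(z) to response samples via the linearized problem
# min sum |p(z_i) - H_i q(z_i)|^2, normalizing q(0-th coefficient) = 1.

def rational_fit(z, H, deg_p=2, deg_q=2):
    Vp = np.vander(z, deg_p + 1, increasing=True)
    Vq = np.vander(z, deg_q + 1, increasing=True)
    # Unknowns: coefficients of p, then coefficients of q beyond the fixed 1
    A = np.hstack([Vp, -(H[:, None]) * Vq[:, 1:]])
    b = H * Vq[:, 0]
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = c[:deg_p + 1]
    q = np.concatenate([[1.0], c[deg_p + 1:]])
    return lambda s: np.polyval(p[::-1], s) / np.polyval(q[::-1], s)

z = np.linspace(0.5, 2.0, 40)
H = 1.0 / (z**2 - 0.2*z + 1.1)               # toy resonant frequency response
surrogate = rational_fit(z, H)
print(abs(surrogate(1.3) - 1.0/(1.3**2 - 0.2*1.3 + 1.1)))  # tiny residual
```

The surrogate reproduces this response essentially exactly because the toy H is itself rational of matching degree; the paper's function-valued setting replaces the scalar samples H_i with adaptive finite element snapshots.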

    Countermeasures for the majority attack in blockchain distributed systems

    Blockchain technology is considered one of the most important computing paradigms since the Internet, owing to unique characteristics that make it ideal for recording, verifying, and managing information from different transactions. Despite this, Blockchain faces various security problems, among the most important of which is the 51% or majority attack. This attack occurs when one or more miners take control of at least 51% of the hash power or computation in a network, so that a miner can arbitrarily manipulate and modify the information recorded in this technology. This work focused on designing and implementing strategies for detecting and mitigating majority (51%) attacks in a distributed Blockchain system, based on characterizing the behaviour of miners. To achieve this, the Hash Rate / Share of Bitcoin and Ethereum cryptocurrency miners was analysed and evaluated, followed by the design and implementation of a consensus protocol to control the computing power of miners. Subsequently, Machine Learning models were explored and evaluated for detecting Cryptojacking-type malicious software. Doctorate: Doctor en Ingeniería de Sistemas y Computación.
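The monitoring idea behind characterizing miner behaviour can be sketched minimally. This is a hypothetical illustration of the detection criterion only, with invented pool names; the thesis pairs such monitoring with a consensus protocol and ML-based cryptojacking detection:

```python
# Hypothetical sketch of the detection criterion: track each miner's share of
# recently mined blocks and flag any miner at or above majority hash power.

from collections import Counter

def flag_majority_miners(recent_blocks, threshold=0.51):
    """recent_blocks: list of miner IDs credited with each recent block."""
    counts = Counter(recent_blocks)
    total = len(recent_blocks)
    return {miner: n / total for miner, n in counts.items() if n / total >= threshold}

window = ["pool_a"] * 55 + ["pool_b"] * 30 + ["pool_c"] * 15
print(flag_majority_miners(window))  # {'pool_a': 0.55}
```

In practice the window would slide over the chain head, and a flagged miner would trigger the mitigation side of the protocol (for example, temporarily discounting that miner's blocks) rather than just a report.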

    On Monte Carlo methods for the Dirichlet process mixture model, and the selection of its precision parameter prior

    Two issues commonly faced by users of Dirichlet process mixture models are: 1) how to appropriately select a hyperprior for its precision parameter alpha, and 2) the typically slow mixing of the MCMC chain produced by conditional Gibbs samplers based on its stick-breaking representation, as opposed to marginal collapsed Gibbs samplers based on the Polya urn, which have smaller integrated autocorrelation times. In this thesis, we analyse the most common approaches to hyperprior selection for alpha, we identify their limitations, and we propose a new methodology to overcome them. To address slow mixing, we first revisit three label-switching Metropolis moves from the literature (Hastie et al., 2015; Papaspiliopoulos and Roberts, 2008), improve them, and introduce a fourth move. Secondly, we revisit two i.i.d. sequential importance samplers which operate in the collapsed space (Liu, 1996; S. N. MacEachern et al., 1999), and we develop a new sequential importance sampler for the stick-breaking parameters of Dirichlet process mixtures, which operates in the stick-breaking space and which has minimal integrated autocorrelation time. Thirdly, we introduce the i.i.d. transcoding algorithm which, conditional on a partition of the data, can infer which specific stick in the stick-breaking construction each observation originated from. We use it as a building block to develop the transcoding sampler, which removes the need for label-switching Metropolis moves in the conditional stick-breaking sampler: it uses the better performing marginal sampler (or any other sampler) to drive the MCMC chain, and augments its exchangeable partition posterior with conditional i.i.d. stick-breaking parameter inferences after the fact, thereby inheriting its shorter autocorrelation times.
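The stick-breaking representation that the conditional samplers operate on is short enough to state in code. This is the standard truncated construction, shown as a hypothetical sketch for context rather than any of the thesis's samplers:

```python
import numpy as np

# The standard stick-breaking construction of Dirichlet process weights:
# w_k = v_k * prod_{j<k} (1 - v_j) with v_k ~ Beta(1, alpha), truncated at K.

def stick_breaking(alpha, K, rng):
    v = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining

rng = np.random.default_rng(1)
w = stick_breaking(alpha=2.0, K=50, rng=rng)
print(w.sum())  # close to 1; the leftover mass is prod(1 - v)
```

The transcoding algorithm described above works in the opposite direction: given a partition of the data, it recovers which stick index each observation came from, so that stick-breaking parameter inferences can be attached to a marginal sampler's output after the fact.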

    Nonparametric Two-Sample Test for Networks Using Joint Graphon Estimation

    This paper focuses on the comparison of networks on the basis of statistical inference. For that purpose, we rely on smooth graphon models as a nonparametric modeling strategy that is able to capture complex structural patterns. The graphon itself can be viewed more broadly as a density or intensity function on networks, making the model a natural choice for comparison purposes. Extending graphon estimation towards modeling multiple networks simultaneously consequently provides substantial information about the (dis-)similarity between networks. Fitting such a joint model - which can be accomplished by applying an EM-type algorithm - provides a joint graphon estimate plus a corresponding prediction of the node positions for each network. In particular, it entails a generalized network alignment, where nearby nodes play similar structural roles in their respective domains. Given that, we construct a chi-squared test on equivalence of network structures. Simulation studies and real-world examples support the applicability of our network comparison strategy. Comment: 25 pages, 6 figures.
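A heavily simplified version of the testing idea can be written down directly. This hypothetical sketch assumes the node block assignment is known and shared, whereas the paper estimates node positions through the joint graphon fit; it then compares per-block edge counts of two networks with a pooled chi-squared statistic:

```python
import numpy as np

# Hypothetical simplification: given a common block assignment, compare the
# per-block edge counts of two networks with a pooled chi-squared statistic.
# The paper instead estimates node positions via a joint smooth-graphon fit.

def sample_sbm(P, rng):
    U = rng.random(P.shape)
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(int)
    return A + A.T

def blockwise_chi2(A1, A2, blocks, B):
    stat, dof = 0.0, 0
    for a in range(B):
        for b in range(a, B):
            mask = np.outer(blocks == a, blocks == b)
            if a == b:
                mask = np.triu(mask, 1)        # count each within-block pair once
            n = mask.sum()
            e1, e2 = A1[mask].sum(), A2[mask].sum()
            p = (e1 + e2) / (2 * n)            # pooled edge probability
            if 0 < p < 1:
                stat += ((e1 - n*p)**2 + (e2 - n*p)**2) / (n * p * (1 - p))
                dof += 1
    return stat, dof

rng = np.random.default_rng(0)
blocks = np.repeat([0, 1], 30)
P = np.where(np.equal.outer(blocks, blocks), 0.5, 0.1)
A1, A2 = sample_sbm(P, rng), sample_sbm(P, rng)
stat, dof = blockwise_chi2(A1, A2, blocks, 2)
print(stat, dof)  # under equal structure, stat is roughly chi-squared with dof d.o.f.
```

Replacing the known blocks with EM-estimated node positions, and the blockwise means with a smooth graphon estimate, recovers the shape of the paper's test.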

    Projected Multi-Agent Consensus Equilibrium (PMACE) for Distributed Reconstruction with Application to Ptychography

    Multi-Agent Consensus Equilibrium (MACE) formulates an inverse imaging problem as a balance among multiple update agents such as data-fitting terms and denoisers. However, each such agent operates on a separate copy of the full image, leading to redundant memory use and slow convergence when each agent affects only a small subset of the full image. In this paper, we extend MACE to Projected Multi-Agent Consensus Equilibrium (PMACE), in which each agent updates only a projected component of the full image, thus greatly reducing memory use for some applications. We describe PMACE in terms of an equilibrium problem and an equivalent fixed point problem, and show that in most cases the PMACE equilibrium is not the solution of an optimization problem. To demonstrate the value of PMACE, we apply it to the problem of ptychography, in which a sample is reconstructed from the diffraction patterns resulting from coherent X-ray illumination at multiple overlapping spots. In our PMACE formulation, each spot corresponds to a separate data-fitting agent, with the final solution found as an equilibrium among all the agents. Our results demonstrate that the PMACE reconstruction algorithm generates more accurate reconstructions at a lower computational cost than existing ptychography algorithms when the spots are sparsely sampled.
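The consensus-equilibrium mechanism is easiest to see on a scalar toy problem. The following is a hypothetical sketch of plain MACE with two quadratic agents (the PMACE extension additionally restricts each agent to its own projected patch of the image); the equilibrium is reached by Mann iteration of the reflected agent and averaging operators:

```python
import numpy as np

# Hypothetical toy sketch of MACE: two quadratic "agents" are reconciled by an
# averaging operator G, and the equilibrium solves F(w*) = G(w*) via Mann
# iteration of the reflected operators (2G - I)(2F - I).

def prox(w, targets, sigma2=1.0):
    # Proximal map of f_i(x) = (x - t_i)^2 / 2: each agent pulls toward its data
    return (w + sigma2 * targets) / (1.0 + sigma2)

targets = np.array([1.0, 3.0])       # data seen by agent 1 and agent 2
w = np.zeros(2)                      # one working copy of the state per agent
rho = 0.5
for _ in range(200):
    v = 2 * prox(w, targets) - w     # reflected agent operator (2F - I)
    u = 2 * v.mean() - v             # reflected averaging operator (2G - I)
    w = (1 - rho) * w + rho * u      # Mann iteration toward the fixed point
x_star = prox(w, targets).mean()
print(x_star)  # the equilibrium balances both agents at 2.0
```

In the ptychography application, each diffraction spot plays the role of one such agent, acting only on the pixels its probe illuminates, which is exactly the redundancy PMACE's projections remove.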

    Diffusion Maps for Group-Invariant Manifolds

    In this article, we consider the manifold learning problem when the data set is invariant under the action of a compact Lie group $K$. Our approach consists in augmenting the data-induced graph Laplacian by integrating over the orbits of the existing data points under the action of $K$. We prove that this $K$-invariant Laplacian operator $L$ can be diagonalized by using the unitary irreducible representation matrices of $K$, and we provide an explicit formula for computing the eigenvalues and eigenvectors of $L$. Moreover, we show that the normalized Laplacian operator $L_N$ converges to the Laplace-Beltrami operator of the data manifold with an improved convergence rate, where the improvement grows with the dimension of the symmetry group $K$. This work extends the steerable graph Laplacian framework of Landa and Shkolnisky from the case of $\operatorname{SO}(2)$ to arbitrary compact Lie groups.
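The orbit-integration step can be sketched with a finite group standing in for a compact Lie group. This hypothetical analogue uses the cyclic shift group Z/4 acting on vectors (so "integration over $K$" becomes a finite sum over shifts), not the paper's irreducible-representation machinery:

```python
import numpy as np

# Hypothetical finite-group analogue: for data invariant under a group K
# (here cyclic shifts Z/4 acting on 4-vectors), average the Gaussian kernel
# over each point's group orbit before forming the graph Laplacian.

def invariant_kernel(X, eps=1.0):
    n, d = X.shape
    K = np.zeros((n, n))
    for shift in range(d):                          # sum over the group elements
        Xg = np.roll(X, shift, axis=1)
        D2 = ((X[:, None, :] - Xg[None, :, :])**2).sum(-1)
        K += np.exp(-D2 / eps)
    return K / d                                    # orbit-averaged affinity

X = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],                     # a shifted copy: same orbit
              [5., 5., 5., 5.]])                    # shift-invariant outlier
K = invariant_kernel(X)
print(K[0, 1], K[0, 2])  # the first pair is as similar as a point to itself
```

The orbit average makes points that differ only by a group action maximally similar, which is the mechanism behind the improved convergence rate: each data point effectively contributes its whole orbit to the graph.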