
    Superposition as memory: unlocking quantum automatic complexity

    Imagine a lock with two states, "locked" and "unlocked", which may be manipulated using two operations, called 0 and 1. Moreover, the only way to (with certainty) unlock using four operations is to do them in the sequence 0011, i.e., 0^n 1^n where n = 2. In this scenario one might think that the lock needs to be in certain further states after each operation, so that there is some memory of what has been done so far. Here we show that this memory can be entirely encoded in superpositions of the two basic states "locked" and "unlocked", where, as dictated by quantum mechanics, the operations are given by unitary matrices. Moreover, we show using the Jordan--Schur lemma that a similar lock is not possible for n = 60. We define the semi-classical quantum automatic complexity Q_s(x) of a word x as the infimum in lexicographic order of those pairs of nonnegative integers (n, q) such that there is a subgroup G of the projective unitary group PU(n) with |G| ≤ q and with U_0, U_1 ∈ G such that, in terms of a standard basis {e_k} and with U_z = ∏_k U_{z(k)}, we have U_x e_1 = e_2 and U_y e_1 ≠ e_2 for all y ≠ x with |y| = |x|. We show that Q_s is unbounded and not constant for strings of a given length. In particular, Q_s(0^2 1^2) ≤ (2, 12) < (3, 1) ≤ Q_s(0^{60} 1^{60}) and Q_s(0^{120}) ≤ (2, 121).
    Comment: Lecture Notes in Computer Science, UCNC (Unconventional Computation and Natural Computation) 201
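    The acceptance condition in this definition is straightforward to check numerically. The sketch below is an illustration of the condition only, not the paper's construction: the gate choice U_0 = NOT, U_1 = identity is a deliberately naive, hypothetical example, and the word's unitaries are applied left to right. It enumerates all length-4 binary words and tests which ones send e_1 to e_2 (up to phase), showing that this classical-looking choice fails to accept a unique word, which is why nontrivial unitaries acting on superpositions are needed.

    ```python
    import numpy as np
    from itertools import product

    def word_image(word, gates):
        """Apply the unitary for each symbol of `word`, left to right, to e_1."""
        v = np.array([1.0, 0.0])  # standard basis vector e_1
        for s in word:
            v = gates[s] @ v
        return v

    def accepted_words(length, gates):
        """All binary words w of the given length with U_w e_1 = e_2 up to phase."""
        e2 = np.array([0.0, 1.0])
        hits = []
        for w in product('01', repeat=length):
            v = word_image(w, gates)
            if abs(abs(np.vdot(e2, v)) - 1.0) < 1e-9:  # projective equality
                hits.append(''.join(w))
        return hits

    # Hypothetical naive gate choice: U_0 = NOT gate, U_1 = identity.
    gates = {'0': np.array([[0.0, 1.0], [1.0, 0.0]]), '1': np.eye(2)}
    hits = accepted_words(4, gates)
    print(hits)  # every word with an odd number of 0s is accepted: 8 words, not just 0011
    ```

    With these gates, U_w e_1 = e_2 exactly when w contains an odd number of 0s, so eight length-4 words are accepted and 0011 (which has two 0s) is not even among them; a lock in the paper's sense requires gates for which the accepted set is the singleton {0011}.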

    A Multicenter Examination and Strategic Revisions of the Yale Global Tic Severity Scale

    Objective To examine the internal consistency and distribution of the Yale Global Tic Severity Scale (YGTSS) scores to inform modification of the measure. Methods This cross-sectional study included 617 participants with a tic disorder (516 children and 101 adults), who completed an age-appropriate diagnostic interview and the YGTSS to evaluate tic symptom severity. The distributions of scores on YGTSS dimensions were evaluated for normality and skewness. For dimensions that were skewed across motor and phonic tics, a modified Delphi consensus process was used to revise selected anchor points. Results Children and adults had similar clinical characteristics, including tic symptom severity. All participants were examined together. Strong internal consistency was identified for the YGTSS Motor Tic score (α = 0.80), YGTSS Phonic Tic score (α = 0.87), and YGTSS Total Tic score (α = 0.82). The YGTSS Total Tic and Impairment scores exhibited relatively normal distributions. Several subscales and individual item scales departed from a normal distribution. Higher scores were more often used on the Motor Tic Number, Frequency, and Intensity dimensions and the Phonic Tic Frequency dimension. By contrast, lower scores were more often used on Motor Tic Complexity and Interference, and Phonic Tic Number, Intensity, Complexity, and Interference. Conclusions The YGTSS exhibits good internal consistency across children and adults. The parallel findings across Motor and Phonic Frequency, Complexity, and Interference dimensions prompted minor revisions to the anchor point description to promote use of the full range of scores in each dimension. Specific minor revisions to the YGTSS Phonic Tic Symptom Checklist were also proposed

    Normal Sequences with Non-Maximal Automatic Complexity

    This paper examines Automatic Complexity, a complexity notion introduced by Shallit and Wang in 2001 [Jeffrey O. Shallit and Ming-wei Wang, 2001]. We demonstrate that there exists a normal sequence T such that I(T) = 0 and S(T) ≤ 1/2, where I(T) and S(T) are the lower and upper automatic complexity rates of T respectively. We furthermore show that there exists a Champernowne sequence C, i.e. a sequence formed by concatenating all strings of length one, followed by all strings of length two and so on, such that S(C) ≤ 2/3

    ParadisEO-MO-GPU: a Framework for Parallel GPU-based Local Search Metaheuristics

    In this paper, we propose a pioneering framework called ParadisEO-MO-GPU for the reusable design and implementation of parallel local search metaheuristics (S-Metaheuristics) on Graphics Processing Units (GPU). We revisit the ParadisEO-MO software framework to allow its use on GPU accelerators, focusing on the parallel iteration-level model, the major parallel model for S-Metaheuristics, which consists of the parallel exploration of the neighborhood of a problem solution. The challenge is, on the one hand, to rethink the design and implementation of this model to optimize data transfer between the CPU and the GPU, and on the other hand, to make the GPU as transparent as possible for users, minimizing their involvement in its management. We propose solutions to this challenge as an extension of the ParadisEO framework. The first release of the new GPU-based ParadisEO framework has been evaluated on the permuted perceptron problem. The preliminary results are convincing, both in terms of flexibility and ease of reuse at implementation time, and in terms of efficiency at execution on the GPU

    Camera Calibration without Camera Access -- A Robust Validation Technique for Extended PnP Methods

    A challenge in image-based metrology and forensics is intrinsic camera calibration when the camera that was used is unavailable. The unavailability raises two questions: how to find the projection model that describes the camera, and how to detect incorrect models. In this work, we use off-the-shelf extended PnP methods to find the model from 2D-3D correspondences, and propose a method for model validation. The most common strategy for evaluating a projection model is comparing different models' residual variances; however, this naive strategy cannot distinguish whether the projection model is underfitted or overfitted. To this end, we model the residual errors for each correspondence, individually scale all residuals using a predicted variance, and test whether the new residuals are drawn from a standard normal distribution. We demonstrate the effectiveness of our proposed validation in experiments on synthetic data, simulating 2D detection and Lidar measurements. Additionally, we provide experiments using data from an actual scene and compare calibrations with and without camera access. Finally, we use our method to validate annotations in MegaDepth
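    The validation idea, scaling each residual by its predicted standard deviation and testing the scaled residuals for standard normality, can be sketched as follows. This is a minimal illustration of that general recipe, not the paper's implementation: the synthetic data, the plain one-sample Kolmogorov-Smirnov test, and the large-sample 5% critical value 1.36/sqrt(n) are all assumptions made for the example.

    ```python
    import numpy as np
    from math import erf, sqrt

    def norm_cdf(x):
        """CDF of the standard normal distribution."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def ks_statistic(z):
        """Kolmogorov-Smirnov distance between the empirical CDF of z and N(0,1)."""
        z = np.sort(np.asarray(z, dtype=float))
        n = len(z)
        cdf = np.array([norm_cdf(v) for v in z])
        upper = np.max(np.arange(1, n + 1) / n - cdf)
        lower = np.max(cdf - np.arange(0, n) / n)
        return max(upper, lower)

    def model_plausible(residuals, predicted_std, threshold=None):
        """Scale each residual by its predicted std and accept the projection
        model if the scaled residuals are close, in KS distance, to N(0,1)."""
        z = np.asarray(residuals) / np.asarray(predicted_std)
        if threshold is None:
            threshold = 1.36 / sqrt(len(z))  # approx. 5% KS critical value
        return ks_statistic(z) < threshold

    rng = np.random.default_rng(0)
    sigma = rng.uniform(0.5, 2.0, size=500)  # per-correspondence predicted std
    good = rng.normal(0.0, sigma)            # residuals consistent with the model
    bad = 3.0 * good                         # model underestimates the variance
    print(model_plausible(good, sigma), model_plausible(bad, sigma))
    ```

    For well-calibrated residuals the test typically passes, while inflating them threefold, mimicking a model whose predicted variances are too small, is reliably rejected, which is exactly the miscalibration that comparing raw residual variances cannot detect.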

    Measuring the difficulty of text translation: The combination of text-focused and translator-oriented approaches

    This paper explores the impact of text complexity on translators’ subjective perception of translation difficulty and on their cognitive load. Twenty-six MA translation students from a UK university were asked to translate three English texts of differing complexity into Chinese. Their eye movements were recorded by an eye-tracker, and their cognitive load was self-assessed with a Likert scale before translation and NASA-TLX scales after translation. The results show that: (i) the intrinsic complexity measured by readability, word frequency and non-literalness was in line with the informants’ subjective assessment of translation difficulty; (ii) moderate and positive correlations existed between most items in the self-assessments and the indicators (fixation and saccade durations) obtained by the eye-tracking measurements; and (iii) the informants’ cognitive load as indicated by fixation and saccade durations (but not pupil size) increased significantly in two of the three texts along with the increase in source text complexity

    Different judgments about visual textures invoke different eye movement patterns

    Top-down influences on the guidance of the eyes are generally modeled as modulating influences on bottom-up salience maps. Interested in task-driven influences on how, rather than where, the eyes are guided, we expected differences in eye movement parameters accompanying beauty and roughness judgments about visual textures. Participants judged textures for beauty and roughness, while their gaze-behavior was recorded. Eye movement parameters differed between the judgments, showing task effects on how people look at images. Similarity in the spatial distribution of attention suggests that differences in the guidance of attention are non-spatial, possibly feature-based. During the beauty judgment, participants fixated on patches that were richer in color information, further supporting the idea that differences in the guidance of attention are feature-based. A finding of shorter fixation durations during beauty judgments may indicate that extraction of the relevant features is easier during this judgment. This finding is consistent with a more ambient scanning mode during this judgment. The differences in eye movement parameters during different judgments about highly repetitive stimuli highlight the need for models of eye guidance to go beyond salience maps, to include the temporal dynamics of eye guidance

    The determination of measures of software reliability

    Measurement of software reliability was carried out during the development of database software for a multi-sensor tracking system. The failure ratio and failure rate were found to be consistent measures. Trend lines could be established from these measurements that provide good visualization of progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with individual run submission rather than with the code proper. Possible applications of these findings for line management, project managers, functional management, and regulatory agencies are discussed. Steps for simplifying the measurement process and for using these data to predict operational software reliability are outlined