    Le "remodelage" des terres en Martinique : modification des propriétés de "ferrisols" et d'andosols cultivés en canne à sucre

    The leveling of hills, practiced in Martinique since about 1970 to facilitate the mechanization of crops, brings about physicochemical transformations of the soils. In this article the authors compare the properties of ferrallitic soils and andosols whose A horizons were removed by this treatment with those of non-remodeled controls.

    The cooking task: making a meal of executive functions

    Current standardized neuropsychological tests may fail to accurately capture real-world executive deficits. We developed a computer-based Cooking Task (CT) assessment of executive functions and trialed the measure with a normative group before use with a head-injured population. Forty-six participants completed the computerized CT and subtests from standardized neuropsychological tasks, including the Tower and Sorting Tests of executive function from the Delis-Kaplan Executive Function System (D-KEFS) and the Cambridge Prospective Memory Test (CAMPROMPT), in order to examine whether standardized executive function tasks predicted performance on measurement indices from the CT. Findings showed that verbal comprehension, rule detection and prospective memory contributed to measures of prospective planning accuracy and strategy implementation on the CT. Results also showed that the functions necessary for cooking efficacy differ as an effect of task demands (difficulty levels). Performance on rule detection, strategy implementation and flexible thinking executive function measures contributed to accuracy on the CT. These findings raise questions about the functions captured by present standardized tasks, particularly at varying levels of difficulty and during dual-task performance. Our preliminary findings also indicate that CT measures can effectively distinguish between executive function and Full Scale IQ abilities. Results of the present study indicate that the CT shows promise as an ecologically valid measure of executive function for future use with a head-injured population and indexes selective executive functions captured by standardized tests.

    Ideally HAWKward: How Not to Break Module-LIP

    The module-Lattice Isomorphism Problem (module-LIP) was introduced by Ducas et al. (ASIACRYPT 2022) in~\cite{HAWK:cryptoeprint:2022/1155}, and is used within the signature scheme and NIST candidate HAWK. In~\cite{modLIPtotallyreal}, Mureau et al. (EUROCRYPT 2024) pointed out that over certain number fields $F$, the problem can be reduced to enumerating the solutions of $x^2 + y^2 = q$ (where $q \in \mathcal{O}_F$ is given and $x, y \in \mathcal{O}_F$ are the unknowns). Moreover, one can always reduce to a similar equation that has only \textit{few} solutions. This key insight led to a heuristic polynomial-time algorithm for solving module-LIP on those specific instances. Yet this result does not threaten HAWK, for which the problem instead reduces to enumerating the solutions of $x^2 + y^2 + z^2 + t^2 = q$ (where $q \in \mathcal{O}_F$ is given and $x, y, z, t \in \mathcal{O}_F$ are the unknowns). We show that, in all likelihood, solving this equation requires enumerating a set that is \textit{too large} for the search to be feasible, thereby making a straightforward adaptation of the approach in~\cite{modLIPtotallyreal} irrelevant.
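
    As a rough illustration of the gap between the two equations, the toy Python sketch below counts representations over the plain rational integers rather than over a ring of integers $\mathcal{O}_F$ (which is what the attack actually requires), so it only conveys the counting intuition and is not the algorithm of either paper. By Jacobi's four-square theorem, the number of four-square representations of $q$ is $8\sum_{d \mid q,\, 4 \nmid d} d$, which grows roughly linearly in $q$, while two-square representations remain comparatively rare; the helper names below are purely illustrative.

        # Toy illustration over Z (NOT over a ring of integers O_F): count the
        # representations q = x^2 + y^2 versus q = x^2 + y^2 + z^2 + t^2.
        from math import isqrt

        def two_square_reps(q):
            """All (x, y) in Z^2 with x^2 + y^2 == q (signs included)."""
            sols = []
            for x in range(-isqrt(q), isqrt(q) + 1):
                r = q - x * x
                y = isqrt(r)
                if y * y == r:
                    sols.append((x, y))
                    if y != 0:
                        sols.append((x, -y))
            return sols

        def four_square_reps(q):
            """All (x, y, z, t) in Z^4 with x^2 + y^2 + z^2 + t^2 == q."""
            sols = []
            bound = isqrt(q)
            for x in range(-bound, bound + 1):
                for y in range(-bound, bound + 1):
                    rest = q - x * x - y * y
                    if rest >= 0:
                        for (z, t) in two_square_reps(rest):
                            sols.append((x, y, z, t))
            return sols

        # The four-square count quickly dwarfs the two-square count.
        for q in (5, 101, 1009):
            print(q, len(two_square_reps(q)), len(four_square_reps(q)))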

    Reducing the Number of Qubits in Quantum Information Set Decoding

    This paper presents an optimization of the memory cost of the quantum Information Set Decoding (ISD) algorithm proposed by Bernstein (PQCrypto 2010), obtained by combining Prange's ISD with Grover's quantum search. When the code has constant rate and length $n$, this algorithm essentially performs a quantum search which, at each iteration, solves a linear system of dimension $\mathcal{O}(n)$. The typical code lengths used in post-quantum public-key cryptosystems range from $10^3$ to $10^5$. Gaussian elimination, which was used in previous works, needs $\mathcal{O}(n^2)$ space to represent the matrix, resulting in millions or billions of (logical) qubits for these schemes. In this paper, we propose instead to use the algorithm for sparse matrix inversion of Wiedemann (IEEE Trans. Inf. Theory 1986). The interest of Wiedemann's method is that it relies only on the implementation of a matrix-vector product, where the matrix can be represented implicitly, which is the case here. We propose two main trade-offs, which we have fully implemented, tested on small instances, and benchmarked for larger instances. The first is a quantum circuit using $\mathcal{O}(n)$ qubits and $\mathcal{O}(n^3)$ Toffoli gates (like Gaussian elimination), with depth $\mathcal{O}(n^2 \log n)$. The second is a quantum circuit using $\mathcal{O}(n \log^2 n)$ qubits and $\mathcal{O}(n^3)$ gates in total, but only $\mathcal{O}(n^2 \log^2 n)$ Toffoli gates, and relies on a different representation of the search space. As an example, for the smallest Classic McEliece parameters, we estimate that the quantum Prange's algorithm can run with 18098 qubits, while previous works would have required at least half a million qubits.
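
    The key feature of Wiedemann's method, namely that the matrix is only ever accessed through matrix-vector products, can be illustrated classically. The Python sketch below is a minimal, simplified Wiedemann-style solver over a prime field $\mathbb{F}_p$: it builds the scalar sequence $u^\top A^i b$ through an arbitrary matvec callback, recovers the minimal polynomial of that sequence with Berlekamp-Massey, and reads off a solution of $Ax = b$. It is a classical toy under simplifying assumptions (nonsingular $A$, a few retries on degenerate random projections), not the quantum circuit proposed in the paper; all names are illustrative.

        import random

        def berlekamp_massey(s, p):
            """Connection polynomial c (with c[0] = 1) of the sequence s over F_p:
            sum_j c[j] * s[i - j] == 0 (mod p) for all i >= len(c) - 1."""
            C, B = [1], [1]
            L, m, b = 0, 1, 1
            for n in range(len(s)):
                d = s[n] % p
                for i in range(1, L + 1):
                    d = (d + C[i] * s[n - i]) % p
                if d == 0:
                    m += 1
                    continue
                coef = d * pow(b, p - 2, p) % p
                newC = C + [0] * max(0, len(B) + m - len(C))
                for i in range(len(B)):
                    newC[i + m] = (newC[i + m] - coef * B[i]) % p
                if 2 * L <= n:
                    B, b, L, m = C, d, n + 1 - L, 1
                else:
                    m += 1
                C = newC
            return C[:L + 1]

        def wiedemann_solve(matvec, b, n, p, tries=5, rng=random.Random(0)):
            """Solve A x = b over F_p, accessing A only through matvec(v) = A v."""
            for _ in range(tries):
                u = [rng.randrange(p) for _ in range(n)]
                s, v = [], list(b)
                for _ in range(2 * n):                 # scalar sequence u^T A^i b
                    s.append(sum(ui * vi for ui, vi in zip(u, v)) % p)
                    v = matvec(v)
                c = berlekamp_massey(s, p)             # c[0] + c[1] x + ... + c[L] x^L
                L = len(c) - 1
                if L == 0 or c[L] == 0:                # degenerate projection: retry
                    continue
                # With high probability m(A) b = 0 for m(y) = y^L + c[1] y^{L-1} + ... + c[L],
                # hence x = -c[L]^{-1} (A^{L-1} b + c[1] A^{L-2} b + ... + c[L-1] b).
                acc = list(b)                          # Horner evaluation in A
                for j in range(1, L):
                    acc = [(a + c[j] * bi) % p for a, bi in zip(matvec(acc), b)]
                inv = pow(c[L], p - 2, p)
                x = [(-inv * a) % p for a in acc]
                if matvec(x) == [bi % p for bi in b]:
                    return x
            raise ValueError("no solution found; A may be singular")

        # Tiny usage example: a 3x3 system over F_101, A given only as a matvec callback.
        p = 101
        A = [[2, 1, 0], [0, 3, 1], [1, 0, 4]]
        matvec = lambda v: [sum(a * x for a, x in zip(row, v)) % p for row in A]
        x = wiedemann_solve(matvec, [1, 2, 3], 3, p)
        assert matvec(x) == [1, 2, 3]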

    Reducing the Number of Qubits in Quantum Factoring

    This paper focuses on the optimization of the number of logical qubits in Shor's quantum factoring algorithm. As in previous works, we target the implementation of the modular exponentiation, which is the most costly component of the algorithm, both in qubits and in operations. In this paper, we show that using only $o(n)$ work qubits, one can obtain the first bit of the modular exponentiation output. We combine this result with May and Schlieper's truncation technique (ToSC 2022) and the Ekerå-Håstad variant of Shor's algorithm (PQCrypto 2017) to obtain a quantum factoring algorithm requiring only $n/2 + o(n)$ qubits for an $n$-bit RSA modulus, while currently envisioned implementations require about $2n$ qubits. Our algorithm uses a Residue Number System and succeeds with a parametrizable probability. Since this computation is completely classical, we have implemented and tested it. Among possible trade-offs, we can reach a gate count of $\mathcal{O}(n^3)$ for a depth of $\mathcal{O}(n^2 \log^3 n)$, which then has to be multiplied by $\mathcal{O}(\log n)$ (the number of measurement results required by Ekerå-Håstad). Preliminary logical resource estimates suggest that this circuit could be engineered to use fewer than 1700 qubits and $2^{36}$ Toffoli gates, and would require 60 independent runs to factor an RSA-2048 instance.
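
    For readers unfamiliar with Residue Number Systems, the short Python sketch below shows the basic mechanics such approaches rest on: an integer is represented by its residues modulo a set of pairwise coprime moduli, arithmetic is done independently on each residue, and the value is recovered by the Chinese Remainder Theorem. This is only a classical illustration of RNS arithmetic, not the paper's circuit construction; the moduli chosen here are arbitrary.

        from math import prod

        MODULI = (13, 17, 19, 23)            # pairwise coprime, arbitrary toy choice
        M = prod(MODULI)                     # dynamic range of this RNS

        def to_rns(x):
            """Represent x by its residues modulo each small modulus."""
            return tuple(x % m for m in MODULI)

        def rns_mul(a, b):
            """Multiplication is done limb by limb, with no carries between limbs."""
            return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

        def from_rns(residues):
            """Recover the integer in [0, M) via the Chinese Remainder Theorem."""
            x = 0
            for r, m in zip(residues, MODULI):
                Mi = M // m
                x = (x + r * Mi * pow(Mi, -1, m)) % M
            return x

        # Toy check: multiply two numbers entirely in RNS form.
        a, b = 1234, 4321
        assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M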

    MÉMOTVIT’: Learning an occupational lexical field through cards: feedback from experience

    MEMOTVIT' is a card game built around definitions that enables learners to acquire knowledge of the lexical fields inherent to a discipline through a playful, team-based activity. The game calls on individual and collective intelligence as well as on the senses. The objective is to give the learner the ability to master a lexical field through contextualization in a team, then individual decontextualization, and finally recontextualization in a team. Learners can gauge their level of knowledge by self-assessment, using a formative evaluation carried out alone or in a group via an online questionnaire.

    Assessment of Executive Function in Everyday Life—Psychometric Properties of the Norwegian Adaptation of the Children’s Cooking Task

    Background: There are few standardized measures available to assess executive function (EF) in a naturalistic setting for children. The Children’s Cooking Task (CCT) is a complex test that has been specifically developed to assess EF in a standardized open-ended environment (cooking). The aim of the present study was to evaluate the internal consistency, inter-rater reliability, sensitivity and specificity, as well as the convergent and divergent validity, of the Norwegian version of the CCT among children with pediatric Acquired Brain Injury (pABI) and healthy controls (HCs). Methods: The present study has a cross-sectional design, based on baseline data derived from a multicenter RCT. Seventy-five children with pABI from two university hospitals, with parent-reported executive dysfunction and a minimum of 12 months since injury/completed cancer therapy, as well as 59 HCs aged 10–17 years, were assessed with the CCT using total errors as the main outcome measure. The pABI group completed tests assessing EF (i.e., inhibition, cognitive flexibility, working memory, and planning) on the impairment level within the ICF framework (performance-based neuropsychological tests and the Behavioral Assessment of the Dysexecutive Syndrome for Children), and on the participation level (questionnaires). In addition, they completed tests of intellectual ability, processing speed, attention, learning, and memory. Finally, overall functional outcome (pediatric Glasgow Outcome Scale-Extended) was evaluated for the children with pABI. Results: Acceptable internal consistency and good inter-rater reliability were found for the CCT. Children with pABI performed significantly worse on the CCT than the HCs. The CCT identified group membership, but the sensitivity and specificity were overall classified as poor. Convergent validity was demonstrated by associations between the CCT and performance-based tests assessing inhibition, cognitive flexibility, and working memory, as well as teacher-reported executive dysfunction (questionnaires). Divergent validity was supported by the lack of association with performance-based measures of learning and memory, attention, and verbal intellectual ability. However, there was a moderate association between the CCT and performance-based tests of processing speed. Lastly, better performance on the CCT was associated with a better functional outcome. Conclusion: Our study, with a relatively large sample of children with pABI and HCs, demonstrated good psychometric properties of the CCT. CCT performance was associated with the overall level of disability and function, suggesting that the CCT is related to the level of activity in everyday life and participation in society. Hence, our study suggests that the CCT has the potential to advance the assessment of EF by providing a valid analysis of real-world performance. Nevertheless, further research is needed on larger samples, focusing on predictors of task performance and evaluating the ability of the CCT to detect improvement in EF over time. The patterns of error and problem-solving strategies evaluated by the CCT could be used to inform neuropsychological rehabilitation treatment and represent a more valid outcome measure of rehabilitation interventions.

    How to Claim a Computational Feat

    Consider some user buying software or hardware from a provider. The provider claims to have subjected this product to a number of tests, ensuring that the system operates nominally. How can the user check this claim without running all the tests anew? The problem is similar to checking a mathematical conjecture. Many authors report having checked a conjecture $C(x)=\mbox{True}$ for all $x$ in some large set or interval $U$. How can mathematicians challenge this claim without performing all the expensive computations again? This article describes a non-interactive protocol in which the prover provides (a digest of) the computational trace resulting from processing $x$, for randomly chosen $x \in U$. With appropriate care, this information can be used by the verifier to determine how likely it is that the prover actually checked $C(x)$ over $U$. Unlike ``traditional'' interactive proof and probabilistically-checkable proof systems, the protocol is not limited to restricted complexity classes, nor does it require an expensive transformation of programs being executed into circuits or ad-hoc languages. The flip side is that it is restricted to checking assertions that we dub ``\emph{refutation-precious}'': expected to always hold true, and such that the benefit resulting from reporting a counterexample far outweighs the cost of computing $C(x)$ over all of $U$.
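
    To make the idea of spot-checking digests of computational traces concrete, the Python sketch below implements a deliberately simplified commit-and-spot-check flow: the prover digests the trace of every $x \in U$, publishes one commitment, and the challenged indices are derived from that commitment (a Fiat-Shamir-style choice made for this sketch, not necessarily the paper's construction); the verifier recomputes the traces only at the challenged points. All function names and parameters are hypothetical illustrations, and a real protocol would additionally let the verifier check the opened digests against the commitment (e.g., with a Merkle tree).

        import hashlib

        def trace_digest(x):
            """Stand-in for checking C(x): return a digest of the full computational trace."""
            h = hashlib.sha256()
            acc = x
            for _ in range(200):                       # pretend this loop is the expensive check
                acc = (acc * acc + 1) % (2**61 - 1)
                h.update(acc.to_bytes(8, "little"))
            return h.digest()

        def commit(digests):
            """Single commitment to all per-input trace digests."""
            h = hashlib.sha256()
            for d in digests:
                h.update(d)
            return h.digest()

        def challenges(commitment, universe_size, k):
            """Derive k pseudo-random challenge indices from the commitment itself."""
            idx, counter = [], 0
            while len(idx) < k:
                h = hashlib.sha256(commitment + counter.to_bytes(4, "little")).digest()
                idx.append(int.from_bytes(h[:8], "little") % universe_size)
                counter += 1
            return idx

        # Prover: runs every check once, publishes the commitment and the challenged digests.
        U = range(2000)
        digests = [trace_digest(x) for x in U]
        com = commit(digests)
        opened = {i: digests[i] for i in challenges(com, len(U), 20)}

        # Verifier: recomputes the traces only at the challenged points and compares.
        assert all(trace_digest(U[i]) == opened[i] for i in challenges(com, len(U), 20))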

    A reduction from Hawk to the principal ideal problem in a quaternion algebra

    In this article we present a non-uniform reduction from rank-2 module-LIP over Complex Multiplication fields to a variant of the Principal Ideal Problem in a suitable quaternion algebra. The reduction is classical, deterministic, and runs in polynomial time in the size of the inputs. The quaternion algebra in which we need to solve the variant of the Principal Ideal Problem depends on the parameters of the module-LIP problem, but not on the problem's instance. Our reduction requires the knowledge of some special elements of this quaternion algebra, which is why it is non-uniform. In some particular cases, these elements can be computed in polynomial time, making the reduction uniform. This is the case for the Hawk signature scheme: we show that breaking Hawk is no harder than solving a variant of the Principal Ideal Problem in a fixed quaternion algebra (and this reduction is uniform).