
    The mathematical development of children with Apert syndrome

    Apert syndrome is a rare condition (birth prevalence of 1 in 65,000) with associated risks of other physical disabilities. Children with the condition undergo major surgery involving the fingers. It has been suggested that these children have greater difficulty with mathematics than with other curriculum subjects. This study explored the mathematical learning of 10 primary-school-age children with Apert syndrome over two years. The children in the study had varied sensory disabilities, including hearing and visual impairments, as well as limited finger mobility. Each child was visited five or six times at school in order to detect change over time, and was observed while learning mathematics in class. To explore the children’s understanding and thinking in mathematics, clinical interviews using items from number skills tests were conducted, and standardized measures of working memory and mathematical achievement were administered. Interviews were also carried out with the children’s parents and with the school staff supporting their education. A central finding of this study is that children with Apert syndrome are heterogeneous. The only factor that the children in the study shared was their initial lack of finger use when engaging with work involving number and arithmetic. In line with contemporary neuroscience, this study suggests that finger knowledge and awareness (finger gnosis) and finger mobility are important in early number development. Exercises that develop finger gnosis may enable flexibility in strategy use and development for solving problems in arithmetic. However, children with Apert syndrome will continue to be confronted with many other challenges that affect their learning of mathematics.

    The RecQ-like helicase HRQ1 is involved in DNA crosslink repair in Arabidopsis in a common pathway with the Fanconi anemia-associated nuclease FAN1 and the postreplicative repair ATPase RAD5A

    RecQ helicases are important caretakers of genome stability and occur in varying copy numbers in different eukaryotes. Subsets of RecQ paralogs are involved in DNA crosslink (CL) repair. The orthologs of AtRECQ2, AtRECQ3 and AtHRQ1 (HsWRN, DmRECQ5 and ScHRQ1) participate in CL repair in their respective organisms, and we aimed to define the function of these helicases in plants. We obtained Arabidopsis mutants of the three RecQ helicases and determined their sensitivity to CL agents in single- and double-mutant analyses. Only Athrq1 mutants, but not Atrecq2 or Atrecq3 mutants, proved to be sensitive to intra- and interstrand crosslinking agents. AtHRQ1 is specifically involved in the repair of replicative damage induced by CL agents. It shares pathways with the Fanconi anemia-related endonuclease FAN1 but not with the endonuclease MUS81. Most surprisingly, AtHRQ1 is epistatic to the ATPase RAD5A for intra- as well as interstrand CL repair. We conclude that, as in fungi, AtHRQ1 has a conserved function in DNA excision repair. Additionally, HRQ1 not only shares pathways with the Fanconi anemia repair factors but, in contrast to fungi, also seems to act in a common pathway with postreplicative DNA repair.

    Improving the Performance of Parallel SpMV Operations on NUMA Systems with Adaptive Load Balancing

    For a parallel Sparse Matrix Vector Multiply (SpMV) on a multiprocessor, rather simple and efficient work distributions often produce good results. Where this is not the case, adaptive load balancing can improve both balance and performance. This paper introduces a low-overhead framework for adaptive load balancing of parallel SpMV operations. It uses statistical filters to gather relevant runtime performance data and to detect imbalance situations. Three different algorithms that adaptively balance the load with high quality and low overhead were compared. Results show that for sparse matrices where adaptive load balancing was enabled, our best algorithm achieved an average speedup of 1.15 in total execution time across four matrix formats and two NUMA systems.
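The core idea of nonzero-aware work distribution can be illustrated with a small sketch. This is a hypothetical, simplified static variant (not the paper's adaptive framework, whose filters and algorithms are not shown in the abstract): CSR rows are split into contiguous chunks so that each thread receives a roughly equal share of nonzeros rather than an equal number of rows.

```python
# Illustrative sketch, not the paper's framework: partition CSR rows among
# threads by cumulative nonzero count instead of row count.
import bisect

def balance_rows_by_nnz(row_ptr, n_threads):
    """Return chunk boundaries b[0..n_threads] so thread t handles
    rows b[t]..b[t+1]-1 with roughly row_ptr[-1]/n_threads nonzeros each."""
    total_nnz = row_ptr[-1]
    bounds = [0]
    for t in range(1, n_threads):
        goal = t * total_nnz / n_threads
        # first row boundary whose cumulative nnz reaches the goal
        b = bisect.bisect_left(row_ptr, goal, lo=bounds[-1])
        bounds.append(min(b, len(row_ptr) - 1))
    bounds.append(len(row_ptr) - 1)
    return bounds

# Six rows with nonzero counts [8, 1, 1, 1, 1, 1]: an even 3/3 row split
# would give one thread 10 nonzeros and the other 3; nnz-based splitting
# assigns the heavy first row to its own thread.
print(balance_rows_by_nnz([0, 8, 9, 10, 11, 12, 13], 2))  # [0, 1, 6]
```

A dynamic version would re-run such a partitioning step whenever the runtime filters report an imbalance.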

    Program Optimization Strategies to Improve the Performance of SpMV-Operations

    The SpMV operation -- the multiplication of a sparse matrix with a dense vector -- is used as a computational kernel in many simulations in the natural and engineering sciences. This kernel is quite performance critical, as it is executed many times in a simulation run, e.g., inside a linear solver. Such performance-critical kernels may be optimized on several levels, ranging from a rather coarse-grained and convenient single compiler optimization switch down to exploiting architectural features through special instructions at the assembly level. This paper discusses a selection of program optimization techniques across this spectrum applied to the SpMV operation. The achievable performance gain as well as the additional programming effort are discussed. It is shown that low-effort optimizations can improve the performance of the SpMV operation compared to a basic implementation. Beyond that, more complex low-level optimizations have a higher impact on performance, although they change the original program and significantly reduce its readability and maintainability.
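For reference, the "basic implementation" that such optimizations start from is the plain CSR kernel. The sketch below is a generic textbook version, not the paper's code: the matrix is stored as value, column-index and row-pointer arrays, and y = A·x is computed row by row.

```python
# Baseline CSR SpMV kernel (generic illustration, not the paper's code):
# A is stored as (values, col_idx, row_ptr); computes y = A @ x.
def spmv_csr(values, col_idx, row_ptr, x):
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        acc = 0.0
        # nonzeros of this row live in values[row_ptr[row]:row_ptr[row+1]]
        for k in range(row_ptr[row], row_ptr[row + 1]):
            acc += values[k] * x[col_idx[k]]
        y[row] = acc
    return y

# 2x3 matrix [[1, 0, 2], [0, 3, 0]] times x = [1, 1, 1]
values, col_idx, row_ptr = [1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

The indirect access x[col_idx[k]] is what makes this kernel hard for compilers to vectorize automatically, which is why the lower optimization levels discussed above yield limited gains without manual intervention.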

    SpMV Runtime Improvements with Program Optimization Techniques on Different Abstraction Levels

    The multiplication of a sparse matrix with a dense vector is a performance-critical computational kernel in many applications, especially in the natural and engineering sciences. To speed up this operation, many optimization techniques have been developed, mainly focusing on the data layout of the sparse matrix. Strongly related to the data layout is the program code for the multiplication, and even for a fixed data layout with a matching kernel, there are several alternatives for program optimization. This paper discusses a spectrum of program optimization techniques on different abstraction levels for six different sparse matrix data formats and kernels. At one end of the spectrum, compiler options can be used that hide from the programmer all optimizations done internally by the compiler. At the other end, a multiplication kernel can be programmed using highly sophisticated assembly-level intrinsics, which requires a programmer with a deep understanding of processor architectures. These special instructions can efficiently exploit hardware features such as processor vector units that have the potential to speed up sparse matrix computations. The paper compares the programming effort and required knowledge level of certain program optimizations in relation to the gained runtime improvements.
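How tightly the kernel code is coupled to the data layout can be seen by contrasting two standard formats. The sketch below uses the textbook COO and ELLPACK definitions (the paper's six formats are not listed in the abstract, so these are illustrative choices): the same matrix requires a structurally different multiplication loop in each format.

```python
# Two standard sparse formats, each with its own kernel (illustrative,
# not the paper's implementations).

def spmv_coo(rows, cols, vals, x, n_rows):
    """COO: one (row, col, value) triple per nonzero."""
    y = [0.0] * n_rows
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]
    return y

def spmv_ell(col_idx, vals, x):
    """ELLPACK: rows padded to a fixed width; -1 marks padding entries."""
    y = []
    for row_cols, row_vals in zip(col_idx, vals):
        y.append(sum(v * x[c] for c, v in zip(row_cols, row_vals) if c >= 0))
    return y

# Matrix [[1, 0, 2], [0, 3, 0]] in both formats, x = [1, 1, 1]:
x = [1.0, 1.0, 1.0]
print(spmv_coo([0, 0, 1], [0, 2, 1], [1.0, 2.0, 3.0], x, 2))    # [3.0, 3.0]
print(spmv_ell([[0, 2], [1, -1]], [[1.0, 2.0], [3.0, 0.0]], x))  # [3.0, 3.0]
```

The ELL kernel's fixed row width makes it amenable to the vector-unit optimizations mentioned above, at the cost of padding overhead for irregular matrices.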

    Performance Prediction and Ranking of SpMV Kernels on GPU Architectures

    Predicting the runtime of a sparse matrix-vector multiplication (SpMV) for different sparse matrix formats and thread mappings allows the dynamic selection of the most appropriate format and mapping for a given matrix. This paper introduces two new, generally applicable performance models for SpMV, one for linear and one for non-linear relationships, based on machine learning techniques. This approach supersedes the common manual development of an explicit performance model for each new architecture or format based on empirical data. The two new models are compared to an existing explicit performance model on different GPUs. Results show that the quality of the performance predictions, the ranking of the alternatives, and the adaptability to other formats and architectures of the two machine learning models are better than those of the explicit performance model.

    Comparing Different Programming Approaches for SpMV-Operations on GPUs

    There exist various high- and low-level approaches for GPU programming. These include the newer directive-based OpenACC programming model, Nvidia’s CUDA programming platform, and existing fixed-functionality libraries like cuSPARSE. This work compares the attained performance and development effort of these approaches using the example of implementing the SpMV operation, an important and performance-critical building block in many application fields. We show that the main differences in development effort between CUDA and OpenACC are related to memory management and thread mapping.

    Using Application Oriented Micro-Benchmarks to Characterize the Performance of Single-node Hardware Architectures

    In this paper, a set of micro-benchmarks is proposed to determine basic performance parameters of single-node mainstream hardware architectures for High Performance Computing. Performance parameters of recent processors, including accelerators, are determined. The investigated systems are Intel server processor architectures as well as the two accelerator lines Intel Xeon Phi and Nvidia graphics processors. Results show similarities between all architectures for some parameters, but significant differences for others.
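The flavor of such a micro-benchmark can be illustrated with a STREAM-style triad that estimates sustainable memory bandwidth. This is a loose sketch in the spirit of the paper, not its actual benchmark suite, and a serious measurement would use compiled code with controlled threading and NUMA placement.

```python
# Illustrative STREAM-like triad micro-benchmark (not the paper's suite):
# times a = b + s * c over large arrays and reports an estimated bandwidth.
import time
import numpy as np

def triad_bandwidth_gbs(n=2_000_000, reps=5):
    b = np.random.rand(n)
    c = np.random.rand(n)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        a = b + 2.5 * c  # touches three arrays of 8-byte floats
        best = min(best, time.perf_counter() - t0)
    # 3 arrays * 8 bytes * n elements moved in 'best' seconds
    return 3 * 8 * n / best / 1e9

print(f"triad bandwidth: {triad_bandwidth_gbs():.1f} GB/s")
```

Repeating the run and keeping the best time, as above, is the usual way such benchmarks suppress warm-up and scheduling noise.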

    A Homolog of ScRAD5 Is Involved in DNA Repair and Homologous Recombination in Arabidopsis

    Rad5 is the key component in the Rad5-dependent error-free branch of postreplication repair in yeast (Saccharomyces cerevisiae). Rad5 is a member of the Snf2 ATPase/helicase family, possessing as characteristic features a RING-finger domain embedded in the Snf2 helicase domain and a HIRAN domain. Yeast rad5 mutants are sensitive to DNA-damaging agents and show altered homologous recombination. By sequence comparison we identified two homologs (AtRAD5a and AtRAD5b) in the Arabidopsis thaliana genome, sharing about 30% identity and 45% similarity with yeast Rad5. AtRad5a and AtRad5b have the same domain organization and are more similar to each other than to ScRad5. Surprisingly, the two genes differ in function: whereas two independent Atrad5a mutants are hypersensitive to the cross-linking agents mitomycin C and cisplatin, and to a lesser extent to the methylating agent methyl methanesulfonate, the Atrad5b mutants did not exhibit sensitivity to any of the DNA-damaging agents tested. An Atrad5a/Atrad5b double mutant resembles the sensitivity phenotype of the Atrad5a single mutants. Moreover, in contrast to Atrad5b, the two Atrad5a mutants are deficient in homologous recombination after treatment with the double-strand break-inducing agent bleomycin. Our results suggest that the RAD5-dependent error-free branch of postreplication repair is conserved between yeast and plants, and that AtRad5a might be functionally homologous to ScRad5.