11 research outputs found

    Highly parallel computation

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to various classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or dataflow may be needed.

    Internet Testing of Physics

    An analysis of Internet testing as an element of quality management in education is carried out, using the discipline of physics as an example. Remarks and suggestions on optimizing the codifier, the content, and the assessment of testing results are formulated.

    Affordable Supercomputer in an Academic Environment. Cloud Computing in Classrooms

    Cloud computing technologies are widely used by large commercial organizations, but nowadays they are becoming available even to university research laboratories and research institutes. In this article the author proposes a model for building an affordable cluster to meet the academic environment's need for computing power.

    Hochgradiger Parallelismus


    Asymmetric Load Balancing on a Heterogeneous Cluster of PCs

    In recent years, high performance computing with commodity clusters of personal computers has become an active area of research. Many organizations build them because they need the computational speedup provided by parallel processing but cannot afford to purchase a supercomputer. With commercial supercomputers and homogeneous clusters of PCs, applications that can be statically load balanced are handled by assigning equal tasks to each processor. With heterogeneous clusters, the system designers have the option of quickly adding newer hardware that is more powerful than the existing hardware. When this is done, assigning equal tasks to each processor results in suboptimal performance. This research addresses techniques by which the size of the tasks assigned to processors is matched to the processors themselves, so that the more powerful processors do more work and the less powerful processors do less. We find that when the range of processing power is narrow, some benefit can be achieved with asymmetric load balancing. When the range of processing power is broad, dramatic improvements in performance are realized; our experiments have shown up to 92% improvement when asymmetrically load balancing a modified version of the NAS Parallel Benchmarks' LU application.
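    The proportional task-assignment idea behind asymmetric load balancing can be sketched in a few lines. This is only an illustration, assuming each node's relative speed has already been measured; the node names and speed scores below are hypothetical, not taken from the paper.

```python
def partition_work(total_tasks, speeds):
    """Split `total_tasks` units of work proportionally to node speeds.

    `speeds` maps node name -> relative benchmark score (higher = faster).
    Remainder tasks left over after integer division go to the fastest
    nodes, so every task unit is assigned.
    """
    total_speed = sum(speeds.values())
    # Initial proportional share, rounded down.
    shares = {n: (total_tasks * s) // total_speed for n, s in speeds.items()}
    # Hand the remainder to the fastest nodes first.
    leftover = total_tasks - sum(shares.values())
    for n in sorted(speeds, key=speeds.get, reverse=True)[:leftover]:
        shares[n] += 1
    return shares

# Hypothetical cluster: node-a is 4x the speed of node-c.
speeds = {"node-a": 4, "node-b": 2, "node-c": 1}
print(partition_work(700, speeds))  # -> {'node-a': 400, 'node-b': 200, 'node-c': 100}
```

    A symmetric scheme would give each node 233 or 234 tasks here, leaving node-a idle while node-c finishes; the proportional split keeps all three busy for roughly the same wall-clock time.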

    Performance studies of file system design choices for two concurrent processing paradigms


    Redundant disk arrays: Reliable, parallel secondary storage

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems while, by leveraging parallelism, delivering many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
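    As a toy illustration of the parity coding mentioned above: the parity block is the bitwise XOR of the data blocks, so the contents of any single failed disk can be reconstructed by XORing the surviving blocks, provided the failure is self-identifying (the array knows which disk is gone). The block contents below are hypothetical; this sketches the coding principle only, not the array organizations studied in the work.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three hypothetical 8-byte data stripes plus one parity block.
data = [b"stripe-0", b"stripe-1", b"stripe-2"]
parity = xor_blocks(data)

# Disk 1 fails and identifies itself; rebuild it from the survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

    Because XOR is its own inverse, XORing the parity with all surviving data blocks cancels everything except the missing block; this is why a single parity disk suffices for single, self-identifying failures, while multiple simultaneous failures need stronger codes.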

    Applications Development for the Computational Grid
