    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.

    A stereoscopic ranging system using standard PC technology

    A stereoscopic ranging system is currently being developed as a key source of positional information for an underwater ROV station-keeping system. Advances in PC technology make it possible to use a relatively simple image capture card and a PC as a platform for the fast capture and processing of video images. We exploit the extensive capabilities of fast data buses and the high processing power of PCs with Pentium II or III processors. Using this approach, we are developing an image processing system that is largely manufacturer-independent and promises a good path for both hardware and software upgrades in the future.
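
    Once a rectified stereo pair is available, range follows from the standard pinhole relation Z = f * B / d (focal length times baseline over disparity). The sketch below illustrates this step in Python; the calibration values and the OpenCV block-matcher parameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of stereo range recovery, assuming a calibrated,
# rectified parallel-axis camera pair. focal_length_px and baseline_m
# are hypothetical placeholder values, not figures from the paper.
import numpy as np
import cv2

def range_from_disparity(left_gray, right_gray,
                         focal_length_px=800.0, baseline_m=0.12):
    """Estimate per-pixel range (metres) from a rectified 8-bit stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # compute() returns fixed-point disparities with 4 fractional bits
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan   # mask invalid / unmatched pixels
    # Pinhole stereo relation: Z = f * B / d
    return focal_length_px * baseline_m / disparity
```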

    Enabling a High Throughput Real Time Data Pipeline for a Large Radio Telescope Array with GPUs

    The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real-time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real-time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exascale facilities.
    Comment: Version accepted by Comp. Phys. Com
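
    The real-time budget can be sanity-checked with simple arithmetic: a 5 GiB/s stream over an 8 s cadence yields 40 GiB per batch, and sustaining 2.5 TFLOP/s implies roughly 20 TFLOP of work per batch. A minimal sketch of this accounting follows; the per-node sustained rate is an invented assumption, not a figure from the paper.

```python
# Back-of-envelope check of the real-time budget described above.
# Only the 5 GiB/s rate, the 8 s cadence and the ~2.5 TFLOP/s target
# come from the abstract; the per-node rate is a made-up assumption.
GIB = 2**30

ingest_rate = 5 * GIB                  # raw data rate, bytes/s
cadence_s = 8.0                        # each batch spans 8 s
batch_bytes = ingest_rate * cadence_s  # ~40 GiB must be reduced per batch

required_flops = 2.5e12                      # sustained FLOP/s target
work_per_batch = required_flops * cadence_s  # ~20 TFLOP per batch

node_sustained = 5e11   # assume 500 GFLOP/s sustained per GPU node
min_nodes = int(-(-required_flops // node_sustained))  # ceiling division

print(f"batch size: {batch_bytes / GIB:.0f} GiB, "
      f"work per batch: {work_per_batch / 1e12:.0f} TFLOP, "
      f"minimum nodes: {min_nodes}")
```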

    Identifying the level of knowledge of final-year Bachelor of Engineering students at KUiTTHO in the field of entrepreneurship, from the aspect of capital management

    Malaysia is a developing nation. In this development process, the country's aspiration to produce successful future entrepreneurs cannot be taken lightly. Knowledge of entrepreneurship must therefore be given due attention, and among its main aspects is capital: inefficient capital management is a primary cause of entrepreneurial failure. Recognising this, a study on Capital Management was conducted on 100 final-year Engineering students at KUiTTHO. This sample was chosen because these students are about to enter the world of work, where they may choose entrepreneurship as a career. Although they are not business students, they have the skills to design products that can be commercialised. The findings show that these students are interested in entrepreneurship but still lack knowledge of capital management, particularly in determining start-up capital, managing working capital, and determining financing using the daily-sales method. A Capital Management guideline was therefore developed to give them this exposure.

    Optimising and evaluating designs for reconfigurable hardware

    Growing demand for computational performance and the rising cost of chip design and manufacturing make reconfigurable hardware increasingly attractive for digital system implementation. Reconfigurable hardware, such as field-programmable gate arrays (FPGAs), can deliver performance through parallelism while also providing the flexibility that enables application builders to reconfigure them. However, reconfigurable systems, particularly those involving run-time reconfiguration, are often developed in an ad hoc manner. Such an approach usually results in low designer productivity and can lead to inefficient designs. This thesis covers three main achievements that address this situation. The first is a model that captures design parameters of reconfigurable hardware and performance parameters of a given application domain; this model supports optimisation for several design metrics, such as performance, area, and power consumption. The second is a technique that enhances the relocatability of bitstreams for reconfigurable devices, taking heterogeneous resources into account; this method increases the flexibility of modules represented by these bitstreams while reducing configuration storage size and design compilation time. The third is a technique to characterise the power consumption of FPGAs in different activity modes, including the evaluation of standby power and dedicated low-power modes, which are crucial in meeting the requirements of battery-powered mobile devices.
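
    A model of the first kind typically scores candidate design points against metric budgets. The following minimal sketch illustrates selecting the highest-throughput design under area and power constraints; the configurations, numbers and field names are invented for illustration, not drawn from the thesis.

```python
# Hedged sketch of a design-space model for reconfigurable hardware:
# candidate FPGA configurations scored against performance, area and
# power budgets. All values below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DesignPoint:
    name: str
    throughput: float   # results/s
    area_luts: int      # LUTs used
    power_w: float      # estimated static + dynamic power

def best_design(points, max_area, max_power):
    """Return the highest-throughput design meeting area and power budgets."""
    feasible = [p for p in points
                if p.area_luts <= max_area and p.power_w <= max_power]
    return max(feasible, key=lambda p: p.throughput, default=None)

candidates = [
    DesignPoint("unrolled_x4", 4.0e6, 52_000, 3.1),
    DesignPoint("unrolled_x2", 2.1e6, 27_000, 1.8),
    DesignPoint("baseline",    1.0e6, 14_000, 1.1),
]
print(best_design(candidates, max_area=30_000, max_power=2.0))
```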

    Quantifying the latency benefits of near-edge and in-network FPGA acceleration

    Transmitting data to cloud datacenters in distributed IoT applications introduces significant communication latency, but is often the only feasible solution when source nodes are computationally limited. To address latency concerns, cloudlets, in-network computing, and more capable edge nodes are all being explored as ways of moving processing capability towards the edge of the network. Hardware acceleration using Field Programmable Gate Arrays (FPGAs) is also seeing increased interest due to reduced computation latency and improved efficiency. This paper evaluates the implications of these offloading approaches using a neural-network-based image classification application as a case study, quantifying both the computation and communication latency resulting from different platform choices. We consider communication latency including the ingestion of packets for processing on the target platform, showing that this varies significantly with the choice of platform. We demonstrate that emerging in-network accelerator approaches offer much improved and more predictable performance, as well as better scaling to support multiple data sources.
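
    The comparison can be pictured as simple latency accounting: end-to-end latency is the sum of network transit, packet ingestion on the target platform, and compute time. The sketch below illustrates this decomposition; all platform names and figures are made-up placeholders, not measurements from the paper.

```python
# Hedged sketch of per-inference latency accounting for different
# offload targets. The platform figures are illustrative assumptions.
def end_to_end_latency_ms(transit_ms, ingest_ms, compute_ms):
    """Total per-inference latency for one offload target."""
    return transit_ms + ingest_ms + compute_ms

platforms = {
    # name: (network transit, packet ingestion, NN compute), in ms
    "cloud_gpu":       (40.0, 2.0, 1.5),
    "edge_cpu":        (2.0, 0.5, 30.0),
    "near_edge_fpga":  (2.0, 0.3, 3.0),
    "in_network_fpga": (1.0, 0.05, 3.0),  # ingestion fused into the NIC path
}

for name, (t, i, c) in platforms.items():
    print(f"{name:>16}: {end_to_end_latency_ms(t, i, c):6.2f} ms")
```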

    The camera of the fifth H.E.S.S. telescope. Part I: System description

    In July 2012, as the four ground-based gamma-ray telescopes of the H.E.S.S. (High Energy Stereoscopic System) array reached their tenth year of operation in the Khomas Highlands, Namibia, a fifth telescope took its first data as part of the system. This new Cherenkov detector, comprising a 614.5 m^2 reflector with a highly pixelized camera in its focal plane, improves the sensitivity of the current array by a factor of two and extends its energy domain down to a few tens of GeV. The present Part I of the paper gives a detailed description of the fifth H.E.S.S. telescope's camera, presenting the details of both the hardware and the software and emphasizing the main improvements compared to previous H.E.S.S. camera technology.
    Comment: 16 pages, 13 figures, accepted for publication in NIM