
    A Framework for Analyzing Fog-Cloud Computing Cooperation Applied to Information Processing of UAVs

    Unmanned aerial vehicles (UAVs) are a relatively new technology, and their application often involves complex and unforeseen problems. For instance, they can work in a cooperative environment under the supervision of a ground station to speed up critical decision-making processes. However, the amount of information exchanged between the aircraft and the ground station is limited by long distances, low bandwidth, restricted processing capability, and energy constraints. These drawbacks hinder large-scale operations such as large-area inspections. Distributed state-of-the-art processing architectures, such as fog computing, can improve latency, scalability, and efficiency to meet time constraints through data acquisition, processing, and storage at different levels. Given these constraints, this research work proposes a mathematical model for analyzing distributed UAV topologies and a fog-cloud computing framework for large-scale mission and search operations. The tests successfully predicted latency and other operational constraints, allowing an analysis of the advantages of fog computing over traditional cloud-computing architectures. Comment: Volume 2019, Article ID 7497924, 14 pages
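
    The abstract does not reproduce the paper's mathematical model, but the trade-off it analyzes can be illustrated with a toy end-to-end latency calculation. The Python sketch below is not the paper's model; all parameters (payload size, link bandwidth, distance, processing capacity) are hypothetical. It only shows why a nearby fog node can beat a faster but more distant cloud when the uplink is the bottleneck.

        # Toy latency sketch (not the paper's model): offloading a UAV task
        # either to a nearby fog node or to a remote cloud data center.
        # All parameters below are hypothetical.

        def offload_latency(payload_bits, distance_m, bandwidth_bps,
                            task_cycles, node_cycles_per_s):
            """Transmission + propagation + processing delay, in seconds."""
            transmission = payload_bits / bandwidth_bps   # time on the link
            propagation = distance_m / 3e8                # radio travels at ~c
            processing = task_cycles / node_cycles_per_s  # compute time
            return transmission + propagation + processing

        # Fog: close by, modest bandwidth and compute.
        fog = offload_latency(8e6, 2_000, 20e6, 5e9, 10e9)
        # Cloud: far away, abundant compute, but a slower backhaul link.
        cloud = offload_latency(8e6, 1_500_000, 5e6, 5e9, 100e9)
        print(f"fog: {fog:.3f} s  cloud: {cloud:.3f} s")  # fog: 0.900 s  cloud: 1.655 s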

    Scientific Data Lake for High Luminosity LHC project and other data-intensive particle and astro-particle physics experiments

    The next phase of LHC operations, High Luminosity LHC (HL-LHC), which aims at a ten-fold increase in the luminosity of proton-proton collisions at an energy of 14 TeV, is expected to start operation in 2027-2028 and will deliver an unprecedented volume of scientific data at the multi-exabyte scale. This amount of data has to be stored, and the corresponding storage system should ensure fast and reliable data delivery for processing by scientific groups distributed all over the world. The present LHC computing and data processing model will not be able to provide the required infrastructure growth, even taking into account the expected evolution of hardware technology. To address this challenge, new state-of-the-art computing infrastructure technologies are now being developed; they are presented here. The possibilities of applying the HL-LHC distributed data handling technique to other particle and astro-particle physics experiments dealing with large-scale data volumes, such as DUNE, LSST, Belle II, JUNO, and SKAO, are also discussed. © Published under licence by IOP Publishing Ltd.
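
    The abstract does not describe the Data Lake internals, but one ingredient of fast and reliable delivery from globally distributed storage is choosing which replica of a dataset to read. The Python sketch below is a toy illustration of that idea, not the actual HL-LHC software; the site names, timings, and tape-staging penalty are all hypothetical.

        # Toy replica selection (not the actual Data Lake software): pick the
        # copy of a dataset with the lowest estimated delivery time across a
        # distributed storage federation.  All site parameters are made up.
        from dataclasses import dataclass

        @dataclass
        class Replica:
            site: str
            rtt_s: float            # round-trip time to the site
            throughput_bps: float   # achievable transfer rate
            on_disk: bool           # False => must be staged from tape first

        def delivery_time(r: Replica, size_bytes: float,
                          tape_stage_s: float = 600.0) -> float:
            """Rough estimate: staging penalty + RTT + streaming time."""
            stage = 0.0 if r.on_disk else tape_stage_s
            return stage + r.rtt_s + 8 * size_bytes / r.throughput_bps

        replicas = [
            Replica("SiteA", 0.005, 10e9, False),
            Replica("SiteB", 0.040, 5e9, True),
            Replica("SiteC", 0.090, 8e9, True),
        ]
        best = min(replicas, key=lambda r: delivery_time(r, 50e9))  # 50 GB file
        print(best.site)  # SiteC: already on disk and fast enough to win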

    Meta-brokering solution for establishing Grid Interoperability


    Minimizing synchronizations in sparse iterative solvers for distributed supercomputers

    Eliminating synchronizations is an important technique for minimizing communication in modern high-performance computing. This paper discusses principles for reducing the communication caused by global synchronizations in sparse iterative solvers on distributed supercomputers. We demonstrate how to minimize global synchronizations by rescheduling a typical Krylov subspace method. The benefit of minimizing synchronizations is shown in a theoretical analysis and verified by numerical experiments using up to 900 processors. The experiments also show that the communication complexity of some structured sparse matrix-vector multiplications and of the global communications on the underlying supercomputer are on the order of P^(1/2.5) and P^(4/5), respectively, where P is the number of processors; the experiments were carried out on a Dawning 5000A.
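
    The abstract does not spell out the rescheduling, but a well-known way to reduce a Krylov method's synchronizations is the Chronopoulos/Gear reformulation of conjugate gradients, which fuses the two dot products of each iteration into a single global reduction; it is an assumption that the paper's rescheduling works along these lines. In the NumPy sketch below, np.dot stands in for a distributed all-reduce.

        import numpy as np

        def cg_single_sync(A, b, tol=1e-8, maxiter=500):
            """Conjugate gradients with one synchronization per iteration.

            Classic CG needs two separated dot products per iteration, i.e.
            two global reductions on a distributed machine.  Here both dot
            products involve only r and w = A r, so across P processors they
            can travel in a single all-reduce of a length-2 buffer.
            """
            x = np.zeros_like(b)
            r = b - A @ x
            w = A @ r
            p = np.zeros_like(b)
            s = np.zeros_like(b)               # maintains s = A p by recurrence
            gamma_old = alpha_old = 1.0
            for i in range(maxiter):
                # The ONLY synchronization point: both reductions share one message.
                gamma, delta = np.dot(r, r), np.dot(w, r)
                if np.sqrt(gamma) < tol:
                    return x, i
                if i == 0:
                    beta, alpha = 0.0, gamma / delta
                else:
                    beta = gamma / gamma_old
                    alpha = gamma / (delta - beta * gamma / alpha_old)
                p = r + beta * p               # search direction
                s = w + beta * s               # s = A p without an extra SpMV
                x = x + alpha * p
                r = r - alpha * s
                w = A @ r                      # the single SpMV per iteration
                gamma_old, alpha_old = gamma, alpha
            return x, maxiter

        # Quick check on a 1-D Laplacian (symmetric positive definite):
        n = 200
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        x, iters = cg_single_sync(A, np.ones(n))
        print(iters, np.linalg.norm(A @ x - np.ones(n)))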