
    Integrating Blockchain and Fog Computing Technologies for Efficient Privacy-preserving Systems

    Get PDF
    This PhD dissertation concludes a three-year research journey on the integration of Fog Computing and Blockchain technologies. The main aim of this integration is to let each technology address the challenges of the other. Blockchain technology (BC) is a distributed ledger technology in the form of a distributed transactional database, secured by cryptography and governed by a consensus mechanism. It was initially proposed for decentralized cryptocurrency applications, where its robustness has been proven in practice. Fog Computing (FC) is a geographically distributed computing architecture in which heterogeneous devices at the edge of the network are ubiquitously connected to collaboratively provide elastic computation services. FC delivers services closer to end-users, improving response time, energy consumption, and network load. Integrating FC with BC can therefore yield services that are more efficient in terms of latency and privacy, as typically required by Internet of Things systems.
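
    The dissertation's design is not detailed here, but a minimal Python sketch can make the integration concrete: a fog node batches IoT readings and chains them with cryptographic hashes, so later tampering is detectable. All class and function names below (FogLedger, append_batch) are illustrative assumptions, not the dissertation's API.

```python
# Minimal sketch: a fog node batches IoT sensor readings into hash-linked
# blocks before anchoring them on a blockchain. Names are illustrative.
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's canonical JSON encoding."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


class FogLedger:
    """Append-only chain of blocks, each committing to its predecessor."""

    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(),
                   "readings": [], "prev_hash": "0" * 64}
        self.chain = [genesis]

    def append_batch(self, readings: list) -> dict:
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1,
                 "timestamp": time.time(),
                 "readings": readings,
                 "prev_hash": block_hash(prev)}
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Tampering with an earlier block breaks every later prev_hash."""
        return all(self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))


ledger = FogLedger()
ledger.append_batch([{"sensor": "temp-01", "value": 21.4}])
print(ledger.verify())  # True until any block is modified
```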

    High-Performance Modelling and Simulation for Big Data Applications

    Get PDF
    This open access book was prepared as the Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to allow a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
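
    As a rough illustration of the coupling between High Performance Computing and Modelling and Simulation that the book discusses, the Python sketch below distributes a toy Monte Carlo simulation over parallel workers. The worker count and sample sizes are arbitrary assumptions, not figures from the compendium.

```python
# Toy data-intensive simulation (Monte Carlo estimate of pi) spread over
# parallel workers, illustrating HPC-backed Modelling and Simulation.
import random
from multiprocessing import Pool


def simulate(samples: int) -> int:
    """Count random points that fall inside the unit quarter circle."""
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits


if __name__ == "__main__":
    workers, samples_per_worker = 4, 250_000
    with Pool(workers) as pool:
        hits = sum(pool.map(simulate, [samples_per_worker] * workers))
    total = workers * samples_per_worker
    print(f"pi ~= {4 * hits / total:.4f} from {total} samples")
```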

    A Decade of Research in Fog computing: Relevance, Challenges, and Future Directions

    Full text link
    Recent developments in the Internet of Things (IoT) and real-time applications have led to unprecedented growth in connected devices and the data they generate. Traditionally, this sensor data is transferred to and processed in the cloud, and the control signals are sent back to the relevant actuators as part of the IoT applications. This cloud-centric IoT model results in increased latency and network load, and compromised privacy. To address these problems, Fog Computing was coined by Cisco in 2012, a decade ago; it utilizes proximal computational resources for processing the sensor data. Ever since its proposal, fog computing has attracted significant attention, and the research community has focused on addressing different challenges such as fog frameworks, simulators, resource management, placement strategies, quality-of-service aspects, and fog economics. However, after a decade of research, we still do not see large-scale deployments of public/private fog networks that can be utilized in realizing interesting IoT applications. In the literature, we only see pilot case studies and small-scale testbeds, with simulators used to demonstrate, at scale, the proposed models addressing the respective technical challenges. There are several reasons for this; most importantly, fog computing has not yet presented a clear business case for companies and participating individuals. This paper summarizes the technical, non-functional, and economic challenges that have been posing hurdles to the adoption of fog computing, consolidating them into different clusters. The paper also summarizes the relevant academic and industrial contributions in addressing these challenges and provides future research directions for realizing real-time fog computing applications, also considering emerging trends such as federated learning and quantum computing.
    Comment: Accepted for publication at Wiley Software: Practice and Experience journal
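
    One of the recurring challenges the survey consolidates is placement: deciding whether a task runs on a proximal fog node or in the cloud. The sketch below is a simplified, hedged illustration of such a latency-aware decision; the node names, latencies, and processing rates are invented for the example and do not come from the paper.

```python
# Latency-aware placement sketch: route a task to the first node (fog first,
# then cloud) that can still meet its deadline. All figures are illustrative.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    rtt_ms: float   # network round trip to the node
    mips: float     # available processing rate (million instructions / s)


def place(task_mi: float, deadline_ms: float, fog_nodes: list, cloud: Node) -> Node:
    """Pick the lowest-latency node that still meets the deadline."""
    candidates = sorted(fog_nodes, key=lambda n: n.rtt_ms) + [cloud]
    for node in candidates:
        completion_ms = node.rtt_ms + 1000.0 * task_mi / node.mips
        if completion_ms <= deadline_ms:
            return node
    return cloud  # best effort if nothing meets the deadline


fog = [Node("fog-gateway", rtt_ms=5, mips=2_000),
       Node("fog-microdc", rtt_ms=15, mips=8_000)]
cloud = Node("cloud-region", rtt_ms=80, mips=50_000)
print(place(task_mi=40, deadline_ms=50, fog_nodes=fog, cloud=cloud).name)
```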

    Metaverse: A Vision, Architectural Elements, and Future Directions for Scalable and Realtime Virtual Worlds

    Full text link
    With the emergence of Cloud computing, Internet of Things-enabled Human-Computer Interfaces, Generative Artificial Intelligence, and highly accurate Machine and Deep Learning recognition and predictive models, along with the post-Covid-19 proliferation of social networking and remote communication, the Metaverse has gained a lot of popularity. The Metaverse has the potential to extend the physical world using virtual and augmented reality so that users can interact seamlessly with the real and virtual worlds using avatars and holograms. It could change the way people interact on social media, collaborate in their work, perform marketing and business, teach, learn, and even access personalized healthcare. Several works in the literature examine the Metaverse in terms of hardware wearable devices and virtual reality gaming applications. However, the requirements for realizing the Metaverse in real time and at large scale have yet to be examined for the technology to be usable. To address this limitation, this paper presents the temporal evolution of Metaverse definitions and captures its evolving requirements, thereby providing insights into what the Metaverse demands. In addition to enabling technologies, we lay out architectural elements for scalable, reliable, and efficient Metaverse systems, a classification of existing Metaverse applications, and required future research directions.

    Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey

    Full text link
    Digital twin (DT) refers to a promising technique to digitally and accurately represent actual physical entities. One typical advantage of a DT is that it can be used not only to virtually replicate a system's detailed operations but also to analyze its current condition, predict future behaviour, and refine control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For adoption in human-centric systems, a novel concept called the human digital twin (HDT) has thus been proposed. In particular, HDT allows in silico representation of an individual human body with the ability to dynamically reflect molecular, physiological, emotional, and psychological status, as well as lifestyle evolution. This motivates the expected application of HDT in personalized healthcare (PH), where it can facilitate remote monitoring, diagnosis, prescription, surgery, and rehabilitation. However, despite this large potential, HDT faces substantial research challenges in different aspects and has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including the data acquisition layer, data communication layer, computation layer, data management layer, and data analysis and decision-making layer. Besides reviewing the key technologies for implementing such a networking architecture in detail, we conclude this survey by presenting future research directions for HDT.
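
    To make the layered networking architecture tangible, the sketch below chains placeholder functions for the acquisition, communication, computation, management, and analysis/decision layers over a toy heart-rate stream. The function names and the alert threshold are hypothetical illustrations, not the survey's specification.

```python
# Toy end-to-end pass through the five HDT layers described in the survey.
from statistics import mean


def acquire() -> list:
    """Data acquisition layer: wearable/sensor readings (stubbed here)."""
    return [{"metric": "heart_rate", "bpm": v} for v in (72, 75, 118, 121)]


def communicate(samples: list) -> list:
    """Data communication layer: in practice a network hop; here a pass-through."""
    return samples


def compute(samples: list) -> dict:
    """Computation layer: derive features describing the twin's current state."""
    rates = [s["bpm"] for s in samples]
    return {"mean_bpm": mean(rates), "max_bpm": max(rates)}


def manage(state: dict, store: dict) -> dict:
    """Data management layer: persist the latest twin state."""
    store["latest"] = state
    return store


def decide(state: dict) -> str:
    """Analysis and decision-making layer: flag anomalies for remote monitoring."""
    return "alert: elevated heart rate" if state["max_bpm"] > 110 else "nominal"


store = manage(compute(communicate(acquire())), {})
print(decide(store["latest"]))
```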

    Dynamic Shift from Cloud Computing to Industry 4.0: Eco-Friendly Choice or Climate Change Threat

    Get PDF
    Cloud computing utilizes thousands of Cloud Data Centres (CDCs) and fulfils the demands of end-users dynamically using new technologies and paradigms such as Industry 4.0 and the Internet of Things (IoT). With the emergence of Industry 4.0, the quality of cloud services has increased; however, CDCs consume a large amount of energy and produce a substantial carbon footprint, one of the major drivers of climate change. This chapter discusses the impacts of cloud developments on the climate and quantifies the carbon footprint of cloud computing in a warming world. Further, the dynamic transition from cloud computing to Industry 4.0 is discussed from an eco-friendly/climate-change-threat perspective. Finally, open research challenges and opportunities for prospective researchers are explored.
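
    As a back-of-the-envelope illustration of how such a footprint is commonly quantified (IT energy scaled by PUE, then multiplied by grid carbon intensity), consider the sketch below; the load, PUE, and carbon-intensity figures are illustrative assumptions, not values from the chapter.

```python
# Rough operational carbon estimate for a data centre:
# facility energy (kWh/year) = IT load x PUE x hours; emissions = energy x intensity.
def annual_co2_tonnes(it_load_kw: float, pue: float, grid_kg_co2_per_kwh: float) -> float:
    """Operational emissions in tonnes of CO2 per year."""
    facility_kwh = it_load_kw * pue * 24 * 365
    return facility_kwh * grid_kg_co2_per_kwh / 1000.0  # kg -> tonnes


# Example: a 1 MW IT load, PUE of 1.5, on a 0.4 kg CO2/kWh grid.
print(f"{annual_co2_tonnes(1_000, 1.5, 0.4):,.0f} tonnes CO2 per year")
```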