
    Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems

    The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved, the last referring to the capabilities of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as smart scenarios, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, supported by the unstoppable evolution of the technology. As edge devices become richer in functionality and smarter, embedding capabilities such as storage and processing as well as new functionalities such as decision making, data collection, forwarding, and sharing, a real need is emerging for the coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, and the open research challenges it raises, making the case for the real need for coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, and conventional clouds.
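    To make the coordination idea concrete, the following is a minimal sketch (not taken from the article) of how a layered F2C controller might place a task, preferring the nearest layer that satisfies latency and capacity constraints; the Resource fields and the three-layer split are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    layer: int         # 0 = edge device, 1 = fog aggregation, 2 = cloud (assumed layering)
    latency_ms: float  # round-trip latency as seen by the requesting device
    free_cpu: float    # spare compute, in abstract units

def place(task_cpu, max_latency_ms, resources):
    """Prefer the lowest (closest) layer that meets the task's latency and
    capacity constraints, falling back toward the cloud when fog is full."""
    feasible = [r for r in resources
                if r.free_cpu >= task_cpu and r.latency_ms <= max_latency_ms]
    return min(feasible, key=lambda r: (r.layer, r.latency_ms)) if feasible else None

pool = [Resource("gateway", 0, 5.0, 1.0),
        Resource("fog-node", 1, 20.0, 4.0),
        Resource("cloud-vm", 2, 120.0, 64.0)]
print(place(2.0, 50.0, pool))  # -> the fog node: gateway too small, cloud too far
```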

    Next challenges for adaptive learning systems

    Learning from evolving streaming data has become a 'hot' research topic in the last decade, and many adaptive learning algorithms have been developed. This research was stimulated by the rapidly growing amounts of industrial, transactional, sensor, and other business data that arrive in real time and need to be mined in real time. Under such circumstances, constant manual adjustment of models is inefficient, and with increasing amounts of data it becomes infeasible. Nevertheless, adaptive learning models are still rarely employed in business applications in practice. In light of rapidly growing, structurally rich 'big data', a new generation of parallel computing solutions and cloud computing services, as well as recent advances in portable computing devices, this article aims to identify the key research directions to be taken to bring adaptive learning closer to application needs. We identify six forthcoming challenges in designing and building adaptive learning (prediction) systems: making adaptive systems scalable, dealing with realistic data, improving usability and trust, integrating expert knowledge, taking into account various application needs, and moving from adaptive algorithms towards adaptive tools. These challenges are critical for evolving stream settings, as the process of model building needs to be fully automated and continuous.
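    As a toy illustration of the adaptive-systems theme (not an algorithm from the article), the sketch below pairs a trivial incremental model with a crude windowed drift check that rebuilds the model when recent error jumps; the window size and threshold are arbitrary assumptions.

```python
class OnlineMean:
    """Toy incremental model: predicts the running mean of the target."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict(self, x):
        return self.mean
    def update(self, x, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

def adaptive_learner(stream, window=100, drift_factor=3.0):
    """Learn from each example as it arrives; if the summed error over the
    latest window grows past drift_factor times that of the previous window,
    assume concept drift and rebuild the model from scratch."""
    model, recent, previous = OnlineMean(), [], None
    for x, y in stream:
        yield model.predict(x)
        recent.append(abs(y - model.predict(x)))
        model.update(x, y)
        if len(recent) == window:
            if previous is not None and sum(recent) > drift_factor * sum(previous) + 1e-6:
                model = OnlineMean()  # drift suspected: drop the stale model
            previous, recent = recent, []

# A synthetic stream whose target jumps at t = 500 (an abrupt drift).
data = ((t, 1.0 if t < 500 else 5.0) for t in range(1000))
predictions = list(adaptive_learner(data))
```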

    Social and Big Data Computing for Knowledge Management

    The proceedings from the eighth KMO conference represent the findings of this international meeting, which brought together researchers and developers from industry and academia to report on the latest scientific and technical advances in knowledge management in organizations. The conference provided an international forum for authors to present and discuss research on the role of knowledge management for innovative services in industries, to shed light on recent advances in social and big data computing for KM, and to identify future directions for researching the role of knowledge management in service innovation and how cloud computing can be used to address many of the issues currently facing KM in academic and industrial sectors.

    Data Collection in Smart Communities Using Sensor Cloud: Recent Advances, Taxonomy, and Future Research Directions

    The remarkable miniaturization of sensors has led to the production of massive amounts of data in smart communities. These data cannot be efficiently collected and processed in wireless sensor networks (WSNs) due to the weak communication capability of these networks. This drawback can be compensated for by amalgamating WSNs and cloud computing to obtain sensor clouds. In this article, we investigate, highlight, and report recent premier advances in sensor clouds with respect to data collection. We categorize and classify the literature by devising a taxonomy based on important parameters, such as objectives, applications, communication technology, collection types, discovery, data types, and classification. Moreover, a few prominent use cases are presented to highlight the role of sensor clouds in providing high computation capabilities. Furthermore, several open research challenges and issues, such as big data issues, deployment issues, data security, data aggregation, dissemination of control messages, and on-time delivery, are discussed. Future research directions are also provided.
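    To illustrate the WSN-to-sensor-cloud pattern the survey describes, here is a hypothetical sketch of gateway-side aggregation before a cloud upload; the class names, the mean-based summary, and the print-as-upload placeholder are all assumptions, not the article's design.

```python
import json, statistics, time

class SensorNode:
    """Constrained node: samples locally and hands off small batches."""
    def __init__(self, node_id):
        self.node_id, self.buffer = node_id, []
    def sample(self, value):
        self.buffer.append({"node": self.node_id, "t": time.time(), "v": value})
    def flush(self):
        batch, self.buffer = self.buffer, []
        return batch

def aggregate(batches):
    """Gateway-side aggregation: summarize per node before the cloud upload,
    trading raw fidelity for far less radio and backhaul traffic."""
    per_node = {}
    for reading in (r for b in batches for r in b):
        per_node.setdefault(reading["node"], []).append(reading["v"])
    return {node: {"count": len(vs), "mean": statistics.mean(vs)}
            for node, vs in per_node.items()}

def upload(payload):
    # Placeholder for the cloud API call (e.g., an HTTPS POST in practice).
    print(json.dumps(payload))

nodes = [SensorNode(i) for i in range(3)]
for step in range(10):
    for n in nodes:
        n.sample(20.0 + n.node_id + 0.1 * step)  # fake temperature readings
upload(aggregate([n.flush() for n in nodes]))
```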

    The DeepHealth Toolkit: A key European free and open-source software for deep learning and computer vision ready to exploit heterogeneous HPC and cloud architectures

    At the present time, we are immersed in the convergence between Big Data, High-Performance Computing and Artificial Intelligence. Technological progress in these three areas has accelerated in recent years, forcing different players, such as software companies and stakeholders, to move quickly. The European Union is dedicating substantial resources to maintaining its relevant position in this scenario, funding projects to implement large-scale pilot testbeds that combine the latest advances in Artificial Intelligence, High-Performance Computing, Cloud and Big Data technologies. The DeepHealth project is one such example, focused on the health sector; its main outcome is the DeepHealth toolkit, a European unified framework that offers deep learning and computer vision capabilities, completely adapted to exploit underlying heterogeneous High-Performance Computing, Big Data and cloud architectures, and ready to be integrated into any software platform to facilitate the development and deployment of new applications for specific problems in any sector. This toolkit is intended to be one of the European contributions to the field of AI. This chapter introduces the toolkit with its main components and complementary tools, providing a clear view to facilitate and encourage its adoption and wide use by the European community of developers of AI-based solutions and data scientists working in the healthcare sector and beyond. This chapter describes work undertaken in the context of the DeepHealth project, "Deep-Learning and HPC to Boost Biomedical Applications for Health", which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825111.

    Spatial resolution enhancement using deep learning and data augmentation for cloud-based infrastructure

    Satellite data have fundamentally changed how we perceive and understand the world we live in. The amount of data produced is rapidly increasing, and classical computing resources and processing tools are no longer sufficient. These massive amounts of data require huge storage in addition to advanced computing capacity in order to allow users to benefit from the derived datasets. Cloud computing has provided the required storage and computing capacity at a scalable level, adding or removing resources according to requirements. Simultaneously, recent advances in data processing techniques such as Deep Learning (DL) have paved the way to integrated solutions for Earth Observation (EO) big data understanding. By bringing together a unique combination of computing capacity, ultra-fast data storage and advanced data processing techniques, our ability to derive useful insights will be revolutionised. This thesis focuses on harnessing cloud computing and deep learning capabilities to enhance the spatial resolution of satellite imagery. In particular, the thesis presents cloud computing architectures to accommodate satellite image processing services. A conceptual data model was developed to enable utilising cloud computing resources with EO big data in near real time. Moreover, a state-of-the-art super-resolution algorithm (SRCNN) was adapted and tested in detail for the satellite image domain. In addition, a novel fusion-based data augmentation approach was developed to boost super-resolution accuracy. To evaluate the super-resolution accuracy in a real-life application, land-cover classification was adopted to assess the agreement between super-resolved Landsat-8 data and crowd-sourced data collected using the Google Earth interface. The accuracy achieved opens a wide field of research on deep learning and data augmentation in the satellite image super-resolution domain.
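    For reference, SRCNN follows the well-known three-layer design of Dong et al. (9-1-5 kernels with 64 and 32 filters, applied after bicubic upsampling); the PyTorch sketch below reflects that published design, while the training loop, the thesis's adaptations, and the fusion-based augmentation are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Three-layer SRCNN: patch extraction -> non-linear mapping ->
    reconstruction, applied to a bicubically upscaled input."""
    def __init__(self, channels=3):
        super().__init__()
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.map = nn.Conv2d(64, 32, kernel_size=1)
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, low_res, scale=2):
        # SRCNN operates on an image already upsampled to the target size.
        x = F.interpolate(low_res, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.reconstruct(x)

model = SRCNN()
fake_patch = torch.rand(1, 3, 32, 32)        # stand-in for a Landsat-8 tile
super_resolved = model(fake_patch, scale=2)  # -> shape (1, 3, 64, 64)
```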

    Deep Learning for Edge Computing Applications: A State-of-the-Art Survey

    With the booming development of the Internet of Things (IoT) and communication technologies such as 5G, our future world is envisioned as an interconnected entity in which billions of devices will provide uninterrupted service to our daily lives and to industry. Meanwhile, these devices will generate massive amounts of valuable data at the network edge, calling for not only instant data processing but also intelligent data analysis in order to fully unleash the potential of edge big data. Neither traditional cloud computing nor on-device computing can sufficiently address this problem, due to high latency and limited computation capacity, respectively. Fortunately, the emerging edge computing paradigm sheds light on the issue by pushing data processing from the remote network core to the local network edge, remarkably reducing latency and improving efficiency. In addition, recent breakthroughs in deep learning have greatly advanced data processing capacity, enabling a thrilling development of novel applications such as video surveillance and autonomous driving. The convergence of edge computing and deep learning is believed to bring new possibilities to both interdisciplinary research and industrial applications. In this article, we provide a comprehensive survey of the latest efforts on deep-learning-enabled edge computing applications and, in particular, offer insights on how to leverage deep learning advances to facilitate edge applications in four domains: smart multimedia, smart transportation, smart city, and smart industry. We also highlight the key research challenges and promising research directions therein. We believe this survey will inspire more research and contributions in this promising field.
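    The latency argument for edge computing can be made concrete with a back-of-the-envelope placement model (our illustration, not from the survey): total latency is upload time plus compute time plus network round trip, and the edge often wins by trading some compute power for proximity. All numbers below are invented.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    gflops: float  # effective compute available to this task
    rtt_ms: float  # network round-trip time from the device (0 = on-device)
    mbps: float    # uplink bandwidth to the site

def end_to_end_ms(site: Site, gflop_cost: float, input_mb: float) -> float:
    """Total latency = upload time + compute time + round trip."""
    upload = 0.0 if site.rtt_ms == 0 else input_mb * 8 / site.mbps * 1000
    return upload + gflop_cost / site.gflops * 1000 + site.rtt_ms

sites = [Site("device", 0.5, 0.0, 0.0),
         Site("edge", 20.0, 5.0, 100.0),
         Site("cloud", 200.0, 80.0, 50.0)]
# A 4 GFLOP inference over a 2 MB frame: the edge wins this trade-off,
# beating the slow device and the distant cloud.
best = min(sites, key=lambda s: end_to_end_ms(s, gflop_cost=4.0, input_mb=2.0))
print(best.name)
```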

    Analyzing the Hardware-Software Implications of Multi-modal DNN Workloads using MMBench

    The explosive growth of various types of big data and advances in AI technologies have catalyzed a new class of applications: multi-modal DNNs. Multi-modal DNNs are capable of interpreting and reasoning about information from multiple modalities, making them more applicable to real-world AI scenarios. In recent research, multi-modal DNNs have outperformed the best uni-modal DNNs in a wide range of applications, from traditional multimedia to emerging autonomous systems. However, despite their importance and superiority, very limited research attention has been devoted to understanding the characteristics of multi-modal DNNs and their implications for current computing software and hardware platforms. To facilitate research and advance the understanding of these multi-modal DNN workloads, we first present MMBench, an open-source benchmark suite consisting of a set of real-world multi-modal DNN workloads with relevant performance metrics for evaluation. We then use MMBench to conduct an in-depth analysis of the characteristics of multi-modal DNNs. We study their implications for application and programming frameworks, operating and scheduling systems, and execution hardware. Finally, we conduct a case study and extend our benchmark to edge devices. We hope that our work can provide guidance for future software/hardware design and optimization to underpin multi-modal DNNs on both cloud and edge computing platforms.
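    As a hypothetical illustration of per-stage workload characterization (not the MMBench API), the sketch below times the two unimodal encoders and the fusion head of a toy multi-modal network separately; a real benchmark would use hardware counters and a proper profiler rather than wall-clock timing.

```python
import time
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    """Toy two-branch multi-modal DNN: image encoder + text encoder + fusion."""
    def __init__(self):
        super().__init__()
        self.image_enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                       nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                                       nn.Flatten(), nn.Linear(16, 64))
        self.text_enc = nn.Sequential(nn.Embedding(1000, 64),
                                      nn.Flatten(), nn.Linear(64 * 16, 64))
        self.fusion = nn.Linear(128, 10)

    def forward(self, image, tokens):
        return self.fusion(torch.cat([self.image_enc(image),
                                      self.text_enc(tokens)], dim=-1))

def time_stage(fn, *args, repeats=20):
    """Average wall-clock latency of one stage, in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        with torch.no_grad():
            out = fn(*args)
    return out, (time.perf_counter() - start) / repeats * 1000

net = MultiModalNet().eval()
image, tokens = torch.rand(8, 3, 64, 64), torch.randint(0, 1000, (8, 16))
img_feat, t_img = time_stage(net.image_enc, image)
txt_feat, t_txt = time_stage(net.text_enc, tokens)
_, t_fuse = time_stage(net.fusion, torch.cat([img_feat, txt_feat], dim=-1))
print(f"image {t_img:.2f} ms  text {t_txt:.2f} ms  fusion {t_fuse:.2f} ms")
```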

    Sketch of Big Data Real-Time Analytics Model

    Big Data has drawn huge attention from researchers in information sciences and from decision makers in governments and enterprises, and there is a great deal of potential and highly useful value hidden in these huge volumes of data. Data is the new oil, but unlike oil, data can be refined further to create even more value. A new scientific paradigm has therefore been born: data-intensive scientific discovery, also known as Big Data. The growing volume of real-time data requires new techniques and technologies to discover the insights hidden within it. In this paper we introduce the Big Data real-time analytics model as a new technique. We discuss and compare several Big Data technologies for real-time processing, along with various challenges and issues in adopting Big Data. Real-time Big Data analysis based on a cloud computing approach is our future research direction.
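    As one minimal illustration of real-time analytics (our sketch, not the model proposed in the paper), the following groups a time-ordered event stream into fixed tumbling windows and emits one aggregate per window as the stream advances.

```python
from collections import defaultdict

def tumbling_windows(events, window_s=5.0):
    """Group a time-ordered (timestamp, key) stream into fixed,
    non-overlapping windows, emitting per-key counts as each window closes."""
    window_start, counts = None, defaultdict(int)
    for ts, key in events:
        if window_start is None:
            window_start = ts
        while ts >= window_start + window_s:  # window closed: emit its result
            yield window_start, dict(counts)
            window_start += window_s
            counts = defaultdict(int)
        counts[key] += 1
    if counts:                                # flush the final partial window
        yield window_start, dict(counts)

# Simulated click stream: (timestamp, page) pairs arriving in time order.
stream = [(t * 0.8, "home" if t % 3 else "cart") for t in range(25)]
for start, agg in tumbling_windows(iter(stream)):
    print(f"[{start:5.1f}s] {agg}")
```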