
    Intelligent Management and Efficient Operation of Big Data

    This chapter details how Big Data can be used and implemented in networking and computing infrastructures. Specifically, it addresses three main aspects: the timely extraction of relevant knowledge from heterogeneous, often unstructured, large data sources; the enhancement of the performance of processing and networking (cloud) infrastructures, which are the most important foundational pillars of Big Data applications and services; and novel ways to efficiently manage network infrastructures with high-level composed policies for supporting the transmission of large amounts of data with distinct requirements (video vs. non-video). A case study involving an intelligent management solution to route data traffic with diverse requirements in a wide-area Internet Exchange Point is presented, discussed in the context of Big Data, and evaluated. Comment: In book Handbook of Research on Trends and Future Directions in Big Data and Web Intelligence, IGI Global, 201

    Security and Privacy Issues of Big Data

    This chapter reviews the most important aspects of how computing infrastructures should be configured and intelligently managed to fulfil the security requirements most notably demanded by Big Data applications. One of them is privacy. It is a pertinent aspect to address because users share more and more personal data and content through their devices and computers with social networks and public clouds, so a secure framework for social networks is a very hot research topic. This topic is addressed, with case studies, in one of the two sections of the current chapter. In addition, traditional mechanisms to support security, such as firewalls and demilitarized zones, are not suitable for computing systems that support Big Data. Software-Defined Networking (SDN) is an emergent management solution that could become a convenient mechanism to implement security in Big Data systems, as we show through a second case study at the end of the chapter. The chapter also discusses current relevant work and identifies open issues. Comment: In book Handbook of Research on Trends and Future Directions in Big Data and Web Intelligence, IGI Global, 201

    Big-Data-Driven Materials Science and its FAIR Data Infrastructure

    This chapter addresses the fourth paradigm of materials research -- big-data-driven materials science. Its concepts and state of the art are described, and its challenges and opportunities are discussed. For furthering the field, Open Data and all-embracing sharing, an efficient data infrastructure, and the rich ecosystem of computer codes used in the community are of critical importance. For shaping this fourth paradigm and contributing to the development or discovery of improved and novel materials, data must be what is now called FAIR -- Findable, Accessible, Interoperable, and Re-purposable/Re-usable. This sets the stage for advances in methods from artificial intelligence that operate on large data sets to find trends and patterns that cannot be obtained from individual calculations, nor even directly from high-throughput studies. Recent progress is reviewed and demonstrated, and the chapter concludes with a forward-looking perspective addressing important, as yet unsolved, challenges. Comment: submitted to the Handbook of Materials Modeling (eds. S. Yip and W. Andreoni), Springer 2018/201

    Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications

    In an era when the market segment of the Internet of Things (IoT) tops the chart in various business reports, it is widely envisioned that the field of medicine stands to gain a large benefit from the explosion of wearables and internet-connected sensors that surround us, which acquire and communicate unprecedented data on symptoms, medication, food intake, and daily-life activities impacting one's health and wellness. However, IoT-driven healthcare has to overcome many barriers: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) the communication of the continuously collected data is not only costly but also energy-hungry; 4) operating and maintaining the sensors directly from the cloud servers are non-trivial tasks. This book chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog Computing is a service-oriented intermediate layer in IoT, providing the interfaces between the sensors and cloud servers for facilitating connectivity, data transfer, and a queryable local database. The centerpiece of Fog Computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors and offers efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computing, storage, and communication of various medical data, such as pathological speech data of individuals with speech disorders, phonocardiogram (PCG) signals for heart rate estimation, and electrocardiogram (ECG)-based Q, R, S detection. Comment: 29 pages, 30 figures, 5 tables.
Keywords: Big Data, Body Area Network, Body Sensor Network, Edge Computing, Fog Computing, Medical Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment, Wearable Devices, Chapter in Handbook of Large-Scale Distributed Computing in Smart Healthcare (2017), Springer
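    The on-node analytics described above (e.g. ECG-based R detection feeding heart-rate estimation) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the chapter's actual algorithm (production detectors are typically Pan-Tompkins-style with band-pass filtering); function names, the threshold, and the synthetic signal are all illustrative.

    ```python
    # Hypothetical sketch of the kind of lightweight R-peak detection a fog
    # node (e.g. a Raspberry Pi) might run locally, uploading only derived
    # heart-rate values instead of raw ECG samples.

    def detect_r_peaks(signal, fs, threshold=0.6, refractory_s=0.25):
        """Return sample indices of R peaks: local maxima above an amplitude
        threshold, separated by at least one refractory period."""
        refractory = int(refractory_s * fs)
        peaks, last = [], -refractory
        for i in range(1, len(signal) - 1):
            if (signal[i] > threshold
                    and signal[i] >= signal[i - 1]
                    and signal[i] >= signal[i + 1]
                    and i - last >= refractory):
                peaks.append(i)
                last = i
        return peaks

    def heart_rate_bpm(peaks, fs):
        """Mean heart rate (beats per minute) from successive R-R intervals."""
        if len(peaks) < 2:
            return None
        rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
        return 60.0 / (sum(rr) / len(rr))

    # Synthetic 10-second trace at fs = 250 Hz with one unit-amplitude
    # "R spike" per second, i.e. a 60 bpm rhythm.
    fs = 250
    signal = [0.0] * (10 * fs)
    for beat in range(10):
        signal[beat * fs + 10] = 1.0

    peaks = detect_r_peaks(signal, fs)
    print(len(peaks), heart_rate_bpm(peaks, fs))  # 10 60.0
    ```

    The design point is the fog trade-off the chapter motivates: computing the R-R intervals at the edge reduces a 250 Hz raw stream to a few bytes per beat before anything crosses the network.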

    Chapter 19 Unsupervised Methods

    The Handbook of Computational Social Science is a comprehensive reference source for scholars across multiple disciplines. It outlines key debates in the field, showcases novel statistical modeling and machine learning methods, and draws on specific case studies to demonstrate the opportunities and challenges of CSS approaches. The Handbook is divided into two volumes written by outstanding, internationally renowned scholars in the field. This second volume focuses on foundations and advances in data science, statistical modeling, and machine learning. It covers a range of key issues, including the management of big data in terms of record linkage, streaming, and missing data. Further foci include machine learning, agent-based and statistical modeling, data quality in relation to digital trace and textual data, and probability, non-probability, and crowdsourced samples. The volume not only makes major contributions to the consolidation of this growing research field but also encourages growth in new directions. With its broad coverage of perspectives (theoretical, methodological, computational), international scope, and interdisciplinary approach, this important resource is essential reading for advanced undergraduates, postgraduates, and researchers engaging with computational methods across the social sciences, as well as those within the scientific and engineering sectors

    Digital Twins: Potentials, Ethical Issues, and Limitations

    After Big Data and Artificial Intelligence (AI), the subject of Digital Twins has emerged as another promising technology, advocated, built, and sold by various IT companies. The approach aims to produce highly realistic models of real systems. In the case of dynamically changing systems, such digital twins would have a life, i.e. they would change their behaviour over time and, in perspective, make decisions like their real counterparts, or so the vision goes. In contrast to animated avatars, however, which only imitate the behaviour of real systems, like deep fakes, digital twins aim to be accurate "digital copies", i.e. "duplicates" of reality, which may interact with reality and with their physical counterparts. This chapter explores possible applications and implications, limitations, and threats. Comment: 22 pages, in Andrej Zwitter and Oskar Gstrein, Handbook on the Politics and Governance of Big Data and Artificial Intelligence, Edward Elgar [forthcoming] (Handbooks in Political Science series)

    Scientific Realism and Primordial Cosmology

    We discuss scientific realism from the perspective of modern cosmology, especially primordial cosmology: i.e. the cosmological investigation of the very early universe. We first (Section 2) state our allegiance to scientific realism and discuss what insights about it cosmology might yield, as against "just" supplying scientific claims that philosophers can then evaluate. In particular, we discuss the idea of laws of cosmology and the limitations on ascertaining the global structure of spacetime. Then we review some of what is now known about the early universe (Section 3): meaning, roughly, from a thousandth of a second after the Big Bang onwards(!). The rest of the paper takes up two issues about primordial cosmology, i.e. the very early universe, where "very early" means, roughly, much earlier (logarithmically) than one second after the Big Bang: say, less than $10^{-11}$ seconds. Both issues illustrate that familiar philosophical threat to scientific realism, the under-determination of theory by data---on a cosmic scale. The first issue (Section 4) concerns the difficulty of observationally probing the very early universe. More specifically, the difficulty is to ascertain details of the putative inflationary epoch. The second issue (Section 5) concerns difficulties in confirming a cosmological theory that postulates a multiverse, i.e. a set of domains (universes) whose inhabitants (if any) cannot directly observe, or otherwise causally interact with, other domains. This again concerns inflation, since many inflationary models postulate a multiverse. For all these issues, it will be clear that much remains unsettled, as regards both physics and philosophy. But we will maintain that these remaining controversies do not threaten scientific realism. Comment: 52 pages. An abridged version will appear in "The Routledge Handbook of Scientific Realism", ed. Juha Saatsi

    Interoperability in IoT

    Interoperability refers to the ability of IoT systems and components to communicate and share information among themselves. This crucial feature is key to unlocking the IoT paradigm's full potential, including immense technological, economic, and social benefits. Interoperability is currently a major challenge in IoT, mainly due to the lack of a reference standard and the vast heterogeneity of IoT systems. IoT interoperability is also of significant importance in big data analytics because it substantially eases data processing. This chapter analyzes the critical importance of IoT interoperability, its different types, the challenges to face, diverse use cases, and prospective interoperability solutions. Given that interoperability is a complex concept that involves multiple aspects and elements of IoT, for a deeper insight it is studied across the different levels of IoT systems. Furthermore, interoperability is also re-examined from a global approach among platforms and systems. González-Usach, R.; Yacchirema-Vargas, DC.; Julián-Seguí, M.; Palau Salvador, CE. (2019). Interoperability in IoT. Handbook of Research on Big Data and the IoT. 149-173. http://hdl.handle.net/10251/150250
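    The data-level interoperability problem the abstract describes can be sketched in miniature: two platforms report the same physical quantity with different field names and units, and a translation layer maps both into one common schema so downstream big-data processing is uniform. The payload formats and field names here are invented for illustration, not taken from the chapter.

    ```python
    # Hypothetical sketch of semantic/syntactic translation between two
    # heterogeneous IoT platforms. Schemas are illustrative assumptions.

    def normalize(reading):
        """Map a platform-specific reading to {sensor_id, temp_c, ts}."""
        if "tempF" in reading:                 # "platform A": Fahrenheit, flat keys
            return {"sensor_id": reading["id"],
                    "temp_c": (reading["tempF"] - 32) * 5 / 9,
                    "ts": reading["timestamp"]}
        if "temperature" in reading:           # "platform B": Celsius, other keys
            return {"sensor_id": reading["device"],
                    "temp_c": reading["temperature"],
                    "ts": reading["time"]}
        raise ValueError("unknown payload format")

    a = {"id": "s1", "tempF": 77.0, "timestamp": 1700000000}
    b = {"device": "s2", "temperature": 25.0, "time": 1700000005}
    print([normalize(r)["temp_c"] for r in (a, b)])  # [25.0, 25.0]
    ```

    Real deployments replace this ad hoc mapping with shared information models or ontologies (the "reference standard" whose absence the chapter highlights), but the role of the layer is the same: one canonical schema at the analytics boundary.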