
    The Role of Technology in the Learning Process: A Decision Tree-Based Model Using Machine Learning

    Machine learning approaches can capture complex, non-linear relationships between input and response variables in the Basic Education Development Index (IDEB) database and reveal indicators that help monitor the quality of education. This paper uses extensive experimental databases from public schools, in a case study from Brazil, to analyze data such as the physical and technological structure of schools and teacher profiles. The research proposes decision tree-based machine learning models to predict which attributes contribute most positively to IDEB. It employs the SHapley Additive exPlanations (SHAP) approach to rank the input variables and identify those with the greatest impact on the final model; a non-probabilistic sample was used, drawn from three official databases covering 450 schools and 617 teachers. Results show that the number of computers per student, teachers’ service time, broadband internet access, investments in technology training for teachers, and computer labs in schools are the variables with the greatest effect on IDEB. The model shows high prediction accuracy on test data (MSE = 0.2094 and R² = 0.8991). This article contributes to improving efficiency in monitoring the parameters used to measure the quality of the teaching-learning process. Doi: 10.28991/ESJ-2022-SIED-020
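    As an illustration of the kind of pipeline the abstract describes (not the authors' code), the sketch below fits a scikit-learn decision-tree regressor on school-level indicators and ranks them with SHAP; the CSV file and column names are hypothetical placeholders for an IDEB-style dataset.

```python
# Minimal sketch (not the authors' code): fit a decision-tree regressor on
# school-level indicators and rank features with SHAP. The CSV file and the
# column names are hypothetical placeholders for an IDEB-style dataset.
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, r2_score

df = pd.read_csv("schools.csv")  # hypothetical input file
features = ["computers_per_student", "teacher_service_time",
            "broadband_access", "tech_training_investment", "computer_lab"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["ideb_score"], test_size=0.3, random_state=42)

model = DecisionTreeRegressor(max_depth=5, random_state=42).fit(X_train, y_train)
pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred), "R2:", r2_score(y_test, pred))

# SHAP assigns a per-feature contribution to each prediction; the mean |SHAP|
# value ranks which indicators drive the predicted IDEB score the most.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, plot_type="bar")
```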

    Fog of everything: energy-efficient networked computing architectures, research challenges, and a case study

    Fog computing (FC) and the Internet of Everything (IoE) are two emerging technological paradigms that, to date, have been considered stand-alone. However, because of their complementary features, we expect that their integration can foster a number of computing- and network-intensive pervasive applications under the incoming realm of the future Internet. Motivated by this consideration, the goal of this position paper is fivefold. First, we review the technological attributes and platforms proposed in the current literature for the stand-alone FC and IoE paradigms. Second, by leveraging some use cases as illustrative examples, we point out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming, while introducing new open issues. Third, we propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, that integrates FC and IoE, and we then detail the main building blocks and services of the corresponding technological platform and protocol stack. Fourth, as a proof of concept, we present the simulated energy-delay performance of a small-scale FoE prototype, namely the V-FoE prototype, and compare it with the corresponding performance of a benchmark technological platform, the V-D2D one, which exploits only device-to-device links to establish inter-thing 'ad hoc' communication. Last, we position the proposed FoE paradigm with respect to a spectrum of seemingly related recent research projects.

    Fog-Assisted Cloud Paradigm for Accessibility and Collaboration in Genomic Data Analysis

    Next-generation sequencing is growing rapidly and requires large-scale computing resources to handle the enormous amount of data produced. The Cloud computing paradigm easily handles huge data volumes, but its central problem is transferring those volumes to and from the cloud machines, owing to the limited bandwidth that stems from the centralized nature of the Cloud computing architecture, which is located far from the users. An architecture in which computing power is distributed more evenly across the network is one way to combat this problem. The architecture must bring processing capacity to the edge of the network, closer to the source of the data. To this end, Fog computing offers a promising solution for moving computational capabilities closer to the generated data and is set to gain traction in genomic research. We propose a new model called Collaborative-Fog (Co-Fog) that adopts the Fog and Cloud computing paradigms to manage large genomic datasets and to enable an understanding of how stakeholders can manage interaction and collaboration. This paper describes the Co-Fog model, which promises higher throughput, energy efficiency, lower latency, faster response time, scalability, and better localized accuracy for future large-scale collaborations in genomics.
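    To make the bandwidth argument concrete, the toy calculation below compares end-to-end times for processing a sequencing batch at a nearby Fog site versus a distant cloud. It is not part of the Co-Fog model; all link rates, compute rates, and batch sizes are illustrative assumptions.

```python
# Minimal sketch (not the Co-Fog model itself): the bandwidth argument from the
# abstract as simple arithmetic. When the WAN uplink to the cloud is narrow, the
# transfer term dominates and processing at the Fog edge finishes sooner, even
# though the cloud computes faster. All numbers below are assumptions.

def end_to_end_time(data_gb, uplink_gbps, compute_rate_gb_per_s, rtt_s=0.0):
    """Transfer time to the processing site plus processing time at that site."""
    return rtt_s + data_gb * 8 / uplink_gbps + data_gb / compute_rate_gb_per_s

for batch in (1, 50, 500):  # GB of sequencing reads (illustrative)
    # Fog: high local bandwidth, modest compute; Cloud: narrow WAN, fast compute.
    fog = end_to_end_time(batch, uplink_gbps=10.0, compute_rate_gb_per_s=0.5, rtt_s=0.005)
    cloud = end_to_end_time(batch, uplink_gbps=0.1, compute_rate_gb_per_s=5.0, rtt_s=0.08)
    print(f"{batch:4d} GB  fog: {fog:9.1f} s   cloud: {cloud:9.1f} s")
```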

    Design and energy-efficient resource management of virtualized networked Fog architectures for the real-time support of IoT applications

    With the incoming 5G access networks, it is forecasted that Fog computing (FC) and the Internet of Things (IoT) will converge onto the Fog-of-IoT paradigm. Since the FC paradigm spreads, by design, networking and computing resources over the wireless access network, it can enable the support of computing-intensive and delay-sensitive streaming applications in the energy-limited wireless IoT realm. Motivated by this consideration, the goal of this paper is threefold. First, it provides a motivating study of the main “killer” application areas envisioned for the considered Fog-of-IoT paradigm. Second, it presents the design of a container-based virtualized networked computing architecture. The proposed architecture operates at the middleware layer and exploits the native capability of the container engines, so as to allow dynamic real-time scaling of the available computing-plus-networking virtualized resources. Third, the paper presents a low-complexity, penalty-aware, bin packing-type heuristic for the dynamic management of the resulting virtualized computing-plus-networking resources. The proposed heuristic pursues the joint minimization of the networking-plus-computing energy by adaptively scaling up/down the processing speeds of the virtual processors and the transport throughputs of the instantiated TCP/IP virtual connections, while guaranteeing hard (i.e., deterministic) upper bounds on the per-task computing-plus-networking delays. Finally, the actual energy performance-versus-implementation complexity trade-off of the proposed resource manager is numerically tested under both static and mobile wireless Fog-of-IoT scenarios, and comparisons against the corresponding performances of some state-of-the-art benchmark resource managers and device-to-device edge computing platforms are also carried out. © 2018, Springer Science+Business Media, LLC, part of Springer Nature
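    The sketch below is a generic, simplified stand-in for the kind of penalty-aware bin-packing placement the abstract mentions (not the paper's heuristic): a first-fit-decreasing pass that, for each container workload, picks the feasible server with the smallest marginal energy penalty. The cost model and all numbers are assumptions.

```python
# Minimal sketch (not the paper's exact heuristic): greedy first-fit-decreasing
# placement of container workloads onto servers, where each candidate placement
# is charged an illustrative energy penalty (an idle cost for switching a server
# on plus a load-dependent term). The cost model and numbers are assumptions.
from dataclasses import dataclass, field

@dataclass
class Server:
    capacity: float                 # normalized CPU capacity
    idle_power: float               # cost of keeping the server powered on
    dyn_power: float                # cost per unit of utilized capacity
    load: float = 0.0
    tasks: list = field(default_factory=list)

def placement_penalty(server: Server, demand: float) -> float:
    """Extra energy incurred by adding `demand` to this server."""
    turn_on = server.idle_power if server.load == 0 else 0.0
    return turn_on + server.dyn_power * demand

def pack(tasks: list, servers: list) -> None:
    # Sort tasks by decreasing demand (classic FFD), then give each task to the
    # feasible server with the smallest marginal energy penalty.
    for demand in sorted(tasks, reverse=True):
        feasible = [s for s in servers if s.load + demand <= s.capacity]
        if not feasible:
            raise RuntimeError("no feasible placement for demand %.2f" % demand)
        best = min(feasible, key=lambda s: placement_penalty(s, demand))
        best.load += demand
        best.tasks.append(demand)

servers = [Server(capacity=1.0, idle_power=0.3, dyn_power=1.0) for _ in range(4)]
pack([0.6, 0.5, 0.4, 0.3, 0.2], servers)
print([round(s.load, 2) for s in servers])
```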

    FLAPS: bandwidth and delay-efficient distributed data searching in fog-supported P2P content delivery networks

    Due to the growing interest in multimedia content by mobile users, designing bandwidth- and delay-efficient distributed algorithms for data searching over wireless (possibly mobile) “ad hoc” Peer-to-Peer (P2P) Content Delivery Networks (CDNs) is a topic of current interest. This is mainly due to the limited computing-plus-communication resources featuring state-of-the-art wireless P2P CDNs. In principle, an effective means to cope with this limitation is to empower traditional P2P CDNs with distributed Fog nodes. Motivated by this consideration, the goal of this paper is twofold. First, we propose and describe the main building blocks of a hybrid (i.e., mixed infrastructure and “ad hoc”) Fog-supported P2P architecture for wireless content delivery, namely the Fog-Caching P2P architecture. It exploits the (possibly time-varying) topological information locally available at the serving Fog nodes in order to speed up the data searching operations performed by the served peers. Second, we propose a bandwidth- and delay-efficient, distributed and adaptive probabilistic search algorithm that relies on the learning automata paradigm, namely the Fog-supported Learning Automata Adaptive Probabilistic Search (FLAPS) algorithm. The main feature of the FLAPS algorithm is the exploitation of the local topology information provided by the serving Fog nodes and the current status of the collaborating peers, in order to run a suitably distributed reinforcement algorithm for the adaptive discovery of peer-to-peer and peer-to-fog minimum-hop routes. The performance of the proposed FLAPS algorithm is numerically evaluated in terms of Success Rate, Hit-per-Query, Message-per-Query, Response Delay and Message Duplication Factor over a number of randomly generated benchmark CDN topologies. Furthermore, in order to corroborate the actual effectiveness of the FLAPS algorithm, extensive performance comparisons are carried out with some state-of-the-art searching algorithms, namely the Adaptive Probabilistic Search, Improved Adaptive Probabilistic Search and Random Walk algorithms.
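    As a rough illustration of the learning-automata ingredient in FLAPS (not the published algorithm), the sketch below shows a linear reward-inaction update that a peer could use to bias its next-hop choice toward neighbours that recently resolved queries. The neighbour identifiers and the reward signal are assumptions.

```python
# Minimal sketch (not the FLAPS implementation): a linear reward-inaction
# learning automaton that a peer could use to pick the next hop for a query.
# Neighbour names and the reward signal (query hit / miss) are assumptions.
import random

class NextHopAutomaton:
    def __init__(self, neighbours, learning_rate=0.1):
        self.neighbours = list(neighbours)
        self.lr = learning_rate
        # Start with a uniform probability of forwarding a query to each neighbour.
        self.p = {n: 1.0 / len(self.neighbours) for n in self.neighbours}

    def choose(self):
        # Sample a neighbour according to the current probability vector.
        r, acc = random.random(), 0.0
        for n in self.neighbours:
            acc += self.p[n]
            if r <= acc:
                return n
        return self.neighbours[-1]

    def reward(self, chosen):
        # Linear reward-inaction: on a query hit, shift probability mass toward
        # the neighbour that produced the hit; on a miss, leave probabilities as-is.
        for n in self.neighbours:
            if n == chosen:
                self.p[n] += self.lr * (1.0 - self.p[n])
            else:
                self.p[n] *= (1.0 - self.lr)

automaton = NextHopAutomaton(["peer_a", "peer_b", "fog_node_1"])
hop = automaton.choose()
if hop == "fog_node_1":        # pretend the Fog node resolved the query
    automaton.reward(hop)
print(automaton.p)
```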

    Energy performance of heuristics and meta-heuristics for real-time joint resource scaling and consolidation in virtualized networked data centers

    In this paper, we explore on a comparative basis the performance suitability of meta-heuristics, sometimes denoted as random search algorithms, and greedy-type heuristics for the energy-saving joint dynamic scaling and consolidation of the network-plus-computing resources hosted by networked virtualized data centers, when the target is the support of real-time streaming-type applications. For this purpose, the energy and delay performances of the Tabu Search (TS), Simulated Annealing (SA) and Evolutionary Strategy (ES) meta-heuristics are tested and compared with the corresponding ones of Best-Fit Decreasing-type heuristics, in order to give insight into the resulting performance-versus-implementation complexity trade-offs. In principle, the considered meta-heuristics and heuristics are general formal approaches that can be applied to large classes of (typically non-convex and mixed-integer) optimization problems. However, especially for the meta-heuristics, a main challenge is to design them to properly address the real-time joint computing-plus-networking resource consolidation and scaling optimization problem. To this end, the aim of this paper is: (i) to introduce a novel Virtual Machine Allocation (VMA) scheme that chooses a suitable set of possible Virtual Machine placements among the (possibly non-homogeneous) set of available servers; (ii) to propose a new class of random search algorithms (RSAs), denoted as consolidation meta-heuristics, that incorporates the VMA problem into RSAs; in particular, the design of novel variants of the meta-heuristics, namely TS-RSC, SA-RSC and ES-RSC, is particularized to the resource scaling and consolidation (RSC) problem; and (iii) to compare the results of the new RSA class against some state-of-the-art heuristic approaches. A set of experimental results, both simulated and real-world, supports the effectiveness of the proposed approaches against the traditional ones.
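    For readers unfamiliar with the random-search side of the comparison, the sketch below is a plain simulated-annealing loop over VM-to-server assignments with an illustrative energy objective. It is not the paper's TS-RSC/SA-RSC/ES-RSC variants; the cost model, capacities, and cooling schedule are assumptions.

```python
# Minimal sketch (not the paper's TS/SA/ES variants): simulated annealing over
# VM-to-server assignments. The objective charges an idle cost per active server,
# a load-dependent term, and a soft penalty for overloaded servers. All numbers
# and the cooling schedule are illustrative assumptions.
import math
import random

VM_DEMANDS = [0.5, 0.4, 0.3, 0.3, 0.2, 0.2]      # normalized CPU demands
N_SERVERS = 4
CAPACITY, IDLE_POWER, DYN_POWER = 1.0, 0.3, 1.0

def energy(assignment):
    loads = [0.0] * N_SERVERS
    for vm, srv in enumerate(assignment):
        loads[srv] += VM_DEMANDS[vm]
    active = sum(IDLE_POWER + DYN_POWER * l for l in loads if l > 0)
    overload = sum(max(0.0, l - CAPACITY) for l in loads)  # soft capacity penalty
    return active + 10.0 * overload

def simulated_annealing(iters=5000, t0=1.0, cooling=0.999):
    current = [random.randrange(N_SERVERS) for _ in VM_DEMANDS]
    best, t = list(current), t0
    for _ in range(iters):
        # Neighbour move: reassign one randomly chosen VM to a random server.
        neighbour = list(current)
        neighbour[random.randrange(len(VM_DEMANDS))] = random.randrange(N_SERVERS)
        delta = energy(neighbour) - energy(current)
        # Accept better moves always, worse moves with a temperature-dependent probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = neighbour
            if energy(current) < energy(best):
                best = list(current)
        t *= cooling
    return best, energy(best)

print(simulated_annealing())
```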

    Q*: Energy and delay-efficient dynamic queue management in TCP/IP virtualized data centers

    The emerging utilization of Software-as-a-Service (SaaS) Fog computing centers as an Internet virtual computing commodity is raising concerns over the energy consumption of networked data centers supporting delay-sensitive applications. In addition to the energy consumed by the servers, the energy wasted by the network devices that support reliable TCP/IP inter-Virtual Machine (VM) connections is becoming a significant challenge. In this paper, we propose and develop a framework for the joint characterization and optimization of TCP/IP SaaS Fog data centers that utilize a bank of queues to increase the fraction of admitted workload. Our goal is twofold: (i) maximize the average workload admitted by the data center; and (ii) minimize the resulting networking-plus-computing average energy consumption. For this purpose, we exploit the Lyapunov stochastic optimization approach to design and analyze an optimal (yet practical) online joint resource management framework, which dynamically performs: (i) admission control; (ii) dispatching of the admitted workload; (iii) flow control of the inter-VM TCP/IP connections; (iv) queue control; (v) up/down scaling of the processing frequencies of the instantiated VMs; and (vi) adaptive joint consolidation of both physical servers and TCP/IP connections. The salient features of the resulting scheduler (i.e., the Q* scheduler) are that: (i) it admits distributed and scalable implementation; (ii) it provides deterministic bounds on the instantaneous queue backlogs; (iii) it avoids queue overflow phenomena; and (iv) it effectively tracks the (possibly unpredictable) time fluctuations of the input workload, in order to perform joint resource consolidation without requiring any a priori information and/or forecast of the input workload. The actual energy and delay performances of the proposed scheduler are numerically evaluated and compared against the corresponding ones of some competing state-of-the-art schedulers under: (i) Fast, Giga and 10Giga Ethernet switching technologies; (ii) various settings of the reconfiguration-consolidation costs; and (iii) synthetic as well as real-world workloads. The experimental results support the conclusion that the proposed scheduler can achieve over 30% energy savings.
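    The sketch below illustrates one drift-plus-penalty step of a Lyapunov-style controller of the general kind the abstract describes, combining admission control and frequency scaling for a single queue. It is not the Q* scheduler; the cubic power model, the admission reward, and all constants are assumptions.

```python
# Minimal sketch (not the Q* scheduler): one drift-plus-penalty step of a
# Lyapunov-style controller for a single VM queue. Each slot it decides how much
# arriving workload to admit and at what normalized frequency to run the VM,
# trading queue backlog against energy. All constants are illustrative assumptions.
import random

V = 20.0            # Lyapunov trade-off weight: larger V favours the objective over backlog
REWARD = 1.0        # utility per admitted unit of workload
POWER_COEFF = 1.0   # energy per slot ~ POWER_COEFF * f**3
SERVICE_RATE = 1.0  # work served per slot at frequency f is SERVICE_RATE * f
F_MAX = 1.0

def control_step(queue, arrivals):
    # Admission control: admitting one unit adds `queue` to the drift and
    # -V*REWARD to the penalty, so admit everything only while the backlog is small.
    admitted = arrivals if queue < V * REWARD else 0.0
    # Frequency scaling: minimize V*POWER_COEFF*f^3 - queue*SERVICE_RATE*f over [0, F_MAX];
    # setting the derivative to zero gives f* = sqrt(queue*SERVICE_RATE / (3*V*POWER_COEFF)).
    f_star = (queue * SERVICE_RATE / (3.0 * V * POWER_COEFF)) ** 0.5
    freq = min(F_MAX, f_star)
    served = SERVICE_RATE * freq
    new_queue = max(queue + admitted - served, 0.0)
    energy = POWER_COEFF * freq ** 3
    return new_queue, admitted, freq, energy

queue = 0.0
for slot in range(50):
    queue, admitted, freq, energy = control_step(queue, arrivals=random.uniform(0.0, 1.0))
print("final backlog:", round(queue, 3))
```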

    Fog of social IoT: when the fog becomes social

    The Social Internet of Things (SIoT) and Fog Computing (FC) are two stand-alone technological paradigms under the realm of the future Internet. SIoT relies on the self-establishment and self-management of inter-thing social relationships, in order to guarantee scalability to large IoT networks composed of both human and non-human agents. FC extends cloud capabilities to the access network, in order to allow resource-poor IoT devices to support delay-sensitive applications. Motivated by these complementary features of the SIoT and FC models, in this article we propose their integration into the novel SoFT paradigm. Specifically, we provide three main contributions. First, after describing the SoFT paradigm, we motivate its introduction through a number of exemplary use cases. Second, we describe the architecture and the main resource-management functions of the resulting virtualized SoFT technological platform, which merges the physical things at the IoT layer and their virtual clones at the Fog layer into a cyber-physical overlay network of social clones. Third, as a proof of concept, we present the simulated performance of a small-scale SoFT prototype and compare its energy-vs.-delay performance with the corresponding one of a state-of-the-art virtualization-free technological platform that relies only on device-to-device (D2D) inter-thing communication.

    FOCAN: A Fog-supported smart city network architecture for management of applications in the Internet of Everything environments

    The smart city vision brings emerging heterogeneous communication technologies such as Fog Computing (FC) together to substantially reduce the latency and energy consumption of Internet of Everything (IoE) devices running various applications. The key feature that distinguishes the FC paradigm for smart cities is that it spreads communication and computing resources over the wired/wireless access network (e.g., proximate access points and base stations) to provide resource augmentation (e.g., cyberforaging) for resource- and energy-limited wired/wireless (possibly mobile) things. Motivated by these considerations, this paper presents a Fog-supported smart city network architecture called the Fog Computing Architecture Network (FOCAN), a multi-tier structure in which applications run on things that jointly compute, route, and communicate with one another through the smart city environment. FOCAN decreases latency and improves energy provisioning and the efficiency of services among things with different capabilities. In particular, three types of communication are defined between FOCAN devices – interprimary, primary, and secondary communication – to manage applications in a way that meets the quality-of-service standards of the Internet of Everything. One of the main advantages of the proposed architecture is that the devices can provide their services with low energy usage and in an efficient manner. Simulation results for a selected case study demonstrate the tremendous impact of the FOCAN energy-efficient solution on the communication performance of various types of things in smart cities.
    • Present a generalized multi-tier smart city architecture that utilizes FC for devices.
    • Develop an FC-supported resource allocation model covering Fog nodes (FNs) and device components.
    • Provide various types of communication between the components.
    • Evaluate the performance of the solution for an FC platform on real datasets.