
    SmartFog: Training the Fog for the energy-saving analytics of Smart-Meter data

    In this paper, we characterize the main building blocks and numerically verify the classification accuracy and energy performance of SmartFog, a distributed and virtualized networked Fog technological platform supporting Stacked Denoising Auto-Encoder (SDAE)-based anomaly detection in data flows generated by Smart-Meters (SMs). In SmartFog, the various layers of an SDAE are pretrained at different Fog nodes, in order to distribute the overall computational effort and thereby save energy. For this purpose, a new Adaptive Elitist Genetic Algorithm (AEGA) is designed ad hoc to find the optimized allocation of the SDAE layers to the Fog nodes. Interestingly, the proposed AEGA implements a novel mechanism that adaptively tunes its exploration and exploitation capabilities, in order to quickly escape the attraction basins of local minima of the underlying energy objective function and thereby speed up convergence towards global minima. The main distinguishing feature of the resulting SmartFog paradigm is that it jointly integrates, on a distributed Fog computing platform, the anomaly detection functionality and the minimization of the resulting energy consumption. The reported numerical tests support the effectiveness of the designed technological platform and point out that the attained performance improvements over some state-of-the-art competing solutions are around 5%, 68% and 30% in terms of detection accuracy, execution time and energy consumption, respectively.
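The adaptive exploration/exploitation tuning described above can be sketched as an elitist genetic algorithm whose mutation rate widens when the best fitness stagnates and narrows while it improves. This is a generic illustrative sketch, not the paper's AEGA; the fitness function, encoding and all parameters are assumptions.

```python
import random

def adaptive_elitist_ga(fitness, dim, pop_size=30, generations=200,
                        elite_frac=0.1, seed=0):
    """Minimize `fitness` over [0, 1]^dim with an elitist GA whose
    mutation rate adapts to stagnation (illustrative sketch only)."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    n_elite = max(1, int(elite_frac * pop_size))
    mut_rate, best_prev = 0.1, float("inf")
    for _ in range(generations):
        pop.sort(key=fitness)                      # ascending: best first
        best = fitness(pop[0])
        # Adaptive tuning: widen exploration when stuck in a basin,
        # narrow towards exploitation while still improving.
        if best >= best_prev:
            mut_rate = min(0.5, mut_rate * 1.5)
        else:
            mut_rate = max(0.01, mut_rate * 0.7)
        best_prev = best
        elites = [ind[:] for ind in pop[:n_elite]]  # elitism: keep the best as-is
        children = []
        while len(children) < pop_size - n_elite:
            a, b = rng.sample(pop[:pop_size // 2], 2)   # parents from the fitter half
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(dim)]
            child = [min(1.0, max(0.0, g + rng.gauss(0.0, mut_rate)))
                     if rng.random() < 0.3 else g for g in child]
            children.append(child)
        pop = elites + children
    pop.sort(key=fitness)
    return pop[0], fitness(pop[0])
```

In a layer-allocation setting such as the paper's, the genome would instead encode a discrete assignment of SDAE layers to Fog nodes and the fitness would be the energy objective.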

    An accuracy vs. complexity comparison of deep learning architectures for the detection of covid-19 disease

    In parallel with the vast medical research on clinical treatment of COVID-19, an important action to bring the disease completely under control is to carefully monitor the patients. Detection of COVID-19 relies mostly on viral tests; however, the study of X-rays is helpful due to their ready availability. There are various studies that employ Deep Learning (DL) paradigms aiming at reinforcing the radiography-based recognition of lung infection by COVID-19. In this regard, we compare the noteworthy approaches devoted to the binary classification of infected images using DL techniques, and we also propose a variant of a convolutional neural network (CNN) with optimized parameters, which performs very well on a recent dataset of COVID-19. The proposed model's effectiveness is demonstrated to be of considerable importance due to its uncomplicated design, in contrast to other presented models. In our approach, we randomly put several images of the utilized dataset aside as a hold-out set; the model detects most of the COVID-19 X-rays correctly, with an excellent overall accuracy of 99.8%. In addition, the significance of the results obtained by testing on different datasets of diverse characteristics (which, more specifically, are not used in the training process) demonstrates the effectiveness of the proposed approach, with an accuracy of up to 93%.
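The hold-out protocol mentioned above (randomly setting images aside before training, then scoring accuracy on them) can be sketched generically. This is not the paper's pipeline; the split fraction and seed are assumptions.

```python
import random

def holdout_split(items, frac=0.1, seed=42):
    """Randomly set aside a fraction of the dataset as a hold-out set
    (generic sketch of the protocol, not the paper's code)."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    k = max(1, int(len(shuffled) * frac))
    return shuffled[k:], shuffled[:k]   # (train, hold-out)

def accuracy(predictions, labels):
    """Fraction of hold-out predictions matching the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)
```

The reported 99.8% figure corresponds to `accuracy` computed on such a hold-out set after training the CNN on the remaining images.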

    Learning-in-the-Fog (LiFo): Deep learning meets Fog Computing for the minimum-energy distributed early-exit of inference in delay-critical IoT realms

    Fog Computing (FC) and Conditional Deep Neural Networks (CDNNs) with early exits are two emerging paradigms which, up to now, have evolved in a stand-alone fashion. However, their integration is expected to be valuable in IoT applications in which resource-poor devices must mine large volumes of sensed data in real time. Motivated by this consideration, this article focuses on the optimized design and performance validation of Learning-in-the-Fog (LiFo), a novel virtualized technological platform for the minimum-energy and delay-constrained execution of the inference phase of CDNNs with early exits atop multi-tier networked computing infrastructures composed of multiple hierarchically organized wireless Fog nodes. The main research contributions of this article are threefold, namely: (i) we design the main building blocks and supporting services of the LiFo architecture by explicitly accounting for the multiple constraints on the per-exit maximum inference delays of the supported CDNNs; (ii) we develop an adaptive algorithm for the minimum-energy distributed joint allocation and reconfiguration of the available computing-plus-networking resources of the LiFo platform. Interestingly enough, the designed algorithm is capable of self-detecting (typically unpredictable) environmental changes and quickly self-reacting to them by properly re-configuring the available computing and networking resources; and (iii) we design the main building blocks and related virtualized functionalities of an Information-Centric networking architecture, which enables the LiFo platform to perform the aggregation of spatially distributed IoT sensed data. The energy-vs.-inference-delay performance of LiFo is numerically tested under a number of IoT scenarios and compared against the corresponding performance of some state-of-the-art benchmark solutions that do not rely on Fog support.
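The early-exit inference mechanism at the heart of a CDNN can be sketched as a cascade of stages, each with a side classifier: inference stops at the first exit whose top-class confidence clears a threshold, so easy inputs never pay for the deeper (more distant, more energy-hungry) layers. The `(feature_fn, classifier_fn)` stage interface below is a hypothetical illustration, not LiFo's actual API.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit_inference(x, stages, threshold=0.9):
    """Run `x` through a cascade of (feature_fn, classifier_fn) stages,
    returning (class, confidence, exit_depth) at the first exit whose
    top-class confidence reaches `threshold`; the last stage always exits."""
    for depth, (feature_fn, classifier_fn) in enumerate(stages):
        x = feature_fn(x)                     # deepen the representation
        probs = softmax(classifier_fn(x))     # side-branch classifier
        conf = max(probs)
        if conf >= threshold or depth == len(stages) - 1:
            return probs.index(conf), conf, depth
```

In a Fog deployment like the one described above, each stage would be hosted on a different tier, so an early exit also saves the transmission energy of forwarding the intermediate representation upstream.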

    Deepfogsim: A toolbox for execution and performance evaluation of the inference phase of conditional deep neural networks with early exits atop distributed fog platforms

    The recent introduction of the so-called Conditional Deep Neural Networks (CDNNs) with multiple early exits, executed atop virtualized multi-tier Fog platforms, makes feasible the real-time and energy-efficient execution of the analytics required by future Internet applications. However, until now, toolkits for the evaluation of the energy-vs.-delay performance of the inference phase of CDNNs executed on such platforms have not been available. Motivated by these considerations, in this contribution we present DeepFogSim, a MATLAB-supported software toolbox aimed at testing the performance of virtualized technological platforms for the real-time distributed execution of the inference phase of CDNNs with early exits in IoT realms. The main peculiar features of the proposed DeepFogSim toolbox are that: (i) it allows the joint dynamic energy-aware optimization of the Fog-hosted computing-networking resources under hard constraints on the tolerated inference delays; (ii) it allows the repeatable and customizable simulation of the resulting energy-delay performance of the overall Fog execution platform; (iii) it allows the dynamic tracking of the performed resource allocation under time-varying operating conditions and/or failure events; and (iv) it is equipped with a user-friendly Graphical User Interface (GUI) that supports a number of graphic formats for data rendering. Some numerical results give evidence of the actual capabilities of the proposed DeepFogSim toolbox.

    A Fast Gradient Approximation for Nonlinear Blind Signal Processing

    When dealing with nonlinear blind processing algorithms (deconvolution or post-nonlinear source separation), complex mathematical estimations must be carried out, resulting in very slow algorithms. This is the case, for example, in speech processing, spike-signal deconvolution or microarray data analysis. In this paper, we propose a simple method to reduce the computational time for the inversion of Wiener systems or the separation of post-nonlinear mixtures, by using a linear approximation in a minimum-mutual-information algorithm. Simulation results demonstrate that linear spline interpolation is fast and accurate, obtaining very good results (similar to those obtained without approximation) while computational time is dramatically decreased. Cubic spline interpolation also obtains similarly good results, but due to its intrinsic complexity the global algorithm is much slower and hence not useful for our purpose.
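The speed-up idea above, replacing an expensive nonlinearity with a precomputed piecewise-linear lookup table, can be sketched as follows. The tabulated function, range and knot count are illustrative assumptions; the paper applies the same idea inside its minimum-mutual-information iteration.

```python
import math

def make_linear_spline(fn, lo, hi, n):
    """Tabulate `fn` at n+1 equally spaced knots on [lo, hi] and return a
    fast piecewise-linear approximation of it (illustrative sketch)."""
    step = (hi - lo) / n
    table = [fn(lo + i * step) for i in range(n + 1)]
    def approx(x):
        x = min(max(x, lo), hi)          # clamp to the tabulated range
        t = (x - lo) / step
        i = min(int(t), n - 1)           # left knot index
        frac = t - i                     # position inside the segment
        return table[i] * (1 - frac) + table[i + 1] * frac
    return approx

# Example: a cheap stand-in for tanh, a typical score-function nonlinearity.
fast_tanh = make_linear_spline(math.tanh, -4.0, 4.0, 256)
```

Each call costs one table lookup and one linear blend instead of a transcendental evaluation, which is why the overall algorithm's runtime drops sharply while accuracy stays close to the exact version.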

    A longitudinal study of DNA methylation as a potential mediator of age-related diabetes risk

    DNA methylation (DNAm) has been found to show robust and widespread age-related changes across the genome. DNAm profiles from whole blood can be used to predict human aging rates with great accuracy. We sought to test whether DNAm-based predictions of age are related to phenotypes associated with type 2 diabetes (T2D), with the goal of identifying risk factors potentially mediated by DNAm. Our participants were 43 women enrolled in the Women's Health Initiative. We obtained methylation data via the Illumina 450K Methylation array on whole-blood samples from participants at three timepoints, covering on average 16 years per participant. We employed the method and software of Horvath, which uses DNAm at 353 CpGs to form a DNAm-based estimate of chronological age. We then calculated the epigenetic age acceleration, or Δage, at each timepoint. We fit linear mixed models to characterize how Δage contributed to a longitudinal model of aging and diabetes-related phenotypes and risk factors. For most participants, Δage remained constant, indicating that age acceleration is generally stable over time. We found that Δage associated with body mass index (p = 0.0012), waist circumference (p = 0.033), and fasting glucose (p = 0.0073), with the relationship with BMI maintaining significance after correction for multiple testing. Replication in a larger cohort of 157 WHI participants spanning 3 years was unsuccessful, possibly due to the shorter time frame covered. Our results suggest that DNAm has the potential to act as a mediator between aging and diabetes-related phenotypes, or alternatively, may serve as a biomarker of these phenotypes.
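One common way to compute the epigenetic age acceleration Δage used above is as the residual of DNAm-predicted age regressed on chronological age; a subject sitting exactly on the cohort's regression line has Δage = 0. This ordinary-least-squares sketch is an assumption about the estimator, not the paper's exact code.

```python
def delta_age(chron_ages, dnam_ages):
    """Δage as OLS residuals: DNAm-predicted age regressed on
    chronological age (one common definition; illustrative only)."""
    n = len(chron_ages)
    mx = sum(chron_ages) / n
    my = sum(dnam_ages) / n
    sxx = sum((x - mx) ** 2 for x in chron_ages)
    sxy = sum((x - mx) * (y - my) for x, y in zip(chron_ages, dnam_ages))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Positive residual -> epigenetically "older" than chronological age predicts.
    return [y - (intercept + slope * x) for x, y in zip(chron_ages, dnam_ages)]
```

In the longitudinal setting of the study, this quantity is computed per timepoint and then entered into the linear mixed models as a predictor.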

    Analysis and Design of a Compact Leaky-Wave Antenna for Wide-Band Broadside Radiation

    A low-cost compact planar leaky-wave antenna (LWA) is proposed, offering directive broadside radiation over a significantly wide bandwidth. The design is based on an annular metallic strip grating (MSG) configuration, placed on top of a dual-layer grounded dielectric substrate. This defines a new two-layer parallel-plate open waveguide, whose operational principles are accurately investigated. To assist in our antenna design, a method-of-moments dispersion analysis has been developed to characterize the relevant TM and TE modes of the perturbed guiding structure. By proper selection of the MSG for a fabricated prototype and its supporting dielectric layers, as well as the practical TM antenna feed embedded in the bottom ground plane, far-field pencil-beam patterns are observed at broadside over a wide frequency range, i.e., from 21.9 GHz to 23.9 GHz, defining a radiating percentage bandwidth of more than 8.5%. This can be explained by a dominantly excited TM mode with low dispersion, employed to generate a two-sided far-field beam pattern whose beams combine to produce a single beam at broadside over frequency. Applications of this planar antenna include radar and satellite communications at microwave and millimeter-wave frequencies, as well as future 5G communication devices and wireless power transmission systems.
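The quoted "more than 8.5%" figure follows directly from the band edges: percentage bandwidth is the band width divided by the center frequency. A minimal check, assuming the usual arithmetic-mean center frequency:

```python
def fractional_bandwidth(f_lo_ghz, f_hi_ghz):
    """Percentage bandwidth relative to the arithmetic-mean center
    frequency: 100 * (f_hi - f_lo) / f_center."""
    f_center = (f_lo_ghz + f_hi_ghz) / 2.0
    return 100.0 * (f_hi_ghz - f_lo_ghz) / f_center

bw = fractional_bandwidth(21.9, 23.9)   # 2 GHz band around 22.9 GHz, about 8.7%
```

This is consistent with the abstract's claim, since 2 GHz around a 22.9 GHz center gives roughly 8.7%, i.e. more than 8.5%.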