
    Learning-in-the-Fog (LiFo): Deep learning meets Fog Computing for the minimum-energy distributed early-exit of inference in delay-critical IoT realms

    Fog Computing (FC) and Conditional Deep Neural Networks (CDNNs) with early exits are two emerging paradigms which, up to now, have evolved in a stand-alone fashion. However, their integration is expected to be valuable in IoT applications in which resource-poor devices must mine large volumes of sensed data in real time. Motivated by this consideration, this article focuses on the optimized design and performance validation of Learning-in-the-Fog (LiFo), a novel virtualized technological platform for the minimum-energy and delay-constrained execution of the inference phase of CDNNs with early exits atop multi-tier networked computing infrastructures composed of multiple hierarchically organized wireless Fog nodes. The main research contributions of this article are threefold: (i) we design the main building blocks and supporting services of the LiFo architecture by explicitly accounting for the multiple constraints on the per-exit maximum inference delays of the supported CDNN; (ii) we develop an adaptive algorithm for the minimum-energy distributed joint allocation and reconfiguration of the available computing-plus-networking resources of the LiFo platform. Interestingly, the designed algorithm is capable of self-detecting (typically unpredictable) environmental changes and quickly reacting to them by properly reconfiguring the available computing and networking resources; and (iii) we design the main building blocks and related virtualized functionalities of an Information-Centric Networking architecture, which enables the LiFo platform to perform the aggregation of spatially distributed IoT sensed data. The energy-vs.-inference-delay performance of LiFo is numerically tested under a number of IoT scenarios and compared against the corresponding performance of some state-of-the-art benchmark solutions that do not rely on Fog support.
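    The per-exit decision rule that early-exit CDNNs rely on can be sketched in a few lines. This is an illustrative sketch, not the LiFo implementation: the `exits`/`thresholds` interface and the toy classifiers are assumptions.

```python
# Illustrative early-exit inference: exit classifiers are evaluated in
# order, and execution stops at the first exit whose top-class
# confidence clears that exit's threshold. In a Fog deployment, each
# exit would run on a different tier (device, Fog node, cloud).

def early_exit_inference(x, exits, thresholds):
    """exits: list of callables returning class-probability dicts;
    thresholds: per-exit confidence thresholds (same length)."""
    for tier, (classifier, tau) in enumerate(zip(exits, thresholds)):
        probs = classifier(x)
        label, conf = max(probs.items(), key=lambda kv: kv[1])
        if conf >= tau:
            return label, tier  # early exit: deeper tiers are skipped
    return label, len(exits) - 1  # the final exit always answers

# Toy exits: a cheap, unsure classifier and a costly, confident one.
cheap = lambda x: {"cat": 0.55, "dog": 0.45}
deep  = lambda x: {"cat": 0.95, "dog": 0.05}

label, tier = early_exit_inference(None, [cheap, deep], [0.9, 0.0])
# falls through the cheap exit (0.55 < 0.9) and answers at tier 1
```

    Lowering the first threshold lets more samples exit early, trading inference accuracy for energy and delay, which is exactly the knob a minimum-energy, delay-constrained resource allocator would tune.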

    An accuracy vs. complexity comparison of deep learning architectures for the detection of covid-19 disease

    In parallel with the vast medical research on clinical treatment of COVID-19, an important action for bringing the disease completely under control is to carefully monitor patients. Detection of COVID-19 relies mostly on viral tests; however, the study of X-rays is helpful due to their ease of availability. Various studies employ Deep Learning (DL) paradigms aiming at reinforcing the radiography-based recognition of lung infection by COVID-19. In this regard, we compare the noteworthy approaches devoted to the binary classification of infected images using DL techniques, and we also propose a variant of a convolutional neural network (CNN) with optimized parameters, which performs very well on a recent dataset of COVID-19. The proposed model’s effectiveness is of considerable importance due to its uncomplicated design, in contrast to other presented models. In our approach, we randomly set several images of the utilized dataset aside as a hold-out set; the model detects most of the COVID-19 X-rays correctly, with an excellent overall accuracy of 99.8%. In addition, the results obtained by testing different datasets of diverse characteristics (which, more specifically, are not used in the training process) demonstrate the effectiveness of the proposed approach, with accuracies of up to 93%.

    DeepFogSim: A toolbox for execution and performance evaluation of the inference phase of conditional deep neural networks with early exits atop distributed fog platforms

    The recent introduction of the so-called Conditional Deep Neural Networks (CDNNs) with multiple early exits, executed atop virtualized multi-tier Fog platforms, makes feasible the real-time and energy-efficient execution of the analytics required by future Internet applications. However, until now, toolkits for evaluating the energy-vs.-delay performance of the inference phase of CDNNs executed on such platforms have not been available. Motivated by these considerations, in this contribution we present DeepFogSim, a MATLAB-supported software toolbox aimed at testing the performance of virtualized technological platforms for the real-time distributed execution of the inference phase of CDNNs with early exits in IoT realms. The main distinguishing features of the proposed DeepFogSim toolbox are that: (i) it allows the joint dynamic energy-aware optimization of the Fog-hosted computing-networking resources under hard constraints on the tolerated inference delays; (ii) it allows the repeatable and customizable simulation of the resulting energy-delay performance of the overall Fog execution platform; (iii) it allows the dynamic tracking of the performed resource allocation under time-varying operating conditions and/or failure events; and (iv) it is equipped with a user-friendly Graphical User Interface (GUI) that supports a number of graphic formats for data rendering. Some numerical results give evidence of the actual capabilities of the proposed DeepFogSim toolbox.

    High genetic diversity of measles virus, World Health Organization European region, 2005-2006

    During 2005-2006, nine measles virus (MV) genotypes were identified throughout the World Health Organization European Region. All major epidemics were associated with genotypes D4, D6, and B3. Other genotypes (B2, D5, D8, D9, G2, and H1) were only found in limited numbers of cases after importation from other continents. The genetic diversity of endemic D6 strains was low; genotypes C2 and D7, circulating in Europe until recent years, were no longer identified. The transmission chains of several indigenous MV strains may thus have been interrupted by enhanced vaccination. However, multiple importations from Africa and Asia and virus introduction into highly mobile and unvaccinated communities caused a massive spread of D4 and B3 strains throughout much of the region. Thus, despite the reduction of endemic MV circulation, importation of MV from other continents caused prolonged circulation and large outbreaks after introduction into unvaccinated and highly mobile communities.

    AFAFed—asynchronous fair adaptive federated learning for IoT stream applications

    In this paper, we design AFAFed, analyze its convergence properties, address its implementation aspects, and numerically test its performance. AFAFed is a novel Asynchronous Fair Adaptive Federated learning framework for stream-oriented IoT application environments, which are characterized by time-varying operating conditions, heterogeneous resource-limited devices (i.e., coworkers), non-i.i.d. local training data, and unreliable communication links. The key novelty of AFAFed is the synergic co-design of: (i) two sets of adaptively tuned tolerance thresholds and fairness coefficients at the coworkers and central server, respectively; and (ii) a distributed adaptive mechanism which allows each coworker to adaptively tune its own communication rate. The convergence of AFAFed under (possibly) non-convex loss functions is guaranteed by a set of new analytical bounds, which formally unveil the impact on the resulting AFAFed convergence rate of a number of Federated Learning (FL) parameters, such as the first and second moments of the per-coworker number of consecutive model updates, data skewness, communication packet-loss probability, and the maximum/minimum values of the (adaptively tuned) mixing coefficient used for model aggregation. Extensive numerical tests show that AFAFed is capable of improving test accuracy by up to 20% and reducing training time by up to 50%, compared to state-of-the-art FL schemes, even under challenging learning scenarios featuring deep Machine Learning (ML) models, data skewness, coworker heterogeneity, and unreliable communication.
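    The role of a bounded mixing coefficient in asynchronous aggregation can be illustrated with a minimal sketch. The names and the staleness-damping rule below are assumptions for illustration, not AFAFed's actual update law.

```python
# Minimal sketch of asynchronous server-side aggregation with a
# bounded, staleness-damped mixing coefficient. A stale coworker
# update is blended in with a smaller weight, while the coefficient
# stays inside [alpha_min, alpha_max], the bounded range that
# convergence analyses of this kind typically require.

def aggregate(global_model, coworker_model, staleness,
              alpha_max=0.5, alpha_min=0.05):
    alpha = alpha_max / (1 + staleness)          # damp stale updates
    alpha = max(alpha_min, min(alpha_max, alpha))  # keep it bounded
    return [(1 - alpha) * g + alpha * w
            for g, w in zip(global_model, coworker_model)]

fresh = aggregate([0.0, 0.0], [1.0, 1.0], staleness=0)  # alpha = 0.5
stale = aggregate([0.0, 0.0], [1.0, 1.0], staleness=9)  # alpha clipped to 0.05
```

    With asynchronous arrivals, each coworker update is merged as soon as it reaches the server, so no round-level barrier forces fast coworkers to wait for slow ones.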

    193 Improved post-thaw survival of bovine embryos produced in serum-free In-vitro production system

    Over the past few decades, bovine in vitro embryo production (IVP) systems have improved rapidly. Still, the goal of producing embryos in vitro of the same quality as those produced in vivo has not yet been reached. Fetal calf serum (FCS) is usually added to media during IVP to provide growth factors and energy sources. Currently, serum-free culture systems are often preferred due to the lower risk of contamination and the prevention of large offspring syndrome. The aim of this study was to establish whether complete elimination of FCS from the bovine IVP system has an effect on blastocyst rates, embryo quality, and embryo survival rates after slow freezing. We replaced our conventional in vitro maturation (IVM) medium [tissue culture medium-199, 10% (v/v) FCS, 10 µg mL–1 epidermal growth factor (EGF), 1500 U mL–1 serum gonadotropin and chorionic gonadotropin (PG600), 0.5 mM Na-pyruvate, 50 µg mL–1 gentamycin sulfate, and 1 mM l-glutamine] with SOF (SOFaaci) supplemented with 0.4% fatty acid-free BSA fraction V, 10 µg mL–1 EGF, and 1500 U mL–1 PG600. Matured cumulus-oocyte complexes (COC) from both experimental groups (a total of 1145 from the serum-free and 687 from our conventional IVP system) were used for in vitro fertilisation and culture. Blastocyst rates were similar in the serum-free and conventional IVP protocols, 18% and 22%, respectively. Seventy-seven Grade 1 (according to IETS) Day 7 blastocysts from the serum-free IVP system and 80 Grade 1 Day 7 blastocysts from our conventional IVP system were frozen in a cryopreservation medium containing 1.5 M ethylene glycol and 0.1 M sucrose. The post-thaw survival rates after 24 h of culture, evaluated as percentages of re-expanded embryos, were 63.6% for the serum-free IVP and 46.3% for the conventional IVP system (P < 0.05, Z test for two population proportions). These results indicate that a completely serum-free bovine IVP system is possible and, based on the slow freezing and thawing results, the quality of serum-free IVP embryos might be better than that of embryos matured in our conventional maturation media. However, more experiments and larger sample sizes are needed to confirm the results.

    Exploiting probability density function of deep convolutional autoencoders’ latent space for reliable COVID-19 detection on CT scans

    We present a probabilistic method for classifying chest computed tomography (CT) scans into COVID-19 and non-COVID-19. To this end, we design and train, in an unsupervised manner, a deep convolutional autoencoder (DCAE) on a selected training data set composed only of COVID-19 CT scans. Once the model is trained, the encoder can generate the compact hidden representation (the hidden feature vectors) of the training data set. Afterwards, we exploit the obtained hidden representation to build up the target probability density function (PDF) of the training data set by means of kernel density estimation (KDE). Subsequently, in the test phase, we feed a test CT scan into the trained encoder to produce the corresponding hidden feature vector, and then we utilise the target PDF to compute the corresponding PDF value of the test image. Finally, this value is compared to a threshold to assign the COVID-19 or non-COVID-19 label to the test image. We numerically check our approach’s performance (i.e. test accuracy and training times) by comparing it with those of some state-of-the-art methods.
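    The KDE-plus-threshold decision stage can be sketched as follows. This is illustrative only: real latent vectors are high-dimensional, while the scalar codes, the Gaussian bandwidth, and the threshold value below are assumptions.

```python
import math

# Sketch of the detection stage: fit a Gaussian KDE on the (here:
# scalar) latent codes of the COVID-19 training scans, then label a
# test scan non-COVID-19 when its PDF value falls below a threshold,
# i.e., when it lies outside the training density's support.

def kde_pdf(train_codes, x, bandwidth=0.5):
    """Gaussian kernel density estimate of the training codes at x."""
    n = len(train_codes)
    return sum(
        math.exp(-((x - c) / bandwidth) ** 2 / 2)
        / (bandwidth * math.sqrt(2 * math.pi))
        for c in train_codes
    ) / n

def classify(train_codes, test_code, threshold=0.05):
    return ("COVID-19" if kde_pdf(train_codes, test_code) >= threshold
            else "non-COVID-19")

covid_codes = [0.9, 1.0, 1.1, 1.05, 0.95]  # hypothetical encoder outputs
in_dist  = classify(covid_codes, 1.0)   # lies in the training density
out_dist = classify(covid_codes, 5.0)   # far from the training mass
```

    Because only COVID-19 scans are needed for training, the scheme is a one-class (anomaly-detection) classifier rather than a supervised binary one.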

    A histogram-based low-complexity approach for the effective detection of COVID-19 disease from CT and X-ray images

    The global COVID-19 pandemic has certainly posed one of the most difficult challenges for researchers in the current century. The development of an automatic diagnostic tool able to detect the disease in its early stage could undoubtedly offer a great advantage in the battle against the pandemic. In this regard, most research efforts have focused on the application of Deep Learning (DL) techniques to chest images, including traditional chest X-rays (CXRs) and Computed Tomography (CT) scans. Although these approaches have demonstrated their effectiveness in detecting the COVID-19 disease, they have huge computational complexity and require large datasets for training. In addition, a large amount of COVID-19 CXRs and CT scans may not be available to researchers. To this end, in this paper, we propose an approach based on the evaluation of the histogram of a common class of images that is considered as the target. A suitable inter-histogram distance measures how far this target histogram is from the histogram evaluated on a test image: if this distance is greater than a threshold, the test image is labeled as an anomaly, i.e., the scan belongs to a patient affected by the COVID-19 disease. Extensive experimental results and comparisons with some benchmark state-of-the-art methods support the effectiveness of the developed approach, and demonstrate that, at least when the images of the considered datasets are homogeneous enough (i.e., few outliers are present), it is not really necessary to resort to complex-to-implement DL techniques in order to attain an effective detection of the COVID-19 disease. Despite the simplicity of the proposed approach, all the considered metrics (i.e., accuracy, precision, recall, and F-measure) attain a value of 1.0 on the selected datasets, a result comparable to the corresponding state-of-the-art DNN approaches, but with remarkable computational simplicity.
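    The histogram-and-threshold test described above can be sketched in a few lines. This is a minimal sketch: the bin count, the L1 distance, and the threshold are illustrative assumptions, not the paper's exact choices.

```python
# Sketch of the histogram-distance test: build a normalized grey-level
# histogram for the target (non-anomalous) class, compare each test
# image's histogram to it with an L1 distance, and flag the image as
# anomalous (COVID-19) when the distance exceeds a threshold.

def norm_hist(pixels, bins=8, lo=0, hi=256):
    """Normalized grey-level histogram of a flat pixel list."""
    h = [0] * bins
    width = (hi - lo) / bins
    for p in pixels:
        h[min(int((p - lo) / width), bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in h]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def is_anomaly(target_hist, test_pixels, threshold=0.5):
    return l1_distance(target_hist, norm_hist(test_pixels)) > threshold

target = norm_hist([10, 20, 30, 40, 200, 210])  # stand-in target class
similar    = is_anomaly(target, [15, 25, 35, 45, 205, 215])
dissimilar = is_anomaly(target, [120, 125, 130, 130, 140, 150])
```

    The whole pipeline needs only one pass over the pixels per image and no training beyond computing the target histogram, which is the source of its low computational cost relative to DL approaches.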

    A novel unsupervised approach based on the hidden features of deep denoising autoencoders for COVID-19 disease detection

    Chest imaging can represent a powerful tool for detecting the Coronavirus disease 2019 (COVID-19). Among the available technologies, the chest Computed Tomography (CT) scan is an effective approach for reliable and early detection of the disease. However, it can be difficult to rapidly identify by human inspection the anomalous areas in CT images belonging to the COVID-19 disease. Hence, it becomes necessary to exploit suitable automatic algorithms able to quickly and precisely identify the disease, possibly by using few labeled input data, because large amounts of CT scans are not usually available for the COVID-19 disease. The method proposed in this paper is based on the exploitation of the compact and meaningful hidden representation provided by a Deep Denoising Convolutional Autoencoder (DDCAE). Specifically, the proposed DDCAE, trained on some target CT scans in an unsupervised way, is used to build up a robust statistical representation generating a target histogram. A suitable statistical distance measures how far this target histogram is from a companion histogram evaluated on an unknown test scan: if this distance is greater than a threshold, the test image is labeled as an anomaly, i.e. the scan belongs to a patient affected by the COVID-19 disease. Some experimental results and comparisons with other state-of-the-art methods show the effectiveness of the proposed approach, reaching a top accuracy of 100% and similarly high values for other metrics. In conclusion, by using a statistical representation of the hidden features provided by DDCAEs, the developed architecture is able to differentiate COVID-19 from normal and pneumonia scans with high reliability and at low computational cost.