    A Review on Modern Distributed Computing Paradigms: Cloud Computing, Jungle Computing and Fog Computing

    Distributed computing attempts to improve performance on large-scale computing problems through resource sharing. Rising low-cost computing power, coupled with advances in communications and networking and the advent of big data, now enables new distributed computing paradigms such as Cloud, Jungle and Fog computing. Cloud computing brings a number of advantages to consumers in terms of accessibility and elasticity; it is based on the centralization of resources that possess huge processing power and storage capacities. Fog computing, in contrast, pushes the frontier of computing away from centralized nodes to the edge of the network, enabling computation at the source of the data. Jungle computing, in turn, combines clusters, grids, clouds, and other resources simultaneously in order to gain the maximum potential computing power. To make sense of these new buzzwords, it is useful to review these paradigms together. This paper therefore describes the advent of these new forms of distributed computing: it defines Cloud, Jungle and Fog computing, determines their key characteristics, illustrates their architectures, and finally introduces several main use cases.

    Sequential Dialogue Act Recognition for Arabic Argumentative Debates

    Dialogue act recognition remains a primordial task that helps users automatically identify participants' intentions. In this paper, we propose a sequential approach consisting of segmentation followed by an annotation process to identify dialogue acts within Arabic political debates. To perform dialogue act (DA) recognition, we used the CARD corpus labeled with the SADA annotation schema. The segmentation and annotation tasks were then carried out using Conditional Random Field (CRF) probabilistic models, as they have proven to perform well at segmenting and labeling sequential data. Learning results are notably strong for the segmentation task (F-score = 97.9%) and reasonably reliable for the annotation process (F-score = 63.4%), given the complexity of identifying argumentative tags and the presence of disfluencies in spoken conversations.
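
    A minimal sketch of CRF-based sequence labeling in the spirit of this approach, using the sklearn-crfsuite library, is shown below; the CARD corpus and SADA schema are not reproduced here, so the token features and the BIO-style act labels are invented placeholders.

        import sklearn_crfsuite  # pip install sklearn-crfsuite

        def token_features(tokens, i):
            # Minimal per-token features; a real system would add morphological
            # and lexical cues tuned to Arabic debate transcripts.
            return {
                "word": tokens[i],
                "prev_word": tokens[i - 1] if i > 0 else "<BOS>",
                "next_word": tokens[i + 1] if i < len(tokens) - 1 else "<EOS>",
            }

        # Toy training data: one BIO-style dialogue-act label per token
        # (this label set is hypothetical, not the SADA schema).
        utterances = [["i", "disagree", "completely"],
                      ["why", "do", "you", "say", "that"]]
        labels = [["B-Assert", "I-Assert", "I-Assert"],
                  ["B-Quest", "I-Quest", "I-Quest", "I-Quest", "I-Quest"]]

        X = [[token_features(u, i) for i in range(len(u))] for u in utterances]
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
        crf.fit(X, labels)
        print(crf.predict(X))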

    Inverse values of EEG signal power in joint EEG-fMRI analysis

    The first part of this thesis summarizes the basic theory of brain activity measurement using the BOLD signal and scalp EEG, the effect of noise in the measured data and options for suppressing it, the fusion of the measured data using the general linear model, and the current implementation of the computational algorithms in the EEG Regressor Builder 1.0 software library. As the thesis's own contribution, the software library was updated to version 1.1 according to the requirements of the bachelor thesis assignment. The hypothesis that temporal changes in relative EEG band power (20-40 Hz) share the same spatial correlates with the BOLD signal as the inverse power values in the 0-12 Hz band was tested. The hypothesis was rejected based on similarity criteria computed between 3D activation maps for different parameter settings of the joint analysis; the correlation coefficient and the cosine criterion proved suitable, whereas the Euclidean distance proved unsuitable as a similarity criterion. It was also shown that the inverse EEG power value in a given frequency band merely introduces into the joint EEG-fMRI analysis a signal anti-correlated with the standard absolute power in the same band. Furthermore, including regressors describing motion artifacts reduces the number of supra-threshold voxels.
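
    The three similarity criteria compared in the thesis are standard vector measures, so a short sketch can make them concrete; the maps below are random stand-ins for real 3D activation maps, constructed so that the second map is anti-correlated with the first.

        import numpy as np

        def similarity_criteria(map_a, map_b):
            """Compare two 3D activation maps as flattened vectors."""
            a, b = map_a.ravel(), map_b.ravel()
            corr = np.corrcoef(a, b)[0, 1]  # correlation coefficient
            cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))  # cosine criterion
            euclid = np.linalg.norm(a - b)  # Euclidean distance
            return corr, cosine, euclid

        rng = np.random.default_rng(0)
        map_a = rng.standard_normal((8, 8, 8))
        map_b = -map_a + 0.1 * rng.standard_normal((8, 8, 8))  # anti-correlated map
        print(similarity_criteria(map_a, map_b))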

    DMRN+16: Digital Music Research Network One-day Workshop 2021

    DMRN+16: Digital Music Research Network One-day Workshop 2021, Queen Mary University of London, Tuesday 21st December 2021.

    Keynote 1: Prof. Sophie Scott, Director, Institute of Cognitive Neuroscience, UCL. Title: "Sound on the brain - insights from functional neuroimaging and neuroanatomy". Abstract: In this talk I will use functional imaging and models of primate neuroanatomy to explore how sound is processed in the human brain. I will demonstrate that sound is represented cortically in different parallel streams. I will expand this to show how this can impact on the concept of auditory perception, which arguably incorporates multiple kinds of distinct perceptual processes. I will address the roles that subcortical processes play in this, and also the contributions from hemispheric asymmetries.

    Keynote 2: Prof. Gus Xia, Assistant Professor at NYU Shanghai. Title: "Learning interpretable music representations: from human stupidity to artificial intelligence". Abstract: Gus has been leading the Music X Lab in developing intelligent systems that help people better compose and learn music. In this talk, he will show us the importance of music representation for both humans and machines, and how to learn better music representations via the design of inductive bias. Once we have interpretable music representations, the potential applications are limitless.

    Boosting bonsai trees for handwritten/printed text discrimination

    Boosting over decision stumps has proved its efficiency in Natural Language Processing, essentially with symbolic features, and its good properties (fast; few, non-critical parameters; not sensitive to overfitting) could be of great interest in the numeric world of pixel images. In this article we investigate the use of boosting over small decision trees for image classification, applied to the discrimination of handwritten versus printed text. We then conducted experiments comparing it to the usual SVM-based classification, revealing convincing results with very close performance, but with faster predictions and behaving far less like a black box. These promising results encourage the use of this classifier in more complex recognition tasks such as multiclass problems.
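
    As a rough illustration of the classifier family investigated (boosting over small, depth-limited "bonsai" trees), here is a minimal scikit-learn sketch; the digits dataset and the parameter values are placeholders, not the paper's experimental setup.

        from sklearn.datasets import load_digits
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_digits(return_X_y=True)  # stand-in for handwritten/printed patches
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Boosting over small trees: depth-limited base learners.
        clf = AdaBoostClassifier(
            estimator=DecisionTreeClassifier(max_depth=2),  # 'base_estimator=' in older scikit-learn
            n_estimators=200,
        )
        clf.fit(X_tr, y_tr)
        print("test accuracy:", clf.score(X_te, y_te))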

    An Approach to Self-Supervised Object Localisation through Deep Learning Based Classification

    Deep learning has become ubiquitous in science and industry for classifying images and identifying patterns in data. The most widely used approach to training convolutional neural networks is supervised learning, which requires a large set of annotated data. To avoid the high cost of collecting and annotating datasets, self-supervised learning methods represent a promising way to learn common properties of images and videos from large-scale unlabeled data without using human-annotated labels. This thesis presents the results of using self-supervised learning and explainable AI to localise objects in images from electron microscopes. The work used a synthetic geometric dataset and a synthetic pollen dataset, with classification as the pretext task. Different explainable-AI methods were applied: Grad-CAM and backpropagation-based approaches proved unpromising, while the Extremal Perturbation function showed good efficiency. In the downstream localisation task, the objects of interest were detected with competitive accuracy for single-class images. The advantages and limitations of the approach are analysed, and directions for further work are proposed.
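
    The downstream step of turning an attribution map into a localisation can be sketched generically: threshold the saliency map (however obtained, e.g. via Extremal Perturbation) and take the bounding box of the largest connected region. The saliency map below is synthetic, purely for illustration.

        import numpy as np
        from scipy import ndimage

        def bbox_from_saliency(saliency, threshold=0.5):
            """Bounding box (row0, col0, row1, col1) of the largest salient blob."""
            mask = saliency >= threshold * saliency.max()
            labels, n = ndimage.label(mask)  # connected components
            if n == 0:
                return None
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            largest = 1 + int(np.argmax(sizes))
            rows, cols = np.where(labels == largest)
            return rows.min(), cols.min(), rows.max(), cols.max()

        # Synthetic saliency map with one bright rectangular "object".
        sal = np.zeros((64, 64))
        sal[20:40, 10:30] = 1.0
        print(bbox_from_saliency(sal))  # -> (20, 10, 39, 29)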

    Quantifying the Simulation-Reality Gap for Deep Learning-Based Drone Detection

    The detection of drones or unmanned aerial vehicles is a crucial component in protecting safety-critical infrastructures and maintaining privacy for individuals and organizations. The widespread use of optical sensors for perimeter surveillance has made them a popular choice for data collection in the context of drone detection. However, efficiently processing the obtained sensor data poses a significant challenge. Even though deep learning-based object detection models have shown promising results, their effectiveness depends on large amounts of annotated training data, which are time-consuming and resource-intensive to acquire. Therefore, this work investigates the applicability of synthetically generated data, obtained through physically realistic simulations based on three-dimensional environments, for deep learning-based drone detection. Specifically, we introduce a novel three-dimensional simulation approach built on Unreal Engine and Microsoft AirSim for generating synthetic drone data. Furthermore, we quantify the respective simulation-reality gap and evaluate established techniques for mitigating this gap by systematically exploring different compositions of real and synthetic data. Additionally, we analyze the adaptation of the simulation setup as part of a feedback loop-based training strategy and highlight the benefits of a simulation-based training setup for image-based drone detection, compared to a training strategy relying exclusively on real-world data.
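
    One of the mitigation techniques examined, systematically varying the composition of real and synthetic training data, is simple to express in code; the sketch below is a generic illustration, and the file names and detector-training call are hypothetical.

        import random

        def mixed_training_set(real_samples, synthetic_samples, real_fraction, size, seed=0):
            """Compose a training set with a fixed real/synthetic ratio."""
            rng = random.Random(seed)
            n_real = round(real_fraction * size)
            batch = (rng.sample(real_samples, n_real)
                     + rng.sample(synthetic_samples, size - n_real))
            rng.shuffle(batch)
            return batch

        real = [f"real_{i}.png" for i in range(1000)]     # hypothetical annotated real images
        synth = [f"airsim_{i}.png" for i in range(1000)]  # hypothetical AirSim renderings

        # Sweep over compositions, from purely synthetic to purely real.
        for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
            train = mixed_training_set(real, synth, frac, size=800)
            # train_detector(train)  # placeholder for the actual training run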

    Research on Efficiency and Security for Emerging Distributed Applications

    Distributed computing has never stopped advancing since the early years of computer systems. In recent years, edge computing has emerged as an extension of cloud computing. The main idea of edge computing is to provide hardware resources in proximity to the end devices, thereby offering low network latency and high network bandwidth. However, as an emerging distributed computing paradigm, edge computing currently lacks effective system support. To this end, this dissertation studies ways of building system support for edge computing.

    We first study how to support existing, non-edge-computing applications in edge computing environments. This research leads to the design of a platform called SMOC that supports executing mobile applications on edge servers. We consider mobile applications in this project because there are a great number of mobile applications on the market and we believe that mobile-edge computing will become an important edge computing paradigm in the future. SMOC supports executing ARM-based mobile applications on x86 edge servers by establishing a running environment identical to that of the mobile device at the edge. It also exploits hardware virtualization on the mobile device to protect user input.

    Next, we investigate how to facilitate the development of edge applications with system support. This study leads to the design of an edge computing framework called EdgeEngine, which consists of a middleware running on top of the edge computing infrastructure and a powerful, concise programming interface. Developers can implement edge applications with minimal programming effort through the programming interface, and the middleware automatically fulfills routine tasks, such as data dispatching, task scheduling and lock management, in a highly efficient way.

    Finally, we envision that consensus will be an important building block for many edge applications, because we consider the consensus problem to be the most fundamental problem in distributed computing, while edge computing is an emerging distributed computing paradigm. Therefore, we investigate how to support edge applications that rely on consensus, helping them achieve good performance. This study leads to the design of a novel, Paxos-based consensus protocol called Nomad, which rapidly orders the messages received at the edge. Nomad can quickly adapt to workload changes across the edge computing system, and it incorporates a backend cloud to resolve conflicts in a timely manner. By doing so, Nomad reduces the user-perceived latency as much as possible, outperforming existing consensus protocols.
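
    Nomad itself is only summarized above, so the following is not its actual design; it is a deliberately simplified, hypothetical sketch of the general idea of an edge node that orders incoming messages on a fast local path and defers conflicting proposals to a backend cloud arbiter.

        from dataclasses import dataclass, field

        @dataclass
        class EdgeOrderer:
            """Toy message orderer: fast path at the edge, conflicts resolved by the cloud."""
            next_seq: int = 0
            log: dict = field(default_factory=dict)        # seq -> message
            conflicts: list = field(default_factory=list)

            def propose(self, message, seq=None):
                seq = self.next_seq if seq is None else seq
                if seq in self.log:                        # competing proposal for this slot
                    self.conflicts.append((seq, message))
                    return None                            # deferred to the cloud arbiter
                self.log[seq] = message
                self.next_seq = max(self.next_seq, seq + 1)
                return seq

            def resolve_with_cloud(self, arbiter):
                # The backend cloud decides a total order for conflicting proposals.
                for seq, message in arbiter(self.conflicts):
                    self.log[seq] = message
                self.conflicts.clear()

        orderer = EdgeOrderer()
        print(orderer.propose("sensor-update-1"))  # -> 0
        print(orderer.propose("sensor-update-2"))  # -> 1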

    Error resilience and concealment techniques for high-efficiency video coding

    This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness revealed that HEVC offers weak protection against network losses, with a significant impact on decoded video quality. Based on this evidence, the first contribution of this work is a new method to reduce the temporal dependencies between motion vectors, which improves the decoded video quality without compromising compression efficiency. The second contribution is a two-stage approach for reducing the mismatch of temporal predictions when video streams are received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimization to reduce the number of predictions from a single reference. At the streaming stage, a prioritization algorithm based on spatial dependencies selects a reduced set of motion vectors to be transmitted as side information, to reduce mismatched motion predictions at the decoder. The problem of error-concealment-aware video coding is also investigated to enhance overall error robustness. A new approach based on scalable coding and optimal error concealment selection is proposed, where the optimal error concealment modes are found by simulating transmission losses, followed by a saliency-weighted optimisation. Moreover, recovery residual information is encoded using a rate-controlled enhancement layer. Both are transmitted to the decoder to be used in case of data loss. Finally, an adaptive error resilience scheme is proposed to dynamically predict the video stream that achieves the highest decoded quality for a particular loss case. A neural network selects among the various video streams, encoded with different levels of compression efficiency and error protection, based on information from the video signal, the coded stream and the transmission network. Overall, the new robust video coding methods investigated in this thesis yield consistent quality gains in comparison with other existing methods, including those implemented in the HEVC reference software. Furthermore, the trade-off between coding efficiency and error robustness is also better in the proposed methods.
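
    The constrained Lagrangian rate-distortion optimization used to distribute the reference pictures follows the standard form J = D + lambda * R; the sketch below illustrates that form with a hypothetical per-reference usage constraint, not the thesis's actual cost model.

        def rd_cost(distortion, rate, lam):
            """Standard Lagrangian rate-distortion cost J = D + lambda * R."""
            return distortion + lam * rate

        def pick_reference(candidates, lam, max_uses_per_ref, use_count):
            """Choose the reference picture minimising J, subject to a usage cap
            that spreads predictions across references (hypothetical constraint)."""
            feasible = [c for c in candidates
                        if use_count.get(c["ref"], 0) < max_uses_per_ref]
            best = min(feasible, key=lambda c: rd_cost(c["distortion"], c["rate"], lam))
            use_count[best["ref"]] = use_count.get(best["ref"], 0) + 1
            return best["ref"]

        # Toy candidates: predicting the current block from each reference picture.
        candidates = [
            {"ref": 0, "distortion": 120.0, "rate": 18.0},
            {"ref": 1, "distortion": 150.0, "rate": 10.0},
            {"ref": 2, "distortion": 135.0, "rate": 12.0},
        ]
        uses = {}
        print(pick_reference(candidates, lam=2.0, max_uses_per_ref=4, use_count=uses))  # -> 0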