
    Performance modelling and optimization for video-analytic algorithms in a cloud-like environment using machine learning

    CCTV cameras produce a large amount of video surveillance data per day, and analysing it requires significant computing resources that often need to be scalable. The emergence of the Hadoop distributed processing framework has had a significant impact on various data-intensive applications, as distributed processing increases the processing capability of the applications it serves. Hadoop is an open-source implementation of the MapReduce programming model. It automates the creation of tasks for each function, the distribution of data, the parallelization of execution and the handling of machine failures, relieving users from the complexity of managing the underlying processing so that they can focus on building their applications. In a practical deployment, the challenge of a Hadoop-based architecture is that it requires several scalable machines for effective processing, which in turn adds hardware investment cost to the infrastructure. Although a cloud infrastructure offers scalable and elastic utilization of resources, where users can scale the number of Virtual Machines (VMs) up or down as required, a user such as a CCTV system operator intending to use a public cloud would aspire to know what cloud resources (i.e. number of VMs) need to be deployed so that the processing can be done in the fastest (or within a known time constraint) and most cost-effective manner. Often such resources will also have to satisfy practical, procedural and legal requirements. The capability to model a distributed processing architecture in which the resource requirements can be effectively and optimally predicted would thus be a useful tool. The literature offers no clear and comprehensive modelling framework that provides proactive resource allocation mechanisms to satisfy a user's target requirements, especially for a processing-intensive application such as video analytics. In this thesis, with the aim of closing the above research gap, novel research is first initiated by understanding the current legal practices and requirements of implementing a video surveillance system within a distributed processing and data storage environment, since the legal validity of data gathered or processed within such a system is vital for its applicability in such domains. Subsequently, the thesis presents a comprehensive framework for the performance modelling and optimization of resource allocation in deploying a scalable distributed video analytics application in a Hadoop-based framework running on a virtualized cluster of machines. The proposed modelling framework investigates the use of several machine learning algorithms, such as decision trees (M5P, RepTree), Linear Regression, Multi-Layer Perceptron (MLP) and the Ensemble Classifier Bagging model, to model and predict the execution time of video analytic jobs based on infrastructure-level as well as job-level parameters. Further, in order to allocate resources under constraints to obtain optimal performance in terms of job execution time, we propose a Genetic Algorithm (GA) based optimization technique. Experimental results demonstrate the proposed framework's capability to successfully predict the job execution time of a given video analytic task based on infrastructure and input-data-related parameters, and its ability to determine the minimum job execution time given constraints on these parameters.
    Given the above, the thesis contributes to the state of the art in distributed video analytics design, implementation, performance analysis and optimisation.
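    As a rough illustration of the two ingredients this abstract describes, the sketch below (ours, not the thesis code; the feature names, the synthetic cost model and all constants are assumptions) trains a bagged decision-tree regressor to predict job execution time from infrastructure and job parameters, then runs a tiny genetic algorithm over the fitted model to find the configuration with the lowest predicted execution time under a constraint on the number of VMs.

```python
# Sketch: (1) learn a model mapping infrastructure/job parameters to
# execution time, (2) search that model with a simple genetic algorithm.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic training data: columns = [num_vms, vm_cores, video_minutes]
X = rng.uniform([1, 1, 10], [16, 8, 120], size=(500, 3))
# Assumed cost model for illustration only: work split across cores, plus overhead.
y = 30 * X[:, 2] / (X[:, 0] * X[:, 1]) + 2 * X[:, 0] + rng.normal(0, 1, 500)

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50).fit(X, y)

# Tiny GA: minimise predicted execution time for a fixed 60-minute video,
# under the constraint num_vms <= 10.
pop = rng.uniform([1, 1], [10, 8], size=(40, 2))
for _ in range(30):
    fitness = model.predict(np.column_stack([pop, np.full(len(pop), 60.0)]))
    parents = pop[np.argsort(fitness)[:10]]                 # selection
    genes = parents[rng.integers(0, 10, (30, 2)), [0, 1]]   # uniform crossover
    children = np.clip(genes + rng.normal(0, 0.3, genes.shape), [1, 1], [10, 8])
    pop = np.vstack([parents, children])                    # next generation

best = pop[np.argmin(model.predict(np.column_stack([pop, np.full(len(pop), 60.0)])))]
print("best config (num_vms, vm_cores):", np.round(best, 2))
```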

    An architecture for adaptive task planning in support of IoT-based machine learning applications for disaster scenarios

    The proliferation of the Internet of Things (IoT) in conjunction with edge computing has recently opened up possibilities for several new applications. Typical examples are Unmanned Aerial Vehicles (UAVs) deployed for rapid disaster response, photogrammetry, surveillance, and environmental monitoring. To support the flourishing development of machine-learning-assisted applications across all these networked settings, a common challenge is the provision of a persistent service, i.e., a service capable of consistently maintaining a high level of performance in the face of possible failures. To address these service-resilience challenges, we propose APRON, an edge solution for distributed and adaptive task planning management in a network of IoT devices, e.g., drones. Exploiting Jackson's network model, our architecture applies a novel planning strategy to better support control and monitoring operations while the states of the network evolve. To demonstrate the functionalities of our architecture, we also implemented a deep-learning-based audio-recognition application using the APRON NorthBound interface to detect human voices in challenged networks. The application's logic uses Transfer Learning to improve the audio classification accuracy and the runtime of UAV-based rescue operations.
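    For readers unfamiliar with the Jackson network model the architecture exploits, the following sketch (ours, not APRON's code; the routing matrix and rates are illustrative assumptions) solves the open-network traffic equations and derives per-node utilisation and mean end-to-end response time by treating each node as an M/M/1 queue.

```python
# Open Jackson network analysis: traffic equations + M/M/1 per node.
import numpy as np

gamma = np.array([2.0, 0.0, 1.0])   # external arrival rate per node (jobs/s)
R = np.array([[0.0, 0.5, 0.2],      # R[i][j]: prob. a job leaving i goes to j
              [0.0, 0.0, 0.6],
              [0.1, 0.0, 0.0]])
mu = np.array([6.0, 5.0, 7.0])      # service rate per node (jobs/s)

# Traffic equations: lambda = gamma + R^T lambda  =>  (I - R^T) lambda = gamma
lam = np.linalg.solve(np.eye(3) - R.T, gamma)
rho = lam / mu                      # per-node utilisation; stability needs rho < 1
N = rho / (1 - rho)                 # mean number of jobs at each M/M/1 node
T = N.sum() / gamma.sum()           # mean end-to-end response time (Little's law)
print(f"utilisation={np.round(rho, 3)}, mean response time={T:.3f}s")
```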

    Predicting Workflow Task Execution Time in the Cloud using a Two-Stage Machine Learning Approach

    Many techniques, such as scheduling and resource provisioning, rely on performance prediction of workflow tasks for varying input data. However, such estimates are difficult to generate in the cloud. This paper introduces a novel two-stage machine learning approach for predicting workflow task execution times in the cloud for varying input data. In order to achieve high-accuracy predictions, our approach relies on parameters reflecting runtime information and two stages of predictions. Empirical results for four real-world workflow applications and several commercial cloud providers demonstrate that our approach outperforms existing prediction methods. In our experiments, our approach achieves a best-case and worst-case estimation error of 1.6% and 12.2%, respectively, while existing methods produced errors beyond 20% (in some cases even over 50%) in more than 75% of the evaluated workflow tasks. In addition, we show that the models our approach builds for a specific cloud can be ported to new clouds with low effort and low errors, requiring only a small number of executions.
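    A minimal sketch of the two-stage idea follows, with hypothetical features and synthetic data (the paper's actual parameter set and models differ): the first stage predicts runtime information that is unavailable before execution, and the second stage uses that prediction alongside the pre-execution features to estimate the task execution time.

```python
# Two-stage prediction: stage 1 estimates runtime parameters,
# stage 2 predicts execution time from features + stage-1 output.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
input_feats = rng.uniform(1, 100, size=(300, 2))   # e.g. input size, VM type code
runtime_params = input_feats @ np.array([[1.5], [0.4]]) + rng.normal(0, 2, (300, 1))
exec_time = 0.8 * runtime_params[:, 0] + 5 + rng.normal(0, 1, 300)

# Stage 1: predict runtime information (e.g. bytes read) before execution.
stage1 = RandomForestRegressor(n_estimators=100).fit(input_feats, runtime_params.ravel())
# Stage 2: predict execution time using the stage-1 estimate as an extra feature.
X2 = np.column_stack([input_feats, stage1.predict(input_feats)])
stage2 = RandomForestRegressor(n_estimators=100).fit(X2, exec_time)

new = np.array([[40.0, 3.0]])
pred = stage2.predict(np.column_stack([new, stage1.predict(new)]))
print(f"predicted execution time: {pred[0]:.1f}s")
```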

    Cloud Computing with Artificial Intelligence Techniques for Effective Disease Detection

    With the current rapid advancement of cloud computing (CC) technology, which has enabled the connectivity of many intelligent objects and detectors and created smooth data interchange between systems, there is now a strict need for platforms for data processing, the Internet of Things (IoT), and data management. The field of medicine in CC is receiving a lot of attention from the scientific world, as well as the private and governmental sectors. Thanks to these applications, thousands of individuals now have access to a digital system where they may regularly obtain helpful medical advice for leading a healthy life. The use of artificial intelligence (AI) in the medical field has several advantages, including the ability to automate processes and analyze large patient databases to offer superior medicine more quickly and effectively. IoT-enabled smart health tools provide both internet solutions and a variety of features. CC infrastructure improves these healthcare solutions by enabling safe storage and accessibility. We propose a novel cloud computing and artificial intelligence (CC-AI) based smart medical solution for the surveillance and detection of major illnesses, providing superior solutions to users. For disease detection, we propose an AI-based combination of whale optimization (WO) and a fuzzy neural network (FNN), termed WO-FNN. Patients' IoT wearable sensor data is gathered for detection. The accuracy, sensitivity, specificity, and computation time are evaluated and compared with existing techniques.
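    Whale optimization is a population-based metaheuristic; the sketch below (ours, not the paper's code) shows its core update rules minimising a stand-in loss function. In the paper's setting, the loss would instead be the fuzzy neural network's detection error on the gathered sensor data.

```python
# Minimal whale optimization algorithm (WOA) over a stand-in loss.
import numpy as np

def loss(x):                        # stand-in for the FNN's training loss
    return np.sum(x ** 2)

rng = np.random.default_rng(2)
dim, n_whales, iters = 5, 20, 100
X = rng.uniform(-5, 5, (n_whales, dim))
best = min(X, key=loss).copy()

for t in range(iters):
    a = 2 - 2 * t / iters                       # linearly decreasing coefficient
    for i in range(n_whales):
        r, p, l = rng.random(dim), rng.random(), rng.uniform(-1, 1)
        A, C = 2 * a * r - a, 2 * r
        if p < 0.5:
            if np.all(np.abs(A) < 1):           # exploit: encircle current best
                X[i] = best - A * np.abs(C * best - X[i])
            else:                               # explore: move toward a random whale
                rand = X[rng.integers(n_whales)]
                X[i] = rand - A * np.abs(C * rand - X[i])
        else:                                   # spiral bubble-net update
            X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        if loss(X[i]) < loss(best):
            best = X[i].copy()

print("best loss found:", loss(best))
```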

    Energy management and guidelines to digitalisation of integrated natural gas distribution systems equipped with expander technology

    In a swirling dynamic interaction, digital innovation, the environment and anthropological evolution are swiftly shaping the smart grid scenario. Integration and flexibility are the keywords in this emergent picture, characterised by a low carbon footprint. Digitalisation, within the natural limits imposed by thermodynamics, seems to offer excellent opportunities for these purposes. Of course, here a new challenge starts: how should digital technologies be employed to achieve these objectives? How do we ensure a digital retrofit does not lead to an increase in carbon emissions? In the author's opinion, as long as these remain generalised questions, no answer exists: the need to contextualise the issue emerges from the variety of characteristics of energy systems and from their interactions with external processes. To address these points, in the first part of this research the author presents a collection of his research contributions on energy management in natural gas pressure reduction stations equipped with turbo expander technology. Furthermore, starting from the state of the art and the author's previous research contributions, guidelines for the digital retrofit of a specific kind of distributed energy system are outlined. A possible configuration of the ideal ICT architecture is then extracted; this aims to achieve a higher level of coordination involving natural gas distribution and transportation, local energy production, thermal user integration and electric vehicle charging. Finally, the barriers and risks of a digitalisation process are critically analysed, outlining future research needs.
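    For context on the recoverable energy at stake, a back-of-the-envelope estimate using the standard isentropic expansion relation follows; the code is ours and every number in it is an assumption, not a figure from the thesis.

```python
# Recoverable shaft power of a turbo expander at a pressure reduction station:
# W = eta_is * m_dot * cp * T_in * (1 - (p_out/p_in)^((k-1)/k))
m_dot = 5.0               # natural gas mass flow rate [kg/s] (assumed)
cp = 2200.0               # specific heat of natural gas [J/(kg K)] (assumed)
T_in = 288.0              # inlet gas temperature [K], after upstream preheating
p_in, p_out = 50e5, 5e5   # expansion from 50 bar to 5 bar [Pa] (assumed)
k = 1.3                   # heat capacity ratio of natural gas (assumed)
eta_is = 0.8              # isentropic efficiency of the expander (assumed)

w_ideal = cp * T_in * (1 - (p_out / p_in) ** ((k - 1) / k))  # specific work [J/kg]
P = eta_is * m_dot * w_ideal / 1e3
print(f"recoverable shaft power ~ {P:.0f} kW")
```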

    Dispatcher3 D1.1 - Technical resources and problem definition

    This deliverable starts from the proposal of Dispatcher3 and incorporates the developments produced during the first five months of the project: activities on different work packages, interaction with the Topic Manager and Project Officer, and input received during the first Advisory Board meeting and follow-up consultations. The deliverable presents the definition of the Dispatcher3 concept and methodology. It includes the high-level requirements of the prototype, preliminary data requirements, preliminary technical infrastructure requirements, preliminary identification of data processing and analytic techniques, and a preliminary definition of scenarios. The deliverable aims at defining the consortium's view of the project at this early stage, incorporating the feedback obtained from the Advisory Board and highlighting the further activities required to define some aspects of the project.

    A machine learning-based investigation of cloud service attacks

    In this thesis, the security challenges of cloud computing are investigated at the Infrastructure as a Service (IaaS) layer, as security is one of the major concerns related to cloud services. As IaaS encompasses different aspects of security, the research has been further narrowed down to focus on network-layer security. A review of existing research revealed that several types of attacks and threats can affect cloud security; therefore, intrusion defence implementations are needed to protect cloud services. Intrusion Detection (ID) is one of the most effective solutions for reacting to cloud network attacks. [Continues.]
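    A minimal sketch of the ML-based intrusion detection direction the thesis investigates: a classifier over network-flow features. The feature names and synthetic data below are our assumptions; an actual study would use a labelled dataset such as NSL-KDD or CICIDS.

```python
# Toy flow classifier: benign vs. attack traffic from flow statistics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)
# Hypothetical flow features: duration, bytes, packets, distinct dst ports
X_normal = rng.normal([10, 5e4, 40, 2], [3, 2e4, 10, 1], size=(500, 4))
X_attack = rng.normal([1, 1e3, 5, 60], [0.5, 5e2, 2, 15], size=(500, 4))
X = np.vstack([X_normal, X_attack])
y = np.array([0] * 500 + [1] * 500)   # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "attack"]))
```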

    A Novel Approach for sEMG Gesture Recognition using Resource-constrained Hardware Platforms

    Classifying human gestures using surface electromyographic (sEMG) sensors is a challenging task. Wearable sensors have proven to be extremely useful in this context, but their performance is limited by several factors (signal noise, computing resources, battery consumption, etc.). In particular, computing resources impose a limitation in many application scenarios in which lightweight classification approaches are desirable. Recent research has shown that machine learning techniques are useful for human gesture classification once their salient features have been determined. This paper presents a novel approach for human gesture classification in which two different strategies are combined: a) a technique based on autoencoders is used to perform feature extraction; b) two alternative machine learning algorithms (namely J48 and K*) are then used for the classification stage. Empirical results are provided, showing that for limited computing power platforms our approach outperforms other alternative methodologies.
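    The pipeline can be sketched as follows (our illustration, with synthetic stand-in signals): an autoencoder is trained to reconstruct sEMG windows through a narrow hidden layer, whose activations then serve as compact features for lightweight classifiers. scikit-learn's DecisionTreeClassifier stands in for Weka's J48, and k-NN roughly for K*; both substitutions are ours.

```python
# Autoencoder feature extraction + lightweight classifiers on toy sEMG windows.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
# Synthetic sEMG: 400 windows of 64 samples; the two gesture classes differ
# in dominant frequency (a crude assumption for illustration only).
t = np.linspace(0, 1, 64)
y = rng.integers(0, 2, 400)
X = np.array([np.sin(2 * np.pi * (10 if c else 25) * t) + rng.normal(0, 0.3, 64)
              for c in y])

# Autoencoder: reconstruct the input through a narrow 8-unit hidden layer,
# then reuse that layer's activations as the extracted features.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu", max_iter=2000,
                  random_state=0).fit(X, X)
features = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])  # hidden layer output

for clf in (DecisionTreeClassifier(), KNeighborsClassifier()):
    acc = clf.fit(features[:300], y[:300]).score(features[300:], y[300:])
    print(type(clf).__name__, f"accuracy: {acc:.2f}")
```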