23 research outputs found

    Role of Ontology with Multi-Agent System in Cloud Computing

    Information technology is playing a major role in revolutionizing how organizations operate, manage, and automate their processes. However, most systems today are not reusable because they mix knowledge about the society (the organizational domain) with knowledge about the processes; since this societal knowledge differs from one application to another, the resulting systems cannot be reused. This paper addresses how dependent such applications are on their societies, and it separately defines the process ontology, the agent's knowledge, the society ontology, and the society's knowledge [1]. It introduces an ontology-based, process-oriented agent system that is society-independent, allowing most if not all organizations to use it by defining and importing the society ontology and a set of process patterns, which can be instantiated from the process ontology into the system. The proposed system can be deployed on a cloud computing platform. The evaluation takes two perspectives: design quality, assessed with cohesion and coupling measures (cohesion capturing the degree to which the system focuses on solving one particular problem), and applicability, determined by evaluating how manageable and automatable seven processes from three different societies are [2]
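    To make the separation concrete, the following is a minimal, hypothetical Python sketch of the idea: the agent system stays society-independent by importing a society ontology and instantiating generic process patterns against it. All class and method names are illustrative stand-ins, not the paper's actual design.

```python
# A minimal, hypothetical sketch: the agent system stays
# society-independent by importing a society ontology and instantiating
# generic process patterns against it. Names are illustrative only.

class SocietyOntology:
    """Domain knowledge imported per organisation ("society")."""
    def __init__(self, name, concepts):
        self.name = name
        self.concepts = concepts  # e.g. {"invoice": ["amount", "payee"]}

class ProcessPattern:
    """Reusable process template instantiated from the process ontology."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps  # ordered abstract step names

    def instantiate(self, ontology):
        # Bind abstract steps to one society's concepts; the pattern
        # itself carries no society-specific knowledge.
        return [f"{step}[{ontology.name}]" for step in self.steps]

billing = SocietyOntology("acme-billing", {"invoice": ["amount", "payee"]})
approval = ProcessPattern("approval", ["submit", "review", "approve"])
print(approval.instantiate(billing))
```

    In this sketch the process pattern never hard-codes societal knowledge, which is the property the paper's cohesion and coupling evaluation is meant to capture.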

    Developing Methods and Algorithms for Cloud Computing Management Systems in Industrial Polymer Synthesis Processes

    To date, the resources and computational capacity available to companies have been insufficient to evaluate the technological properties of emerging products using mathematical modelling tools, and several calculations often have to be performed with different initial data. A remote computing system using a high-performance cluster can overcome this challenge. This study aims to develop unified methods and algorithms for a remote computing management system for modelling polymer synthesis processes at a continuous production scale. The mathematical description of the problem-solving algorithms is based on a kinetic approach to process investigation. A conceptual scheme for the proposed service can be built as a multi-level architecture with distributed layers for data storage and computation. This approach provides the basis for a unified database of laboratory and computational experiments to address promising problems in the use of neural network technologies in chemical kinetics. The methods and algorithms embedded in the system eliminate the need for users to describe the model themselves. The operation of the system was tested by simulating the simultaneous formulation and computation of 15 to 30 tasks for an industrially significant polymer production process. Analysis of the time required showed a nearly 10-fold increase in the rate of operation when managing a set of similar tasks. The analysis shows that the described formulation and solution of problems is more time-efficient and provides better production modes. Doi: 10.28991/esj-2021-01324
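    As a rough illustration of managing a set of similar tasks concurrently, the Python sketch below submits a batch of simulation calls in parallel rather than one at a time; simulate_kinetics and its parameters are hypothetical stand-ins for the service's actual remote API.

```python
# Hypothetical sketch of batch task management: submit a set of similar
# kinetic-simulation tasks concurrently instead of sequentially.
# simulate_kinetics and its parameters are illustrative stand-ins.
import time
from concurrent.futures import ThreadPoolExecutor

def simulate_kinetics(task_id, temperature_k):
    # Stand-in for a remote call that runs one polymer-synthesis
    # simulation with its own initial data.
    time.sleep(0.1)  # pretend network + compute latency
    return task_id, temperature_k

tasks = [(i, 350.0 + i) for i in range(20)]  # 15-30 similar tasks

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(lambda t: simulate_kinetics(*t), tasks))
print(f"{len(results)} tasks finished in {time.perf_counter() - start:.2f}s")
```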

    Performance modelling and optimization for video-analytic algorithms in a cloud-like environment using machine learning

    CCTV cameras produce a large amount of video surveillance data per day, and analysing it requires significant computing resources that often need to be scalable. The emergence of the Hadoop distributed processing framework has had a significant impact on various data-intensive applications, as distributed processing increases the processing capability of the applications it serves. Hadoop is an open-source implementation of the MapReduce programming model. It automates the creation of tasks for each function, distributes data, parallelizes execution, and handles machine failures, relieving users from the complexity of managing the underlying processing so they can focus on building their application. In a practical deployment, however, the challenge of a Hadoop-based architecture is that it requires several scalable machines for effective processing, which in turn adds hardware investment cost to the infrastructure. Although a cloud infrastructure offers scalable and elastic utilization of resources, where users can scale the number of Virtual Machines (VMs) up or down as required, a user such as a CCTV system operator intending to use a public cloud would want to know what cloud resources (i.e. how many VMs) need to be deployed so that the processing can be done in the fastest (or within a known time constraint) and most cost-effective manner. Often such resources will also have to satisfy practical, procedural and legal requirements. The capability to model a distributed processing architecture in which resource requirements can be effectively and optimally predicted would thus be a useful tool. The literature offers no clear and comprehensive modelling framework that provides proactive resource-allocation mechanisms to satisfy a user's target requirements, especially for a processing-intensive application such as video analytics. In this thesis, with the aim of closing the above research gap, the research first examines the current legal practices and requirements of implementing a video surveillance system within a distributed processing and data storage environment, since the legal validity of data gathered or processed within such a system is vital for its applicability in such domains. The thesis then presents a comprehensive framework for the performance modelling and optimization of resource allocation when deploying a scalable distributed video analytic application in a Hadoop-based framework running on a virtualized cluster of machines. The proposed modelling framework investigates the use of several machine learning algorithms, such as decision trees (M5P, RepTree), Linear Regression, the Multi-Layer Perceptron (MLP) and the Ensemble Classifier Bagging model, to model and predict the execution time of video analytic jobs based on infrastructure-level as well as job-level parameters. Further, in order to allocate resources under constraints to obtain optimal performance in terms of job execution time, a Genetic Algorithm (GA) based optimization technique is proposed. Experimental results demonstrate the framework's capability to predict the job execution time of a given video analytic task from infrastructure and input-data-related parameters, and its ability to determine the minimum job execution time given constraints on these parameters. Given the above, the thesis contributes to the state-of-the-art in distributed video analytics design, implementation, performance analysis and optimisation
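    A minimal sketch of the modelling step, assuming synthetic data: a bagging regressor (scikit-learn's default base learner is a decision tree) is trained to map infrastructure-level and job-level parameters to execution time. The feature names and the data-generating rule are illustrative, not the thesis's experimental setup.

```python
# Illustrative sketch, using synthetic data: predict video-analytic job
# execution time from infrastructure- and job-level parameters with a
# bagging ensemble (default base learner is a decision tree).
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)
n = 200
num_vms = rng.integers(1, 16, n).astype(float)   # infrastructure-level parameter
video_minutes = rng.uniform(1.0, 120.0, n)       # job-level parameter
X = np.column_stack([num_vms, video_minutes])
# Synthetic ground truth: time grows with input size, shrinks with VMs.
y = 30.0 + 5.0 * video_minutes / num_vms + rng.normal(0.0, 2.0, n)

model = BaggingRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[8.0, 60.0]]))  # predicted execution time (seconds)
```

    A GA-based search over such parameters could then look for the configuration that minimizes the predicted execution time, subject to the user's constraints.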

    A Framework to support cloud adoption decision-making by SMEs in Tamil Nadu

    Cloud computing is a disruptive technology that represents a paradigm shift in the way computing services are purchased and maintained within organisations. Due to benefits such as low capital cost, scalability and high reliability, cloud infrastructure has the features and facilities to speed up Information Technology (IT) adoption in developing countries. However, moving data and applications to a cloud environment is not straightforward and can be very challenging, as decision makers need to consider numerous technical and organisational aspects before deciding to adopt cloud infrastructure. Existing models and frameworks are available to support different stages of the cloud adoption decision-making process; however, they were developed for technologically developed countries, and very little investigation has been done to determine whether the factors that affect cloud adoption are any different in a technologically developing country like India. This research aims to provide a framework to aid cloud adoption among SMEs in Tamil Nadu, a southern state of the Indian Union. The major contribution to knowledge is the framework, based on Scientific Decision Making (SDM), developed to support SME decision makers at all the different stages of the cloud adoption decision-making process. Theories of technology adoption such as Diffusion of Innovation (DOI) and the Technology, Organisation and Environment (TOE) framework, along with Multi-Criteria Decision Making (MCDM), form the theoretical underpinnings of the research. Primary data was collected via two web-based questionnaire surveys among SME decision makers from Tamil Nadu. Six determinants of cloud adoption were identified: relative advantage, compatibility, innovativeness, organisation size, external issues and industry type. The findings further indicate that an organisational factor specific to the SME's location is a very important decision factor when planning cloud adoption. The proposed cloud adoption decision support framework (CADSF) includes two tools, namely cloud suitability assessment and cloud service identification, and provides a preliminary structure for developing a knowledge-driven Decision Support System (DSS) to support cloud adoption among SMEs in Tamil Nadu. Finally, based on the findings of the research, it is expected that, with developments to the existing cloud infrastructure, especially the availability of reliable internet and increased awareness, more SMEs in Tamil Nadu will adopt cloud computing infrastructure
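    As a simple illustration of the MCDM flavour of such a framework, the sketch below scores an SME's cloud suitability as a weighted sum over the six identified determinants. The weights and ratings are placeholders, not values from the research.

```python
# Illustrative weighted-sum score over the six identified determinants,
# one simple MCDM technique such a framework could build on. Weights
# and ratings below are placeholders, not the framework's values.
DETERMINANT_WEIGHTS = {
    "relative_advantage": 0.25,
    "compatibility": 0.20,
    "innovativeness": 0.15,
    "organisation_size": 0.10,
    "external_issues": 0.15,
    "industry_type": 0.15,
}

def suitability_score(ratings):
    """ratings: determinant name -> score in [0, 1] for one SME."""
    return sum(w * ratings[name] for name, w in DETERMINANT_WEIGHTS.items())

sme = {"relative_advantage": 0.8, "compatibility": 0.6, "innovativeness": 0.7,
       "organisation_size": 0.4, "external_issues": 0.5, "industry_type": 0.9}
print(f"cloud suitability: {suitability_score(sme):.2f}")  # 0 (poor) to 1 (strong)
```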

    IT Laws in the Era of Cloud-Computing

    This book documents the findings and recommendations of research into the question of how IT laws should develop, on the understanding that today's information and communication technology is shaped by cloud computing, which lies at the foundation of contemporary and future IT as its most widespread enabler. The study develops along both a comparative and an interdisciplinary axis: comparatively, by examining EU and US law, and on an interdisciplinary level, by dealing with law and IT. Focusing on data protection and privacy in cloud environments, the book examines three main challenges on the road towards more efficient cloud computing regulation:
    - understanding the reasons behind the development of diverging legal structures and schools of thought on IT law
    - ensuring privacy and security in digital clouds
    - converging regulatory approaches to digital clouds in the hope of more harmonised IT laws in the future

    Towards Improving the Reliability of Live Migration Operations in OpenStack Clouds

    Cloud computing has become commonplace with the help of virtualization as an enabling technology. Virtualization abstracts pools of compute resources and represents them as instances of virtual machines (VMs). End users can consume the resources of these VMs as if they were on a physical machine. Moreover, running VMs can be migrated from one node (the source node, usually in a data center) to another (the destination node, possibly in a different data center) without disrupting services, a process known as live VM migration. Live migration is a powerful tool that system administrators can leverage to, for example, balance the loads in a data center or relocate an application to improve its performance and/or reliability. However, if not planned carefully, a live migration can fail, which can lead to service outages or significant performance degradation. Hence, it is utterly important to be able to assess and forecast the performance of live migration operations before they are executed. The research community has proposed models and mechanisms to improve the reliability of live migration; yet, because of the scale, complexity and dynamic nature of cloud environments, live migration operations still fail. In this thesis, we rely on predictions made by a Random Forest model and scheduling policies generated by a Markov Decision Process (MDP) to decide on the migration time and destination node of a VM during a live migration operation in OpenStack. We conduct a case study to assess the effectiveness of our approach, using the fault injection framework DestroyStack. Results show that our proposed approach can predict live migration failures with an accuracy of 95%. By identifying the best time for live migration with MDP models, on average, we can reduce the live migration time by 74% and the downtime by 21%
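    A hedged sketch of the prediction step, using synthetic data: a Random Forest classifier estimates the probability that a migration fails from features of the candidate destination and the VM. The feature set and labelling rule are illustrative, not the study's dataset.

```python
# Hedged sketch with synthetic data: a Random Forest classifier
# estimating live-migration failure probability. The two features and
# the labelling rule are illustrative, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
host_load = rng.uniform(0.0, 1.0, n)    # destination-node CPU load
dirty_rate = rng.uniform(0.0, 1.0, n)   # VM memory dirtying rate
X = np.column_stack([host_load, dirty_rate])
# Synthetic rule: migrations tend to fail when both pressures are high.
y = ((host_load + dirty_rate) > 1.2).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba([[0.9, 0.8]])[0, 1])  # estimated failure probability
```

    In the thesis's approach, an MDP policy then uses such predictions to choose when to migrate and to which destination node.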

    Cloud Computing: Concepts, Technology & Architecture by Thomas Erl, Zaigham Mahmood and Ricardo Puttini

    No full text