18 research outputs found

    Resources management architecture and algorithms for virtualized IVR applications in cloud environment

    Get PDF
    Interactive Voice Response (IVR) applications are ubiquitous nowadays. IVR is a telephony technology that allows interaction with a wide range of automated information systems via a telephone keypad or voice commands. Cloud computing is a newly emerging paradigm that hosts and provides services over the Internet, with many inherent benefits. It has three major service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Cloud computing is based on virtualization technology, which enables entities to co-exist on the same substrates. These entities may be operating systems co-existing on the same hardware, applications co-existing on the same operating system, or even full-blown networks co-existing on the same routers. The key benefit is efficiency through the sharing of physical resources. Several multimedia applications are provided in cloud environments nowadays. However, to the best of our knowledge, there is no architecture that creates and manages IVR applications in a cloud environment. Therefore, we propose a new virtualized architecture that can create, deploy and manage IVR applications in a cloud environment. We also propose two new algorithms for resource management and task scheduling, an essential part of resource sharing in such an environment.

    Multi-objective ACO resource consolidation in cloud computing environment

    Get PDF
    Cloud computing systems provide services to users based on a pay-as-you-go model. The high volume of interest and the large number of user requests in cloud computing have resulted in the creation of data centers with large numbers of physical machines. These data centers consume huge amounts of electrical energy and produce substantial air emissions. To improve data center efficiency, resource consolidation using virtualization technology is becoming important for reducing the environmental impact of data centers (e.g. electricity usage and carbon dioxide emissions). Using virtualization technology, multiple virtual machine (VM) instances, logical slices of a physical machine, can be initialised on a single physical machine. As a result, the amount of active hardware is reduced and the utilisation of physical resources is increased. The present thesis focuses on the problems of virtual machine placement and virtual machine consolidation in a cloud computing environment. VM placement is the process of mapping virtual machines (VMs) to physical machines (PMs) (Beloglazov and Buyya). VM consolidation reallocates and optimizes the mapping of VMs to PMs using migration. The goal is to minimize energy consumption, resource wastage and the energy communication cost between network elements within a data center, under QoS constraints, through VM placement and VM consolidation algorithms. Multi-objective algorithms are proposed to control the trade-off between energy, performance and quality of service. The algorithms have been compared with other approaches using the CloudSim toolkit. The results demonstrate that the proposed algorithms can find solutions that balance the different objectives. Our main contributions are a multi-objective optimization placement approach that minimizes the total energy consumption of a data center, resource wastage and energy communication cost, and a multi-objective consolidation approach that minimizes the total energy consumption of a data center, the number of migrations and the number of PMs, and reconfigures resources to satisfy the SLA. The results have also been compared with other single-objective and multi-objective algorithms.
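    The weighted multi-objective placement idea above can be sketched as a simple greedy heuristic. This is a minimal illustration, not the thesis's ACO algorithm: each feasible PM is scored by a weighted sum of an energy proxy (preferring already-loaded hosts, so fewer PMs stay active) and a resource-wastage term (the imbalance between leftover CPU and memory). All field names and weights are assumptions.

```python
def wastage(cpu_free, mem_free):
    # Imbalance between leftover CPU and memory indicates wasted capacity.
    eps = 1e-9
    return abs(cpu_free - mem_free) / (cpu_free + mem_free + eps)

def place_vm(vm, pms, w_energy=0.5, w_waste=0.5):
    """Pick the PM minimizing a weighted multi-objective cost."""
    best, best_cost = None, float("inf")
    for pm in pms:
        cpu_free = pm["cpu"] - pm["used_cpu"] - vm["cpu"]
        mem_free = pm["mem"] - pm["used_mem"] - vm["mem"]
        if cpu_free < 0 or mem_free < 0:
            continue  # VM does not fit on this PM
        # Energy proxy: utilization after placement. Packing load onto
        # already-busy hosts keeps more PMs idle (and powered down).
        util = (pm["used_cpu"] + vm["cpu"]) / pm["cpu"]
        cost = w_energy * (1 - util) + w_waste * wastage(cpu_free, mem_free)
        if cost < best_cost:
            best, best_cost = pm, cost
    if best is not None:
        best["used_cpu"] += vm["cpu"]
        best["used_mem"] += vm["mem"]
    return best

pms = [{"cpu": 16, "mem": 32, "used_cpu": 8, "used_mem": 16},
       {"cpu": 16, "mem": 32, "used_cpu": 0, "used_mem": 0}]
chosen = place_vm({"cpu": 4, "mem": 8}, pms)  # prefers the loaded host
```

    An ACO variant would replace the single greedy pass with many ants building placements guided by pheromone trails over (VM, PM) pairs, but the per-candidate cost function plays the same role.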

    A Decade of Research in Fog computing: Relevance, Challenges, and Future Directions

    Full text link
    Recent developments in the Internet of Things (IoT) and real-time applications have led to unprecedented growth in connected devices and the data they generate. Traditionally, this sensor data is transferred to and processed at the cloud, and the control signals are sent back to the relevant actuators as part of the IoT applications. This cloud-centric IoT model resulted in increased latencies, higher network load, and compromised privacy. To address these problems, Fog Computing was coined by Cisco in 2012, a decade ago; it utilizes proximal computational resources for processing the sensor data. Ever since its proposal, fog computing has attracted significant attention, and the research community has focused on addressing different challenges such as fog frameworks, simulators, resource management, placement strategies, quality-of-service aspects, and fog economics. However, after a decade of research, we still do not see large-scale deployments of public/private fog networks that could be utilized in realizing interesting IoT applications. In the literature, we only see pilot case studies, small-scale testbeds, and the use of simulators to demonstrate the scale of the proposed models addressing the respective technical challenges. There are several reasons for this; most importantly, fog computing has not yet presented a clear business case for companies and participating individuals. This paper summarizes the technical, non-functional and economic challenges that have been posing hurdles to adopting fog computing, consolidating them across different clusters. The paper also summarizes the relevant academic and industrial contributions in addressing these challenges and provides future research directions for realizing real-time fog computing applications, also considering emerging trends such as federated learning and quantum computing. Comment: Accepted for publication at the Wiley Software: Practice and Experience journal.

    An Ethical Evaluation of Three Digitization Measures in the Health Sector: How to Better Accommodate Patients Suffering Chronic Diseases

    Get PDF
    This master's thesis is a qualitative description and ethical analysis of three digitization measures introduced to follow up patients suffering from chronic diseases. The thesis gives a historical introduction to the ethical development of the patient-physician relationship, focusing on the four ethical principles: beneficence, non-maleficence, justice, and respect for autonomy. These four ethical principles form the framework for the ethical analysis of the three digitization measures the thesis examines. A central part of the thesis is to show that interdisciplinary collaboration between several fields is required if technology and healthcare are to follow ethical norms. The thesis discusses the ethical implications we encounter when ethics and digitization meet in a symbiotic relationship within medical follow-up methods. Examples of such ethical implications include generational gaps in the usability of digitized medical measures, the fair distribution of medical measures and resources regardless of economic and geographic background, and the stigmatization of certain patient groups suffering from chronic diseases. The three digitization measures differ in their methodological implementation, which gives rise to different ethical challenges related to the four ethical principles. The thesis highlights the importance of sound ethical guidelines being reflected in the development and implementation of digitized measures for following up patients with chronic diseases. Master's thesis in Digital Culture (DIKULT35).

    Towards Tactile Internet in Beyond 5G Era: Recent Advances, Current Issues and Future Directions

    Get PDF
    Tactile Internet (TI) is envisioned to create a paradigm shift from content-oriented communications to steering/control-based communications by enabling real-time transmission of haptic information (i.e., touch, actuation, motion, vibration, surface texture) over the Internet, in addition to conventional audiovisual and data traffic. This emerging TI technology, also considered the next evolution phase of the Internet of Things (IoT), is expected to create numerous opportunities for technology markets in a wide variety of applications ranging from teleoperation systems and Augmented/Virtual Reality (AR/VR) to automotive safety and eHealthcare, towards addressing the complex problems of human society. However, the realization of TI over wireless media in the upcoming Fifth Generation (5G) and beyond networks creates various non-conventional communication challenges and stringent requirements in terms of ultra-low latency, ultra-high reliability, high-data-rate connectivity, resource allocation, multiple access, and the quality-latency-rate tradeoff. To this end, this paper aims to provide a holistic view of wireless TI along with a thorough review of the existing state of the art, to identify and analyze the technical issues involved, to highlight potential solutions, and to propose future research directions. First, starting with the vision of TI, recent advances, and a review of related survey/overview articles, we present a generalized framework for wireless TI in the Beyond 5G Era, including a TI architecture, the main technical requirements, the key application areas, and potential enabling technologies. Subsequently, we provide a comprehensive review of the existing TI works by broadly categorizing them into three main paradigms, namely haptic communications, wireless AR/VR, and autonomous, intelligent and cooperative mobility systems. Next, potential enabling technologies across the physical/Medium Access Control (MAC) and network layers are identified and discussed in detail. Security and privacy issues of TI applications are also discussed, along with some promising enablers. Finally, we present some open research challenges and recommend promising future research directions.

    Adaptive Failure-Aware Scheduling for Hadoop

    Get PDF
    Given the dynamic nature of cloud environments, failures are the norm rather than the exception in the data centers powering cloud frameworks. Despite the diversity of recovery mechanisms integrated into cloud frameworks, their schedulers still make poor scheduling decisions that lead to task failures due to unforeseen events such as unpredicted service demands or hardware outages. Traditionally, simulation and analytical modeling have been widely used to analyze the impact of scheduling decisions on failure rates. However, they cannot provide accurate results or exhaustive coverage of cloud systems, especially when failures occur. In this thesis, we present new approaches for modeling and verifying an adaptive failure-aware scheduling algorithm for Hadoop that detects these failures early and reschedules tasks according to changes in the cloud. Hadoop is the framework of choice on many off-the-shelf clusters in the cloud for processing data-intensive applications by efficiently running them across multiple distributed machines. The proposed scheduling algorithm for Hadoop relies on predictions made by machine learning algorithms trained on previously executed tasks and data collected from the Hadoop environment. To further improve Hadoop scheduling decisions on the fly, we use reinforcement learning techniques to select an appropriate scheduling action for each scheduled task. Furthermore, we propose an adaptive algorithm to dynamically detect node failures in Hadoop. We implement the above approaches in ATLAS: an AdapTive Failure-Aware Scheduling algorithm that can be built on top of existing Hadoop schedulers. To illustrate the usefulness and benefits of ATLAS, we conduct a large empirical study on a Hadoop cluster deployed on Amazon Elastic MapReduce (EMR), comparing the performance of ATLAS to that of three Hadoop scheduling algorithms (FIFO, Fair, and Capacity). The results show that ATLAS outperforms these scheduling algorithms in terms of failure rates, execution times, and resource utilization. Finally, we propose a new methodology to formally identify the impact of Hadoop's scheduling decisions on failure rates. We use model checking to verify some of the most important scheduling properties in Hadoop (schedulability, resource-deadlock freeness, and fairness) and provide possible strategies to avoid their violation in ATLAS. The formal verification of the Hadoop scheduler makes it possible to identify more task failures and hence reduce the number of failures in ATLAS.
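    The reinforcement-learning step described above, selecting a scheduling action for a task on the fly, can be illustrated with a tiny tabular Q-learning agent using an epsilon-greedy policy. This is a hedged sketch: the states ("node_healthy", "node_flaky"), the action set, and the rewards are invented for illustration and are not ATLAS's actual state or action space.

```python
import random

# Hypothetical scheduling actions the agent can choose among for a task.
ACTIONS = ["schedule", "delay", "speculate", "kill_restart"]

class SchedulerAgent:
    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = {}          # (state, action) -> estimated long-term reward
        self.epsilon = epsilon  # exploration probability
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor

    def choose(self, state):
        # Epsilon-greedy: explore occasionally, otherwise exploit the Q-table.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

agent = SchedulerAgent(epsilon=0.0)  # fully greedy for a deterministic demo
agent.update("node_healthy", "schedule", reward=1.0, next_state="done")
agent.update("node_flaky", "speculate", reward=0.5, next_state="done")
```

    After these updates the agent schedules tasks directly on healthy nodes but launches speculative copies on flaky ones; in a real scheduler the reward would be derived from observed task outcomes (success, latency, failure).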

    Cyber-Physical Threat Intelligence for Critical Infrastructures Security

    Get PDF
    Modern critical infrastructures comprise many interconnected cyber and physical assets, and as such are large-scale cyber-physical systems. Hence, the conventional approach of securing these infrastructures by addressing cyber security and physical security separately is no longer effective. Rather, more integrated approaches that address the security of cyber and physical assets at the same time are required. This book presents integrated (i.e. cyber and physical) security approaches and technologies for the critical infrastructures that underpin our societies. Specifically, it introduces advanced techniques for threat detection, risk assessment and security information sharing, based on leading-edge technologies like machine learning, security knowledge modelling, IoT security and distributed ledger infrastructures. Likewise, it presents how established security technologies like Security Information and Event Management (SIEM), pen-testing, vulnerability assessment and security data analytics can be used in the context of integrated Critical Infrastructure Protection. The novel methods and techniques of the book are exemplified in case studies involving critical infrastructures in four industrial sectors, namely finance, healthcare, energy and communications. The peculiarities of critical infrastructure protection in each of these sectors are discussed and addressed with sector-specific solutions. The advent of the fourth industrial revolution (Industry 4.0) is expected to increase the cyber-physical nature of critical infrastructures as well as their interconnection within sectorial and cross-sector value chains. Therefore, the demand for solutions that foster the interplay between cyber and physical security, and that enable Cyber-Physical Threat Intelligence, is likely to explode. In this book, we have shed light on the structure of such integrated security systems, as well as on the technologies that will underpin their operation. We hope that Security and Critical Infrastructure Protection stakeholders will find the book useful when planning their future security strategies.

    Distributed System for Attack Classification in VoIP Infrastructure Based on SIP Protocol

    Get PDF
    The dissertation thesis focuses on machine learning methods for SIP attack classification. VoIP attacks are gathered by various types of detection nodes through a set of honeypot applications. The data uncovered by the different nodes is collected by the centralized expert system Beekeeper. The system transforms the attacks into a database and classifies them with machine learning algorithms. The thesis covers various supervised and unsupervised algorithms; the best results and highest classification accuracy are achieved by an MLP neural network. The neural network model is described in detail and tested under varying conditions and settings. The final neural network implementation contains the latest improvements for enhancing MLP accuracy, techniques that existing implementations do not use. The thesis familiarizes the reader with the SIP protocol, VoIP attacks and the current state-of-the-art methods for attack detection and mitigation. I propose the concept of a centralized expert system with distributed detection nodes. The concept also provides techniques for attack aggregation, data cleaning, node state monitoring, an analysis module, a web interface and so on. The expert system Beekeeper is a modular system for attack classification and evaluation. The variety of available detection nodes enables easy deployment in the target network by the administrator, while Beekeeper autonomously interprets the malicious traffic from the nodes. The general nature and modularity of the expert system Beekeeper allow it to be used in other cases as well. The reliability and accuracy of the neural network model are verified and compared with other machine learning algorithms available today, and the benefits of the proposed model are highlighted.
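    As an illustration of the preprocessing such a pipeline needs, the sketch below turns a raw SIP request captured by a honeypot node into a numeric feature vector that a classifier such as an MLP could consume. The feature set (one-hot request method, message size, a known-scanner User-Agent flag) is an assumption chosen for illustration, not the thesis's actual transformation.

```python
# Common SIP request methods, one-hot encoded in the feature vector.
SIP_METHODS = ["REGISTER", "INVITE", "OPTIONS", "ACK", "BYE", "CANCEL"]

def sip_features(raw):
    """Map a raw SIP request to a fixed-length numeric feature vector."""
    lines = raw.strip().splitlines()
    method = lines[0].split()[0] if lines else ""
    headers = {}
    for line in lines[1:]:  # parse "Name: value" header lines
        if ":" in line:
            k, v = line.split(":", 1)
            headers[k.strip().lower()] = v.strip()
    return [
        *[1.0 if method == m else 0.0 for m in SIP_METHODS],  # one-hot method
        float(len(raw)),                                      # message size
        # Flag a User-Agent associated with SIP scanning tools.
        1.0 if "friendly-scanner" in headers.get("user-agent", "").lower()
        else 0.0,
    ]

scan = ("OPTIONS sip:100@10.0.0.1 SIP/2.0\n"
        "User-Agent: friendly-scanner\n"
        "From: <sip:100@10.0.0.1>\n")
vec = sip_features(scan)
```

    Vectors like these, batched over many captured requests and paired with attack labels, are what the MLP would be trained on.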

    Representation Challenges

    Get PDF