Building Programmable Wireless Networks: An Architectural Survey
In recent years, there have been many efforts to improve the ossified
Internet architecture in a bid to sustain continued growth and innovation. A
major reason for the perceived architectural ossification is the inability
to program the network as a system. This situation has resulted partly
from historical decisions in the original Internet design, which emphasized
decentralized network operations through co-located data and control planes on
each network device. The situation for wireless networks is no different,
resulting in considerable complexity and a plethora of largely incompatible
wireless technologies. The emergence of "programmable wireless networks", which
allow greater flexibility, ease of management and configurability, is a step in
the right direction to overcome the aforementioned shortcomings of wireless
networks. In this paper, we provide a broad overview of the architectures
proposed in the literature for building programmable wireless networks, focusing
primarily on three popular techniques: software-defined networks,
cognitive radio networks, and virtualized networks. This survey is a
self-contained tutorial on these techniques and their applications. We also
discuss the opportunities and challenges in building next-generation
programmable wireless networks and identify open research issues and future
research directions.
Comment: 19 pages
Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud
With the advent of cloud computing, organizations are nowadays able to react
rapidly to changing demands for computational resources. Not only can individual
applications be hosted on virtual cloud infrastructures, but so can complete
business processes. This allows the realization of so-called elastic processes,
i.e., processes which are carried out using elastic cloud resources. Despite
the manifold benefits of elastic processes, there is still a lack of solutions
supporting them.
In this paper, we identify the state of the art of elastic Business Process
Management with a focus on infrastructural challenges. We conceptualize an
architecture for an elastic Business Process Management System and discuss
existing work on scheduling, resource allocation, monitoring, decentralized
coordination, and state management for elastic processes. Furthermore, we
present two representative elastic Business Process Management Systems which
are intended to counter these challenges. Based on our findings, we identify
open issues and outline possible research directions for the realization of
elastic processes and elastic Business Process Management.
Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and
P. Hoenisch (2015). Elastic Business Process Management: State of the Art and
Open Challenges for BPM in the Cloud. Future Generation Computer Systems,
Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
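The core elasticity idea above, acquiring and releasing cloud resources as process demand changes, can be illustrated with a toy threshold-based controller. This is a hypothetical sketch: the function name, thresholds, and tasks-per-VM capacity model are assumptions for illustration, not the scheduling approach of the paper.

```python
# Minimal threshold-based autoscaler for elastic processes (illustrative
# sketch only; names, thresholds, and the capacity model are hypothetical).

def scale_decision(queued_tasks, active_vms, tasks_per_vm=10,
                   scale_up_at=0.8, scale_down_at=0.3):
    """Return how many VMs to add (positive) or remove (negative)."""
    capacity = active_vms * tasks_per_vm
    utilization = queued_tasks / capacity if capacity else 1.0
    if utilization > scale_up_at:
        # Acquire enough VMs to bring utilization back under the threshold.
        needed = -(-queued_tasks // int(tasks_per_vm * scale_up_at))  # ceil
        return max(needed - active_vms, 1)
    if utilization < scale_down_at and active_vms > 1:
        # Release idle capacity, but always keep at least one VM running.
        surplus = active_vms - max(1, -(-queued_tasks // tasks_per_vm))
        return -surplus if surplus > 0 else 0
    return 0

# 100 queued tasks on 5 VMs (utilization 2.0): acquire 8 more VMs.
print(scale_decision(100, 5))   # 8
# 10 queued tasks on 10 VMs (utilization 0.1): release 9 VMs.
print(scale_decision(10, 10))   # -9
```

A real elastic BPMS would replace the queue-length heuristic with the scheduling and resource-allocation techniques the survey discusses, but the acquire/release decision point is the same.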
The Internet-of-Things Meets Business Process Management: Mutual Benefits and Challenges
The Internet of Things (IoT) refers to a network of connected devices
collecting and exchanging data over the Internet. These things can be
artificial or natural, and interact as autonomous agents forming a complex
system. In turn, Business Process Management (BPM) was established to analyze,
discover, design, implement, execute, monitor and evolve collaborative business
processes within and across organizations. While the IoT and BPM have been
regarded as separate topics in research and practice, we strongly believe that
the management of IoT applications will greatly benefit from BPM concepts,
methods and technologies on the one hand; on the other hand, the IoT poses
challenges that will require enhancements and extensions of the current
state of the art in the BPM field. In this paper, we question to what extent
these two paradigms can be combined and discuss the emerging challenges.
Comparative Analysis of Malware Behavior in Hardware and Virtual Sandboxes
Malicious software, or malware, continues to be a pervasive threat to computer systems and networks worldwide. As malware constantly evolves and becomes more sophisticated, it is crucial to develop effective methods for its detection and analysis. Sandboxing technology has emerged as a valuable tool in the field of cybersecurity, allowing researchers to safely execute and observe malware behavior in controlled environments.
This thesis presents a comprehensive investigation into the behavior of malware samples when executed in both hardware and virtual sandboxes. The primary objective is to assess the effectiveness of hardware sandboxing in capturing and analyzing malware behaviors compared to traditional virtual sandboxes.
The research methodology involves the execution of various malware samples in both hardware and virtual sandboxes, followed by the analysis of key parameters, including memory changes, file system logs, and network traffic. By comparing the results obtained from the two sandboxing approaches, this study aims to provide insights into the advantages and limitations of each method.
Furthermore, the research delves into the potential evasion techniques employed by malware to bypass detection in either sandboxing environment. Identifying such evasion strategies is vital for enhancing the overall security posture and developing more robust defense mechanisms against evolving malware threats.
The findings of this research contribute to the field of cybersecurity by shedding light on the strengths and weaknesses of hardware and virtual sandboxes for malware analysis. Ultimately, this work serves as a valuable resource for security practitioners and researchers seeking to improve malware detection and analysis techniques in the ever-evolving landscape of cybersecurity threats.
Improving virtual memory performance in virtualized environments
Virtual memory is a major system performance bottleneck in virtualized environments. In addition to expensive address translations, frequent virtual machine context switches are common in virtualized environments, resulting in increased TLB miss rates, subsequent expensive page walks, and data cache contention as incoming page table entries evict useful data. Orthogonally, translation coherence, currently an expensive operation implemented in software, can consume up to 50% of the runtime of an application executing on the guest. To improve the performance of virtual memory in virtualized environments, this thesis proposes two solutions: (1) Context Switch Aware Large TLB (CSALT), an architecture which addresses the problem of increased TLB miss rates and their adverse impact on data caches. CSALT copes with the increased demand of context switches by storing a large number of TLB entries, and mitigates data cache contention by employing a novel TLB-aware cache partitioning scheme. On 8-core systems that switch between two virtual machine contexts executing multi-threaded workloads, CSALT achieves an average performance improvement of 85% over a baseline with conventional L1-L2 TLBs and 25% over a baseline with a large L3 TLB. (2) Translation Coherence using Addressable TLBs (TCAT), a hardware translation coherence scheme which eliminates almost all of the overheads associated with address translation coherence. TCAT overlays translation coherence atop cache coherence to accurately identify slave cores, then leverages the addressable Part-Of-Memory TLB (POM-TLB) to eliminate expensive Inter-Processor Interrupts (IPIs) and achieve precise invalidations on the slave core. On 8-core systems with one virtual machine context executing multi-threaded workloads, TCAT achieves an average performance improvement of 13% over the kvmtlb baseline.
Electrical and Computer Engineering
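The reason a context-switch-aware TLB helps can be illustrated with a software-only toy: if TLB entries are tagged with a context identifier, a VM context switch need not discard them. This is a deliberately simplified sketch of the general idea, not CSALT itself, which is a hardware structure with a TLB-aware cache-partitioning scheme not modeled here.

```python
from collections import OrderedDict

class TaggedTLB:
    """Toy software model of a context-tagged TLB (illustrative only;
    the hardware design in the thesis is far more elaborate)."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.entries = OrderedDict()   # (ctx, vpn) -> pfn, kept in LRU order
        self.hits = self.misses = 0

    def lookup(self, ctx, vpn, page_table):
        key = (ctx, vpn)
        if key in self.entries:
            self.hits += 1
            self.entries.move_to_end(key)       # refresh LRU position
            return self.entries[key]
        self.misses += 1
        pfn = page_table[ctx][vpn]              # stands in for a page walk
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)    # evict LRU entry
        self.entries[key] = pfn
        return pfn

# Because entries carry a context tag, switching from VM 0 to VM 1 and
# back does not discard VM 0's translations.
tlb = TaggedTLB(capacity=4)
page_table = {0: {0x1: 0xA, 0x2: 0xB}, 1: {0x1: 0xC}}
tlb.lookup(0, 0x1, page_table)   # miss, fills entry for VM 0
tlb.lookup(1, 0x1, page_table)   # miss, different context
tlb.lookup(0, 0x1, page_table)   # hit: the entry survived the switch
```

With an untagged TLB the third lookup would miss, since the switch to VM 1 would have required a flush; the larger the retained working set, the bigger the win, which is why CSALT pairs tagging with a large TLB.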
Progressive introduction of network softwarization in operational telecom networks: advances at architectural, service and transport levels
Technological paradigms such as Software Defined Networking, Network Function
Virtualization and Network Slicing together offer new ways of providing services.
This process is widely known as network softwarization, whereby traditional
operational networks adopt capabilities and mechanisms inherited from the
computing world, such as programmability, virtualization and multi-tenancy.
This adoption brings a number of challenges from both the technological and
operational perspectives. On the other hand, these paradigms provide
unprecedented flexibility, opening opportunities to develop new services and
new ways of exploiting and consuming telecom networks.
This thesis first overviews the implications of the progressive introduction of
network softwarization in operational networks, and then details advances at the
architectural, service and transport levels. This is done through specific
exemplary use cases and evolution scenarios, with the goal of illustrating both
new possibilities and existing gaps in the ongoing transition towards an
advanced future mode of operation. The work takes the perspective of a telecom
operator, paying special attention to how to integrate all these paradigms into
operational networks to assist their evolution towards new, more sophisticated
service demands.
Programa de Doctorado en Ingeniería Telemática, Universidad Carlos III de Madrid. Presidente: Eduardo Juan Jacob Taquet. Secretario: Francisco Valera Pintor. Vocal: Jorge López Vizcaín
Exploring Views on Data Centre Power Consumption and Server Virtualization
The primary purpose of this thesis is to explore views on Green IT/computing and how it relates to server virtualization, in particular for data centre IT environments; the secondary purpose is to explore other important aspects of server virtualization in the same context. The primary research question was whether data centre (DC) power consumption reduction is related to, or perceived as, a success factor for implementing and deploying server virtualization for consolidation purposes, and if not, which other decision areas affect server virtualization and power consumption reduction, respectively. We conclude that promoters and deployers hold differing opinions on how to factor in power consumption reduction from server equipment. However, it was a common view that power consumption reduction was usually achieved, but not necessarily considered, and thus not evaluated, as a success factor, nor was actual power consumption measured or monitored after server virtualization deployment. We found that other factors seemed more important, such as lower cost through higher physical machine utilization and simplified high availability and disaster recovery capabilities.
Advances in Dynamic Virtualized Cloud Management
Cloud computing continues to gain in popularity, with more and more applications being deployed into public and private clouds. Deploying an application in the cloud allows application owners to provision computing resources on-demand, and scale quickly to meet demand. An Infrastructure as a Service (IaaS) cloud provides low-level resources, in the form of virtual machines (VMs), to clients on a pay-per-use basis. The cloud provider (owner) can reduce costs by lowering power consumption. As a typical server can consume 50% or more of its peak power consumption when idle, this can be accomplished by consolidating client VMs onto as few hosts (servers) as possible. This, however, can lead to resource contention, and degraded VM performance. As such, VM placements must be dynamically adapted to meet changing workload demands. We refer to this process as dynamic management. Clients should also take advantage of the cloud environment by scaling their applications up and down (adding and removing VMs) to match current workload demands.
This thesis proposes a number of contributions to the field of dynamic cloud management. First, we propose a method of dynamically switching between management strategies at run-time in order to achieve more than one management goal. To increase the scalability of dynamic management algorithms, we introduce a distributed version of our management algorithm. We then consider deploying applications which consist of multiple VMs, and automatically scale their deployment to match their workload. We present an integrated management algorithm which handles both dynamic management and application scaling. When dealing with multi-VM applications, the placement of communicating VMs within the data centre topology should be taken into account; to address this, we propose a topology-aware version of our dynamic management algorithm. Finally, we describe a simulation tool, DCSim, which we have developed to help evaluate dynamic management algorithms and techniques.
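The consolidation step at the heart of such dynamic management is, at its simplest, a bin-packing problem. The sketch below uses a first-fit-decreasing heuristic over assumed CPU-share demands; it is illustrative only, since the algorithms described above must also weigh memory, resource contention, topology, and migration cost.

```python
def consolidate(vm_demands, host_capacity=10):
    """Pack VM demands (CPU shares; each host offers `host_capacity`) onto
    as few hosts as possible using first-fit decreasing. Illustrative
    sketch only, not the thesis's management algorithm."""
    hosts = []  # each host is a list of VM ids
    free = []   # remaining capacity per host
    for vm_id, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, slack in enumerate(free):
            if demand <= slack:          # first host with room wins
                hosts[i].append(vm_id)
                free[i] -= demand
                break
        else:                            # no host fits: power on a new one
            hosts.append([vm_id])
            free.append(host_capacity - demand)
    return hosts

# Five VMs that would naively need five hosts fit on two.
placement = consolidate({"a": 6, "b": 5, "c": 3, "d": 4, "e": 2})
print(placement)   # [['a', 'd'], ['b', 'c', 'e']]
```

Packing tightly like this is exactly what creates the resource-contention risk the abstract mentions, which is why placements must then be adapted dynamically rather than computed once.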
Mitigating Interference During Virtual Machine Live Migration through Storage Offloading
Today's cloud landscape has evolved computing infrastructure into a dynamic, high-utilization, service-oriented paradigm. This shift has enabled the commoditization of large-scale storage and distributed computation, allowing engineers to tackle previously untenable problems without large upfront investment. A key enabler of flexibility in the cloud is the ability to transfer running virtual machines across subnets or even datacenters using live migration. However, live migration can be a costly process, one that has the potential to interfere with other applications not involved in the migration. This work investigates storage interference through experimentation with real-world systems and well-established benchmarks. To address migration interference in general, a buffering technique is presented that offloads the migration's reads, eliminating interference in the majority of scenarios.
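The offloading idea, serving the migration's reads from a prefetched in-memory copy so they do not contend with co-located applications' storage I/O, might be sketched as follows. All names here are hypothetical; this is not the thesis's implementation.

```python
class OffloadBuffer:
    """Toy read-offload buffer: copy the migrating VM's disk blocks into
    memory in one sequential pass, then serve the migration's reads from
    that copy so shared storage is touched only once. Illustrative only."""

    def __init__(self, backing_store):
        self.backing_store = backing_store   # block_id -> bytes
        self.buffer = {}
        self.storage_reads = 0               # I/O issued to shared storage

    def prefetch(self, block_ids):
        for b in block_ids:
            self.buffer[b] = self.backing_store[b]
            self.storage_reads += 1          # single pass over shared disk

    def migration_read(self, block_id):
        # Served from memory: no further interference with other tenants.
        return self.buffer[block_id]

store = {i: bytes([i]) for i in range(4)}
buf = OffloadBuffer(store)
buf.prefetch(range(4))                       # 4 storage reads, once
data = [buf.migration_read(i) for i in range(4)]  # 0 further storage reads
```

Even if the migration rereads dirtied blocks, those rereads hit the buffer rather than the shared storage path, which is the interference the work sets out to eliminate.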