Exploring distributed computing tools through data mining tasks
Harnessing the idle CPU cycles, storage space, and other resources of networked computers so that they can work together is a goal common to all major grid computing research projects. Most university computer labs are nowadays equipped with powerful desktop PCs, and most of the time these machines sit idle or leave much of their computing power unused. However, complex problems and the analysis of very large amounts of data require substantial computational resources. For such problems, one may run the analysis algorithms on very powerful and expensive computers, which limits the number of users who can afford such data analysis tasks. Instead of relying on single expensive machines, distributed computing systems offer the possibility of using a set of much less expensive machines to do the same task. The BOINC and Condor projects have been used successfully, at low cost, in real scientific research work around the world. The main goal of this work is to explore both distributed computing tools, Condor and BOINC, and to use them to harness idle PC resources for academic researchers to exploit in their own work. In this thesis, data mining tasks are carried out by running several machine learning algorithms on the distributed computing environment.
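As a rough illustration of the kind of work unit such a Condor/BOINC pool might execute, the sketch below trains one classifier on one cross-validation fold, so the whole evaluation can be farmed out as independent jobs; the dataset, the choice of algorithm, and the command-line layout are assumptions made for illustration and are not taken from the thesis.

```python
# worker.py -- hypothetical unit of work for a Condor/BOINC-style pool:
# train one classifier on one cross-validation fold and report its accuracy.
import sys

import joblib
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold


def run_fold(fold_index: int, n_folds: int = 10) -> float:
    X, y = load_digits(return_X_y=True)          # stand-in for the real dataset
    splits = list(KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X))
    train_idx, test_idx = splits[fold_index]
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    accuracy = model.score(X[test_idx], y[test_idx])
    joblib.dump(model, f"model_fold{fold_index}.joblib")   # result collected by the submit node
    return accuracy


if __name__ == "__main__":
    fold = int(sys.argv[1])                      # each grid job receives a different fold id
    print(f"fold {fold}: accuracy = {run_fold(fold):.4f}")
```

A Condor submit description or a BOINC work-unit template would then queue one such process per fold.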
Contributions to Edge Computing
Efforts related to the Internet of Things (IoT), Cyber-Physical Systems (CPS), Machine-to-Machine (M2M) technologies, the Industrial Internet, and Smart Cities aim to improve society through the coordination of distributed devices and the analysis of the resulting data. By the year 2020 there will be an estimated 50 billion network-connected devices globally and 43 trillion gigabytes of electronic data. Current practices of moving data directly from end-devices to remote and potentially distant cloud computing services will not be sufficient to manage future device and data growth.
Edge Computing is the migration of computational functionality to sources of data generation. The importance of edge computing increases with the size and complexity of devices and resulting data. In addition, the coordination of global edge-to-edge communications, shared resources, high-level application scheduling, monitoring, measurement, and Quality of Service (QoS) enforcement will be critical to address the rapid growth of connected devices and associated data.
We present a new distributed agent-based framework designed to address the challenges of edge computing. This actor-model framework implementation is designed to manage large numbers of geographically distributed services, composed of heterogeneous resources and communication protocols, in support of low-latency, real-time streaming applications. As part of this framework, an application description language was developed and implemented. Using this application description language, a number of high-level management modules were implemented, including solutions for resource and workload comparison, performance observation, scheduling, and provisioning. A number of hypothetical and real-world use cases are described to support the framework implementation
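The thesis's application description language is not reproduced here; as a loose illustration of the idea, the sketch below pairs an invented JSON-style descriptor with a toy mailbox-per-stage dispatcher in the spirit of the actor model. The stage names, locations, and descriptor schema are assumptions for illustration only.

```python
# Hypothetical "application descriptor": a declarative pipeline of stages that a
# framework could place on distributed agents, plus a toy actor-style dispatcher.
import json
import queue
import threading

descriptor = json.loads("""
{
  "pipeline": "camera-analytics",
  "stages": [
    {"name": "ingest",    "location": "edge",  "next": "filter"},
    {"name": "filter",    "location": "edge",  "next": "aggregate"},
    {"name": "aggregate", "location": "cloud", "next": null}
  ]
}
""")

def make_stage(name):
    # Toy stand-ins for real stream operators.
    return {
        "ingest":    lambda msg: {"frame": msg},
        "filter":    lambda msg: msg if msg["frame"] % 2 == 0 else None,
        "aggregate": lambda msg: print("aggregated:", msg),
    }[name]

def run_pipeline(desc, source):
    """Each stage is a thread with a mailbox; messages flow stage to stage."""
    mailboxes = {s["name"]: queue.Queue() for s in desc["stages"]}
    nxt = {s["name"]: s["next"] for s in desc["stages"]}

    def worker(name):
        fn = make_stage(name)
        while True:
            msg = mailboxes[name].get()
            if msg is None:                      # poison pill shuts the stage down
                if nxt[name]:
                    mailboxes[nxt[name]].put(None)
                return
            out = fn(msg)
            if out is not None and nxt[name]:
                mailboxes[nxt[name]].put(out)

    threads = [threading.Thread(target=worker, args=(s["name"],)) for s in desc["stages"]]
    for t in threads:
        t.start()
    first = desc["stages"][0]["name"]
    for item in source:
        mailboxes[first].put(item)
    mailboxes[first].put(None)
    for t in threads:
        t.join()

run_pipeline(descriptor, range(6))
```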
Treatment-Based Classification in Residential Wireless Access Points
IEEE 802.11 wireless access points (APs) act as the central communication hub inside homes, connecting all networked devices to the Internet. Home users run a variety of network applications with diverse Quality-of-Service requirements (QoS) through their APs. However, wireless APs are often the bottleneck in residential networks as broadband connection speeds keep increasing. Because of the lack of QoS support and complicated configuration procedures in most off-the-shelf APs, users can experience QoS degradation with their wireless networks, especially when multiple applications are running concurrently.
This dissertation presents CATNAP, Classification And Treatment iN an AP, to provide better QoS support for various applications over residential wireless networks, especially timely delivery for real-time applications and high throughput for download-based applications. CATNAP consists of three major components: supporting functions, classifiers, and treatment modules. The supporting functions collect the necessary flow-level statistics and feed them into the CATNAP classifiers. The classifiers then categorize flows along three dimensions: response-based/non-response-based, interactive/non-interactive, and greedy/non-greedy. Each CATNAP traffic category maps directly to one of the following treatments: push/delay, limited advertised window size/drop, or reserve bandwidth. Based on the classification results, the CATNAP treatment module automatically applies the corresponding treatment policy to provide better QoS support.
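As a loose illustration of this classify-then-treat pipeline (not CATNAP's actual logic; the thresholds, field names, and simplified mapping below are invented), a flow could be categorized from its statistics and assigned a treatment roughly as follows.

```python
# Toy sketch of a three-dimensional flow classification and its mapping to treatments.
from dataclasses import dataclass

@dataclass
class FlowStats:
    bytes_per_sec: float      # sending rate observed at the AP
    pkts_per_sec: float
    avg_pkt_size: float
    responds_to_loss: bool    # did the sender slow down after packet drops?

def classify(f: FlowStats) -> dict:
    return {
        "response_based": f.responds_to_loss,
        "interactive": f.avg_pkt_size < 300 and f.pkts_per_sec < 100,
        "greedy": f.bytes_per_sec > 1_000_000,
    }

def treatment(c: dict) -> str:
    if c["interactive"]:
        return "push"                       # low-latency queue for real-time flows
    if c["greedy"] and c["response_based"]:
        return "limit_advertised_window"    # throttle responsive bulk transfers
    if c["greedy"]:
        return "drop"                       # unresponsive greedy traffic gets dropped
    return "reserve_bandwidth"

voip = FlowStats(bytes_per_sec=64_000, pkts_per_sec=50, avg_pkt_size=160, responds_to_loss=False)
print(treatment(classify(voip)))            # -> "push"
```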
CATNAP is implemented in the NS network simulator and evaluated against DropTail and Strict Priority Queue (SPQ) under various network and traffic conditions. In most simulation cases, CATNAP provides better QoS support than DropTail: it lowers queuing delay for multimedia applications such as VoIP, games and video, treats FTP flows with various round-trip times fairly, and remains functional even when misbehaving UDP traffic is present. Unlike current QoS methods, CATNAP is a plug-and-play solution, automatically classifying and treating flows without any user configuration or any modification to end hosts or applications
New approaches to data access in large-scale distributed systems
A great number of scientific projects, such as those carried out in physics, astrophysics, chemistry, pharmacology, etc., need supercomputing resources. Most of them also generate a great amount of data; for example, an experiment lasting just a few minutes in a particle accelerator generates several terabytes of data.
In recent years, high-performance computing environments have evolved towards large-scale distributed systems such as Grids, Clouds, and Volunteer Computing environments. Managing a great volume of data in these environments poses a significant additional problem, since the data have to travel from one site to another over the Internet.
In this work a novel generic I/O architecture for large-scale distributed systems used for high-performance and high-throughput computing is proposed. This solution is based on applying parallel I/O techniques to remote data access. Novel replication and data search schemes are also proposed; combined with the above techniques, these schemes improve the performance of the applications that execute in such environments. In addition, simulation tools are developed that make it possible to test these and other ideas without resorting to real platforms, given their technical and logistic limitations. An initial prototype of this solution has been evaluated, and the results show a noteworthy improvement in data access compared to existing solutions.
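As a rough sketch of applying parallel I/O techniques to remote data access, the snippet below stripes a single read across several replicas and fetches the stripes concurrently; the replica URLs are placeholders and the striping policy is invented for illustration, not part of the proposed architecture.

```python
# Read a replicated remote file in stripes, one stripe per replica, in parallel.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

REPLICAS = [
    "https://replica1.example.org/dataset.bin",   # placeholder replica locations
    "https://replica2.example.org/dataset.bin",
    "https://replica3.example.org/dataset.bin",
]

def read_stripe(url: str, offset: int, length: int) -> bytes:
    # HTTP range request for one stripe of the file.
    req = urllib.request.Request(url, headers={"Range": f"bytes={offset}-{offset + length - 1}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def parallel_read(size: int) -> bytes:
    stripe = size // len(REPLICAS)
    jobs = [(url, i * stripe, stripe if i < len(REPLICAS) - 1 else size - i * stripe)
            for i, url in enumerate(REPLICAS)]
    with ThreadPoolExecutor(max_workers=len(REPLICAS)) as pool:
        parts = pool.map(lambda j: read_stripe(*j), jobs)
    return b"".join(parts)
```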
A policy-based architecture for virtual network embedding (PhD thesis)
Network virtualization is a technology that enables multiple virtual instances to coexist on a common physical network infrastructure. This paradigm fostered new business models, allowing infrastructure providers to lease or share their physical resources. Each virtual network is isolated and can be customized to support a new class of customers and applications. To this end, infrastructure providers need to embed virtual networks on their infrastructure. Virtual network embedding is the (NP-hard) problem of mapping constrained virtual networks onto a physical network. Heuristics to solve the embedding problem have exploited several policies under different settings. For example, centralized solutions have been devised for small enterprise physical networks, while distributed solutions have been proposed over larger federated wide-area networks. In this thesis we present a policy-based architecture for the virtual network embedding problem. By policy, we mean a variant aspect of any of the three (invariant) embedding mechanisms: physical resource discovery, virtual network mapping, and allocation on the physical infrastructure. Our architecture adapts to different scenarios by instantiating appropriate policies, and has bounds on embedding efficiency and on embedding convergence time, over a single provider or across multiple federated providers. The performance of representative novel and existing policy configurations is compared via extensive simulations and over a prototype implementation. We also present an object model as a foundation for a protocol specification, and we release a testbed to enable users to test their own embedding policies and to run applications within their virtual networks. The testbed uses a Linux system architecture to reserve virtual node and link capacities
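As a toy illustration of what an embedding policy can look like (not one of the policies evaluated in the thesis), the sketch below greedily maps virtual nodes, in decreasing order of CPU demand, onto the physical nodes with the most residual CPU; link mapping is omitted.

```python
# One possible greedy node-mapping policy for virtual network embedding.
def greedy_node_mapping(virtual_cpu: dict, physical_cpu: dict) -> dict:
    residual = dict(physical_cpu)
    mapping = {}
    for v, demand in sorted(virtual_cpu.items(), key=lambda kv: -kv[1]):
        host = max(residual, key=residual.get)      # physical node with most spare CPU
        if residual[host] < demand:
            raise RuntimeError(f"embedding failed: no capacity for virtual node {v}")
        mapping[v] = host
        residual[host] -= demand
    return mapping

virtual = {"a": 4, "b": 2, "c": 6}            # CPU demand of each virtual node
physical = {"x": 10, "y": 5, "z": 8}          # residual CPU of each physical node
print(greedy_node_mapping(virtual, physical)) # -> {'c': 'x', 'a': 'z', 'b': 'y'}
```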
Multicloud Resource Allocation: Cooperation, Optimization and Sharing
Nowadays our daily life is powered not only by water, electricity, gas and telephony but by the "cloud" as well. Big cloud vendors such as Amazon, Microsoft and Google have built large-scale centralized data centers to achieve economies of scale, on-demand resource provisioning, high resource availability and elasticity. However, these massive data centers also bring about other problems, e.g., bandwidth bottlenecks, privacy and security concerns, huge energy consumption, and legal and physical vulnerabilities. One possible solution to these problems is to employ multicloud architectures. This thesis provides research contributions to multicloud resource allocation from the three perspectives of cooperation, optimization and data sharing. We address the following problems in the multicloud: how resource providers cooperate in a multicloud, how to reduce information leakage in a multicloud storage system, and how to share big data in a cost-effective way. More specifically, we make the following contributions:

Cooperation in the decentralized cloud. We propose a decentralized cloud model in which a group of small data centers (SDCs) can cooperate with each other to improve performance. Moreover, we design a general strategy function for SDCs to evaluate the performance of cooperation along different dimensions of resource sharing. Through extensive simulations using a realistic data center model, we show that strategies based on reciprocity are more effective than other strategies, e.g., those using prediction based on historical data. Our results show that the reciprocity-based strategy can thrive in a heterogeneous environment with competing strategies.

Multicloud optimization on information leakage. In this work, we first study an important information leakage problem caused by unplanned data distribution in multicloud storage services. Then, we present StoreSim, an information-leakage-aware storage system for the multicloud. StoreSim aims to store syntactically similar data on the same cloud, thereby minimizing the user's information leakage across multiple clouds. We design an approximate algorithm to efficiently generate similarity-preserving signatures for data chunks based on MinHash and Bloom filters, and also design a function to compute the information leakage based on these signatures. Next, we present an effective storage-plan generation algorithm, based on clustering, for distributing data chunks across multiple clouds with minimal information leakage. Finally, we evaluate our scheme using two real datasets, from Wikipedia and GitHub. We show that our scheme can reduce information leakage by up to 60% compared to unplanned placement. Furthermore, our analysis of system attackability demonstrates that our scheme makes attacks on the information considerably more complex.

Smart data sharing. Moving large amounts of distributed data into the cloud, or from one cloud to another, can incur high costs in both time and bandwidth. Data sharing in the multicloud can be optimized from two different angles: inter-cloud scheduling and intra-cloud optimization. We first present CoShare, a P2P-inspired, decentralized, cost-effective sharing system for data replication that optimizes network transfer among small data centers. Then we propose a data summarization method that reduces the total size of the dataset, thereby reducing network transfer
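The snippet below sketches only the MinHash half of the similarity-preserving signatures mentioned above; it is not StoreSim's actual scheme (which also uses Bloom filters) or its leakage function, and the shingle size and number of hash functions are arbitrary choices for illustration.

```python
# MinHash signatures: chunks that share many byte shingles get signatures that
# agree in many positions, so similar chunks can be grouped onto the same cloud.
import hashlib

def shingles(data: bytes, k: int = 8) -> set:
    return {data[i:i + k] for i in range(len(data) - k + 1)}

def minhash_signature(data: bytes, n_hashes: int = 64) -> list:
    grams = shingles(data)
    sig = []
    for seed in range(n_hashes):
        salt = seed.to_bytes(4, "big")                   # one salted hash per signature slot
        sig.append(min(hashlib.sha1(salt + g).digest() for g in grams))
    return sig

def estimated_similarity(sig_a: list, sig_b: list) -> float:
    # Fraction of agreeing positions approximates the Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

chunk1 = b"the quick brown fox jumps over the lazy dog" * 4
chunk2 = b"the quick brown fox jumps over the lazy cat" * 4
print(estimated_similarity(minhash_signature(chunk1), minhash_signature(chunk2)))
```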
Analyzing Granger causality in climate data with time series classification methods
Attribution studies in climate science aim to scientifically ascertain the influence of climatic variations on natural or anthropogenic factors. Many of these studies adopt the concept of Granger causality to infer statistical cause-effect relationships, while relying on traditional autoregressive models. In this article, we investigate the potential of state-of-the-art time series classification techniques to enhance causal inference in climate science. We conduct a comparative experimental study of different types of algorithms on a large test suite that comprises a unique collection of datasets from the area of climate-vegetation dynamics. The results indicate that specialized time series classification methods are able to improve existing inference procedures. Substantial differences are observed among the methods that were tested
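As a rough illustration of the predictive reading of Granger causality that such studies build on, the sketch below compares out-of-sample prediction error with and without lagged values of a candidate cause, on synthetic data; neither the article's datasets nor its time series classifiers are reproduced here.

```python
# x "Granger-causes" y if adding lagged x improves out-of-sample prediction of y
# over a purely autoregressive baseline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, lag = 2000, 3
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                       # y depends on its own past and on lagged x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def lagged(series, lags):
    # Columns of the last `lags` values preceding each target sample.
    return np.column_stack([series[lags - l - 1: len(series) - l - 1] for l in range(lags)])

Y = y[lag:]
baseline = lagged(y, lag)                   # only y's own history
augmented = np.hstack([baseline, lagged(x, lag)])
split = int(0.7 * len(Y))

def oos_error(features):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(features[:split], Y[:split])
    return mean_squared_error(Y[split:], model.predict(features[split:]))

print("baseline MSE :", oos_error(baseline))
print("with x MSE   :", oos_error(augmented))   # clearly lower => evidence that x Granger-causes y
```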
Security in Distributed, Grid, Mobile, and Pervasive Computing
This book addresses the increasing demand to guarantee privacy, integrity, and availability of resources in networks and distributed systems. It first reviews security issues and challenges in content distribution networks, describes key agreement protocols based on the Diffie-Hellman key exchange and key management protocols for complex distributed systems like the Internet, and discusses securing design patterns for distributed systems. The next section focuses on security in mobile computing and wireless networks. After a section on grid computing security, the book presents an overview of security solutions for pervasive healthcare systems and surveys wireless sensor network security
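For readers unfamiliar with it, here is a toy worked example of the Diffie-Hellman key agreement mentioned above; the prime is deliberately tiny and insecure, and real deployments use standardized large groups or elliptic curves.

```python
# Toy Diffie-Hellman key agreement: both parties derive the same shared secret
# from exchanged public values, without ever sending their private exponents.
import secrets

p = 0xFFFFFFFB          # small prime modulus, for demonstration only
g = 5                   # public generator

a = secrets.randbelow(p - 2) + 1      # Alice's private exponent
b = secrets.randbelow(p - 2) + 1      # Bob's private exponent

A = pow(g, a, p)        # Alice sends A to Bob
B = pow(g, b, p)        # Bob sends B to Alice

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob     # both ends compute the same shared secret
print(hex(shared_alice))
```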
Applying the finite-difference time-domain to the modelling of large-scale radio channels
Finite-difference models have been used for nearly 40 years to solve electromagnetic problems of a heterogeneous nature. These techniques are well known for being computationally expensive, as well as for being subject to various numerical artifacts. However, little is yet understood about the errors that arise when simulating wideband sources with the finite-difference time-domain (FDTD) method. Within this context, the focus of this thesis is on two different problems. On the one hand, the speed and accuracy of current FDTD implementations are analysed and improved. On the other hand, the distortion of numerical pulses is characterised and mitigation techniques are proposed.
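As a minimal illustration of the method under discussion, the sketch below implements the 1-D FDTD leapfrog update in normalized units; the grid, source, and Courant number are arbitrary illustrative choices and bear no relation to the thesis's radio-channel models.

```python
# 1-D FDTD: electric and magnetic fields staggered in space and time (Yee scheme),
# updated alternately; H is expressed in normalized units (scaled by the free-space
# impedance) so both updates share the same coefficient.
import numpy as np

c0 = 299_792_458.0                  # speed of light (m/s)
nx, nt = 400, 800                   # number of grid cells and time steps
dx = 0.01                           # spatial step (m)
dt = dx / (2 * c0)                  # time step; Courant number S = c0*dt/dx = 0.5

ez = np.zeros(nx)                   # electric field
hy = np.zeros(nx - 1)               # magnetic field, staggered half a cell

for n in range(nt):
    hy += (c0 * dt / dx) * np.diff(ez)              # update H from the curl of E
    ez[1:-1] += (c0 * dt / dx) * np.diff(hy)        # update E from the curl of H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```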
In addition, recent developments in general-purpose computing on graphics processing units (GPGPU) have unveiled new methods for the efficient implementation of FDTD algorithms. Therefore, this thesis proposes specific GPU-based guidelines for implementing the standard FDTD method. Metaheuristics are then used to calibrate an FDTD-based narrowband simulator. Regarding the simulation of wideband sources, this thesis first uses Lagrange multipliers to characterise the extrema of the numerical group velocity. The spread of numerical Gaussian pulses is then characterised analytically in terms of the FDTD grid parameters.
The usefulness of the proposed solutions to the previously described problems is illustrated in this thesis using coverage and wideband predictions in large-scale scenarios. In particular, the indoor-to-outdoor radio channel in residential areas is studied. Furthermore, coverage and wideband measurements have also been used to validate the predictions.
As a result of all the above, this thesis first introduces an efficient and accurate FDTD simulator. It then analytically characterises the propagation of numerical pulses. Finally, the narrowband and wideband indoor-to-outdoor channels are modelled using the developed techniques