6 research outputs found

    Natural computing for vehicular networks

    This thesis addresses the intelligent design of solutions for deploying vehicular ad hoc networks (VANETs): wireless communication networks formed mainly by vehicles and road-infrastructure elements. VANETs offer the opportunity to develop revolutionary applications in road safety and traffic efficiency. Because the domain is so new, a series of open questions, such as the design of the required base-station infrastructure and the routing and broadcasting of data packets, have not yet been solved with classical strategies. It is therefore necessary to create and study new techniques that solve these problems efficiently, effectively, robustly and flexibly. This doctoral thesis proposes the use of nature-inspired computing, or Natural Computing (NC), to tackle some of the most important problems in the VANET domain, since these algorithms are versatile, flexible and efficient at solving complex problems. Beyond solving the VANET problems we focus on, advances have also been made in making these techniques address such problems more efficiently and effectively. Finally, real proof-of-concept tests were carried out using real vehicles and communication devices in the city of Málaga (Spain). The thesis is structured in four main phases. In the first phase, the foundations of the thesis were studied: an exhaustive survey of the technologies used by vehicular networks was conducted to identify their main weaknesses, together with an in-depth analysis of NC as an efficient tool for solving complex optimisation problems and of how to apply it to VANET problems.
In the second phase, four optimisation problems in vehicular networks were addressed: file transfer, packet routing, message broadcasting, and the design of the base-station infrastructure needed to deploy vehicular networks. To solve these problems, different NC algorithms were proposed, classified into evolutionary algorithms (EAs), swarm intelligence (SI) methods and simulated annealing (SA). The resulting protocols significantly improved VANET communications. In the third and final phase, experiments were performed with real vehicles driving on the roads of Málaga and communicating with each other; their main goal was to validate the improvements offered by the protocols optimised with NC. The results of the second and third phases confirm the working hypothesis: NC is an efficient tool for the intelligent design of vehicular networks.
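To make the simulated annealing (SA) ingredient concrete, the sketch below applies SA to a toy version of the infrastructure problem: choosing roadside-unit sites from a set of candidates. The instance, scoring function and cooling schedule are invented for illustration and are not the encoding used in the thesis.

```python
import math
import random

# Hypothetical toy instance: candidate intersections with coverage scores.
# All names and numbers are illustrative, not taken from the thesis.
COVERAGE = [4, 8, 1, 7, 3, 9, 2, 6]  # vehicles covered per candidate site
BUDGET = 3                            # number of roadside units to place

def score(placement):
    """Total coverage of a placement (a set of site indices)."""
    return sum(COVERAGE[i] for i in placement)

def simulated_annealing(steps=20_000, t0=5.0, seed=1):
    rng = random.Random(seed)
    current = set(rng.sample(range(len(COVERAGE)), BUDGET))
    best = set(current)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        # Neighbour move: swap one chosen site for an unchosen one.
        out = rng.choice(sorted(current))
        inn = rng.choice(sorted(set(range(len(COVERAGE))) - current))
        candidate = (current - {out}) | {inn}
        delta = score(candidate) - score(current)
        # Accept improvements always; worsenings with Boltzmann probability.
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = candidate
        if score(current) > score(best):
            best = set(current)
    return best

best = simulated_annealing()
print(sorted(best), score(best))  # the three sites with the best coverage
```

The same accept/reject skeleton carries over to the routing and broadcasting problems; only the solution encoding and the scoring function change.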

    Runtime reconfiguration of physical and virtual pervasive systems

    Today, almost everyone comes into contact with smart environments in everyday life. Environments such as smart homes, smart offices, or pervasive classrooms contain a plethora of heterogeneous connected devices and provide diverse services to users. The main goal of such smart environments is to support users during their daily chores and to simplify interaction with the technology. Pervasive middlewares enable seamless communication between all available devices by integrating them directly into the environment. Only a few years ago, a user entering a meeting room had to set up, for example, the projector and connect a computer manually, or teachers had to distribute files via mail. With the rise of smart environments these tasks can be automated by the system, e.g., upon entering a room, the smartphone automatically connects to a display and the presentation starts. Besides all the advantages of smart environments, they also bring up two major problems. First, while the built-in automatic adaptation of many smart environments is often able to adjust the system in a helpful way, there are situations where the user has something different in mind. In such cases, it can be challenging for inexperienced users to configure the system to their needs. Second, while users are getting increasingly mobile, they still want to use the systems they are accustomed to. As an example, an employee on a business trip wants to join a meeting taking place in a smart meeting room. Thus, smart environments need to be accessible remotely and should provide all users with the same functionalities and user experience. For these reasons, this thesis presents the PerFlow system consisting of three parts. First, the PerFlow Middleware, which allows the reconfiguration of a pervasive system during runtime.
Second, with the PerFlow Tool, inexperienced end users are able to create new configurations without prior knowledge of programming distributed systems. To this end, a specialized visual scripting language is designed, which allows the creation of rules for the communication between different devices. Third, to offer remote participants the same user experience, the PerFlow Virtual Extension allows the implementation of pervasive applications for virtual environments. After introducing the design of the PerFlow system, the implementation details and an evaluation of the developed prototype are outlined. The evaluation discusses the usability of the system in a real-world scenario and the performance implications of the middleware, evaluated in our own pervasive learning environment, the PerLE testbed. Further, a two-stage user study is introduced to analyze the ease of use and the usefulness of the visual scripting tool.
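The kind of rule such a visual scripting language produces can be pictured as an event-to-action mapping. The sketch below is a minimal, assumed data model; the `Rule` shape, event names and device names are hypothetical, not PerFlow's actual language.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    trigger: str                              # event name to react to
    condition: Callable[[Dict], bool]         # predicate over the event payload
    action: Callable[[Dict], str]             # effect to run; returns a log line

# One rule of the "meeting room" flavour described in the abstract.
rules: List[Rule] = [
    Rule(
        trigger="user_entered_room",
        condition=lambda e: e.get("has_presentation", False),
        action=lambda e: f"connect {e['device']} to projector",
    ),
]

def dispatch(event_name: str, payload: Dict) -> List[str]:
    """Run every rule matching the event and collect the action log."""
    return [r.action(payload) for r in rules
            if r.trigger == event_name and r.condition(payload)]

log = dispatch("user_entered_room",
               {"device": "alice-phone", "has_presentation": True})
print(log)
```

A visual tool would let end users assemble the trigger, condition and action graphically instead of writing the lambdas by hand.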

    Exploring multiple levels of performance modeling for heterogeneous systems

    The current trend in High-Performance Computing (HPC) is to extract concurrency from clusters that include heterogeneous resources such as General Purpose Graphical Processing Units (GPGPUs) and Field Programmable Gate Arrays (FPGAs). Although these heterogeneous systems can provide substantial performance for massively parallel applications, many of the available computing resources are often under-utilized due to inefficient application mapping, load balancing, and tuning. While several performance prediction models exist to efficiently tune applications, they often require significant computing architecture knowledge for reliable prediction. In addition, they do not address multiple levels of design-space abstraction, and it is often difficult to choose a reliable prediction model for a given design. In this research, we develop a multi-level suite of performance prediction models for heterogeneous systems that primarily targets Synchronous Iterative Algorithms (SIAs). The modeling suite aims to produce accurate and straightforward application runtime prediction prior to the actual large-scale implementation. This suite addresses two levels of system abstraction: 1) low-level, where partial knowledge of the application implementation is present along with the system specifications, and 2) high-level, where the implementation details are minimal and only high-level computing system specifications are given. The performance prediction modeling suite is developed using our proposed Synchronous Iterative GPGPU Execution (SIGE) model for GPGPU clusters, motivated by the RC Amenability Test for Scalable Systems (RATSS) model for FPGA clusters. The low-level abstraction for GPGPU clusters consists of a regression-based performance prediction framework that statistically abstracts system architecture characteristics, enabling performance prediction without detailed architecture knowledge.
In this framework, the overall execution time of an application is predicted using regression models developed for host-device computations and network-level communications performed in the algorithm. We have used a family of Spiking Neural Network (SNN) models and an Anisotropic Diffusion Filter (ADF) algorithm as SIA case studies for verification of the regression-based framework and achieved over 90% prediction accuracy compared to the actual implementations for several GPGPU cluster configurations tested. The results establish the adequacy of the low-level abstraction model for advanced, fine-grained performance prediction and design space exploration (DSE). The high-level abstraction consists of the following two primary modeling approaches: qualitative modeling that uses existing subjective-analytical models for computation and communication; and quantitative modeling that predicts computation and communication performance by measuring hardware events associated with objective-analytical models using micro-benchmarks. The performance prediction provided by the high-level abstraction approaches, albeit coarse-grained, delivers useful insight into application performance on the chosen heterogeneous system. A blend of the two high-level modeling approaches, labeled as hybrid modeling, is explored for insightful preliminary performance prediction. The performance prediction models in the multi-level suite are verified and compared for their accuracy and ease-of-use, allowing developers to choose a model that best satisfies their design space abstraction. We also construct a roadmap that guides users from optimal Application-to-Accelerator (A2A) mapping to fine-grained performance prediction, thereby providing a hierarchical approach to optimal application porting on the target heterogeneous system.
The end goal of this dissertation research is to offer the HPC community a thorough, non-architecture-specific performance prediction framework in the form of a hierarchical modeling suite that enables them to optimally utilize the heterogeneous resources.

    Efficient multilevel scheduling in grids and clouds with dynamic provisioning

    Thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 12-01-2016. The consolidation of large Distributed Computing infrastructures has resulted in a High-Throughput Computing platform that is ready for high loads, whose best exponents are the current grid federations. On the other hand, Cloud Computing promises to be more flexible, usable, available and simple than Grid Computing, covering also many more computational needs than the ones required to carry out distributed calculations. In any case, because of the dynamism and heterogeneity present in grids and clouds, calculating the best match between computational tasks and resources in an effectively characterised infrastructure is, by definition, an NP-complete problem, and only sub-optimal solutions (schedules) can be found for these environments. Nevertheless, the characterisation of the resources of both kinds of infrastructures is far from being achieved. The available information systems do not provide accurate data about the status of the resources, which prevents the advanced scheduling required by the different needs of distributed applications. This issue was not solved during the last decade for grids, and the recently established cloud infrastructures have the same problem. In this framework, brokers can only improve the throughput of very long calculations, but do not provide estimations of their duration. Complex scheduling was traditionally tackled by other tools such as workflow managers, self-schedulers and the production management systems of certain research communities. Nevertheless, the low performance achieved by these early-binding methods is noticeable.
Moreover, the diversity of cloud providers and, mainly, their lack of standardised programming interfaces and brokering tools to distribute the workload hinder the massive portability of legacy applications to cloud environments...
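Because optimal task-resource matching is NP-complete, brokers in practice fall back on greedy heuristics that produce sub-optimal schedules quickly. A minimal sketch of one such heuristic, longest-task-first list scheduling, follows; the task costs and host count are made up for illustration.

```python
import heapq

def list_schedule(task_costs, n_hosts):
    """Greedily assign each task (a cost in work units) to the host that
    frees up earliest. Returns (makespan, host index per task)."""
    # Longest-processing-time-first ordering improves the greedy bound.
    order = sorted(range(len(task_costs)), key=lambda i: -task_costs[i])
    heap = [(0.0, h) for h in range(n_hosts)]  # (ready_time, host)
    heapq.heapify(heap)
    assignment = [None] * len(task_costs)
    makespan = 0.0
    for i in order:
        ready, host = heapq.heappop(heap)      # earliest-available host
        finish = ready + task_costs[i]
        assignment[i] = host
        makespan = max(makespan, finish)
        heapq.heappush(heap, (finish, host))
    return makespan, assignment

makespan, assignment = list_schedule([7, 5, 3, 3, 2], n_hosts=2)
print(makespan)  # greedy makespan; the optimum for this instance is 10
```

Real brokers face the further problem the abstract highlights: the per-task costs and per-host speeds are not reliably known, which is exactly why accurate resource characterisation matters.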

    Group trust in distributed systems and its relationship with information security and cyber security

    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 15-01-2016. This thesis describes aspects regarding trust, reputation, information security and cyber security as connected subjects. It is the belief of this research that without trust it is not possible to address security in computational systems properly. One fundamental aspect is whether to use information systems or to rely on them. One must trust the technology involved, and consequently the entire system, without even knowing it in the first place. Due to human characteristics, there is a tendency to trust that systems will keep our data secure. However, one basic problem is that trust and reputation deal with subjective evaluations. Nowadays, information systems often have distributed support, meaning that many parts connect everything together, which is unknown to ordinary users. Depending on the technology, it is even unknown to systems administrators, for example in clusters, clouds and peer-to-peer systems. Considering the first area of research, trust, from the perspective of this work, a group trust model for distributed systems is presented as an extension of conventional trust and reputation mechanisms, developed considering groups, which are herein defined as collections of entities with particular affinities and capabilities. Broadening this perspective, the formation of groups is very common, but very few of the trust and reputation models studied deal with trust from the perspective of a collection of entities with common affinities. Thus, group trust is a way of representing the set of trust and reputation values of their particular members.
One aspect to be aware of is the fact that this set has pre-defined activities and common objectives...
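One simple way to picture "group trust as a representation of the members' trust and reputation" is an evidence-weighted aggregate of individual scores. The formula below is an illustrative assumption, not the model proposed in the thesis.

```python
def group_trust(members):
    """Aggregate trust for a group.

    members: list of (trust, n_interactions) tuples, where trust is in [0, 1]
    and n_interactions is how much evidence backs that trust value.
    Each member's trust is weighted by the evidence behind it, so a
    well-evidenced reputation dominates a weakly evidenced one.
    """
    if not members:
        return 0.0
    weighted = sum(trust * n for trust, n in members)
    evidence = sum(n for _, n in members)
    return weighted / evidence if evidence else 0.0

# Hypothetical group: two well-known members and one newcomer.
team = [(0.9, 50), (0.6, 10), (0.8, 40)]
print(round(group_trust(team), 3))
```

Other aggregation choices (minimum, median, decay over time) encode different attitudes to risk; the point is only that the group score is derived from, and summarises, the members' individual values.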

    Fault Tolerant Computation of Hyperbolic Partial Differential Equations with the Sparse Grid Combination Technique

    As the computing power of supercomputers continues to increase exponentially, the mean time between failures (MTBF) is decreasing. Checkpoint-restart has historically been the method of choice for recovering from failures. However, such methods become increasingly inefficient as the time required to complete a checkpoint-restart cycle approaches the MTBF. There is therefore a need to explore different ways of making computations fault tolerant. This thesis studies generalisations of the sparse grid combination technique with the goal of developing and analysing a holistic approach to the fault tolerant computation of partial differential equations (PDEs). Sparse grids allow one to reduce the computational complexity of high dimensional problems with only a small loss of accuracy. A drawback is the need to perform computations with a hierarchical basis rather than a traditional nodal basis. We survey classical error estimates for sparse grid interpolation and extend results to functions which are non-zero on the boundary. The combination technique approximates sparse grid solutions via a sum of many coarse approximations which need not be computed with a hierarchical basis. Study of the combination technique often assumes that approximations satisfy an error splitting formula. We adapt classical error splitting results to our slightly different convention of combination level. Literature on the application of the combination technique to hyperbolic PDEs is scarce, particularly when solved with explicit finite difference methods. We show that a particular family of finite difference discretisations for the advection equation solved via the method of lines has solutions which satisfy an error splitting formula. As a consequence, classical error splitting based estimates are readily applied to finite difference solutions of many hyperbolic PDEs. Our analysis also reveals how repeated combinations throughout the computation lead to a reduction in approximation error.
Generalisations of the combination technique are studied and developed in depth. The truncated combination technique is a modification of the classical method used in practical applications, and we provide analogues of classical error estimates. Adaptive sparse grids are then studied via a lattice framework. A detailed examination reveals many results regarding combination coefficients and extensions of classical error estimates. The framework is also applied to the study of extrapolation formulas. These extensions of the combination technique provide the foundations for the development of the general coefficient problem. Solutions to this problem allow one to combine any collection of coarse approximations on nested grids. Lastly, we show how the combination technique is made fault tolerant via application of the general coefficient problem. Rather than recompute coarse solutions which fail, we instead find new coefficients with which to combine the remaining solutions. This significantly reduces computational overheads in the presence of faults with only a small loss of accuracy. The latter is established with a careful study of the expected error for some select cases. We perform numerical experiments by computing combination solutions of the scalar advection equation in a parallel environment with simulated faults. The results support the preceding analysis and show that the overheads are indeed small and a significant improvement over traditional checkpoint-restart methods.
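The coefficient machinery can be illustrated in a few lines: combination coefficients arise from inclusion-exclusion over a downward-closed set of grid indices, and the fault-tolerant recombination amounts to recomputing coefficients after dropping failed grids from that set. The sketch below (a 2D, level-3 instance) shows the idea; it is a standard construction, not the thesis's full general coefficient problem.

```python
from itertools import product

def combination_coefficients(index_set):
    """Inclusion-exclusion coefficients for a downward-closed set of grid
    indices (tuples): c_i = sum over binary offsets z with i+z in the set
    of (-1)^|z|. Grids with c_i == 0 need not be computed at all."""
    s = set(index_set)
    d = len(next(iter(s)))
    coeffs = {}
    for i in s:
        c = 0
        for z in product((0, 1), repeat=d):
            if tuple(a + b for a, b in zip(i, z)) in s:
                c += (-1) ** sum(z)
        if c:
            coeffs[i] = c
    return coeffs

# Classical 2D combination at level 3: all grids (i, j) with i + j <= 3.
# Yields the familiar +1 on the level-3 diagonal, -1 on the level-2 diagonal.
full = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
print(combination_coefficients(full))

# Simulate a fault: the solution on grid (1, 2) is lost, so remove it from
# the index set and recombine the surviving solutions with new coefficients,
# instead of recomputing the failed grid.
survivors = [i for i in full if i != (1, 2)]
print(combination_coefficients(survivors))
```

In both cases the coefficients sum to 1, so the combination still reproduces constants exactly; the fault-tolerant variant simply accepts a slightly different (and analysable) approximation error in exchange for skipping the recomputation.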