15 research outputs found

    A formal architecture-centric and model driven approach for the engineering of science gateways

    Get PDF
    From n-Tier client/server applications, to more complex academic Grids, and even the most recent and promising industrial Clouds, the last decade has witnessed significant developments in distributed computing. In spite of this conceptual heterogeneity, Service-Oriented Architecture (SOA) has emerged as the common underlying abstraction paradigm, even though different standards and technologies are applied across application domains. Suitable access to data and algorithms resident in SOAs via so-called 'Science Gateways' has thus become a pressing need in order to realize the benefits of distributed computing infrastructures. In an attempt to inform service-oriented systems design and development in Grid-based biomedical research infrastructures, the applicant has consolidated work from three complementary experiences in European projects, which have developed and deployed large-scale, production-quality infrastructures and, more recently, Science Gateways to support research in breast cancer, paediatric diseases and neurodegenerative pathologies respectively. In analysing the requirements of these biomedical applications, the applicant was able to elaborate on commonly faced issues in Grid development and deployment, while proposing an adapted and extensible engineering framework. Grids implement a number of protocols, applications and standards, and attempt to virtualize and harmonize access to them. Most Grid implementations are therefore instantiated as superposed software layers, often resulting in a low quality of services and applications, making design and development increasingly complex and rendering classical software engineering approaches unsuitable for Grid development. The applicant proposes the application of a formal Model-Driven Engineering (MDE) approach to service-oriented development, making it possible to define Grid-based architectures and Science Gateways that satisfy quality-of-service requirements, execution platform and distribution criteria at design time. A novel investigation is thus presented on the applicability of the resulting grid MDE (gMDE) to specific examples, and conclusions are drawn on the benefits of this approach and its possible application to other areas, in particular Distributed Computing Infrastructure (DCI) interoperability, Science Gateways and Cloud architecture development.
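
    As a purely illustrative aid, the sketch below shows the flavour of a model-driven, design-time workflow of the kind gMDE targets: a platform-independent service model annotated with quality-of-service and distribution criteria is transformed into a platform-specific deployment description. Every name, field and target platform in it is an assumption made for illustration, not the thesis's actual metamodel or transformation.

```python
# Hypothetical sketch of the model-driven idea: a platform-independent service
# model (PIM) carrying QoS and distribution criteria is transformed at design
# time into a platform-specific description (PSM). All names are illustrative.
from dataclasses import dataclass

@dataclass
class ServiceModel:                      # platform-independent model (PIM)
    name: str
    operations: list[str]
    max_latency_ms: int                  # QoS requirement fixed at design time
    replicas: int                        # distribution criterion

def to_deployment_descriptor(model: ServiceModel, platform: str) -> dict:
    """Model transformation: PIM -> platform-specific deployment description."""
    if platform == "grid":
        return {"job": {"Executable": model.name,
                        "Requirements": f"other.ResponseTime <= {model.max_latency_ms}",
                        "InstanceCount": model.replicas}}
    if platform == "cloud":
        return {"autoscaling_group": {"service": model.name,
                                      "min_size": model.replicas,
                                      "latency_slo_ms": model.max_latency_ms}}
    raise ValueError(f"unsupported platform: {platform}")

gateway_pim = ServiceModel("image-analysis-service", ["submit", "status"], 500, 4)
print(to_deployment_descriptor(gateway_pim, "cloud"))
```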

    Efficient multilevel scheduling in grids and clouds with dynamic provisioning

    Get PDF
    Doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended 12-01-2016. The consolidation of large distributed computing infrastructures has resulted in a High-Throughput Computing platform that is ready for heavy workloads, whose best exponents are the current grid federations. On the other hand, Cloud Computing promises to be more flexible, usable, available and simple than Grid Computing, covering also many more computational needs than those required to carry out distributed calculations. In any case, because of the dynamism and heterogeneity present in grids and clouds, calculating the best match between computational tasks and resources in an effectively characterised infrastructure is, by definition, an NP-complete problem, and only sub-optimal solutions (schedules) can be found for these environments. Nevertheless, the characterisation of the resources of both kinds of infrastructures is far from being achieved. The available information systems do not provide accurate data about the status of the resources, which prevents the advanced scheduling required by the different needs of distributed applications. This issue was not solved for grids during the last decade, and the recently established cloud infrastructures present the same problem.
In this framework, brokers can only improve the throughput of very long calculations, but do not provide estimations of their duration. Complex scheduling has traditionally been tackled by other tools such as workflow managers, self-schedulers and the production management systems of certain research communities. Nevertheless, the low performance achieved by these early-binding methods is noticeable. Moreover, the diversity of cloud providers and, mainly, their lack of standardised programming interfaces and brokering tools to distribute the workload hinder the massive portability of legacy applications to cloud environments...
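
    To make the scheduling problem concrete: finding an optimal assignment of tasks to heterogeneous resources is NP-complete, so brokers resort to heuristics that yield sub-optimal schedules. The sketch below shows one such generic heuristic (greedy minimum completion time); it is an assumed illustration, not the provisioning or scheduling mechanism developed in the thesis.

```python
# Illustrative sketch only: optimal task-to-resource matching is NP-complete in
# general, so schedulers fall back on heuristics. This greedy "minimum
# completion time" rule is one such sub-optimal strategy.
def greedy_schedule(task_lengths, resource_speeds):
    """Assign each task to the resource that would finish it earliest."""
    finish_time = [0.0] * len(resource_speeds)   # current load per resource
    assignment = []
    for length in task_lengths:
        # estimated completion time of this task on every resource
        candidates = [(finish_time[r] + length / speed, r)
                      for r, speed in enumerate(resource_speeds)]
        eta, best = min(candidates)
        finish_time[best] = eta
        assignment.append(best)
    return assignment, max(finish_time)          # schedule and its makespan

tasks = [60, 30, 90, 10, 45]          # task lengths (normalised units of work)
speeds = [1.0, 2.0, 0.5]              # relative speed of each resource
plan, makespan = greedy_schedule(tasks, speeds)
print(plan, makespan)
```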

    Integration of Data Mining into Scientific Data Analysis Processes

    Get PDF
    In recent years, the use of advanced semi-interactive data analysis algorithms, such as those from the field of data mining, has gained more and more importance in life science in general, and in particular in bioinformatics, genetics, medicine and biodiversity. Today, there is a trend away from collecting and evaluating data only in the context of a specific problem or study, towards extensively collecting data from different sources into repositories where it is potentially useful for subsequent analysis, e.g. the Gene Expression Omnibus (GEO) repository of high-throughput gene expression data. At the time the data are collected, they are analysed in a specific context which influences the experimental design. However, the type of analyses that the data will be used for after they have been deposited is not known. Content and data format are geared only to the first experiment, not to future re-use. Thus, complex process chains are needed for the analysis of the data, and such process chains need to be supported by the environments that are used to set up analysis solutions. Building specialized software for each individual problem is not a solution, as this effort can only be carried out for huge projects running for several years. Hence, data mining functionality has been bundled into toolkits, which provide it in the form of a collection of different components. Depending on the different research questions of the users, the solutions consist of distinct compositions of these components. Today, existing solutions for data mining processes comprise different components that represent different steps in the analysis process, and there exist graphical or script-based toolkits for combining such components. The data mining tools that can serve as components in analysis processes are based on single-computer environments, local data sources and single users. However, analysis scenarios in medical informatics and bioinformatics have to deal with multi-computer environments, distributed data sources and multiple users that have to cooperate. Users need support for integrating data mining into analysis processes in the context of such scenarios, and such support is lacking today. Typically, analysts working with single-computer environments face the problem of large data volumes, since tools do not address scalability and access to distributed data sources. Distributed environments such as grid environments provide scalability and access to distributed data sources, but the integration of existing components into such environments is complex. In addition, new components often cannot be directly developed in distributed environments. Moreover, in scenarios involving multiple computers, multiple distributed data sources and multiple users, the reuse of components, scripts and analysis processes becomes more important, as more steps and configuration are necessary and thus much bigger efforts are needed to develop and set up a solution. In this thesis we introduce an approach for supporting interactive and distributed data mining for multiple users, based on infrastructure principles that allow building on data mining components and processes that are already available instead of designing a completely new infrastructure, so that users can keep working with their familiar tools. In order to achieve the integration of data mining into scientific data analysis processes, this thesis proposes a stepwise approach to supporting the user in the development of analysis solutions that include data mining.
We see our major contributions as the following: first, we propose an approach to integrate data mining components developed for single-processor environments into grid environments. By this, we support users in reusing standard data mining components with little effort. The approach is based on a metadata schema definition which is used to grid-enable existing data mining components. Second, we describe an approach for interactively developing data mining scripts in grid environments. The approach efficiently supports users when it is necessary to enhance available components, to develop new data mining components, and to compose these components. Third, building on that, an approach for facilitating the reuse of existing data mining processes based on process patterns is presented. It supports users in scenarios that cover different steps of the data mining process involving several components or scripts. The data mining process patterns support the description of data mining processes at different levels of abstraction, between the CRISP-DM model as the most general and executable workflows as the most concrete representation.
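
    The following sketch illustrates, under assumptions of our own, what a metadata description used to grid-enable an existing single-processor data mining component might look like, and how a generic batch-job description could be derived from it. The field names, the example component and the helper function are hypothetical; the thesis defines its own metadata schema.

```python
# Hypothetical illustration of metadata used to "grid-enable" an existing data
# mining component. The schema below is a simplified stand-in, not the actual
# schema definition from the thesis.
component_metadata = {
    "name": "kmeans-clustering",
    "implementation": {"language": "java", "entry_point": "weka.clusterers.SimpleKMeans"},
    "inputs": [{"name": "dataset", "format": "ARFF", "access": "gridftp"}],
    "outputs": [{"name": "model", "format": "binary"}],
    "parameters": [{"name": "k", "type": "int", "default": 3}],
    "requirements": {"memory_mb": 2048, "cpus": 1},
}

def to_job_description(meta, parameter_values):
    """Derive a generic batch-job description from the component metadata."""
    args = [f"--{p['name']}={parameter_values.get(p['name'], p['default'])}"
            for p in meta["parameters"]]
    return {"executable": meta["implementation"]["entry_point"],
            "arguments": args,
            "stage_in": [i["name"] for i in meta["inputs"]],
            "stage_out": [o["name"] for o in meta["outputs"]],
            "resources": meta["requirements"]}

print(to_job_description(component_metadata, {"k": 5}))
```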

    Distributed EaaS simulation using TEEs: A case study in the implementation and practical application of an embedded computer cluster

    Get PDF
    Internet of Things (IoT) devices with limited resources struggle to generate the high-quality entropy required for strong randomness. This results in weak cryptographic keys. As keys are a single point of failure in modern cryptography, IoT devices performing cryptographic operations may be susceptible to a variety of attacks. To address this issue, we develop an Entropy as a Service (EaaS) simulation. The purpose of EaaS is to provide IoT devices with high-quality entropy as a service so that they can use it to generate strong keys. Additionally, we utilise Trusted Execution Environments (TEEs) in the simulation. A TEE is a secure processor component that provides data protection, integrity, and confidentiality for selected applications running on the processor by isolating them from other system processes (including the OS). TEEs thereby enhance system security. The EaaS simulation is performed on a computer cluster known as the Magi cluster. The Magi cluster is a private computer cluster that has been designed, built, configured, and tested as part of this thesis to meet the requirements of Tampere University's Network and Information Security Group (NISEC). In this thesis, we explain how the Magi cluster is implemented and how it is used to conduct a distributed EaaS simulation utilising TEEs.
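
    A minimal sketch of the EaaS idea follows, assuming a hypothetical service endpoint and ignoring the TEE and cluster aspects entirely: a constrained device fetches high-quality entropy from the service and mixes it with its local entropy before deriving key material. This is an illustration only, not the simulation implemented on the Magi cluster.

```python
# Minimal sketch of the EaaS idea, with a made-up endpoint and no TEE handling.
# A constrained device mixes remote, high-quality entropy with whatever local
# entropy it has before deriving key material.
import hashlib
import os
import urllib.request

EAAS_URL = "https://eaas.example.org/entropy?bytes=32"   # hypothetical endpoint

def fetch_remote_entropy(url: str = EAAS_URL) -> bytes:
    """Request a block of high-quality entropy from the (hypothetical) service."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()

def derive_seed() -> bytes:
    local = os.urandom(32)                    # possibly weak on constrained devices
    remote = fetch_remote_entropy()           # high-quality entropy from the service
    # Mix both sources so the seed is strong if either source is strong.
    return hashlib.sha256(local + remote).digest()

if __name__ == "__main__":
    print(derive_seed().hex())
```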

    Proyecto Docente e Investigador, Trabajo Original de Investigación y Presentación de la Defensa, preparado por Germán Moltó para concursar a la plaza de Catedrático de Universidad, concurso 082/22, plaza 6708, área de Ciencia de la Computación e Inteligencia Artificial

    Full text link
    This document contains the teaching and research project of the candidate Germán Moltó Martínez, submitted as a requirement for the competitive examination for access to positions in the University Teaching Bodies. Specifically, the document concerns the competition for position 6708, Catedrático de Universidad (Full Professor) in the area of Computer Science, in the Departamento de Sistemas Informáticos y Computación of the Universitat Politècnica de València. The position is attached to the Escola Tècnica Superior d'Enginyeria Informàtica and its teaching profile covers the courses "Infraestructuras de Cloud Público" (Public Cloud Infrastructures) and "Estructuras de Datos y Algoritmos" (Data Structures and Algorithms). The Academic, Teaching and Research Record is also included, as well as the presentation used during the defence. Germán Moltó Martínez (2022). Proyecto Docente e Investigador, Trabajo Original de Investigación y Presentación de la Defensa, preparado por Germán Moltó para concursar a la plaza de Catedrático de Universidad, concurso 082/22, plaza 6708, área de Ciencia de la Computación e Inteligencia Artificial. http://hdl.handle.net/10251/18903

    Offshore Wind Data Integration

    Get PDF
    Doctoral dissertation in information and communication technology, University of Agder, Grimstad, 2014. Using renewable energy to meet future electricity consumption and to reduce environmental impact is a significant target of many countries around the world. Wind power is one of the most promising renewable energy technologies. In particular, the development of offshore wind power is increasing rapidly due to the large areas of wind resources available. However, offshore wind faces big challenges, such as the effective use of wind power plants and the reduction of the costs of installation as well as operation and maintenance (O&M). Improved O&M is likely to reduce the hazard exposure of employees, increase income, and support offshore activities more efficiently. In order to optimize O&M, the importance of data exchange and knowledge sharing within the offshore wind industry must be recognized. With more data available and accessible, it is possible to make better decisions, and thereby improve recovery rates and reduce operational costs. This dissertation proposes a holistic way of improving remote operations for offshore wind farms by using data integration. In particular, the semantics and integration aspects of data integration are investigated. The research looks at both theoretical foundations and practical implementations. As the outcome of the research, a framework for data integration of offshore wind farms has been developed. The framework consists of three main components: the semantic model, the data source handling, and the information provisioning. In particular, an offshore wind ontology has been proposed to explore the semantics of wind data and enable knowledge sharing and data exchange. The ontology is aligned with the Semantic Sensor Network ontology to support the management of metadata in smart grids. That is to say, the ontology-based approach has proven to be useful in managing data and metadata in offshore wind and in smart grids. A quality-based approach is proposed to manage, select, and provide the most suitable data source for users based upon their quality requirements, and an approach to formally describing derived data in ontologies is investigated.
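
    As an illustration of the quality-based selection idea, the sketch below picks the most suitable data source given a user's quality requirements. The quality dimensions, thresholds and ranking rule are assumptions made for the example and are not taken from the framework itself.

```python
# Purely illustrative sketch of quality-based data source selection: filter
# sources by hard quality requirements, then rank the remainder. The dimensions
# and scoring rule are assumptions, not the framework's actual quality model.
sources = [
    {"name": "scada-feed",   "freshness_s": 1,    "completeness": 0.90, "accuracy": 0.95},
    {"name": "condition-db", "freshness_s": 600,  "completeness": 0.99, "accuracy": 0.99},
    {"name": "weather-api",  "freshness_s": 3600, "completeness": 0.97, "accuracy": 0.90},
]

def select_source(sources, max_staleness_s, min_completeness):
    """Filter sources by hard requirements, then rank the rest by accuracy."""
    eligible = [s for s in sources
                if s["freshness_s"] <= max_staleness_s
                and s["completeness"] >= min_completeness]
    if not eligible:
        raise LookupError("no data source satisfies the quality requirements")
    return max(eligible, key=lambda s: s["accuracy"])

print(select_source(sources, max_staleness_s=900, min_completeness=0.95))
```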

    Satellite Communications

    Get PDF
    This study is motivated by the need to give the reader a broad view of the developments, key concepts, and technologies related to the evolution of the information society, with a focus on wireless communications and geoinformation technologies and their role in the environment. By giving perspective, it aims to assist people active in industry, the public sector, and the Earth sciences, providing a base for their continued work and thinking.