2,256 research outputs found

    The C++0x "Concepts" Effort

    Full text link
    C++0x is the working title for the revision of the ISO standard of the C++ programming language that was originally planned for release in 2009 but that was delayed to 2011. The largest language extension in C++0x was "concepts", that is, a collection of features for constraining template parameters. In September of 2008, the C++ standards committee voted the concepts extension into C++0x, but then in July of 2009, the committee voted the concepts extension back out of C++0x. This article is my account of the technical challenges and debates within the "concepts" effort in the years 2003 to 2009. To provide some background, the article also describes the design space for constrained parametric polymorphism, or what is colloquially known as constrained generics. While this article is meant to be generally accessible, the writing is aimed toward readers with background in functional programming and programming language theory. This article grew out of a lecture at the Spring School on Generic and Indexed Programming at the University of Oxford, March 2010.
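    The article's subject is C++'s own concepts syntax, which is not reproduced here. As a language-neutral sketch of what "constraining a template parameter" means, assuming purely illustrative names (Comparable, smallest, Account), a constrained generic can be written in TypeScript as follows:

        // A constraint: any type argument must provide a compareTo operation.
        interface Comparable<T> {
          compareTo(other: T): number; // negative, zero, or positive
        }

        // A generic function whose type parameter is restricted by the constraint.
        // Instantiating it with a type lacking compareTo is rejected at compile
        // time, which is the essence of constrained parametric polymorphism.
        function smallest<T extends Comparable<T>>(items: T[]): T {
          return items.reduce((min, x) => (x.compareTo(min) < 0 ? x : min));
        }

        class Account implements Comparable<Account> {
          constructor(public balance: number) {}
          compareTo(other: Account): number {
            return this.balance - other.balance;
          }
        }

        console.log(smallest([new Account(30), new Account(5)]).balance); // 5

    The C++0x design went further than such caller-side checks, for example toward checking template bodies against their declared constraints, which this sketch does not attempt to model.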

    Object Relational Mapping and API with Loopback 4

    Get PDF
    APIs are now part of our everyday lives. They are so embedded in our routines that we use them without even being aware of it. Internet search engines, online shops, and digital forums are just a few examples of this routine use of APIs. Object Relational Mapping (ORM) is a technique used to persist the data handled by these APIs. LoopBack 4 is a framework for developing APIs, backed by StrongLoop (IBM). It is the fourth and newest version of the software and is still under active development. The goal of this project is to understand how LoopBack 4 implements the two concepts mentioned above: to gain insight into how a back end works, how databases work, and how the framework stores data in them. Finally, we produce complete documentation so that anyone can understand the ORM in LoopBack 4 from scratch. During the project we studied and implemented several examples to explain how relations work in LoopBack 4. This involves not only models; it also includes datasources, controllers, and repositories. Once implemented in LoopBack, we show how these relations are reflected in a relational database and in a non-relational one.
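    As a minimal sketch of the artifacts mentioned above, assuming a hypothetical Customer/Order domain (the identifiers and the 'db' datasource binding are illustrative, not taken from the project), a LoopBack 4 model, relation, and repository look roughly like this:

        import {Entity, model, property, belongsTo, DefaultCrudRepository, juggler} from '@loopback/repository';
        import {inject} from '@loopback/core';

        // A model maps a TypeScript class to a table or collection via decorators.
        @model()
        export class Customer extends Entity {
          @property({type: 'number', id: true, generated: true})
          id?: number;

          @property({type: 'string', required: true})
          name: string;
        }

        @model()
        export class Order extends Entity {
          @property({type: 'number', id: true, generated: true})
          id?: number;

          // belongsTo adds the foreign key plus the relation metadata the ORM
          // uses when persisting to a relational or a non-relational datasource.
          @belongsTo(() => Customer)
          customerId: number;
        }

        // A repository binds the model to a configured datasource and exposes CRUD.
        export class OrderRepository extends DefaultCrudRepository<Order, typeof Order.prototype.id> {
          constructor(@inject('datasources.db') dataSource: juggler.DataSource) {
            super(Order, dataSource);
          }
        }

    A controller can then inject OrderRepository and expose its CRUD operations over REST; swapping the underlying datasource (for example from a relational to a NoSQL connector) leaves the model and repository code largely unchanged.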

    EPOS Security & GDPR Compliance

    Get PDF
    Since May 2018, companies have been required to comply with the General Data Protection Regulation (GDPR). This means that many companies had to change the way they collect and process EU citizens' data. The compliance process can be very expensive; for example, more specialized human resources are needed, who have to study the regulation and then implement the changes in IT applications and infrastructures. As a result, new measures and methods need to be developed and implemented, making this process costly. This project is part of the European Plate Observing System (EPOS) project. EPOS allows earth-science data from various research institutes across Europe to be shared and used. The data is stored in a database and in several file systems, and in addition there are web services for data mining and control. EPOS is a complex distributed system, so it is important to guarantee not only its security but also its compliance with the GDPR. The need to automate and facilitate this compliance and verification process was identified, in particular the need to develop a tool capable of analyzing web applications. Such a tool can give companies in general an easier and faster way to check their degree of compliance with the GDPR in order to assess and implement any necessary changes. To this end, PADRES (PrivAcy, Data REgulation and Security) was developed. It contains the main points of the GDPR, organized by principle in the form of a checklist that is answered manually. Since privacy and security complement each other, a search for vulnerabilities in web applications was also included: when the checklist is submitted, a security analysis is performed by integrating the open-source tools Network Mapper (NMAP) and Zed Attack Proxy (ZAP), together with a cookie analyzer, testing the application against the most frequent vulnerabilities according to the OWASP Top 10. Finally, a report is generated with the information obtained, together with a set of suggestions based on the checklist responses. Applying this tool to EPOS, most of the points related to the GDPR were found to be compliant, and for the remaining ones suggestions were generated to help improve the level of compliance and the general management of data. In the vulnerability analysis, some findings were classified as high risk, but most were classified as medium risk.
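    As an illustration of the checklist-to-report flow described above (the types, fields, and example items below are assumptions made for the sketch, not PADRES's actual data model), the idea can be captured in a few lines:

        // One manually answered checklist item, grouped under a GDPR principle.
        interface ChecklistItem {
          principle: string;            // e.g. "Transparency" or "Storage limitation"
          question: string;
          answer: 'yes' | 'no' | 'partial';
          suggestion: string;           // surfaced in the report when not compliant
        }

        interface Report {
          compliant: number;
          total: number;
          suggestions: string[];
        }

        // Count compliant answers and collect a suggestion for every item
        // answered "no" or "partial", mirroring the report described above.
        function buildReport(items: ChecklistItem[]): Report {
          const compliant = items.filter(i => i.answer === 'yes').length;
          const suggestions = items
            .filter(i => i.answer !== 'yes')
            .map(i => `[${i.principle}] ${i.suggestion}`);
          return {compliant, total: items.length, suggestions};
        }

        const report = buildReport([
          {principle: 'Transparency', question: 'Is a privacy policy published?',
           answer: 'yes', suggestion: 'Publish a privacy policy.'},
          {principle: 'Storage limitation', question: 'Are retention periods defined?',
           answer: 'no', suggestion: 'Define and enforce retention periods.'},
        ]);
        console.log(`${report.compliant}/${report.total} compliant`, report.suggestions);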

    Adapting Distributed Embedded Applications at Runtime (Anpassen verteilter eingebetteter Anwendungen im laufenden Betrieb)

    Get PDF
    The availability of third-party apps is among the key success factors for software ecosystems: users benefit from more features and faster innovation, while third-party solution vendors can leverage the platform to create successful offerings. However, this requires a certain decoupling of the different parties' engineering activities that has not yet been achieved for distributed control systems. While late and dynamic integration of third-party components would be required, the resulting control systems must provide high reliability with respect to real-time requirements, which leads to integration complexity. Closing this gap would particularly contribute to the vision of software-defined manufacturing, where an ecosystem of modern IT-based control system components could lead to faster innovation due to their higher level of abstraction and the availability of various frameworks. Therefore, this thesis addresses the research question: How can we use modern IT technologies to enable independent evolution and easy third-party integration of software components in distributed control systems where deterministic end-to-end reactivity is required, and, in particular, how can we apply distributed changes to such systems consistently and reactively during operation? This thesis describes the challenges and related approaches in detail and points out that existing approaches do not fully address this research question. To close this gap, a formal specification of a runtime platform concept is presented in conjunction with a model-based engineering approach. The engineering approach decouples the engineering steps of component definition, integration, and deployment. The runtime platform supports this approach by isolating the components while still offering predictable end-to-end real-time behavior. Independent evolution of software components is supported through a concept for synchronous reconfiguration during full operation, i.e., dynamic orchestration of components. Time-critical state transfer is also supported and leads to at most bounded quality degradation. Reconfiguration planning is supported by analysis concepts, including simulation of a formally specified system and its reconfiguration, and analysis of potential quality degradation with the evolving dataflow graph (EDFG) method. A platform-specific realization of the concepts, the real-time container architecture, is described as a reference implementation. The model and the prototype are evaluated regarding the feasibility and applicability of the concepts in two case studies. The first case study is a minimalistic distributed control system used in different setups with different component variants and reconfiguration plans to compare the model and the prototype and to gather runtime statistics. The second case study is a smart factory showcase system with more challenging application components and interface technologies. The conclusion is that the concepts are feasible and applicable, even though the concepts and the prototype still need further work in the future, for example to reach shorter cycle times.
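    As a deliberately generic illustration of two of the ideas named above, synchronous reconfiguration applied only at a cycle boundary and state transfer into the replacement component (the classes and names are invented for the sketch; this is not the thesis's real-time container architecture), consider:

        // Components run in a fixed-order cycle; a pending reconfiguration is
        // applied atomically between cycles, never in the middle of one.
        interface Component {
          name: string;
          step(input: number): number;        // one cycle of work
          exportState(): unknown;             // state transfer on replacement
          importState(state: unknown): void;
        }

        class CyclicExecutive {
          private pending: {replace: string; next: Component} | null = null;
          constructor(private components: Component[]) {}

          requestReconfiguration(replace: string, next: Component): void {
            this.pending = {replace, next};   // takes effect at the next boundary
          }

          runCycle(input: number): number {
            const change = this.pending;
            if (change) {
              const idx = this.components.findIndex(c => c.name === change.replace);
              if (idx >= 0) {
                // Carry the old component's state into its replacement.
                change.next.importState(this.components[idx].exportState());
                this.components[idx] = change.next;
              }
              this.pending = null;
            }
            // Deterministic end-to-end dataflow: run the pipeline in order.
            return this.components.reduce((value, c) => c.step(value), input);
          }
        }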

    Indexed Induction And Coinduction, Fibrationally

    Get PDF
    This paper extends the fibrational approach to induction and coinduction pioneered by Hermida and Jacobs, and developed by the current authors, in two key directions. First, we present a dual to the sound induction rule for inductive types that we developed previously. That is, we present a sound coinduction rule for any data type arising as the carrier of the final coalgebra of a functor, thus relaxing Hermida and Jacobs’ restriction to polynomial functors. To achieve this we introduce the notion of a quotient category with equality (QCE) that i) abstracts the standard notion of a fibration of relations constructed from a given fibration; and ii) plays a role in the theory of coinduction dual to that played by a comprehension category with unit (CCU) in the theory of induction. Secondly, we show that inductive and coinductive indexed types also admit sound induction and coinduction rules. Indexed data types often arise as carriers of initial algebras and final coalgebras of functors on slice categories, so we give sufficient conditions under which we can construct, from a CCU (QCE) U : E → B, a fibration with base B/I that models indexing by I and is also a CCU (resp., QCE). We finish the paper by considering the more general case of sound induction and coinduction rules for indexed data types when the indexing is itself given by a fibration.
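    The standard concrete instance of a data type arising as the carrier of the final coalgebra of a functor is the type of infinite streams over A for F(X) = A × X. As a small sketch of that instance only (not of the paper's fibrational constructions), with unfold as the anamorphism into the final coalgebra:

        // A coalgebra for F(X) = A x X is a map step : S -> [A, S]; streams are
        // the final coalgebra, and unfold is the unique map into them.
        interface Stream<A> {
          head: A;
          tail: () => Stream<A>;   // lazy, so the structure may be infinite
        }

        function unfold<A, S>(step: (seed: S) => [A, S], seed: S): Stream<A> {
          const [head, next] = step(seed);
          return {head, tail: () => unfold(step, next)};
        }

        // Two different coalgebras that unfold to bisimilar streams (0, 1, 2, ...);
        // the coinduction rule is what lets one conclude such streams are equal.
        const nats = unfold((n: number): [number, number] => [n, n + 1], 0);
        const natsAgain = unfold(
          ([n, d]: [number, number]): [number, [number, number]] => [n, [n + d, d]],
          [0, 1] as [number, number],
        );

        // Observe a finite prefix (an infinite stream cannot be printed whole).
        function take<A>(s: Stream<A>, n: number): A[] {
          return n === 0 ? [] : [s.head, ...take(s.tail(), n - 1)];
        }
        console.log(take(nats, 5), take(natsAgain, 5)); // [0,1,2,3,4] twice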

    Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing

    Full text link
    Thesis by compendium of publications. Scientific applications generally imply a variable and unpredictable computational workload that institutions must address by dynamically adjusting the allocation of resources to their changing computational needs. Scientific applications may require high capacity, e.g. the concurrent use of computational resources for processing many independent jobs (High Throughput Computing, HTC), or high capability, i.e. high-performance resources for solving a single complex problem (High Performance Computing, HPC). The computational resources required by this type of application usually carry a very high cost that may exceed the availability of the institution's resources, or those resources may not be well suited to the scientific applications, especially in the case of infrastructures prepared for the execution of HPC applications. Indeed, the different parts of an application may require different types of computational resources. Cloud service platforms have become an efficient solution to meet the needs of HTC applications, as they provide a wide range of computing resources accessible on demand. For this reason, the number of hybrid computational infrastructures has grown in recent years; a hybrid infrastructure combines infrastructure hosted on cloud platforms with computing resources hosted by the institutions themselves, known as on-premise infrastructures. As scientific applications can be processed on different infrastructures, application delivery has become a key issue. Containers are probably the most popular technology for application delivery, as they ease reproducibility, traceability, versioning, isolation, and portability. The main objective of this thesis is to provide an architecture and a set of services to build hybrid processing infrastructures that fit the needs of different workloads. The thesis therefore considered aspects such as elasticity and federation: vertical and horizontal elasticity were addressed by developing a proof of concept for vertical elasticity and by designing an elastic cloud architecture for data analytics. Afterwards, an elastic cloud architecture comprising heterogeneous computational resources was implemented for medical image processing, using multiple processing queues for jobs with different requirements; its development was framed in a collaboration with the company QUIBIM. In the last part of the thesis, this work was evolved to design and implement an elastic, multi-site and multi-tenant cloud architecture for medical image processing within the European project PRIMAGE. This architecture uses distributed storage and integrates external services for authentication and authorization based on OpenID Connect (OIDC). The tool kube-authorizer was developed to provide access control to the resources of the processing infrastructure automatically, from the information obtained in the authentication process, by creating the corresponding policies and roles. Finally, another tool, hpc-connector, was developed to enable the integration of HPC processing infrastructures into cloud infrastructures without requiring modifications to either the HPC infrastructure or the cloud architecture. During the realization of this thesis, different open-source container and job management technologies were used, open-source tools and components were developed, and recipes for the automated configuration of the designed architectures were implemented from a DevOps perspective. The results support the feasibility of combining vertical and horizontal elasticity to implement deadline-based QoS policies, as well as the feasibility of the federated authentication model for combining public and on-premise clouds.
    López Huguet, S. (2021). Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/172327

    Managing Cultural Heritage Sites in Southeastern Europe

    Get PDF
    The CHERPLAN project (CHERPLAN stands for “Enhancement of Cultural Heritage through Environmental Planning and Management”) aims to provide a strong basis for ensuring compatibility and synergy between cultural heritage conservation and socioeconomic growth by fostering the adoption of a modern environmental planning approach throughout southeast Europe (SEE). The aim of environmental planning is to integrate traditional urban/spatial planning with the concerns of environmentalism to ensure sustainable development. When innovatively applied to cultural heritage sites, environmental planning’s comprehensive perspective can be regarded as composed of three spheres: the built and historical environment, the socioeconomic and cultural environment, and the biophysical environment. The book Managing Cultural Heritage Sites in Southeastern Europe, one of the outputs of the CHERPLAN project, addresses the key issues of managing cultural heritage sites. It presents the basic framework developed in this field by UNESCO and ICOMOS, together with guidance on twenty different management challenges, each accompanied by an introduction, recommendations, and examples of good practice. The book thus provides practical information for applying environmental planning to cultural heritage sites in Southeastern Europe; some of the recommendations were developed within the project's pilot sites, while others were adapted from elsewhere. In both cases, the recommendations and good practices represent proven local knowledge, as they have been applied successfully in the cases described.