
    What Does Aspect-Oriented Programming Mean for Functional Programmers?

    Aspect-Oriented Programming (AOP) aims at modularising crosscutting concerns that show up in software. The success of AOP has been almost viral, and nearly all areas in Software Engineering and Programming Languages have become "infected" by the AOP bug in one way or another. Interestingly, the functional programming community (and, in particular, the pure functional programming community) seems to be resistant to the pandemic. The goal of this paper is to debate the possible causes of the functional programming community's resistance and to raise awareness and interest by showcasing the benefits that could be gained from having a functional AOP language. At the same time, we identify the main challenges and explore the possible design space.
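
    As a concrete hint of what a functional flavor of AOP could look like, here is a minimal Java sketch (not taken from the paper) in which "around advice" is just a higher-order function that wraps a base computation; all names are illustrative.

```java
import java.util.function.Function;

// A minimal, hypothetical sketch of "advice as a higher-order function",
// the flavor of functional AOP the paper argues for; none of these names
// come from the paper itself.
public class FunctionalAopSketch {
    // An "around advice" is just a function transformer: it receives the
    // original computation and returns an instrumented one.
    static <A, B> Function<A, B> aroundLogging(String name, Function<A, B> f) {
        return a -> {
            System.out.println("enter " + name + " with " + a);
            B b = f.apply(a);
            System.out.println("exit  " + name + " with " + b);
            return b;
        };
    }

    public static void main(String[] args) {
        Function<Integer, Integer> square = x -> x * x;
        // The logging concern stays in one place instead of being scattered
        // through every function body (the crosscutting problem AOP targets).
        Function<Integer, Integer> logged = aroundLogging("square", square);
        System.out.println(logged.apply(7));
    }
}
```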

    A UML Profile for Security and Code Generation

    Recently, many research studies have suggested integrating security engineering at an early stage of modeling and system development using Model-Driven Architecture (MDA). This approach consists of deploying the UML (Unified Modeling Language) standard as a principal metamodel for the abstraction of different systems. To our knowledge, most of this work has focused on integrating security requirements after the implementation phase, without taking them into account when designing systems. In this work, we focused our efforts on non-functional aspects such as the business logic layer, data flow monitoring, and high-quality service delivery. In practice, we have proposed a new UML profile for security integration and code generation for the Java platform. The security properties are described by a UML profile and the OCL language in order to verify the requirements of confidentiality, authorization, availability, data integrity, and data encryption. Finally, the source code, such as the application security configuration, the method signatures and their bodies, the persistent entities, and the security controllers, is generated from the sequence diagram of the system's internal behavior after its extension with this profile and the application of a set of transformations.
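
    The following is a hand-written illustration, not actual output of the proposed generator, of the kind of Java code such a security profile could drive: a stereotype in the extended sequence diagram becomes a runtime authorization guard. All names here (AuthorizationService, the <<secured>> stereotype) are hypothetical.

```java
// Illustrative sketch: a model-level security stereotype realized as a
// generated authorization guard. All identifiers are invented.
interface AuthorizationService {
    boolean hasRole(String user, String role);
}

public class AccountService {
    private final AuthorizationService auth;

    public AccountService(AuthorizationService auth) { this.auth = auth; }

    // Imagined as <<secured role="ADMIN">> on the model element; the
    // generator would emit both the signature and this guard in the body.
    public void closeAccount(String user, long accountId) {
        if (!auth.hasRole(user, "ADMIN")) {
            throw new SecurityException(user + " may not close accounts");
        }
        System.out.println("account " + accountId + " closed by " + user);
    }

    public static void main(String[] args) {
        AccountService service = new AccountService((u, r) -> "alice".equals(u));
        service.closeAccount("alice", 42L); // authorized
    }
}
```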

    Metadata Management Using Model Weaving and Model Transformation

    The interaction and interoperability between different data sources is a major concern in many organizations. The different formats of data, APIs, and architectures increase the incompatibilities, to the point that interoperability and interaction between components becomes a very difficult task. Model-driven engineering (MDE) is a paradigm that helps diminish interoperability problems by considering every entity as a model. MDE platforms are composed of different kinds of models. Some of the most important kinds are transformation models, which are used to define fixed operations between models. In addition to fixed transformation operations, there are other kinds of interactions and relationships between models, and a complete MDE solution must be capable of handling them. Until now, most research has concentrated on studying transformation languages, which means additional efforts must be undertaken to study these relationships and their implications for an MDE platform. This thesis studies different forms of relationships between model elements. We show through an extensive review of related work that the major limitation of current solutions is their lack of genericity, extensibility, and adaptability. We present a generic MDE solution for relationship management called model weaving. Model weaving proposes to capture different kinds of relationships between model elements in a weaving model. A weaving model conforms to extensions of a core weaving metamodel that supports basic relationship management. After proposing a unification of the conceptual foundations related to model weaving, we show how weaving models and transformation models are used as a generic approach for data interoperability. The weaving models are used to produce model transformations. Moreover, we present an adaptive framework for creating weaving models in a semi-automatic way. We validate our approach by developing a generic and adaptive tool called ATLAS Model Weaver (AMW), and by implementing several use cases from different application scenarios.
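
    A toy rendition of the core weaving idea, sketched in Java: a weaving model is itself a model whose elements are typed links between elements of other models. This mirrors the AMW core metamodel only loosely; the record names and the "Concatenation" link type are illustrative.

```java
import java.util.List;

// A model element is identified by the model it lives in and its own id.
record ModelElement(String modelId, String elementId) {}

// A link type extends the core notion of "related" with domain semantics,
// e.g. equality, concatenation, or equivalence-with-conversion.
record WeavingLink(String linkType, ModelElement source, List<ModelElement> targets) {}

public class WeavingDemo {
    public static void main(String[] args) {
        // Relate "client.name" to "firstName" + "lastName": the kind of
        // correspondence a generated transformation would later realize.
        WeavingLink link = new WeavingLink(
            "Concatenation",
            new ModelElement("CRM", "client.name"),
            List.of(new ModelElement("ERP", "customer.firstName"),
                    new ModelElement("ERP", "customer.lastName")));
        System.out.println(link);
    }
}
```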

    Transactions and schema evolution in a persistent object-oriented programming system

    Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates in the applications' class structure and database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as related problems such as database evolution, concurrency, and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of this research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. Our meta-model supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research work. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect in the aspect-oriented sense. Objects do not need to extend any superclass, implement an interface, or carry a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment which supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure, because the framework will produce a new version for it at the database metadata layer. Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism is extended, hence keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on programmer productivity, simplifying the entire evolution process at the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility. Compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate the robustness of the prototype and meta-model. To perform these tests, we used a small OO7 database due to its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible; however, the developed benchmark is now available for future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms; using our framework and minor changes to the application's source code, we added these mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience with our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
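
    A schematic Java sketch (not the thesis framework's actual API) of the class-versioning idea: the metadata layer keeps every version of a class's structure, and stored instances are converted between versions on access, in both directions. The field names and adaptation rules are invented for illustration.

```java
import java.util.Map;

// Version 1 of "Person" stores a single "name" field; version 2 splits it
// into "firstName"/"lastName". Instance adaptation converts on access.
public class VersioningSketch {
    static Map<String, String> adaptV1toV2(Map<String, String> v1) {
        String[] parts = v1.get("name").split(" ", 2);
        return Map.of("firstName", parts[0], "lastName", parts[1]);
    }

    static Map<String, String> adaptV2toV1(Map<String, String> v2) {
        return Map.of("name", v2.get("firstName") + " " + v2.get("lastName"));
    }

    public static void main(String[] args) {
        Map<String, String> stored = Map.of("name", "Ada Lovelace");
        // A v2 application reads a v1 object...
        Map<String, String> asV2 = adaptV1toV2(stored);
        // ...and a v1 application can still read it back: bidirectional
        // compatibility without touching either application's code.
        System.out.println(asV2 + " / " + adaptV2toV1(asV2));
    }
}
```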

    Unwoven Aspect Analysis

    Various languages and tools supporting advanced separation of concerns (such as aspect-oriented programming) give a software developer the ability to separate functional and non-functional programmatic intentions. Once these separate pieces of the software have been specified, the tools automatically handle the interaction points between separate modules, relieving the developer of this chore and permitting more understandable, maintainable code. Many approaches have left traditional compiler analysis and optimization until after the composition has been performed; unfortunately, analyses performed after composition cannot make use of the logical separation present in the original program. Further, for modular systems that can be configured with different sets of features, testing under every possible combination of features may be necessary to avoid bugs in production software, and is time-consuming. To address this testing problem, we investigate a feature-aware compiler analysis that runs during composition and discovers features that are strongly independent of each other. When their independence can be established, the number of feature combinations that must be separately tested can be reduced. We develop this approach and discuss our implementation. We look toward future programming languages in two ways: we implement solutions to problems that are conceptually aspect-oriented but for which current aspect languages and tools fail, and we study these cases and consider what language designs might provide even more information to a compiler. We describe some features that such a future language might have, based on our observations of current language deficiencies and our experience with compilers for these languages.
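
    A back-of-the-envelope illustration in Java of why proving feature independence shrinks the test matrix: with n optional features, exhaustive testing needs 2^n configurations, whereas fully independent features can be tested in isolation. The numbers below are hypothetical, not from the thesis.

```java
// Counting configurations under two extreme assumptions about feature
// interaction; real savings from the analysis fall between these bounds.
public class FeatureCombinations {
    public static void main(String[] args) {
        int n = 10;
        System.out.println("exhaustive: " + (1L << n) + " configurations");
        // If the analysis proves all features pairwise independent, testing
        // each feature on/off in isolation suffices: n + 1 configurations
        // (the all-off baseline plus one run per feature).
        System.out.println("fully independent: " + (n + 1) + " configurations");
    }
}
```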

    Well-Formed and Scalable Invasive Software Composition

    Software components provide essential means to structure and organize software effectively. Frequently, however, the required component abstractions are not available in a programming language or system, or cannot be adequately combined with each other. Invasive software composition (ISC) is a general approach to software composition that unifies component-like abstractions such as templates, aspects, and macros. ISC is based on fragment composition, and composes programs and other software artifacts at the level of syntax trees. A unifying fragment component model is therefore related to the context-free grammar of a language, to identify extension and variation points in syntax trees as well as valid component types. Fragment components can then be composed by transformations at the respective extension and variation points, so that composition always yields results that are valid with respect to the underlying context-free grammar. However, given a language's context-free grammar, the composition result may still be incorrect: context-sensitive constraints such as type constraints may be violated, so that the program cannot be compiled and/or interpreted correctly. While a compiler can detect such errors after composition, it is difficult to relate them back to the original transformation step in the composition system, especially for complex compositions with several hundred such steps. To tackle this problem, this thesis proposes well-formed ISC, an extension to ISC that uses reference attribute grammars (RAGs) to specify fragment component models, and fragment contracts to guard compositions with context-sensitive constraints. Additionally, well-formed ISC provides composition strategies as a means to configure composition algorithms and handle interferences between composition steps. Developing ISC systems for complex languages such as programming languages is a complex undertaking. Composition-system developers need to supply or develop adequate language and parser specifications that can be processed by an ISC composition engine, and the specifications may need to be extended with rules for the intended composition abstractions. Current approaches to ISC require complete grammars in order to compose fragments in the respective languages; hence, the specifications need to be developed exhaustively before any component model can be supplied. To tackle this problem, this thesis introduces scalable ISC, a variant of ISC that uses island component models to define component models for partially specified languages while still supporting the whole language. Additionally, a scalable workflow for agile composition-system development is proposed, which supports the development of ISC systems in small increments using modular extensions. All theoretical concepts introduced in this thesis are implemented in the Skeletons and Application Templates framework SkAT, which supports "classic", well-formed, and scalable ISC by leveraging RAGs as its main specification and implementation language. Moreover, several composition systems based on SkAT are discussed, e.g., a well-formed composition system for Java and a C-preprocessor-like macro language. In turn, those composition systems are used as composers in several example applications, such as a library of parallel algorithmic skeletons.
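
    A toy Java model of invasive composition: fragments are syntax trees with named slots (variation points), composition replaces a slot with another fragment, and a simple "contract" checks the result before accepting it. This is loosely inspired by the ISC/SkAT concepts; all names and the slot-closedness check are invented stand-ins for the richer RAG-based well-formedness constraints.

```java
import java.util.List;
import java.util.function.Predicate;

public class IscSketch {
    // A fragment is a tiny syntax tree; a node is either concrete or a slot.
    record Node(String label, boolean isSlot, List<Node> children) {
        static Node slot(String name) { return new Node(name, true, List.of()); }
        static Node of(String label, Node... kids) { return new Node(label, false, List.of(kids)); }
    }

    // Bind a fragment into every slot with the given name.
    static Node bind(Node tree, String slotName, Node fragment) {
        if (tree.isSlot() && tree.label().equals(slotName)) return fragment;
        List<Node> kids = tree.children().stream()
                .map(c -> bind(c, slotName, fragment)).toList();
        return new Node(tree.label(), tree.isSlot(), kids);
    }

    // A stand-in "fragment contract": reject results with open slots.
    static boolean noOpenSlots(Node n) {
        return !n.isSlot() && n.children().stream().allMatch(IscSketch::noOpenSlots);
    }

    public static void main(String[] args) {
        Node template = Node.of("method", Node.of("name:log"), Node.slot("body"));
        Node composed = bind(template, "body", Node.of("call:println"));
        Predicate<Node> contract = IscSketch::noOpenSlots;
        System.out.println(composed + " well-formed: " + contract.test(composed));
    }
}
```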

    The Reflex Sandbox: an experimentation environment for an aspect-oriented Kernel

    Reflex is a versatile kernel for aspect-oriented programming in Java. It provides the basic structural and behavioral abstractions that make it possible to implement a variety of aspect-oriented techniques. This thesis studies two fundamental topics. The first is the formal development, in the Haskell language, of the fundamental constructs of the Reflex model of partial behavioral reflection. This development covers the design of a language, called Kernel, which is a reflective extension of a simple object-oriented language. The operational semantics of the Kernel language is presented by means of an abstract execution machine. The second fundamental topic studied in this thesis is validating that the partial behavioral reflection model is expressive enough to give semantics to a subset of the AspectJ language. To this end, the Reflex Sandbox was developed: a Haskell experimentation environment for the Reflex model. Both the formal development of the partial behavioral reflection model and the validation of the AspectJ support are studied in the context of the Reflex Sandbox. The validation covers the definition of an aspect-oriented language that characterizes the AspectJ approach to aspect-oriented programming, as well as the definition of its abstract execution machine. A compiler that transforms programs written in this language into the Kernel language is also presented; this compilation process provides the foundations for understanding how such a transformation can be performed. The compilation process was also implemented in Java, transforming AspectJ programs into Reflex programs. Preliminary measurements comparing the performance of a program compiled to and executed on Reflex with that of a program compiled and executed with the AspectJ compiler are also presented.
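
    A rough Java sketch of the partial behavioral reflection idea behind Reflex: only selected operation occurrences (a "hookset") are bound by a link to a metaobject that is consulted at runtime, while unselected operations pay no reflective cost. The API names here are illustrative, not Reflex's actual interface.

```java
import java.util.function.UnaryOperator;

public class ReflexSketch {
    // A metaobject decides what happens at a reified operation; it may
    // proceed with the base behavior, wrap it, or replace it.
    interface Metaobject {
        Object handle(String operation, Object arg, UnaryOperator<Object> proceed);
    }

    // A "link" binds one operation name (a trivialized hookset) to a
    // metaobject; everything else runs the base behavior directly.
    static Object execute(String op, Object arg, UnaryOperator<Object> base,
                          String hookset, Metaobject mo) {
        return op.equals(hookset) ? mo.handle(op, arg, base) : base.apply(arg);
    }

    public static void main(String[] args) {
        Metaobject tracer = (op, arg, proceed) -> {
            System.out.println("before " + op);
            Object r = proceed.apply(arg);
            System.out.println("after  " + op);
            return r;
        };
        Object result = execute("square", 6, x -> (Integer) x * (Integer) x,
                                "square", tracer);
        System.out.println(result);
    }
}
```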

    On Language Processors and Software Maintenance

    This work investigates declarative transformation tools in the context of software maintenance. Besides maintenance of the language specification itself, the evolution of a software language requires the adaptation of the software written in that language, as well as the adaptation of the software that transforms software written in the evolving language. This co-evolution is studied in order to derive automatic adaptations of artefacts from adaptations of the language specification. Furthermore, AOP for Prolog is introduced to improve the maintainability of language specifications and the tools derived from them.
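
    A deliberately tiny Java illustration of the co-evolution problem the thesis automates: one change to the language specification (here, renaming a keyword) must be propagated both to programs in the language and to the transformations that process them. The names and the string-based "adaptation" are hypothetical stand-ins for the Prolog-based machinery.

```java
import java.util.Map;

public class CoEvolutionSketch {
    // The specification-level change, stated once.
    static final Map<String, String> RENAMES = Map.of("WHILE", "LOOP");

    // The same derived adaptation applies to every dependent artefact.
    static String adapt(String artefact) {
        String result = artefact;
        for (var e : RENAMES.entrySet()) {
            result = result.replace(e.getKey(), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(adapt("WHILE x > 0 DO dec(x)"));      // a program
        System.out.println(adapt("unroll(WHILE(C,B)) :- ..."));  // a transformation rule
    }
}
```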

    A Model-Driven Methodology for Testing a Distributed Multi-Agent Manufacturing System

    Market pressures have pushed manufacturing companies to reduce costs while improving their products, specializing in the activities to which they can add value and collaborating with specialists from other areas for the rest. These distributed manufacturing systems bring new challenges, since it is difficult to integrate the various information systems and organize them coherently. This has led researchers to propose a variety of abstractions, architectures, and specifications that attempt to tackle this complexity. Among them, holonic manufacturing systems have received special attention: they view companies as networks of holons, entities that are at once composed of, and part of, several other holons. Until now, holons have been implemented for manufacturing control as self-aware intelligent agents, but their learning curve and the difficulties of integrating them with traditional systems have hindered their adoption in industry. Moreover, their emergent behavior may be undesirable when tasks must meet certain guarantees, as happens in business-to-business and business-to-customer relationships and in high-level plant management operations. This thesis proposes a more flexible view of the holon concept, allowing it to sit on a broader spectrum of intelligence levels, and argues that business holons are better implemented as services: software components that can be reused through standard technologies from anywhere in the organization. These services are usually organized into coherent catalogs, known as Service Oriented Architectures (SOA). A successful SOA initiative can deliver important benefits, but it is not a trivial undertaking. For this reason, many SOA methodologies have been proposed in the literature, but none of them explicitly covers the need to test the services. Considering that the goal of SOA is to increase software reuse within the organization, this is an important gap: having high-quality services is crucial for a successful SOA. The main goal of this thesis is therefore to define an extended methodology that helps users test the services that implement their business holons. After considering the available options, the model-driven methodology SODM was taken as a starting point and largely rewritten with the open-source Epsilon framework, allowing users to model their partial knowledge of the expected performance of the services. This partial knowledge is exploited by several new performance-requirement inference algorithms, which extract the specific requirements of each service. While the requests-per-second inference algorithm is simple, the time-limit inference algorithm went through numerous revisions until it reached the desired level of functionality and performance. After a first formulation based on linear programming, it was replaced by a simple ad-hoc graph-traversal algorithm, and later by a much faster and more advanced incremental algorithm. The incremental algorithm produces equivalent results and takes much less time, even on large models. To get more out of the models, this thesis also proposes a general approach for generating test artifacts for multiple technologies from the models annotated by the algorithms. To evaluate the feasibility of this approach, it was implemented for two use cases: reusing unit tests written in Java as performance tests, and generating complete performance-test projects using The Grinder framework for any Web Service described with the Web Services Description Language standard. The complete methodology is finally applied successfully to a case study based on a rectified ceramic tile manufacturing area of a Spanish group of companies. The case study starts from a high-level description of the business and ends with the implementation of part of one of the holons and the generation of performance tests for one of its Web Services. With its support for both designing and implementing service performance tests, we can conclude that SODM+T helps users gain greater confidence in their implementations of the business holons observed in their companies.
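
    A simplified, hypothetical take in Java on time-limit inference: given an end-to-end deadline on a workflow and a chain of service invocations, derive a per-service limit by splitting the remaining budget evenly. SODM+T's actual incremental algorithm works over annotated model graphs and is considerably more sophisticated; the service names below are invented.

```java
import java.util.List;

public class TimeLimitInference {
    // Split a workflow-level deadline evenly across a linear call chain.
    static void infer(double totalSeconds, List<String> chain) {
        double perService = totalSeconds / chain.size();
        for (String service : chain) {
            System.out.printf("%s: at most %.2f s%n", service, perService);
        }
    }

    public static void main(String[] args) {
        // A 3-second end-to-end requirement over a hypothetical call chain.
        infer(3.0, List.of("checkStock", "reserveTiles", "confirmOrder"));
    }
}
```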

    Energy-aware performance evaluation of WSNs

    Distributed sensor networks have been discussed for more than 30 years, but the vision of Wireless Sensor Networks (WSNs) has been brought into reality only by the rapid advancements in the areas of sensor design, information technologies, and wireless networks that have paved the way for the proliferation of WSNs. The unique characteristics of sensor networks introduce new challenges, amongst which prolonging the sensor lifetime is the most important. Energy-efficient solutions are required for each aspect of WSN design in order to deliver the potential advantages of WSNs; hence, in both existing and future solutions for WSNs, energy efficiency is a grand challenge. The main contribution of this thesis is an approach that considers the collaborative nature of WSNs and their correlation characteristics, providing a tool that treats issues from the physical layer to the application layer together, as entities, in a framework that facilitates the performance evaluation of WSNs. Unlike existing evaluation tools, the simulation approach adopted provides a clear separation of concerns among the software architecture of the applications, the hardware configuration, and the WSN deployment. The reuse of models across projects and organizations is also promoted, while realistic WSN lifetime estimations and performance evaluations become possible in attempts to improve performance and maximize the lifetime of the network. In this study, simulations are carried out with careful assumptions for the various layers, taking into account the real-time characteristics of WSNs. The sensitivity of WSN systems is mainly due to their fragile nature where energy consumption is concerned. The case studies presented demonstrate the importance of the various parameters considered in this study. Simulation-based studies are presented, taking into account realistic settings for each layer of the protocol stack; the physical environment is considered as well. The performance of the layered protocol stack in realistic settings reveals several important interactions between different layers. These interactions are especially important for the design of WSNs in terms of maximizing the lifetime of the network.
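
    A toy Java lifetime estimate of the kind such a simulator refines with per-layer models: battery energy divided by the average power drawn across radio duty-cycle states. The battery and power figures are invented for illustration, not taken from the thesis.

```java
public class WsnLifetimeSketch {
    public static void main(String[] args) {
        double batteryJ = 2 * 1.5 * 2.6 * 3600;          // two AA cells, ~28 kJ
        double txW = 0.060, rxW = 0.060, sleepW = 3e-6;  // typical mote figures
        double dutyTx = 0.01, dutyRx = 0.01;             // 1% sending, 1% listening
        // Average power is the duty-cycle-weighted sum of the state powers;
        // the sleep state dominates time but contributes almost no drain.
        double avgW = dutyTx * txW + dutyRx * rxW + (1 - dutyTx - dutyRx) * sleepW;
        double days = batteryJ / avgW / 86400;
        System.out.printf("estimated lifetime: %.0f days%n", days);
    }
}
```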