
    A Specification and Discovery Environment for Software Component Reuse in Distributed Software Development

    Our work aims to provide an effective solution for the discovery and reuse of software components in existing, commonly used development environments. We propose an ontology for describing and discovering atomic software components. The description covers both the functional and the non-functional properties of components, the latter expressed as QoS parameters. Our search process is based on a function that computes the semantic distance between a component's interface signature and the signature of a given query, yielding a meaningful comparison. We also use the notion of subsumption to compare the inputs/outputs of the query with those of the components. After the appropriate components have been selected, the non-functional properties are used as a distinguishing factor to refine the search result. We further propose an approach, based on the shared ontology, for discovering composite components when no atomic component is found. To integrate the resulting component into the project under development, we developed an integration ontology and the two services "input/output convertor" and "output Matching".
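
    A minimal sketch of the kind of signature matching the abstract describes, assuming a toy concept hierarchy and hypothetical component records; the names `CONCEPT_PARENTS`, `subsumes` and `semantic_distance` are illustrative, not the thesis's actual ontology or API:

```python
# Toy concept hierarchy: child -> parent (an illustrative stand-in for the ontology).
CONCEPT_PARENTS = {"Invoice": "Document", "Document": "Thing", "PDF": "Document"}

def ancestors(concept):
    """Return the chain of ancestors of a concept, including itself."""
    chain = [concept]
    while concept in CONCEPT_PARENTS:
        concept = CONCEPT_PARENTS[concept]
        chain.append(concept)
    return chain

def subsumes(general, specific):
    """True if 'general' subsumes 'specific' in the concept hierarchy."""
    return general in ancestors(specific)

def semantic_distance(query_sig, comp_sig):
    """Count hierarchy edges between paired input concepts (lower is closer)."""
    dist = 0
    for q, c in zip(query_sig, comp_sig):
        qa, ca = ancestors(q), ancestors(c)
        common = next((x for x in qa if x in ca), None)
        if common is None:
            return float("inf")          # no shared ancestor: incomparable
        dist += qa.index(common) + ca.index(common)
    return dist

def rank_components(query, components):
    """Keep components whose output matches by subsumption, rank by semantic distance, then QoS."""
    matches = [c for c in components if subsumes(c["output"], query["output"])]
    return sorted(matches, key=lambda c: (semantic_distance(query["inputs"], c["inputs"]),
                                          -c.get("qos_score", 0)))

query = {"inputs": ["Invoice"], "output": "PDF"}
components = [
    {"name": "DocToPdf", "inputs": ["Document"], "output": "PDF", "qos_score": 0.9},
    {"name": "ImgToPdf", "inputs": ["Image"], "output": "PDF", "qos_score": 0.8},
]
print([c["name"] for c in rank_components(query, components)])
```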

    XSLT Implementation in Relational Database Environment

    XML is a widely used format for storing all kinds of data, and the XSLT standard represents a standardized way to transform an XML document into a different structure. Many XSLT implementations have been introduced, but most of them use an in-memory representation of the transformed XML document. The implementation developed in this thesis uses a relational database engine to store the processed document and takes advantage of SQL to evaluate the XPath expressions used by XSLT. First, importing the source XML document into a generic relational mapping is described. For processing XPath expressions, an XPath-to-SQL convertor is introduced. Lastly, the processing of XSLT stylesheets by the relational database engine is shown. Department of Software Engineering, Faculty of Mathematics and Physics.
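
    A rough illustration of the XPath-to-SQL idea for simple child-axis paths, assuming a hypothetical generic mapping table node(id, parent_id, name, value) rather than the thesis's actual relational schema:

```python
def xpath_to_sql(xpath):
    """Translate a simple child-axis XPath like '/library/book/title' into a
    self-join over a generic node(id, parent_id, name, value) table."""
    steps = [s for s in xpath.split("/") if s]
    select = f"SELECT n{len(steps) - 1}.value FROM node n0"
    joins, where = [], [f"n0.name = '{steps[0]}' AND n0.parent_id IS NULL"]
    for i, step in enumerate(steps[1:], start=1):
        joins.append(f"JOIN node n{i} ON n{i}.parent_id = n{i - 1}.id")
        where.append(f"n{i}.name = '{step}'")
    return " ".join([select] + joins) + " WHERE " + " AND ".join(where)

# One SELECT with one self-join per path step.
print(xpath_to_sql("/library/book/title"))
```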

    The Partial Evaluation Approach to Information Personalization

    Information personalization refers to the automatic adjustment of information content, structure, and presentation tailored to an individual user. By reducing information overload and customizing information access, personalization systems have emerged as an important segment of the Internet economy. This paper presents a systematic modeling methodology, PIPE ('Personalization is Partial Evaluation'), for personalization. Personalization systems are designed and implemented in PIPE by modeling an information-seeking interaction in a programmatic representation. The representation supports the description of information-seeking activities as partial information and their subsequent realization by partial evaluation, a technique for specializing programs. We describe the modeling methodology at a conceptual level and outline representational choices. We present two application case studies that use PIPE for personalizing web sites and describe how PIPE suggests a novel evaluation criterion for information system designs. Finally, we mention several fundamental implications of adopting the PIPE model for personalization and when it is (and is not) applicable. Comment: Comprehensive overview of the PIPE model for personalization.
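
    The core idea of "personalization is partial evaluation" can be sketched as specializing a branching description of an information space against the user attributes that are already known; the dialog structure and attribute names below are invented for illustration and are not PIPE's actual representation:

```python
# An information space modelled as nested branches keyed by user attributes.
# Leaves are the content finally shown; inner nodes are choices still to be made.
SITE = {
    "attr": "role",
    "branches": {
        "student": {"attr": "level",
                    "branches": {"ugrad": "course catalogue", "grad": "thesis guidelines"}},
        "staff":   "payroll portal",
    },
}

def partially_evaluate(space, known):
    """Specialize the branching structure with respect to the known attributes.
    Branches whose guarding attribute is known are resolved away (the 'static' part);
    the rest remains for the user to explore (the 'dynamic' part)."""
    if not isinstance(space, dict):
        return space                      # a leaf: nothing left to specialize
    attr = space["attr"]
    if attr in known:                     # static input: take that branch now
        return partially_evaluate(space["branches"][known[attr]], known)
    return {"attr": attr,                 # dynamic input: keep the choice,
            "branches": {k: partially_evaluate(v, known)   # specialize below it
                         for k, v in space["branches"].items()}}

# A user known to be a student gets a site specialized to the remaining choice only.
print(partially_evaluate(SITE, {"role": "student"}))
```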

    Digital Preservation Services : State of the Art Analysis

    Research report funded by the DC-NET project. An overview of the state of the art in service provision for digital preservation and curation. Its focus is on the areas where gaps need to be bridged between e-Infrastructures and efficient, forward-looking digital preservation services. Based on a desktop study and a rapid analysis of some 190 currently available tools and services for digital preservation, the deliverable provides a high-level view of the range of instruments currently on offer to support various functions within a preservation system. European Commission, FP7. Peer-reviewed.

    Migrating relational databases into object-based and XML databases

    Rapid changes in information technology, the emergence of object-based and WWW applications, and the interest of organisations in securing benefits from new technologies have made information systems re-engineering in general, and database migration in particular, an active research area. In order to improve the functionality and performance of existing systems, the re-engineering process requires identifying and understanding all of the components of such systems. An underlying database is one of the most important components of an information system. A considerable body of data is stored in relational databases (RDBs), yet RDBs are limited in their support for the complex structures and user-defined data types provided by relatively recent databases such as object-based and XML databases. Instead of throwing away the large amount of data stored in RDBs, it is more appropriate to enrich such data and convert it for use by new systems. Most research into the migration of RDBs to object-based/XML databases has concentrated on schema translation and on accessing and publishing RDB data using newer technology, while little attention has been paid to the conversion of data and the preservation of data semantics, e.g., inheritance and integrity constraints. In addition, existing work does not appear to provide a solution for more than one target database. Thus, research on the migration of RDBs is not fully developed. We propose a solution that offers automatic migration of an RDB as a source into recent database technologies as targets, based on available standards such as ODMG 3.0, SQL4 and XML Schema. A canonical data model (CDM) is proposed to bridge the semantic gap between an RDB and the target databases. The CDM preserves and enhances the metadata of existing RDBs to fit the essential characteristics of the target databases. The adoption of standards is essential for increased portability, flexibility and constraint preservation. This thesis contributes a solution for migrating RDBs into object-based and XML databases. The solution takes an existing RDB as input, enriches its metadata representation with the required explicit semantics, and constructs an enhanced relational schema representation (RSR). Based on the RSR, a CDM is generated which is enriched with the RDB's constraints and data semantics that may not have been explicitly expressed in the RDB metadata. The CDM so obtained facilitates both schema translation and data conversion. We design sets of rules for translating the CDM into each of the three target schemas, and provide algorithms for converting RDB data into the target formats based on the CDM. A prototype of the solution has been implemented, which generates the three target databases. An experimental study has been conducted to evaluate the prototype. The experimental results show that the target schemas produced by the prototype and those generated by existing manual mapping techniques were comparable. We have also shown that the source and target databases were equivalent, and demonstrated that the solution, conceptually and practically, is feasible, efficient and correct.
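
    A simplified sketch of the canonical-model idea: one enriched description of a relation is used to emit both an ODL-like class and an XML Schema fragment. The field names and the shape of the `cdm_entity` record are assumptions for illustration, not the thesis's actual metamodel:

```python
# A hypothetical canonical data model (CDM) entry for one relation,
# enriched with key and relationship semantics recovered from the RDB.
cdm_entity = {
    "name": "Order",
    "attributes": [("id", "integer"), ("total", "decimal")],
    "relationships": [("customer", "Customer")],   # derived from a foreign key
}

def to_odl(e):
    """Emit an ODMG/ODL-style class from the CDM entry."""
    attrs = "\n".join(f"  attribute {t} {n};" for n, t in e["attributes"])
    rels = "\n".join(f"  relationship {t} {n};" for n, t in e["relationships"])
    return f"class {e['name']} {{\n{attrs}\n{rels}\n}};"

def to_xsd(e):
    """Emit an XML Schema complexType from the same CDM entry."""
    elems = "\n".join(f'    <xs:element name="{n}" type="xs:{t}"/>'
                      for n, t in e["attributes"])
    refs = "\n".join(f'    <xs:element ref="{t}"/>' for _, t in e["relationships"])
    return (f'<xs:complexType name="{e["name"]}">\n  <xs:sequence>\n'
            f"{elems}\n{refs}\n  </xs:sequence>\n</xs:complexType>")

print(to_odl(cdm_entity))
print(to_xsd(cdm_entity))
```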

    Software Performance Assessment at Architectural Level: A Methodology and its Application

    Software architectures are a valuable tool for evaluating the qualitative and quantitative properties of systems in their early development phases. Getting the design right is critical to ensuring those properties. Wrong early decisions can imply considerable and costly changes later on, affecting many system properties such as performance, security, reliability and maintainability. From the performance point of view, Software Performance Engineering (SPE) is a mature and commonly accepted research discipline that proposes model-based evaluation in the early phases of the software development life cycle. One problem in this research field is that the methodologies proposed so far neither offer an interpretation of the results obtained during performance analysis nor use those results to propose alternatives for improving the software architecture itself. To date, such interpretation and improvement require the experience and expertise of software engineers, especially performance engineering experts. Moreover, despite the large number of proposals for evaluating the performance of software systems, very few of these theoretical studies are later applied to real software systems. The goal of this thesis is to present a methodology for assessing architectural decisions in order to improve software systems from a performance point of view. The methodology uses the Unified Modeling Language (UML) to represent software architectures and formal methods, specifically Petri nets, as the performance model. The assessment, based on patterns and antipatterns, aims to detect the main problems affecting system performance and proposes possible improvements. As a first step, we study and analyse the performance results of different architectural styles. Next, we systematise the knowledge previously obtained to propose a methodology and test its applicability by assessing a real case study, an interoperability architecture for adapting interfaces to people with disabilities according to their capabilities and preferences. Finally, a performance evaluation tool is presented as a by-product of the software life cycle itself.
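
    A toy illustration of using a Petri net as a performance model: a small timed net for a request queue and a single server, simulated to estimate throughput. The net, rates and helper names are invented for the example and are not the thesis's models or tool:

```python
import random

# Places and their initial marking: requests waiting, and one free server.
marking = {"queue": 5, "server_free": 1, "done": 0}

# Timed transitions: (input places, output places, mean firing delay in seconds).
TRANSITIONS = {
    "start_service": ({"queue": 1, "server_free": 1}, {"server_busy": 1}, 0.0),
    "end_service":   ({"server_busy": 1}, {"server_free": 1, "done": 1}, 0.2),
}

def enabled(name):
    """A transition is enabled when every input place holds enough tokens."""
    ins, _, _ = TRANSITIONS[name]
    return all(marking.get(p, 0) >= n for p, n in ins.items())

def fire(name):
    """Move tokens and return the (exponentially distributed) time the firing took."""
    ins, outs, delay = TRANSITIONS[name]
    for p, n in ins.items():
        marking[p] -= n
    for p, n in outs.items():
        marking[p] = marking.get(p, 0) + n
    return random.expovariate(1 / delay) if delay else 0.0

clock = 0.0
while any(enabled(t) for t in TRANSITIONS):
    t = next(t for t in TRANSITIONS if enabled(t))
    clock += fire(t)

print(f"served {marking['done']} requests in ~{clock:.2f}s "
      f"(throughput ~{marking['done'] / clock:.1f} req/s)")
```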

    Dynamic Integration of Evolving Distributed Databases using Services

    This thesis investigates the integration of many separate, heterogeneous and distributed databases which, due to organizational changes, must be merged and appear as one database. A solution to some database evolution problems is presented. The thesis presents an Evolution Adaptive Service-Oriented Data Integration Architecture (EA-SODIA) to dynamically integrate heterogeneous and distributed source databases, aiming to minimize the maintenance cost caused by database evolution. An algorithm named Relational Schema Mapping by Views (RSMV) is designed to integrate source databases that are exposed as services into a pre-designed global schema held in a data integrator service. Instead of producing hard-coded programs, views are built using relational algebra operations to eliminate the heterogeneities among the source databases. More importantly, the definitions of those views are represented and stored in the meta-database, together with constraints to test their validity. Consequently, a method called Evolution Detection is able to identify in the meta-database the views affected by evolutions and then modify them automatically. An evaluation based on a case study is presented. Firstly, it is shown that most types of heterogeneity defined in this thesis can be eliminated by RSMV, except semantic conflict. Secondly, it shows that little manual modification of the system is required as long as the evolutions follow the rules; human intervention is required, and some existing views are discarded, for only three types of database evolution. Thirdly, the computational cost of the automatic modification shows slow linear growth in the number of source databases. Other characteristics addressed include EA-SODIA's scalability, domain independence, autonomy of source databases, and the potential to involve other data sources (e.g. XML). Finally, a descriptive comparison with other data integration approaches is presented. It shows that although other approaches may provide better query processing performance in some circumstances, the service-oriented architecture provides better autonomy, flexibility and capability for evolution.
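
    A small sketch of the idea of keeping view definitions as data in a meta-database so that schema evolution can be detected and the affected views repaired automatically. The structures `META_VIEWS` and `rename_column` are illustrative assumptions, not the RSMV implementation:

```python
# View definitions stored as data (relational algebra style), not as hard-coded SQL.
META_VIEWS = {
    "global_customer": {
        "source": "crm.customers",
        "project": ["cust_id", "full_name", "country"],
        "rename": {"cust_id": "id", "full_name": "name"},
    }
}

def affected_views(evolved_table, evolved_column):
    """Evolution detection: find stored views that reference the evolved column."""
    return [v for v, d in META_VIEWS.items()
            if d["source"].endswith(evolved_table) and evolved_column in d["project"]]

def rename_column(table, old, new):
    """Propagate a column rename in a source database into the stored view definitions."""
    for name in affected_views(table, old):
        d = META_VIEWS[name]
        d["project"] = [new if c == old else c for c in d["project"]]
        d["rename"][new] = d["rename"].pop(old, old)

def to_sql(name):
    """Regenerate the executable view from its stored definition."""
    d = META_VIEWS[name]
    cols = ", ".join(f"{c} AS {d['rename'].get(c, c)}" for c in d["project"])
    return f"CREATE VIEW {name} AS SELECT {cols} FROM {d['source']}"

rename_column("customers", "full_name", "customer_name")   # a source database evolves
print(to_sql("global_customer"))                            # the view follows automatically
```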

    Integration of and Access to Distributed Data and Tools in Genomics

    One of the important data sources in bioinformatics is protein or nucleotide sequences, which are used as input to many programs that analyze them collectively or individually. An ample amount of protein sequences is scattered over many different databases, and this division complicates the process of feeding them into existing programs for further analysis. There exists a program integration portal, Mobyle, that makes common programs available to users through a unified interface; in addition, it provides the functionality of chaining the results of one program into another. The two existing programs in Mobyle that fetch sequences to feed the other programs do so from a limited number of databases that are statically defined by the Mobyle administrator. In addition, neither of these tools has access to DAS servers, resulting in the loss of a major data source. In this work, a program named DasSeqFetcher was developed and integrated into Mobyle to dynamically fetch sequences from all available sequence databases that provide a DAS reference server. Furthermore, both DAS reference and annotation servers were developed for a database made by our research group which holds experimentally characterized lignocellulose-active proteins. These reference servers can then be added to the DAS registry to be used by DAS client tools, e.g. DasSeqFetcher.
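
    A minimal sketch of fetching a sequence segment over the DAS 1.x sequence command, which is the kind of retrieval described above; the server URL and data source name are placeholders, and this is not the DasSeqFetcher code itself:

```python
import urllib.request
import xml.etree.ElementTree as ET

def fetch_das_sequence(server, datasource, segment_id, start, stop):
    """Fetch a sequence segment from a DAS 1.x reference server.
    DAS exposes it as GET <server>/das/<datasource>/sequence?segment=<id>:<start>,<stop>."""
    url = f"{server}/das/{datasource}/sequence?segment={segment_id}:{start},{stop}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        tree = ET.parse(resp)
    # The DASSEQUENCE response carries one or more <SEQUENCE> elements with raw residues.
    return [(seq.get("id"), "".join(seq.itertext()).replace("\n", "").strip())
            for seq in tree.iter("SEQUENCE")]

# Placeholder endpoint; a real call needs a reachable DAS reference server.
if __name__ == "__main__":
    for seq_id, residues in fetch_das_sequence("http://example.org", "uniprot",
                                               "P12345", 1, 50):
        print(seq_id, residues[:30])
```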

    A Service Late Binding Enabled Solution for Data Integration from Autonomous and Evolving Databases

    Integrating data from autonomous, distributed and heterogeneous data sources to provide a unified view is a common demand of many businesses. Since the data sources may evolve frequently to satisfy their own independent business needs, solutions which use hard-coded queries to integrate participating databases may incur high maintenance costs when evolution occurs. Thus a new solution which can handle database evolution with lower maintenance effort is required. This thesis presents such a solution: Service Late binding Enabled Data Integration (SLEDI), set within a framework modeling the essential processes of the data integration activity. It integrates schematically heterogeneous relational databases with decreased maintenance costs for handling database evolution. An algorithm named Information Provision Unit Describing (IPUD) is designed to describe each database as a set of Information Provision Units (IPUs). The IPUs are represented as Directed Acyclic Graph (DAG) structured data instead of hard-coded queries, and are further realized as data services. Hence the data integration is achieved through service invocations. Furthermore, a set of processes is defined to handle database evolution by automatically identifying and modifying the IPUs affected by the evolution. An extensive evaluation based on a case study is presented. The result shows that the schematic heterogeneities defined in this thesis can be resolved by IPUD, except for the relation isomorphism discrepancy. Ten out of thirteen types of schematic database evolution can be handled automatically by the evolution handling processes, as long as the evolution is represented by the designed data model. The computational cost of the automatic evolution handling shows slow linear growth with the number of participating databases. Other characteristics addressed include SLEDI's scalability and its independence of application domain and database model. A descriptive comparison with other data integration approaches shows that although the Data as a Service approach may result in lower performance under some circumstances, it supports better flexibility for integrating data from autonomous and evolving data sources.
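
    A rough sketch of the IPU idea: what a source database can provide is described as a small DAG of operation nodes stored as data, then compiled to SQL on demand instead of being hard-coded. The node types and field names are invented for illustration and are not the SLEDI data model:

```python
# One Information Provision Unit described as a DAG of operation nodes.
# Because the structure is data, it can be stored, inspected and rewritten
# when the underlying database evolves.
IPU_ORDERS = {
    "orders":    {"op": "scan",    "table": "sales.orders"},
    "customers": {"op": "scan",    "table": "crm.customers"},
    "joined":    {"op": "join",    "left": "orders", "right": "customers",
                  "on": "orders.cust_id = customers.id"},
    "result":    {"op": "project", "input": "joined",
                  "columns": ["orders.id", "customers.name", "orders.total"]},
}

def compile_node(ipu, name):
    """Recursively compile a DAG node into a SQL fragment."""
    node = ipu[name]
    if node["op"] == "scan":
        return f"{node['table']} AS {name}"
    if node["op"] == "join":
        return (f"({compile_node(ipu, node['left'])} JOIN "
                f"{compile_node(ipu, node['right'])} ON {node['on']})")
    if node["op"] == "project":
        return (f"SELECT {', '.join(node['columns'])} "
                f"FROM {compile_node(ipu, node['input'])}")
    raise ValueError(f"unknown op {node['op']}")

def data_service():
    """The IPU realised as a simple data service: callers get results, not schema details."""
    return compile_node(IPU_ORDERS, "result")   # in practice: execute and return rows

print(data_service())
```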
    • 

    corecore