11 research outputs found

    A meta-semantic language for smart component-adapters

    The issues confronting the software development community today differ significantly from the problems it faced only a decade ago. Advances in software development tools and technologies over the last two decades have greatly enhanced our ability to leverage large amounts of software for creating new applications through the reuse of software libraries and application frameworks. The problems facing organizations today increasingly centre on systems integration and the creation of information flows. Software modeling based on the assembly of reusable components has not been successfully implemented on a wide scale. Several models for reusable software components have been suggested, but they primarily address the wiring-level connectivity problem. While this is necessary, it is not sufficient to support an automated process of component assembly. Two critical issues remain unresolved: (1) semantic modeling of components, and (2) a deployment process that supports automated assembly. The first issue can be addressed through domain-based standardization, which would make it possible for independent developers to produce interoperable components based on a common vocabulary and understanding of the problem domain. This is important not only for providing a semantic basis for developing components but also for interoperability between systems. The second issue is important for two reasons: (a) it eliminates the need for developers to be involved in the final assembly of software components, and (b) it provides a basis for a development process that can potentially be driven by the user. Resolving issues (1) and (2) requires a late binding mechanism between components based on meta-protocols.
In this dissertation we address these issues by proposing a generic framework for the development of software components and an interconnection language, COMPILE, for the specification of software systems from components. The computational model of the COMPILE language is based on late and dynamic binding of the components' control, data, and function properties. The use of asynchronous callbacks for method invocation allows control binding among components to be late and dynamic. Data exchanged between components is defined through a meta-language that can describe the semantics of the information without being bound to any specific programming language's type representation. Late binding to functions is accomplished by maintaining domain-based semantics as component meta-information. This information allows clients of components to map a generic requested service to specific functions.
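The late-binding idea described above can be sketched in a few lines. This is a hypothetical illustration, not COMPILE itself: the `Registry` class, the `request` method, and the `currency-convert` service name are all invented for the example. A client names a generic, domain-level service and supplies a callback; the mapping to a concrete component function is resolved only at invocation time.

```python
# Hypothetical sketch of late, dynamic binding via a service registry and
# asynchronous-style callbacks; names are illustrative, not from the thesis.
from typing import Any, Callable

class Registry:
    """Maps domain-level service names to component functions at runtime."""
    def __init__(self) -> None:
        self._services: dict[str, Callable[..., Any]] = {}

    def register(self, service_name: str, impl: Callable[..., Any]) -> None:
        # Binding is late: the mapping can be changed while the system runs.
        self._services[service_name] = impl

    def request(self, service_name: str, payload: dict,
                callback: Callable[[Any], None]) -> None:
        # The client names a generic service and supplies a callback; it
        # never holds a direct reference to the implementing component.
        impl = self._services[service_name]
        callback(impl(payload))

registry = Registry()
registry.register("currency-convert",
                  lambda p: round(p["amount"] * p["rate"], 2))

results = []
registry.request("currency-convert",
                 {"amount": 10.0, "rate": 1.5},
                 results.append)
# results now holds [15.0]
```

A real implementation would dispatch the callback asynchronously; here it is invoked inline to keep the control-binding idea visible.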

    Component based development: A methodology proposal

    Enterprise architecture is undergoing considerable change with recent developments in client server technologies and middleware. Each in turn has a significant impact on the way systems are designed, and a more component-based approach to development is beginning to emerge. While we now have the Unified Modelling Language as a universal notation for design modelling, there is currently no consistent standard for the definition of components. A pragmatic architecture for application development is needed that delivers business benefit without the need for significant investment in tools and training. It should minimise risk and maximise return on investment by leveraging investment in legacy systems, and provide the means to relate business requirements more closely to each phase of the development process. This paper suggests that a better way of controlling technology is to adopt a service-based approach to design and development, concentrating on pragmatic techniques and models that add value through reuse within a sound architectural framework. Given a set of business requirements, the focus is on business-oriented component modelling techniques (e.g. process models, use cases, service allocation) and the delivery of a complete component design specification (e.g. service definitions, service package architecture). Unusually, this does not involve the definition of a domain class model, but rather a definition of implementation-independent, and therefore reusable, services (or contracts) that component packages will deliver. The component package is regarded as a ‘black box’ from which components will be designed and built by specialists in the technology of the component domain. This approach also provides the means for legacy and packaged applications to be reused in the same way.
The methodology was evaluated by a peer group of six senior IT professionals from the insurance and IT services sectors, who together represent over 110 years of industry experience. The methodology was presented in the form of a case study and questionnaire, and from the feedback it was concluded that there was merit in the approach. Reservations over how it would scale to larger systems have been addressed by the agreed need for a suitable repository for the documentation of data and business rules, and the need to separate out the definition of technical non-functional requirements.
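The contract-centred idea above can be illustrated with a small sketch. All names here (`QuoteService`, `LegacyQuoteAdapter`, `NewQuoteEngine`) are invented for the example: a service definition is expressed independently of any implementation, so a wrapped legacy routine and a newly built component are interchangeable black boxes to their callers.

```python
# Illustrative sketch of implementation-independent service contracts;
# the class names and fee formulas are invented, not from the paper.
from abc import ABC, abstractmethod

class QuoteService(ABC):
    """The contract a component package promises to deliver."""
    @abstractmethod
    def quote(self, risk_score: int) -> float: ...

class LegacyQuoteAdapter(QuoteService):
    # Wraps an existing legacy routine so it can be reused unchanged.
    def quote(self, risk_score: int) -> float:
        return 100.0 + 5.0 * risk_score   # stands in for a legacy call

class NewQuoteEngine(QuoteService):
    # A replacement package built by domain-technology specialists.
    def quote(self, risk_score: int) -> float:
        return 95.0 + 5.5 * risk_score

def annual_premium(service: QuoteService, risk_score: int) -> float:
    # Callers depend only on the contract, never on package internals.
    return service.quote(risk_score)
```

Swapping `LegacyQuoteAdapter` for `NewQuoteEngine` requires no change to any caller, which is the reuse benefit the service-based approach is after.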

    A Programming System for End-user Functional Programming

    This research involves the construction of a programming system, HASKEU, to support end-user programming in a purely functional programming language. An end-user programmer is someone who may program a computer to get their job done, but has no interest in becoming a computer programmer. A purely functional programming language is one that does not require the expression of statement sequencing or variable updating. The end-user is offered two views of their functional program. The primary view is a visual one, in which the program is presented as a collection of boxes (representing processes) and lines (representing data flow). The secondary view is a textual one, in which the program is presented as a collection of written function definitions. It is expected that the end-user programmer will begin with the visual view, perhaps later moving on to the textual view. The task of the programming system is to ensure that the visual and textual views are kept consistent as the program is constructed. The foundation of the programming system is an implementation of the Model-View-Controller (MVC) design pattern as a reactive program using the elegant Functional Reactive Programming (FRP) framework. Human-Computer Interaction (HCI) principles and methods are considered in all design decisions. A usability study was conducted to assess the effectiveness of the new system.
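The view-consistency idea can be sketched with a toy MVC loop (this is a plain observer-style illustration in Python, not HASKEU's FRP implementation; all class names are invented). Both presentations are re-rendered from the single model on every change, so the visual and textual views cannot drift apart.

```python
# Minimal sketch of one-model/two-views consistency; an invented
# illustration of the MVC idea, not HASKEU's actual FRP design.
class Model:
    def __init__(self):
        self.definitions = {}   # function name -> expression text
        self.observers = []     # views to notify on every change

    def define(self, name, expr):
        self.definitions[name] = expr
        for view in self.observers:
            view.render(self)   # both views rebuilt from the same state

class TextView:
    """Program as written function definitions."""
    def __init__(self): self.text = ""
    def render(self, model):
        self.text = "\n".join(f"{n} = {e}"
                              for n, e in model.definitions.items())

class VisualView:
    """Program as boxes (processes) fed by data-flow tokens."""
    def __init__(self): self.boxes = []
    def render(self, model):
        self.boxes = [{"box": n, "tokens": e.split()}
                      for n, e in model.definitions.items()]

model = Model()
text_view, visual_view = TextView(), VisualView()
model.observers = [text_view, visual_view]
model.define("double", "x + x")
```

Because neither view holds state of its own, consistency is a structural property rather than something to be re-checked after each edit.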

    A framework for the analysis and evaluation of enterprise models

    Bibliography: leaves 264-288.
    The purpose of this study is the development and validation of a comprehensive framework for the analysis and evaluation of enterprise models. The study starts with an extensive literature review of modelling concepts and an overview of the various reference disciplines concerned with enterprise modelling. This overview is more extensive than usual in order to accommodate readers from different backgrounds. The proposed framework is based on the distinction between the syntactic, semantic and pragmatic model aspects and populated with evaluation criteria drawn from an extensive literature survey. In order to operationalize and empirically validate the framework, an exhaustive survey of enterprise models was conducted. From this survey, an XML database of more than twenty relatively large, publicly available enterprise models was constructed. A strong emphasis was placed on the interdisciplinary nature of this database and models were drawn from ontology research, linguistics, analysis patterns as well as the traditional fields of data modelling, data warehousing and enterprise systems. The resultant database forms the test bed for the detailed framework-based analysis and its public availability should constitute a useful contribution to the modelling research community. The bulk of the research is dedicated to implementing and validating specific analysis techniques to quantify the various model evaluation criteria of the framework. The aim for each of the analysis techniques is that it can, where possible, be automated and generalised to other modelling domains. The syntactic measures and analysis techniques originate largely from the disciplines of systems engineering, graph theory and computer science. Various metrics to measure model hierarchy, architecture and complexity are tested and discussed. It is found that many are not particularly useful or valid for enterprise models.
Hence some new measures are proposed to assist with model visualization, and an original "model signature" consisting of three key metrics is introduced. Perhaps the most significant contribution of the research lies in the development and validation of a significant number of semantic analysis techniques, drawing heavily on current developments in lexicography, linguistics and ontology research. Some novel and interesting techniques are proposed to measure, inter alia, domain coverage, model genericity, quality of documentation, perspicuity and model similarity. Model similarity in particular is explored in depth by means of various similarity and clustering algorithms, as well as ways to visualize the similarity between models. Finally, a number of pragmatic analysis techniques are applied to the models. These include face validity, degree of use, authority of the model author, availability, cost, flexibility, adaptability, model currency, maturity and degree of support. This analysis relies mostly on searching for and ranking certain specific information details, often involving a degree of subjective interpretation, although more specific quantitative procedures are suggested for some of the criteria. To aid future researchers, a separate chapter lists some promising analysis techniques that were investigated but found to be problematic from a methodological perspective. More interestingly, this chapter also presents a strong conceptual case for how the proposed framework and the analysis techniques associated with its various criteria can be applied to many other information systems research areas. The case rests on the underlying isomorphism between the various research areas and is illustrated by suggesting the application of the framework to evaluate web sites, algorithms, software applications, programming languages, system development methodologies and user interfaces.
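One of the simplest ways a model-similarity criterion like the one above could be quantified is set overlap between the concepts two models contain. This is a hedged toy measure, a Jaccard coefficient over invented concept names, far cruder than the lexical and clustering techniques the thesis actually develops.

```python
# Toy model-similarity measure: Jaccard overlap of concept names.
# The concept sets below are invented examples, not from the thesis.
def jaccard_similarity(model_a: set[str], model_b: set[str]) -> float:
    """Share of concepts the two enterprise models have in common."""
    if not model_a and not model_b:
        return 1.0  # two empty models are trivially identical
    return len(model_a & model_b) / len(model_a | model_b)

crm = {"Customer", "Order", "Invoice", "Product"}
erp = {"Customer", "Order", "Shipment", "Product", "Supplier"}
# 3 shared concepts out of 6 distinct ones -> similarity 0.5
score = jaccard_similarity(crm, erp)
```

A pairwise matrix of such scores is exactly the kind of input a clustering algorithm needs to group related models, which is the shape of the analysis the thesis pursues with richer similarity functions.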

    IDE for SCADA Development at CERN

    The goal of this master's thesis is to design and implement an IDE (Integrated Development Environment) that makes development for SIMATIC WinCC Open Architecture more effective and secure. The thesis builds on research conducted by a team at Eindhoven University of Technology and meets the needs of the CERN (European Organization for Nuclear Research) EN ICE SCD section. The developed IDE is built on top of the Eclipse Platform and uses the Xtext framework for code parsing, scoping, linking and static code analysis. The IDE also supports a newly created programming language that allows programmers to easily define templates for the configuration files used by WinCC OA. The interpreter of this new language can parse a template and a configuration file and decide whether the configuration file matches the template.
The practical result of this thesis is an IDE that supports WinCC OA developers at CERN and periodically analyses CERN application code written in the Control script language.
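The template-matching step can be illustrated with a deliberately simplified checker. This is not the thesis's Xtext-based grammar: the key/value format and the regex-per-key template shape are assumptions made for the sketch. A config file matches when every key the template requires is present with a value satisfying the template's pattern.

```python
# Simplified illustration of template-vs-config matching; the file format
# and template representation are invented, not the thesis's language.
import re

def matches_template(template: dict[str, str], config: str) -> bool:
    """template maps required key -> regex its value must satisfy."""
    entries = dict(
        line.split("=", 1)
        for line in config.splitlines()
        if "=" in line
    )
    return all(
        key in entries and re.fullmatch(pattern, entries[key].strip())
        for key, pattern in template.items()
    )

template = {"pvss_path": r"/\S+", "proj_version": r"\d+\.\d+"}
good = "pvss_path=/opt/wincc\nproj_version=3.16"
bad = "pvss_path=/opt/wincc"           # missing proj_version
```

A real interpreter would report *which* entry violates the template; this sketch only captures the accept/reject decision.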

    Adaptive Caching of Distributed Components

    Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried remote data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. The thesis at hand therefore tries to outsource caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can adapt to changing access characteristics with respect to data cacheability, thus healing misconfigurations and optimizing itself towards an appropriate configuration. Speculative prefetching of data likely to be queried in the immediate future complements the presented approach.
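The adaptive-cacheability idea can be sketched as a cache that observes its own hit rates and stops storing keys that are rarely re-read. The class, its parameters, and the policy below are invented for illustration; the thesis's middleware service is far more elaborate.

```python
# Hedged sketch of adaptive caching: the API and the hit-rate policy are
# invented illustrations of the idea, not the thesis's actual service.
class AdaptiveCache:
    def __init__(self, fetch, min_hit_rate=0.5, probation=4):
        self.fetch = fetch                 # remote lookup to accelerate
        self.store, self.hits, self.lookups = {}, {}, {}
        self.min_hit_rate, self.probation = min_hit_rate, probation

    def get(self, key):
        self.lookups[key] = self.lookups.get(key, 0) + 1
        if key in self.store:
            self.hits[key] = self.hits.get(key, 0) + 1
            return self.store[key]
        value = self.fetch(key)
        # Adapt: after a probation period, keep caching only keys whose
        # observed hit rate justifies the local storage they occupy.
        rate = self.hits.get(key, 0) / self.lookups[key]
        if self.lookups[key] < self.probation or rate >= self.min_hit_rate:
            self.store[key] = value
        return value

calls = []                                 # records actual remote fetches
cache = AdaptiveCache(lambda key: calls.append(key) or key.upper())
first, second = cache.get("id-7"), cache.get("id-7")
```

After the two lookups only one remote fetch has happened; a key whose hit rate stays below the threshold would eventually stop being cached, which is the self-healing behaviour the abstract describes.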

    The advantages and cost effectiveness of database improvement methods

    Relational databases have proved inadequate for supporting new classes of applications, and as a consequence a number of new approaches have been taken (Blaha 1998), (Harrington 2000). The most salient alternatives are denormalisation and conversion to an object-oriented database (Douglas 1997). Denormalisation can provide better performance but has deficiencies with respect to data modelling. Object-oriented databases can provide increased performance efficiency without the deficiencies in data modelling (Blaha 2000). Although various benchmark tests have been reported, none of them have compared normalised, object-oriented and denormalised databases. This research shows that a non-normalised database containing type-code complexity would be normalised in the process of conversion to an object-oriented database. This helps to correct badly organised data and so gives the performance benefits of denormalisation while improving data modelling. The costs of conversion from relational databases to object-oriented databases were also examined. Costs were based on published benchmark tests, a benchmark carried out during this study, and case studies. The benchmark tests were based on an engineering database benchmark; engineering problems such as computer-aided design and manufacturing have much to gain from conversion to object-oriented databases. Costs were calculated for coding and development, and also for operation. It was found that conversion to an object-oriented database was not usually cost effective, as many of the performance benefits could be achieved by the far cheaper process of denormalisation, by using the performance-improving facilities provided by many relational database systems such as indexing or partitioning, or by simply upgrading the system hardware.
It is concluded, therefore, that while object-oriented databases are a better alternative for databases built from scratch, the conversion of a legacy relational database to an object-oriented database is not necessarily cost effective.
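The "type-code complexity" mentioned above is the pattern where one relational column encodes which kind of entity a row represents. A standard way conversion to an object-oriented database resolves it is to map the code onto subclasses, dispatching once at load time. The account classes and fee values below are invented purely to illustrate that mapping.

```python
# Illustrative sketch: a relational type-code column replaced by
# subclasses during object-oriented conversion. Names/fees are invented.
class Account:
    def __init__(self, balance: float):
        self.balance = balance
    def monthly_fee(self) -> float:
        raise NotImplementedError

class SavingsAccount(Account):
    def monthly_fee(self) -> float:
        return 0.0

class CheckingAccount(Account):
    def monthly_fee(self) -> float:
        return 5.0

# Mapping from the legacy type-code column to object classes.
TYPE_CODES = {"S": SavingsAccount, "C": CheckingAccount}

def from_row(row: dict) -> Account:
    # The type code dispatches once, at load time, instead of being
    # re-tested by every query that touches the table.
    return TYPE_CODES[row["type_code"]](row["balance"])
```

Behaviour that was previously selected by repeated `CASE type_code ...` logic becomes ordinary polymorphic dispatch, which is one source of both the modelling and the performance improvement the study measures.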

    Using the San Francisco frameworks with VisualAge for Java [Technical note]

    No full text

    Uma proposta de arquitetura de resiliĂȘncia computacional para infraestruturas baseadas em SOA de empresas virtuais

    Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Automation and Systems Engineering, Florianópolis, 2019.
    A Virtual Enterprise (VE) is a type of collaborative networked organization in which groups of companies join dynamically, logically and temporarily to better meet market demands. Acting as a single company, they share the resources, costs and risks of a business, thus representing a prominent sustainability model, especially for small and medium-sized enterprises. One of the preconditions for operating in a VE is that the members' IT systems interoperate, so that the business processes associated with the VE can be executed smoothly by the diverse systems involved. This thesis explores a scenario in which all company systems are implemented so that they can be exposed as software services in the SOA (Service Oriented Architecture) perspective, invoked by the VE's business processes and, at the same time, shared with the other members. In this way, when a VE is formed, a large distributed service-based system is dynamically created. Given that new companies can enter and others leave a VE over its lifetime, such a system is not static; its composition can change both at design time and at run time. Since a company can participate in several VEs simultaneously, its services may be involved in many orchestrations at once, in different business contexts and under different quality-of-service requirements. This computational system (and its many services) should remain operational throughout the VE's life cycle in order to sustain the execution of the processes and thus of the business.
In a system like this, widely distributed and with services implemented in different technologies, several kinds of failures can occur. This thesis proposes a computing architecture for a resilience system for this scenario, enabling the system as a whole to recover from failures while maintaining the overall quality-of-service level of the VE's business. A literature review found no works covering the intersection of resilience, SOA and VEs. Based on the MAPE-K autonomic computing reference model, the proposed architecture is self-resilient and was itself conceived as a SOA; it is therefore distributed, loosely coupled and scalable. In addition, its design adopts the modern vision of the service-oriented economy, comprising ecosystems of software service providers. To guarantee the VE's continued operation, several consolidated fault-tolerance techniques were employed, combined and adapted to this scenario, acting both reactively and proactively, and respecting the responsibility levels of the business, IT and computing infrastructure layers. A robust software prototype was implemented as a proof of concept, using as many open IT standards as possible, and was evaluated experimentally in a controlled VE scenario. Applying reference performance indicators, the architecture proved promising, supporting the execution of the VE's systems in almost all cases even in the presence of numerous failures. The implementation involved some simplifications, and the architecture design rests on a number of assumptions. Conclusions are presented at the end, along with suggestions for future work.
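The MAPE-K reference loop the architecture builds on can be sketched in miniature. Everything concrete here, the service names, the error-rate threshold, the reroute/restart actions, is invented for illustration; only the Monitor-Analyze-Plan-Execute structure over a shared knowledge base comes from the model itself.

```python
# Hedged sketch of one MAPE-K pass; service names, thresholds and recovery
# actions are invented illustrations, not the thesis's actual policies.
def mape_k_step(knowledge: dict, monitored: dict) -> list[str]:
    """One Monitor-Analyze-Plan pass over observed service health data."""
    plan = []
    for service, error_rate in monitored.items():            # Monitor
        failing = error_rate > knowledge["max_error_rate"]   # Analyze
        if failing:                                          # Plan
            replica = knowledge["fallbacks"].get(service)
            plan.append(f"reroute {service} -> {replica}" if replica
                        else f"restart {service}")
    return plan            # Execute would enact these actions on the SOA

knowledge = {"max_error_rate": 0.1,
             "fallbacks": {"billing": "billing-replica"}}
actions = mape_k_step(knowledge, {"billing": 0.4, "catalog": 0.02})
```

In the thesis's setting the knowledge base would also carry the business-layer context (which VE, which QoS contract), so that the same failing service can be handled differently in different orchestrations.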