78 research outputs found

    A Service-Oriented Approach for Network-Centric Data Integration and Its Application to Maritime Surveillance

    Maritime-surveillance operators still demand an integrated maritime picture that better supports international coordination of their operations, as sought in the European area. In this area, many past data-integration efforts have been framed as the problem of designing, building and maintaining huge centralized repositories. Current research activities instead leverage service-oriented principles to achieve more flexible and network-centric solutions to systems and data integration. In this direction, this article reports on the design of a SOA platform, the Service and Application Integration (SAI) system, targeting novel approaches to legacy data and systems integration in the maritime surveillance domain. We have developed a proof-of-concept of the main system capabilities to assess the feasibility of our approach and to evaluate how the SAI middleware architecture can meet application requirements for dynamic data search, aggregation and delivery in the distributed maritime domain.

    SPAWN: Service Provision in Ad-hoc Wireless Networks

    The increasing ubiquity of wireless mobile computing platforms has opened up the potential for unprecedented levels of communication, coordination and collaboration among mobile computing devices, most of which will occur in an ad hoc, on-demand manner. This paper describes SPAWN, a middleware supporting service provision in ad-hoc wireless networks. The aim of SPAWN is to provide the software resources on mobile devices that facilitate electronic collaboration. This is achieved by applying the principles of service-oriented computing (SOC), an emerging paradigm that has seen success in wired settings. SPAWN is an adaptation and extension of the Jini model of SOC to ad-hoc networks. The key contributions of SPAWN are (1) a completely decentralized service advertisement and request system geared towards handling the unpredictability and dynamism of mobile ad-hoc networks, (2) an automated code management system that can fetch, use and dispose of binaries on demand, (3) a mechanism supporting the logical mobility of services, (4) an upgrade mechanism to extend the life cycle of services, and (5) a lightweight security model that secures all interactions, which is essential in an open environment. We discuss the software architecture, a Java implementation, sample applications and an empirical evaluation of the system.
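The decentralized advertisement scheme such a system relies on can be sketched as a lease-based service cache: a node hearing an advertisement caches it for a short lease, so entries from departed nodes age out without any central registry. This is a minimal illustration under assumed names (`ServiceCache` and its methods are hypothetical, not SPAWN's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a decentralized, lease-based service cache in the spirit of
// SPAWN's advertisement model. All names are hypothetical illustrations.
class ServiceCache {
    // service name -> lease expiry time in milliseconds
    private final Map<String, Long> leases = new HashMap<>();

    // A node hearing an advertisement caches it with a short lease, so
    // entries from nodes that leave the network expire automatically.
    void onAdvertisement(String serviceName, long nowMs, long leaseMs) {
        leases.put(serviceName, nowMs + leaseMs);
    }

    // A lookup succeeds only while the lease is still valid.
    boolean isAvailable(String serviceName, long nowMs) {
        Long expiry = leases.get(serviceName);
        return expiry != null && nowMs < expiry;
    }
}

public class SpawnSketch {
    public static void main(String[] args) {
        ServiceCache cache = new ServiceCache();
        cache.onAdvertisement("printer", 0, 5_000);
        System.out.println(cache.isAvailable("printer", 1_000)); // true: lease valid
        System.out.println(cache.isAvailable("printer", 6_000)); // false: lease expired
    }
}
```

Leases trade freshness for robustness: no explicit deregistration message is ever needed, which suits networks where nodes disappear without warning.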

    An object-oriented framework to organize genomic data

    Bioinformatics resources should provide simple and flexible support for genomics research. A huge amount of gene mapping data, micro-array expression data, expressed sequence tags (EST), BAC sequence data and genome sequence data is already, or will soon be, available for a number of livestock species. These species have different requirements compared to typical biomedical model organisms and need an informatics framework to deal with the data. This study addresses how to organize such complex, intertwined genomic data, and investigates two issues: one is an independent informatics framework comprising both back end and front end; the other is how such a framework simplifies the user interface for exploring data. We have developed a fundamental informatics framework that makes it easy to organize and manipulate the complex relations between genomic data, and allows query results to be presented via a user-friendly web interface. The genome object-oriented framework (GOOF) is built with object-oriented Java technology and is independent of any database system. It seamlessly links the database system and the web presentation components. The data models of GOOF capture the data relationships in order to give users access to relations across different types of data, so that users avoid constructing queries within the interface layer. Moreover, the module-based interface provided by GOOF allows different users to access data through different interfaces and in different ways. In other words, GOOF not only provides a complete informatics infrastructure, but also simplifies data modelling and presentation. To support rapid development, GOOF provides an automatic code engine, built on meta-programming facilities in Java, that lets users generate large amounts of routine program code.
Moreover, the pre-built data layer in GOOF that connects to Chado simplifies managing genomic data in the Chado schema. In summary, we studied how to model genomic data in an informatics framework, a one-stop approach to organizing the data, and showed how GOOF provides a bioinformatics infrastructure for users to access genomic data.
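An automatic code engine of this kind can be sketched with plain string templates that stamp out routine accessor classes from entity metadata. The class and method names below are hypothetical illustrations, not GOOF's actual meta-programming API:

```java
// Sketch of template-driven generation of routine program code, as an
// automatic code engine might produce it. Names are hypothetical.
public class CodeEngine {
    // Generate the source of a trivial accessor class for one entity field.
    static String generateAccessor(String entity, String field) {
        String cap = capitalize(field);
        return "public class " + entity + "Accessor {\n"
             + "    public Object get" + cap + "(" + entity + " e) {\n"
             + "        return e.get" + cap + "();\n"
             + "    }\n"
             + "}\n";
    }

    static String capitalize(String s) {
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }

    public static void main(String[] args) {
        // Emit an accessor for a hypothetical Gene entity's sequence field.
        System.out.println(generateAccessor("Gene", "sequence"));
    }
}
```

Generating such boilerplate from data models keeps the hand-written code focused on the biology-specific logic rather than plumbing.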

    Resolving feature convolution in middleware systems

    Middleware provides simplicity and uniformity for the development of distributed applications. However, the modularity of middleware architectures is starting to disintegrate and become complicated due to the interaction of too many orthogonal concerns imposed by a wide range of application requirements. This is not due to bad design but rather to the limitations of conventional architectural decomposition methodologies. We introduce the principles of horizontal decomposition (HD), which address this problem with a mixed-paradigm middleware architecture. HD provides guidance for using conventional decomposition methods to implement the core functionalities of middleware and using aspect orientation to address its orthogonal properties. Our evaluation of the horizontal decomposition principles focuses on refactoring major middleware functionalities into aspects in order to modularize and isolate them from the core architecture. New versions of the middleware platform can be created by combining the core with a flexible selection of middleware aspects such as IDL data types, the oneway invocation style, the dynamic messaging style, and additional character encoding schemes. As a result, the primary functionality of the middleware is supported by a much simpler architecture with enhanced performance. Moreover, customization and configuration of the middleware for a wide range of requirements becomes possible.
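The idea of weaving an orthogonal concern around core functionality without touching its code can be illustrated with Java's standard dynamic proxies, a lightweight stand-in for the aspect-oriented weaving described in the article. The `Invoker` interface is a hypothetical example, not part of the platform discussed:

```java
import java.lang.reflect.Proxy;

// Hypothetical core middleware interface used for illustration.
interface Invoker {
    String invoke(String request);
}

public class AspectSketch {
    // Wrap a core invoker with a tracing concern: the core implementation
    // is untouched, and the aspect can be combined in or left out freely.
    static Invoker withTracing(Invoker core) {
        return (Invoker) Proxy.newProxyInstance(
            Invoker.class.getClassLoader(),
            new Class<?>[] { Invoker.class },
            (proxy, method, args) -> {
                System.out.println("trace: entering " + method.getName());
                Object result = method.invoke(core, args);
                System.out.println("trace: leaving " + method.getName());
                return result;
            });
    }

    public static void main(String[] args) {
        Invoker core = request -> "handled:" + request;
        Invoker traced = withTracing(core);
        System.out.println(traced.invoke("ping")); // traces, then handled:ping
    }
}
```

A full aspect-oriented approach (e.g. compile-time weaving) goes further, but the separation it buys is the same: the core stays simple and the orthogonal concern is selectable per configuration.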

    Adaptive Caching of Distributed Components

    Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. This thesis therefore outsources caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modelling, and later reuse of caching-related metadata. At runtime, the implemented system can adapt to changing access characteristics with respect to data cacheability, thus healing misconfigurations and optimizing itself towards an appropriate configuration. Speculative prefetching of data likely to be queried in the immediate future complements the presented approach.
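The runtime adaptation of cacheability can be sketched as a cache that demotes entries which are invalidated too often: caching data that changes constantly costs more than it saves. The class names and the fixed demotion threshold below are illustrative assumptions, not the thesis's actual design:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a cache that adapts cacheability at runtime by demoting keys
// whose values are invalidated too frequently. Heuristics are illustrative.
public class AdaptiveCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, Integer> invalidations = new HashMap<>();
    private static final int DEMOTION_THRESHOLD = 3;

    // A key stays cacheable until it has been invalidated too often.
    boolean isCacheable(String key) {
        return invalidations.getOrDefault(key, 0) < DEMOTION_THRESHOLD;
    }

    void put(String key, String value) {
        if (isCacheable(key)) cache.put(key, value);
    }

    String get(String key) { return cache.get(key); }

    // Remote data changed: drop the local copy and count the invalidation.
    void invalidate(String key) {
        cache.remove(key);
        invalidations.merge(key, 1, Integer::sum);
    }

    public static void main(String[] args) {
        AdaptiveCache c = new AdaptiveCache();
        c.put("pos:42", "balance");
        c.invalidate("pos:42");
        c.invalidate("pos:42");
        c.invalidate("pos:42");
        System.out.println(c.isCacheable("pos:42")); // false: key demoted
    }
}
```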

    A framework for abstracting complexities in service delivery platforms

    The telecommunication (telco) and Information Technology (IT) industries are converging into a single highly competitive market, where service diversity is the critical success factor. To provide diverse services, the telco network operator must evolve the traditional voice-service-centric network into a generic service-centric network. An appropriate, but incomplete, architecture for this purpose is the Service Delivery Platform (SDP). The SDP represents an IT-based system that simplifies access to telco capabilities using services. SDP services offer technology-independent interfaces to external entities. The SDP has vendor-specific interpretations that mix standards-based and proprietary interfaces to satisfy specific requirements. In addition, SDP architectural representations are technology-specific. To be widely adopted, the SDP must provide standardised interfaces. This work contributes toward SDP standardisation by defining a technology-independent and extendable architecture, called the SDP Framework. To define the framework we first describe telecom-IT convergence and a strategy to manage infrastructure integration. Second, we provide background on the SDP and its current limitations. Third, we treat the SDP as a complex system and determine a viewpoint methodology to define its framework. Fourth, we apply viewpoints by extracting concepts and abstractions from various standards-based telecom and IT technologies: the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA), Parlay, enhanced Telecommunications Operations Map (eTOM), Service Oriented Architecture (SOA) and Internet Protocol Multimedia Subsystem (IMS). Fifth, by extending the concepts and abstractions we define the SDP framework. The framework is based on a generic business model and reference model. The business model shows relationships between the SDP, the telco and external entities using business relationship points.
The reference model extends the business model by formalising relationships as reference points. Reference points expand into interfaces exposed by services. Applications orchestrate service functions via their interfaces. Service and application distribution is abstracted by middleware that operates across business model domains. Services, interfaces, applications and middleware are managed in Generic Service Oriented Architectures (GSOA). Multiple layered GSOAs structure the SDP framework. Finally, we implement the SDP framework using standards-based technologies with open service interfaces. The implementation proves the framework concepts, promotes SDP standardisation and identifies research areas.
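A reference point exposed as a technology-independent interface can be sketched as follows; the `SmsService` interface and the notification scenario are hypothetical illustrations of the layering idea, not part of any SDP specification:

```java
// Hypothetical interface exposed by a service at a reference point.
// Any provider implementing it can be substituted behind the interface.
interface SmsService {
    boolean send(String msisdn, String text);
}

public class SdpSketch {
    // An application orchestrates service functions only via interfaces,
    // so it stays independent of the underlying network technology.
    static boolean notify(SmsService sms, String msisdn) {
        return sms.send(msisdn, "balance low");
    }

    public static void main(String[] args) {
        // A stub provider accepting only internationally formatted numbers.
        SmsService stub = (msisdn, text) -> msisdn.startsWith("+");
        System.out.println(notify(stub, "+27821234567")); // true
    }
}
```

This is the essence of the framework's standardisation argument: once reference points are fixed as interfaces, vendor implementations become interchangeable.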

    Programming distributed and adaptable autonomous components--the GCM/ProActive framework

    Component-oriented software has become a useful tool to build larger and more complex systems by describing the application in terms of encapsulated, loosely coupled entities called components. At the same time, asynchronous programming patterns allow for the development of efficient distributed applications. While several component models and frameworks have been proposed, most of them tightly integrate the component model with the middleware they run upon. This intertwining is generally implicit and not discussed, leading to entangled, hard-to-maintain code. This article describes our efforts in the development of the GCM/ProActive framework for providing distributed and adaptable autonomous components. GCM/ProActive integrates a component model designed for execution on large-scale environments with a programming model based on active objects allowing a high degree of distribution and concurrency. This new integrated model provides a more powerful development, composition, and execution environment than other distributed component frameworks. We illustrate that GCM/ProActive is particularly adapted to the programming of autonomic component systems, and to integration into a service-oriented environment.
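The active-object model behind ProActive, where calls are enqueued, served by the object's own thread, and return futures, can be sketched with a plain `ExecutorService` standing in for the ProActive runtime (the class and method names here are illustrative, not ProActive's API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the active-object pattern: a single-threaded executor plays
// the role of the object's request queue plus its serving thread ("body").
public class ActiveObjectSketch {
    private final ExecutorService body = Executors.newSingleThreadExecutor();

    // An asynchronous method: the caller gets a future back immediately,
    // while the computation runs on the active object's own thread.
    Future<Integer> square(int x) {
        return body.submit(() -> x * x);
    }

    void terminate() { body.shutdown(); }

    public static void main(String[] args) throws Exception {
        ActiveObjectSketch obj = new ActiveObjectSketch();
        Future<Integer> f = obj.square(7); // non-blocking call
        System.out.println(f.get());       // block only when the value is needed
        obj.terminate();
    }
}
```

Blocking only at `f.get()` is the "wait-by-necessity" style that lets active objects overlap communication and computation in distributed settings.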

    A secure client / server interface protocol for the electricity prepayment vending industry

    Electricity prepayment systems have been successfully implemented by South Africa's national electricity utility (Eskom) and local municipalities for more than 17 years. The prepayment vending sub-system is a critical component of prepayment systems: it provides convenient locations for customers to purchase electricity. It predominantly operates in an "offline" mode; however, electricity utilities are now opting for systems that operate in an "online" mode. "Online" operation, or online vending, is when a prepayment token is requested from a centralised server that is remote from the client at the actual point of sale (POS). The token is only generated by the server and transferred to the POS client once the transaction, the POS client and the payment mechanism have been authenticated and authorised. The connection between the POS client and the server is a standard computer network channel (such as the Internet, a direct dial-up link, X.25 or GPRS). The lack of online vending system standardisation was a concern and a significant risk for utilities, as they faced the problem of being locked into proprietary online vending systems. The South African prepayment industry, led by Eskom, therefore initiated a project to develop an industry specification for online vending systems. The first critical project task was a current-state analysis of the South African prepayment industry, technology and specifications. The prepayment industry is built around the Standard Transfer Specification (STS). STS has become the de facto industry standard for securely transferring electricity credit from a point of sale (POS) to the prepaid meter. STS is supported by several "offline" vending system specifications. The current-state analysis was followed by the requirements analysis phase, which confirmed the need for a standard interface protocol specification rather than a full systems specification.
The interface specification focuses on the protocol between a vending client and a vending server and does not specify the client and server application-layer functionality and performance requirements. This approach encourages innovation and competitiveness amongst client and server suppliers while ensuring interoperability between these systems. The online vending protocol was implemented using the web services framework and therefore appropriately named XMLVend. The protocol development phase was an iterative process with two major releases, XMLVend 1.22 and XMLVend 2.1. XMLVend 2.1, the current version of the protocol, addressed the shortcomings identified in XMLVend 1.22, updated the existing use cases and added several new use cases. It was also modelled as a unified modelling language (UML) interface, or contract, for prepayment vending services. Clients using the XMLVend interface can therefore request services from any service provider (server) that implements the XMLVend interface. The UML-modelled interface and use-case message pairs were mapped to Web Services Description Language (WSDL) and schema (XSD) definitions, respectively. XMLVend 2.1 is a secure, open, web-service-based protocol that facilitates prepayment vending functionality between a single logical vending server and any number of clients. It has become a key enabler for utilities to implement standardised, secure, interoperable and flexible online vending systems. Dissertation (MSc), University of Pretoria, 2010. Department of Electrical, Electronic and Computer Engineering.
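A client/server use-case message pair in this style might look as follows. The element names below are illustrative only and do not reflect the actual XMLVend 2.1 XSD:

```java
// Sketch of an online-vending request/response message pair, built as
// plain XML strings. Element names are hypothetical, not XMLVend's schema.
public class VendSketch {
    // The POS client asks the server to vend credit for a given meter.
    static String creditVendRequest(String meterId, int amountCents) {
        return "<creditVendReq>"
             + "<meter>" + meterId + "</meter>"
             + "<amt>" + amountCents + "</amt>"
             + "</creditVendReq>";
    }

    // After authenticating the client and transaction, the server replies
    // with the STS token generated for that meter.
    static String creditVendResponse(String stsToken) {
        return "<creditVendResp><token>" + stsToken + "</token></creditVendResp>";
    }

    public static void main(String[] args) {
        System.out.println(creditVendRequest("A1234567890", 5000));
    }
}
```

Fixing only the message pairs, and not the client or server internals, is exactly what lets any compliant client talk to any compliant server.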

    Refactoring of Security Antipatterns in Distributed Java Components

    The importance of JAVA as a programming and execution environment has grown steadily over the past decade. The IT industry has adopted JAVA as a major building block for the creation of new middleware, as well as a technology facilitating the migration of existing applications towards web-driven environments. In parallel, the role of security in distributed environments has gained attention, as a large number of middleware applications have replaced enterprise-level mainframe systems. The protection of confidentiality, integrity and availability is therefore critical for the market success of a product. The vulnerability level of every product is determined by its weakest embedded component, and selling vulnerable products can cause enormous economic damage to software vendors. An important goal of this work is to create the awareness that using a programming language designed to be secure is not sufficient to create secure and trustworthy distributed applications. Moreover, incorporating the threat model of the programming language improves risk analysis by allowing a better definition of the application's attack surface. The evolution of a programming language leads towards common solution patterns for recurring quality aspects. Suboptimal solutions, also known as 'antipatterns', are typical causes of quality weaknesses such as security vulnerabilities. Moreover, exposure to a specific environment is an important parameter for threat analysis, as code considered secure in one scenario can cause unexpected risks when the environment changes. Antipatterns are a well-established means, at the abstraction level of system modelling, of informing about the effects of incomplete solutions, which are also important in the later stages of the software development process.
Especially at the implementation level, we see a deficit of helpful examples that would give programmers a better and more holistic understanding. Our basic assumption links the missing experience of programmers regarding the security properties of patterns within their code to the creation of software vulnerabilities. Traditional software development models address security properties only on the meta layer. To transfer these efficiently to the practical level, we provide a three-stage approach. First, we focus on typical security problems within JAVA applications and develop a standardized catalogue of 'antipatterns' with examples from standard software products. Detecting and avoiding these antipatterns positively influences software quality. As the second element of our methodology, we therefore focus on possible enhancements to common models of the software development process. These help to control and identify the occurrence of antipatterns during development activities, i.e. during the coding phase and during the phase of component assembly, integrating one's own and third-party code. In the third part, emphasizing the practical focus of this research, we implement prototypical tools supporting the software development phase. The practical findings of this research helped to enhance the security of the standard JAVA platforms and JEE frameworks. We verified the relevance of our methods and tools by applying them to standard software products, leading to a measurable reduction of vulnerabilities and an information exchange with middleware vendors (Sun Microsystems, JBoss) targeting runtime security. Our goal is to enable software architects and software developers building end-user applications with embedded standard components to apply our findings to their environments. From a high-level perspective, software architects profit from this work through the projection of quality-of-service goals onto protection details.
This supports their task of deriving security requirements when selecting standard components. To give implementation-oriented practitioners a helpful starting point, we provide tools and case studies for achieving security improvements within their own code base.
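A classic example of the kind of JAVA security antipattern such a catalogue covers is an accessor that exposes an internal mutable array; its refactoring is a defensive copy. The class below is an illustrative construction, not an entry drawn from the author's catalogue:

```java
import java.util.Arrays;

// Antipattern: exposing an internal mutable array lets callers corrupt
// private state. Refactoring: return a defensive copy instead.
public class AntipatternSketch {
    private final String[] roles = { "user" };

    String[] getRolesUnsafe() {   // antipattern: internal state escapes
        return roles;
    }

    String[] getRolesSafe() {     // refactored accessor: defensive copy
        return roles.clone();
    }

    public static void main(String[] args) {
        AntipatternSketch s = new AntipatternSketch();
        // Through the unsafe accessor, a caller silently escalates roles.
        s.getRolesUnsafe()[0] = "admin";
        System.out.println(Arrays.toString(s.getRolesSafe())); // [admin]
    }
}
```

Writes through the safe accessor's return value, by contrast, only mutate the copy and never reach the object's internal state, which is why defensive copying is the standard refactoring for this antipattern.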