
    Integrated XML and GML in Geographical Information System

    Get PDF
    This project concentrated on the study of eXtensible Markup Language (XML) and Geography Markup Language (GML) in Geographical Information Systems (GIS). The objective of the project is to convert spatial data (e.g. coordinates, areas) using XML and GML; the resulting code is then integrated and viewed in a web browser using Scalable Vector Graphics (SVG) technology. The project aims to find a new way to overcome the weaknesses of map digitizing while taking advantage of GML technology in GIS. The project scope concentrates on the usage of XML and GML in GIS. Research was done on the XML technologies that support GML: technologies for encoding and data modeling (Document Type Definition, XML Schema), for transformation (XSLT) and for graphic rendering (SVG). Research on GML focused on manipulating spatial data to convert it into simple features such as points, lines and polygons. The project combines XML, GML and SVG technologies to meet these objectives. The waterfall model was used as the methodology for system development, and the project was developed according to the four phases of planning, analysis, design and implementation. The discussion concentrates on GML compatibility and the advantages of using SVG to view the map. The simple map display created shows that GML is suited to handling geo-spatial data over the Internet; the user can view the map, with zooming provided by SVG
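    The GML-to-SVG pipeline described above can be sketched in miniature: parse a GML point feature and emit the corresponding SVG shape. This is an illustrative sketch only (the project itself used XSLT for the transformation); the GML fragment is a minimal hypothetical example.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal GML fragment; real GML documents use the
# http://www.opengis.net/gml namespace as declared here.
GML = """<gml:Point xmlns:gml="http://www.opengis.net/gml">
  <gml:coordinates>120.5,45.25</gml:coordinates>
</gml:Point>"""

NS = {"gml": "http://www.opengis.net/gml"}

def gml_point_to_svg(gml_text, radius=3):
    """Parse a GML Point and render it as an SVG <circle> element."""
    root = ET.fromstring(gml_text)
    coords = root.find("gml:coordinates", NS).text.strip()
    x, y = (float(c) for c in coords.split(","))
    return f'<circle cx="{x}" cy="{y}" r="{radius}" />'

print(gml_point_to_svg(GML))  # <circle cx="120.5" cy="45.25" r="3" />
```

    In the actual project an XSLT stylesheet would perform this mapping declaratively for whole feature collections, but the point-to-circle correspondence is the same.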

    Meeting the Challenge of Dynamic User Requirements Using Data-Driven Techniques on a 4GL-Database Environment

    Get PDF
    Accompanying the ever-growing reliance on computers within contemporary organisations, the task of software maintenance is increasingly becoming a resource burden. The author has identified a need for proven techniques that allow the modelling of flexible/changing user requirements, so that systems can cope with requirements creep without suffering major code change and the associated down-time from rebuilds of the database. This study ascertains the applicability of an extension to current data modelling techniques that allows innate flexibility within the data model. The extension is analysed for its potential benefits in providing such a dynamic/flexible base to realise 'maintenance friendly' systems and, in consequence, alleviate the cost of later, expensive maintenance
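    The abstract does not name the specific extension; one common data-driven technique for this kind of "maintenance friendly" flexibility is an entity-attribute-value (EAV) layout, where new user attributes become rows rather than columns, so requirements creep needs no schema rebuild. The sketch below is an illustrative assumption, not the thesis's actual model.

```python
import sqlite3

# EAV layout: attributes are data, not schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, kind TEXT)")
conn.execute("CREATE TABLE attribute (entity_id INTEGER, name TEXT, value TEXT)")

conn.execute("INSERT INTO entity VALUES (1, 'customer')")
conn.execute("INSERT INTO attribute VALUES (1, 'name', 'Acme Ltd')")
# A brand-new requirement ("loyalty tier") is just another row -- no ALTER TABLE,
# no database rebuild, no down-time:
conn.execute("INSERT INTO attribute VALUES (1, 'loyalty_tier', 'gold')")

rows = conn.execute(
    "SELECT name, value FROM attribute WHERE entity_id = 1 ORDER BY name"
).fetchall()
print(rows)  # [('loyalty_tier', 'gold'), ('name', 'Acme Ltd')]
```

    The trade-off, which any such study must weigh, is that queries become more indirect and type checking moves from the database into application code.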

    An Interactive Community Microgrid Model to aid Design Exploration

    Get PDF
    This dissertation presents a novel approach to microgrid design and exploration through the development of an innovative software application. The primary objective of this research is to empower users to configure and simulate community microgrids, offering multiple time-frame insights into performance metrics, statistical analyses, and pertinent factors aligned with their specific configurations. In collaboration with Ahuora under their third Aim, a key focus is on optimizing factory process heat management while experimenting with renewable energy sources to effectively mitigate overall power consumption. The application developed for this thesis focuses on representing these assets accurately, facilitating the setup/internal optimization of these assets, and allowing the users to tailor the configuration to explore consequences and effects of their changes. Employing a multidisciplinary methodology, the study seeks to integrate Digital Twin technology tenets, Object-Oriented Programming, Polymorphism, Data Modelling and Visualization, and Complex Power Calculations to formulate a robust and user-friendly software interface. Furthermore, meticulous GUI design and advanced graphing techniques ensure data visualization is both informative and intuitive. This research culminates in a comprehensive demonstration of the software's capabilities, successfully showcasing its efficacy in meeting outlined objectives. By affording users the ability to dynamically shape and assess microgrid configurations, this study advances the field of sustainable energy management and underscores the potential for optimized factory operations within a renewable energy framework
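    The polymorphism the dissertation mentions can be illustrated with a minimal sketch: each microgrid asset reports its power flow through a common interface, so the simulator can total generation and consumption without knowing concrete asset types. Class names, the sign convention and the crude daylight model below are illustrative assumptions, not taken from the actual application.

```python
from abc import ABC, abstractmethod

class Asset(ABC):
    """Common interface: power in kW, positive = generation, negative = load."""
    @abstractmethod
    def power_kw(self, hour: int) -> float: ...

class SolarArray(Asset):
    def __init__(self, peak_kw: float):
        self.peak_kw = peak_kw
    def power_kw(self, hour):
        # Crude daylight model: full output between 09:00 and 15:00, else none.
        return self.peak_kw if 9 <= hour <= 15 else 0.0

class FactoryLoad(Asset):
    def __init__(self, demand_kw: float):
        self.demand_kw = demand_kw
    def power_kw(self, hour):
        return -self.demand_kw  # loads consume

def net_power(assets, hour):
    """Net power of the whole microgrid at a given hour."""
    return sum(a.power_kw(hour) for a in assets)

grid = [SolarArray(peak_kw=50.0), FactoryLoad(demand_kw=30.0)]
print(net_power(grid, hour=12))  # 20.0 (surplus at midday)
print(net_power(grid, hour=20))  # -30.0 (deficit at night)
```

    Adding a new asset type (a battery, a heat pump) then means adding one class, which is the kind of configurability the application exposes to its users.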

    Application of Deadlock Risk Evaluation of Architectural Models

    Full text link
    Software architectural evaluation is a key discipline used to identify, at early stages of a real-time system (RTS) development, the problems that may arise during its operation. Typical mechanisms supporting concurrency, such as semaphores, mutexes or monitors, usually lead to concurrency problems at execution time that are difficult to identify, reproduce and solve. For this reason, it is crucial to understand the root causes of these problems and to provide support to identify and mitigate them at early stages of the system lifecycle. This paper aims to present the results of a research work oriented to the development of the tool called ‘Deadlock Risk Evaluation of Architectural Models’ (DREAM) to assess deadlock risk in architectural models of an RTS. A particular architectural style, Pipelines of Processes in Object-Oriented Architectures–UML (PPOOA), was used to represent platform-independent models of an RTS architecture supported by the PPOOA-Visio tool. We validated the technique presented here by using several case studies related to RTS development and comparing our results with those from other deadlock detection approaches, supported by different tools. Here we present two of these case studies, one related to avionics and the other to planetary exploration robotics. Copyright © 2011 John Wiley & Sons, Ltd
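    The abstract does not disclose DREAM's internal algorithm, but a standard building block for deadlock risk analysis is cycle detection in a wait-for graph (each task points at the tasks holding resources it is blocked on); a circular wait is one of the necessary conditions for deadlock. The sketch below shows that building block, not DREAM itself.

```python
def has_deadlock(wait_for):
    """Return True if the wait-for graph (dict: task -> tasks it waits on)
    contains a cycle, i.e. a circular wait."""
    WHITE, GREY, BLACK = 0, 1, 2  # unvisited / on current DFS path / done
    color = {t: WHITE for t in wait_for}

    def visit(t):
        color[t] = GREY
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GREY:
                return True  # back edge to the current path: circular wait
            if color.get(u, WHITE) == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in wait_for)

# Two tasks each blocked on a mutex the other holds -> circular wait:
print(has_deadlock({"T1": ["T2"], "T2": ["T1"]}))  # True
print(has_deadlock({"T1": ["T2"], "T2": []}))      # False
```

    Doing this at the architectural level, as DREAM does on PPOOA models, surfaces the risk before the semaphores and monitors are ever coded.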

    Handling Data Consistency through Spatial Data Integrity Rules in Constraint Decision Tables

    Get PDF

    Helena

    Get PDF
    Ensemble-based systems are software-intensive systems consisting of large numbers of components which can dynamically form goal-oriented communication groups. The goal of an ensemble is usually achieved through interaction of some components, but the contributing components may simultaneously participate in several collaborations. With standard component-based techniques, such systems can only be described by a complex model specifying all ensembles and participants at the same time. Thus, ensemble-based systems lack a development methodology which particularly addresses the dynamic formation and concurrency of ensembles as well as transparency of participants. This thesis proposes the Helena development methodology. It slices an ensemble-based system in two dimensions: Each kind of ensemble is considered separately. This allows the developer to focus on the relevant parts of the system only and abstract away those parts which are non-essential to the current ensemble. Furthermore, an ensemble itself is not defined solely in terms of participating components, but in terms of roles which components adopt in that ensemble. A role is the logical entity needed to contribute to the ensemble while a component provides the technical functionalities to actually execute a role. By simultaneously adopting several roles, a component can concurrently participate in several ensembles. Helena addresses the particular challenges of ensemble-based systems in the main development phases: The domain of an ensemble-based system is described as an ensemble structure of roles built on top of a component-based platform. Based on the ensemble structure, the goals of ensembles are specified as linear temporal logic formulae. With these goals in mind, the dynamic behavior of the system is designed as a set of role behaviors. 
To show that the ensemble participants actually achieve the global goals of the ensemble by collaboratively executing the specified behaviors, the Helena model is verified against its goals with the model-checker Spin. For that, we provide a translation of Helena models to Promela, the input language of Spin, which is proven semantically correct for a kernel part of Helena. Finally, we provide the Java framework jHelena which realizes all Helena concepts in Java. By implementing a Helena model with this framework, Helena models can be executed according to the formal Helena semantics. To support all activities of the Helena development methodology, we provide the Helena workbench as a tool for specification and automated verification and code generation. The general applicability of Helena is backed by a case study of a larger software system, the Science Cloud Platform. Helena is able to capture, verify and implement the main characteristics of the system. Looking at Helena from a different angle shows that the Helena idea of roles is also well-suited to realize adaptive systems changing their behavioral modes based on perceptions. We extend the Helena development methodology to adaptive systems and illustrate its applicability at an adaptive robotic search-and-rescue example
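    The central Helena idea, roles as logical entities that components adopt, can be sketched in a few lines. This is an illustrative sketch, not jHelena's actual API: the component supplies the technical functionality, while each adopted role ties it into one ensemble, so a single component participates in several ensembles concurrently.

```python
class Component:
    """Technical entity: provides the functionality roles execute on."""
    def __init__(self, name):
        self.name = name
        self.roles = []

    def adopt(self, role_cls, ensemble):
        """Adopt a role within a given ensemble; a component may hold
        several roles (and thus join several ensembles) at once."""
        role = role_cls(self, ensemble)
        self.roles.append(role)
        return role

class Role:
    """Logical entity: the contribution a component makes to one ensemble."""
    def __init__(self, component, ensemble):
        self.component = component
        self.ensemble = ensemble

class Transferrer(Role): ...
class Searcher(Role): ...

node = Component("node1")
node.adopt(Transferrer, ensemble="file-transfer")
node.adopt(Searcher, ensemble="p2p-search")
# The same component now participates concurrently in two ensembles:
print([r.ensemble for r in node.roles])  # ['file-transfer', 'p2p-search']
```

    In full Helena, each role additionally carries its own behaviour specification, which is what the Spin translation verifies against the ensemble's goals.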

    Engineering security into distributed systems: a survey of methodologies

    Get PDF
    Rapid technological advances in recent years have precipitated a general shift towards software distribution as a central computing paradigm. This has been accompanied by a corresponding increase in the dangers of security breaches, often causing security attributes to become an inhibiting factor for use and adoption. Despite the acknowledged importance of security, especially in the context of open and collaborative environments, there is a growing gap in the survey literature relating to systematic approaches (methodologies) for engineering secure distributed systems. In this paper, we attempt to fill the aforementioned gap by surveying and critically analyzing the state-of-the-art in security methodologies based on some form of abstract modeling (i.e. model-based methodologies) for, or applicable to, distributed systems. Our detailed reviews can be seen as a step towards increasing awareness and appreciation of a range of methodologies, allowing researchers and industry stakeholders to gain a comprehensive view of the field and make informed decisions. Following the comprehensive survey we propose a number of criteria reflecting the characteristics security methodologies should possess to be adopted in real-life industry scenarios, and evaluate each methodology accordingly. Our results highlight a number of areas for improvement, help to qualify adoption risks, and indicate future research directions.
    Anton V. Uzunov, Eduardo B. Fernandez, Katrina Falkner

    Aspect-oriented domain analysis

    Get PDF
    Master's dissertation in Informatics Engineering. Domain analysis (DA) consists of analyzing the properties, concepts and solutions of a given application domain. Based on that information, decisions are made concerning software development for future applications within that domain. In DA, feature modeling is used to describe common and variable requirements for software systems; nevertheless, feature models give only a limited view of the domain. Meanwhile, requirements approaches can be integrated to specify the domain requirements. Among them, viewpoint-oriented approaches stand out for their simplicity and efficiency in organizing requirements. However, none of them deals with the modularization of crosscutting subjects, which can be spread over several requirements documents. In this work we use a viewpoint-oriented approach, extended with aspects, to describe domain requirements. Aspect-oriented domain analysis (AODA) is a growing area of interest, as it addresses the problem of specifying crosscutting properties at the domain analysis level; its goal is better reuse at this abstraction level through the advantages of aspect orientation. The aim of this work is to propose an approach that extends domain analysis with aspects, using both feature modeling and viewpoints
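    The crosscutting problem the dissertation targets can be shown with a toy example (the viewpoint and aspect names below are hypothetical, not from the dissertation): a concern such as logging would otherwise be scattered over every viewpoint's requirements, whereas as an aspect it is specified once and woven in.

```python
# Each viewpoint organizes the requirements of one stakeholder perspective.
viewpoints = {
    "librarian": ["register loan", "register return"],
    "reader": ["search catalogue", "reserve book"],
}

# A crosscutting concern, modularized once as an aspect rather than
# repeated inside every viewpoint:
aspects = {"logging": "log every operation"}

def weave(viewpoints, aspects):
    """Attach each aspect's requirement to every viewpoint it crosscuts."""
    return {
        vp: reqs + [f"{req} [{name}]" for name, req in aspects.items()]
        for vp, reqs in viewpoints.items()
    }

woven = weave(viewpoints, aspects)
print(woven["reader"])
# ['search catalogue', 'reserve book', 'log every operation [logging]']
```

    Changing the logging requirement then means editing one aspect instead of hunting through every requirements document, which is precisely the reuse benefit AODA seeks at the domain level.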

    Automatic performance optimisation of component-based enterprise systems via redundancy

    Get PDF
    Component technologies, such as J2EE and .NET have been extensively adopted for building complex enterprise applications. These technologies help address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emerging performance of software systems that are assembled from distinct components. Static component testing and tuning procedures provide insufficient performance guarantees for components deployed and run in diverse assemblies, under unpredictable workloads and on different platforms. Often, there is no single component implementation or deployment configuration that can yield optimal performance in all possible conditions under which a component may run. Manually optimising and adapting complex applications to changes in their running environment is a costly and error-prone management task. The thesis presents a solution for automatically optimising the performance of component-based enterprise systems. The proposed approach is based on the alternate usage of multiple component variants with equivalent functional characteristics, each one optimized for a different execution environment. A management framework automatically administers the available redundant variants and adapts the system to external changes. The framework uses runtime monitoring data to detect performance anomalies and significant variations in the application's execution environment. It automatically adapts the application so as to use the optimal component configuration under the current running conditions. An automatic clustering mechanism analyses monitoring data and infers information on the components' performance characteristics. System administrators use decision policies to state high-level performance goals and configure system management processes. 
    A framework prototype has been implemented and tested for automatically managing a J2EE application. The results obtained demonstrate the framework's capability to manage a software system successfully without human intervention. The management overhead induced during normal system execution and through management operations indicates the framework's feasibility
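    The core redundancy idea can be sketched as follows (variant names and workload thresholds are illustrative, not from the thesis): several functionally equivalent variants are each profiled as optimal for a range of running conditions, and runtime monitoring data selects the variant matching the current load.

```python
variants = [
    # (variant name, workload range in requests/sec for which it is optimal,
    #  as established offline by profiling each variant)
    ("in-memory-cache-variant", (0, 200)),
    ("pooled-connection-variant", (200, 1000)),
    ("sharded-variant", (1000, float("inf"))),
]

def select_variant(observed_rps):
    """Pick the variant whose profiled workload range covers the load
    currently reported by runtime monitoring."""
    for name, (lo, hi) in variants:
        if lo <= observed_rps < hi:
            return name
    raise ValueError("no variant covers this load")

print(select_variant(50))    # in-memory-cache-variant
print(select_variant(1500))  # sharded-variant
```

    The framework's harder problems, detecting the anomaly that triggers reselection and swapping variants without disrupting the running application, sit around this selection step.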