
    An Integrated Engineering-Computation Framework for Collaborative Engineering: An Application in Project Management

    Today's engineering applications suffer from a severe integration problem. Engineering, as an end-to-end process, consists of a myriad of individual, often complex, tasks. Most computer tools support particular engineering tasks, but the output format of one tool differs from that of the others, so users must re-enter the relevant information in the format required by the next tool. Moreover, the development of a new product or process usually involves several teams of engineers with different backgrounds and responsibilities, for example mechanical engineers, cost estimators, manufacturing engineers, quality engineers, and project managers. These engineers need tools to share technical and managerial information and to access instantly the latest changes made by any team member, so that the impact of those changes on all disciplines (cost, time, resources, etc.) can be determined right away. In other words, engineers need a truly collaborative environment for achieving their common objective: completing the product/process design project in a timely, cost-effective, and optimal manner. This thesis presents a new framework that integrates the capabilities of four commercial software packages, Microsoft Excel™ (spreadsheet), Microsoft Project™ (project management), What's Best! (an optimization add-in), and Visual Basic™ (programming language), with a state-of-the-art object-oriented database (knowledge medium), InnerCircle2000™, and applies it to the Cost-Time Trade-Off problem in project networks. The result was a vastly superior solution over the conventional one in terms of data handling, completeness of the solution space, and suitability for a collaborative engineering-computation environment.
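The Cost-Time Trade-Off problem named above can be illustrated with a minimal greedy "crashing" sketch. This is plain Python under simplifying assumptions (a serial activity chain, so every activity is critical), not the thesis's Excel/What's Best! implementation, and the activity data are invented:

```python
# Hedged sketch: greedy crashing for the cost-time trade-off on a serial
# activity chain. Each activity has a normal duration, a minimum (crash)
# duration, and a cost per unit of time crashed.

def crash_project(activities, deadline):
    """activities: {name: (normal_dur, crash_dur, cost_per_unit)}.
    Greedily crash the cheapest crashable activity one unit at a time
    until the deadline is met or nothing can be shortened further.
    Returns (final_duration, extra_cost)."""
    durations = {a: nd for a, (nd, cd, c) in activities.items()}
    extra_cost = 0.0
    while sum(durations.values()) > deadline:
        # activities that can still be shortened, cheapest slope first
        candidates = [(c, a) for a, (nd, cd, c) in activities.items()
                      if durations[a] > cd]
        if not candidates:
            break  # deadline unreachable even fully crashed
        cost, act = min(candidates)
        durations[act] -= 1
        extra_cost += cost
    return sum(durations.values()), extra_cost

acts = {"design": (5, 3, 100.0), "build": (8, 5, 60.0), "test": (4, 3, 150.0)}
print(crash_project(acts, 14))  # crashes "build" 3 units: (14, 180.0)
```

A real project network would require critical-path analysis before each crashing step, which is where an optimization add-in earns its keep.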

    Synthesis of Specifications and Refinement Maps for Real-Time Object Code Verification

    Formal verification methods have been shown to be very effective in finding corner-case bugs and ensuring the safety of embedded software systems. Formal verification requires a specification, typically a high-level mathematical model that defines the correct behavior of the system to be verified. However, embedded software requirements are typically described in natural language, and transforming those requirements into formal specifications remains a significant gap. While there is some work in this area, we propose solutions that address the gap in the context of refinement-based verification, a class of formal methods that has been shown to be effective for embedded object code verification. The proposed approach addresses both functional and timing requirements and has been demonstrated on safety requirements for the software control of infusion pumps. The next step in the verification process is to develop the refinement map, a function that relates an implementation state (here, the state of the object code program to be verified) to a specification state. Constructing refinement maps requires deep understanding of, and intuition about, both the specification and the implementation, and doing so manually has proven very difficult; to overcome this obstacle, their construction should be automated. As a first step toward automation, we manually developed refinement maps for various safety properties concerning the software control of infusion pumps and identified generic templates for their construction. We then propose procedures for synthesizing refinement maps for functional and timing specifications. This work develops a process that significantly increases the automation in the generation of these refinement maps, which can then be used for refinement-based verification. The automated procedure has been successfully applied to the transformed safety requirements from the first part of our work. The approach is based on the identified generic refinement map templates, which can be extended in the future as applications require.
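The idea of a refinement map can be shown with a toy sketch (illustrative Python, not the thesis's actual tooling or proof obligations): a function projects each object-code state onto a specification state, and every implementation step must either stutter (leave the mapped state unchanged) or match exactly one specification step:

```python
# Hedged sketch of a refinement map and a simple stuttering-simulation check.

def spec_step(s):
    """Abstract specification: a counter that increments by 1."""
    return s + 1

def refinement_map(impl_state):
    """Project an implementation state (registers + pc) onto the
    specification state. Here the counter is assumed to live in r0."""
    return impl_state["r0"]

def check_refinement(trace):
    """Each implementation step must either stutter (mapped state
    unchanged) or correspond to one specification step."""
    for prev, curr in zip(trace, trace[1:]):
        a, b = refinement_map(prev), refinement_map(curr)
        if b != a and b != spec_step(a):
            return False
    return True

# The object code takes two steps (fetch, then increment) per spec step,
# so half of the implementation steps are stutters.
trace = [{"pc": 0, "r0": 0}, {"pc": 1, "r0": 0},
         {"pc": 2, "r0": 1}, {"pc": 3, "r0": 1}]
print(check_refinement(trace))  # True
```

A real refinement proof also needs a rank function to rule out infinite stuttering; this sketch omits that obligation.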

    Web page performance analysis

    Computer systems play an increasingly crucial and ubiquitous role in human endeavour by carrying out or facilitating tasks and providing information and services. How much work these systems can accomplish, within a certain amount of time, using a certain amount of resources, characterises the systems’ performance, which is a major concern when the systems are planned, designed, implemented, deployed, and evolved. As one of the most popular computer systems, the Web is inevitably scrutinised in terms of performance analysis that deals with its speed, capacity, resource utilisation, and availability. Performance analyses for the Web are normally done from the perspective of the Web servers and the underlying network (the Internet). This research, on the other hand, approaches Web performance analysis from the perspective of Web pages. The performance metric of interest here is response time, which is studied as an attribute of Web pages rather than purely a result of network and server conditions. A framework consisting of measurement, modelling, and monitoring (3Ms) of Web pages, revolving around response time, is adopted to support the performance analysis activity. The measurement module enables Web page response time to be measured and supports the modelling module, which in turn provides references for the monitoring module; the monitoring module estimates response time. The three modules are used in the software development lifecycle to ensure that developed Web pages deliver at worst satisfactory response time (within a maximum acceptable time), and preferably much better, thereby maximising the efficiency of the pages. The framework proposes a systematic way to understand response time as it relates to specific characteristics of Web pages and explains how individual Web page response time can be examined and improved.
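The measurement/monitoring split can be sketched as follows. This is an illustrative assumption, not the thesis's framework; the 2-second threshold is invented, and a slow local callable stands in for an actual page fetch:

```python
# Hedged sketch: time one fetch and check it against a maximum
# acceptable response time.
import time

def measure_response_time(fetch):
    """Return the elapsed wall-clock time of one fetch() call, in seconds."""
    start = time.perf_counter()
    fetch()
    return time.perf_counter() - start

def is_satisfactory(response_time, max_acceptable=2.0):
    """Monitoring check: is the response time within the acceptable bound?"""
    return response_time <= max_acceptable

# A real measurement would fetch a page, e.g.
#   t = measure_response_time(lambda: urllib.request.urlopen(url).read())
# Here a short sleep stands in for the fetch:
t = measure_response_time(lambda: time.sleep(0.05))
print(is_satisfactory(t))  # True
```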

    Software for malicious macro detection

    This work gives a detailed account of the development of a software tool for detecting the Emotet virus in Microsoft Office files. Emotet is a virus that has been wreaking havoc mainly in the business environment, from its beginnings as a banking Trojan to the present day. This polymorphic family has caused evident, incalculable, and global disruption to business activity without discriminating by corporate typology, affecting companies of any size or sector, as well as government agencies and citizens at large. Two main obstacles hinder the detection of this virus and are intrinsic to it: the obfuscation of its macros and its polymorphism. Our tool addresses precisely these two obstacles, analysing the features of the macros and building a neural network that uses machine learning to recognise detection patterns and decide whether a sample is malicious. Through an in-depth analysis of Emotet, our goal is to extract a set of features from the malicious macros and build a machine learning model for their detection. After the feasibility study, design, and implementation of the project, the results endorse the intention of detecting Emotet from static analysis alone through the application of machine learning techniques. The tests performed on the final model show an accuracy of 84% with only 3% false positives during detection.
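The static feature-extraction step can be sketched as follows. The keyword list, the entropy feature, and the sample macro are illustrative assumptions, not the thesis's actual feature set; the extracted vector would feed the neural network classifier:

```python
# Hedged sketch: static features from VBA macro source text.
import math
from collections import Counter

# Illustrative suspicious-API keywords, not an authoritative list.
SUSPICIOUS = ["Shell", "CreateObject", "AutoOpen", "Chr", "Environ"]

def entropy(text):
    """Shannon entropy of the macro text; obfuscated payloads score high."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def extract_features(macro_src):
    """One flat feature dict per macro, ready for a classifier."""
    return {
        "length": len(macro_src),
        "entropy": round(entropy(macro_src), 2),
        **{f"kw_{k}": macro_src.count(k) for k in SUSPICIOUS},
    }

macro = 'Sub AutoOpen()\n  Set o = CreateObject("WScript.Shell")\nEnd Sub'
feats = extract_features(macro)
print(feats["kw_AutoOpen"], feats["kw_CreateObject"])  # 1 1
```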

    Design and Implementation of Automated Mapping System: With Emphasis on Image Handling

    This thesis is concerned with the design and implementation of an automated mapping system. Automation of the mapping process is a major concern in the geomatics discipline, as the role of geospatial information becomes more important in the 'information society'. Automation has been successful in some tasks of the mapping process, whereas it faces difficulties in other, more complex tasks. In all of these automation efforts, software development is inevitably involved. Software design should be carried out prior to implementation, but at present it is not widely practiced by research organisations in the geomatics discipline; research into systematic software design for the discipline will prove beneficial for both the research organisations and the end users in the long run. An Object Oriented software design using the Unified Modeling Language (UML) is presented in this thesis for some of the subsystems of the automated mapping system, namely: the image acquisition subsystem; the positioning subsystem; and the image point referencing subsystem. For each of these subsystems, the domain knowledge and the software implementation aspects were investigated and analysed. Based on the results of this analysis, the software design for the subsystems was produced using UML. The design was then implemented in the C++ programming language and tested with practical data. This study concludes that the software design enhanced the implementation effort; for example, classes already implemented in the positioning subsystem were reused in the image point referencing subsystem with only minor changes. It is also emphasized that Object Oriented design using UML should be adopted by research organisations of the geomatics discipline. The software design of the automated mapping system presented in this thesis lays a foundation for further development of software that could be effectively used for geospatial research.

    A Software Architecture Framework for Home Service Robots

    In recent years, home service robots have found a wide range of potential applications, such as home security, patient care, and cleaning. When developing robot software, one of the main challenges is building the software architectural model. Software architecture is used throughout the software life-cycle to support analysis, guide development, and act as a roadmap for designers and implementers. Though many software architectures for robotic systems have been defined, none has achieved all of its objectives, owing to the great variability among system behaviors, and systematic techniques for deriving a robot's software architecture from its requirements model are still lacking. In this paper, we present a generic architectural model for home service robots that supports software architecture design and preserves a continuous architectural view throughout the development cycle. While avoiding the predominant decomposition problems, our approach allows the architectural components to be integrated in a systematic and comprehensive way for efficient maintainability and reusability.

    Concept and implementation of a pluggable framework for storage, transformation, and analysis of large-scale enterprise topology graphs

    The advent of on-demand cloud computing offerings is rapidly increasing the complexity of IT systems. Enterprise Topology Graphs depict all the components of enterprise IT and their relations, restoring insight into the enterprise IT landscape. The focus of this work is the research, design, and implementation of a framework to store and manage these graphs efficiently. The difficulties are the enormous graph sizes and the large amount of meta-information, which demand a careful design to deliver a well-performing solution. The framework is based on a graph database to store Enterprise Topology Graphs efficiently and offers a pluggable architecture so that its functionality can be extended, e.g. with transformation operations on the graphs. With the reference implementation of this framework, the complex structures of enterprise IT systems can be stored, managed, and easily manipulated to gain a more detailed view of the IT components and their dependencies.
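The pluggable-architecture idea can be sketched with a minimal in-memory registry; the plugin names, the dict-based graph structure, and the example topology are illustrative assumptions, not the framework's actual API:

```python
# Hedged sketch: transformation operations register themselves against a
# simple in-memory enterprise topology graph.

PLUGINS = {}

def plugin(name):
    """Decorator: register a graph transformation under a name."""
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("strip_type")
def strip_type(graph, node_type):
    """Remove all nodes of a given type, plus their incident edges."""
    keep = {n for n, attrs in graph["nodes"].items()
            if attrs["type"] != node_type}
    return {
        "nodes": {n: a for n, a in graph["nodes"].items() if n in keep},
        "edges": [(u, v) for u, v in graph["edges"]
                  if u in keep and v in keep],
    }

etg = {
    "nodes": {"web": {"type": "app"}, "db": {"type": "db"},
              "vm1": {"type": "vm"}},
    "edges": [("web", "db"), ("web", "vm1"), ("db", "vm1")],
}
smaller = PLUGINS["strip_type"](etg, "vm")
print(sorted(smaller["nodes"]), smaller["edges"])  # ['db', 'web'] [('web', 'db')]
```

A registry like this lets new transformations ship as separate modules, which is the essence of the pluggable design described above; the real framework would back the graph with a graph database rather than dicts.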

    Mapping the economic potential of wave energy: grid connected and off-grid systems

    In recent times there has been a surge in renewable energy investment, as costs fall and the full danger of global warming is realised by policymakers. As well as more established industries, like wind and solar power, there is also high interest in pre-commercial technologies with significant potential. Wave energy fits into this category and has a number of advantages that make it a subject of ongoing research and industrial activity. An energy dense resource, it is easier to forecast than wind and fits the seasonal demand profile well. A global capacity of the order of hundreds of gigawatts has been estimated, with a particularly strong resource in the UK. Despite these characteristics the industry has yet to reach a commercial level. No company has been able to demonstrate consistent energy production at a cost effective rate. Viable project locations must balance an energetic resource with conditions that allow devices to be accessed for maintenance, while also trying to minimise system costs. While utility scale farms are seen as the long term future for the technology, off-grid hybrid systems could supply cheaper and dispatchable energy at local levels. This market, while smaller, is made up of more costly forms of energy so provides a better entry market. Conventional economic analyses for both types of systems tend to be performed for single locations at a time. While useful for benchmarking the technology, these methods are of limited use for site scoping as energy production and costs can show large variation over relatively short distances (<10 km). This research thesis describes a geospatial economic model that has been created to address the above issues. It was developed in collaboration with Albatern, a wave energy developer, who provided their expertise and helped to guide the research activities. 
    The targeted application was to allow economic assessment of Albatern's "WaveNET" device, either as a power station for grid connection or as an off-grid hybrid solution for aquaculture applications. The model has a number of aspects that are of significant interest to the industry, including the computational model design and the geographic calculation of energy production, costs, and Levelised Cost of Energy (LCOE). The spatial approach is valuable because a whole area can be evaluated at a time, indicating deployment locations particularly suitable for the technology at hand. Sensitivity analysis is also easily carried out, to build understanding of the cost drivers at specific locations. The theory underpinning the model and its implementation is described. It is then demonstrated with two representative case studies, considering grid-connected and off-grid WaveNET device demonstrators on the West Coast of Scotland. The results show the strengths of the approach as a way of identifying economically viable hotspots and the main cost drivers. For the grid-connected case, examining an area of 150 by 250 km, the model was able to identify a significant LCOE hotspot between the Isle of Skye and the Outer Hebrides. The potential for the device to power a fish farm, when combined with a battery bank and diesel generator, was then analysed. Two regions were examined and real fish farm locations considered. The output results allow easy comparison between the two system types, emphasising the advantages of investigating both to inform business activity.
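The per-cell LCOE calculation that such a geospatial model repeats over every grid cell can be sketched as follows; the cost, energy, lifetime, and discount figures are invented for illustration and are not Albatern's or the thesis's numbers:

```python
# Hedged sketch: Levelised Cost of Energy for one grid cell, as
# discounted lifetime costs divided by discounted lifetime energy.

def lcoe(capex, opex_per_year, energy_per_year_mwh, years, discount_rate):
    """LCOE in currency units per MWh."""
    disc_costs = capex + sum(opex_per_year / (1 + discount_rate) ** t
                             for t in range(1, years + 1))
    disc_energy = sum(energy_per_year_mwh / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
    return disc_costs / disc_energy

# One illustrative cell: 5 M capex, 0.2 M/yr opex, 8000 MWh/yr,
# 20-year lifetime, 8% discount rate.
print(round(lcoe(5e6, 2e5, 8000, 20, 0.08), 1))  # ≈ 88.7 per MWh
```

Mapping this function over a raster of per-cell energy yields and distance-dependent costs is what turns a single-site benchmark into the spatial hotspot analysis described above.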

    Design and Simulation of RFID-Enabled Aircraft Reverse Logistics Network via Agent-Based Modeling

    Reverse Logistics (RL) has become increasingly popular in various industries, especially the aerospace industry, over the past decade, because RL can be a profitable and sustainable business strategy for many organizations. However, building an efficient recovery network requires an appropriate logistics system for flows of new, used, and recovered products. In addition, a successful RL network requires a reliable monitoring and control system: a key factor for the success and effectiveness of an RL system is a real-time monitoring technology such as radio frequency identification (RFID). An RFID system can evaluate and analyze RL performance in a timely fashion so that, in the case of deviation in any area of RL, appropriate corrective actions can be taken quickly. An automated data-capturing system like RFID and computer simulation techniques such as agent-based (AB), system dynamics (SD), and discrete event (DE) simulation provide a reliable platform for effective RL tracking and control, as they can respectively decrease the time needed to obtain data and simulate various scenarios to select the best corrective actions. The functionality of the RL system can be noticeably elevated by integrating the two, and each simulation approach illuminates the RL network from a different aspect. Therefore, in this study, after designing and constructing the RL system from a real case study at Bell Helicopter with the aid of the Unified Modeling Language (UML), three simulation techniques were applied to the model. The results of all three simulation approaches (AB, SD, and DE) were then compared under two scenarios: RFID-enabled RL and RL without RFID. The computer simulation models were developed using the AnyLogic 7.1 software.
    The results of the research show that by exploiting RFID technology, the total disassembly time of a single helicopter is decreased. A comparison of all three simulation methods was performed as well.
    Keywords: reverse logistics (RL), RFID, aerospace industry, agent-based simulation, system dynamics simulation, discrete event simulation, AnyLogic