
    An artefact repository to support distributed software engineering

    The Open Source Component Artefact Repository (OSCAR) system is a component of the GENESIS platform, designed to inter-operate non-invasively with workflow management systems, development tools and existing repository systems in support of a distributed software engineering team working collaboratively. Every artefact possesses a collection of associated meta-data, both standard and domain-specific, presented as an XML document. Within OSCAR, artefacts are made aware of changes to related artefacts through notifications, allowing them to update their own meta-data actively, in contrast to other software repositories where users must perform every modification, however trivial, themselves. This recording of events, including user interactions, provides a complete picture of an artefact's life from creation to (eventual) retirement, with the intention of supporting collaboration both amongst the members of the software engineering team and agents acting on their behalf.
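The notification mechanism the abstract describes can be sketched as an observer pattern: when one artefact records an event, related artefacts append to their own XML meta-data without user intervention. This is a minimal illustration only; the class and method names are assumptions, not the actual OSCAR API.

```python
from xml.etree import ElementTree as ET

class Artefact:
    """An artefact carrying XML meta-data (names are illustrative)."""
    def __init__(self, name):
        self.name = name
        self.meta = ET.Element("metadata")
        self.related = []            # artefacts to notify on change

    def relate(self, other):
        self.related.append(other)

    def record_event(self, kind, detail):
        ev = ET.SubElement(self.meta, "event", kind=kind)
        ev.text = detail
        for other in self.related:   # push notifications to related artefacts
            other.on_related_change(self, kind)

    def on_related_change(self, source, kind):
        # the artefact updates its own meta-data; no user action needed
        ET.SubElement(self.meta, "related-change",
                      source=source.name, kind=kind)

design = Artefact("design-doc")
code = Artefact("module.c")
design.relate(code)
design.record_event("edit", "updated interface section")
print(ET.tostring(code.meta).decode())
```

Here the change to `design-doc` is reflected in the meta-data of `module.c` automatically, which is the contrast the abstract draws with repositories where users perform every modification by hand.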

    Development of an ontology supporting failure analysis of surface safety valves used in Oil & Gas applications

    Project developed within the framework of the 'European Project Semester' programme. The project describes how to apply Root Cause Analysis (RCA), in the form of a Failure Mode Effect and Criticality Analysis (FMECA), to the hydraulically actuated Surface Safety Valves (SSVs) of Xmas trees in oil and gas applications, in order to predict the occurrence of failures and implement preventive measures such as Condition and Performance Monitoring (CPM) to extend the life-span of a valve and decrease maintenance downtime. In the oil and gas industry, valves account for 52% of failures in the system. When these failures happen unexpectedly they can cause serious problems: downtime of the oil well quickly becomes expensive, unscheduled maintenance takes considerable extra time, and the lead-time for replacement parts can be up to 6 months. This is why being able to predict failures beforehand can bring great benefits to a company. To determine the best course of action for predicting failures, a FMECA report is created. This is an analysis in which all possible failures of all components are catalogued and given a Risk Priority Number (RPN), which has three variables: severity, detectability and occurrence. Each of these is given a rating between 0 and 10, and the variables are then multiplied with each other, resulting in the RPN. The components with an RPN above an acceptable risk level are further investigated to see how their failures can be detected beforehand and how the risk they pose can be mitigated. Applying FMECA to the SSV means breaking the system down into its components and determining their function, dependencies and possible failures. To this end, the SSV is broken up into three sub-systems: the valve, the actuator and the hydraulic system. The hydraulic system is the sub-system of the SSV responsible for containing, transporting and pressurizing the hydraulic fluid and, in turn, the actuator.
    It also contains all the safety features, such as pressure pilots, and a trip system in case a problem is detected in the oil line. The actuator is, as the name implies, the sub-system which opens and closes the valve. It is made up of a number of parts such as a cylinder, a piston and a spring, interconnected in a number of ways to allow the actuator to perform its function. The valve is the part of the system which actually interacts with the oil line by opening and closing. Like the actuator, this sub-system is broken down into a number of parts which work together to perform its function. After breaking down and defining each sub-system on a functional level, a model was created using a functional block diagram. Each component also allows the dependencies and interactions between the different components to be defined, along with a failure diagram for each component. This model integrates the three sub-systems back into one, creating a complete picture of the entire system which can then be used to determine the effects of component failures on the rest of the system. With this model completed, a comprehensive FMECA report was created and the different possible CPM solutions for mitigating the largest risks were tested.
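The RPN calculation described above is simply the product of the three ratings. A minimal sketch, in which the component names, ratings and the acceptable-risk threshold are illustrative assumptions rather than values from the report:

```python
def rpn(severity, detectability, occurrence):
    """Risk Priority Number: the product of the three 0-10 ratings."""
    return severity * detectability * occurrence

# hypothetical component ratings (severity, detectability, occurrence)
components = {
    "actuator spring": (8, 6, 4),
    "pressure pilot":  (7, 3, 2),
    "valve seat":      (9, 5, 5),
}

THRESHOLD = 125  # illustrative acceptable-risk cutoff
at_risk = {name: rpn(*r) for name, r in components.items()
           if rpn(*r) > THRESHOLD}
print(at_risk)  # components flagged for further investigation / CPM
```

Components above the threshold are the ones the FMECA flags for detection and mitigation measures.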

    Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics

    The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focused on developing new and more efficient approaches to solving Maxwell's equations, and interest in CEM applications continues to grow. Several problems that were hard to tackle a few years ago can now be addressed easily, thanks to the reliability and flexibility of new technologies together with increased computational power. This technological evolution opens the possibility of addressing large and complex tasks. Many of these applications aim to simulate electromagnetic behavior, for example the input impedance and radiation pattern in antenna problems, or the Radar Cross Section in scattering applications. Problems whose solution requires high accuracy instead need full-wave analysis techniques, e.g. in the virtual prototyping context, where the objective is to obtain reliable simulations in order to minimize the number of measurements and, as a consequence, their cost. Besides, other tasks require the analysis of complete structures (which include a high number of details) by directly simulating a CAD model. This approach relieves the researcher of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, it implies: (a) a high computational effort, due to the increased number of degrees of freedom, and (b) a worsening of the spectral properties of the linear system during complex analysis.
    The above considerations underline the need to identify appropriate information technologies that ease the achievement of a solution and speed up the required computations. The authors' analysis and expertise suggest that Grid Computing techniques can be very useful for these purposes. Grids appear mainly in high-performance computing environments, where hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could only be addressed sequentially or by using supercomputers. Grid Computing was developed to process enormous amounts of data, and it enables large-scale resource sharing to solve problems by exploiting distributed scenarios. The main advantage of the Grid lies in parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution is computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses these requirements (see Section 3.4 for details). In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to the nodes that belong to the Grid. The set of nodes is composed of physical machines and virtualized ones; this enables great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient Grid usage, since the presented architecture can be used by different users running different applications.
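The split-then-distribute pattern the abstract relies on (independent sub-problems mapped onto parallel workers, partial results recombined) can be sketched with a worker pool standing in for the Grid nodes. The "solve" step here is a trivial placeholder, not the DD block solver from Matekovits et al.:

```python
from multiprocessing import Pool

def solve_block(block):
    """Placeholder for solving one sub-domain produced by the
    block generation step; here it just sums its unknowns."""
    return sum(block)

if __name__ == "__main__":
    # split the original task into independent sub-problems (blocks)
    unknowns = list(range(12))
    blocks = [unknowns[i:i + 4] for i in range(0, len(unknowns), 4)]

    # distribute the blocks to workers, standing in for Grid nodes
    with Pool() as pool:
        partial = pool.map(solve_block, blocks)

    print(sum(partial))  # recombine the partial results
```

Because the blocks are independent, adding nodes (physical or virtual) shortens the wall-clock time without changing the recombined result, which is the advantage the chapter builds on.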

    GIS for Public Health Assessment: A CASPER Methodology Framework

    One of the greatest challenges in managing emergencies is to determine the impact of the disaster and to respond effectively to the primary needs of those affected. The Community Assessment for Public Health Emergency Response (CASPER) is a specific methodology designed to quickly and effectively estimate the health status and basic needs that must be addressed by the health agency. The present study discusses the application of a complete GIS-based framework for the improvement of each of the main phases of the CASPER methodology: preparation, conduct, data analysis, and report writing. The results show that the GIS approach to implementing CASPER can significantly reduce the time required for data collection and processing, improve the quality of the collected data, and allow agencies to make real-time decisions based on situational awareness of the community.

    Application for attendees of a talk show event

    Final degree project in the Degree of Video Game Design and Development. Code: VJ1241. Academic year: 2020/2021. This document presents the technical proposal for a final degree project in the Degree of Videogame Design and Development. The work carried out during the development of the project aims to create a mobile application for the iOS platform which allows those attending a talk-show-type event to participate actively. This is to be achieved by creating an in-app gamification system offering a messaging wall, a program feed, and live quizzes. Within the application, attendees will have a username, email (for access) and password, a user profile, an agenda, a list of sponsors, a list of guests and attendees, a wall where they can leave comments, a place where the program team can share stories, and a leaderboard ranking the attendees, whose participation earns them points.
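The participation-points leaderboard could work along these lines; the point values and action names below are illustrative assumptions, not the application's actual design:

```python
from collections import defaultdict

# assumed point values for gamified actions (illustrative only)
POINTS = {"comment": 1, "quiz_answer": 2, "quiz_correct": 5}

scores = defaultdict(int)

def record(attendee, action):
    """Credit an attendee's score for one gamified action."""
    scores[attendee] += POINTS[action]

record("ana", "comment")
record("ana", "quiz_correct")
record("joan", "quiz_answer")

# leaderboard: attendees ranked by accumulated points
leaderboard = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(leaderboard)  # [('ana', 6), ('joan', 2)]
```

Any scoring scheme of this shape lets the wall, feed and quizzes all feed the same ranking, which is what ties the gamification features together.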

    A methodology for the efficient computer representation of dynamic power systems: application to wind parks

    This contribution presents a methodology to efficiently obtain the numerical and computer solution of dynamic power systems with high penetration of wind turbines. Due to the excessive computational load required to solve the abc models that represent the behavior of the wind turbines, a parallel processing scheme is proposed to enhance the solution of the overall system. Case studies are presented which demonstrate the effectiveness and applications of the proposed methodology.

    Tools for managing knowledge in SMEs and laggard regions

    As the biggest producers and employers in the economic system, SMEs in laggard regions generally need to keep pace with the expanding rate of technological change taking place. To achieve that goal, adequate tools must be developed to generate a sufficient transfer of knowledge to those regions and SMEs. The paper analyzes past development and the current situation of Technological Institutes (TIs) as a feasible tool for transferring and managing technical change in SMEs. Analyzing their characteristics and proposing how they should relate to the industrial fabric of the region is the main output of the paper.

    Euronet Lab, A Cloud V-Lab Environment

    In this paper we present a proposal for the creation of a European V-labs web space. In essence, it would result in an open online laboratory with a primarily practical nature. In this laboratory, students will have the opportunity to develop skills in the “know-how-to-do” area, enabling them to conduct a series of practical experiments in a “try-out” philosophy that will substantiate and consolidate the knowledge the students acquired in lectures. Such resources may well not be available in all universities and institutions, and specifically not in the university where the student is enrolled. This environment gives substance to the principle stated in the Bologna and Prague declarations that “the teaching process is therefore student-centered”, strengthening the final pedagogical aim of “learning to learn”, with lifelong learning assumed as an indispensable stage. What we propose is the creation of a virtual e-learning environment in which a series of virtual labs in many areas of electronics, automation and robotics are available. In this environment, any student from any of these universities can schedule an experiment at any institution that belongs to the cloud, and can thus carry out the work at any available time, using technical resources that may or may not be available at the student's own university. With the support of RAADRI.
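The cross-institution scheduling the proposal hinges on amounts to booking a (lab, time-slot) pair so that no two students claim the same remote resource. A minimal sketch, in which the class, labels and slot format are assumptions for illustration, not the Euronet Lab design:

```python
class VLabScheduler:
    """Hypothetical booking registry for shared remote labs."""
    def __init__(self):
        self.bookings = {}   # (lab, slot) -> student

    def book(self, lab, slot, student):
        """Reserve a lab slot; returns False if already taken."""
        key = (lab, slot)
        if key in self.bookings:
            return False     # slot already claimed at that institution
        self.bookings[key] = student
        return True

sched = VLabScheduler()
ok = sched.book("robotics-lab@uni-A", "2021-05-01T10:00", "student-1")
clash = sched.book("robotics-lab@uni-A", "2021-05-01T10:00", "student-2")
print(ok, clash)  # True False
```

A registry of this kind is what lets a student at one university perform an experiment on equipment physically located at another, which is the core of the proposed cloud.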