
    An Approach to Evaluate Software Effectiveness

    The Air Force Operational Test and Evaluation Center (AFOTEC) is tasked with evaluating the operational effectiveness of new systems for the Air Force. Currently, the software analysis team within AFOTEC has no methodology to directly address the effectiveness of the software portion of these new systems. This research develops a working definition for software effectiveness, then outlines an approach to evaluate it: the Software Effectiveness Traceability Approach (SETA). Effectiveness is defined as the degree to which the software requirements are satisfied and is therefore application-independent. With SETA, requirements satisfaction is measured by the degree of traceability throughout the software development effort. A degree of traceability is determined for specific pairs of software life-cycle phases, such as from software requirements to high-level design, or from low-level design to code. The degrees of traceability are then combined into an overall software effectiveness value. It is shown that SETA can be implemented in a simplified database, and basic database operations are described to retrieve traceability information and quantify the software's effectiveness. SETA is demonstrated using actual software development data from a small software component of the avionics subsystem of the C-17, the Air Force's newest transport aircraft.
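
    A minimal sketch of how SETA's traceability scoring could be expressed in a simplified database; the table layout, phase names, and the unweighted average used to combine the per-pair degrees are illustrative assumptions rather than the published definitions.

```python
import sqlite3

# Hypothetical trace table: one row per artifact in a source phase, recording
# whether it is traced forward to an artifact in the target phase.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trace (
        source_phase TEXT,   -- e.g. 'requirements', 'low_level_design'
        target_phase TEXT,   -- e.g. 'high_level_design', 'code'
        item_id      TEXT,   -- artifact identifier in the source phase
        traced       INTEGER -- 1 if a link to the target phase exists, else 0
    )
""")
conn.executemany("INSERT INTO trace VALUES (?, ?, ?, ?)", [
    ("requirements", "high_level_design", "REQ-1", 1),
    ("requirements", "high_level_design", "REQ-2", 0),
    ("low_level_design", "code", "LLD-7", 1),
    ("low_level_design", "code", "LLD-8", 1),
])

def phase_pair_traceability(src, dst):
    """Fraction of source-phase artifacts traced forward to the target phase."""
    traced, total = conn.execute(
        "SELECT SUM(traced), COUNT(*) FROM trace "
        "WHERE source_phase = ? AND target_phase = ?", (src, dst)).fetchone()
    return traced / total if total else 0.0

pairs = [("requirements", "high_level_design"), ("low_level_design", "code")]
degrees = [phase_pair_traceability(s, d) for s, d in pairs]

# Assumed combination rule: a plain average of the per-pair degrees.
effectiveness = sum(degrees) / len(degrees)
print(f"per-pair degrees: {degrees}, overall effectiveness: {effectiveness:.2f}")
```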

    Development of Computational Analysis Criteria Based on Laser Sensor Device to Identify the Surface Status of Micro-structured Coatings for Aerospace Industry

    In 2009, the European Union decided that CO2 emissions from the shipping and aerospace industries must be reduced by 10% and 20%, respectively. To fulfill these requirements, aerospace industries are looking for new approaches to overcome this challenge and gain technological advantages. The use of bio-inspired micro-structured coatings, e.g. riblet structures based on shark skin, is one of the new technologies being applied. In this context, quality control in the manufacturing and maintenance of structured coatings is of extreme relevance for aerospace industries to ensure optimal surface structuring, to assist maintenance, to predict lifetime, and consequently to decide when to perform surface renewal. These requirements are met with an experimental fast-sampling sensor setup using non-contacting laser probing. The theory behind the laser sensor device is based on Huygens-Fresnel diffraction theory combined with ray-tracing calculation methods. A computational tool was developed to perform analysis and treatment of the output data provided by the laser sensor. This dissertation presents a methodology to evaluate the calculations implemented in this computational tool, which is used to interpret the obtained diffraction patterns and to simulate the structured surface status from a given pattern. Thus, the software is developed to generate consistent information for analysis and decision-making regarding the surface structure and its maintenance. The software is developed using object-oriented programming (OOP) and is integrated with database management systems (DBMS). Optics theory is discussed and applied to the specific target, graphical rendering of pre-determined geometric micro-structured coatings is implemented, and the fundamental outputs needed to evaluate the real status of the surface are described and treated to form a reliable knowledge database. Going beyond the experimental requirements and building a reliable theoretical base that yields consistent outputs for the proposed technique, this dissertation identifies analysis criteria to be applied in further studies and applies well-established theory as a starting point for new technologies, opening new perspectives in this field and joining the necessary interdisciplinarity of computer science, physics, and engineering to improve knowledge in the field of quality assurance, applied in this particular case to the aerospace industry.
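
    A minimal sketch of the kind of simulation such a tool performs: a far-field diffraction pattern of a periodic riblet-like surface profile, computed under a scalar Fraunhofer simplification of Huygens-Fresnel theory via an FFT. The profile geometry, laser wavelength, and the omission of the ray-tracing part are illustrative assumptions.

```python
import numpy as np

# Illustrative riblet-like surface: a periodic triangular height profile.
wavelength = 650e-9   # probing laser wavelength [m] (assumed)
period = 100e-6       # riblet spacing [m] (assumed)
depth = 50e-6         # groove depth [m] (assumed)
n_samples = 4096
x = np.linspace(0.0, 32 * period, n_samples)
height = depth * np.abs((x / period) % 1.0 - 0.5) * 2.0

# Scalar-diffraction simplification: the surface imposes a round-trip phase
# delay on the reflected wavefront; in the Fraunhofer limit the far-field
# pattern is the Fourier transform of the complex aperture function.
phase = 4.0 * np.pi / wavelength * height
aperture = np.exp(1j * phase)
intensity = np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2
intensity /= intensity.max()

# Diffraction-order peaks sit at spatial frequencies m / period; their relative
# strength changes as the riblet geometry degrades, which is what the surface
# status assessment looks for.
freqs = np.fft.fftshift(np.fft.fftfreq(n_samples, d=x[1] - x[0]))
strongest = sorted(np.argsort(intensity)[-5:])
for i in strongest:
    print(f"spatial frequency {freqs[i]:12.1f} 1/m -> relative intensity {intensity[i]:.3f}")
```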

    Developing A New Decision Support System for University Student Recruitment

    This paper investigates the practical issues surrounding the development and implementation of Decision Support Systems (DSS). The paper describes the traditional development approaches, analyzes their drawbacks, and introduces a new DSS development methodology. The proposed DSS methodology is based upon four modules: needs analysis, data warehouse (DW), knowledge discovery in databases (KDD), and a DSS module. The proposed methodology is applied to and evaluated using the admission and registration functions in Egyptian Universities. The paper investigates the organizational requirements needed to underpin these functions in Egyptian Universities. These requirements were identified following an in-depth survey of the recruitment process in the Egyptian Universities. This survey employed a multi-part admission and registration DSS questionnaire (ARDSSQ) to identify the required data sources together with the likely users and their information needs. The questionnaire was sent to senior managers within the Egyptian Universities (both private and government) with responsibility for student recruitment, in particular admission and registration. Further, access to a large database has allowed the evaluation of the practical suitability of using a DW structure and knowledge management tools within the decision-making framework. 2000 records were used to build and test the data mining techniques within the KDD process. The records were drawn from the Arab Academy for Science and Technology and Maritime Transport (AASTMT) students' database (DB). Moreover, the paper analyzes the key characteristics of DWs and explores the advantages and disadvantages of such data structures. This evaluation has been used to build a DW for the Egyptian Universities that handles their admission and registration related archival data. The potential benefits of the DW to decision makers within the student recruitment process will be explored. The design of the proposed admission and registration DSS (ARDSS) will be developed and tested using Cool:Gen (5.0) CASE tools by Computer Associates (CA), connected to an MS-SQL Server (6.5), in a Windows NT (4.0) environment. Crystal Reports (4.6) by Seagate will be used as a report generation tool, and CLUSTAN Graphics (5.0) by CLUSTAN Software will be used as a clustering package. The ARDSS software could be adjusted for use in different countries for the same purpose; it is also scalable to handle new decision situations and can be integrated with other systems.
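
    A minimal sketch of how a data-warehouse-style admissions fact table could support a simple recruitment query; the schema, column names, and figures are illustrative assumptions rather than the ARDSS design.

```python
import sqlite3

# Hypothetical star-schema fragment: an admissions fact table joined to a
# programme dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_programme (programme_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_admission (
        programme_id INTEGER REFERENCES dim_programme(programme_id),
        academic_year TEXT,
        applicants INTEGER,
        admitted INTEGER,
        registered INTEGER
    );
    INSERT INTO dim_programme VALUES (1, 'Engineering'), (2, 'Business');
    INSERT INTO fact_admission VALUES
        (1, '1999/2000', 1200, 400, 350),
        (2, '1999/2000', 900, 500, 420);
""")

# A typical decision-support query: yield rate (registered / admitted) per programme.
rows = conn.execute("""
    SELECT p.name, SUM(f.registered) * 1.0 / SUM(f.admitted) AS yield_rate
    FROM fact_admission f
    JOIN dim_programme p USING (programme_id)
    GROUP BY p.name
""").fetchall()
for name, yield_rate in rows:
    print(f"{name}: yield rate {yield_rate:.2f}")
```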

    BUILDING DSS USING KNOWLEDGE DISCOVERY IN DATABASE APPLIED TO ADMISSION & REGISTRATION FUNCTIONS

    This research investigates the practical issues surrounding the development and implementation of Decision Support Systems (DSS). The research describes the traditional development approaches, analyzes their drawbacks, and introduces a new DSS development methodology. The proposed DSS methodology is based upon four modules: needs analysis, data warehouse (DW), knowledge discovery in databases (KDD), and a DSS module. The proposed methodology is applied to and evaluated using the admission and registration functions in Egyptian Universities. The research investigates the organizational requirements needed to underpin these functions in Egyptian Universities. These requirements were identified following an in-depth survey of the recruitment process in the Egyptian Universities. This survey employed a multi-part admission and registration DSS questionnaire (ARDSSQ) to identify the required data sources together with the likely users and their information needs. The questionnaire was sent to senior managers within the Egyptian Universities (both private and government) with responsibility for student recruitment, in particular admission and registration. Further, access to a large database has allowed the evaluation of the practical suitability of using a data warehouse structure and knowledge management tools within the decision-making framework. 1600 students' records were analyzed to explore the KDD process, and another 2000 records were used to build and test the data mining techniques within the KDD process. Moreover, the research analyzes the key characteristics of data warehouses and explores the advantages and disadvantages of such data structures. This evaluation has been used to build a data warehouse for the Egyptian Universities that handles their admission and registration related archival data. The potential benefits of the data warehouse to decision makers within the student recruitment process will be explored. The design of the proposed admission and registration DSS (ARDSS) will be developed and tested using Cool:Gen (5.0) CASE tools by Computer Associates (CA), connected to an MS-SQL Server (6.5), in a Windows NT (4.0) environment. Crystal Reports (4.6) by Seagate will be used as a report generation tool, and CLUSTAN Graphics (5.0) by CLUSTAN Software will also be used as a clustering package. Finally, the contributions of this research are found in the following areas: a new DSS development methodology; the development and validation of a new research questionnaire (the ARDSSQ); the development of the admission and registration data warehouse; the evaluation and use of cluster analysis proximities and techniques in the KDD process to find knowledge in the students' records; and the development of the ARDSS software, which encompasses the advantages of the KDD and DW and presents these advantages to the senior admission and registration managers in the Egyptian Universities. The ARDSS software could be adjusted for use in different countries for the same purpose; it is also scalable to handle new decision situations and can be integrated with other systems.
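
    A minimal sketch of the cluster-analysis step within a KDD process on student records; the feature set, number of clusters, and the use of scikit-learn's KMeans are illustrative assumptions (the research itself relied on the CLUSTAN package).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical student records: secondary-school percentage, admission-test
# score, and age at application (one row per student).
rng = np.random.default_rng(0)
records = np.column_stack([
    rng.normal(85, 7, 2000),
    rng.normal(60, 15, 2000),
    rng.normal(18, 1, 2000),
])

# Standardize features so no single attribute dominates the distance metric,
# then partition the records into a small number of applicant profiles.
features = StandardScaler().fit_transform(records)
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)

for label in range(4):
    members = records[model.labels_ == label]
    print(f"cluster {label}: {len(members)} students, "
          f"mean admission-test score {members[:, 1].mean():.1f}")
```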

    Correcting remaining truncations in hybrid life cycle assessment database compilation

    Hybrid life cycle assessment (HLCA) strives to combine process-based life cycle assessment (PLCA) and environmentally extended input–output (EEIO) analysis to bridge the gaps of both methodologies. The recent development of HLCA databases constitutes a major step forward in achieving complete system coverage. Nevertheless, current applications of HLCA still suffer from issues related to incompleteness of the inventory and data gaps: (1) hybridization without endogenizing the capital inputs of the EEIO database leads to underestimations, (2) the unreliability of price data hinders the application of streamlined HLCA for processes in some sectors, and (3) the sparse coverage of pollutants in multiregional EEIO databases limits the application of HLCA to a handful of impact categories. This paper aims at offering a methodology for tackling these issues in a streamlined manner and visualizing their effects on impact scores across an entire PLCA database and multiple impact categories. Data reconciliation algorithms are demonstrated on the PLCA database ecoinvent 3.5 and the multiregional EEIO database EXIOBASE3. Instead of performing hybridization solely with annual product requirements, this hybridization approach incorporates endogenized capital requirements, demonstrates a novel hybridization methodology to bypass issues of price unavailability, and estimates new pollutants for the EXIOBASE3 environmental extensions, thus yielding improved inventories characterized in terms of 13 impact categories from the IMPACT World+ methodology. The effect of hybridization on the impact score of each process of ecoinvent 3.5 varied from a few percent to three-fold increases, depending on the impact category and the process studied, showing in which cases hybridization should be prioritized. This article met the requirements for a Gold-Gold JIE data openness badge described at http://jie.click/badges.
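
    A minimal sketch of the tiered-hybrid calculation underlying such a database compilation: a process technology matrix is coupled to an EEIO technology matrix through an upstream cut-off matrix, and impacts are characterized per unit of final demand. The matrix sizes, coefficients, and characterization factors are illustrative assumptions; the actual work uses ecoinvent 3.5, EXIOBASE3, and IMPACT World+.

```python
import numpy as np

# Process-based block (2 processes) and EEIO block (2 sectors), toy coefficients.
A_p  = np.array([[0.0, 0.2],     # process-to-process requirements
                 [0.1, 0.0]])
A_io = np.array([[0.1, 0.3],     # sector-to-sector requirements (monetary)
                 [0.2, 0.1]])
C_u  = np.array([[0.05, 0.00],   # upstream cut-offs: EEIO inputs per unit of process output
                 [0.00, 0.02]])

# Tiered-hybrid technology matrix: processes draw on EEIO sectors via C_u,
# while downstream feedback from the processes into the EEIO block is neglected.
zero = np.zeros((2, 2))
A_h = np.block([[A_p, zero],
                [C_u, A_io]])

# Satellite (emission) matrix for one pollutant and its characterization factor.
B = np.array([[1.0, 0.5, 0.8, 1.2]])
Q = np.array([[1.0]])

# Final demand of one unit of the first process; the Leontief inverse gives
# total outputs, which are then translated into a characterized impact score.
f = np.array([1.0, 0.0, 0.0, 0.0])
x = np.linalg.solve(np.eye(4) - A_h, f)
impact = Q @ B @ x
print(f"characterized impact per functional unit: {impact[0]:.3f}")
```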

    The ESPA (Enhanced Structural Path Analysis) method: a solution to an implementation challenge for dynamic life cycle assessment studies

    Purpose: By analyzing the latest developments in the dynamic life cycle assessment (DLCA) methodology, we identify an implementation challenge with the management of the new temporal information needed to describe each system we might want to model. To address this problem, we propose a new method to differentiate elementary and process flows on a temporal level, and explain how it can generate temporally differentiated life cycle inventories (LCI), which are necessary inputs for dynamic impact assessment methods. Methods: First, an analysis of recent DLCA studies is used to identify the relevant temporal characteristics for an LCI. Then, we explain the implementation challenge of handling additional temporal information to describe processes in life cycle assessment (LCA) databases. Finally, a new format of temporal description is proposed to minimize the current implementation problem for DLCA studies. Results and discussion: A new format of process-relative temporal distributions is proposed to obtain a temporal differentiation of LCA database information (elementary flows and product flows). A new LCI calculation method is also proposed, since the new format for temporal description is not compatible with the traditional LCI calculation method. A description of the requirements and limits of this new method, named enhanced structural path analysis (ESPA), is also presented. To conclude the description of the ESPA method, we illustrate its use in a strategically chosen scenario. The use of the proposed ESPA method for this scenario reveals the need for the LCA community to reach an agreement on common temporal differentiation strategies for future DLCA studies. Conclusions: We propose the ESPA method to obtain temporally differentiated LCIs, which should require less implementation effort for the system-modeling step (LCA database definition), even if such concepts cannot be applied to every process.
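
    A minimal sketch of the idea behind process-relative temporal distributions: the timing of an upstream exchange is expressed relative to the downstream process as a discrete distribution, and following a supply-chain path amounts to convolving the distributions met along that path. The two-process chain, yearly time step, and profiles below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Discrete process-relative temporal distributions (yearly steps, summing to 1).
d_demand_to_A = np.array([1.0])        # final demand requires process A at year 0
d_A_to_B      = np.array([0.5, 0.5])   # A draws on B partly at year 0, partly one year earlier
emission_B    = np.array([0.2, 0.8])   # B's own emission profile over two years

# Following the path demand -> A -> B: convolve the temporal distributions
# along the path, then convolve with B's emission profile to place the
# emissions of this path on the timeline of the final demand.
path_timing = np.convolve(d_demand_to_A, d_A_to_B)
path_emissions = np.convolve(path_timing, emission_B)

for lag, amount in enumerate(path_emissions):
    print(f"{lag} year(s) before final demand: {amount:.2f} units emitted via demand -> A -> B")
```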

    A BIM-Integrated Relational Database Management System for Evaluating Building Life-Cycle Costs

    Sustainable procurement is an important policy for mitigating the environmental impacts attributable to construction projects. Life-cycle cost analysis (LCCA), which is an essential requirement in sustainable procurement, is a principal tool for evaluating the economic efficiency of the total life-cycle budget of a building project. LCCA is a complex and time-consuming process due to repetitive, complicated calculations based on various legal and regulatory requirements. It also requires a large amount of data from different sources throughout the project life cycle. In conventional data management systems, data are usually stored on paper and input into the systems manually. This results in data loss and inconsistent data, which subsequently contribute to inaccurate life-cycle costs (LCCs). Building information modeling (BIM) is a modern technology that can potentially overcome the difficulties of conventional building LCCA. However, existing BIM tools cannot carry out building LCCA due to their limited capabilities. The relational database management system (RDBMS) can be integrated with BIM for organizing, storing, and exchanging LCCA data in a logical and systematic manner. In this paper, a BIM-integrated RDBMS is developed for compiling and organizing the required data and information from BIM models to compute building LCCs. The system integrates the BIM authoring program, the database management system, the spreadsheet system, and the visual programming interface. It is part of the BIM-database-integrated system for building LCCA using a multi-parametric model. It represents a new automated methodology for performing building LCCA, which can facilitate the implementation of sustainable procurement in building projects.
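
    A minimal sketch of the repetitive life-cycle cost calculation that such a system automates: initial, recurring, and replacement costs held in a relational store are discounted into a net present value. The cost items, discount rate, and study period are illustrative assumptions, not values from the paper.

```python
import sqlite3

# Hypothetical cost table derived from BIM model quantities.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cost_item (
        element TEXT,
        kind TEXT,          -- 'initial', 'annual' or 'replacement'
        amount REAL,        -- cost in present-day currency units
        interval_years INT  -- recurrence interval (NULL for initial costs)
    )
""")
conn.executemany("INSERT INTO cost_item VALUES (?, ?, ?, ?)", [
    ("HVAC unit", "initial", 120000, None),
    ("HVAC unit", "annual", 3500, 1),
    ("HVAC unit", "replacement", 80000, 20),
])

DISCOUNT_RATE = 0.04
STUDY_PERIOD = 40  # years

def present_value(amount, year, rate=DISCOUNT_RATE):
    """Discount a single future cost back to year 0."""
    return amount / (1.0 + rate) ** year

lcc = 0.0
for element, kind, amount, interval in conn.execute("SELECT * FROM cost_item"):
    if kind == "initial":
        lcc += amount
    else:
        # Recurring or replacement costs occur every 'interval' years.
        lcc += sum(present_value(amount, y)
                   for y in range(interval, STUDY_PERIOD + 1, interval))

print(f"life-cycle cost over {STUDY_PERIOD} years: {lcc:,.0f}")
```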

    Detecting indicators for startup business success: Sentiment analysis using text data mining

    The main aim of this study is to identify the key factors in User Generated Content (UGC) on the Twitter social network for the creation of successful startups, as well as to identify factors for sustainable startups and business models. New technologies were used in the proposed research methodology to identify the key factors for the success of startup projects. First, a Latent Dirichlet Allocation (LDA) model, a state-of-the-art topic modeling tool implemented in Python, was used to determine the topics in the database by analyzing tweets for the #Startups hashtag on Twitter (n = 35,401 tweets). Secondly, a sentiment analysis was performed with a Support Vector Machine (SVM) algorithm using machine learning in Python. This was applied to the LDA results to divide the identified startup topics into negative, positive, and neutral sentiments. Thirdly, a textual analysis was carried out on the topics in each sentiment with text data mining techniques using Nvivo software. This research detected that the topics with positive sentiment for the identification of key factors for startup business success are startup tools, technology-based startups, the attitude of the founders, and the startup methodology development. The negative topics are the frameworks and programming languages, the types of job offers, and the business angels' requirements. The identified neutral topics are the development of the business plan, the type of startup project, and the incubator's and startup's geolocation. The limitations of the investigation are the number of tweets in the analyzed sample and the limited time horizon. Future lines of research could improve the methodology used to determine key factors for the creation of successful startups and could also study sustainability issues.
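
    A minimal sketch of the LDA-plus-SVM pipeline the study describes, using gensim for topic modeling and scikit-learn for the supervised sentiment classifier. The toy tweets, sentiment labels, and parameter choices are illustrative assumptions, not the study's data or settings.

```python
from gensim import corpora, models
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

tweets = [
    "great incubator support for our startup launch",
    "startup failed because the business angels demanded too much equity",
    "new python framework released for startup mvp development",
    "founders attitude made this startup a success story",
]
sentiment_labels = ["positive", "negative", "neutral", "positive"]  # toy annotations

# 1) LDA topic modeling: discover latent topics in the tweet corpus.
tokenized = [t.split() for t in tweets]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized]
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(f"topic {topic_id}: {words}")

# 2) Supervised sentiment classification: linear SVM on TF-IDF features,
#    applied here to route new tweets into positive/negative/neutral groups.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tweets)
clf = LinearSVC().fit(X, sentiment_labels)

new_tweet = ["investor requirements are blocking our startup"]
print("predicted sentiment:", clf.predict(vectorizer.transform(new_tweet))[0])
```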

    Leachate treatment by conventional coagulation, electrocoagulation and two-stage coagulation (conventional coagulation and electrocoagulation)

    Leachate is widely explored and investigated because it is highly polluted and difficult to treat. Leachate treatment commonly involves advanced, complicated, and high-cost activities. Conventional coagulation is widely used in the treatment of wastewater, but sludge production is the biggest constraint of this treatment. Electrocoagulation is an alternative to the conventional method because it has the same application but produces less sludge and requires simpler equipment. Thus, combining conventional coagulation and electrocoagulation can improve the efficiency of the coagulation process in leachate treatment. This article focuses on the efficiency of the single and combined treatments as well as the improvement achieved by the combined treatment. Based on the review, the reduction in current density and coagulant dose was perceptible: reductions of as much as 50% in current density, treatment duration, and coagulant dose can be obtained using the combined treatment. The combined treatment is thus able to reduce the cost and, at the same time, the duration of treatment. Hence, the combined treatment offers an alternative technique for the removal of pollutants in landfill leachate treatment.

    Creating Responsive Information Systems with the Help of SSADM

    In this paper, a research programme is outlined. Firstly, the concept of responsive information systems is defined, and the notions of capacity planning and software performance engineering are clarified. Secondly, the purpose of the proposed capacity-planning methodology, its interface to information systems analysis and development methodologies (SSADM), and the advantages of a knowledge-based approach are discussed. The interfaces to CASE tools, more precisely to data dictionaries or repositories (IRDS), are examined in the context of a particular systems analysis and design methodology (e.g. SSADM).