19,033 research outputs found

    Computer-Assisted Test for Grouping Technician Positions Using the K-Means Algorithm (Case Study: PT. XYZ)

    The computer-assisted test for classifying technician positions using the K-means algorithm is a system built to measure the abilities and determine the areas of expertise of technicians at PT. XYZ. The purpose of this computer-assisted test information system is to replace the still-manual technician examination with a computerized one, moving the procedure onto computers that use a database as the storage medium. The system was developed following the SDLC (System Development Life Cycle) Waterfall method, whose defining characteristic is that each phase must be completed before the next one begins. The result is a website-based application with two main interfaces: an admin interface for managing users (technicians), the question bank, and test scores, and a technician interface for taking the exam questions. The test score data are then processed by the K-means algorithm to determine each technician's area of expertise. This research was conducted by Muhamad Ramadhan, a student in the 2020 POLSUB information systems study program. M. Ramadhan assessed how well the system matched its design using black-box testing, which showed 100% success of the system's functionality, and User Acceptance Testing (UAT), which yielded a fairly high ergonomic score of 93.25%.
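
    As a rough illustration of the clustering step described above, the sketch below groups technicians by per-topic exam scores using scikit-learn's K-means. The topic names, score values, and the choice of three clusters are hypothetical, not taken from the study.

```python
# Minimal K-means sketch for grouping technicians by exam scores.
# Topics, score values, and k=3 are hypothetical illustrations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: one technician's scores on three hypothetical exam topics
# (networking, hardware, software).
scores = np.array([
    [90, 40, 35],
    [85, 45, 30],
    [35, 88, 42],
    [30, 92, 38],
    [42, 35, 91],
    [38, 44, 87],
])

# Standardize so no single topic dominates the Euclidean distance.
X = StandardScaler().fit_transform(scores)

# One cluster per candidate area of expertise (k=3 assumed).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # cluster index per technician, e.g. [0 0 1 1 2 2]
```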

    The ecoinvent Database: Overview and Methodological Framework (7 pp)

    Introduction: This paper provides an overview of the content of the ecoinvent database and of selected methodological issues applied to the life cycle inventories implemented in it. Goal, Scope and Background: In the year 2000, several Swiss Federal Offices and research institutes of the ETH domain agreed to a joint effort to harmonise and update life cycle inventory (LCI) data for use in life cycle assessment (LCA). With the ecoinvent database and its current data v1.1, a consistent set of more than 2,500 product and service LCIs is now available. Method: Nearly all process datasets are transparently documented at the level of unit process inputs and outputs. Methodological approaches have been applied consistently throughout the entire database content and thus guarantee a coherent set of LCI data. This is particularly true for market and trade modelling (see, for example, electricity modelling) and for the treatment of multi-output and recycling processes, but also for the recording and reporting of elementary flows. The differentiation of particulate matter emissions by diameter, for instance, allows for a more comprehensive impact assessment of human health effects. Data quality is quantitatively reported in terms of standard deviations of the amounts of input and output flows. In many cases, qualitative indicators are additionally reported for each individual input and output. The information sources used range from extensive statistical works to individual (point) measurements or assumptions derived from process descriptions. However, all datasets passed the same quality control procedure, and all information relevant and necessary to judge the suitability of a dataset in a given context is provided in the database. Data documentation and exchange are based on the EcoSpold data format, which complies with the technical specification ISO/TS 14048. Free access to process information via the Internet helps users judge the appropriateness of a dataset. Concluding Remarks: The existence of the ecoinvent database proves that it is feasible to build up a large interlinked system of LCI unit processes. The project work proved demanding in terms of the co-ordination effort required and of finding consensus. One main characteristic of the database is its transparency in reporting, which enables individual assessment of data appropriateness and supports plurality in methodological approaches. Outlook: Further work on the ecoinvent database may comprise work on the database content (new or more detailed datasets covering existing or new economic sectors), LCI (modelling) methodology, the structure and features of the database system (e.g. extension of Monte Carlo simulation to the impact assessment phase), or improvements in ecoinvent data supply and data query. Furthermore, the deepening and building up of international co-operation in LCI data collection and supply is the focus of future activities.
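
    As a hedged sketch of the standard-deviation reporting and Monte Carlo simulation mentioned above, the following propagates lognormal uncertainty on two hypothetical elementary flows into a GWP100 score. The flow amounts, spreads, and characterization factors are illustrative assumptions, not ecoinvent values.

```python
# Monte Carlo propagation of lognormal flow uncertainties into an
# impact score. All numbers are hypothetical, not ecoinvent data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical elementary flows (kg per functional unit), modelled as
# lognormal distributions around their reported amounts.
co2 = rng.lognormal(mean=np.log(1.2), sigma=np.log(1.1), size=n)   # kg CO2
ch4 = rng.lognormal(mean=np.log(0.01), sigma=np.log(1.3), size=n)  # kg CH4

# Characterize with GWP100-style factors (kg CO2-eq per kg; CH4 ~ 28).
gwp = co2 * 1.0 + ch4 * 28.0

print(f"GWP100 median: {np.median(gwp):.2f} kg CO2-eq, "
      f"95% interval: [{np.percentile(gwp, 2.5):.2f}, "
      f"{np.percentile(gwp, 97.5):.2f}]")
```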

    Review of Life Cycle Assessment in Agro-Chemical Processes

    Life Cycle Assessment (LCA) is a method used to evaluate the potential environmental impacts of a product, process, or activity throughout its life cycle. Today’s LCA users are a mixture of individuals with skills in different disciplines who want to evaluate their products, processes, or activities in a life cycle context. This study attempts to present some of the LCA studies on agro-chemical processes, recent advances in LCA, and their application to food and non-food products. Owing to the recent development of LCA methodologies and dissemination programmes by international and local bodies, the use of LCA is rapidly increasing for agricultural and industrial products. The literature suggests that LCA, coupled with other environmental approaches, provides much more reliable and comprehensive information to environmentally conscious policy makers, producers, and consumers in selecting sustainable products and production processes. For this purpose, a field study of the LCA of biodiesel from Jatropha curcas is taken as an example in the study. In the past, LCA was applied primarily to products, but recent literature suggests that it also has potential as an analysis and design tool for processes and services. In general, all primary industries use energy and water resources and emit pollutant gases. LCA is a method to report on and analyse these resource issues across the life cycle of agro-chemical processes. This review is important as the first part of a research project to develop a life cycle assessment methodology for agro-chemical industries. It presents the findings of a literature review that focuses on LCA in the agricultural and chemical engineering literature.

    Knowledge-Intensive Processes: Characteristics, Requirements and Analysis of Contemporary Approaches

    Engineering of knowledge-intensive processes (KiPs) is far from being mastered, since they are genuinely knowledge- and data-centric and require substantial flexibility at both design time and run time. In this work, starting from an analysis of the scientific literature in the area of KiPs and from three real-world domains and application scenarios, we provide a precise characterization of KiPs. Furthermore, we devise some general requirements related to KiPs management and execution. These requirements contribute to the definition of an evaluation framework for assessing current system support for KiPs. To this end, we present a critical analysis of a number of existing process-oriented approaches, discussing their efficacy against the requirements.

    Sustainability in design: now! Challenges and opportunities for design research, education and practice in the XXI century

    Copyright © 2010 Greenleaf Publications. LeNS project funded by the Asia Link Programme, EuropeAid, European Commission.

    Fault Localization Models in Debugging

    Debugging is considered a rigorous but important part of the software engineering process. For more than a decade, the software engineering research community has been exploring different techniques for removing faults from programs, yet it is quite difficult to overcome all the faults in software programs, so fault removal still remains a real challenge for the software debugging and maintenance community. In this paper, we briefly introduce software anomalies and a classification of faults, and then explain different fault localization models based on the theory of diagnosis. Furthermore, we compare and contrast value-based and dependency-based models against different real misbehaviours and present some insights into the debugging process. Moreover, we discuss the results of both models and point out their shortcomings as well as their advantages in terms of debugging and maintenance.
    Comment: 58-6
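
    The models the paper compares come from model-based diagnosis; as a simpler, self-contained illustration of how fault localization can rank program statements at all, the sketch below uses the Ochiai spectrum-based metric, a different technique from the value-based and dependency-based models discussed above. Statements, tests, and coverage sets are hypothetical.

```python
# Spectrum-based fault localization with the Ochiai metric (not the
# paper's model-based approach). All data below are hypothetical.
import math

# coverage[s]: set of tests that execute statement s.
coverage = {
    "s1": {"t1", "t2", "t3"},
    "s2": {"t1", "t3"},   # the (hypothetically) faulty statement
    "s3": {"t2"},
}
failing = {"t1", "t3"}    # tests that fail

def ochiai(executed: set, failing: set) -> float:
    """Suspiciousness = ef / sqrt(|failing| * (ef + ep))."""
    ef = len(executed & failing)   # failing tests covering the statement
    ep = len(executed - failing)   # passing tests covering it
    denom = math.sqrt(len(failing) * (ef + ep))
    return ef / denom if denom else 0.0

# Rank statements, most suspicious first: s2 (1.0) > s1 (0.816) > s3 (0.0).
for stmt in sorted(coverage, key=lambda s: -ochiai(coverage[s], failing)):
    print(stmt, round(ochiai(coverage[stmt], failing), 3))
```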

    Economic evaluation of LIFE methodology

    Background: The LIFE project (Lifecycle Information For E-Literature) was carried out during 2004-2006 by a consortium consisting of The British Library and University College London Library Services. The project was a joint venture funded by JISC under the programme area Institutional Management Support and Collaboration. The project received favourable feedback, for instance during a workshop organised at its conclusion, and JISC has agreed to fund a second phase during 2007-2008. The consortium has been strengthened by three associate partners (the SHERPA-LEAP Consortium, SHERPA-DP and the Medical Research Council). In addition, some funds were reserved for an outside economic consultant to evaluate the life-cycle models that emerged as the key results of the first phase. The LIFE-2 project consists of five work packages, and this report is part of the first of these. The objective of WP 1 is formulated in the LIFE-2 project proposal as follows: validation of the economic modelling and methodology for the Lifecycle and Generic Preservation formulae developed in Phase 1 of the LIFE project, with technical and presentational development of the models. Cloudlake Consulting Oy has been commissioned by the consortium to carry out this validation. The report has been written by Bo-Christer Björk, professor of Information Systems Science at the Swedish School of Economics and Business Administration in Helsinki, Finland. He has been conducting research on the scientific publishing process since 2000 and has published several peer-reviewed journal articles as well as conference papers on the subject. He is often an invited speaker at international workshops in this area.
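
    For orientation, the Lifecycle formula being validated is usually summarized as an additive cost model; the rendering below is a plausible reconstruction from LIFE Phase 1 documentation, and the exact notation is an assumption, not a quotation from this report.

```latex
% Additive lifecycle-cost form attributed to LIFE Phase 1 (assumed notation).
L_T = Aq + I_T + M_T + Ac_T + S_T + P_T
% L_T : total cost of keeping a digital object for T years
% Aq  : acquisition; I_T : ingest; M_T : metadata; Ac_T : access;
% S_T : storage; P_T : preservation -- each accumulated over T years
```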

    Smart Asset Management for Electric Utilities: Big Data and Future

    This paper discusses future challenges in terms of big data and new technologies. Utilities have been collecting data in large amounts, but the data are hardly utilized, both because of their sheer volume and because of the uncertainty associated with them. Condition monitoring of assets collects large amounts of data during daily operations. The question arises: how can information be extracted from such a large mass of data? The notion of "rich data and poor information" is being challenged by big data analytics and the advent of machine learning techniques. Along with technological advances such as the Internet of Things (IoT), big data analytics will play an important role for electric utilities. In this paper, these challenges are answered with pathways and guidelines for making current asset management practices smarter for the future.
    Comment: 13 pages, 3 figures, Proceedings of the 12th World Congress on Engineering Asset Management (WCEAM) 201
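
    As one concrete, hedged example of the kind of analytics the paper advocates, the sketch below applies unsupervised anomaly detection (an Isolation Forest) to hypothetical condition-monitoring readings; the sensor features, values, and contamination rate are illustrative assumptions, not from the paper.

```python
# Unsupervised anomaly detection on condition-monitoring data with an
# Isolation Forest. All readings and parameters are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical transformer readings: [oil temperature (C), dissolved gas (ppm)].
normal = rng.normal(loc=[65.0, 120.0], scale=[3.0, 15.0], size=(500, 2))
faulty = np.array([[92.0, 480.0], [88.0, 510.0]])  # injected anomalies
readings = np.vstack([normal, faulty])

# contamination = expected share of anomalies (a tuning assumption).
model = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = model.predict(readings)  # -1 = anomalous, 1 = normal
print("flagged rows:", np.where(flags == -1)[0])
```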

    Decision support system for choosing a model for a software development life cycle

    The aim of this paper is to present selected models of the Software Development Life Cycle as a set of possible alternatives. The article also includes the characteristics of IT projects that serve as the basis for the selection criteria according to which an appropriate model should be chosen. These characteristics are divided into two groups: one concerns the product, the other the project. Based on both a literature study and statistical surveys, a list of criteria is derived, to be later applied in developing a knowledge-based system. The rules and search algorithms for selecting the best models are described by a flowchart. Finally, the method of presentation and the interpretation of the results are discussed.
    Keywords: algorithm, knowledge base, sequential model, evolutionary model, IT project, selection criteria, risk, project complexity
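
    As a minimal sketch of the kind of knowledge-based selection the paper describes, the rules below map a few project and product characteristics to an SDLC model. The criteria, thresholds, and recommendations are hypothetical illustrations, not the paper's actual rule base.

```python
# Toy rule base for SDLC model selection. The rules are illustrative
# assumptions, not those derived in the paper.
def recommend_model(requirements_stable: bool, high_risk: bool,
                    complex_project: bool) -> str:
    """Map project/product characteristics to an SDLC model."""
    if high_risk:
        return "spiral"                  # risk-driven iterations
    if not requirements_stable:
        return "evolutionary"            # expect requirements to change
    if complex_project:
        return "incremental"             # deliver in stages
    return "sequential (waterfall)"      # stable and simple: linear phases

print(recommend_model(requirements_stable=False, high_risk=False,
                      complex_project=True))  # -> evolutionary
```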
