
    Vehicle Booking System for Human Resource Management UTP

    In line with the advancement of current technology, manual processes are increasingly being converted into computer-based ones. This rapid technological change brings positive effects: people can be contacted anywhere and at any time, and many human activities can be accelerated, including booking processes that usually take several days but can be reduced to a two-hour process. The purpose of this project is to develop a web-based application system, the Vehicle Booking System (VBS), for Human Resource Management UTP (HRM UTP). VBS has been developed to automate the currently manual business processes. The portal acts as a one-stop centre for staff and HRM officers: by logging in to the website, requesters can register for free and submit applications online. As a web-based application system, it allows data centralization. To achieve this objective, the author carried out extensive research to gain a deep understanding of online booking systems and of how to design and develop the best possible website. The methodology used for designing and developing the website is the Iterative and Incremental Development Method, integrated with the test-driven development model. The website interface designs are also included, based on a comparison of existing websites and user feedback. The author concludes with a few recommendations for developing this website.
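The test-driven, iterative style the abstract describes can be sketched as writing a small test against the booking workflow before implementing it. The class and method names below (`BookingSystem`, `submit`, `approve`) are invented for illustration and are not taken from the actual VBS implementation:

```python
# Minimal test-first sketch of a vehicle booking workflow; all names are
# illustrative, not from the original VBS system.
from dataclasses import dataclass, field

@dataclass
class BookingSystem:
    """Toy in-memory store standing in for the web application's backend."""
    bookings: dict = field(default_factory=dict)
    _next_id: int = 1

    def submit(self, staff_name: str, vehicle: str) -> int:
        # A requester's online application starts in the "pending" state.
        booking_id = self._next_id
        self._next_id += 1
        self.bookings[booking_id] = {"staff": staff_name,
                                     "vehicle": vehicle,
                                     "status": "pending"}
        return booking_id

    def approve(self, booking_id: int) -> None:
        # The HRM officer's approval step in the automated process.
        self.bookings[booking_id]["status"] = "approved"

# Tests written first, in the spirit of test-driven development.
system = BookingSystem()
bid = system.submit("Alice", "Van 3")
assert system.bookings[bid]["status"] == "pending"
system.approve(bid)
assert system.bookings[bid]["status"] == "approved"
```

In an iterative and incremental process, each increment would add one such behaviour (submission, approval, cancellation) with its test written first.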

    The Data Processing Pipeline for the Herschel-HIFI Instrument

    The HIFI data processing pipeline was developed to systematically process diagnostic, calibration and astronomical observations taken with the HIFI science instrument as part of the Herschel mission. The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment, as well as within an interactive environment. A common software framework was developed to best support the use cases required by the instrument teams and by general astronomers. The HIFI pipeline was built on top of that framework and was designed with a high degree of modularity. This modular design provided the flexibility and extensibility necessary to deal with the complexity of batch-processing eighteen different observing modes, to support astronomers in interactive analysis, and to cope with the adjustments needed to improve the pipeline and the quality of its end-products. This approach to the software development and data processing effort was arrived at by coalescing the lessons learned from similar research-based projects with the understanding that a degree of foresight was required given the overall length of the project. In this article, both the successes and challenges of the HIFI software development process are presented, and lessons learned are extracted to support future similar projects and to retain the experience gained. Comment: 18 pages, 5 figures
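The modularity described above, where each observing mode is processed by a chain of small reusable steps, can be sketched generically. The mode names, step names and operations below are invented for illustration and do not reflect the actual HIFI pipeline tasks:

```python
# Sketch of a modular, per-mode pipeline: each observing mode maps to an
# ordered chain of small steps, loosely mirroring the modularity the HIFI
# pipeline paper describes. All names here are hypothetical.
MODE_STEPS = {
    "point": ["flag_bad_channels", "subtract_baseline", "calibrate"],
    "map":   ["flag_bad_channels", "regrid", "calibrate"],
}

STEP_FUNCS = {
    "flag_bad_channels": lambda data: [x for x in data if x is not None],
    "subtract_baseline": lambda data: [x - min(data) for x in data],
    "regrid":            lambda data: sorted(data),
    "calibrate":         lambda data: [2.0 * x for x in data],
}

def run_pipeline(mode, data):
    """Apply the mode's step chain in order; unknown modes fail fast."""
    for step in MODE_STEPS[mode]:
        data = STEP_FUNCS[step](data)
    return data

result = run_pipeline("point", [3, None, 1, 2])
# [3, None, 1, 2] -> flagged [3, 1, 2] -> baseline [2, 0, 1] -> [4.0, 0.0, 2.0]
```

Because steps are shared between mode chains, adding a nineteenth mode or improving one step leaves the rest of the framework untouched, which is the flexibility the modular design is meant to buy.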

    Bridging test and model-driven approaches in web engineering

    In recent years there has been growing interest in agile methods and their integration into so-called "unified" approaches. In the field of Web Engineering, agile approaches such as test-driven development are appealing because of the very nature of Web applications, while model-driven approaches provide less error-prone code derivation; however, integrating the two is not easy. In this paper, we present a method-independent approach to combine the agile, iterative and incremental style of test-driven development with the more formal, transformation-based model-driven Web engineering approaches. We focus not only on the development process but also on the evolution of the application, and show how tests can be transformed together with model refactorings. As a proof of concept we show an illustrative example using WebRatio, the WebML design tool. Published in the Lecture Notes in Computer Science book series (LNCS, vol. 5648). Laboratorio de Investigación y Formación en Informática Avanzada
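The core idea of co-transforming tests with model refactorings can be sketched on a toy navigation model: when a refactoring renames a model element, the same transformation is applied to the tests that reference it, so the test suite stays valid. This is a generic illustration, not WebRatio's or WebML's actual transformation machinery:

```python
# Toy navigation model and a test that exercises it; both are invented.
model = {"pages": ["Home", "ProductList"],
         "links": [("Home", "ProductList")]}
tests = [{"navigate": ["Home", "ProductList"], "expect": "ProductList"}]

def rename_page(model, tests, old, new):
    """Apply a rename refactoring to the model and transform tests in step,
    so tests evolve together with the model instead of breaking."""
    model["pages"] = [new if p == old else p for p in model["pages"]]
    model["links"] = [tuple(new if p == old else p for p in link)
                      for link in model["links"]]
    for t in tests:
        t["navigate"] = [new if p == old else p for p in t["navigate"]]
        if t["expect"] == old:
            t["expect"] = new

rename_page(model, tests, "ProductList", "Catalog")
# Both the model and the test now consistently refer to "Catalog".
```

In a transformation-based setting, the same rename rule would be expressed once and applied to both artefacts, which is what keeps the agile (test) and model-driven (transformation) sides in sync.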

    Plan validation and mixed-initiative planning in space operations

    Bringing artificial intelligence planning and scheduling applications into the real world is a hard task that is receiving more attention every day from researchers and practitioners in many fields. In many cases, it requires the integration of several underlying techniques, such as planning, scheduling, constraint satisfaction, mixed-initiative planning and scheduling, temporal reasoning, knowledge representation, formal models and languages, and technological issues. Most papers included in this book are clear examples of how to integrate several of these techniques. Furthermore, the book covers many interesting approaches in application areas ranging from industrial job-shop scheduling to electronic tourism, environmental problems, virtual teaching and space missions. It also provides powerful techniques for building fully deployable applications that solve real problems, and an updated review of many of the most interesting areas of application of these technologies, showing how well they overcome the expressiveness and efficiency problems of real-world domains.
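Of the techniques listed, constraint satisfaction is perhaps the easiest to show in miniature: a backtracking search assigns resources to tasks while respecting conflict constraints. The tasks, slots and conflicts below are a toy example, not drawn from any system in the book:

```python
# Minimal backtracking constraint-satisfaction scheduler; task and slot
# names are invented for illustration.
def schedule(tasks, slots, conflicts, assignment=None):
    """Assign each task a slot so that conflicting tasks get different slots.
    Returns a complete assignment, or None if the constraints are unsatisfiable."""
    assignment = assignment or {}
    if len(assignment) == len(tasks):
        return assignment
    task = tasks[len(assignment)]           # next unassigned task
    for slot in slots:
        # Check every conflict constraint that mentions this task.
        ok = all(assignment.get(other) != slot
                 for a, b in conflicts
                 for other in ((b,) if a == task else (a,) if b == task else ()))
        if ok:
            result = schedule(tasks, slots, conflicts,
                              {**assignment, task: slot})
            if result:
                return result               # first consistent solution found
    return None                             # dead end: backtrack

plan = schedule(["uplink", "observe", "downlink"], [1, 2],
                conflicts=[("uplink", "observe")])
```

Real planners layer temporal reasoning and mixed-initiative interaction on top of this kind of search, but the backtrack-on-violation core is the same.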

    Reverse Engineering and Testing of Rich Internet Applications

    The World Wide Web undergoes continuous and constant evolution: new initiatives, standards, approaches and technologies are continually proposed for developing more effective and higher-quality Web applications. To satisfy the market's growing demand for Web applications, new technologies, frameworks, tools and environments that allow Web and mobile applications to be developed with the least effort and in very short time have been introduced in recent years. These new technologies have made possible the dawn of a new generation of Web applications, named Rich Internet Applications (RIAs), that offer greater usability and interactivity than traditional ones. This evolution has been accompanied by some drawbacks, mostly due to a failure to apply well-known software engineering practices and approaches. As a consequence, new research questions and challenges have emerged in the field of Web and mobile application maintenance and testing. The research activity described in this thesis addresses some of these topics, with the specific aim of proposing new and effective solutions to the problems of modelling, reverse engineering, comprehending, re-documenting and testing existing RIAs. Due to the growing relevance of mobile applications in the renewed Web scenario, the problem of testing mobile applications developed for the Android operating system is addressed too, in an attempt to explore and propose new techniques of test automation for this type of application.
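Reverse engineering a RIA typically means recovering a model of its GUI states and the events that move between them. A minimal sketch of that idea is a breadth-first crawl over an event map; the states and events below are invented, and a real tool would discover them by firing events against a live application rather than reading them from a table:

```python
# Toy breadth-first exploration of an application's GUI states, in the
# spirit of recovering a RIA's navigation model; the event map is invented.
from collections import deque

EVENTS = {  # state -> {event name: resulting state}
    "login": {"submit": "home"},
    "home":  {"open_cart": "cart", "logout": "login"},
    "cart":  {"back": "home"},
}

def crawl(start):
    """Discover all reachable states and the transitions between them."""
    seen, edges = {start}, []
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for event, target in EVENTS.get(state, {}).items():
            edges.append((state, event, target))   # record the transition
            if target not in seen:                 # only enqueue new states
                seen.add(target)
                queue.append(target)
    return seen, edges

states, transitions = crawl("login")
```

The recovered state-transition model then serves double duty: it re-documents the application's navigation, and each edge becomes a candidate test case for automated testing.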

    Methodology and automated metadata extraction from multiple volume shadow copies

    Modern day digital forensics investigations rely on timelines as a principal method for normalizing and chronologically categorizing artifacts recovered from computer systems. Timelines provide investigators with a chronological representation of digital evidence so they can depict altered and unaltered digital forensics data in context to drive conclusions about system events and/or user activities. While investigators rely on many system artifacts such as file system time/date stamps, operating system artifacts, program artifacts, logs, and/or registry artifacts as input for deriving chronological representations, using only the available or most recent version of the artifacts may provide a limited picture of historical changes on a system. For instance, if previous versions of artifacts and/or previous artifact metadata changes are overwritten and/or are not retained on a system, analysis of current versions of artifacts and artifact metadata, such as time/date stamps and operating system/program/registry artifacts, may provide only a limited picture of activities for the system. Recently, the Microsoft Windows Operating System implemented a backup mechanism that is capable of retaining multiple versions of data storage units for a system, effectively providing a highly-detailed record of system changes. This backup mechanism, the Windows Volume Shadow Copy Service (VSS), exists as a service of modern Microsoft Windows Operating Systems and allows data backups to be performed while applications on a system continue to write to the system's live volume(s). This allows a running system to preserve the system's state to backup media at any given point while the system continues to change in real-time.
After multiple VSS backups are recorded, digital investigators have the ability to incorporate multiple versions of a system's artifacts into a chronological representation, which provides a more comprehensive picture of the system's historical changes. In order to effectively incorporate VSS backup, or Volume Shadow Copy (VSC), data into a chronological representation, the data must be accessed and extracted in a consistent, repeatable, and, if possible, automated manner. Previous efforts have produced a variety of manual and semi-automated methods for accessing and extracting VSC data in a repeatable manner; these methods are time-consuming and often require significant storage resources when dealing with multiple VSCs. The product of this research effort is the advancement of the methodology to automate accessing and extracting directory-tree and file attribute metadata from multiple VSCs of the Windows 7 Operating System. The approach extracts metadata from multiple VSCs and combines it into one conglomerate data set. By capturing the historical changes recorded within VSC metadata, this approach enhances timeline generation. Additionally, it supports other projects which could use the metadata to visualize change-over-time by depicting how the individual metadata and the conglomerate data set changed (or remained unchanged) throughout an arbitrary snapshot of time.
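The metadata-aggregation step described above can be sketched generically: walk the directory tree of each snapshot, record per-file attributes, and merge the records into one conglomerate set keyed by snapshot and path. On a live Windows system the snapshots would be mounted VSCs (exposed under device paths of the form `\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopyN`); here, two temporary directories stand in for two snapshots, and the record fields are a simplified assumption about what the thesis extracts:

```python
# Sketch of conglomerate metadata extraction across snapshots; two temp
# directories stand in for two Volume Shadow Copies.
import os
import tempfile

def collect_metadata(root, snapshot_id):
    """Walk one snapshot's directory tree and record file attribute metadata."""
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            records.append({"snapshot": snapshot_id,
                            "relpath": os.path.relpath(path, root),
                            "size": st.st_size,
                            "mtime": st.st_mtime})
    return records

# Simulate the same file changing between two snapshots.
snap1, snap2 = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(snap1, "log.txt"), "w") as f:
    f.write("v1")
with open(os.path.join(snap2, "log.txt"), "w") as f:
    f.write("v1 plus more")

# Conglomerate data set: the same file's metadata across both snapshots,
# ready to be sorted into a timeline of change-over-time.
conglomerate = collect_metadata(snap1, 1) + collect_metadata(snap2, 2)
```

Sorting the conglomerate set by `mtime` (or diffing records that share a `relpath`) is what turns per-snapshot metadata into the change-over-time view used for timeline generation.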

    Temporal meta-model framework for Enterprise Information Systems (EIS) development

    This thesis has developed a Temporal Meta-Model Framework for semi-automated Enterprise System Development, which can help drastically reduce the time and cost to develop, deploy and maintain Enterprise Information Systems throughout their lifecycle. It proposes that analysis and requirements gathering can also perform the bulk of the design phase, with the results stored in a suitable model that is then capable of automated execution, given a set of specific runtime components.
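The idea of a requirements model that runtime components execute directly, rather than hand-coding the system from it, can be illustrated with a toy: a data model carrying field types and validation rules, interpreted by one generic function. The entity, fields and rule below are invented and much simpler than the thesis's temporal meta-model:

```python
# Toy "model as executable specification": a generic runtime component
# builds and validates records from the model alone. All names are invented.
MODEL = {
    "entity": "LeaveRequest",
    "fields": {"days": int, "approved": bool},
    "rules": [("days", lambda v: 0 < v <= 30)],   # captured at analysis time
}

def create_record(model, **values):
    """Generic runtime: construct and validate a record using only the model."""
    record = {}
    for fname, ftype in model["fields"].items():
        value = values.get(fname, ftype())        # default-construct if absent
        if not isinstance(value, ftype):
            raise TypeError(f"{fname} must be {ftype.__name__}")
        record[fname] = value
    for fname, check in model["rules"]:
        if not check(record[fname]):
            raise ValueError(f"rule failed for {fname}")
    return record

req = create_record(MODEL, days=5)
```

Because `create_record` knows nothing about leave requests specifically, changing or adding an entity means editing the model, not the runtime, which is the maintenance saving the framework aims at.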