228 research outputs found

    Design and Development of a CodeIgniter-Based E-Inventory System for Ship Spare Parts at PT Pelayaran Nasional Sandico Ocean Line Batam

    With the rapid advancement of information technology, computers have become indispensable tools in many fields. Within a company, an inventory system is needed to keep operational activities running smoothly. An inventory is a complete list of the items owned by an office, institution, factory, or company that are used to carry out specific tasks. PT Pelayaran Nasional Sandico Ocean Line Batam still records its ship spare-part inventory manually, so a web-based information system is proposed to replace the existing process. The website is developed in PHP to produce dynamic and interactive pages, with CodeIgniter as the framework. CodeIgniter was selected because it provides tools that support faster development, a stable structure, and easy maintenance: developers can manage complex PHP code, integrate components seamlessly, and achieve useful functionality with simple configuration.
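    To make the inventory idea concrete, here is a minimal sketch of a spare-part record with the usual stock operations. It is written in Python rather than the PHP/CodeIgniter stack the thesis actually uses, and all field names and sample data below are hypothetical:

        # Hypothetical sketch of an e-inventory record; the thesis implements
        # this in PHP/CodeIgniter, not Python.
        from dataclasses import dataclass

        @dataclass
        class SparePart:
            code: str      # part number
            name: str
            quantity: int
            vessel: str    # ship the part is stocked for

        inventory: dict[str, SparePart] = {}

        def add_part(part: SparePart) -> None:
            inventory[part.code] = part

        def adjust_stock(code: str, delta: int) -> None:
            inventory[code].quantity += delta

        add_part(SparePart("SP-001", "Fuel injector", 12, "Vessel A"))
        adjust_stock("SP-001", -2)
        print(inventory["SP-001"])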

    State of Refactoring Adoption: Towards Better Understanding Developer Perception of Refactoring

    Context: Refactoring is the art of improving the structural design of a software system without altering its external behavior. Today, refactoring is a well-established and disciplined software engineering practice that has attracted a significant amount of research, most of which presumes that refactoring is primarily motivated by the need to improve system structure. However, recent studies have shown that developers may incorporate refactoring into other development activities that go beyond improving the design, especially given the emerging challenges of contemporary software engineering. Unfortunately, these studies are limited to developer interviews and a small set of projects. Objective: We explore how developers document their refactoring activities during the software life cycle. We call such activity Self-Affirmed Refactoring (SAR): an indication in the commit message of developer-reported refactoring events. We then propose an approach to identify whether a commit describes developer-reported refactoring events and to classify them according to common quality-improvement categories. To complement this goal, we aim to reveal how reviewers decide whether to accept or reject a submitted refactoring request, what makes such reviews challenging, and how to improve the efficiency of refactoring code review. Method: Our empirically driven study follows a mixture of qualitative and quantitative methods. We text-mine refactoring-related documentation, develop a refactoring taxonomy, automatically classify a large set of commits containing refactoring activities, and identify, among the quality models presented in the literature, those most in line with the developers' vision of quality optimization when they explicitly mention that they are refactoring to improve them, thereby gaining an enhanced understanding of the motivation behind refactoring. We then performed an industrial case study with professional developers at Xerox to study the motivations, documentation practices, challenges, verification, and implications of refactoring activities during code review. Result: We introduce a SAR taxonomy of how developers document their refactoring strategies in commit messages and propose a SAR model that automates the detection of such refactoring. Our survey of code reviewers reveals several difficulties in understanding refactoring intent and its implications for the functional and non-functional aspects of the software. Conclusion: Our SAR taxonomy and model can work in conjunction with refactoring detectors to report early inconsistencies between refactoring types and their documentation, and can serve as a solid foundation for further empirical investigations. In light of the industrial case study's findings, we recommend a procedure for properly documenting refactoring activities, derived from our survey feedback.
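    As a minimal illustrative sketch, the detection step can be imagined as keyword matching over commit messages, with matches bucketed into coarse quality-improvement categories. This is not the authors' SAR model, which is built from mined commit data; the category names and pattern lists below are hypothetical examples:

        # Hypothetical keyword-based SAR detector; illustrative only.
        import re

        SAR_PATTERNS = {
            "internal quality": re.compile(r"\b(refactor\w*|restructur\w*|clean.?up)\b", re.I),
            "performance": re.compile(r"\b(optimi[sz]\w*|speed.?up)\b", re.I),
            "readability": re.compile(r"\b(renam\w*|simplif\w*|readab\w*)\b", re.I),
        }

        def classify_commit(message: str) -> list[str]:
            """Return the SAR categories whose patterns match the message."""
            return [cat for cat, pat in SAR_PATTERNS.items() if pat.search(message)]

        print(classify_commit("Refactor parser and rename helpers for readability"))
        # -> ['internal quality', 'readability']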

    Understanding Programmers' Working Context by Mining Interaction Histories

    Understanding how software developers do their work is an important first step toward improving their productivity. Previous research has generally focused either on laboratory experiments or on coarse-grained industrial case studies; studies that seek a fine-grained understanding of industrial programmers working in a realistic context remain limited. In this work, we propose to use interaction histories, that is, finely detailed records of developers' interactions with their IDE, as our main source of information for understanding programmers' work habits. We develop techniques to capture, mine, and analyze interaction histories, and we present two industrial case studies showing how this approach helps us understand industrial programmers' work at a detailed level: we explore how the basic characteristics of software maintenance task structures can be better understood, show how latent dependencies between program artifacts can be detected at interaction time, and show how patterns of interaction coupling can be identified. We also examine the link between programmer interactions and contextual factors of software development, such as the nature of the task being performed, the design of the software system, and the expertise of the developers. In particular, we explore how task boundaries can be detected automatically from interaction histories, how system design and developer expertise may affect interaction coupling, and whether newcomer and expert developers differ in their interaction history patterns. These findings can help us reason about the multidimensional nature of software development, detect potential problems concerning task, design, expertise, and other contextual factors, and build smarter tools that exploit the inherent patterns within programmer interactions to provide improved support for task-aware and expertise-aware software development.
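    A minimal sketch of one of the ideas above, interaction coupling: two artifacts count as coupled when the developer touches them within the same time window. The event format and the 120-second window are simplifying assumptions, not the thesis's actual instrumentation:

        # Hypothetical simplification of an interaction history: a list of
        # (timestamp in seconds, artifact touched in the IDE) events.
        from collections import Counter
        from itertools import combinations

        events = [
            (0, "Order.java"), (40, "OrderTest.java"), (75, "Order.java"),
            (400, "Invoice.java"), (430, "Order.java"),
        ]

        def interaction_coupling(events, window=120):
            """Count artifact pairs touched within `window` seconds of each other."""
            pairs = Counter()
            for (t1, a1), (t2, a2) in combinations(sorted(events), 2):
                if a1 != a2 and t2 - t1 <= window:
                    pairs[tuple(sorted((a1, a2)))] += 1
            return pairs

        print(interaction_coupling(events).most_common())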

    A Technique for Evaluating the Health Status of a Software Module Using Process Metrics

    Identifying error-prone files in large software systems is an area that has received significant attention over the years. In this thesis, we propose a process-metrics-based method for predicting the health status of a file from its commit profile in its GitHub repository. Specifically, for each file and each bug-fixing commit the file participates in, we compute a dependency score between the committed file and its co-committed files. The score is decayed whenever the file does not participate in subsequent bug-fixing commits. By examining the trend of each file's dependency score over time, we try to deduce whether the file will participate in a bug-fixing commit in the immediately following commits. The approach has been evaluated on 21 medium to large open-source systems by correlating the dependency-metric trend against known outcomes from a data set we use as a gold standard.
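    The scoring idea described above can be sketched as follows; the decay factor of 0.9 and the per-commit increment (the number of co-committed files) are hypothetical choices, not the thesis's exact parameters:

        # Sketch of the decayed co-commit dependency score: each bug-fixing
        # commit raises a file's score by its number of co-committed files,
        # and the score decays while the file sits out later fixes.
        from collections import defaultdict

        def dependency_trend(bug_fix_commits, decay=0.9):
            """bug_fix_commits: chronological list of sets of files changed together."""
            score = defaultdict(float)
            trend = []
            for commit in bug_fix_commits:
                for f in list(score):
                    if f not in commit:
                        score[f] *= decay          # decay for non-participating files
                for f in commit:
                    score[f] += len(commit) - 1    # coupling with co-committed files
                trend.append(dict(score))          # snapshot after this commit
            return trend

        commits = [{"a.c", "b.c"}, {"a.c", "c.c", "d.c"}, {"b.c", "c.c"}]
        for step in dependency_trend(commits):
            print(step)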

    Empirically-Grounded Construction of Bug Prediction and Detection Tools

    There is an increasing demand for high-quality software, as software bugs have an economic impact not only on software projects but also on national economies in general. Software quality is achieved via the main quality assurance activities of testing and code review. However, these activities are expensive and thus need to be carried out efficiently. Auxiliary software quality tools such as bug detection and bug prediction tools help developers focus their testing and reviewing on the parts of the software most likely to contain bugs. However, these tools are far from being adopted as mainstream development tools. Previous research points to their inability to adapt to the peculiarities of projects and their high rate of false positives as the main obstacles to adoption. We propose empirically grounded analysis to improve the adaptability and efficiency of bug detection and prediction tools. For a bug detector to be efficient, it needs to detect bugs that are conspicuous, frequent, and specific to a software project. We empirically show that null-related bugs fulfill these criteria and are worth building detectors for. We analyze the null dereferencing problem and find that its root cause lies in methods that return null. We propose an empirical solution to this problem that relies on the wisdom of the crowd: for each API method, we extract a nullability measure expressing how often the method's return value is checked against null in the API's ecosystem. We use nullability to annotate API methods with nullness annotations and to warn developers about missing and excessive null checks. For a bug predictor to be efficient, it needs to be optimized both as a machine learning model and as a software quality tool. We empirically show how feature selection and hyperparameter optimization improve prediction accuracy. We then optimize bug prediction to locate the maximum number of bugs in the minimum amount of code by finding the most cost-effective combination of bug prediction configurations, i.e., dependent variables, machine learning model, and response variable. We show that using both source code and change metrics as dependent variables, applying feature selection to them, and then using an optimized Random Forest to predict the number of bugs yields the most cost-effective bug predictor. Throughout this thesis, we show how empirically grounded analysis helps us build efficient bug prediction and detection tools and adapt them to the characteristics of each software project.
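    A minimal sketch, assuming scikit-learn, of the kind of pipeline the thesis describes: feature selection over per-file metrics followed by a Random Forest predicting the number of bugs. The data are synthetic placeholders and the hyperparameters are illustrative, not the thesis's tuned values:

        # Synthetic data standing in for per-file source code and change metrics.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import Pipeline

        rng = np.random.default_rng(0)
        X = rng.random((200, 12))         # 200 files, 12 metrics
        y = rng.poisson(2, size=200)      # number of bugs per file

        pipeline = Pipeline([
            ("select", SelectKBest(score_func=f_regression, k=6)),  # feature selection
            ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
        ])
        scores = cross_val_score(pipeline, X, y, cv=5, scoring="neg_mean_absolute_error")
        print("MAE per fold:", -scores)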

    Exchange rate regime and exchange rate performance: evidence from East Asia

    This thesis is intended as part of a vigorous debate currently under way in the international community over exchange rate regimes, monetary policy, and related core issues in East Asian economies. From different angles, it contributes to the related literature and provides fresh theoretical arguments and a comprehensive study of exchange rate regimes and exchange rate performance in East Asia. The thesis first investigates the performance and characteristics of exchange rate regimes in a group of East Asian economies during the 1990s; the determination of local currencies, the flexibility of exchange rate regimes, and the regional coordination of exchange rate management are thoroughly examined. It then considers the implications of exchange rate regimes for monetary policy, examining whether the adoption of a new exchange rate regime affected monetary autonomy, measured by the sensitivity of domestic interest rates to international interest rates under different currency regimes, in selected East Asian economies during 1994-2004. One aspect of the choice of exchange rate regime is its implication for the magnitude of exchange rate volatility and the transmission of that volatility to other countries in the region. The thesis therefore carries out an empirical investigation of exchange rate volatility and cross-country contagion/spillover effects within foreign exchange markets for a group of East Asian countries in the context of the 1997/98 financial crisis. In addition, it investigates the measurement of foreign exchange market pressure (EMP) and currency crisis proneness, and examines the interrelations between exchange market pressure and monetary policy; in particular, the post-crisis interactions among EMP, domestic credit growth, and the differential between domestic and foreign interest rates are investigated for a representative group of East Asian countries. Finally, the thesis provides further evidence on the relationship between stock prices and exchange rates in the typical case of Hong Kong, determining what kind of causality prevailed over the period 1995-2001; based on high-frequency weekly data, both the long-run and short-run dynamics between stock prices and exchange rates in Hong Kong are addressed. Various forms of evidence and empirical techniques are extensively applied and evaluated for the specific questions addressed in this research, including Ordinary Least Squares (OLS), the Generalised Method of Moments (GMM), Generalised Autoregressive Conditional Heteroskedasticity (GARCH), Exponential GARCH (EGARCH), Vector Autoregressions (VAR) and their Impulse Response Functions (IRF), unit root tests, cointegration, and Granger causality tests. The data sets and sample periods employed provide an interesting comparison with existing studies. The main findings and key ideas drawn from this research have important implications for policy makers on exchange rate management, and the comprehensive application of various econometric methodologies provides valuable insight into the characteristics and patterns of East Asian foreign exchange markets.
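    A minimal sketch, assuming the statsmodels library, of a Granger causality test between stock prices and exchange rates in the spirit of the Hong Kong analysis; the series below are synthetic placeholders, not the thesis's data:

        # Synthetic weekly series; the test asks whether the second column
        # (exchange rate changes) Granger-causes the first (stock returns).
        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(1)
        exchange_rate = rng.normal(size=300).cumsum()
        # Make stock prices partly driven by the lagged exchange rate.
        stock_price = 0.5 * np.roll(exchange_rate, 1) + rng.normal(size=300).cumsum()

        # Work on first differences so the series are closer to stationary.
        data = np.column_stack([np.diff(stock_price), np.diff(exchange_rate)])
        grangercausalitytests(data, maxlag=4)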
    • …