A Taxonomy of Supply Chain Collaboration
Supply chain collaboration has emerged as an important cooperative strategy, leading to a new focus on interorganisational boundaries as determinants of performance. Although collaboration increasingly receives attention from both practitioners and academics, relatively little attention has been given to systematically reviewing the research literature on supply chain collaboration. The purpose of this paper is to examine previous studies on supply chain collaboration based on a taxonomy. The proposed taxonomy comprises four research streams, each describing a specific aspect of interorganisational settings: information sharing, business processes, incentive schemes, and performance systems. The analysis includes an assessment of research ideas and key findings. Results show great variability of key concepts across the four components of the taxonomy and an increased awareness of complementarity amongst the research streams. Several recommendations for future research are also identified in this paper.
Keywords: supply chain collaboration, literature review, supply chain management, taxonomy
Social Media As Management Fashion - A Discourse Perspective
Social media platforms and services have rapidly grown into an important societal phenomenon, lately also with increased impact on business. The relative novelty of social media in a business context, the lack of well-grounded best practice, and the scarcity of research mean that organizational decision-makers have to rely on vendor descriptions and trade press articles to make sense of social media. Using management fashion theory and discourse analysis, we examine how a management fashion discourse on social media unfolds and enacts social media as a disruptive force that managers must address through, e.g., strategies, normative guidelines and policies. Our analysis shows that social media discourse differs somewhat from how previous IT fashions have developed, primarily because social media discourse is propelled by forces outside the company. We analyze the discourse constructs identified in the data using management fashion theory and position social media discourse as a particular form of management fashion. The ‘problem discourse’ identifies hindrances to the strategic development of social media and the reasons for their existence, which provides an agenda for change. The ‘solution discourse’ theorizes social media as a business case and provides arguments for how managers should organize internally to meet the new demands. The ‘bandwagon discourse’ provides role models, policies and codes of conduct for a successful dissemination of social media into the organization.
A Survey on Automated Program Repair Techniques
With the rapid development and large-scale popularity of software, modern society increasingly relies on software systems. However, the problems exposed by software have also come to the fore, and software defects have become a significant burden for developers. In this context, Automated Program Repair (APR) techniques have emerged, aiming to fix software defects automatically and reduce manual debugging work. In particular, benefiting from advances in deep learning, numerous learning-based APR techniques have emerged in recent years, bringing new opportunities for APR research. To give researchers a quick overview of the complete development of APR techniques and of future opportunities, we revisit the evolution of APR techniques and discuss in depth the latest advances in APR research. In this paper, the development of APR techniques is introduced in terms of four different patch generation schemes: search-based, constraint-based, template-based, and learning-based. Moreover, we propose a uniform set of criteria to review and compare each APR tool, summarize the advantages and disadvantages of APR techniques, and discuss the current state of APR development. Furthermore, we introduce research on technical areas related to APR that has also provided strong motivation to advance APR development. Finally, we analyze current challenges and future directions, especially highlighting the critical opportunities that large language models bring to APR research.
Comment: An earlier version of this paper was submitted to CSUR in August 202
Visual representation of a customizable software maintenance process model
Managing the evolution of complex and large software systems involves many different types of resources and knowledge, such as software artefacts, user expertise, tools and techniques. Variations and interrelationships among these types of resources and knowledge create well-known challenges for maintainers. Current research mainly focuses on establishing comprehension models and developing tools that tackle specific aspects of maintenance problems. Little research has been conducted on how resources and knowledge work together to guide maintainers in completing specific maintenance tasks in a given context. In this research, we introduce a customizable maintenance process model, which extends an existing IEEE standard process model, to visually link various resources (e.g. tools, artefacts, maintainers) and knowledge to relevant maintenance process elements. A visual metaphor has been created to graphically represent the process model. Finally, a tool environment has been developed that provides utilities for maintainers to create, customize and apply the maintenance process, offering guidance for their maintenance tasks.
Improving Information Retrieval Bug Localisation Using Contextual Heuristics
Software developers working on unfamiliar systems are challenged to identify where and how high-level concepts are implemented in the source code prior to performing maintenance tasks. Bug localisation is a core program comprehension activity in software maintenance: given the observation of a bug, e.g. via a bug report, where is it located in the source code?
Information retrieval (IR) approaches see the bug report as the query, and the source files as the documents to be retrieved, ranked by relevance. Current approaches rely on project history, in particular previously fixed bugs and versions of the source code. Existing IR techniques fall short of providing adequate solutions for finding all the source code files relevant to a bug. Without additional help, bug localisation can become a tedious, time-consuming and error-prone task.
My research contributes a novel algorithm that, given a bug report and the application’s source files, uses a combination of lexical and structural information to suggest, in a ranked order, files that may have to be changed to resolve the reported bug without requiring past code and similar reports.
I study eight applications for which I had access to the user guide, the source code, and some bug reports. I compare the relative importance and the occurrence of the domain concepts in the project artefacts and measure the effectiveness of using only concept key words to locate files relevant for a bug compared to using all the words of a bug report.
Measuring my approach against six others, using their five metrics and eight projects, I position an affected file in the top-1, top-5 and top-10 ranks on average for 44%, 69% and 76% of the bug reports respectively. This is an improvement of 23%, 16% and 11% respectively over the best performing current state-of-the-art tool.
Finally, I evaluate my algorithm with a range of industrial applications in user studies, and find that it is superior to the simple string search often performed by developers. These results show the applicability of my approach to software projects without history and offer a simpler, lightweight solution.
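The IR formulation above, with the bug report as the query and source files as the documents ranked by relevance, can be sketched with a minimal TF-IDF cosine-similarity ranker. This illustrates only the generic IR baseline, not the thesis's lexical-plus-structural algorithm; the file names and contents are invented.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase alphanumeric tokens; real tools also split camelCase."""
    return "".join(c.lower() if c.isalnum() else " " for c in text).split()

def tfidf_vectors(docs):
    """Build a TF-IDF weight vector for each document."""
    n = len(docs)
    df = Counter(t for doc in docs.values() for t in set(tokenize(doc)))
    return {
        name: {t: c * math.log(n / df[t]) for t, c in Counter(tokenize(doc)).items()}
        for name, doc in docs.items()
    }

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_files(bug_report, files):
    """Rank source files by similarity to the bug-report query."""
    vecs = tfidf_vectors(files)
    query = {t: float(c) for t, c in Counter(tokenize(bug_report)).items()}
    return sorted(files, key=lambda f: cosine(query, vecs[f]), reverse=True)

# Hypothetical two-file corpus; one file is clearly about login handling.
files = {
    "LoginHandler.java": "class LoginHandler validate password session timeout login",
    "ReportPrinter.java": "class ReportPrinter format page margin print",
}
ranking = rank_files("login fails after session timeout", files)
```

On this toy corpus the login-related file ranks first; history-free approaches like the one in the thesis add structural and concept information on top of this kind of lexical scoring.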
Improving Automated Software Testing while re-engineering legacy systems in the absence of documentation
Legacy software systems are essential assets that contain an organization's valuable business logic. Because of the outdated technologies and methods used in these systems, they are challenging to maintain and expand. Therefore, organizations need to decide whether to redevelop or re-engineer a legacy system. Although in most cases re-engineering is the safer and less expensive choice, it has risks such as failure to meet the expected quality and delays due to testing blockades. These risks are even more severe when the legacy system does not have adequate documentation. A comprehensive testing strategy, which includes automated tests and reliable test cases, can substantially reduce these risks. To mitigate the hazards associated with re-engineering, we have conducted three studies in this thesis to improve the testing process.
Our first study introduces a new testing model for the re-engineering process and investigates test automation solutions to detect defects in the early re-engineering stages. We implemented this model on the Cold Region Hydrological Model (CRHM) application and discovered bugs that would not likely have been found manually. Although this approach helped us discover a great number of software defects, designing test cases is very time-consuming due to the lack of documentation, especially for large systems. Therefore, in our second study, we investigated an approach to generate test cases automatically from user footprints. To do this, we extended an existing tool to collect user actions and legacy system reactions, including database and file system changes. We then analyzed the data based on the order and timing of user actions and generated human-readable test cases. Our evaluation shows that this approach can detect more bugs than other existing tools. Moreover, the test cases generated using this approach contain detailed oracles that make them suitable for both black-box and white-box testing. Many scientific legacy systems such as CRHM are data-driven; they take large amounts of data as input and produce massive outputs after applying mathematical models. Applying test cases and finding bugs is more demanding when dealing with large amounts of data. Hence, in our third study, we created a comparative visualization tool (ComVis) to compare a legacy system's output after each change. Visualization helps testers find data issues resulting from newly introduced bugs. Twenty participants took part in a user study in which they were asked to find data issues using ComVis and the embedded CRHM visualization tool. Our user study shows that ComVis can find 51% more data issues than the embedded visualization tool in the legacy system can. Also, results from the NASA-TLX assessment and thematic analysis of open-ended questions about each task show that users prefer ComVis over the built-in visualization tool. We believe our introduced approaches and developed systems will significantly reduce the risks associated with the re-engineering process.
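The output-comparison idea behind the third study can be reduced to a golden-master check: record the legacy system's output once, then flag values in each re-engineered version's output that drift beyond a tolerance. This is a minimal sketch with invented data, not the actual ComVis implementation, which adds visualization on top of such a comparison.

```python
# Golden-master comparison sketch: flag output values that drift beyond
# a tolerance after a re-engineering change.

def compare_outputs(golden, candidate, tolerance=1e-6):
    """Return (index, old, new) for every value that drifted, plus a
    sentinel entry if the outputs differ in length."""
    issues = [
        (i, old, new)
        for i, (old, new) in enumerate(zip(golden, candidate))
        if abs(old - new) > tolerance
    ]
    if len(golden) != len(candidate):
        issues.append((min(len(golden), len(candidate)), None, None))
    return issues

# Golden run recorded from the legacy system; the candidate comes from
# the re-engineered version, with one regression injected at index 2.
golden = [0.0, 1.5, 3.1, 4.7]
candidate = [0.0, 1.5, 9.9, 4.7]
issues = compare_outputs(golden, candidate)
```

For floating-point scientific output, the tolerance matters: an exact equality check would flag harmless numerical noise from compiler or library changes as regressions.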
Solving the year 2000 dilemma
Pitfalls and guidelines in the transition to object oriented software design methodologies
A research report submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering.
Due to the dynamic nature of the software engineering industry there is a constant move towards new strategies for solving design problems. More specifically, there is a move towards Object Oriented (OO) methodologies, presumably because of the various advantages offered in terms of maintainability and reuse of code produced this way. As with various other aspects of the software industry, there are, however, also problems encountered in this transition and lessons to be learned from the experience of companies who have already performed this change.
This research report investigates possible guidelines for companies who are currently contemplating a change to OO software design methodologies, by covering a collection of issues one should know about prior to this change. It also summarises the problems faced in the transition so far and the reasons for these problems, and suggests possible solutions. Lastly, it also investigates new trends in the OO arena. The emphasis is on South African companies and projects. The results obtained are compared with results obtained overseas to find out what the differences and similarities are. Areas of concern are also identified where theoreticians' views have been ignored and neither South African nor overseas companies have implemented any of the suggestions made.
Andrew Chakane 201