A Design Science Research Approach to Smart and Collaborative Urban Supply Networks
Urban supply networks face increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Because the field is heterogeneous, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness.
A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense.
Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. The thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.
Reduction of Petri net maintenance modeling complexity via Approximate Bayesian Computation
This paper is part of the ENHAnCE ITN project (https://www.h2020-enhanceitn.eu/) funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 859957. The authors would like to thank the Lloyd's Register Foundation (LRF), a charitable foundation in the U.K. helping to protect life and property by supporting engineering-related education, public engagement, and the application of research. The authors gratefully acknowledge the support of these organizations, which have enabled the research reported in this paper.
The accurate modeling of engineering systems and processes using Petri nets often results in complex graph representations that are computationally intensive, limiting the potential of this modeling tool in real-life applications. This paper presents a methodology to define the optimal structure and properties of a reduced Petri net that mimics the output of a reference Petri net model. The methodology is based on Approximate Bayesian Computation, which infers the plausible values of the reduced model's parameters in a rigorous probabilistic way. The method also provides a numerical measure of how closely the reduced structure approximates the reference, thus allowing the optimal reduced structure to be selected from a set of potential candidates. The suitability of the proposed methodology is illustrated using a simple example and a system reliability engineering case study, showing satisfactory results. The results also show that the method allows flexible reduction of the structure of the complex reference Petri net and provides numerical justification for the choice of the reduced model structure.
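The parameter-inference step described above can be sketched with a minimal Approximate Bayesian Computation rejection sampler. This is an illustrative sketch, not the paper's implementation: the reduced-model simulator, prior, distance function, and tolerance below are hypothetical stand-ins for the Petri net quantities.

```python
import random

def abc_rejection(reference_output, simulate_reduced, prior_sample,
                  distance, epsilon, n_draws=10000):
    """Approximate Bayesian Computation by rejection sampling.

    Draw candidate parameters from the prior, simulate the reduced
    model, and keep draws whose output lies within `epsilon` of the
    reference model's output under the chosen distance."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate_reduced(theta), reference_output) <= epsilon:
            accepted.append(theta)
    return accepted

# Toy example: the "reference model" output is a mean rate of 2.0;
# the reduced model has a single rate parameter to infer.
random.seed(0)
reference = 2.0
posterior = abc_rejection(
    reference_output=reference,
    simulate_reduced=lambda r: r + random.gauss(0.0, 0.1),  # noisy output
    prior_sample=lambda: random.uniform(0.0, 5.0),          # flat prior
    distance=lambda a, b: abs(a - b),
    epsilon=0.2,
)
estimate = sum(posterior) / len(posterior)
```

The spread of the accepted draws doubles as the paper's "numerical measure of the level of approximation": a candidate reduced structure whose posterior stays tightly concentrated near the reference output is a better fit than one needing a large tolerance.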
Combining shallow-water and analytical wake models for tidal-array micro-siting
For tidal-stream energy to become a competitive renewable energy source, clustering multiple turbines into arrays is paramount. Array optimisation is thus critical for achieving maximum power performance and reducing the cost of energy. However, ascertaining an optimal array layout is a complex problem, subject to site-specific hydrodynamics and multiple inter-disciplinary constraints. In this work, we present a novel optimisation approach that combines an analytical wake model, FLORIS, with an ocean model, Thetis. The approach is demonstrated through applications of increasing complexity. By utilising the method of analytical wake superposition, adding or altering a turbine position does not require re-calculation of the entire flow field, thus allowing the use of simple heuristic techniques to perform optimisation at a fraction of the computational cost of more sophisticated methods. Using a custom condition-based placement algorithm, this methodology is applied to the Pentland Firth for arrays with turbines of 3.05 m/s rated speed, demonstrating practical implications whilst considering the temporal variability of the tide. For a 24-turbine array case, micro-siting using this technique delivered an array 15.8% more productive on average than a staggered layout, despite flow speeds regularly exceeding the rated value. Performance was evaluated by assessing the optimised layout within the ocean model, which treats turbines through a discrete turbine representation. Used iteratively, this methodology could deliver improved array configurations in a manner that accounts for local hydrodynamic effects.
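The analytical wake superposition idea can be illustrated with a generic Jensen-type wake model. This is a textbook sketch under assumed parameter values, not necessarily the model configuration used by FLORIS or in this study:

```python
import math

def jensen_deficit(x, r0, ct, k=0.05):
    """Fractional velocity deficit a distance x directly downstream of a
    turbine with rotor radius r0, thrust coefficient ct, and wake-decay
    constant k (classic Jensen/Park model)."""
    if x <= 0:
        return 0.0
    return (1.0 - math.sqrt(1.0 - ct)) * (r0 / (r0 + k * x)) ** 2

def waked_speed(u_inf, upstream_distances, r0=10.0, ct=0.8):
    """Combine deficits from several upstream turbines by
    sum-of-squares superposition."""
    total = math.sqrt(sum(jensen_deficit(x, r0, ct) ** 2
                          for x in upstream_distances))
    return u_inf * (1.0 - total)

# A candidate position 200 m and 500 m downstream of two existing
# turbines, in a 3.0 m/s free stream:
u = waked_speed(3.0, [200.0, 500.0])
```

The point made in the abstract follows directly: because each turbine contributes an independent analytical deficit, evaluating a moved or added turbine only requires recomputing these closed-form terms, not re-running the flow solver.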
The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka
It was evident from the literature that the perceived value delivery of the global software engineering industry is low due to various factors. This research therefore examines global software product companies in Sri Lanka to explore the software engineering methods and practices that increase value addition. The overall aim of the study is to identify the key determinants of value addition in the global software engineering industry and to critically evaluate their impact on software product companies, helping to maximise value addition and ultimately assure the sustainability of the industry.
An exploratory research approach was used initially, since findings would emerge as the study unfolded. A mixed-methods design was employed because the literature alone was inadequate to investigate the problem effectively and formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject matter experts covering all the disciplines in the targeted organisations, and the results were combined with the literature findings as well as the outcomes of market research conducted by both government and non-government institutes. Data from the interviews were analysed using NVivo 12. The findings of the existing literature were verified through the exploratory study, and the outcomes were used to formulate the questionnaire for the public survey. After cleansing the responses received, 371 were retained for data analysis in SPSS 21 at an alpha level of 0.05. An internal consistency test was performed before the descriptive analysis. After assuring the reliability of the dataset, correlation, multiple regression, and analysis of variance (ANOVA) tests were carried out to meet the research objectives.
Five determinants of value addition were identified, along with the key themes for each area: staffing, delivery process, use of tools, governance, and technology infrastructure. Cross-functional, self-organised teams built around value streams, a properly interconnected software delivery process with the right governance in the delivery pipelines, appropriate selection of tools, and the right infrastructure increase value delivery. Conversely, the constraints on value addition are poor interconnection of internal processes, rigid functional hierarchies, inaccurate selection and use of tools, inflexible team arrangements, and inadequate focus on technology infrastructure. The findings add to the existing body of knowledge on increasing value addition through effective processes, practices, and tools, and on the impact of their inaccurate application in the global software engineering industry.
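The internal consistency test mentioned in the abstract is conventionally Cronbach's alpha. A minimal sketch on toy Likert-scale data follows; the data and the commonly cited 0.7 acceptability threshold are illustrative assumptions, not values from the study:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each column holds one questionnaire item across respondents)."""
    k = len(items)
    item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    return (k / (k - 1)) * (1.0 - item_vars / variance(totals))

# Three Likert-type items answered by five respondents:
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 3, 4, 1],
]
alpha = cronbach_alpha(items)
# Scales with alpha above roughly 0.7 are usually treated as reliable
# enough to proceed to correlation, regression, and ANOVA tests.
```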
Foundations for programming and implementing effect handlers
First-class control operators provide programmers with an expressive and efficient means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control by naming control-reifying operations and separating them from their handling.
This thesis is composed of three strands of work in which I develop operational foundations for programming and implementing effect handlers and explore the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus of a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of statically typed programming are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised. Each offers a different approach to manipulating the control state of programs. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction, I implement the essence of a small UNIX-style operating system complete with multi-user environment, time-sharing, and file I/O.
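The contrast between deep handlers (folds) and shallow handlers (case splits) can be sketched over explicit computation trees. This is a hypothetical Python encoding for illustration only, not the thesis's typed calculus:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    value: Any

@dataclass
class Op:
    name: str
    arg: Any
    cont: Callable[[Any], Any]  # resumes the computation with a result

def deep_handle(comp, clauses, ret=lambda v: v):
    """Deep handler: a fold over the computation tree. The handler is
    re-applied to whatever the resumed continuation produces."""
    if isinstance(comp, Return):
        return ret(comp.value)
    resume = lambda x: deep_handle(comp.cont(x), clauses, ret)
    return clauses[comp.name](comp.arg, resume)

def shallow_handle(comp, clauses, ret=lambda v: v):
    """Shallow handler: a case split. Only the first operation is
    handled; the continuation is handed back unhandled."""
    if isinstance(comp, Return):
        return ret(comp.value)
    return clauses[comp.name](comp.arg, comp.cont)

# A computation that performs "ask" twice and returns the sum:
prog = Op("ask", None, lambda x: Op("ask", None, lambda y: Return(x + y)))

# Deep: every "ask" in the tree receives 21.
deep_result = deep_handle(prog, {"ask": lambda _arg, k: k(21)})

# Shallow: handle only the first "ask", then handle the rest differently.
rest = shallow_handle(prog, {"ask": lambda _arg, k: k(1)})
shallow_result = deep_handle(rest, {"ask": lambda _arg, k: k(100)})
```

A parameterised handler would follow the deep shape but additionally thread an accumulator argument through `resume`, which is what lets it express stateful idioms without re-wrapping the handler by hand.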
The second strand studies continuation-passing style (CPS) and abstract machine semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The CPS translation is obtained through a series of refinements of a basic first-order CPS translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually arriving at the notion of generalised continuation, which admits simultaneous support for deep, shallow, and parameterised handlers. The initial refinement adds support for deep handlers by representing stacks of continuations and handlers as a curried sequence of arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the CPS translation is refined once more to obtain an uncurried representation of stacks of continuations and handlers. Finally, the translation is made higher-order in order to contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for deep, shallow, and parameterised effect handlers.
The third strand explores the expressiveness of effect handlers. First, I show that the deep, shallow, and parameterised notions of handlers are interdefinable by way of typed macro-expressiveness, which provides a syntactic notion of expressiveness that affirms the existence of encodings between handlers but gives no information about the computational content of the encodings. Second, using a semantic notion of expressiveness, I show that for a class of programs a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
After Creation: Intergovernmental Organizations and Member State Governments as Co-Participants in an Authority Relationship
This is a re-amalgamation of what started as one manuscript and became two when the length proved to be more than any publisher wanted to consider. The splitting consisted of removing what are now Parts 3, 4, and 5 so that the manuscript focused on the outcome-related shared beliefs holding an authority relationship together. Those parts were last worked on in 2018. The rest were last worked on in late 2021 but also remain incomplete.
The relational approach adopted in this study treats intergovernmental organizations (IGOs) and the governments of their member states as co-participants in an authority relationship. Authority relationships link two types of actor, defined by their authority-holder or addressee role in the relationship, through a set of shared beliefs about why the relationship exists and how the participants should fulfill their respective roles. The IGO as authority holder has a role that includes a right to instruct other actors about what they should or should not do; the governments of member states as addressees are expected to comply with the instructions. Three sets of shared beliefs provide the conceptual "glue" holding the relationship together. The first defines the goal of the collective effort, providing both the rationale for having the authority relationship and a lodestar for assessments of the collective effort's success or lack of success. The second defines the shared understanding about the allocation of roles and the process of interaction by establishing shared expectations about a) the selection process by which particular actors acquire authority-holder roles, b) the definitions identifying one or more categories of addressees expected to follow instructions, and c) the procedures through which the authority holder issues instructions. The third focuses on the outcomes of cooperation through the relationship by defining a) the substantive areas in which the authority holder may issue instructions, b) the bases for assessing the relevance of actions mandated in instructions for reaching the goal, and c) the relative efficacy of the chosen action paths for reaching the goal as compared to other possible action paths.
Using an authority relationship framework to analyze cooperation through IGOs highlights the inherently bi-directional nature of IGO-member government activity by viewing their interaction as a three-step process: the IGO as authority holder decides when to issue what instruction; the member state governments as followers react to the instruction with anything from prompt and full compliance through various forms of pushback to outright rejection; and the IGO as authority holder responds to the followers' reactions with efforts to increase individual compliance with instructions and reinforce continuing acceptance of the authority relationship. Foregrounding the dynamics produced by the interaction of these two streams of perception and action reveals more clearly the extent to which intergovernmental organizations acquire the capacity to operate as independent actors, the dynamic ways they maintain that capacity, and how much they influence member governments' beliefs and actions at different times. The approach fosters a better understanding of why, when, and for how long governments choose cooperation through an IGO even in periods of rising unilateralism.
Consolidation of Urban Freight Transport – Models and Algorithms
Urban freight transport is an indispensable component of economic and social life in cities. Compared to other types of transport, however, it contributes disproportionately to the negative impacts of traffic. As a result, urban freight transport is closely linked to social, environmental, and economic challenges. Managing urban freight transport and addressing these issues poses challenges not only for local city administrations but also for companies, such as logistics service providers (LSPs). Numerous policy measures and company-driven initiatives exist in the area of urban freight transport to overcome these challenges. One central approach is the consolidation of urban freight transport. This dissertation focuses on urban consolidation centers (UCCs), which are a widely studied and applied measure in urban freight transport. The fundamental idea of UCCs is to consolidate freight transport across companies in logistics facilities close to an urban area in order to increase the efficiency of vehicles delivering goods within the urban area. Although the concept has been researched and tested for several decades and has been shown to reduce the negative externalities of freight transport in cities, in practice many UCCs struggle with a lack of business participation and financial difficulties. This dissertation is primarily focused on the costs and savings associated with the use of UCCs from the perspective of LSPs. The cost-effectiveness of UCC use, also referred to as cost attractiveness, can be seen as a crucial condition for LSPs to be interested in using UCC systems. The overall objective of this dissertation is twofold. First, it aims to develop models that provide decision support for evaluating the cost-effectiveness of using UCCs. Second, it aims to analyze the impacts of urban freight transport regulations and operational characteristics on the cost attractiveness of using UCCs from the perspective of LSPs.
In this context, a distinction is made between UCCs that are jointly operated by a group of LSPs and UCCs that are operated by third parties who offer their urban transport service for a fee. The main body of this dissertation is based on three research papers. The first paper focuses on jointly-operated UCCs that are operated by a group of cooperating LSPs. It presents a simulation model to analyze the financial impacts on LSPs participating in such a scheme. In doing so, a particular focus is placed on urban freight transport regulations. A case study is used to analyze the operation of a jointly-operated UCC for scenarios involving three freight transport regulations. The second and third papers take on a different perspective on UCCs by focusing on third-party operated UCCs. In contrast to the first paper, the second and third papers present an evaluation approach in which the decision to use UCCs is integrated with the vehicle route planning of LSPs. In addition to addressing the basic version of this integrated routing problem, known as the vehicle routing problem with transshipment facilities (VRPTF), the second paper presents problem extensions that incorporate time windows, fleet size and mix decisions, and refined objective functions. To heuristically solve the basic problem and the new problem variants, an adaptive large neighborhood search (ALNS) heuristic with embedded local search heuristic and set partitioning problem (SPP) is presented. Furthermore, various factors influencing the cost attractiveness of UCCs, including time windows and usage fees, are analyzed using a real-world case study. The third paper extends the work of the second paper and incorporates daily and entrance-based city toll schemes and enables multi-trip routing. A mixed-integer linear programming (MILP) formulation of the resulting problem is proposed, as well as an ALNS solution heuristic. 
Moreover, a real-world case study with three European cities is used to analyze the impact of the two city toll systems in different operational contexts.
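The ALNS scheme referred to above can be sketched as a destroy-and-repair loop with adaptive operator weights. The toy one-dimensional routing instance and the weight-update rule below are illustrative assumptions, not the dissertation's heuristic or its VRPTF operators:

```python
import random

def alns_minimize(initial, destroy_ops, repair_ops, cost,
                  iters=500, reaction=0.2, seed=1):
    """Minimal adaptive large neighbourhood search: pick destroy and
    repair operators with probability proportional to their weights,
    accept improvements, and reward operators that found them."""
    rng = random.Random(seed)
    w_d = [1.0] * len(destroy_ops)
    w_r = [1.0] * len(repair_ops)
    best = cur = initial
    best_c = cur_c = cost(initial)
    for _ in range(iters):
        di = rng.choices(range(len(destroy_ops)), weights=w_d)[0]
        ri = rng.choices(range(len(repair_ops)), weights=w_r)[0]
        cand = repair_ops[ri](destroy_ops[di](cur, rng), rng)
        cand_c = cost(cand)
        score = 0.0
        if cand_c < best_c:
            best, best_c, score = cand, cand_c, 3.0  # new global best
        if cand_c < cur_c:
            cur, cur_c = cand, cand_c
            score = max(score, 1.0)                  # improving move
        # Exponential smoothing of operator weights (roulette wheel).
        w_d[di] = (1 - reaction) * w_d[di] + reaction * (1 + score)
        w_r[ri] = (1 - reaction) * w_r[ri] + reaction * (1 + score)
    return best, best_c

# Toy instance: sequence 8 stops on a line to minimise travelled distance.
points = [5, 1, 7, 2, 8, 3, 0, 6]

def tour_len(order):
    return sum(abs(order[i + 1] - order[i]) for i in range(len(order) - 1))

def destroy(order, rng):
    kept = order[:]
    removed = [kept.pop(rng.randrange(len(kept))) for _ in range(3)]
    return kept, removed

def repair(state, rng):
    kept, removed = state
    for p in removed:  # cheapest insertion of each removed stop
        i = min(range(len(kept) + 1),
                key=lambda i: tour_len(kept[:i] + [p] + kept[i:]))
        kept = kept[:i] + [p] + kept[i:]
    return kept

best, best_c = alns_minimize(points, [destroy], [repair], tour_len)
```

In the papers' setting the destroy and repair operators would remove and reinsert customer requests across direct routes and UCC-served routes, with the cost function capturing tolls, usage fees, and fleet costs.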