Coordination in Business Process Offshoring
We investigate coordination strategies in the remote delivery of business services (i.e. Business Process Offshoring). We analyze 126 surveys of offshored processes to understand both the sources of difficulty in the remote delivery of services and how organizations overcome these difficulties. We find that interdependence between offshored and onshore processes can lower offshore process performance. Investment in coordination mechanisms such as modularity, ongoing communication and the generation of common ground across locations ameliorates the performance impact of interdependence. In particular, we are able to show that building common ground – knowledge that is shared and known to be shared – across locations is a coordination mechanism distinct from building communication channels or modularising processes. Our results also suggest that firms may be investing less in common ground than they should.
Keywords: coordination; offshoring; modularity; common ground; interdependence
The imperfect hiding: some introductory concepts and preliminary issues on modularity
In this work we present a critical assessment of some problems and open questions on the debated notion of modularity. Modularity is greatly in fashion nowadays, being often proposed as the new approach to complex artefact production that makes it possible to combine a fast innovation pace, enhanced product variety and a reduced need for co-ordination. In line with recent critical assessments of the managerial literature on modularity, we argue that modularity is only one among several arrangements for coping with the complexity inherent in most high-technology artefact production, and by no means the best one. We first discuss the relations between modularity and the broader (and much older within economics) notion of division of labour. We then argue that a modular approach to labour division aimed at eliminating technological interdependencies between components or phases of a complex production process may have, as a by-product, the creation of other types of interdependencies, which may subsequently result in inefficiencies of various types. Hence, the choice of a modular design strategy implies the resolution of various tradeoffs. Depending on how such tradeoffs are solved, different organisational arrangements may be created to cope with ‘residual’ interdependencies. There is thus no need to postulate a perfect isomorphism, as some recent literature has proposed, between modularity at the product level and modularity at the organisational level.
The Accounting Network: how financial institutions react to systemic crisis
The role of Network Theory in the study of the financial crisis has been widely explored in recent years: it has been shown how the network topology, and the dynamics running on top of it, can trigger the outbreak of large systemic crises. Following this methodological perspective, we introduce the Accounting Network, i.e. the network we can extract through vector-similarity techniques from companies' financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001-2013, covering the onset of the global financial crisis of mid-2007. After a careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) against the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably, sensible regional aggregations show up, with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks towards similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence the communities' heterogeneity. Even using the most basic vector-similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how the Accounting Network can be improved to reflect best practices in financial statement analysis.
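The core construction described in this abstract, turning each bank's financial statement into a vector and linking banks whose vectors are similar, can be sketched in a few lines. The cosine similarity measure, the 0.9 threshold, and the toy balance-sheet items below are assumptions for illustration; the paper's actual similarity technique and Quality Ratio filtering are not reproduced here.

```python
import numpy as np

def accounting_network(statements, threshold=0.9):
    """Build an adjacency matrix linking companies whose financial-statement
    vectors have cosine similarity at or above the given threshold."""
    X = np.asarray(statements, dtype=float)
    # Normalise each statement vector to unit length (guarding against zeros).
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X / np.where(norms == 0, 1, norms)
    sim = X @ X.T                        # pairwise cosine similarities
    adj = (sim >= threshold).astype(int)
    np.fill_diagonal(adj, 0)             # no self-loops
    return adj

# Toy example: three banks described by four hypothetical balance-sheet items.
banks = [[1.0, 2.0, 0.5, 0.1],
         [1.1, 2.1, 0.4, 0.1],   # profile close to the first bank
         [0.1, 0.2, 5.0, 3.0]]   # very different profile
print(accounting_network(banks))  # banks 0 and 1 are linked; bank 2 is isolated
```

Community detection (e.g. the Louvain method mentioned above) would then run on the resulting graph rather than on the raw statement vectors.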
Enhancing community detection using a network weighting strategy
A community within a network is a group of vertices densely connected to each other but only sparsely connected to the vertices outside. The problem of detecting communities in large networks plays a key role in a wide range of research areas, e.g. Computer Science, Biology and Sociology. Most existing algorithms for finding communities rely on the topological features of the network and often do not scale well to large, real-life instances.
In this article we propose a strategy to enhance existing community detection algorithms by adding a pre-processing step in which edges are weighted according to their centrality with respect to the network topology. In our approach, the centrality of an edge reflects its contribution to making arbitrary graph traversals, i.e., the spreading of messages over the network, as short as possible. Our strategy effectively complements information about network topology and can be used as an additional tool to enhance community detection. The computation of edge centralities is carried out by performing multiple random walks of bounded length on the network, which makes the computation feasible even on large-scale networks. The strategy has been tested in conjunction with three state-of-the-art community detection algorithms, namely the Louvain method, COPRA and OSLOM. Experimental results show that our method raises the accuracy of existing algorithms on both synthetic and real-life datasets.
Comment: 28 pages, 2 figures
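The pre-processing idea described in this abstract can be sketched as follows. The traversal-count proxy below (weighting each edge by how often bounded random walks cross it) is an assumed stand-in for the paper's actual edge-centrality definition, and the walk counts, walk length and NetworkX Louvain call are illustrative choices, not the authors' implementation.

```python
import random
from collections import Counter
import networkx as nx

def random_walk_edge_weights(G, walks_per_node=50, walk_len=4, seed=0):
    """Weight each edge by how often bounded-length random walks traverse it,
    a rough proxy for the edge-centrality pre-processing step."""
    rng = random.Random(seed)
    counts = Counter()
    for start in G.nodes():
        for _ in range(walks_per_node):
            u = start
            for _ in range(walk_len):
                nbrs = list(G[u])
                if not nbrs:
                    break
                v = rng.choice(nbrs)
                counts[frozenset((u, v))] += 1
                u = v
    for u, v in G.edges():
        G[u][v]["weight"] = 1 + counts[frozenset((u, v))]
    return G

# Two 4-cliques joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from((a, b) for a in range(4) for b in range(a + 1, 4))
G.add_edges_from((a, b) for a in range(4, 8) for b in range(a + 1, 8))
G.add_edge(3, 4)  # bridge between the cliques
random_walk_edge_weights(G)
# Community detection then runs on the weighted graph.
parts = nx.community.louvain_communities(G, weight="weight", seed=1)
print(sorted(sorted(c) for c in parts))
```

The point of the pre-processing step is that the downstream algorithm (here Louvain) is unchanged; only the edge weights it consumes are enriched with traversal information.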
Communities, Knowledge Creation, and Information Diffusion
In this paper, we examine how patterns of scientific collaboration contribute to knowledge creation. Recent studies have shown that scientists can benefit from their position within collaborative networks by being able to receive more information of better quality in a timely fashion, and by presiding over communication between collaborators. Here we focus on the tendency of scientists to cluster into tightly-knit communities, and discuss the implications of this tendency for scientific performance. We begin by reviewing a new method for finding communities, and we then assess its benefits in terms of computation time and accuracy. While communities often serve as a taxonomic scheme to map knowledge domains, they also affect how successfully scientists engage in the creation of new knowledge. By drawing on the longstanding debate on the relative benefits of social cohesion and brokerage, we discuss the conditions that facilitate collaborations among scientists within or across communities. We show that successful scientific production occurs within communities when scientists have cohesive collaborations with others from the same knowledge domain, and across communities when scientists intermediate among otherwise disconnected collaborators from different knowledge domains. We also discuss the implications of communities for information diffusion, and show how traditional epidemiological approaches need to be refined to take knowledge heterogeneity into account and preserve the system's ability to promote creative processes of novel recombinations of ideas.
A Longitudinal Study of Identifying and Paying Down Architectural Debt
Architectural debt is a form of technical debt that derives from the gap between the architectural design of the system as it "should be" compared to "as it is". We measured architecture debt in two ways: 1) in terms of system-wide coupling measures, and 2) in terms of the number and severity of architectural flaws. Recent work has shown that the amount of architectural debt has a huge impact on software maintainability and evolution. Consequently, detecting and reducing the debt is expected to make software more amenable to change. This paper reports on a longitudinal study of a healthcare communications product created by Brightsquid Secure Communications Corp. This start-up company faces the typical trade-off of desiring responsiveness to change requests while wanting to avoid the ever-increasing effort that the accumulation of quick-and-dirty changes eventually incurs. In the first stage of the study, we analyzed the status of the "before" system, which indicated the impacts of change requests. This initial study motivated a more in-depth analysis of architectural debt, the results of which were used to motivate a comprehensive refactoring of the software system. The third phase of the study was a follow-on architectural debt analysis which quantified the improvements made. Using this quantitative evidence, augmented by qualitative evidence gathered from in-depth interviews with Brightsquid's architects, we present lessons learned about the costs and benefits of paying down architecture debt in practice.
Comment: Submitted to ICSE-SEIP 201
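One widely used system-wide coupling measure of the kind mentioned in this abstract is propagation cost: the fraction of module pairs connected directly or transitively in the dependency graph. Whether this is the exact measure used in the Brightsquid study is not stated here, so the sketch below is an assumed example of the general technique, with a hypothetical four-module dependency map.

```python
from itertools import product

def propagation_cost(deps):
    """Propagation cost: the share of ordered module pairs in which one module
    depends on the other directly or transitively. Assumes >= 2 modules."""
    modules = sorted(deps)
    # Transitive closure via a Floyd-Warshall-style reachability sweep.
    reach = {m: set(deps[m]) for m in modules}
    for k, i in product(modules, repeat=2):  # k is the outer (pivot) loop
        if k in reach[i]:
            reach[i] |= reach[k]
    n = len(modules)
    total = sum(len(reach[m] - {m}) for m in modules)
    return total / (n * (n - 1))

# Hypothetical dependency map: ui -> core -> db, util stands alone.
deps = {"ui": {"core"}, "core": {"db"}, "db": set(), "util": set()}
print(propagation_cost(deps))  # ui reaches core and db, core reaches db: 3/12
```

Tracking a score like this before and after a refactoring gives a single number for whether the change actually reduced system-wide coupling.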
Capability Coordination in Modular Organization: Voluntary FS/OSS Production and the Case of Debian GNU/Linux
The paper analyzes the voluntary Free Software/Open Source Software (FS/OSS) organization of work. The empirical setting considered is the Debian GNU/Linux operating system. The paper finds that the production process is hierarchical notwithstanding the modular (nearly decomposable) architecture of the software and of voluntary FS/OSS organization. But voluntary FS/OSS project organization is not hierarchical for the same reasons suggested by the most familiar theories of economic organization: hierarchy is justified for the coordination of continuous change, rather than for the direction of static production. Hierarchy is ultimately the overhead attached to the benefits engendered by modular organization.
Keywords: modularity, hierarchy, capabilities, coordination costs, software