
    Measuring Social Value Orientation

    Narrow self-interest is often used as a simplifying assumption when studying people making decisions in social contexts. Nonetheless, people exhibit a wide range of different motivations when choosing unilaterally among interdependent outcomes. Measuring the magnitude of the concern people have for others, sometimes called Social Value Orientation (SVO), has been an interest of many social scientists for decades, and several different measurement methods have been developed so far. Here we introduce a new measure of SVO that has several advantages over existing methods. A detailed description of the new measurement method is presented, along with norming data that provide evidence of its solid psychometric properties. We conclude with a brief discussion of the research streams that would benefit from a more sensitive and higher-resolution measure of SVO, and extend an invitation to others to use this new measure, which is freely available.
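    To make the idea of an orientation measure concrete, the following is a minimal sketch, not the authors' published scoring procedure: it summarises a respondent's self/other allocation choices as a single angle, where larger angles indicate greater concern for the other party. The reference point and the example responses are assumptions for illustration.

        # Minimal sketch (Python): an SVO-style angle computed from allocation choices.
        # The reference point (50, 50) and the example responses are hypothetical.
        import math

        def svo_angle(allocations):
            """allocations: list of (payoff_to_self, payoff_to_other) choices."""
            mean_self = sum(s for s, _ in allocations) / len(allocations)
            mean_other = sum(o for _, o in allocations) / len(allocations)
            # Angle of the mean allocation relative to the reference point;
            # larger angles reflect stronger weight on the other's payoff.
            return math.degrees(math.atan2(mean_other - 50.0, mean_self - 50.0))

        choices = [(85, 85), (90, 70), (80, 95)]  # hypothetical, roughly prosocial responses
        print(round(svo_angle(choices), 1))       # about 43.6 degrees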

    Towards a General Theory of Financial Control for Organisations

    In this paper, a theory of accounting, control and accounting-related areas is outlined. It is based on a number of previous research-oriented books published over several decades and the author's own experiences from internal and external processes with organisations in focus. The consistency and integrative power of the ideas have been tested against certain books in various fields outside the core of the subject: theatre, sociology, applied systems theory, economic history, institutional theory and economics. The general approach can be described in simple terms as follows. There are global value chains, from resources to outputs that are in use. These chains change with time. Uncertainty and unpredictability prevail for the present state and for possible changes; to some extent it is possible to estimate risks of the future. At any moment, each organisation has taken some limited position on a chain. Each organisation has a hierarchy which lies above operations. Over time, chains, organisations, hierarchies, outputs and personal functions vary. According to the approach, insights into control problems for every organisation and system can be gained by analysing relationships between global value chains and a hierarchy of one or several organisations. Time is crucial.
    Keywords: financial control; management control; public administration; financial entities; financial reporting; dependencies; function-driven organisations; pay-driven organisations; transfer-driven organisations; supervisory boards; mass media; auditors; natural systems; panarchy; pseudo-commercial units; inter-organisational control; long-term control; short-term effects; hierarchies; global value chains; vertical control; horizontal control; corporate governance; remote control; controllability; transparency; values-in-use; values-in-exchange; fair values; historical costing; opportunity costs; product costing; transfer pricing; local optimization; time-bound optimization; longitudinal relationships.

    Self-organising agent communities for autonomic resource management

    The autonomic computing paradigm addresses the operational challenges presented by increasingly complex software systems by proposing that they be composed of many autonomous components, each responsible for the run-time reconfiguration of its own dedicated hardware and software components. Consequently, regulation of the whole software system becomes an emergent property of local adaptation and learning carried out by these autonomous system elements. Designing appropriate local adaptation policies for the components of such systems remains a major challenge. This is particularly true where the system’s scale and dynamism compromise the efficiency of a central executive and/or prevent components from pooling information to achieve a shared, accurate evidence base for their negotiations and decisions. In this paper, we investigate how a self-regulatory system response may arise spontaneously from local interactions between autonomic system elements tasked with adaptively consuming/providing computational resources or services when the demand for such resources is continually changing. We demonstrate that system performance is not maximised when all system components are able to freely share information with one another. Rather, maximum efficiency is achieved when individual components have only limited knowledge of their peers. Under these conditions, the system self-organises into appropriate community structures. By maintaining information flow at the level of communities, the system remains stable enough to efficiently satisfy service demand in resource-limited environments, and thus minimises unnecessary reconfiguration, whilst remaining sufficiently adaptive to reconfigure when service demand changes.
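    As a rough, assumption-laden illustration of the limited-knowledge setting described above (it does not implement the paper's adaptation policies), the sketch below gives each component a small fixed set of known peers and checks how often demand can be met from that local view alone; all sizes and rates are made up.

        # Sketch (Python): components request resources only from a limited set of
        # known peers rather than from the whole system. Parameters are illustrative.
        import random

        random.seed(1)
        N_AGENTS, KNOWN_PEERS, ROUNDS = 20, 3, 50

        capacity = {i: random.randint(1, 5) for i in range(N_AGENTS)}   # units each agent can provide
        neighbours = {i: random.sample([j for j in range(N_AGENTS) if j != i], KNOWN_PEERS)
                      for i in range(N_AGENTS)}                         # limited local view of peers

        satisfied = requests = 0
        for _ in range(ROUNDS):
            consumer = random.randrange(N_AGENTS)
            demand = random.randint(1, 4)
            requests += 1
            # Ask only known peers, not every component in the system.
            if sum(capacity[p] for p in neighbours[consumer]) >= demand:
                satisfied += 1

        print(f"demand met from local peers in {satisfied} of {requests} rounds")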

    Taking a “Deep Dive”: What Only a Top Leader Can Do

    Unlike most historical accounts of strategic change inside large firms, empirical research on strategic management rarely uses the day-to-day behaviors of top executives as the unit of analysis. By examining the resource allocation process closely, we introduce the concept of a deep dive: an intervention in which top management seizes hold of the substantive content of a strategic initiative and its operational implementation at the project level, as a way to drive new behaviors that enable an organization to shift its performance trajectory into new dimensions unreachable with any of the previously described forms of intervention. We illustrate the power of this previously underexplored change mechanism with a case study in which a well-established firm overcame barriers to change that were manifest in a wide range of organizational routines and behavioral norms fostered by the pre-existing structural context of the firm.
    Keywords: Strategic Change, Resource Allocation Process, Top-down Intervention

    Flexible provisioning of Web service workflows

    Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed service descriptions are accurate and deterministic, and so have neglected to account for the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed in a more flexible manner than has so far been considered, in order to proactively deal with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
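    A minimal sketch, under an assumed independent-failure model, of the general idea of failure-aware provisioning (not the paper's heuristic strategy): provision enough redundant candidate services for each workflow task that the task reaches a target success probability. The task names, failure rates and target value are hypothetical.

        # Sketch (Python): redundant provisioning sized by predicted failure probability.
        import math

        def instances_needed(p_fail, target=0.99):
            """Smallest n such that 1 - p_fail**n >= target, assuming independent failures."""
            if p_fail <= 0:
                return 1
            return max(1, math.ceil(math.log(1.0 - target) / math.log(p_fail)))

        predicted_failure = {"discover": 0.30, "book": 0.10, "confirm": 0.05}  # hypothetical tasks
        plan = {task: instances_needed(p) for task, p in predicted_failure.items()}
        print(plan)  # e.g. {'discover': 4, 'book': 2, 'confirm': 2}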

    Cloud Workload Allocation Approaches for Quality of Service Guarantee and Cybersecurity Risk Management

    It has become a dominant trend in industry to adopt cloud computing (thanks to its unique advantages in flexibility, scalability, elasticity and cost efficiency) for providing online cloud services over the Internet using large-scale data centers. In the meantime, the relentless increase in demand for affordable and high-quality cloud-based services, for individuals and businesses, has led to tremendously high power consumption and operating expense, and has thus posed pressing challenges for cloud service providers in finding efficient resource allocation policies. Allowing several services or Virtual Machines (VMs) to share the cloud's infrastructure enables cloud providers to optimize resource usage, power consumption, and operating expense. However, sharing servers among users and VMs causes performance degradation and results in cybersecurity risks. Consequently, developing efficient and effective resource management policies that make appropriate decisions to optimize the trade-offs among resource usage, service quality, and cybersecurity loss plays a vital role in the sustainable future of cloud computing. In this dissertation, we focus on cloud workload allocation problems for resource optimization subject to Quality of Service (QoS) guarantees and cybersecurity risk constraints. To facilitate our research, we first develop a cloud computing prototype that we utilize to empirically validate the performance of different proposed cloud resource management schemes in a close-to-practical, but also isolated and well-controlled, environment. We then focus our research on resource management policies for real-time cloud services with QoS guarantees. Based on a queuing model with reneging, we establish and formally prove a series of fundamental principles relating service timing characteristics to their resource demands, based on which we develop several novel resource management algorithms that statically guarantee the QoS requirements for cloud users. We then study the problem of mitigating cybersecurity risk and loss in cloud data centers via cloud resource management. We employ game theory to model the VM-to-VM interdependent cybersecurity risks in cloud clusters. We then conduct a thorough analysis based on our game-theoretic model and develop several algorithms for cybersecurity risk management. Specifically, we start our cybersecurity research from a simple case with only two types of VMs and then extend it to a more general case with an arbitrary number of VM types. Our extensive numerical and experimental results show that our proposed algorithms can significantly outperform existing methodologies for large-scale cloud data centers in terms of resource usage, cybersecurity loss, and computational effectiveness.
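    As a small illustration of the kind of queuing-with-reneging behaviour mentioned above (not the dissertation's model or algorithms), the sketch below simulates a single server where a job abandons the queue if its wait would exceed a patience deadline; the arrival, service and patience parameters are assumptions.

        # Sketch (Python): single FIFO server with impatient jobs that abandon
        # (renege) when their waiting time would exceed a patience deadline.
        # All rates are illustrative, not taken from the dissertation.
        import random

        random.seed(42)
        ARRIVAL_RATE, SERVICE_RATE, PATIENCE, N_JOBS = 0.9, 1.0, 2.0, 10_000

        t = 0.0               # current arrival time
        server_free_at = 0.0  # time at which the server next becomes idle
        served = reneged = 0
        for _ in range(N_JOBS):
            t += random.expovariate(ARRIVAL_RATE)
            wait = max(0.0, server_free_at - t)
            if wait > PATIENCE:
                reneged += 1  # job leaves before being served
                continue
            server_free_at = max(server_free_at, t) + random.expovariate(SERVICE_RATE)
            served += 1

        print(f"served={served}, reneged={reneged} ({reneged / N_JOBS:.1%})")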

    Environmental analysis for application layer networks

    The increasing interconnection of computers over the Internet has given rise to the vision of application layer networks. These comprise overlay networks, such as peer-to-peer networks and Grid infrastructures, built on the TCP/IP protocol. Their common characteristic is the redundant, distributed provision of, and access to, data, computation and application services, while hiding the heterogeneity of the underlying infrastructure from the user. This work examines the requirements that these networks place on economic allocation mechanisms. The analysis is carried out by means of a market analysis process for a central auction mechanism and a catallactic market.
    Keywords: Grid Computing
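    As a loose, assumption-heavy illustration of a centralised allocation step in the spirit of a central auction mechanism (not the mechanism analysed in this work), the sketch below grants a fixed pool of resource units to the highest per-unit bids; bidders, bids and capacity are hypothetical.

        # Sketch (Python): greedy central allocation of resource units to the highest bids.
        def allocate(bids, capacity):
            """bids: dict of bidder -> (units requested, price per unit)."""
            allocation = {}
            for bidder, (units, price) in sorted(bids.items(), key=lambda kv: -kv[1][1]):
                granted = min(units, capacity)
                if granted > 0:
                    allocation[bidder] = granted
                    capacity -= granted
            return allocation

        bids = {"peer_a": (4, 1.20), "peer_b": (3, 0.90), "peer_c": (5, 1.05)}
        print(allocate(bids, capacity=8))  # {'peer_a': 4, 'peer_c': 4}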

    Interactional and procedural practices in managing coopetitive tensions

    Purpose - The purpose of this paper is to explore interactional and procedural practices in managing tensions of coopetition (simultaneous collaboration and competition between firms). Design/methodology/approach - Through an in-depth literature review of prior research within the coopetition and strategy-as-practice fields, and by using two illustrative empirical examples, the authors develop a framework for preventing and managing coopetitive tensions through combinations of procedural and interactional practices. Findings - The authors identify tensions related to strategizing, task and resource allocation, as well as knowledge sharing. Furthermore, they demonstrate how these tensions can potentially be prevented, resolved and managed. Research limitations/implications - The findings show that the analysis of tensions in coopetition would benefit from a holistic, multilevel approach that recognizes practices that are interactional (i.e. face-to-face interactions) as well as procedural (i.e. organizational routines). Coopetitive tensions and their resolution are related to the use or neglect of both types of practices. Furthermore, interactional and procedural practices are mutually interdependent and can complement each other in tension management in various ways. Practical implications - The findings of this study shed light on the roles and activities of actual practitioners involved in coopetition, and show how their work and practices-in-use contribute to coopetition, related tensions and their resolution. Originality/value - By adopting the strategy-as-practice approach, this study generates valuable insights into the practices and tensions in coopetition, as well as illuminating the roles of the practitioners involved in managing coopetition relationships.