
    Cost performance and risk in the construction of offshore and onshore wind farms

    This article investigates the risk of cost overruns and underruns occurring in the construction of 51 onshore and offshore wind farms commissioned between 2000 and 2015 in 13 countries. In total, these projects required about $39 billion in investment and reached about 11 GW of installed capacity. We use this original dataset to test six hypotheses about construction cost overruns related to (i) technological learning, (ii) fiscal control, (iii) economies of scale, (iv) configuration, (v) regulation and markets and (vi) manufacturing experience. We find that across the entire dataset, the mean cost escalation per project is 6.5%, or about $63 million per wind farm, although 20 projects within the sample (39%) did not exhibit cost overruns. The majority of onshore wind farms exhibit cost underruns, while for offshore wind farms the results have a larger spread. Interestingly, no significant relationship exists between the size of a wind farm (in total MW or per individual turbine capacity) and the severity of a cost overrun. Nonetheless, there is an indication that the risk increases for larger wind farms at greater distances offshore using new types of turbines and foundations. Overall, the mean cost escalation is 1.7% for onshore projects and 9.6% for offshore projects, amounts much lower than those for other energy infrastructure.
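    Cost escalation in studies of this kind is conventionally the percentage difference between actual and estimated construction cost, with a positive value indicating an overrun and a negative value an underrun. A minimal sketch of the metric (the budget figures below are hypothetical illustrations, not data from the study):

```python
def cost_escalation(estimated: float, actual: float) -> float:
    """Percentage cost escalation: positive = overrun, negative = underrun."""
    return 100.0 * (actual - estimated) / estimated

# Hypothetical project budgets in $ million (not the study's dataset).
projects = [(700.0, 745.0), (820.0, 800.0), (650.0, 710.0)]
escalations = [cost_escalation(est, act) for est, act in projects]
mean_escalation = sum(escalations) / len(escalations)
```

A sample-wide mean computed this way can be positive even when a sizeable minority of projects come in under budget, which is exactly the pattern the abstract reports.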

    What Belgium Can Teach Bosnia: The Uses of Autonomy in 'Divided House' States

    Belgium and Bosnia can be understood as “divided house” states: they contain proportionally similar groups with opposing views on whether the state should be more unitary or more decentralised. The Belgian example demonstrates that even where groups disagree on state structure, a mixture of various forms of group autonomy may facilitate stability and compromise within the state. Belgium addresses this dilemma in two ways: 1) non-territorial autonomous units in the form of the linguistic communities, and 2) exclusive competencies for different units within the diverse Belgian state. In Bosnia, the rights of minorities in different territorial units, as well as refugee returns to areas where they are minorities, might be improved by structures with non-territorial autonomy similar to the Belgian linguistic communities. As in Belgium, these non-territorial units might hold exclusive competencies for educational, linguistic, cultural, and religious matters, and enable greater political representation of minority individuals. In order to advocate working models for Bosnia, analysts should more carefully examine actual examples from states with similarly divided populations.

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing overall system performance and creating a synergy between the different functions in a system, achieved by replacing more and more dedicated, single-function hardware with software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms consisting of one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints, and showing that a component fulfils them requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool for making the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates) and it can guarantee access to other resources without the need for arbitration by other protocols. Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing.
To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After admission to a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the malign effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor running at a speed proportional to its reserved processor supply. Three effects disturb this view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues. Our solutions cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate a component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of whether the supply of processor time is continuous or discontinuous. Using our method, the component executes on a virtual platform, and the only difference it experiences is a processor speed that deviates from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting the deadline constraints of tasks on a uni-processor platform.
For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of that component, and we compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside them, regardless of their scheduling policy or the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol used in an HSF; components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms that confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for confinement of temporal faults, by means of experiments within an HSF-enabled real-time operating system.
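The discontinuous processor supply mentioned above is commonly captured in the HSF literature by the periodic resource model (Π, Θ): a component is guaranteed Θ units of processor time in every period of length Π, but the exact position of that supply within each period is not fixed. A minimal sketch of the resulting worst-case supply bound function, following Shin and Lee's standard formulation (this is illustrative background from the literature, not code from the thesis itself):

```python
import math

def sbf(theta: float, pi: float, t: float) -> float:
    """Worst-case supply bound function of the periodic resource model
    (Pi, Theta): the minimum processor time a component is guaranteed
    in any interval of length t. The worst case occurs when supply
    arrives at the start of one period and the end of the next, giving
    an initial gap of up to 2 * (Pi - Theta)."""
    if t < pi - theta:
        return 0.0
    t_shifted = t - (pi - theta)          # shift past the first gap
    k = math.floor(t_shifted / pi)        # whole periods of full supply
    return k * theta + max(0.0, t_shifted - k * pi - (pi - theta))
```

For example, with Π = 5 and Θ = 2, a component nominally owns 40% of the processor, yet `sbf(2, 5, 8)` yields only 2 time units over a window of length 8; schedulability analysis inside an HSF must use this pessimistic bound rather than the nominal share, which is precisely why a discontinuous supply "disturbs" the dedicated-virtual-processor view described above.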

    An Overview of the Performance of Public Infrastructure Megaprojects in Kenya

    The need for this study arose from the thesis that infrastructure megaprojects are delivered over budget, behind schedule, and with benefit shortfalls, over and over again. Many studies have reached this conclusion, but none has included Kenya, which is increasingly adopting megaprojects as a model for delivering public goods and services. Through this quantitative study, using a cross-sectional census survey design, the performance of 27 completed public infrastructure megaprojects was assessed using broader measures of project success. The findings agree that these projects are delivered over budget and behind schedule, but not with benefit shortfalls. They also confirm that process or project management success does not necessarily lead to product or organizational success. It is recommended that public infrastructure megaproject sponsors and implementers adopt project structures that allow for innovation through the use of advanced technology. Such structures should encourage competitive tendering and a preference for pain/gain contractual arrangements, to accommodate the differences in risk preferences between client and contractor and to minimize incidences of the agency problem among the various stakeholders.