4,071 research outputs found

    Efficient data reliability management of cloud storage systems for big data applications

    Cloud service providers are consistently striving to provide efficient and reliable service for their clients' Big Data storage needs. Replication is a simple and flexible method of ensuring data reliability and availability. However, it is not an efficient solution for Big Data, which routinely scales to terabytes and petabytes. Hence erasure coding is gaining traction despite its shortcomings. Deploying erasure coding in cloud storage confronts several challenges, including encoding/decoding complexity, load balancing, excessive resource consumption during data repair, and read latency. This thesis addresses many of these challenges. Although data durability and availability should not be compromised for any reason, clients' requirements on read performance (access latency) may vary with the nature of the data and its access-pattern behaviour. Access latency is an important metric, and the acceptable latency range can be recorded in the client's SLA. Several proactive recovery methods for erasure codes are proposed in this research to reduce the resource consumption caused by recovery. A novel cache-based solution is also proposed to mitigate the access-latency issue of erasure coding.
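    The storage argument above can be made concrete with a small sketch: under n-way replication the overhead is n times the user data, while a (k + m) erasure code stores k data blocks plus m parity blocks for an overhead of (k + m)/k. The parameters below (3-way replication versus a 6 + 3 code) are illustrative examples, not figures from the thesis.

```python
def replication_overhead(copies: int) -> float:
    # Raw bytes stored per byte of user data under n-way replication.
    return float(copies)

def erasure_overhead(k: int, m: int) -> float:
    # Reed-Solomon style (k data + m parity): overhead is (k + m) / k,
    # while any m of the k + m blocks may be lost without data loss.
    return (k + m) / k

# 3-way replication tolerates 2 losses at 3x storage;
# a 6 + 3 code tolerates 3 losses at only 1.5x storage.
print(replication_overhead(3))   # 3.0
print(erasure_overhead(6, 3))    # 1.5
```

    The trade-off is visible immediately: the erasure code halves the storage bill while tolerating more failures, at the price of the encoding/decoding and repair costs the abstract discusses.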

    Mass Housing for New Moscow

    This thesis explores the high-rise housing development industry within Moscow's recently expanded borders, where a large volume of what is commonly understood to be an outdated Soviet-era product continues to be built. The work investigates why this mass housing typology continues to represent well over half of new housing construction in the country, and seeks to offer viable recommendations for improving the status quo in this particular context and marketplace. It is found that a high prevalence of prefabrication and strict solar penetration requirements greatly hinder, though they do not entirely preclude, the diversification of the housing stock. Despite strong evidence that the population prefers more Western models of low-rise housing, long-standing traditions continue to normalize the high-rise typology. With high demand for housing and limitations of the housing market, most homebuyers have no choice but to settle for this form of housing. Furthermore, the oligopolistic tendencies of the development industry result in a market with limited competition and low price elasticity of supply. In other words, demand for better housing, as well as ample capital from rising incomes and housing subsidies, results in limited improvement in the new housing stock. Without incentives, and with only obstacles to change, the development industry continues to build a substandard product. It is concluded that a profit motive is a prerequisite for any improvement and diversification of the new housing stock. Therefore, this thesis seeks to propose an alternate mass housing typology that better reflects the housing aspirations of the population, while being viable due to improved profitability and marketability.

    Global software alliances: the challenge of ‘standardization’

    Global Software Alliances (GSAs) are a relatively new organizational form that firms are increasingly adopting to meet their software development needs. These relationships are fraught with complexity given the temporal, spatial and cultural separation of the firm contracting out the software development work and the firm doing the development. In this paper, we focus on the challenge of standardization, which contributes significantly to this ongoing complexity. The nature of the standardization problem is elaborated, and the tensions associated with its implementation are analyzed. A key implication arising from the paper is the need to broaden the technical focus on standards that has existed in prior research, and to give increased emphasis to management practices. Latour’s idea of “circulating reference” is introduced to analyze the question of “what is lost, what is gained, and what remains invariant in the process of translation?”

    A framework for the dynamic management of Peer-to-Peer overlays

    Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, the AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an “insider’s” view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load-balancing and offer improved peer transfer performance.
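    As a rough illustration of the transit-traffic idea, the toy LRU cache below lets a provider-operated peer serve repeated requests for a popular piece locally instead of re-fetching it across AS boundaries. The class, counters and eviction policy are assumptions for illustration, not the caching strategies proposed in the thesis.

```python
from collections import OrderedDict

class AVPCache:
    """Toy LRU content cache, as an AVP-style peer might use to keep
    popular pieces inside the AS (names and policy are illustrative)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()      # piece_id -> bytes, in recency order
        self.local_hits = 0             # requests served without transit
        self.inter_as_fetches = 0       # costly cross-AS transfers

    def get(self, piece_id: str, fetch_remote) -> bytes:
        if piece_id in self.store:
            self.store.move_to_end(piece_id)   # refresh recency
            self.local_hits += 1
            return self.store[piece_id]
        data = fetch_remote(piece_id)          # expensive inter-AS transfer
        self.inter_as_fetches += 1
        self.store[piece_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used
        return data
```

    After the first fetch, every further request for the same popular piece is a local hit, which is exactly the inter-AS traffic reduction the evaluation measures.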

    Trade Reforms and Technological Accumulation: the Case of the Industrial Sector in Argentina during the 1990s

    The impacts of trade liberalisation on technological development are particularly important because of their dynamic long-term effects on the economy. The paper pursues a comprehensive approach to technological change that relies on drawing a contrast between visible changes in performance and decision-making processes that stem from a behavioural dimension. Based on the Argentinean Innovation Survey (1997), the paper justifies the importance of a joint determination of these two dimensions for analysing macro-micro links of technological change as the most adequate way of assessing the impact of major macro-policy change on technology. It is organised in three parts. The first part critically discusses the main theoretical arguments that relate trade liberalisation to technological accumulation. The second part claims that the ultimate impact of openness on technological performance is dependent on its incidence on the elements that guide firms' technological decisions. Therefore, a model for micro technological behaviour and trade liberalisation is developed in the light of the Schumpeterian literature and illustrated using techniques appropriate for non-parametric data. Part three emphasises the importance of macro behaviour. Based on empirical information for the Argentinean case it is claimed that the biological metaphor which states that an open market is sufficient to select the best performing firms is often invalid in the context of Argentinean macro behaviour during the 1990s. On the contrary, firms had higher probabilities of remaining in the market when they followed a survival attitude unrelated to productive activities, and this often hampered technological performance. Thus two distinct patterns emerged, one corresponding to technological performance and the other to economic performance.
    Keywords: trade liberalisation, macro-micro links, technological behaviour, efficiency, development, Argentina

    Fault tolerance distributed computing

    Issued as Funds expenditure reports [nos. 1-4], Quarterly progress reports [nos. 1-3], and Final report, Project no. G-36-63

    Run-time support for parallel object-oriented computing: the NIP lazy task creation technique and the NIP object-based software distributed shared memory

    PhD thesis. Advances in hardware technologies combined with decreased costs have started a trend towards massively parallel architectures that utilise commodity components. It is thought unreasonable to expect software developers to manage the high degree of parallelism that is made available by these architectures. This thesis argues that a new programming model is essential for the development of parallel applications and presents a model which embraces the notions of object-orientation and implicit identification of parallelism. The new model allows software engineers to concentrate on development issues, using the object-oriented paradigm, whilst being freed from the burden of explicitly managing parallel activity. To support the programming model, the semantics of an execution model are defined and implemented as part of a run-time support system for object-oriented parallel applications. Details of the novel techniques from the run-time system, in the areas of lazy task creation and object-based, distributed shared memory, are presented. The tasklet construct for representing potentially parallel computation is introduced and further developed by this thesis. Three caching techniques that take advantage of memory access patterns exhibited in object-oriented applications are explored. Finally, the performance characteristics of the introduced run-time techniques are analysed through a number of benchmark applications.
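    The tasklet idea behind lazy task creation can be sketched minimally: a computation is recorded as potentially parallel, promoted to a real thread only when spare parallelism is wanted, and otherwise evaluated inline on demand. The class name, method names and idle-worker heuristic below are illustrative assumptions, not the NIP implementation.

```python
import threading

class Tasklet:
    """Minimal sketch of lazy task creation: record a potentially
    parallel computation cheaply, and only pay thread-creation cost
    when there is spare parallelism to exploit."""

    def __init__(self, fn, *args):
        self.fn, self.args = fn, args
        self.result = None
        self._done = False
        self._thread = None

    def maybe_spawn(self, idle_workers: int):
        # Promote to a real thread only when an idle worker could steal it.
        if idle_workers > 0:
            self._thread = threading.Thread(target=self._run)
            self._thread.start()

    def _run(self):
        self.result = self.fn(*self.args)
        self._done = True

    def force(self):
        # Either join the stolen thread, or evaluate inline on demand.
        if self._thread is not None:
            self._thread.join()
        elif not self._done:
            self._run()
        return self.result
```

    The point of the laziness is that when the machine is already saturated, a tasklet costs only an object allocation rather than a thread, and the work is folded back into the parent's own execution.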

    Scalable Internet auctions

    Current Internet-based auction services rely, in general, on a centralised auction server; applications with large and geographically dispersed bidder client bases are thus supported in a centralised manner. Such an approach is fundamentally restrictive, as too many users can overload the server, making the whole auction process unresponsive. Further, such an architecture can be vulnerable to server failures if not equipped with sufficient redundancy. In addition, bidders who are closer to the server are likely to have relatively faster access to it than remote bidders, thereby gaining an unfair advantage. To overcome these shortcomings, this thesis investigates ways of enabling widely distributed, arbitrarily large numbers of auction servers to cooperate in conducting an auction. Allowing a bidder to register with any one of the auction servers and place bids there, coupled with periodic exchange of auction information between servers, forms the basis of the solution investigated to achieve scalability, responsiveness and fairness. Scalability and responsiveness are achieved since the total load is shared amongst many bidder servers; fairness is achieved since bidders are able to register with their local servers. The thesis presents the design and implementation of a hierarchically structured distributed Internet auction system. Protocols for inter-server cooperation are presented. Each server may be replicated locally to mask node failures. Performance evaluations of centralised and distributed configurations are performed to show the advantages of the distributed configuration over the centralised one.
    EThOS - Electronic Theses Online Service. Iranian Ministry of Science, Research and Technology; Isfahan University. United Kingdom.
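    The cooperation scheme described, local bid placement plus periodic exchange of auction information between servers, can be sketched as follows. The merge rule (take the maximum known bid) and all names are assumptions for illustration, not the thesis's actual inter-server protocol.

```python
class AuctionServer:
    """Toy model of cooperating auction servers: bids are accepted
    locally, and periodic exchanges spread the best bid so every
    server converges on the global maximum."""

    def __init__(self, name: str):
        self.name = name
        self.highest_bid = 0.0
        self.peers = []                 # other AuctionServer instances

    def place_bid(self, amount: float) -> bool:
        # A bid is accepted only if it beats the best bid known locally.
        if amount > self.highest_bid:
            self.highest_bid = amount
            return True
        return False

    def exchange(self):
        # Periodic push of local auction state; merging takes the maximum.
        for peer in self.peers:
            peer.highest_bid = max(peer.highest_bid, self.highest_bid)
```

    Between exchanges a server may briefly accept a bid that is below the global maximum; shrinking that window (and resolving such conflicts) is exactly what the thesis's inter-server protocols must handle.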