
    Information Technology Architectures for Global Competitive Advantage


    Switches and mortar in the Internet's shadow: a study of the effects of technology on competitive strategy for the Internet's landlords

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2000. Includes bibliographical references (leaves 132-138).

    Communications technology has experienced a period of explosive growth, driven by a confluence of legal, political and technical factors, including the 1968 Carterfone and 1980s competitive carrier decisions, the 1984 divestiture of AT&T, the Telecommunications Act of 1996, the development and standardization of new technologies, and the proliferation of the Internet and World Wide Web. This thesis asks two fundamental questions: How has the rapid growth of the Internet and other communications technologies changed the competitive strategy of commercial tenants, and how have these changes affected commercial real estate developers? This study proposes that developers and landlords need to use more forward-looking theories of competitive strategy in order to understand the current and future real estate needs of technology-driven commercial tenants.

    Telecommunications deregulation and the growth of the Internet led to the creation of a new and rapidly growing high-technology industry and commercial tenancy. Deregulation and the Internet also transformed the way traditional commercial real estate uses information technology, encouraged partnerships between commercial real estate professionals and "last mile" information technology contractors, and resulted in the creation of a new commercial real estate product: the "telecom hotel". Current literature suggests traditional commercial tenants might differ from Internet-based business tenants in four general areas of the development process: feasibility, site selection, design and building operations. The proliferation of the Internet as a catalyst for new real estate products, commercial tenants and partnerships, and the observed differences in development practices between traditional and Internet-based tenants, are both clues to fundamental differences between these two tenants' competitive strategies.

    These clues to tenant behavior can be understood by taking an in-depth look at how these two tenants compete in their respective industries. Traditional commercial business tenants appear to conform to Michael Porter's theories on competitive strategy and advantage. High-tech tenants' competitive strategies seem to be more accurately reflected by Gary Hamel and C.K. Prahalad's model of competition for the future. These two theories, and the industries they represent, differ in four dimensions: future versus past/present orientation, technology use, rate of growth, and resource use. Comparing three case studies on these four strategic dimensions, this thesis concludes that Porter's more stable, efficiency-oriented model does explain the strategy of Northwestern Mutual, a large insurance organization, while Hamel and Prahalad's model better explains the hectic, high-growth, future orientation of Akamai and YankeeTek Incubator, as well as Teleplace, a telecom hotel service company. Hamel and Prahalad's and Porter's frameworks explain significant discrepancies between predicted development practices based on current industry thinking and observed development practices based on these in-depth case studies. This thesis thus verifies a need by real estate developers and landlords to use forward-looking theories of competitive strategy when examining the current and future needs of high-tech tenants.

    by Geoffrey Morgan and Benjamin V.A. Pettigrew. S.M.

    Sharing with Live Migration Energy Optimization Task Scheduler for Cloud Computing Datacentres

    The use of cloud computing is expanding, and it is becoming the driver for innovation in all companies seeking to serve their customers around the world. Much attention has recently been drawn to the huge amount of energy consumed within datacentres, while the energy consumption of the remaining cloud components has been neglected. Energy consumption should therefore be reduced in a way that minimizes performance losses, achieves the target battery lifetime, satisfies performance requirements, minimizes power consumption, minimizes CO2 emissions, maximizes profit, and maximizes resource utilization. Power consumption in cloud computing datacentres can be reduced in many ways, such as managing or utilizing the resources, controlling redundancy, relocating datacentres, improving applications, or dynamic voltage and frequency scaling. One of the most efficient ways to reduce power is to use a scheduling technique that finds the best task execution order based on users' demands, with the minimum execution time and cloud resources.

    Designing an effective and efficient task scheduling technique that meets user requirements is quite a challenge in a cloud environment. Scheduling is not an easy task because the datacentre contains dissimilar hardware with different capacities; to improve resource utilization, an efficient scheduling algorithm must be applied to the incoming tasks to achieve efficient computing resource allocation and power optimization. The scheduler must maintain the balance between Quality of Service and fairness among the jobs so that efficiency may be increased. The aim of this project is to propose a novel method for optimizing energy usage in cloud computing environments that satisfies Quality of Service (QoS) requirements and the regulations of the Service Level Agreement (SLA). Applying a power- and resource-optimised scheduling algorithm will help control and improve the mapping between the datacentre servers and the incoming tasks, achieving the optimal deployment of datacentre resources for good computing efficiency, network load minimization and reduced energy consumption in the datacentre.

    This thesis explores energy-aware cloud computing datacentre structures with diverse scheduling heuristics and proposes a novel job scheduling technique with sharing and live migration based on file locality (SLM), aiming to maximize efficiency and save the power consumed in the datacentre through bandwidth usage utilization, minimizing the processing time and the system's total makespan. The proposed SLM energy-efficient scheduling strategy has four basic algorithms: 1) job classifier, 2) SLM job scheduler, 3) dual-fold VM virtualization, and 4) VM threshold margins and consolidation. The SLM job classifier categorises the incoming set of user requests to the datacentre into two different queues based on request type and the source file needed to process them; the processing time of each job fluctuates with the job type and the number of instructions in the job. The second algorithm, the SLM scheduler, dispatches jobs from both queues according to job arrival time and controls their allocation to the most appropriate and available VM based on job similarity, according to a predefined synchronized job characteristic table (SJC).

    The SLM scheduler uses a replicated host infrastructure to save the energy wasted by idle hosts: it maximizes the utilization of the basic hosts, as long as the system can handle the workflow, while keeping the replicated hosts switched off. The third SLM algorithm, the dual-fold VM algorithm, divides the active VMs into top-level and low-level slots to allocate similar jobs concurrently, which maximizes host utilization at high workload and reduces the total makespan. The VM threshold margins and consolidation algorithm sets upper and lower threshold margins as triggers for VM consolidation and load balancing among running VMs, and deploys a continuous detection scheme for overloaded and underutilized VMs to maintain and control the system's workload balance. Consolidation and load balancing are achieved by performing a series of dynamic live migrations, which provide auto-scaling for the servers within the datacentres.

    This thesis begins with an overview of cloud computing, then previews conceptual cloud resource management strategies with a classification of scheduling heuristics. Following this, a comparative analysis of energy-efficient scheduling algorithms and related work is presented. The novel SLM algorithm is proposed and evaluated using the CloudSim toolkit under a number of scenarios; compared to the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms, the results show a significant improvement in energy usage and in total makespan, the total time needed to finish processing all the tasks.
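
    The abstract only names the four SLM stages, so the following is a minimal Python sketch of two of them: the classifier's two-queue split and the threshold-margin trigger for consolidation. All names, queue labels and margin values are hypothetical illustrations under assumptions, not the thesis's actual CloudSim implementation.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    request_type: str   # hypothetical labels: "compute" or "data"
    source_file: str    # file locality drives queue and VM placement
    arrival_time: float

class SLMJobClassifier:
    """Stage 1 sketch: split incoming requests into two queues by type."""
    def __init__(self) -> None:
        self.queues: dict[str, deque] = {"compute": deque(), "data": deque()}

    def classify(self, job: Job) -> None:
        # Jobs needing the same source file land in the same queue,
        # so the scheduler can later co-locate them on a single VM.
        self.queues[job.request_type].append(job)

# Stage 4 sketch: margins that trigger consolidation / load balancing.
UPPER_MARGIN = 0.80   # assumed values; the abstract gives no numbers
LOWER_MARGIN = 0.20

def consolidation_action(vm_utilization: float) -> str:
    """Map a VM's utilization to a live-migration decision."""
    if vm_utilization > UPPER_MARGIN:
        return "migrate-out"   # shed jobs from the overloaded VM
    if vm_utilization < LOWER_MARGIN:
        return "consolidate"   # move jobs away, switch the host off
    return "steady"            # within margins: leave the VM alone

# Example: classify two jobs and check an underutilized VM.
clf = SLMJobClassifier()
clf.classify(Job(1, "compute", "input_a.dat", 0.0))
clf.classify(Job(2, "data", "input_a.dat", 0.5))
print(len(clf.queues["compute"]), consolidation_action(0.15))  # 1 consolidate
```

    In a fuller model, the "migrate-out" and "consolidate" decisions would feed the live-migration step that the abstract describes as providing auto-scaling within the datacentre.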

    Driving and Inhibiting Factors in the Adoption of Open Source Software in Organisations

    The aim of this research is to investigate the extent to which Open Source Software (OSS) adoption behaviour can empirically be shown to be governed by a set of self-reported (driving and inhibiting) salient beliefs of key informants in a sample of organisations. Traditional IS adoption/usage theory, methodology and practice are drawn on, and then augmented with theoretical constructs derived from IT governance and organisational diagnostics, to propose an artefact that aids the understanding of organisational OSS adoption behaviour, stimulates debate and aids operational management interventions. The research combines a quantitative method (Fisher's Exact Test) with a complementary qualitative method (content analysis), using self-selection sampling, and combines data and methods to establish a set of mixed-methods results (or meta-inferences). From a dataset of 32 completed questionnaires in the pilot study and 45 in the main study, a relatively parsimonious set of statistically significant driving and inhibiting factors was established (at confidence levels ranging from 95% to 99.5%) for a variety of organisational OSS adoption behaviours (i.e. by year, by software category and by stage of adoption). In addition, the combined quantitative and qualitative data yielded a number of factors limited to a relatively small number of organisational OSS adoption behaviours. The findings of this research are that a relatively small set of driving and inhibiting salient beliefs (e.g. Security, Perpetuity, Unsustainable Business Model, Second Best Perception, Colleagues in IT Dept., Ease of Implementation and Organisation is an Active User) proved very accurate in predicting certain organisational OSS adoption behaviours (e.g. self-reported Intention to Adopt OSS in 2014) via binomial logistic regression analysis.
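
    As a sketch of the quantitative step described above, the snippet below applies Fisher's Exact Test (via SciPy) to a hypothetical 2x2 table of belief vs. adoption. The counts and the belief chosen are invented for illustration and are not drawn from the study's data.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (counts invented for illustration):
# rows    = key informant reported the "Security" belief (yes / no)
# columns = organisation adopted OSS (yes / no)
table = [[18,  4],
         [ 9, 14]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")

# A p-value below 0.05 (or 0.005) would correspond to the study's
# 95%-99.5% confidence range for flagging a belief as a significant
# driving or inhibiting factor.
```

    The study's predictive step would then feed such significant beliefs as predictors into a binomial logistic regression of adoption intention.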

    The emergence and management of an inter-organizationally networked IS development industry: An exploratory case study

    The IS development industry is currently undergoing a fundamental change towards a more inter-organizational structure, in which networks of smaller companies are expected to nest around existing, large development firms. In this context, this study addresses fundamental research questions regarding both the motivation for this development and how the new structure can be managed, from both perspectives: that of the large hubs and that of the smaller spokes. Relying on various economic and management theories, different factors are elaborated that are expected to play an important role in answering these questions. Testing this theoretical framework in ten case studies (two hubs and eight spokes), a comprehensive model of inter-organizational cooperation in IS development is developed. In this model, the motivational and management factors can be shown to interact closely over the life of a partnership between hub and spoke. The innovative product developed by a spoke, in combination with an existing platform of the hub, gives both a better market reach. However, as IS development is probably one of the most dynamic industries in the world, successful partnerships do not necessarily last long. While some do, others come quickly to an end, either through acquisition of the spoke or through imitation of its solution by the hub. The perceived ideal way for the spokes to avoid this fate is the development of new innovations, in which case the partnership process starts anew. This constant pressure can be considered one of the integral parts of the newly emerging networked structure and of its ability to generate even more innovative products at a faster pace than was possible in the old industry structure. This study is the first to offer a model that explains exactly these dynamics within the IS development industry.