    MOON: MapReduce On Opportunistic eNvironments

    Abstract—MapReduce offers a flexible programming model for processing and generating large data sets on dedicated resources, where only a small fraction of such resources are ever unavailable at any given time. In contrast, when MapReduce is run on volunteer computing systems, which opportunistically harness idle desktop computers via frameworks like Condor, it results in poor performance due to the volatility of the resources, in particular, the high rate of node unavailability. Specifically, the data and task replication scheme adopted by existing MapReduce implementations is woefully inadequate for resources with high unavailability. To address this, we propose MOON, short for MapReduce On Opportunistic eNvironments. MOON extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms in order to offer reliable MapReduce services on a hybrid resource architecture, where volunteer computing systems are supplemented by a small set of dedicated nodes. The adaptive task and data scheduling algorithms in MOON distinguish between (1) different types of MapReduce data and (2) different types of node outages in order to strategically place tasks and data on both volatile and dedicated nodes. Our tests demonstrate that MOON can deliver a 3-fold performance improvement to Hadoop in volatile, volunteer computing environments.
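The abstract describes placing data on both volatile volunteer nodes and a small dedicated pool. As a minimal sketch of that idea (an illustration only, not MOON's published algorithm; the function name and data shapes are hypothetical), a placement policy might anchor critical blocks on a dedicated node and fill remaining replicas from the volunteer pool:

```python
# Hypothetical sketch of hybrid replica placement: critical data gets at
# least one copy on a dedicated node; volatile volunteer nodes absorb the
# remaining replicas. Not MOON's actual scheduling algorithm.

def place_replicas(block_is_critical, dedicated, volatile, replication=3):
    """Return the list of nodes chosen to host replicas of one data block."""
    targets = []
    if block_is_critical and dedicated:
        targets.append(dedicated[0])          # anchor copy on reliable storage
    # fill the remaining replicas from the (cheaper) volatile pool
    for node in volatile:
        if len(targets) >= replication:
            break
        targets.append(node)
    return targets

print(place_replicas(True, ["d1"], ["v1", "v2", "v3"]))
# → ['d1', 'v1', 'v2']
```

The point of the dedicated anchor copy is that a block surviving on at least one reliable node bounds the cost of volunteer-node churn.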

    The role of learning on industrial simulation design and analysis

    The capability of modeling real-world system operations has turned simulation into an indispensable problem-solving methodology for business system design and analysis. Today, simulation supports decisions ranging from sourcing to operations to finance, starting at the strategic level and proceeding towards tactical and operational levels of decision-making. In such a dynamic setting, the practice of simulation goes beyond being a static problem-solving exercise and requires integration with learning. This article discusses the role of learning in simulation design and analysis motivated by the needs of industrial problems and describes how selected tools of statistical learning can be utilized for this purpose.

    Linking quality management to manufacturing strategy: an empirical investigation of customer focus practices

    Quality management (QM) has often been advocated as being universally applicable to organizations. This is in contrast with the manufacturing strategy contingency approach of operations management (OM), which advocates internal and external consistency between manufacturing strategy choices. This article investigates, using the case-study method, whether customer focus practices—a distinctive subset of the whole set of QM practices—are contingent on a plant's manufacturing strategy context. The study strongly suggests that customer focus practices are contingent on a plant's manufacturing strategy and identifies mechanisms by which this takes place. The findings inform the implementation of QM programs.

    Toward Customizable Multi-tenant SaaS Applications

    Nowadays, computing is so pervasive that it has become indeed the 5th utility (after water, electricity, gas, and telephony), as Leonard Kleinrock once envisioned. Evolved from utility computing, cloud computing has emerged as a computing infrastructure that enables rapid delivery of computing resources as a utility in a dynamically scalable, virtualized manner. However, the current industrial cloud computing implementations promote segregation among different cloud providers, which leads to user lock-in because of prohibitive migration cost. On the other hand, Service-Oriented Computing (SOC), including service-oriented architecture (SOA) and Web Services (WS), promotes standardization and openness with its enabling standards and communication protocols. This thesis proposes a Service-Oriented Cloud Computing Architecture (SOCCA), combining the best attributes of the two paradigms to promote an open, interoperable environment for cloud computing development. Multi-tenant SaaS applications built on top of SOCCA have more flexibility and are not locked into a particular platform. Each tenant residing on a multi-tenant application appears to be the sole owner of the application and is unaware of the existence of others. A multi-tenant SaaS application accommodates each tenant's unique requirements by allowing tenant-level customization. A complex SaaS application that supports hundreds, even thousands, of tenants could have hundreds of customization points, each providing multiple options, and this could result in a huge number of ways to customize the application. This dissertation also proposes innovative customization approaches, which study similar tenants' customization choices and each individual user's behavior, then provide a guided, semi-automated customization process for future tenants. A semi-automated customization process could enable tenants to quickly implement the customization that best suits their business needs.
    Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201
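The guided customization described above can be sketched very simply: for each customization point, recommend the option most often chosen by similar tenants. This is a minimal illustration under assumed data shapes (the function name and the example tenant data are hypothetical, not from the dissertation):

```python
from collections import Counter

# Hypothetical sketch of guided tenant customization: for each
# customization point, suggest the option most frequently chosen by
# similar tenants. Illustration only; not the dissertation's method.

def recommend(similar_tenants_choices):
    """similar_tenants_choices: list of {point: option} dicts, one per tenant."""
    by_point = {}
    for choices in similar_tenants_choices:
        for point, option in choices.items():
            by_point.setdefault(point, Counter())[option] += 1
    # pick the modal option at every customization point
    return {point: counts.most_common(1)[0][0]
            for point, counts in by_point.items()}

prior = [{"theme": "dark", "invoice": "net30"},
         {"theme": "dark", "invoice": "net15"},
         {"theme": "light", "invoice": "net30"}]
print(recommend(prior))  # → {'theme': 'dark', 'invoice': 'net30'}
```

A real system would weight tenants by similarity rather than counting all prior tenants equally, but the majority-vote core is the same.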

    Building and Protecting vSphere Data Centers Using Site Recovery Manager (SRM)

    With the evolution of cloud computing technology, companies like Amazon, Microsoft, Google, SoftLayer, and Rackspace have started providing Infrastructure as a Service, Software as a Service, and Platform as a Service offerings to their customers. For these companies, providing a high degree of availability is as important as providing an overall great hosting service. Disasters are unpredictable, and the destruction they cause is often worse than expected. Sometimes they result in the loss of information, data, and records. A disaster can also make services inaccessible for a very long time if disaster recovery was not planned properly. This paper focuses on protecting a vSphere virtual datacenter using Site Recovery Manager (SRM). One study says 23% of companies close within one year after a disaster strikes. This paper also discusses how SRM can be a cost-effective disaster recovery solution compared to the other recovery solutions available. It also covers Recovery Point Objective (RPO) and Recovery Time Objective (RTO). SRM works with two different replication methodologies: vSphere Replication and array-based replication. These technologies are used by Site Recovery Manager to protect Tier-1, 2, and 3 applications. A recent study explains that traditional DR solutions often fail to meet business requirements because they are too expensive, complex, and unreliable. Organizations using Site Recovery Manager ensure highly predictable RTOs at a much lower cost and level of complexity. Site Recovery Manager can reduce operating overhead by 50% by replacing complex manual runbooks with simple, automated recovery plans that can be tested without disruption. For organizations with an RPO of 15 minutes or higher, vSphere Replication can eliminate up to $10,000 per TB of protected data compared with storage-based technologies. The combined solution can save over USD 1,100 per protected virtual machine per year. These calculations were validated by a third-party global research firm. Integration with Virtual SAN reduces the DR footprint through hyper-converged, software-defined storage that runs on any standard x86 platform. Virtual SAN can decrease the total cost of ownership for recovery storage by 50 percent.