5,442 research outputs found

    Digital Advertising and News: Who Advertises on News Sites and How Much Those Ads Are Targeted

    Get PDF
    Analyzes advertising trends across twenty-two news operations, including shifts to digital advertising, the use of consumer data to target ads, the types of ads, and the industries represented among advertisers, by media type

    Performance Controlled Power Optimization for Virtualized Internet Datacenters

    Get PDF
    Modern data centers must provide performance assurance for complex system software such as web applications. In addition, the power consumption of data centers needs to be minimized to reduce operating costs and avoid system overheating. In recent years, more and more data centers have started to adopt server virtualization for resource sharing, reducing hardware and operating costs by consolidating applications that previously ran on multiple physical servers onto a single physical server. In this dissertation, several power-efficient algorithms are proposed to effectively reduce server power consumption while achieving the required application-level performance for virtualized servers. First, at the server level, this dissertation proposes two control solutions based on dynamic voltage and frequency scaling (DVFS) and request batching. The two solutions share a performance balancing technique that keeps all virtual machines at approximately the same performance level relative to their allowed peak values. When the workload intensity is light, request batching is used: a controller determines the interval for periodically batching incoming requests and putting the processor into sleep mode. When the workload intensity changes from light to moderate, the solution automatically switches from request batching to DVFS, increasing the processor frequency to guarantee performance. Second, at the data center level, this dissertation proposes a performance-controlled power optimization solution for virtualized server clusters running multi-tier applications. The solution utilizes both DVFS and server consolidation for maximized power savings by integrating feedback control with optimization strategies. At the application level, a multi-input-multi-output controller is designed to achieve the desired performance for applications spanning multiple VMs on a short time scale, by reallocating CPU resources and applying DVFS. At the cluster level, a power optimizer is proposed to incrementally consolidate VMs onto the most power-efficient servers on a longer time scale. Finally, this dissertation proposes a VM scheduling algorithm that exploits core performance heterogeneity to optimize overall system energy efficiency. The four algorithms at the three levels are demonstrated with empirical results on hardware testbeds and trace-driven simulations and compared against state-of-the-art baselines
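
    The server-level switch between request batching (light load) and DVFS (moderate load) can be pictured with a minimal control-loop sketch; the thresholds, frequency levels, latency target, and measurement hooks below are illustrative assumptions rather than the dissertation's actual controller design.

```python
# Minimal sketch of a server-level controller that switches between
# request batching (light load) and DVFS (moderate/heavy load).
# Thresholds, frequency levels, and measurement hooks are assumptions
# for illustration only.

LIGHT_LOAD_RPS = 50                  # requests/s below which batching is used (assumed)
FREQ_LEVELS = [1.2, 1.8, 2.4, 3.0]   # available CPU frequencies in GHz (assumed)
TARGET_RESPONSE_MS = 100             # application-level response-time target (assumed)

def control_step(arrival_rate_rps, measured_response_ms, current_freq_idx):
    """One control period: choose a batching interval or a DVFS level."""
    if arrival_rate_rps < LIGHT_LOAD_RPS:
        # Light load: batch requests and let the processor sleep between batches.
        # The batch window shrinks as measured latency approaches the target.
        slack = max(TARGET_RESPONSE_MS - measured_response_ms, 0)
        batch_window_ms = min(slack, 50)  # cap on the sleep window (assumed)
        return ("batching", batch_window_ms, current_freq_idx)

    # Moderate load: switch to DVFS and step the frequency to track the target.
    if measured_response_ms > TARGET_RESPONSE_MS and current_freq_idx < len(FREQ_LEVELS) - 1:
        current_freq_idx += 1   # performance violated: raise frequency
    elif measured_response_ms < 0.8 * TARGET_RESPONSE_MS and current_freq_idx > 0:
        current_freq_idx -= 1   # ample slack: lower frequency to save power
    return ("dvfs", FREQ_LEVELS[current_freq_idx], current_freq_idx)

# Example: light load keeps the server in batching mode, while a latency
# violation under moderate load raises the frequency by one step.
print(control_step(arrival_rate_rps=20, measured_response_ms=60, current_freq_idx=1))
print(control_step(arrival_rate_rps=200, measured_response_ms=130, current_freq_idx=1))
```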

    Content-aware resource allocation model for IPTV delivery networks

    Get PDF
    Nowadays, with the evolution of digital video broadcasting and the advent of high-speed broadband networks, a new era of TV services has emerged, known as IPTV. IPTV is a system that employs high-speed broadband networks to deliver TV services to subscribers. From the service provider's viewpoint, the challenge in IPTV systems is how to build delivery networks that exploit resources efficiently while also reducing the service cost. Designing such delivery networks is affected by many factors, including the choice of a suitable network architecture, load balancing, resource waste, and cost reduction. Furthermore, the characteristics of IPTV content, particularly size, popularity, and interactivity, play an important role in balancing the load and avoiding resource waste in delivery networks. In this paper, we investigate the problem of resource allocation for IPTV delivery networks over a recent architecture, the peer-service area architecture. A Genetic Algorithm is used as an optimization tool to find the optimal provisioning parameters, including storage, bandwidth, and CPU consumption. The experiments were conducted on two data sets with different popularity distributions. The experimental results show the impact of content status on the resource allocation process
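
    A rough sketch of the Genetic Algorithm step can illustrate how provisioning assignments might be evolved; the chromosome encoding (one service area per content item), the toy fitness function, and all capacity and popularity numbers below are assumptions for illustration, not the paper's actual model, which also provisions bandwidth and CPU.

```python
# Toy genetic algorithm for assigning IPTV content items to service-area servers.
# The cost model, capacities, and popularity values are invented for illustration.

import random

N_CONTENTS, N_AREAS = 12, 3
random.seed(1)
popularity = [random.random() for _ in range(N_CONTENTS)]   # assumed demand weights
size = [random.randint(1, 5) for _ in range(N_CONTENTS)]    # assumed storage units
CAPACITY = 25                                               # assumed per-area storage

def fitness(chrom):
    """Lower is better: penalize overloaded areas and imbalanced demand."""
    load = [0.0] * N_AREAS
    store = [0] * N_AREAS
    for c, area in enumerate(chrom):
        load[area] += popularity[c]
        store[area] += size[c]
    overload = sum(max(s - CAPACITY, 0) for s in store)
    imbalance = max(load) - min(load)
    return imbalance + 10 * overload

def evolve(pop_size=40, generations=200, mutation=0.1):
    pop = [[random.randrange(N_AREAS) for _ in range(N_CONTENTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_CONTENTS)             # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:                    # random reassignment
                child[random.randrange(N_CONTENTS)] = random.randrange(N_AREAS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("best assignment:", best, "fitness:", round(fitness(best), 3))
```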

    Energy Efficient Big Data Networks: Impact of Volume and Variety

    Get PDF
    In this article, we study the impact of big data’s volume and variety dimensions on Energy Efficient Big Data Networks (EEBDN) by developing a Mixed Integer Linear Programming (MILP) model that encapsulates the distinctive features of these two dimensions. Firstly, a progressive, energy-efficient edge, intermediate, and central processing technique is proposed to process big data’s raw traffic by building processing nodes (PNs) in the network along the way from the sources to the data centers. Secondly, we validate the MILP by developing a heuristic that mimics, in real time, the behaviour of the MILP for the volume dimension. Thirdly, we test the energy efficiency limits of our green approach under several conditions where PNs are less energy efficient, in terms of processing and communication, than data centers. Fourthly, we test the performance limits of our energy-efficient approach by studying a “software matching” problem where different software packages are required to process big data. The results are then compared to the Classical Big Data Networks (CBDN) approach, where big data is processed only inside centralized data centers. Our results reveal that up to 52% and 47% power savings can be achieved by the EEBDN approach compared to the CBDN approach under the volume and variety scenarios, respectively. Moreover, our results identify the limits of the progressive processing approach, in particular the conditions under which the centralized CBDN approach is more appropriate given certain PN energy efficiency and software availability levels
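
    The intuition behind progressive processing versus centralized processing can be sketched with a toy per-flow energy comparison; the energy figures, volume reduction ratio, and decision rule below are illustrative assumptions, not the MILP's actual formulation.

```python
# Rough sketch of the "progressive processing" idea: each unit of raw big-data
# traffic is either reduced at an intermediate processing node (PN) near the
# source or shipped to a central data center. Energy figures and the volume
# reduction ratio are illustrative assumptions, not the MILP's parameters.

PN_PROCESSING_J_PER_GB = 30.0    # assumed: the PN is less efficient than the DC
DC_PROCESSING_J_PER_GB = 20.0
NETWORK_J_PER_GB_HOP = 5.0
REDUCTION_RATIO = 0.1            # assumed: processing shrinks the volume to 10%

def best_placement(volume_gb, hops_to_dc):
    """Pick the placement with the lower total (processing + transport) energy."""
    # Option A: process at the edge PN, then ship only the reduced output.
    edge = (volume_gb * PN_PROCESSING_J_PER_GB
            + volume_gb * REDUCTION_RATIO * NETWORK_J_PER_GB_HOP * hops_to_dc)
    # Option B: ship the raw traffic all the way and process centrally (CBDN-style).
    central = (volume_gb * NETWORK_J_PER_GB_HOP * hops_to_dc
               + volume_gb * DC_PROCESSING_J_PER_GB)
    return ("edge PN", edge) if edge < central else ("central DC", central)

# With these numbers, short paths favour central processing while longer paths
# favour the edge PN despite its lower processing efficiency.
for hops in (1, 3, 6):
    print(hops, "hops ->", best_placement(volume_gb=100, hops_to_dc=hops))
```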

    Multilingual Cyberbullying Detection System

    Get PDF
    Indiana University-Purdue University Indianapolis (IUPUI). As the use of social media has evolved, the ability of its users to bully others has increased. One prevalent form is cyberbullying, which occurs on social media sites such as Facebook©, WhatsApp©, and Twitter©. The past decade has witnessed a growth in cyberbullying, a form of bullying that occurs virtually through electronic devices, for example via messaging, e-mail, online gaming, social media, or images and messages sent to a mobile phone. Cyberbullying is not limited to the English language and occurs in other languages as well. Hence, it is of the utmost importance to detect cyberbullying in multiple languages. Since current approaches to identifying cyberbullying focus mostly on English-language texts, this thesis proposes a new approach (called the Multilingual Cyberbullying Detection System) for the detection of cyberbullying in multiple languages (English, Hindi, and Marathi). It uses two techniques, Machine Learning-based and Lexicon-based, to classify the input data as bullying or non-bullying. The aim of this research is not only to detect cyberbullying but also to provide a distributed infrastructure for detecting bullying. We developed multiple prototypes (standalone, collaborative, and cloud-based) and carried out experiments with them to detect cyberbullying on datasets in multiple languages. The outcomes of our experiments show that the machine-learning model outperforms the lexicon-based model in all the languages. In addition, the results of our experiments show that collaboration techniques can help improve the accuracy of a poor-performing node in the system. Finally, we show that the cloud-based configurations performed better than the local configurations
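
    A minimal sketch of the two detection routes (lexicon-based and machine-learning-based) is given below; the toy lexicon, training sentences, and choice of a TF-IDF + logistic regression model are assumptions for illustration and do not reflect the thesis's actual datasets or models.

```python
# Minimal sketch of the two classification routes described in the thesis:
# a lexicon-based check and a machine-learning classifier. The tiny training
# set, lexicon, and model choice are assumptions for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

BULLY_LEXICON = {"stupid", "loser", "ugly"}      # toy lexicon stub (assumed)

def lexicon_predict(text):
    """Flag a message as bullying if it contains any lexicon term."""
    return int(any(word in BULLY_LEXICON for word in text.lower().split()))

# Toy labeled data: 1 = bullying, 0 = non-bullying.
texts = ["you are a stupid loser", "see you at practice tomorrow",
         "nobody likes you, ugly", "great job on the assignment"]
labels = [1, 0, 1, 0]

ml_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
ml_model.fit(texts, labels)

for msg in ["you stupid idiot", "let's meet after class"]:
    print(msg, "| lexicon:", lexicon_predict(msg),
          "| ml:", int(ml_model.predict([msg])[0]))
```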

    Tribes & Cultures – Cross-disciplinary Communication: Pinpointing the Issues for eLearning

    Get PDF
    Effective communication and co-operation across disciplines are needed to create and deploy eLearning systems so that they contribute to enhanced outcomes for students and teachers. Using a Grounded Theory methodology, we probed the cultures of the participating tribes: the Educationalists, the Instructional Designers, and the Information Technology Specialists. Six salient themes emerged from the semi-structured interview data of respondents selected from the three tribes, each of which is described in detail in this article. These themes give rise to Six Rules of Thumb to help promote fruitful communication and interaction among the tribes and cultures of eLearning system stakeholders, and thus result in improved eLearning systems

    Spartan Daily March 1, 2011

    Get PDF
    Volume 136, Issue 18
    https://scholarworks.sjsu.edu/spartandaily/1125/thumbnail.jp

    Impliance: A Next Generation Information Management Appliance

    Full text link
    Although the database industry has been remarkably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems? In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements, namely: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data - unstructured as well as structured - in a uniform way; (c) achieving scale-out by exploiting simple, massively parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises.
    Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works, and make commercial use of the work, but you must attribute the work to the author and CIDR 2007, 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA
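
    The scale-out idea in (c), fanning a query out across many partitions and merging the partial results, can be sketched briefly; the partitioning, the toy "query", and the use of Python multiprocessing are illustrative assumptions and not Impliance's actual implementation, which is an integrated hardware/software appliance.

```python
# Small sketch of scale-out via simple, massively parallel processing:
# a query is fanned out to independent partitions and the partial results
# are merged. The partitions and the "query" are invented for illustration.

from multiprocessing import Pool

PARTITIONS = [
    ["error: disk failure", "user logged in", "error: timeout"],
    ["checkout completed", "error: payment declined"],
    ["user logged out", "cache warmed"],
]

def scan_partition(records):
    """Each worker scans its own partition for matching records."""
    return [r for r in records if r.startswith("error")]

if __name__ == "__main__":
    with Pool(processes=len(PARTITIONS)) as pool:
        partials = pool.map(scan_partition, PARTITIONS)   # scatter
    matches = [r for part in partials for r in part]      # gather/merge
    print(matches)
```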