
    Virtual Cluster Management for Analysis of Geographically Distributed and Immovable Data

    Thesis (Ph.D.) - Indiana University, Informatics and Computing, 2015. Scenarios exist in the era of Big Data where computational analysis needs to utilize widely distributed and remote compute clusters, especially when the data sources are sensitive or extremely large and thus unable to move. A large dataset in Malaysia could be ecologically sensitive, for instance, and unable to be moved outside the country's boundaries. Controlling an analysis experiment in this virtual cluster setting can be difficult on multiple levels: setup and control, managing the behavior of the virtual cluster, and interoperability issues across the compute clusters. Further, datasets can be distributed among clusters, or even across data centers, so it becomes critical to utilize data locality information to optimize the performance of data-intensive jobs. Finally, datasets are increasingly sensitive and tied to certain administrative boundaries, though once the data has been processed, the aggregated or statistical result can be shared across those boundaries. This dissertation addresses the management and control of a widely distributed virtual cluster holding sensitive or otherwise immovable data sets through a controller. The Virtual Cluster Controller (VCC) gives control back to the researcher. It creates virtual clusters across multiple cloud platforms and, in recognition of sensitive data, can establish a single network overlay over widely distributed clusters. We define a novel class of data, notably immovable data that we call "pinned data", where the data is treated as a first-class citizen instead of being moved to where it is needed. We draw from our earlier work with a hierarchical data processing model, Hierarchical MapReduce (HMR), to process geographically distributed data, some of which are pinned data. Applications implemented in HMR use an extended MapReduce model in which computations are expressed as three functions: Map, Reduce, and GlobalReduce. Further, by facilitating information sharing among resources, applications, and data, the overall performance is improved. Experimental results show that the overhead of the VCC is minimal, and that HMR outperforms the traditional MapReduce model for a particular class of applications. The evaluations also show that information sharing between resources and applications through the VCC shortens the hierarchical data processing time while satisfying the constraints on the pinned data.
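    A minimal in-process sketch of the extended MapReduce model described above (Map, Reduce, and GlobalReduce), assuming a simple word-count-style workload; the type and function names are illustrative and not the actual HMR API. Each simulated "cluster" stands in for one geographically pinned data partition, and only its small aggregated result crosses the administrative boundary to the GlobalReduce step.

```cpp
// Illustrative sketch of the three-function extended MapReduce model
// (Map, Reduce, GlobalReduce); names and types are hypothetical, not HMR's API.
#include <iostream>
#include <map>
#include <string>
#include <vector>

using KV = std::map<std::string, long>;

// Map: emit (word, 1) for every token in a local, immovable partition.
KV mapPhase(const std::vector<std::string>& records) {
    KV out;
    for (const auto& word : records) out[word] += 1;
    return out;
}

// Reduce: aggregate within a single cluster; only this small intermediate
// result ever leaves the cluster holding the pinned data.
KV reducePhase(const KV& mapped) { return mapped; }

// GlobalReduce: merge the per-cluster aggregates at the top of the hierarchy.
KV globalReduce(const std::vector<KV>& partials) {
    KV out;
    for (const auto& p : partials)
        for (const auto& [word, count] : p) out[word] += count;
    return out;
}

int main() {
    // Two pinned partitions that cannot be co-located.
    std::vector<std::vector<std::string>> clusters = {
        {"data", "cloud", "data"}, {"cloud", "cluster"}};

    std::vector<KV> partials;
    for (const auto& local : clusters)
        partials.push_back(reducePhase(mapPhase(local)));

    for (const auto& [word, count] : globalReduce(partials))
        std::cout << word << ": " << count << "\n";
}
```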

    Infrared Multitouch Interface

    Touch-based computing (also known as a Natural User Interface) is a rapidly growing trend. While touch-based screens have been around for several years, they have been limited to a single touch at a time. Recently, support has been building for a new type of system that allows multiple touches at the same time. This allows more than one person to interact with the screen simultaneously, as well as support for gesture commands such as rotate and scale. Currently, the majority of these systems are based on a projector and a camera, which requires a large area to house the enclosure. With the recent decrease in costs, flat-panel displays have become very affordable. The goal of this project is to design a multi-touch system that works with an existing flat-panel display.
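    To illustrate the kind of gesture commands mentioned above, the following sketch (not taken from the project itself) derives scale and rotation parameters from a pair of simultaneous touch points; the structure and function names are hypothetical.

```cpp
// Hypothetical sketch: turning two simultaneous touch points into
// scale and rotate gesture parameters.
#include <cmath>
#include <iostream>

struct Touch { double x, y; };

// Computes the scale factor and rotation (radians) implied by moving a pair
// of touches from (a0, b0) to (a1, b1).
void twoFingerGesture(Touch a0, Touch b0, Touch a1, Touch b1,
                      double& scale, double& rotation) {
    double dx0 = b0.x - a0.x, dy0 = b0.y - a0.y;   // initial finger separation
    double dx1 = b1.x - a1.x, dy1 = b1.y - a1.y;   // current finger separation
    scale = std::hypot(dx1, dy1) / std::hypot(dx0, dy0);
    rotation = std::atan2(dy1, dx1) - std::atan2(dy0, dx0);
}

int main() {
    const double pi = std::acos(-1.0);
    double s, r;
    twoFingerGesture({0, 0}, {100, 0}, {0, 0}, {0, 150}, s, r);
    std::cout << "scale " << s << ", rotation " << r * 180.0 / pi << " deg\n";
}
```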

    Evaluation of the parallel computational capabilities of embedded platforms for critical systems

    Modern critical systems need higher performance, which cannot be delivered by the simple architectures used so far. The latest embedded architectures feature multi-core CPUs and GPUs, which can be used to satisfy this need. In this thesis we parallelise relevant applications from multiple critical domains represented in the GPU4S benchmark suite, and perform a comparison of the parallel capabilities of candidate platforms for use in critical systems. In particular, we port the open-source GPU4S Bench benchmarking suite to the OpenMP programming model, and we benchmark the candidate embedded heterogeneous multi-core platforms of the H2020 UP2DATE project, NVIDIA TX2, NVIDIA Xavier and Xilinx Zynq Ultrascale+, in order to drive the selection of the research platform which will be used in the next phases of the project. Our results indicate that in terms of both CPU and GPU performance, the NVIDIA Xavier is the highest-performing platform.
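    As an illustration of the kind of OpenMP port described above, the following sketch parallelises a dense matrix-multiply kernel, a typical benchmark building block, across CPU cores with a parallel-for directive; the actual GPU4S Bench kernels and interfaces may differ.

```cpp
// Illustrative OpenMP parallelisation of a dense matrix multiply
// (not the actual GPU4S Bench code). Build with -fopenmp.
#include <omp.h>
#include <cstdio>
#include <vector>

void matmul(const std::vector<float>& A, const std::vector<float>& B,
            std::vector<float>& C, int n) {
    // Distribute rows of the output matrix across the available cores.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            float acc = 0.0f;
            for (int k = 0; k < n; ++k) acc += A[i * n + k] * B[k * n + j];
            C[i * n + j] = acc;
        }
}

int main() {
    const int n = 512;
    std::vector<float> A(n * n, 1.0f), B(n * n, 1.0f), C(n * n);
    double t0 = omp_get_wtime();
    matmul(A, B, C, n);
    std::printf("%d threads, %.3f s, C[0]=%.0f\n",
                omp_get_max_threads(), omp_get_wtime() - t0, C[0]);
}
```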

    A survey of Virtual Private LAN Services (VPLS): Past, present and future

    Virtual Private LAN Service (VPLS) is a Layer 2 Virtual Private Network (L2VPN) service that has gained immense popularity due to a number of its features, such as protocol independence, multipoint-to-multipoint mesh connectivity, robust security, low operational cost (in terms of optimal resource utilization), and high scalability. In addition to the traditional VPLS architectures, novel VPLS solutions have been designed leveraging new emerging paradigms, such as Software Defined Networking (SDN) and Network Function Virtualization (NFV), to keep up with increasing demand. These emerging solutions help in enhancing scalability, strengthening security, and optimizing resource utilization. This paper conducts an in-depth survey of various VPLS architectures and highlights their different characteristics through insightful comparisons. Moreover, the article discusses numerous technical aspects such as security, scalability, compatibility, tunnel management, operational issues, and complexity, along with the lessons learned. Finally, the paper outlines future research directions related to VPLS. To the best of our knowledge, this paper is the first to furnish a detailed survey of VPLS. Funding: University College Dublin; Academy of Finland.

    Splotch: porting and optimizing for the Xeon Phi

    With the increasing size and complexity of data produced by large-scale numerical simulations, it is of primary importance for scientists to be able to exploit all available hardware in heterogeneous High Performance Computing environments for increased throughput and efficiency. We focus on porting and optimizing Splotch, a scalable visualization algorithm, to utilize the Xeon Phi, Intel's coprocessor based upon the new Many Integrated Core architecture. We discuss the steps taken to offload data to the coprocessor, along with algorithmic modifications to aid faster processing on the many-core architecture and to make use of the uniquely wide vector capabilities of the device, with accompanying performance results using multiple Xeon Phi coprocessors. Finally, performance is compared against results achieved with the GPU implementation of Splotch.
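    The following hedged sketch shows the general offload-and-vectorise pattern discussed above using standard OpenMP target and simd directives: input arrays are mapped to the device, the loop runs there, and results are mapped back. It is illustrative only; the actual Splotch port uses its own data structures, kernels, and offload mechanism.

```cpp
// Illustrative offload-and-vectorise pattern (not the Splotch code itself):
// map particle-like arrays to an accelerator, run a vectorisable loop there,
// and map the results back. Requires an OpenMP 4.5+ compiler; without a
// device, the region falls back to host execution.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> intensity(n, 1.0f), depth(n, 2.0f), out(n);
    float* I = intensity.data();
    float* D = depth.data();
    float* O = out.data();

    // Copy inputs to the coprocessor, run the kernel there, copy results back.
    #pragma omp target teams distribute parallel for simd \
        map(to: I[0:n], D[0:n]) map(from: O[0:n])
    for (int i = 0; i < n; ++i)
        O[i] = I[i] / (D[i] * D[i] + 1.0f);   // toy brightness attenuation

    std::printf("out[0] = %f\n", O[0]);
}
```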

    A Philosophical Approach to Entrepreneurial Education: A model based on Kantian and Aristotelian thought

    In the field of entrepreneurship education, how to develop an effective program to teach entrepreneurship has been widely debated. However, an inductive approach based on analysis of educational program experiences and outcomes has led to mixed conclusions about the appropriate scope and structure of entrepreneurship education. In contrast, we take a deductive approach to develop a comprehensive entrepreneurship education model based on concepts from two schools of philosophical thought: the Kantian debate about freedom versus determinism, and the Aristotelian concepts of praxis and poïesis. These philosophical concepts are related to scope and structure dimensions that delineate the soft (art) and hard (science) sides of entrepreneurship education, their components, and their interrelationships. Pedagogies associated with each component, as well as integrative pedagogies, are identified to guide the development of entrepreneurship education programs and teaching. Theoretical propositions are presented for future research.