83,185 research outputs found

    A Multimedia Interactive Environment Using Program Archetypes: Divide-and-Conquer

    As networks and distributed systems that can exploit parallel computing become more widespread, the need for ways to teach parallel programming effectively grows as well. Even though many colleges and universities provide courses on parallel programming [1], most of those courses are reserved for graduate students and advanced undergraduates. There is a demand for ways to teach fundamental parallel programming concepts to people with just a working knowledge of programming. By using the idea of a software archetype, and by providing a learning environment that teaches both concept and coding, we hope to satisfy this need. This paper presents an overview of the multimedia approach we took in teaching parallel programming and offers Divide-and-Conquer as an example of its use.
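
    The Divide-and-Conquer archetype splits a problem into independent subproblems, solves them concurrently, and combines the partial results. As a minimal, hedged sketch of that pattern (not the paper's learning environment or its code), the following Python fragment sorts a list by dividing it among worker processes and merging the sorted pieces:

        # Divide-and-conquer archetype: split the input, sort the pieces in
        # separate processes, then combine them with a sequential merge.
        from concurrent.futures import ProcessPoolExecutor

        def merge(left, right):
            """Combine step: merge two sorted lists."""
            out, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    out.append(left[i]); i += 1
                else:
                    out.append(right[j]); j += 1
            return out + left[i:] + right[j:]

        def merge_sort(data):
            """Sequential divide-and-conquer used inside each worker."""
            if len(data) <= 1:
                return data
            mid = len(data) // 2
            return merge(merge_sort(data[:mid]), merge_sort(data[mid:]))

        def parallel_merge_sort(data, workers=4):
            """Top-level divide: one chunk per worker, then combine."""
            chunk = max(1, len(data) // workers)
            parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                sorted_parts = list(pool.map(merge_sort, parts))
            result = []
            for part in sorted_parts:
                result = merge(result, part)
            return result

        if __name__ == "__main__":
            import random
            data = [random.randint(0, 1000) for _ in range(10_000)]
            assert parallel_merge_sort(data) == sorted(data)

    The same split/solve/combine skeleton carries over to other instances of the archetype; only the subproblem solver and the combine step change.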

    Teaching Parallel Computing to Freshmen

    Parallelism is the future of computing and computer science and should therefore be at the heart of the CS curriculum. Instead of continuing along the evolutionary path by introducing parallel computation “top down” (first in special junior-senior level courses), we are taking a radical approach and introducing parallelism at the earliest possible stages of instruction. Specifically, we are developing a completely new freshman-level course on data structures that integrates parallel computation naturally and retains the emphasis on laboratory instruction. This will help steer our curriculum as expeditiously as possible toward parallel computing. Our approach is novel in three distinct and essential ways. First, we will teach parallel computing to freshmen in a course designed from beginning to end to do so. Second, we will motivate the course with examples from scientific computation. Third, we will use multimedia and visualization as instructional aids. We have two primary objectives: to begin a reform of our undergraduate curriculum with a laboratory-based freshman course on parallel computation, and to produce tools and methodologies that improve student understanding of the basic principles of parallel computing.

    MASSIVELY PARALLEL ALGORITHMS FOR POINT CLOUD BASED OBJECT RECOGNITION ON HETEROGENEOUS ARCHITECTURE

    With the advent of new commodity depth sensors, point cloud data processing plays an increasingly important role in object recognition and perception. However, the computational cost of point cloud data processing is extremely high due to the large data size, high dimensionality, and algorithmic complexity. To address the computational challenges of real-time processing, this work investigates the possibilities of using modern heterogeneous computing platforms and their supporting ecosystems, such as massively parallel architecture (MPA), computing clusters, the compute unified device architecture (CUDA), and multithreaded programming, to accelerate point cloud based object recognition. These platforms do not yield high performance unless their architecture-specific features are properly exploited; otherwise, the result can be inferior to a conventional implementation. To achieve high-speed performance in image descriptor computing, indexing, and matching for point cloud based object recognition, this work explores both coarse- and fine-grain parallelism, identifies acceptable levels of algorithmic approximation, and analyzes various performance factors. A set of heterogeneous parallel algorithms is designed and implemented in this work. These algorithms include exact and approximate scalable massively parallel image descriptors for descriptor computing, parallel construction of the k-dimensional tree (KD-tree) and forests of KD-trees for descriptor indexing, and parallel approximate nearest neighbor search (ANNS) and buffered ANNS (BANNS) on the KD-tree and the forest of KD-trees for descriptor matching. The results show that the proposed massively parallel algorithms on heterogeneous computing platforms can significantly improve the execution time of feature computing, indexing, and matching. Meanwhile, this work demonstrates that heterogeneous computing architectures, with appropriate architecture-specific algorithm design and optimization, have distinct advantages in improving the performance of multimedia applications.
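
    The indexing-and-matching stage described above (KD-tree construction followed by approximate nearest-neighbor queries over descriptors) can be sketched on a CPU with off-the-shelf tools. In the sketch below, the array sizes, the 33-dimensional FPFH-like descriptors, and the distance threshold are illustrative assumptions; the paper's contribution is the massively parallel GPU/cluster versions of these steps, not this code:

        # Build a KD-tree over database descriptors, then run approximate
        # nearest-neighbor queries for scene descriptors (eps > 0 permits
        # approximation; workers=-1 parallelizes the batched queries on CPU).
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        db_descriptors = rng.random((50_000, 33)).astype(np.float32)    # stand-in database descriptors
        scene_descriptors = rng.random((5_000, 33)).astype(np.float32)  # stand-in scene descriptors

        tree = cKDTree(db_descriptors)                    # descriptor indexing
        dist, idx = tree.query(scene_descriptors, k=1, eps=0.1, workers=-1)

        # Keep only correspondences below an (assumed) distance threshold.
        matches = [(q, int(i)) for q, (d, i) in enumerate(zip(dist, idx)) if d < 0.5]
        print(f"{len(matches)} putative correspondences")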

    Personalized Empathic Computing (PEC)

    Until a decade ago, computers were used only by experts, and solely for professional purposes. Nowadays, the personal computer (PC) is standard equipment in most Western households and is used to gather information, play games, communicate, etc. In parallel, users' expectations have increased and, consequently, PCs are increasingly adapted to our needs. The next phase in PC evolution is Personalized Empathic Computing (PEC). When thinking of PEC, questions emerge such as: who is the user, and how can his or her characteristics be modeled? In addition, both the possibilities and the constraints of technology have to be taken into account. To unravel the user's emotional state, psychophysiological techniques are employed. Audio and visual information processing is needed to handle the multimedia input. Virtual Reality can be employed to realize high-level interaction between users and PEC systems. The realization of PEC requires cooperation among a broad range of disciplines, e.g., psychology, physiology, computer science, agent technology, interface design, and multimedia analysis. All of this will be illustrated by running projects, industrial applications, and the latest scientific research. Both the strengths and the limitations of current state-of-the-art techniques will be indicated. With that, we will look forward to a future that is not far away anymore.

    A Study on Efficient Design of A Multimedia Conversion Module in PESMS for Social Media Services

    The main contribution of this paper is to present the Platform-as-a-Service (PaaS) Environment for Social Multimedia Service (PESMS), derived from the Social Media Cloud Computing Service Environment. The main role of our PESMS is to support the development of social networking services that include audio, image, and video formats. In this paper, we focus in particular on the design and implementation of PESMS, including the transcoding function for processing large amounts of social media in a parallel and distributed manner. PESMS is designed to improve the quality and speed of multimedia conversions by incorporating a multimedia conversion module based on Hadoop, consisting of the Hadoop Distributed File System for storing large quantities of social data and MapReduce for distributed parallel processing of these data. In this way, our PESMS has the prospect of exponentially reducing the encoding time for transcoding large numbers of image files into specific formats. To test system performance for the transcoding function, we measured the image transcoding time under a variety of experimental conditions. Based on experiments performed on a 28-node cluster, we found that our system delivered excellent performance in the image transcoding function.
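
    The conversion module described above pairs HDFS storage with MapReduce-style processing. As a rough sketch of how such a transcoding step can look with Hadoop Streaming (the target format, output directory, and invocation are assumptions, not PESMS's actual configuration), a map-only Python job might read image paths from stdin, convert each file, and emit one path pair per record:

        #!/usr/bin/env python3
        # Map-only transcoding sketch in the Hadoop Streaming style: one image
        # path per input line, convert with Pillow, emit "input<TAB>output".
        import os
        import sys
        from PIL import Image  # Pillow

        TARGET_FORMAT = "JPEG"            # assumed target format
        OUTPUT_DIR = "/tmp/transcoded"    # hypothetical output location
        os.makedirs(OUTPUT_DIR, exist_ok=True)

        for line in sys.stdin:
            src = line.strip()
            if not src:
                continue
            name = os.path.splitext(os.path.basename(src))[0] + ".jpg"
            dst = os.path.join(OUTPUT_DIR, name)
            try:
                with Image.open(src) as img:
                    img.convert("RGB").save(dst, TARGET_FORMAT, quality=85)
                print(f"{src}\t{dst}")    # key<TAB>value record for Hadoop
            except OSError as err:
                print(f"transcode failed for {src}: {err}", file=sys.stderr)

    Launched through the Hadoop Streaming jar with zero reducers, many such mappers would convert disjoint subsets of the image list in parallel, which is where the reduction in overall encoding time comes from.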

    Moving Multimedia Simulations into the Cloud: a Cost-Effective Solution

    Researchers often demand bursts of computing power to quickly obtain the results of certain simulation activities. Multimedia communication simulations usually belong to this category: testing a comprehensive set of conditions may require several days on a generic PC, depending on the complexity of the scenario. This paper proposes to use a cloud computing framework to accelerate these simulations and, consequently, research activities, while at the same time reducing overall costs. A practical example is shown, representative of a typical simulation of H.264/AVC video communications over a wireless channel. This work shows that, by means of a commercial cloud computing provider, the gains of the proposed technique over more traditional solutions using dedicated computers can be significant in terms of speed and cost reduction.
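
    The speed-up in such campaigns comes from the fact that the individual runs are independent: each channel condition and random seed can execute on a separate core or cloud instance. A minimal sketch of that dispatch pattern follows; the simulator binary, its flags, and the parameter grid are hypothetical placeholders, not the paper's actual setup:

        # Dispatch one independent simulation run per (loss rate, seed) pair.
        # On N workers (local cores or cloud instances), wall-clock time for
        # the whole sweep drops roughly by a factor of N.
        import itertools
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        LOSS_RATES = [0.01, 0.05, 0.10, 0.20]   # example channel conditions
        SEEDS = range(10)                        # independent runs per condition

        def run_simulation(loss_rate, seed):
            """Launch one run; replace the command with the real simulator."""
            cmd = ["./video_sim", f"--loss={loss_rate}", f"--seed={seed}",
                   "--out", f"results/loss{loss_rate}_seed{seed}.log"]
            try:
                return subprocess.run(cmd, capture_output=True, text=True).returncode
            except FileNotFoundError:
                return -1  # placeholder binary not present in this sketch

        if __name__ == "__main__":
            jobs = list(itertools.product(LOSS_RATES, SEEDS))
            with ThreadPoolExecutor(max_workers=8) as pool:
                codes = list(pool.map(lambda args: run_simulation(*args), jobs))
            print(f"{codes.count(0)}/{len(jobs)} runs completed successfully")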