
    A Web-Based University Courses Syllabi Generator

    To improve university students' learning experience and in the quest for ABET accreditation, it is crucial to have clear and consistent syllabi encompassing the course outcomes and their relationship to the overall program outcomes for all offered courses. This paper presents the automation of syllabi for engineering programs by introducing a web-based software application for course syllabus generation. The application has been developed using best practices from educational theory and is fully aligned with ABET guidelines for program accreditation. It streamlines the process of writing syllabi and ensures compliance and conformity for all courses offered within a program. In addition, such automation reduces human errors, improves the student learning experience, reduces paper and printing costs, and provides an environmentally friendly alternative. Keywords: ABET, engineering education, student learning, syllabi generation, quality improvement
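
    The abstract does not describe the application's internal data model, so the Python sketch below is only a rough illustration with hypothetical names (Course, render_outcomes_section): it shows the kind of structured course record, mapping each course outcome to the ABET student outcomes it supports, that a syllabus generator could render into a syllabus section.

        from dataclasses import dataclass, field

        @dataclass
        class Course:
            """Hypothetical course record; not the paper's actual schema."""
            code: str
            title: str
            credits: int
            # Maps each course learning outcome to the ABET student outcomes it supports.
            outcomes: dict = field(default_factory=dict)

        def render_outcomes_section(course: Course) -> str:
            """Render the outcomes-mapping part of a syllabus as plain text."""
            lines = [f"{course.code} {course.title} ({course.credits} cr)", "Course Outcomes:"]
            for clo, abet in course.outcomes.items():
                lines.append(f"  - {clo} -> ABET student outcomes {', '.join(abet)}")
            return "\n".join(lines)

        if __name__ == "__main__":
            ee201 = Course("EE201", "Circuit Analysis", 3,
                           {"Analyze RLC circuits": ["1"], "Use simulation tools": ["6", "7"]})
            print(render_outcomes_section(ee201))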

    Toward Content-Aware Video Partitioning Methods for Distributed HEVC Video Encoding

    Recently, cloud computing has emerged as a potential platform for distributed video encoding due to its advantages in terms of both cost and performance. For distributed video encoding, the input video must be partitioned into several segments, each of which is processed over distributed resources. This paper describes the effect of different video partitioning schemes on overall encoding performance in the distributed encoding of High-Efficiency Video Coding (HEVC). In addition, we explore the performance of these video partitioning schemes based on the type of content being encoded.
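
    The abstract does not disclose the partitioning schemes themselves; purely to illustrate the idea of content-aware partitioning, the Python sketch below (hypothetical helper content_aware_splits, with scene-cut timestamps assumed to come from any external detector) snaps uniform split marks to the nearest detected scene change so that each segment can be encoded independently.

        def content_aware_splits(scene_cuts, duration, n_segments):
            """Pick split points at the scene cuts nearest to uniform time marks.

            scene_cuts: sorted timestamps (seconds) of detected scene changes,
            produced by any external detector; duration: total video length.
            Splitting at scene changes avoids segments that straddle a cut,
            so each piece can be handed to a separate HEVC encoder instance.
            """
            splits = []
            for i in range(1, n_segments):
                target = duration * i / n_segments
                nearest = min(scene_cuts, key=lambda t: abs(t - target))
                if 0 < nearest < duration and nearest not in splits:
                    splits.append(nearest)
            return sorted(splits)

        if __name__ == "__main__":
            cuts = [4.2, 9.8, 15.1, 22.7, 30.4, 41.0, 47.3]
            # Three split points -> four segments over a 50 s clip.
            print(content_aware_splits(cuts, duration=50.0, n_segments=4))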

    Testing a Cloud Provider Network for Hybrid P2P and Cloud Streaming Architectures

    The number of online real-time streaming services deployed over network topologies such as P2P or centralized ones has increased remarkably in recent years. This has revealed the lack of networks that are well prepared to respond to this kind of traffic. A hybrid distribution network can be an efficient solution for real-time streaming services. This paper presents the experimental results of streaming distribution in a hybrid architecture that consists of mixed connections among P2P and Cloud nodes that can interoperate together. We chose to represent the P2P nodes as PlanetLab machines around the world and the cloud nodes using a Cloud provider's network. First, we present an experimental validation of the Cloud infrastructure's ability to distribute streaming sessions with respect to some key streaming QoS parameters: jitter, throughput and packet loss. Next, we show the results obtained from different test scenarios when a hybrid distribution network is used. The scenarios measure the improvement of the multimedia QoS parameters when nodes in the streaming distribution network (located on different continents) are gradually moved into the Cloud provider infrastructure. The overall conclusion is that the QoS of a streaming service can be efficiently improved, unlike in traditional P2P systems and CDNs, by deploying a hybrid streaming architecture. This enhancement can be obtained by strategically placing certain distribution network nodes in the Cloud provider infrastructure, taking advantage of the reduced packet loss and low latency among its datacenters.
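
    As a back-of-the-envelope companion to the QoS parameters measured in the paper, the sketch below (hypothetical helper qos_metrics, using a plain mean-absolute-deviation jitter rather than the RFC 3550 estimator) shows one simple way to derive packet loss, throughput and jitter from a received-packet trace; it is not the measurement tooling used in the experiments.

        def qos_metrics(packets, expected_count):
            """Derive simple QoS figures from received packet records.

            packets: list of (seq, arrival_time_s, size_bytes) sorted by arrival;
            expected_count: number of packets the sender emitted.
            Returns (loss_ratio, throughput_bps, mean_jitter_s).
            """
            received = len(packets)
            loss_ratio = 1.0 - received / expected_count
            span = packets[-1][1] - packets[0][1]
            bits = 8 * sum(size for _, _, size in packets)
            throughput_bps = bits / span if span > 0 else 0.0
            gaps = [b[1] - a[1] for a, b in zip(packets, packets[1:])]
            mean_gap = sum(gaps) / len(gaps)
            jitter = sum(abs(g - mean_gap) for g in gaps) / len(gaps)
            return loss_ratio, throughput_bps, jitter

        if __name__ == "__main__":
            trace = [(1, 0.00, 1200), (2, 0.02, 1200), (4, 0.07, 1200), (5, 0.09, 1200)]
            print(qos_metrics(trace, expected_count=5))  # packet 3 was lost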

    A Distributed Model for Computing k-Ring-Based Local Descriptors of 3D Meshes

    To facilitate 3D object processing, it is common to use high-level representations such as local descriptors, which are usually computed over defined neighborhoods. K-rings, one technique for defining such neighborhoods, is widely used by several methods. In this work, we propose a model for the distributed computation of local descriptors over 3D triangular meshes using the concept of k-rings. In our experiments, we measure the performance of our model on very large meshes, evaluating speedup, scalability, and descriptor computation time. We show the optimal configuration of our model for the cluster we implemented and the linear growth of computation time with respect to mesh size and the number of rings. For our tests, we use the Harris response, which describes the saliency of the object.
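
    For readers unfamiliar with the neighborhood definition, the sketch below shows how a k-ring can be gathered by a breadth-first walk over vertex adjacency built from triangle faces; it is a minimal single-machine illustration of the concept, not the distributed model or the Harris response computation proposed in the paper.

        from collections import defaultdict, deque

        def vertex_adjacency(faces):
            """Vertex adjacency from triangle faces given as (i, j, k) index triples."""
            adj = defaultdict(set)
            for a, b, c in faces:
                adj[a].update((b, c))
                adj[b].update((a, c))
                adj[c].update((a, b))
            return adj

        def k_ring(adj, seed, k):
            """All vertices within k edge hops of `seed` (its k-ring neighborhood)."""
            seen, frontier = {seed}, deque([(seed, 0)])
            while frontier:
                v, depth = frontier.popleft()
                if depth == k:
                    continue
                for u in adj[v]:
                    if u not in seen:
                        seen.add(u)
                        frontier.append((u, depth + 1))
            return seen

        if __name__ == "__main__":
            faces = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
            adj = vertex_adjacency(faces)
            print(sorted(k_ring(adj, seed=0, k=1)))  # [0, 1, 2]
            print(sorted(k_ring(adj, seed=0, k=2)))  # [0, 1, 2, 3, 4]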

    Large-Scale Image Processing Using MapReduce

    Due to the increasing popularity of cheap digital photography equipment, personal computing devices with easy-to-use cameras, and an overall improvement in image capture technology with regard to quality, the amount of data generated by people each day shows signs of growing faster than the processing capabilities of single devices. For other tasks related to large-scale data, humans have already turned towards distributed computing as a way to side-step impending physical limitations of processing hardware by combining the resources of many computers and providing programmers various interfaces to the resulting construct, relieving them from having to account for the intricacies stemming from its physical structure. An example of this is the MapReduce model, which, by casting all calculations as a chain of Input-Map-Reduce-Output operations capable of working independently, allows easy application of distributed computing to many trivially parallelised processes. With the aid of freely available implementations of this model and cheap computing infrastructure offered by cloud providers, access to expensive purpose-built hardware or an in-depth understanding of parallel programming is no longer required of anyone who wishes to work with large-scale image data. In this thesis, I look at the issues of processing two kinds of such data (large data-sets of regular images and single large images) using MapReduce. By further classifying image processing algorithms as iterative/non-iterative and local/non-local, I present a general analysis of why different combinations of algorithms and data might be easier or harder to adapt for distributed processing with MapReduce. Finally, I describe the application of distributed image processing to two example cases: a 265GiB data-set of photographs and a 6.99 gigapixel image. Both preliminary analysis and practical results indicate that the MapReduce model is well suited for distributed image processing in the first case, whereas in the second case this is true only for local non-iterative algorithms, and further work is necessary in order to provide a conclusive decision.
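
    To make the local, non-iterative case concrete, here is a minimal Hadoop Streaming style mapper sketch in Python: it assumes images are referenced by paths readable from every worker and uses Pillow for a purely local per-pixel operation (grayscale conversion). It illustrates the idea only and is not the code developed in the thesis.

        #!/usr/bin/env python3
        # Minimal Hadoop Streaming mapper sketch: one image path per input line,
        # a local, non-iterative operation (grayscale conversion) applied to each.
        # Assumes Pillow is installed and the paths are reachable by every worker;
        # this is an illustration, not the thesis's implementation.
        import sys
        from pathlib import Path
        from PIL import Image

        def to_grayscale(src: str) -> str:
            out = Path(src).with_suffix(".gray.png")
            Image.open(src).convert("L").save(out)  # purely local per-pixel work
            return str(out)

        if __name__ == "__main__":
            for line in sys.stdin:
                src = line.strip()
                if src:
                    # Streaming expects tab-separated key/value pairs on stdout.
                    print(f"{src}\t{to_grayscale(src)}")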

    Hadoop Image Processing Framework

    With the rapid growth of social media, the number of images being uploaded to the internet is exploding. Massive quantities of images are shared through multi-platform services such as Snapchat, Instagram, Facebook and WhatsApp; recent studies estimate that over 1.8 billion photos are uploaded every day. However, for the most part, applications that make use of this vast data have yet to emerge. Most current image processing applications, designed for small-scale, local computation, do not scale well to web-sized problems, given their large requirements for computational resources and storage. The emergence of processing frameworks such as the Hadoop MapReduce platform (Dean and Ghemawat, 2008) addresses the problem of providing a system for computationally intensive data processing and distributed storage. However, learning the technical complexities of developing useful applications with Hadoop requires a large investment of time and experience on the part of the developer. As such, the pool of researchers and programmers with the varied skills needed to develop applications that can use large sets of images has been limited. To address this, we have developed the Hadoop Image Processing Framework, which provides a Hadoop-based library to support large-scale image processing. The main aim of the framework is to allow developers of image processing applications to leverage the Hadoop MapReduce framework without having to master its technical details or introduce an additional source of complexity and error into their programs.
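
    The framework's own API is not described in this abstract and no attempt is made to reproduce it here; instead, as a generic illustration of one piece of plumbing such libraries typically hide, the sketch below packs many small image files into a single length-prefixed bundle (an assumed, made-up layout, not the framework's format) so that a MapReduce job reads a few large files rather than millions of tiny ones.

        import struct
        from pathlib import Path

        def pack_images(image_paths, bundle_path):
            """Write each image as <name_len><name><data_len><data> into one bundle file."""
            with open(bundle_path, "wb") as bundle:
                for p in map(Path, image_paths):
                    name = p.name.encode()
                    data = p.read_bytes()
                    bundle.write(struct.pack(">I", len(name)) + name)
                    bundle.write(struct.pack(">Q", len(data)) + data)

        def unpack_images(bundle_path):
            """Yield (name, bytes) pairs back out of a bundle produced above."""
            with open(bundle_path, "rb") as bundle:
                while header := bundle.read(4):
                    name = bundle.read(struct.unpack(">I", header)[0]).decode()
                    data = bundle.read(struct.unpack(">Q", bundle.read(8))[0])
                    yield name, data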

    Content Based Image Retrieval for Big Visual Data using Map Reduce

    With the high availability of portable, low-cost digital cameras and improvements in image capture technology, a huge amount of visual data (photos and videos) is being generated every day. The processing capability of standalone devices is insufficient to handle such massive data, also known as big data. Apache Hadoop, a distributed computing platform based on the MapReduce framework, is an easy-to-use solution for managing big data, with freely available implementations. Hadoop is designed mainly for cost-effective commodity hardware or inexpensive cloud computing infrastructure. Hence, access to expensive hardware or an in-depth understanding of parallel programming is no longer required to work on big data.
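
    The abstract does not detail the retrieval pipeline, so the sketch below is only a generic illustration of histogram-based content-based image retrieval in a map-style step: each image is reduced to a coarse normalized RGB histogram (an assumed descriptor choice; Pillow assumed available) and scored against the query histogram, so that a later sort or reduce phase can rank the nearest matches.

        from PIL import Image

        def rgb_histogram(path, bins_per_channel=8):
            """Coarse normalized RGB histogram used here as the image descriptor."""
            img = Image.open(path).convert("RGB").resize((128, 128))
            counts = [0] * (bins_per_channel ** 3)
            step = 256 // bins_per_channel
            for r, g, b in img.getdata():
                idx = ((r // step) * bins_per_channel + (g // step)) * bins_per_channel + (b // step)
                counts[idx] += 1
            total = float(sum(counts))
            return [c / total for c in counts]

        def l1_distance(h1, h2):
            return sum(abs(a - b) for a, b in zip(h1, h2))

        def map_step(image_path, query_hist):
            """Emit (distance, path); a later sort/reduce phase ranks the candidates."""
            return l1_distance(rgb_histogram(image_path), query_hist), image_path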