47 research outputs found

    Observation of current approaches to utilize the elastic cloud for big data stream processing

    This thesis conducts a systematic literature map to collect information about current approaches to utilizing the elastic cloud for data stream processing in the big data context. It first describes and sets up the scientific methodology used, which adheres to generally accepted methods for systematic literature maps. After a reference set was built and search queries for data collection were constructed, the data set was cleaned: the publications were first filtered automatically and then reviewed manually to determine the relevant papers. The collected data were evaluated and visualized to help answer the defined research questions and present the information. Finally, the results of the thesis are discussed and the limitations and implications are addressed.

    This thesis carries out a systematic literature map to provide an overview of a field. The field examined is the use of the elastic properties of the cloud for data stream processing in the big data environment. The systematic literature map comprises both the collection of all publications relevant to the examined field and the evaluation and presentation of the collected data. To evaluate the information in a targeted way, research questions were defined to serve as a guideline. First, the scientific methods used were introduced, which follow recognized procedures. After a set of relevant publications had been compiled, search queries for data collection were constructed on their basis. The data were then exported from the online databases of well-known publishers and duplicates were removed. To determine the final set of relevant publications, irrelevant publications were excluded on the basis of keywords and the remaining ones were assessed manually one by one. The collected data were partly evaluated automatically and partly classified manually in order to answer the previously defined research questions. Finally, the results are discussed and the limitations and implications of this work are addressed.
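
    As a rough illustration of the data set cleaning step described above, the following Python sketch deduplicates exported publication records and removes entries whose titles contain exclusion keywords before manual review. The record fields, keyword list, and example titles are illustrative assumptions, not taken from the thesis.

        # Minimal sketch of the deduplication and keyword filtering step described
        # above. Record fields and keyword lists are illustrative assumptions.

        records = [
            {"title": "Elastic scaling for cloud stream processing", "doi": "10.1/a"},
            {"title": "Elastic Scaling for Cloud Stream Processing", "doi": "10.1/a"},
            {"title": "A survey of batch MapReduce tuning", "doi": "10.1/b"},
        ]

        EXCLUDE_KEYWORDS = {"batch", "survey"}   # assumed exclusion terms

        def normalise(title: str) -> str:
            """Lower-case and collapse whitespace so near-identical titles collide."""
            return " ".join(title.lower().split())

        def deduplicate(recs):
            seen, unique = set(), []
            for rec in recs:
                key = rec.get("doi") or normalise(rec["title"])
                if key not in seen:
                    seen.add(key)
                    unique.append(rec)
            return unique

        def keyword_filter(recs):
            """Drop records whose title contains any assumed exclusion keyword."""
            return [r for r in recs
                    if not EXCLUDE_KEYWORDS & set(normalise(r["title"]).split())]

        candidates = keyword_filter(deduplicate(records))
        print(candidates)   # the remaining papers would then be reviewed manually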

    MAS-based Distributed Coordinated Control and Optimization in Microgrid and Microgrid Clusters: A Comprehensive Overview


    Development of an object detection and mask generation software for dynamic beam projection in automotive pixel lighting applications

    Nowadays there are many contributions to the automotive industry and the field is developing fast. This work can be used for some real-time autonomous driving applications. The goal was to add advanced functionality to a standard light source in collaboration with electronic systems. Including advanced features may result in safer and more pleasant driving. The application fields of the work could include glare-free light sources, orientation and lane lights, marking lights, and symbol projection. Object detection and classification with a confidence score is implemented on a real-time source. The best model was obtained by training with varying parameters; the most accurate result, an mAP of 0.572, was achieved with the chosen training data split, a learning rate of 0.2, and 300 epochs. Moreover, a basic implementation of a glare-free light source was developed to prevent drivers from being blinded by the illumination of the beams. Car-shaped and rectangle-shaped masks were generated as image files and sent as CSV files to the pixel light source device. As a result, the rectangle-shaped mask functions more precisely than the car-shaped one.
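
    As a minimal sketch of the mask-export step described above, the following Python snippet rasterises a rectangular occlusion mask for a detected vehicle and writes it out as a CSV file; the matrix resolution, detection box, and file name are assumptions rather than values from the thesis.

        # Sketch: rasterise a rectangular "glare-free" mask and export it as CSV.
        # The 2D detection box, matrix resolution and file name are assumptions.
        import csv
        import numpy as np

        WIDTH, HEIGHT = 320, 80              # assumed pixel-matrix resolution

        def rectangle_mask(x0, y0, x1, y1):
            """Return a binary mask: 0 where the beam must be dimmed, 1 elsewhere."""
            mask = np.ones((HEIGHT, WIDTH), dtype=np.uint8)
            mask[y0:y1, x0:x1] = 0           # blank out the detected vehicle's box
            return mask

        def write_csv(mask, path="mask.csv"):
            with open(path, "w", newline="") as f:
                csv.writer(f).writerows(mask.tolist())

        # Example: a detection roughly in the centre of the matrix
        write_csv(rectangle_mask(140, 20, 200, 60))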

    HIGH QUALITY HUMAN 3D BODY MODELING, TRACKING AND APPLICATION

    Geometric reconstruction of dynamic objects is a fundamental task of computer vision and graphics, and modeling the human body with high fidelity is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, which limits working space and portability. In this dissertation, a complete pipeline is designed, from geometric modeling of a detailed 3D human full body and capturing shape dynamics over time with a flexible setup, to guiding clothes/person re-targeting with such data-driven models. The mechanical movement of the human body can be treated as articulated motion, which makes it easy to drive skin animation but hard to recover the motion parameters from images without manual intervention. We therefore present a novel parametric model, GMM-BlendSCAPE, which jointly takes both a linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople) into consideration, and we develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show the increased accuracy of joint and skin surface estimation using our model compared to skeleton-based motion tracking. To model the detailed body, we start by capturing high-quality partial 3D scans with a single-view commercial depth camera. Based on GMM-BlendSCAPE, we can then reconstruct multiple complete static models across large pose differences via our novel non-rigid registration algorithm. With vertex correspondences established, these models can be further converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Last but not least, we demonstrate a novel virtual clothes try-on application based on our personalized model that utilizes both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
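
    The GMM-based inference can be pictured with a toy E-step: given fixed mixture parameters, compute each component's responsibility for every observed surface point. The Python sketch below uses made-up parameters and stands in for the mechanics only; it is not the dissertation's GMM-BlendSCAPE formulation, which couples such soft assignments with shape and pose blending.

        # Toy E-step of a Gaussian mixture over observed 3D points (illustrative
        # only; the real GMM-BlendSCAPE model couples this with shape/pose blending).
        import numpy as np

        def responsibilities(points, means, covs, weights):
            """points: (N,3); means: (K,3); covs: (K,3,3); weights: (K,)"""
            N, K = len(points), len(means)
            lik = np.empty((N, K))
            for k in range(K):
                diff = points - means[k]
                inv = np.linalg.inv(covs[k])
                norm = np.sqrt(((2 * np.pi) ** 3) * np.linalg.det(covs[k]))
                lik[:, k] = weights[k] * np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1)) / norm
            return lik / lik.sum(axis=1, keepdims=True)   # soft assignments per point

        pts = np.random.rand(100, 3)                       # stand-in for scanned surface points
        mu = np.array([[0.2, 0.2, 0.2], [0.8, 0.8, 0.8]])  # assumed component means
        cov = np.stack([np.eye(3) * 0.05] * 2)
        print(responsibilities(pts, mu, cov, np.array([0.5, 0.5]))[:3])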

    Application of Half Spaces in Bounding Wireless Internet Signals for use in Indoor Positioning

    The problem of outdoor positioning has been largely solved via the use of GPS. This thesis addresses the problem of determining position in areas where GPS is unavailable. No clear solution exists for indoor localization and all approximation methods offer unique drawbacks. To mitigate the drawbacks, robust systems combine multiple complementary approaches. In this thesis, fusion of wireless internet access point signals and inertial sensors was used to allow indoor positioning without the need for prior information regarding surroundings. Implementation of the algorithm involved development of three separate systems. The first system simply combines inertial sensors on the Android Nexus 7 to form a step counter capable of providing marginally accurate initial measurements. Having achieved reliable initial measurements, the second system receives signal strength from nearby wireless internet access points, augmenting the sensor data in order to generate half-planes. The half-planes partition the available space and bound the possible region in which each access point can exist. Lastly, the third system addresses the tendency of the step counter to lose accuracy over time by using the recently established positions of the access points to correct flawed values. The resulting process forms a simple feedback loop. A primary contribution of this thesis is an algorithm for determining access point position. Testing shows that in certain applications access points relatively near the user's path of travel can be positioned with high accuracy. Additionally, the nature of the design means that the geometric algorithm has a tendency to achieve maximum performance in environments containing many twists and turns while suffering from a lack of useful data on straight paths. In contrast, winding areas confound the step counter, which performs better when used in long straight stretches of constant movement. When combined, these trends complement one another and result in a robust final product.
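
    One plausible way such half-planes could be formed, sketched below in Python, is to take consecutive path positions with different signal strengths and keep the perpendicular-bisector side of the stronger reading as the region that may contain the access point. The positions and RSSI values are invented for illustration; this is not the thesis implementation.

        # Sketch: bound an access point's position with half-planes derived from
        # pairs of path positions, assuming the AP lies on the side of the stronger
        # RSSI reading (one plausible reading of the approach, not the thesis code).
        import numpy as np

        def half_plane(p_strong, p_weak):
            """Half-plane {x : n.x <= c} of points closer to p_strong than to p_weak."""
            n = p_weak - p_strong
            c = n @ (p_strong + p_weak) / 2.0
            return n, c

        def inside_all(x, planes, tol=1e-9):
            return all(n @ x <= c + tol for n, c in planes)

        # Three steps along a corridor with RSSI readings for one access point
        path = [np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([8.0, 0.0])]
        rssi = [-60.0, -50.0, -65.0]          # assumed dBm values

        planes = []
        for (pa, ra), (pb, rb) in zip(zip(path, rssi), zip(path[1:], rssi[1:])):
            strong, weak = (pa, pb) if ra > rb else (pb, pa)
            planes.append(half_plane(strong, weak))

        print(inside_all(np.array([4.0, 1.0]), planes))   # near the strongest reading -> True
        print(inside_all(np.array([9.0, 0.0]), planes))   # past the weak end -> False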

    Design and analysis of a 3-dimensional cluster multicomputer architecture using optical interconnection for petaFLOP computing

    In this dissertation, the design and analysis of an extremely scalable distributed multicomputer architecture, using optical interconnects, that has the potential to deliver on the order of petaFLOP performance is presented in detail. The design takes advantage of optical technologies, harnessing the features inherent in optics, to produce a 3D stack that efficiently implements a large, fully connected system of nodes forming a true 3D architecture. To adopt optics in large-scale multiprocessor cluster systems, efficient routing and scheduling techniques are needed. To this end, novel self-routing strategies for all-optical packet-switched networks and on-line scheduling methods that can result in collision-free communication and achieve real-time operation in high-speed multiprocessor systems are proposed. The system is designed to allow failed/faulty nodes to stay in place without appreciable performance degradation. The approach is to develop a dynamic communication environment that can effectively adapt and evolve with a high density of missing units or nodes. A joint CPU/bandwidth controller that maximizes resource allocation in this dynamic computing environment is introduced, with the objective of optimizing the distributed cluster architecture and preventing performance/system degradation in the presence of failed/faulty nodes. A thorough analysis, feasibility study, and description of the characteristics of a 3-dimensional multicomputer system capable of achieving 100 teraFLOP performance are presented in detail. Also included is a throughput analysis of the routing schemes, using methods from discrete-time queuing systems, together with computer simulation results for the different proposed algorithms. A prototype of the proposed 3D architecture is built and a test bed developed to obtain experimental results that further prove the feasibility of the design and validate the initial assumptions, algorithms, simulations, and the optimized distributed resource allocation scheme. Finally, as a prelude to further research, an efficient data routing strategy for highly scalable distributed mobile multiprocessor networks is introduced.
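
    The flavour of the discrete-time queuing analysis mentioned above can be conveyed with a toy slot-based simulation of a single output queue with Bernoulli packet arrivals and at most one departure per slot; the arrival probability and service model below are assumptions, not the dissertation's routing model.

        # Slot-based simulation of a single output queue with Bernoulli packet
        # arrivals and one departure per slot - a toy stand-in for the kind of
        # discrete-time queuing analysis mentioned above (parameters are assumed).
        import random

        def simulate(p_arrival=0.6, slots=100_000, seed=1):
            random.seed(seed)
            queue, delivered = 0, 0
            for _ in range(slots):
                if random.random() < p_arrival:   # new packet arrives this slot
                    queue += 1
                if queue > 0:                     # at most one packet leaves per slot
                    queue -= 1
                    delivered += 1
            return delivered / slots              # throughput in packets per slot

        print(f"throughput ~ {simulate():.3f}")   # approaches p_arrival while p_arrival < 1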

    An investigation of nondeterminism in functional programming languages

    This thesis investigates nondeterminism in functional programming languages. To establish a precise understanding of nondeterministic language properties, Sondergaard and Sestoft's analysis and definitions of functional language properties are adopted, as are their characterizations of weak and strong nondeterminism. This groundwork is followed by a denotational semantic description of a nondeterministic language (suggested by Sondergaard and Sestoft). In this manner, a precise characterization of the effects of strong nondeterminism is developed. Methods used to hide nondeterminism in order to overcome or sidestep the problem of strong nondeterminism in pure functional languages are then defined. These techniques ensure that functional languages remain pure while still gaining some of the advantages of nondeterminism. Lastly, this discussion of nondeterminism is applied to the area of functional parallel language implementation to show that the problem and its possible solutions are not purely academic. This application gives rise to an interesting discussion on the optimization of list parallelism, a technique that relies on the ability to decide when a bag may be used instead of a list.
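
    The closing point about bags and lists can be illustrated even outside a functional language: when the combining operator is commutative and associative, the nondeterministic order in which parallel results arrive does not affect the outcome, so a bag (multiset) may replace a list. The Python sketch below is purely illustrative and not drawn from the thesis.

        # Illustration: when the combining operator is commutative and associative,
        # the nondeterministic arrival order of parallel results does not matter,
        # so a bag (multiset) may replace a list. Purely illustrative, not thesis code.
        from collections import Counter
        from functools import reduce
        import random

        chunk_results = [3, 1, 4, 1, 5]           # per-worker partial sums (assumed)

        orders = [random.sample(chunk_results, len(chunk_results)) for _ in range(3)]

        # As lists, the arrival orders may differ; as bags, they are indistinguishable.
        print([o == chunk_results for o in orders])                     # order-sensitive view
        print([Counter(o) == Counter(chunk_results) for o in orders])   # bag view: all True

        # A commutative/associative reduction gives the same result for every order.
        print({reduce(lambda a, b: a + b, o) for o in orders})          # single value: {14}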

    On the design and implementation of a control system processor

    In general, digital control algorithms are multi-input multi-output (MIMO) recursive digital filters, but there are particular numerical requirements in control system processing for which standard processor devices are not well suited, especially in systems with high sample rates. There is therefore a clear need to understand the numerical requirements properly, to identify optimised forms for implementing control laws, and to translate these into efficient processor architectures. By taking a considered view of the numerical and calculation requirements of control algorithms, it is possible to consider special-purpose processors that provide well-targeted support of control laws. This thesis describes a compact, high-speed, special-purpose processor which offers a low-cost solution to implementing linear time-invariant controllers. [Continues.]
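
    The control laws referred to are MIMO recursive filters; in state-space form each sample period reduces to two matrix-vector updates, x[k+1] = A x[k] + B u[k] and y[k] = C x[k] + D u[k]. The Python sketch below uses small assumed matrices to show the per-sample workload such a processor must support; it does not describe the processor's actual architecture.

        # One sample step of a discrete-time state-space control law
        #   x[k+1] = A x[k] + B u[k],   y[k] = C x[k] + D u[k]
        # The matrices here are small illustrative assumptions.
        import numpy as np

        A = np.array([[0.9, 0.1], [0.0, 0.8]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        D = np.array([[0.0]])

        def controller_step(x, u):
            """Advance the controller state by one sample and emit the output."""
            y = C @ x + D @ u
            x_next = A @ x + B @ u
            return x_next, y

        x = np.zeros((2, 1))
        for k, r in enumerate([1.0, 1.0, 0.5]):          # assumed input samples
            x, y = controller_step(x, np.array([[r]]))
            print(f"k={k}  y={y.ravel()[0]:.3f}")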

    Improved Encoding for Compressed Textures

    For the past few decades, graphics hardware has supported mapping a two-dimensional image, or texture, onto a three-dimensional surface to add detail during rendering. The complexity of modern applications using interactive graphics hardware has created an explosion in the amount of data needed to represent these images. In order to alleviate the amount of memory required to store and transmit textures, graphics hardware manufacturers have introduced hardware decompression units into the texturing pipeline. Textures may now be stored compressed in memory and decoded at run time in order to access the pixel data. To encode images for use with these hardware features, many compression algorithms are run offline as a preprocessing step, often the most time-consuming step in the asset preparation pipeline. This research presents several techniques to quickly serve compressed texture data. With the goal of interactive compression rates while maintaining compression quality, three algorithms are presented in the class of endpoint compression formats. The first uses intensity dilation to estimate compression parameters for low-frequency signal-modulated compressed textures and offers up to a 3X improvement in compression speed. The second, FasTC, shows that by estimating the final compression parameters, partition-based formats can choose an approximate partitioning and offer orders of magnitude faster encoding speed. The third, SegTC, shows additional improvement over selecting a partitioning by using a global segmentation to find the boundaries between image features. This segmentation offers an additional 2X improvement over FasTC while maintaining similar compression quality. Also presented is a case study in using texture compression to benefit two-dimensional concave path rendering. Compressing the pixel coverage textures used for compositing yields both an increase in rendering speed and a decrease in storage overhead. Additionally, an algorithm is presented that uses a single layer of indirection to adaptively select the compressed block size for each texture, giving a 2X increase in compression ratio for textures of mixed detail. Finally, a texture storage representation that is decoded at run time on the GPU is presented. The decoded texture is still compressed for graphics hardware but uses 2X fewer bytes for storage and network bandwidth.
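
    Endpoint compression formats of the kind discussed store, for each pixel block, two endpoint values plus a small per-pixel index into values interpolated between them. The Python sketch below encodes a single 4x4 greyscale block in this style as a much-simplified stand-in; it is not FasTC, SegTC, or any actual hardware format.

        # Much-simplified endpoint encoding of one 4x4 greyscale block: store two
        # endpoints plus a 2-bit index per pixel into values interpolated between
        # them. Illustrates the format family only, not FasTC/SegTC themselves.
        import numpy as np

        def encode_block(block):
            lo, hi = float(block.min()), float(block.max())
            palette = np.linspace(lo, hi, 4)                       # 4 interpolated levels
            indices = np.abs(block[..., None] - palette).argmin(axis=-1)
            return lo, hi, indices.astype(np.uint8)                # 2 bits per pixel

        def decode_block(lo, hi, indices):
            return np.linspace(lo, hi, 4)[indices]

        block = np.random.randint(0, 256, (4, 4)).astype(np.float32)
        lo, hi, idx = encode_block(block)
        err = np.abs(decode_block(lo, hi, idx) - block).mean()
        print(f"mean absolute error per pixel: {err:.2f}")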

    What Role Can an International Financial Centre's Law Play in the Development of a Sunrise Industry? The Case of Hong Kong and Solar Powered Investments

    How can international financial centres like Hong Kong increase assets under management – and thus their size and ranking? Most policymakers and their advisors wrongly answer this question by focusing on financial institutions and the law that governs them. Instead, policymakers need to start by looking at actual markets. What new tastes and technologies need funding? How can such funding fit into already existing geographies of production, distribution and finance? In this paper, we show how a focus on funding sunrise industries can help increase assets under management for the financial institutions operating in an international financial centre like Hong Kong. We show, using the specific example of the photovoltaic (solar power) sector, how changes in financial law need to be contingent on market needs. We specifically show how legal changes which promote the securitisation of solar assets (and the sale of these securities) can help increase Hong Kong’s financial institutions’ assets under management. By using this specific case, we hope to provide insight into the broader question of how technological change, geography, and financial law interact.