
    Partitioning problems in parallel, pipelined and distributed computing

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple-satellite system: partitioning multiple chain-structured parallel programs, multiple arbitrarily structured serial programs, and single tree-structured parallel programs. In addition, the problems of partitioning chain-structured parallel programs across chain-connected systems and across shared-memory (or shared-bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account, and they permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
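
    The flavor of the chain-partitioning problems solved here can be pictured with a small sketch: contiguous modules of a chain are grouped onto successive processors so that the bottleneck (the load of the busiest processor) is minimized. The weights and the dynamic program below are illustrative assumptions only; the paper's actual Sum-Bottleneck path algorithm works on a layered assignment graph and also accounts for communication costs and the other system structures mentioned above.

    ```python
    # Illustrative only: minimize the bottleneck when a chain of module
    # execution weights is split into n contiguous groups (one per processor).
    # Communication costs, which the Sum-Bottleneck algorithm models via
    # paths in a layered assignment graph, are omitted in this sketch.

    def chain_bottleneck(weights, n):
        m = len(weights)
        prefix = [0]
        for w in weights:
            prefix.append(prefix[-1] + w)

        INF = float("inf")
        # best[k][i]: minimal bottleneck placing the first i modules on k processors
        best = [[INF] * (m + 1) for _ in range(n + 1)]
        best[0][0] = 0
        for k in range(1, n + 1):
            for i in range(1, m + 1):
                for j in range(k - 1, i):  # modules j..i-1 go on processor k
                    load = prefix[i] - prefix[j]
                    best[k][i] = min(best[k][i], max(best[k - 1][j], load))
        return best[n][m]

    print(chain_bottleneck([4, 2, 7, 1, 3, 5], 3))  # -> 8, e.g. [4,2] | [7,1] | [3,5]
    ```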

    Approximate algorithms for partitioning and assignment problems

    The problem of optimally assigning the modules of a parallel/pipelined program over the processors of a multiple computer system, under certain restrictions on the interconnection structure of the program as well as of the multiple computer system, is considered. For a variety of such programs it is possible to determine in linear time whether a partition of the program exists in which the load on any processor stays within a given bound. This check, combined with a binary search over a finite range, provides an approximate solution to the partitioning problem. The specific problems considered are: a chain-structured parallel program over a chain-like computer system, multiple chain-like programs over a host-satellite system, and a tree-structured parallel program over a host-satellite system. For a problem with m modules and n processors, the complexity of the algorithm is no worse than O(mn log(W_T/ε)), where W_T is the cost of assigning all modules to one processor and ε is the desired accuracy.
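
    A hedged sketch of this probe-and-bisect scheme, for the simplest case of a chain program on n identical processors with no communication costs (an assumed simplification): a linear-time feasibility probe asks whether a partition exists with per-processor load at most a bound B, and a binary search narrows B down to within ε. The greedy probe below is an illustrative stand-in for the paper's probe functions, which handle the chain-like and host-satellite structures described above.

    ```python
    # Illustrative probe-and-bisect: approximate the minimal bottleneck load
    # for a chain of module weights on n identical processors.

    def feasible(weights, n, bound):
        """Linear-time probe: can the chain be cut into <= n contiguous
        pieces, each with total weight <= bound?"""
        if max(weights) > bound:
            return False
        pieces, load = 1, 0
        for w in weights:
            if load + w > bound:
                pieces += 1   # start a new piece on the next processor
                load = w
            else:
                load += w
        return pieces <= n

    def approx_bottleneck(weights, n, eps=1e-3):
        lo, hi = max(weights), sum(weights)  # hi is W_T: all modules on one processor
        while hi - lo > eps:                 # O(log(W_T / eps)) probes
            mid = (lo + hi) / 2
            if feasible(weights, n, mid):
                hi = mid
            else:
                lo = mid
        return hi

    print(approx_bottleneck([4, 2, 7, 1, 3, 5], 3))  # ~8.0
    ```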

    Enhancing Job Scheduling of an Atmospheric Intensive Data Application

    Nowadays, e-Science applications involve a great deal of data in order to produce more accurate analyses. One such application domain is Radio Occultation, which processes satellite data. The Grid Processing Management is a geographically distributed physical infrastructure, based on Grid Computing, implemented for the overall Radio Occultation processing and analysis. After a brief description of the algorithms adopted to characterize atmospheric profiles, the paper presents an improvement of job scheduling that decreases processing time and optimizes resource utilization. Grid computing capacity is extended by adding virtual machines to the existing physical Grid in order to satisfy temporary job requests. Scheduling also plays an important role in the infrastructure; it is handled by a pair of schedulers developed to manage data automatically.
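
    The elasticity idea described above can be pictured as follows; the class name, slot model, and provisioning rule are hypothetical illustrations for the sketch, not details taken from the paper.

    ```python
    # Hypothetical sketch: when queued jobs exceed what the physical Grid
    # nodes can absorb, temporary virtual-machine slots are provisioned to
    # drain the backlog, then released once the queue is empty.

    from collections import deque

    class ElasticScheduler:
        def __init__(self, physical_slots, vm_slots_max):
            self.queue = deque()
            self.physical_slots = physical_slots
            self.vm_slots_max = vm_slots_max
            self.vm_slots = 0

        def submit(self, job):
            self.queue.append(job)
            # Provision one temporary VM slot per job the physical Grid cannot hold.
            backlog = len(self.queue) - (self.physical_slots + self.vm_slots)
            if backlog > 0:
                self.vm_slots = min(self.vm_slots_max, self.vm_slots + backlog)

        def dispatch(self):
            capacity = self.physical_slots + self.vm_slots
            running = [self.queue.popleft()
                       for _ in range(min(capacity, len(self.queue)))]
            if not self.queue:   # backlog drained: release temporary VMs
                self.vm_slots = 0
            return running

    sched = ElasticScheduler(physical_slots=2, vm_slots_max=4)
    for j in range(5):
        sched.submit(f"ro_profile_{j}")
    print(sched.dispatch())  # 5 jobs run at once: 2 physical + 3 temporary VM slots
    ```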

    Evaluation of Edge AI Co-Processing Methods for Space Applications

    The spread of SmallSats in recent years offers several new services and opens the door to new technologies that improve existing ones. However, the communication link to Earth is often a bottleneck for data processing, due to the amount of collected data and the limited bandwidth. One way to face this challenge is edge computing, which aims to discard useless data on board and speed up transmission; research has therefore moved towards COTS architectures usable in space, often organized in co-processing setups. This thesis considers AI as the application use case, with two devices in a controller-accelerator configuration. It investigates the performance of co-processing methods such as simple parallel, horizontal partitioning, and vertical partitioning, across a set of tasks and a range of pre-trained models. The experiments cover only the simple parallel and horizontal partitioning modes, comparing latency and accuracy against single-processing runs on each device. Evaluated task by task, image classification benefits most from horizontal partitioning, with a clear accuracy improvement; semantic segmentation likewise shows almost stable accuracy and potentially higher throughput with smaller model input sizes. Object detection, on the other hand, shows a drop in performance, especially accuracy, which might be improved with models developed specifically for the chosen hardware. The project shows that co-processing methods are worth investigating and can improve system outcomes for some of the analyzed tasks, making future work on them interesting.
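
    As a rough illustration of the two modes that were actually benchmarked, the sketch below drives a hypothetical inference call on a controller and an accelerator; run_on_controller, run_on_accelerator, and the split/merge logic are stand-ins assumed for the sketch, not the thesis's implementation.

    ```python
    # Illustrative comparison of two co-processing modes on a
    # controller + accelerator pair.

    from concurrent.futures import ThreadPoolExecutor

    def run_on_controller(x):  return f"ctrl({x})"    # stand-in inference call
    def run_on_accelerator(x): return f"accel({x})"   # stand-in inference call

    def simple_parallel(images):
        """Each device runs the full model on alternating inputs."""
        with ThreadPoolExecutor(max_workers=2) as pool:
            futs = [pool.submit(run_on_controller if i % 2 else run_on_accelerator, img)
                    for i, img in enumerate(images)]
            return [f.result() for f in futs]

    def horizontal_partitioning(image):
        """Each input is split across both devices and the partial results merged."""
        top, bottom = image[: len(image) // 2], image[len(image) // 2 :]
        with ThreadPoolExecutor(max_workers=2) as pool:
            a = pool.submit(run_on_accelerator, top)
            b = pool.submit(run_on_controller, bottom)
            return merge(a.result(), b.result())

    def merge(p1, p2):
        return (p1, p2)  # hypothetical fusion of the partial predictions

    print(simple_parallel(["img0", "img1", "img2"]))
    print(horizontal_partitioning("pixelrows"))
    ```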

    Updating the EU Internal Market Concept

    The study analyses the EU Internal market from a dynamic and contextual perspective, taking into account not just the normative changes brought by the intense legislative and judicial activity in this area, but also the important economic and technological transformations that have largely altered the structure of the global economy over the last two to three decades. These could, in my view, challenge the first principles upon which the EU economic integration process, and in particular the “single market” idea, is based. This “updating” of the Internal market project is essential if one is to reflect critically on the role and specificity of the EU integration process in the context of the broader globalization movement. The first part of the paper introduces the “neo-functionalist” perspective, which has largely influenced the EU economic integration process from its inception, and explores its theoretical linkages with trade theory (the law of one price), thus presenting the fundamental tenets of positive EU Internal market law. The second part delves into the subsequent mutation of the economic integration ideal towards the more modular and scalar concept of “regulatory convergence”. Opening the black box of economic integration leads to an analysis of its transformation, driven by a paradigm shift currently occurring in the organization of the global process of economic production: the development of global value chains and the important role of technology, in particular the Internet, in promoting economic integration not through law but through code. The study predicts that a more systematic treatment of the effects of both private and public obstacles to trade should take centre stage if one opts for a more holistic and dynamic perspective on the process of economic integration. More extensive intervention by competition law and other regulatory initiatives against private restrictions to trade is therefore to be expected, with these areas of law taking a more prominent place in the EU Internal market law compass. The study discusses in some detail the recent legislative and jurisprudential developments regarding geo-blocking and geo-filtering practices. The last part provides some concluding thoughts on the need for the EU Internal market concept to be updated and raises some questions about its ontology in the context of a globalized economy.

    DAPHNE: An Open and Extensible System Infrastructure for Integrated Data Analysis Pipelines

    Integrated data analysis (IDA) pipelines, which combine data management (DM) and query processing, high-performance computing (HPC), and machine learning (ML) training and scoring, are becoming increasingly common in practice. Interestingly, systems from these areas share many compilation and runtime techniques, and the underlying, increasingly heterogeneous, hardware infrastructure is converging as well. Yet the programming paradigms, cluster resource management, data formats and representations, and execution strategies differ substantially. DAPHNE is an open and extensible system infrastructure for such IDA pipelines, including language abstractions, compilation and runtime techniques, multi-level scheduling, hardware (HW) accelerators, and computational storage, aimed at increasing productivity and eliminating unnecessary overheads. In this paper, we make a case for IDA pipelines, describe the overall DAPHNE system architecture and its key components, and present the design of a vectorized execution engine for computational storage, HW accelerators, and local and distributed operations. Preliminary experiments comparing DAPHNE with MonetDB, Pandas, DuckDB, and TensorFlow show promising results.
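
    To make the notion of an IDA pipeline concrete, the following plain-Python sketch (deliberately not DAPHNE's API or DSL) mixes the three ingredients named above in one program: a relational preprocessing step, a vectorized numeric kernel, and ML scoring. The data and model parameters are fabricated for illustration.

    ```python
    # Not DAPHNE's API: a plain illustration of what a single integrated
    # data analysis pipeline combines.

    import numpy as np

    rows = [("a", 1.0, 2.0), ("b", 3.0, 4.0), ("a", 5.0, 6.0)]

    # DM step: filter and project, as a query engine would.
    X = np.array([[f1, f2] for key, f1, f2 in rows if key == "a"])

    # HPC-style kernel: standardize features with vectorized operations.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

    # ML scoring step: apply a (pretrained, here fabricated) linear model.
    w, b = np.array([0.5, -0.25]), 0.1
    print(X @ w + b)
    ```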

    Design of an UAV swarm

    This master thesis gives an overview of the general aspects involved in the design of a UAV swarm. UAV swarms are continuously gaining popularity amongst researchers and UAV manufacturers, since they allow greater task success rates in reduced time. Apart from this, multiple UAVs cooperating with each other open up a new class of missions that can only be carried out this way. The thesis covers the agents involved in the design of a UAV swarm, from the communication protocols between the UAVs to navigation and trajectory analysis and task allocation.
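
    Task allocation, one of the design aspects listed above, can be pictured with a minimal greedy scheme: each task goes to the closest currently free UAV. The positions, distance metric, and greedy rule below are illustrative choices for the sketch, not the thesis's method.

    ```python
    # Illustrative greedy task allocation for a small UAV swarm.

    import math

    def allocate(uavs, tasks):
        """uavs, tasks: dicts of name -> (x, y). Returns task -> uav."""
        free = dict(uavs)
        assignment = {}
        for task, tpos in tasks.items():
            if not free:
                break  # more tasks than UAVs: leave the rest unassigned
            u = min(free, key=lambda name: math.dist(free[name], tpos))
            assignment[task] = u
            del free[u]  # each UAV handles one task in this simple scheme
        return assignment

    print(allocate({"u1": (0, 0), "u2": (5, 5)},
                   {"survey": (1, 1), "relay": (4, 6)}))
    # -> {'survey': 'u1', 'relay': 'u2'}
    ```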

    PickCells: A Physically Reconfigurable Cell-composed Touchscreen

    Touchscreens are the predominant medium for interaction with digital services; however, their current fixed form factor narrows the scope for rich physical interactions by limiting interaction possibilities to a single, planar surface. In this paper we introduce PickCells, a fully reconfigurable device concept composed of cells that breaks the mould of rigid screens and explores a modular system affording rich sets of tangible interactions and novel across-device relationships. Through a series of co-design activities – involving HCI experts and potential end-users of such systems – we synthesised a design space aimed at inspiring future research, giving researchers and designers a framework in which to explore modular screen interactions. The design space we propose unifies existing work on modular touch surfaces under a general framework and broadens horizons by opening up unexplored spaces that provide new interaction possibilities. In this paper, we present the PickCells concept, a design space of modular touch surfaces, and propose a toolkit for quick scenario prototyping.