
    The Clarens Web Service Framework for Distributed Scientific Analysis in Grid Projects

    Large scientific collaborations are moving towards service-oriented architectures for the implementation and deployment of globally distributed systems. Clarens is a high-performance, easy-to-deploy Web Service framework that supports the construction of such globally distributed systems. This paper discusses some of the core functionality of Clarens that the authors believe is important for building distributed systems based on Web Services that support scientific analysis.
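    As a sketch of the pattern such a Web Service client follows, the snippet below issues a remote call over HTTP with Python's standard XML-RPC client. The endpoint URL and method name are hypothetical placeholders, not Clarens's actual API.

```python
import xmlrpc.client

# Hypothetical endpoint and method, for illustration only; consult the
# Clarens documentation for the framework's real service interface.
server = xmlrpc.client.ServerProxy("https://example.org/clarens")
result = server.echo("hello grid")  # remote method invocation over HTTP
print(result)
```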

    Upper-Layer Techniques for Enhancing Cellular Sidelink Performance

    Ph.D. dissertation -- Seoul National University, Department of Electrical and Computer Engineering, August 2020 (advisor: 박세웅). In typical cellular communications, User Equipments (UEs) have always had to go through a Base Station (BS) to communicate with each other; e.g., a UE transmits a packet to the BS via the uplink, and the BS forwards the packet to another UE via the downlink. Although this method can serve UEs efficiently, it can cause latency problems and overload at the BS. Sidelink was therefore introduced in 3GPP Release 12 to overcome these problems: through sidelink, UEs communicate directly with each other. Two representative uses of sidelink are Device-to-Device (D2D) communication and Vehicle-to-Vehicle (V2V) communication. This dissertation considers three strategies to enhance the performance of D2D and V2V communications: (i) an efficient feedback mechanism for D2D communication, (ii) a context-aware congestion control scheme for V2V communication, and (iii) an In-Device Coexistence (IDC)-aware LTE and NR sidelink resource allocation scheme. First, the relevant standard defines no feedback mechanism for D2D communication, because D2D supports only broadcast-type communication. We present a feedback mechanism for D2D communication through which UEs can provide feedback without help from the BS and without additional signalling to allocate feedback resources. On top of this mechanism we propose a rate adaptation algorithm that accounts for the in-band emission problem, and we find that it achieves higher and more stable throughput than the standard-compliant legacy scheme. Second, we propose a context-aware congestion control scheme for LTE-V2V communication. In LTE-V2V, UEs transmit the Cooperative Awareness Message (CAM), a periodic message, and the Decentralized Environmental Notification Message (DENM), an event-driven message that allows one-hop relaying. Because the two messages have different characteristics and generation rules, applying the same congestion control scheme to both is inefficient; we therefore propose a separate congestion control scheme for each message, under which UEs decide whether to transmit according to their situation. Simulation results show that the proposed schemes outperform both the legacy scheme and recent comparison schemes. Finally, we propose an NR sidelink resource allocation scheme based on multi-agent reinforcement learning that is aware of the IDC problem between LTE and NR in the Intelligent Transport System (ITS) band. We first model realistic IDC interference based on the spectrum emission mask specified in the standard, and then formulate resource allocation as a multi-agent reinforcement learning problem with the fingerprint method. Each UE obtains local observations and rewards, and learns a policy that increases its reward by updating its Q-network. Simulation results show that the proposed resource allocation scheme improves Packet Delivery Ratio (PDR) performance compared to the legacy scheme.
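    A minimal sketch of the multi-agent Q-learning loop described above, under a toy model in which each UE picks one of several resource blocks and earns a reward of 1 when no other UE picked the same block. All names, sizes and the collapsed single-state environment are illustrative simplifications, not the dissertation's actual formulation (which uses local observations, a fingerprint method and a Q-network).

```python
import random
from collections import defaultdict

N_UES, N_RB = 4, 6            # toy sizes: 4 UE agents, 6 resource blocks
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# One Q-table per UE; the state is collapsed to a single dummy state
# for brevity.
q_tables = [defaultdict(float) for _ in range(N_UES)]

def choose_action(q, state):
    """Epsilon-greedy selection of a resource block."""
    if random.random() < EPS:
        return random.randrange(N_RB)
    return max(range(N_RB), key=lambda a: q[(state, a)])

for episode in range(5000):
    state = 0  # dummy state
    actions = [choose_action(q, state) for q in q_tables]
    for ue, act in enumerate(actions):
        # Reward 1 if this UE's block collides with no other UE's choice,
        # a crude stand-in for a PDR-driven reward.
        reward = 1.0 if actions.count(act) == 1 else 0.0
        best_next = max(q_tables[ue][(state, a)] for a in range(N_RB))
        q_tables[ue][(state, act)] += ALPHA * (
            reward + GAMMA * best_next - q_tables[ue][(state, act)])
```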

    Optimization of Distributed Architectures for Massive Data Processing

    Thesis by compendium. The use of systems for the efficient treatment of large data volumes has grown in popularity during the last few years. This has led to the development of new technologies, methods and algorithms that make efficient use of the infrastructures. The treatment of Big Data is not exempt from numerous problems and challenges, some of which this thesis attempts to improve. Among the current possibilities, we must take into account the evolution that these systems have undergone in recent years and the opportunities for improvement that exist in each of them. The first system of study, the Grid, constitutes an initial approach to massive distributed processing and represents one of the first distributed systems for the treatment of big data sets. By participating in the modernization of one of the data access mechanisms, the treatments carried out in current genomics are improved. The studies presented centre on the Burrows-Wheeler transform, already known in genomic analysis for its ability to improve alignment times for short polynucleotide chains. This improvement in running time is refined by reducing remote accesses through an intermediate cache that optimizes execution on an already consolidated Grid system. The cache is implemented as a complement to the standard GFAL access library used in the IberGrid infrastructure. In a second step, data processing in Big Data architectures is considered. Improvements are made in both the Lambda and the Kappa architecture by devising methods to process large volumes of multimedia information. In the Lambda architecture, Apache Hadoop is used as the processing technology, while in the Kappa architecture Apache Storm is used as the real-time distributed computing system. In both architectures the scope of use is extended and execution is optimized by applying algorithms that mitigate the problems of each technology. The data volume problem is the focus of a final step, which improves the microservices architecture. The total number of nodes running in a processing system gives an approximation of the scale attainable for the treatment of large volumes, so the ability of the system to grow or shrink enables optimal governance. By proposing a bio-inspired system, a dynamic and distributed self-scaling method is provided that behaves better than commonly used methods under unpredictable, changing circumstances. The three key magnitudes of Big Data, also known as the V's, are represented and improved: velocity, enriching data access by reducing search processing times in bioinformatic Grid systems; variety, using multimedia data, which is less common than tabular data; and volume, increasing self-scaling capabilities by exploiting software containers and bio-inspired algorithms. Herrera Hernández, J. (2020). Optimización de arquitecturas distribuidas para el procesado de datos masivos [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/149374
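    A minimal sketch of the intermediate-cache idea, assuming a generic placeholder for the remote Grid read; the real work extends the GFAL access library, and the function names here are hypothetical.

```python
import functools

def fetch_remote(path: str, offset: int, size: int) -> bytes:
    # Placeholder for the real remote access (in the thesis, via GFAL);
    # simulated here so the sketch is runnable.
    return bytes(size)

@functools.lru_cache(maxsize=1024)
def read_block(path: str, offset: int, size: int) -> bytes:
    """Serve repeated reads of the same block from a local in-memory
    cache instead of returning to remote Grid storage each time."""
    return fetch_remote(path, offset, size)

read_block("lfn:/grid/sample.fastq", 0, 4096)  # remote fetch, then cached
read_block("lfn:/grid/sample.fastq", 0, 4096)  # served from the cache
```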

    Function-as-a-Service Performance Evaluation: A Multivocal Literature Review

    Function-as-a-Service (FaaS) is one form of the serverless cloud computing paradigm and is defined through FaaS platforms (e.g., AWS Lambda) executing event-triggered code snippets (i.e., functions). Many studies that empirically evaluate the performance of such FaaS platforms have started to appear, but we currently lack a comprehensive understanding of the overall domain. To address this gap, we conducted a multivocal literature review (MLR) covering 112 studies from academic (51) and grey (61) literature. We find that existing work mainly studies the AWS Lambda platform and focuses on micro-benchmarks using simple functions to measure CPU speed and FaaS platform overhead (i.e., container cold starts). Further, we discover a mismatch between academic and industrial sources on tested platform configurations, find that function triggers remain insufficiently studied, and identify HTTP API gateways and cloud storage as the most used external service integrations. Following existing guidelines on experimentation in cloud systems, we discover many flaws threatening the reproducibility of experiments presented in the surveyed studies. We conclude with a discussion of gaps in the literature and highlight methodological suggestions that may serve to improve future FaaS performance evaluation studies.
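    A minimal sketch of the micro-benchmarking pattern such studies use, assuming a hypothetical HTTP-triggered function URL; the endpoint and sample size are illustrative, not taken from the review.

```python
import time
import urllib.request

URL = "https://example.com/hello"  # hypothetical FaaS HTTP trigger

def invoke_once() -> float:
    """Time one end-to-end invocation in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

# Back-to-back calls: the first likely pays the cold start (container
# spin-up), later ones are warm; the gap approximates platform overhead.
latencies = [invoke_once() for _ in range(10)]
warm = sorted(latencies[1:])
print(f"first (cold?): {latencies[0]:.1f} ms, "
      f"median warm: {warm[len(warm) // 2]:.1f} ms")
```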

    Hierarchical categorisation of tags for Delicious

    In the scenario of social bookmarking, a user browsing the Web bookmarks web pages and assigns free-text labels (i.e., tags) to them according to their personal preferences. In this technical report, we address one of the practical aspects of representing users' interests from their tagging activity, namely the categorisation of tags into high-level categories of interest. The reason is that representing user profiles on the basis of the myriad of tags available on the Web is infeasible from various practical perspectives: chiefly, the unavailability of data to reliably and accurately measure interests at such a fine granularity and, should the data be available, the overwhelming computational intractability of doing so. Motivated by this, our study presents the results of a categorisation process whereby a collection of tags posted at Delicious (http://delicious.com) are classified into 200 subcategories of interest. (Preprint)
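    A minimal sketch of a tag-to-category mapping of the kind the report describes, using a hand-made keyword dictionary; the categories and keywords below are illustrative, not the report's actual 200 subcategories.

```python
# Illustrative high-level categories and seed keywords.
CATEGORY_KEYWORDS = {
    "programming": {"python", "javascript", "git", "api"},
    "health":      {"fitness", "diet", "medicine"},
    "travel":      {"flights", "hotels", "backpacking"},
}

def categorise(tag: str) -> str:
    """Map a free-text tag to the first category whose keyword set
    contains it; unknown tags fall into 'other'."""
    tag = tag.lower().strip()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if tag in keywords:
            return category
    return "other"

print(categorise("Python"))   # -> programming
print(categorise("kittens"))  # -> other
```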

    National Aeronautics and Space Administration (NASA)/American Society for Engineering Education (ASEE) Summer Faculty Fellowship Program, 1987, volume 1

    The objectives of the NASA/ASEE program were: (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of participants' institutions; and (4) to contribute to the research objectives of the NASA centers. Each faculty fellow spent 10 weeks at Johnson Space Center engaged in a research project commensurate with his/her interests and background, working in collaboration with a NASA/JSC colleague. A compilation is presented of the final reports on the research projects done by the fellows during the summer of 1987. This is volume 1 of a 2-volume report.

    Hierarchical categorisation of web tags for Delicious

    In the scenario of social bookmarking, a user browsing the Web bookmarks web pages and assigns free-text labels (i.e., tags) to them according to their personal preferences. The benefits of social tagging are clear: tags enhance Web content browsing and search. However, since these tags may be publicly available to any Internet user, a privacy attacker may collect this information and extract an accurate snapshot of users' interests or user profiles, containing sensitive information such as health conditions, political preferences, salary or religion. In order to hinder attackers in their efforts to profile users, this report focuses on the practical aspects of capturing user interests from their tagging activity. More concretely, we study how to categorise a collection of tags posted by users of one of the most popular bookmarking services, Delicious (http://delicious.com). (Preprint)
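    A minimal sketch of the profiling step this report is concerned with (and that a privacy attacker might attempt): collapsing a user's publicly visible tags into a coarse interest profile. The tiny tag-to-category mapping is illustrative only.

```python
from collections import Counter

# Tiny stand-in for a tag -> category mapping (illustrative only).
TAG_TO_CATEGORY = {"python": "programming", "git": "programming",
                   "api": "programming", "diet": "health",
                   "fitness": "health"}

def interest_profile(tags: list[str]) -> Counter:
    """Collapse raw tags into counts over high-level categories; run
    over public bookmarks, this yields the interest snapshot the
    report warns about."""
    return Counter(TAG_TO_CATEGORY.get(t.lower(), "other") for t in tags)

print(interest_profile(["Python", "git", "diet", "fitness", "api"]))
# -> Counter({'programming': 3, 'health': 2})
```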

    Multi-criteria Mapping and Scheduling of Workflow Applications on Heterogeneous Platforms

    The results summarized in this thesis deal with the mapping and scheduling of workflow applications on heterogeneous platforms. In this context, we focus on three different types of applications.

    Replica placement in tree networks. In this kind of application, clients issue requests to servers, and the question is where to place replicas in the network so that all requests can be processed. We discuss and compare several policies for placing replicas in tree networks, subject to server capacity, Quality of Service (QoS) and bandwidth constraints. The client requests are known beforehand, while the number and location of the servers have to be determined. The standard approach in the literature is to enforce that all requests of a client be served by the closest server in the tree. We introduce and study two new policies. One major contribution of this work is to assess the impact of these new policies on the total replication cost. Another important goal is to assess the impact of server heterogeneity, both from a theoretical and a practical perspective. We establish several new complexity results and provide several efficient polynomial heuristics for NP-complete instances of the problem.

    Pipeline workflow applications. We consider workflow applications that can be expressed as linear pipeline graphs. An example of this application type is digital image processing, where images are treated in steady-state mode. Several antagonistic criteria should be optimized, such as throughput and latency (or a combination of both), as well as latency and reliability (i.e., the probability that the computation will be successful) of the application. While simple polynomial algorithms can be found for fully homogeneous platforms, the problem becomes NP-hard on heterogeneous platforms. We present an integer linear programming formulation for this latter problem. Furthermore, we provide several efficient polynomial bi-criteria heuristics, whose relative performance is evaluated through extensive simulation. As a case study, we provide simulations and MPI experimental results for the JPEG encoder pipeline on a cluster of workstations.

    Complex streaming applications. We consider the execution of applications structured as trees of operators, i.e., the application of one or several operator trees, in steady state, to multiple data objects that are continuously updated at various locations in a network. A first goal is to provide the user with a set of processors that should be bought or rented to ensure that the application achieves a minimum steady-state throughput, with the objective of minimizing platform cost. We then extend our model to multiple applications: several concurrent applications are executed at the same time in a network, and one has to ensure that all applications can reach their required throughput. Another contribution of this work is to provide complexity results for different instances of the basic problem, as well as integer linear program formulations of various problem instances. The third contribution is the design of several polynomial-time heuristics for both application models. One of the primary objectives of the heuristics for concurrent applications is to reuse intermediate results shared by multiple applications.
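    A minimal sketch of the throughput/latency trade-off for a linear pipeline mapped onto heterogeneous processors, as discussed above; the stage weights, processor speeds and mapping are made up for illustration.

```python
# Stage computation weights and (heterogeneous) processor speeds;
# mapping[i] gives the processor assigned to pipeline stage i.
weights = [4.0, 2.0, 6.0, 3.0]   # work per data set, per stage
speeds  = [1.0, 2.0, 1.5]        # relative processor speeds
mapping = [0, 1, 2, 1]           # stage -> processor (illustrative)

# Per-stage execution times on the assigned processors.
times = [w / speeds[p] for w, p in zip(weights, mapping)]

# In steady state the most loaded processor paces the pipeline: the
# period is the maximum total load on any processor, and throughput
# is the inverse of the period (communication costs ignored here).
load = {}
for t, p in zip(times, mapping):
    load[p] = load.get(p, 0.0) + t
period = max(load.values())

# Latency is the time one data set takes to traverse all stages.
latency = sum(times)

print(f"period={period:.2f}, throughput={1 / period:.2f}, latency={latency:.2f}")
```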