
    Virtual Timing Isolation Safety-Net for Multicore Processors

    Multicore processors promise the performance as well as the reduced space, weight, and power consumption needed by future aircraft. However, commercial off-the-shelf multicore processors suffer from timing interference between cores, which complicates applying them in hard real-time systems such as avionic applications. In this thesis, a safety-net system is proposed which enables a virtual timing isolation of applications running on one core from all other cores. The technique is based on hardware external to the multicore processor and is completely transparent to the applications, i.e. no modification of the observed software is necessary. The basic idea is to apply a worst-case execution time analysis based on single-core execution and to accept a predefined slowdown during multicore execution. If the slowdown exceeds the acceptable bounds, interference is reduced by controlling the behavior of low-critical cores to keep the main application’s progress inside the given bounds. The progress of the applications running on the main core is measured by tracking the application’s fingerprint. A fingerprint is created by extracting the performance counters of the critical core in very small timesteps, which results in a characteristic curve for every execution of a periodic program. In standalone mode, without any applications running on the other cores, a model of an application is created by clustering and combining the extracted curves. During runtime, the extracted performance counter values are compared to the model to determine the progress of the critical application. If the progress of an application is unacceptably delayed, the cores creating the interference are throttled. The interfering cores are identified by the accesses of the respective cores to the shared resources. A controller that takes the progress of a critical application as well as the time until the final deadline into account throttles the low-priority cores.
Throttling is performed either by frequency scaling of the interfering cores or by a halt-and-continue pulse width modulation scheme. The complete safety-net system was evaluated on a TACLeBench benchmark running on an NXP P4080 multicore processor, observed by a Xilinx FPGA implementing a MicroBlaze soft-core microcontroller. The results show that progress can be measured by the fingerprinting with a final deviation of less than 1% for a TACLeBench execution with running opponent cores, and they indicate the non-intrusiveness of the approach. Several experiments are conducted to demonstrate the effectiveness of the different throttling mechanisms. Evaluations using a real-world avionic application show that the approach can be applied to integrated modular avionics applications. The safety-net does not ensure robust partitioning in the conventional sense: the applications on the different cores can influence each other in the timing domain, but the external safety-net ensures that the interference on the highly critical application is low enough for it to meet its timing. This allows for an efficient utilization of the multicore processor. Every critical application is treated individually, and by relying on individual models recorded in standalone mode, the critical as well as the non-critical applications running on the other cores can be exchanged without recreating a fingerprint model. This eases the porting of legacy applications to the multicore processor and allows the exchange of applications without recertification.
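The fingerprint-matching idea described above can be sketched in a few lines. This is an illustrative Python sketch, not the thesis implementation: the reference curve, the matching metric (sum of squared differences), and all function names are assumptions made for the example.

```python
# Illustrative sketch: estimate the progress of a critical application by
# matching a window of sampled performance-counter values against a
# reference fingerprint recorded in standalone mode.

def estimate_progress(fingerprint, window):
    """Return the index in `fingerprint` whose surrounding segment best
    matches `window` (sum of squared differences), i.e. the estimated
    progress point of the running application."""
    n, m = len(fingerprint), len(window)
    best_idx, best_err = 0, float("inf")
    for i in range(n - m + 1):
        err = sum((fingerprint[i + j] - window[j]) ** 2 for j in range(m))
        if err < best_err:
            best_idx, best_err = i + m - 1, err
    return best_idx

def slowdown(expected_idx, estimated_idx):
    """Positive values mean the application lags behind its standalone run."""
    return expected_idx - estimated_idx

# Standalone fingerprint (e.g. instructions retired per timestep) and a
# window observed at runtime while opponent cores create interference.
reference = [10, 12, 15, 20, 26, 30, 31, 33, 40, 45]
observed = [20, 26, 30]        # best match ends at reference index 5

progress = estimate_progress(reference, observed)
# At wall-clock step 7 the application has only reached fingerprint index 5,
# so it lags two timesteps; throttle opponent cores if this exceeds the bound.
lag = slowdown(expected_idx=7, estimated_idx=progress)
```

A real deployment would sample the counters in hardware and restrict the search to a window around the last known position, but the matching principle is the same.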

    Virtual Timing Isolation for Mixed-Criticality Systems

    Commercial off-the-shelf multicore processors suffer from timing interference between cores, which complicates applying them in hard real-time systems such as avionic applications. This paper proposes a virtual timing isolation of one main application running on one core from all other cores. The proposed technique is based on hardware external to the multicore processor and is completely transparent to the main application, i.e., no modifications of the software, including the operating system, are necessary. The basic idea is to apply a worst-case execution time analysis based on single-core execution and to accept a predefined slowdown during multicore execution. If the slowdown exceeds the acceptable bounds, interference is reduced by controlling the behavior of low-critical cores to keep the main application's progress inside the given bounds. Apart from the main goal of isolating the timing of the critical application, a subgoal is to use the other cores efficiently. For that purpose, three different mechanisms for controlling the non-critical cores are compared regarding efficient usage of the complete processor. The progress of the main application is measured by tracking the application's Fingerprint, which quantifies online any slowdown of execution compared to a given baseline (single-core execution). Several countermeasures to compensate unacceptable slowdowns are proposed and evaluated in this paper, together with an accuracy evaluation of the Fingerprinting. Our evaluations using the TACLeBench benchmark suite show that we can meet a given acceptable timing bound of 4 percent slowdown with a resulting real slowdown of only 3.27 percent in the case of pulse-width-modulated control and of 4.44 percent in the case of frequency-scaling control.

    Computed Tomography of Snow and Ice

    X-ray microfocus computed tomography (µCT) is a non-destructive imaging technique for visualizing and quantifying three-dimensional structures at the submillimeter scale. This talk demonstrates the potential of µCT through its application to the analysis of snow and ice cores in polar research. Polar ice cores are regarded as a climate archive of very high temporal resolution in which a multitude of environmental parameters are stored. Polar ice contains trapped air bubbles and is thus the world's only archive of atmospheric air, providing information on the greenhouse gas concentrations of past times. Signal formation and air enclosure take place during the metamorphism and densification of snow into ice, so the stored climate signals can only be interpreted if this material transformation is known and understood. To this end, the Alfred Wegener Institute (AWI), in cooperation with the Fraunhofer Development Center for X-ray Technology (EZRT), developed a computed tomography scanner that allows meter-long ice cores to be examined under ice-laboratory conditions. In addition to presenting specific results from the analysis of Antarctic and Greenland ice cores, limitations and perspectives of µCT from the viewpoint of geophysical application are highlighted.

    New Enzymes for an Old Organelle: Cryptic Peroxisomal Localization Signals

    Peroxisomes are nearly ubiquitous eukaryotic organelles involved in the degradation of fatty acids and the detoxification of the hydrogen peroxide generated in the process. Beyond this general function, peroxisomes harbor additional metabolic pathways, including parts of the glyoxylate cycle in plants and fungi and pathways for the formation of secondary metabolites. A special form of peroxisome is the glycosome, identified in trypanosomes, which contains a large fraction of the glycolytic enzymes. Peroxisomal matrix proteins carry either carboxy-terminal or amino-terminal PTS ("peroxisomal targeting signal") motifs (C-terminal: PTS1; N-terminal: PTS2). These are recognized by cytoplasmic receptors that translocate folded, and even complex-bound, proteins into the peroxisomes. In the course of this work, cryptic PTS1 motifs were identified in a number of enzymes of glycolysis and gluconeogenesis in the plant-pathogenic basidiomycete Ustilago maydis. Peroxisomal isoforms of these enzymes arise through alternative splicing or through stop-codon read-through during translation. A bioinformatic analysis showed that isoforms of glycolytic enzymes carrying a PTS1 motif are produced in a multitude of fungi, with the mechanisms generating these isoforms varying between species. In addition, unusual PTS1 motifs deviating from the previously accepted PTS1 consensus were found in several glycolytic enzymes; these can likewise cause a dual localization of the enzymes in peroxisomes and the cytoplasm. Closer characterization of the peroxisomes in U. maydis revealed that these organelles are required not only for the β-oxidation of fatty acids but also have a function in sugar metabolism and in the biotrophic interaction with the host plant maize. Furthermore, it was shown that peroxisomes in U. maydis participate in the synthesis of an extracellular glycolipid. The results of this work suggest that peroxisomes in fungi are characterized by a greater metabolic diversity than previously assumed. The identification of cryptic PTS1 motifs in glycolytic enzymes suggests that peroxisomes in other organisms may also contain further unexpected proteins.
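The motif search underlying such an analysis can be sketched with a simple regular expression. The classical C-terminal PTS1 consensus [SAC][KRH][LM] is used here for illustration; as the work shows, functional motifs can deviate from this consensus, and both example sequences below are invented.

```python
import re

# Hedged sketch: scan protein sequences for a canonical C-terminal PTS1
# tripeptide (e.g. -SKL). A real analysis would use a broader pattern or a
# dedicated predictor, since cryptic motifs deviate from this consensus.
PTS1_CANONICAL = re.compile(r"[SAC][KRH][LM]$")

def has_canonical_pts1(seq):
    """True if the sequence ends in a canonical PTS1 tripeptide."""
    return bool(PTS1_CANONICAL.search(seq))

# The splicing/readthrough mechanisms described above extend a cytosolic
# isoform by a few residues; appending such an extension can reveal a
# cryptic PTS1 (sequences are made up for illustration).
cytosolic = "MTEYKLVVVGAGGVGK"
extended = cytosolic + "SKL"   # hypothetical C-terminal extension
```

Running `has_canonical_pts1` on the two isoforms shows how a read-through product can gain a targeting signal that the regular translation product lacks.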

    Marine Ice: A sleeping iron giant in the Southern Ocean?

    The Polar Southern Ocean (PSO) provides an excess of macro-nutrients, but productivity is largely limited by the availability of essential micro-nutrients, namely iron, manganese, zinc and others. Seasonal patches of increased productivity off major ice shelves around Antarctica suggest that local sources of these deficient micro-nutrients must be present. With this session contribution we present a new study on marine ice from the Filchner-Ronne Ice Shelf (FRIS) as a potential source of iron and other limiting micro-nutrients for the Atlantic sector of the PSO. Marine ice is formed via partial melting of meteoric shelf ice near the grounding line of large ice shelves (e.g. FRIS). During this process, small refrozen ice platelets accumulate in a layer of over 100 m thickness underneath the ice shelf to form marine ice containing high amounts of particulate material. In a project funded by the German Research Foundation (DFG) within the priority program SPP1158, we analyse two marine ice cores (B13: 62 m, B15: 167 m of marine ice) recovered in the 1990s from the FRIS for their geochemical composition. The coring location of B13 was about 40 km away from the shelf ice edge, and B15 was drilled another 136 km further inland along the reconstructed flow line of B13. Due to shelf ice migration over the last 30 years, their locations have shifted about 30 km towards the shelf ice edge. First results show dissolved Fe (dFe) and Mn (dMn) concentrations ranging between 30 and 300 nMol and particulate Fe (pFe) of 20 to 120 µMol (0.2 to 1.4 µMol for pMn). These concentrations are orders of magnitude higher than those currently found in the PSO for these elements. Basal melting and iceberg calving of marine ice, with the accompanying release of these essential trace metals, could therefore fuel local productivity in regions with a large extent of shelf ice. With our study we aim to evaluate marine ice as a potentially overlooked source of limiting micro-nutrients that could explain high-productivity areas within an otherwise relatively low-productivity PSO.
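The "orders of magnitude" comparison can be made concrete with a back-of-the-envelope calculation. The ambient open-ocean dFe value (about 0.2 nMol) is an assumed, typical literature figure, not from this study; the marine-ice range is the one reported above.

```python
import math

# Back-of-the-envelope sketch of the enrichment claim.
ambient_dfe_nmol = 0.2            # ASSUMED typical open-PSO dissolved Fe
marine_ice_dfe_nmol = (30, 300)   # range reported for the marine ice cores

# Enrichment factors and their order of magnitude.
enrichment = tuple(c / ambient_dfe_nmol for c in marine_ice_dfe_nmol)
orders = tuple(math.floor(math.log10(e)) for e in enrichment)
# 30/0.2 = 150x and 300/0.2 = 1500x, i.e. two to three orders of magnitude.
```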

    Representative surface snow density on the East Antarctic Plateau

    Surface mass balances of polar ice sheets are essential to estimate the contribution of ice sheets to sea level rise. Uncertain snow and firn densities lead to significant uncertainties in surface mass balances, especially in the interior regions of the ice sheets, such as the East Antarctic Plateau (EAP). Robust field measurements of surface snow density are sparse and challenging due to local noise. Here, we present a snow density dataset from an overland traverse in austral summer 2016/17 on the Dronning Maud Land plateau. The sampling strategy using 1 m carbon fiber tubes covered various spatial scales, as well as a high-resolution study in a trench at 79° S, 30° E. The 1 m snow density has been derived volumetrically, and vertical snow profiles have been measured using a core-scale microfocus X-ray computer tomograph. With an error of less than 2 %, our method provides higher precision than other sampling devices of smaller volume. With four spatially independent snow profiles per location, we reduce the local noise and derive a representative 1 m snow density with an error of the mean of less than 1.5 %. Assessing sampling methods used in previous studies, we find the highest horizontal variability in density in the upper 0.3 m and therefore recommend the 1 m snow density as a robust measure of surface snow density in future studies. The average 1 m snow density across the EAP is 355 kg m−3, which we identify as representative surface snow density between Kohnen Station and Dome Fuji. We cannot detect a temporal trend caused by the temperature increase over the last 2 decades. A difference of more than 10 % to the density of 320 kg m−3 suggested by a semiempirical firn model for the same region indicates the necessity for further calibration of surface snow density parameterizations. Our data provide a solid baseline for tuning the surface snow density parameterizations for regions with low accumulation and low temperatures like the EAP.
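The representative density and its error of the mean follow from standard statistics over the spatially independent profiles. A minimal sketch, with four invented profile densities standing in for real measurements:

```python
from statistics import mean, stdev

def error_of_mean_pct(samples):
    """Standard error of the mean, as a percentage of the mean."""
    sem = stdev(samples) / len(samples) ** 0.5
    return 100.0 * sem / mean(samples)

# Four hypothetical 1 m snow densities (kg m^-3) from independent profiles
# at one site; real values would come from the volumetric tube samples.
profiles_kg_m3 = [350.0, 358.0, 352.0, 360.0]
representative = mean(profiles_kg_m3)   # representative 1 m density
```

With four profiles, the error of the mean is half the single-profile scatter, which is how the study pushes the site-level uncertainty below 1.5 %.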

    Spatial Distribution of Crusts in Antarctic and Greenland Snowpacks and Implications for Snow and Firn Studies

    The occurrence of snowpack features has been used in the past to classify environmental regimes on the polar ice sheets. Among these features are thin crusts with high density, which contribute to firn stratigraphy and can have a significant impact on firn ventilation as well as on remotely inferred properties like accumulation rate or surface mass balance. The importance of crusts in polar snowpack has been acknowledged, but nonetheless little is known about their large-scale distribution. From snow profiles measured by means of microfocus X-ray computer tomography, we created a unique dataset showing the spatial distribution of crusts in snow on the East Antarctic Plateau as well as in northern Greenland, including a measure of their local variability. With this method, we are also able to detect weak and oblique crusts, count their frequency of occurrence, and measure their density at high resolution. Crusts are local features with a small spatial extent in the range of tens of meters. From several profiles per sampling site, we are able to show a decreasing number of crusts in surface snow along a traverse on the East Antarctic Plateau. Combining samples from Antarctica and Greenland with a wide range of annual accumulation rates, we find a positive correlation (R2 = 0.89) between the logarithmic accumulation rate and crusts per annual layer in surface snow. By counting crusts in two Antarctic firn cores, we can show the preservation of crusts with depth and discuss their temporal variability as well as their sensitivity to accumulation rate. In local applications, we test the robustness of crusts as a seasonal proxy in comparison to chemical records like impurities or stable water isotopes. While in regions with high accumulation rates the occurrence of crusts shows signs of seasonality, in low-accumulation areas dating of the snowpack should be done using a combination of volumetric and stratigraphic elements. Our data can bring new insights to the study of firn permeability, the improvement of remote-sensing signals, and the development of new proxies in snow and firn core research.
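The reported log-linear relation can be illustrated with an ordinary least-squares fit of crusts per annual layer against the logarithm of accumulation rate. The data points below are invented; only the functional form follows the text (the real dataset yields R2 = 0.89).

```python
import math

def fit_loglinear(acc_rates, crusts):
    """OLS fit of crusts = a + b * ln(accumulation); returns (a, b, r2)."""
    xs = [math.log(x) for x in acc_rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(crusts) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, crusts))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, crusts))
    ss_tot = sum((y - my) ** 2 for y in crusts)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical sites: accumulation rate (mm w.e. per year) vs. crusts
# counted per annual layer in surface snow.
acc = [25, 50, 100, 200, 400]
cr = [1.0, 2.1, 2.9, 4.2, 5.0]
a, b, r2 = fit_loglinear(acc, cr)   # b > 0: more crusts at higher accumulation
```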

    Contention-Aware Dynamic Memory Bandwidth Isolation with Predictability in COTS Multicores: An Avionics Case Study

    Airbus is investigating COTS multicore platforms for safety-critical avionics applications, pursuing helicopter-style autonomous and electric aircraft. These aircraft need to be ultra-lightweight for future mobility in the urban city landscape. As a step towards certification, Airbus identified the need for new methods that preserve the ARINC 653 single-core schedule of a Helicopter Terrain Awareness and Warning System (HTAWS) application while scheduling additional safety-critical partitions on the other cores. As some partitions in the HTAWS application are memory-intensive, static memory bandwidth throttling may slow down such partitions or leave only little remaining bandwidth to the other cores. Thus, there is a need for dynamic memory bandwidth isolation. This poses new challenges for scheduling, as execution times and scheduling become interdependent: scheduling requires execution times as input, which depend on memory latencies and contention from memory accesses of other cores - which are in turn determined by scheduling. Furthermore, execution times depend on memory access patterns. In this paper, we propose a method to solve this problem for slot-based time-triggered systems, without requiring application source-code modifications, using a number of dynamic memory bandwidth levels. It is NoC- and DRAM-controller contention-aware and based on the existing interference-sensitive WCET computation and the memory bandwidth throttling mechanism. It constructs schedule tables by assigning partitions and dynamic memory bandwidth to each slot on each core, considering worst-case memory access patterns. Then at runtime, two servers - for processing time and memory bandwidth - run on each core, jointly controlling the contention between the cores and the amount of memory accesses per slot. As a proof of concept, we use a constraint solver to construct the tables. Experiments on the P4080 COTS multicore platform, using a research OS from Airbus and EEMBC benchmarks, demonstrate that our proposed method preserves existing schedules on a core while scheduling additional safety-critical partitions on other cores, and meets dynamic memory bandwidth isolation requirements.
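The runtime part of the scheme, a per-slot memory-bandwidth budget enforced by a server on each core, can be sketched as follows. Class and method names, the polling interface, and the budget-table layout are illustrative assumptions, not the paper's data structures.

```python
# Hedged sketch: in each time slot, a memory-bandwidth server compares the
# core's memory-access counter against the budget the schedule table assigns
# to that slot, and throttles the core once the budget is exhausted.

class BandwidthServer:
    def __init__(self, slot_budgets):
        self.slot_budgets = slot_budgets   # accesses allowed per slot
        self.used = 0
        self.slot = 0
        self.throttled = False

    def start_slot(self, slot):
        """Called by the time-triggered scheduler at each slot boundary;
        resets the budget accounting for the new slot."""
        self.slot = slot
        self.used = 0
        self.throttled = False

    def account(self, accesses):
        """Called periodically with the memory accesses observed since the
        last poll; returns True while the core may keep running."""
        self.used += accesses
        if self.used >= self.slot_budgets[self.slot]:
            self.throttled = True   # e.g. halt the core until the next slot
        return not self.throttled

# Hypothetical table: slot 0 grants 1000 accesses, slot 1 only 500.
server = BandwidthServer([1000, 500])
server.start_slot(1)
server.account(400)   # within budget, core keeps running
server.account(200)   # budget exceeded, core is throttled
```

Pairing this with a processing-time server per core gives the joint control of contention and per-slot access volume described above.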