reposiTUm (TU Wien)
193390 research outputs found
A Lyapunov-Based Accelerated Proximal Method – Theory and Comparison with FISTA
\section{Motivation and Scope}
Many optimization problems in machine learning, signal processing, and computational statistics are \emph{composite} in nature:
\begin{equation}\label{eq:composite_intro}
\min_{x\in\mathbb{R}^n} \; F(x) \;=\; f(x) + h(x)\,,
\end{equation}
where $f$ is smooth (differentiable with $L$-Lipschitz gradient) and $h$ is proper, closed, and often non-smooth but \emph{proximable}. Classic examples include $\ell_1$ penalties for sparsity, constraints encoded as indicator functions, and robust losses coupled with structured regularization. For such problems, first-order methods dominate practice: they scale, exploit problem structure, and admit sharp iteration-complexity guarantees. Two algorithmic ideas underpin state-of-the-art performance on \eqref{eq:composite_intro}: (i) \emph{proximal splitting}, which decouples $f$ and $h$ by a gradient step on $f$ followed by a proximity step on $h$ (ISTA/proximal gradient); and (ii) \emph{acceleration}, which injects extrapolation (momentum) to achieve the optimal $O(1/k^2)$ rate in the smooth convex regime (Nesterov acceleration; FISTA in the composite case).

In modern applications, the regularizer is sometimes only \emph{weakly convex} (e.g., SCAD), which invalidates the standard convex proximal framework unless one \emph{convexifies} part of the composite model. Moreover, empirical performance often hinges on how well the discrete method mirrors an underlying \emph{continuous-time} energy decay. This thesis develops a principled and unified treatment of these themes: we give complete, self-contained Lyapunov proofs for GD, NAG, ISTA, and FISTA; we derive and analyze a new accelerated scheme, \emph{SQ2FISTA}, proposed by Ushiyama, from a carefully designed time-varying inertial ODE via a \emph{weak discrete gradient} (wDG) discretization, and prove its $O(1/k^2)$ and linear rates; we extend FISTA to weakly convex proximals through a \emph{convexified} variant, denoted \emph{FISTA($\lambda$)}, leveraging a prox-shift identity that restores convexity and stability; and we provide PyTorch-style code listings that exactly implement the analyzed algorithms with \emph{safety clamps} for SCAD. A central empirical finding is that \emph{SQ2FISTA} strictly improves over \emph{standard} FISTA and closely matches (often ties) the performance of \emph{FISTA($\lambda$)} on realistic synthetic benchmarks, validating the design principle: when $h$ is weakly convex, either adopt an ODE-inspired accelerated discretization (SQ2FISTA) or convexify the proximal part (FISTA($\lambda$)); both are principled and high-performing, while \emph{plain} FISTA is typically suboptimal.

\section{Problem Class, Assumptions, and Notation}
We consider \eqref{eq:composite_intro} with the following standing assumptions unless stated otherwise. The smooth part $f$ is differentiable and $L$-smooth: $\|\nabla f(x)-\nabla f(y)\|\le L\|x-y\|$ for all $x,y$. In the strongly convex regime, $f$ is $\mu$-strongly convex. The regularizer $h$ is proper, closed, and \emph{proximable}, with proximity operator
\begin{equation*}
\operatorname{prox}_{\eta h}(x) \;:=\; \operatorname*{arg\,min}_{u\in\mathbb{R}^n}\Big\{\, h(u) + \tfrac{1}{2\eta}\|u-x\|^2 \Big\}\,.
\end{equation*}
When $h$ is \emph{weakly convex} (e.g.\ SCAD), there exists $\rho>0$ such that $h + \tfrac{\rho}{2}\|\cdot\|^2$ is convex; equivalently, $h$ has curvature bounded below by $-\rho$. The \emph{total} strong convexity we use throughout is $\mu_{\text{tot}} := \mu - \rho$. We denote by $x^\star$ a global minimizer of $F$, and write $F^\star := F(x^\star)$.
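To fix ideas, the proximal-splitting and acceleration templates above can be sketched as follows (a minimal illustrative NumPy sketch, not one of the thesis's PyTorch listings; the oracles \texttt{grad\_f} and \texttt{prox\_h(v, eta)} for $\nabla f$ and $\operatorname{prox}_{\eta h}$, and the step size $\eta = 1/L$, are assumed conventions):
\begin{verbatim}
import numpy as np

def fista(grad_f, prox_h, x0, L, iters=500):
    """Accelerated proximal gradient (FISTA) for F = f + h."""
    eta = 1.0 / L                  # step size from the L-smoothness bound
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        # prox-gradient step at the extrapolated point y
        x_next = prox_h(y - eta * grad_f(y), eta)
        # FISTA weight recurrence and momentum extrapolation
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x
\end{verbatim}
Setting the extrapolation coefficient to zero (i.e., $y_k = x_k$ throughout) recovers plain ISTA.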
A standard residual is the \emph{prox-gradient mapping} at stepsize $\eta$:
\begin{equation}\label{eq:PG-map}
G_\eta(x)\;:=\;\frac{1}{\eta}\Big(x - \operatorname{prox}_{\eta h}\big(x - \eta\,\nabla f(x)\big)\Big),\qquad \|G_\eta(x)\|=0 \;\Leftrightarrow\; 0\in \nabla f(x)+\partial h(x)\,.
\end{equation}

\paragraph{Convexification by prox shift.}
When $h$ is weakly convex with curvature $\rho$, the shifted regularizer $h_\lambda := h + \tfrac{\lambda}{2}\|\cdot\|^2$ is convex for any $\lambda\ge\rho$, and its proximity operator is available from that of $h$ via
\begin{equation}\label{eq:prox-shift}
\operatorname{prox}_{\eta h_\lambda}(x) \;=\; \operatorname{prox}_{\frac{\eta}{1+\eta\lambda}\,h}\!\Big(\frac{x}{1+\eta\lambda}\Big)\,.
\end{equation}

\begin{table}[t]\centering
\caption{Canonical worst-case iteration complexity (deterministic, exact gradients). Strongly convex entries give the per-iteration error contraction factor.}
\label{tab:rates}
\begin{tabular}{lcc}
\hline
Method & Convex case & Strongly convex case ($\mu_{\text{tot}}>0$) \\
\hline
GD & $O(1/k)$ & $(1-\mu/L)^k$ \\
ISTA & $O(1/k)$ & $(1-\mu/L)^k$ \\
NAG & $O(1/k^2)$ & $(1-\sqrt{\mu/L})^k$ \\
FISTA & $O(1/k^2)$ & $(1-\sqrt{\mu/L})^k$ \\
FISTA($\lambda$) & $O(1/k^2)$ & $(1-\sqrt{\mu_{\text{tot}}/L_{\text{eff}}})^k$ \\
SQ2FISTA & $O(1/k^2)$ & $(1-\sqrt{2\mu_{\text{tot}}/L_{\text{eff}}})^k$ \\
\hline
\end{tabular}
\end{table}

\section{Contributions}
\paragraph{C1. Unified Lyapunov proofs for GD, NAG, ISTA, and FISTA.}
We present complete, concise proofs of the classical rates using a common Lyapunov perspective. For GD, a descent lemma combined with a strong-convexity sandwich yields $O(1/k)$ (convex) and $(1-\mu/L)^k$ (strongly convex). For NAG, a potential with quadratically growing weights establishes $O(1/k^2)$ (convex) and the factor $(1-\sqrt{\mu/L})^k$ (strongly convex). For ISTA/FISTA, three-point inequalities and an estimate-sequence potential yield $O(1/k)$ and $O(1/k^2)$, respectively; the strongly convex FISTA variant attains the accelerated linear factor as above.

\paragraph{C2. SQ2FISTA via an inertial ODE and weak discrete gradients.}
We design a time-varying inertial flow with \emph{hyperbolic} damping and construct a discrete scheme using a weak discrete gradient (wDG) identity so that a discrete energy provably decreases. Solving a scalar recurrence for the weight sequence yields an optimal schedule, leading to $O(1/k^2)$ in the convex case and the linear factor $(1-\sqrt{2\mu_{\text{tot}}/L_{\text{eff}}})^k$ in the strongly convex case with $\mu_{\text{tot}}>0$. The proof mirrors the continuous Lyapunov decay and is fully discrete, requiring only smoothness, (total) strong convexity when present, and a proximal step.

\paragraph{C3. Convexified FISTA for weakly convex proximals.}
We introduce FISTA($\lambda$) by convexifying the proximal part via \eqref{eq:prox-shift} with $\lambda\ge\rho$. This preserves the standard FISTA structure but replaces gradients of $f$ by those of $f_\lambda := f - \tfrac{\lambda}{2}\|\cdot\|^2$ (hence $\nabla f_\lambda(x)=\nabla f(x)-\lambda x$ and the effective smoothness constant $L_{\text{eff}} = L+\lambda$). We prove the same $O(1/k^2)$ rate in the convex case and a linear rate governed by $\mu_{\text{tot}}/L_{\text{eff}}$ in the strongly convex case. For SCAD we implement \emph{safety clamps} to keep the closed-form prox well-defined.

\paragraph{C4. Pseudocode.}
We provide pseudocode for all optimizers (GD, NAG, ISTA/FISTA, FISTA($\lambda$), SQ2FISTA) and proximals (including SCAD with safety clamps), as well as three model families: \emph{Smoothed Hinge SVM}, \emph{Saturated Nonlinear Regression (tanh link)}, and \emph{Ill-conditioned quadratic}. The listings follow a uniform interface, expose the exact bounds used by the theory, and record both objective gaps and prox-gradient norms.

\section{Expectations}
\paragraph{Validity.}
All analyses are deterministic and assume exact (or high-precision) gradients, exact prox steps (with safe closed forms for SCAD), and step sizes based on global bounds (or their safe overestimates). The linear-rate guarantees use the \emph{total} strong convexity $\mu_{\text{tot}}$ when present.
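As an illustration of the safety clamps and of the prox shift \eqref{eq:prox-shift} (a minimal NumPy sketch, not the thesis's own listing; it assumes the standard three-region SCAD closed form with threshold \texttt{tau} and shape $a>2$, which is valid for step sizes $\eta < a-1$):
\begin{verbatim}
import numpy as np

def prox_scad(x, eta, tau, a=3.7, eps=1e-8):
    """Closed-form prox of the SCAD penalty (threshold tau, shape a > 2).
    The region-2 denominator a - 1 - eta is clamped away from zero
    (the 'safety clamp'); in exact arithmetic this requires eta < a - 1."""
    s, ax = np.sign(x), np.abs(x)
    denom = max(a - 1.0 - eta, eps)           # safety clamp
    out = x.copy()                            # region 3: |x| > a*tau, identity
    r1 = ax <= tau * (1.0 + eta)              # region 1: soft thresholding
    r2 = (~r1) & (ax <= a * tau)              # region 2: linear shrinkage
    out[r1] = s[r1] * np.maximum(ax[r1] - eta * tau, 0.0)
    out[r2] = ((a - 1.0) * x[r2] - s[r2] * eta * a * tau) / denom
    return out

def prox_shifted(prox_h, x, eta, lam):
    """Prox of the convexified h + (lam/2)||.||^2 via the prox-shift identity."""
    c = 1.0 + eta * lam
    return prox_h(x / c, eta / c)
\end{verbatim}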
\paragraph{Limitations.}
We do not analyze stochastic gradients, line searches, or adaptive step sizes. With weakly convex $h$, the \emph{original} nonconvex composite need not be globally convex; our guarantees either (i) target the convexified problem (FISTA($\lambda$)) or (ii) control a discrete Lyapunov function for SQ2FISTA that ensures objective decrease and stationarity, but not necessarily avoidance of all nonconvex stationary points beyond our constructed setting. SCAD's region-2 proximal formula requires a positive denominator; we enforce this via \emph{safety clamps}, which are benign in practice but slightly perturbative.

\paragraph{Takeaway.}
In regimes where $h$ is weakly convex (a common practical case), \emph{standard} FISTA is not the right baseline. Either convexify (FISTA($\lambda$)) or discretize a well-chosen inertial flow (SQ2FISTA). Both are principled; empirically they are neck-and-neck, with SQ2FISTA offering a physics-consistent derivation and FISTA($\lambda$) offering a minimal patch over a standard workhorse.

\section{Reading Guide and Chapter Outline}
Chapter~2 revisits gradient descent, its Lyapunov analysis, and continuous gradient flow, providing full proofs in both convex and strongly convex regimes. Chapter~3 develops Nesterov's Accelerated Gradient (NAG), including discrete proofs of the $O(1/k^2)$ and linear rates, alongside their ODE counterparts. Chapter~4 introduces proximal operators, ISTA, FISTA, and a strongly convex accelerated variant; proofs rely on three-point inequalities and estimate sequences. Chapter~5 bridges continuous-time inertial dynamics and discrete methods via \emph{weak discrete gradients} (wDG), yielding a unifying template for Lyapunov-based design, and derives \emph{SQ2FISTA} from a hyperbolically damped inertial ODE; we prove energy decrease, derive the optimal weight recurrence, and establish convergence rates. Chapter~6 reports an empirical study on three model families (\emph{Smoothed Hinge SVM}, \emph{Saturated Nonlinear Regression (tanh link)}, and \emph{Ill-conditioned quadratic}) with SCAD, comparing \emph{plain} FISTA, \emph{FISTA($\lambda$)}, and \emph{SQ2FISTA}. Chapter~7 consolidates conclusions, offers nuanced comparisons between \emph{standard} and \emph{convexified} FISTA and SQ2FISTA, and discusses limitations.

\begin{table}[t]\centering
\caption{Main symbols and conventions.}
\label{tab:notation}
\begin{tabular}{ll}
\hline
Symbol & Meaning \\
\hline
$x^\star$, $F^\star$ & a global minimizer of $F$; $F^\star = F(x^\star)$ \\
$L$ & Lipschitz smoothness constant of $\nabla f$ \\
$\mu$ & strong convexity constant of $f$ (of the model) \\
$\rho$ & curvature of $h$ (weak convexity: $h+\tfrac{\rho}{2}\|\cdot\|^2$ convex) \\
$G_\eta$ & prox-gradient mapping at stepsize $\eta$; see \eqref{eq:PG-map} \\
FISTA($\lambda$) & FISTA with prox shift $\lambda$; cf.\ \eqref{eq:prox-shift} \\
SQ2FISTA & ODE-informed accelerated proximal method designed via wDG \\
$L_{\text{eff}}$ & effective Lipschitz constant (in the SQ2FISTA analysis) \\
\hline
\end{tabular}
\end{table}

\section{Summary}
In summary, this thesis contributes a unified analysis and implementation playbook for accelerated composite optimization in the presence of weakly convex proximals. The new method \emph{SQ2FISTA}, proposed by Ushiyama (University of Tokyo) and derived here from first principles, and the pragmatic \emph{FISTA($\lambda$)} baseline together form a robust toolkit that is theoretically sound and empirically validated; \emph{plain} FISTA remains a useful point of reference but is often dominated in the regimes that motivate contemporary applications.
Nonlinear static (pushover) analysis of mortared precast walls
In recent times, prefabricated walls have been increasingly used for multi-storey buildings. The use of these walls not only shortens the construction time but also reduces the costs associated with formwork and scaffolding. Nevertheless, the connection between the precast concrete walls and the slab is a weak point. Basically, the walls are delivered as storey-high wall elements and laid in a so-called mortar bed. The connection to the slab is made by means of connecting reinforcement at the top edge of the wall. With this construction method, in which the connection between the upper edge of the slab and the wall above is made exclusively via an unreinforced, mortared joint, particular attention must be paid to the shear resistance. The shear resistance of the bracing walls is determined by the tilting behavior of the walls and the sliding resistance in the mortar joint. The latter is represented by the friction in the mortar joint. Particularly relevant are the loads on the compression side at the wall ends, which occur in an earthquake simulation as a result of wall tilting. Nevertheless, the wall can start to slide if the friction is overcome. However, cyclic loading tests show that good sliding resistance remains even with larger deformations in the sliding joint. The focus of this thesis is on the numerical investigation of the plastic potential of the mortar joint.
Three simple systems are analyzed and the possible failure mechanisms are presented. All three systems represent a four-storey building, the difference lying in the slenderness of the systems. The non-linear pushover method described in Eurocode 8 and the pushover analysis implemented in RFEM 6 are used to determine the plastic potential of the mortar joint. The laboratory investigation carried out at the Institute of Structural Design and Timber Engineering at TU Wien serves as the basis. The entire modeling process and the load-bearing capacity check are then explained in detail.
Design of a Visual Analysis Pipeline for Investigating the Effects of TMS on Heart Rate
We designed a highly flexible notebook-based visual analysis pipeline to explore Transcranial Magnetic Stimulation (TMS) and heart rate (HR) across subjects' different cognitive states. TMS is a promising treatment for major depressive disorder that does not respond to pharmacological treatment. However, the mechanism of action is not yet fully understood, and research into the best acquisition settings, such as stimulation intensities and target sites, is still emerging. A multimodal analysis pipeline integrating TMS, functional Magnetic Resonance Imaging (fMRI), and HR could both shed light on the neural pathways and increase the efficiency of TMS. To extend the already available concurrent TMS-fMRI analysis pipeline towards multimodal concurrent TMS-fMRI-HR analysis, exploring the effect of TMS on HR is the next step.
To design the visual analysis pipeline, we introduce and apply an extension of the Data–Users–Tasks design triangle [Miksch and Aigner, 2014] that integrates the previous data workflow approach into the design process. Because the data processing workflow in this domain is still evolving, integrating the previous workflow approach into the design process respects the data legacy, supports users' adaptability, and ensures task compatibility. We refer to this framework as the Data–Users–Workflow–Tasks design pyramid. We subsequently provide a visual analysis pipeline to support data exploration in the early stages of research. The interactive preprocessing pipeline involves extracting data, handling missing data, and reducing noise. To compare HR time series with different properties, we visualize the Dynamic Time Warping (DTW) similarity measure and heart rate variability (HRV) metric clustering. We quantitatively evaluate the preprocessing steps using simulated ECG data. The key result is that linear and polynomial interpolation, with root mean squared error (RMSE) values on the order of 10⁻³ and 10⁻⁵, respectively, are especially effective as imputation methods for ECG sampled at 400 Hz. To further assess the usage scenarios for TMS and ECG data exploration, we employ Qualitative Result Inspection (QRI). Our proposed visual analysis pipeline constitutes the first steps towards integrating TMS-HR analysis into a trimodal concurrent TMS-fMRI-HR approach.
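To indicate what such an imputation evaluation involves (an illustrative sketch only, not the pipeline's code; the signal, gap pattern, and variable names are made up), imputing a gap in a 400 Hz trace by linear interpolation and scoring it with RMSE can look like:

import numpy as np

fs = 400.0                                   # ECG sampling frequency (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.2 * t)         # stand-in for a simulated ECG trace
missing = np.zeros_like(t, dtype=bool)
missing[300:320] = True                      # a 50 ms gap of missing samples

# linear interpolation over the gap, using the observed samples only
imputed = signal.copy()
imputed[missing] = np.interp(t[missing], t[~missing], signal[~missing])

rmse = np.sqrt(np.mean((imputed[missing] - signal[missing]) ** 2))
print(f"RMSE over the gap: {rmse:.2e}")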
Experiments on the Behavior of the Filter Stability of Aggregates in Unbound Block Pavement Superstructures
This diploma thesis investigates the filter stability between unbound granular materials within the layer structure of paving constructions. The aim was to assess whether the grain size distributions recommended in the Austrian guideline RVS 08.18.01 for joint and bedding materials are sufficiently filter-stable under practical conditions to prevent material migration and the resulting damage. The primary focus was on laboratory investigations in which various combinations of joint and bedding materials were subjected to combined mechanical and hydraulic loading. A purpose-built test setup was used to realistically simulate these loads and proved to be both reliable and practice-oriented. Particular attention was given to changes in the grain size distributions and their impact on filter stability.
In addition, a theoretical evaluation was carried out based on the geometric filter criteria defined in the RVS, both for the joint-bedding interface and for the transitions between bedding and base layers as well as between the upper and lower base layers. The results indicate that the grain size combinations recommended in the RVS generally exhibit sufficient filter stability under realistic conditions. However, in some cases, changes in the grain size distribution and signs of particle migration were observed, indicating a potential risk of material displacement. These findings highlight the importance of well-matched material selection and suggest that deviations from the standard combinations should be supported by additional laboratory testing. This study provides a valuable contribution to the understanding of filter stability in block pavement superstructures and emphasizes the importance of carefully coordinated material selection in unbound layer systems. The grain size distributions recommended by the Austrian guideline RVS 08.18.01 demonstrated stable behavior under dynamic and hydraulic loading, supporting their applicability in practice. To further assess the long-term suitability of these material combinations, it would be beneficial to systematically document and evaluate their use in real-world construction projects. Practical feedback could help refine existing standards and better adapt them to varying field conditions.
Potential for desealing in local spatial planning. Regulatory framework, identification of potential areas and limitations using the example of Bregenz
The increasing sealing of soil surfaces poses a significant challenge for spatial planning, particularly in light of ongoing climate change impacts such as urban heat islands and heavy rainfall events. Although the issue is gaining importance, soil unsealing has so far received limited attention from a planning perspective. In particular, the spatial differentiation of potential unsealing areas in urban contexts remains underexplored. Against this background, the present thesis investigates how unsealing can be strategically employed to support spatial planning objectives. To this end, the legal and planning frameworks of the city of Bregenz are analyzed, and a methodological approach is developed to identify and prioritize suitable unsealing areas. The assessment incorporates ecological, social, and land-related criteria, such as heat stress, lack of green space, the vulnerability of different population groups, and property ownership. The developed approach is applied to the urban area of Bregenz as a case study. By combining GIS-based spatial analyses, an evaluation based on defined indicators, and selective field surveys, specific sites are identified whose unsealing offers strategic potential for urban planning. The results show that while unsealing is increasingly recognized as a relevant field of action in Bregenz, it remains insufficiently embedded in legal frameworks and planning processes.
Local planning in Bregenz does possess instruments and decision-making leeway to advance the reduction of sealed surfaces, and these are addressed through targeted recommendations. However, the analysis also reveals a clear need for action at the state level: strengthening municipal capacities will require financial incentives, technical support, and binding targets and regulations within the spatial planning law and other relevant legal instruments.
CoMECS: Framework for Comparing and Measuring the Energy Consumption of Software
The current human-made climate change is one of the greatest challenges of the last, current, and coming generations. Humans produce a huge amount of greenhouse gases that warm the atmosphere, and our society's immense thirst for energy is one of the main drivers. Achieving the ambitious goals set in the agreements that most countries of the world have signed will necessarily mean reducing our global energy consumption. The Statistical Review of World Energy report from 2024 shows that the world produces only around 20% of its energy from renewable sources; the rest emits greenhouse gases during its production. We therefore have to transition to renewable energy to avoid these emissions. This is, however, not possible quickly enough, so besides transforming energy production, we also have to reduce our energy consumption globally. This includes every single part of the economy, which means that IT also needs to evaluate and reduce its impact. Malmodin and Lundén estimate in their paper from 2018 that the energy consumption of the ICT and Entertainment & Media sector will fall overall in the future, but that it will rise for data centers. The newest report from the International Energy Agency, from 2025, estimates that the energy consumption of data centers will more than double by 2030, especially due to the rise in popularity of AI applications. The IT sector, and specifically the part of it focused on server infrastructure, therefore has to get ready to measure and reduce its energy consumption. A huge part of modern server infrastructure is built on open-source software, which is usually maintained by just a small number of volunteers. These people do not have the resources to do extensive testing of the energy consumption of their software. This problem is amplified by the lack of well-tested and freely available tooling for measuring the power usage of programs; additionally, the available tools differ greatly in their input, output, and usage. We therefore propose a framework and methodology that facilitate measuring the energy consumption of software with a wide variety of available software-based power meters. We define three goals that our framework CoMECS has to fulfill: Openness, Reproducibility, and Standardized Output. We show how we built a wrapper structure based on Nix, which allows for reproducible experiments, and we present our approach of using SSH to control the test machine on which the energy consumption is measured. These techniques allow us to define and execute reliable, reproducible tests against a desktop machine and a server without changing any of the defined wrappers. We then use our framework to compare six software-based power meters against a hardware-based solution. Our results show that the values from the pkg domain of the RAPL sensor, or from the CodeCarbon tool, most closely resemble the real measurements from a hardware-based power meter; this holds, however, only for CPU- and main-memory-focused workloads. In another experiment, we reproduced previous results on how much energy overhead containerization with Docker causes. Lastly, we evaluated the power consumption of nginx over the course of 16 releases and found that its average energy consumption decreased by around 3% between versions 1.11 and 1.26.
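CoMECS's actual wrappers are Nix-based; purely as an illustration of what reading the RAPL pkg domain involves (a minimal sketch for Linux via the powercap sysfs interface; the exact path varies by machine and elevated privileges are typically required), one might measure a workload like this:

RAPL_PKG = "/sys/class/powercap/intel-rapl:0"   # pkg domain; path is machine-dependent

def read_uj(path=RAPL_PKG):
    with open(path + "/energy_uj") as f:        # cumulative energy counter (microjoules)
        return int(f.read())

def measure_joules(fn, path=RAPL_PKG):
    """Energy (J) consumed by the package while fn() runs, naive single-counter version."""
    with open(path + "/max_energy_range_uj") as f:
        max_uj = int(f.read())
    start = read_uj(path)
    fn()
    end = read_uj(path)
    if end < start:                              # counter wrapped around
        end += max_uj
    return (end - start) / 1e6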
Influence of redeposited tungsten and EUROFER97 layers on deuterium retention in plasma-facing materials
Retention of hydrogen isotopes in plasma-facing materials is a key challenge for the safety and fuel efficiency of nuclear fusion reactors. In realistic reactor environments, simultaneous processes such as erosion, redeposition, implantation, and outgassing can alter surface compositions and may affect hydrogen isotope retention. In this study, we investigate how thin redeposited layers of tungsten and EUROFER97 influence the retention and release of previously implanted deuterium. Using a combination of Elastic Recoil Detection Analysis and Rutherford Backscattering Spectrometry, we quantify deuterium retention during in-situ annealing up to 600 °C. Comparisons between coated and uncoated samples show that redeposited tungsten can act as a partial diffusion barrier, preventing deuterium from outgassing. In contrast, redeposited EUROFER97 layers show no such effect and appear virtually transparent to deuterium diffusion. These findings emphasize the critical role of redeposited layers in fuel retention and have implications for wall lifetime estimates and fuel inventory control in fusion devices.