Publikationer från KTH
    56,339 research outputs found

    UPPERCASE IS ALL YOU NEED

    No full text
    WE PRESENT THE FIRST COMPREHENSIVE STUDY ON THE CRITICAL YET OVERLOOKED ROLE OF UPPERCASE TEXT IN ARTIFICIAL INTELLIGENCE. DESPITE CONSTITUTING A MERE SINGLE-DIGIT PERCENTAGE OF STANDARD ENGLISH PROSE, UPPERCASE LETTERS HAVE DISPROPORTIONATE POWER IN HUMAN-AI INTERACTIONS. THROUGH RIGOROUS EXPERIMENTATION INVOLVING SHOUTING AT VARIOUS LANGUAGE MODELS, WE DEMONSTRATE THAT UPPERCASE IS NOT MERELY A STYLISTIC CHOICE BUT A FUNDAMENTAL TOOL FOR AI COMMUNICATION. OUR RESULTS REVEAL THAT UPPERCASE TEXT SIGNIFICANTLY ENHANCES COMMAND AUTHORITY, CODE GENERATION QUALITY, AND – MOST CRUCIALLY – THE AI’S ABILITY TO CREATE APPROPRIATE CAT PICTURES. THIS PAPER DEFINITIVELY PROVES THAT IN THE REALM OF HUMAN-AI INTERACTION, BIGGER LETTERS == BETTER RESULTS. OUR FINDINGS SUGGEST THAT THE CAPS-LOCK KEY MAY BE THE MOST UNDERUTILIZED RESOURCE IN MODERN AI.

    Deep Learning based Tool for Optimizing Scrap Feed prior to EAF Operation

    No full text
    The steel industry is a key sector for modern society. Due to the extreme versatility of steel, it finds application in very different sectors. However, the steel industry is also very polluting, accounting for roughly 7-9 % of the yearly global anthropogenic CO2 emissions. Steel is produced either from iron ore, with the Blast Furnace-Basic Oxygen Furnace (BF-BOF) process, or from steel scrap, following the Electric Arc Furnace (EAF) process. With the latter being a more sustainable alternative (roughly half of the energy required and 1.5 tonnes of CO2 emissions), the steel industry looks forward to shifting production towards the EAF, which today accounts for 29 % of global production. However, unlike the BF-BOF, the EAF has not yet been optimized to its full potential. In the effort to achieve the EAF process's maximum efficiency, the integration of artificial intelligence could represent a significant step forward. The power and computational ability of machine learning models may be used to further analyse the data collected from industry, improving the understanding of the EAF process and helping to optimize it. This thesis focuses on using data clustering algorithms (Gaussian Mixture Model and K-Means) to analyse real industrial data. The objective was to examine the influence of the steel scrap type and its preparation on the EAF energy consumption. The information gathered during the preliminary data analysis and the data clustering was then used to develop and support an optimizer tool that predicts the most energy-efficient configuration of the EAF charging bucket.
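As a rough illustration of the clustering step described above, the sketch below groups synthetic per-heat scrap features with scikit-learn's GaussianMixture and KMeans and compares energy use across clusters. The column names and numbers are hypothetical stand-ins, not the thesis's actual data schema or results.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for per-heat EAF data; column names are hypothetical.
rng = np.random.default_rng(0)
heats = pd.DataFrame({
    "shredded_share": rng.uniform(0, 1, 400),
    "heavy_scrap_share": rng.uniform(0, 1, 400),
    "energy_kwh_per_tonne": rng.normal(400, 40, 400),
})

X = StandardScaler().fit_transform(heats[["shredded_share", "heavy_scrap_share"]])

# Cluster heats by scrap mix with both algorithms named in the abstract.
kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
gmm_labels = GaussianMixture(n_components=4, random_state=0).fit_predict(X)

# Compare mean energy consumption per cluster: clusters with systematically
# lower energy use point to favourable scrap configurations.
print(heats.groupby(kmeans_labels)["energy_kwh_per_tonne"].mean())
print(heats.groupby(gmm_labels)["energy_kwh_per_tonne"].mean())
```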

    Analys av S&amp;P 500 vid politiska skiften

    No full text
    The stock market index S&P 500 is influenced by a wide range of macroeconomic and behavioral factors. This study explores how the relationship between key indicators differs under two U.S. presidential administrations – Joe Biden and Donald Trump – by analyzing periods with differing political climates. Principal Component Regression (PCR) is used to identify the extent to which the selected variables affect stock market behavior and to evaluate whether this relationship differs across the administrations. The results show that while the Biden-era model provides stronger and more interpretable patterns than the Trump-era model, both models exhibit limited reliability due to short time spans and data constraints. The political environment appears to have a strong impact on stock market performance, and PCR proves effective in capturing the complex relationships among the variables.
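For readers unfamiliar with PCR, here is a minimal sketch of the method: predictors are standardized, projected onto their leading principal components, and the response is regressed on those components. The synthetic data stands in for the macroeconomic indicators; nothing here reproduces the study's dataset or its choice of component count.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                       # stand-in for 8 macro indicators
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=200)  # stand-in returns

# PCR = regress the response on the leading principal components of the
# standardized predictors, which tames multicollinearity among indicators.
pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print("in-sample R^2:", pcr.score(X, y))
```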

    Generative structural design for rapid embodied carbon evaluation

    No full text
    The art and science of structural engineering have played a vital role in shaping the built environment, often relying on methodologies that are far from optimal. Parametric design has become a widely used method among structural engineers for design exploration and better collaboration with architects in the early design stages. Combined with the increasing integration of artificial intelligence into everyday applications, this approach offers the potential to enhance current methodologies and make the use of finite element software more accessible. In this thesis, a parametric finite element model has been developed to generate a wide range of structural topologies, used to train a machine learning regression model capable of predicting material quantities in terms of embodied carbon. The parameterized algorithm focuses on frame-core wall structural systems of arbitrary 2D shapes, though the model is trained with a limited set of shape variations. The main algorithm enables rapid finite element model creation from simple geometric inputs and allows for the identification of potential material optimization. Despite some simplifications, such as a simplified wall distribution and uniform core cross-sections, the analytical approach achieves reasonable accuracy and provides significant value for early-stage design exploration. Meanwhile, the prediction results demonstrate the potential of using machine learning to estimate embodied carbon despite inconsistencies in the results, highlighting the quasi-linear nature of the problem due to the aforementioned simplifications and the potential for data-driven insights. The study provides a foundation for further exploration of embodied carbon prediction using artificial intelligence, offering a pathway toward more efficient design practices.
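The surrogate-modelling idea can be sketched as follows: a regressor is trained on samples produced by a parametric model, so that material quantities, and hence embodied carbon, can be estimated from simple geometric inputs. The features, synthetic data, and carbon factor below are illustrative assumptions, not values or methods taken from the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical parametric samples: [footprint area m^2, storeys, core width m]
X = rng.uniform([200, 2, 3], [2000, 20, 10], size=(500, 3))
# Hypothetical FE output: steel tonnage as a noisy function of the inputs.
steel_t = 0.05 * X[:, 0] * X[:, 1] + 5.0 * X[:, 2] + rng.normal(0, 10, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, steel_t, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

EC_FACTOR = 1.99  # illustrative kgCO2e per kg steel; not the thesis's value
carbon_kg = model.predict(X_te) * 1000 * EC_FACTOR  # tonnes -> kg -> kgCO2e
print("test R^2:", model.score(X_te, y_te))
```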

    The expansion of the planner’s rationality in citizen dialogues : A Study on AI’s potential to enhance efficiency and foster democratization in consultation processes

    No full text
    Today, the consultation stages of our planning processes are characterized by a bottleneck phenomenon. Their ability to reflect and integrate citizens' comments into planning proposals is constrained by a lack of time and resources. Urban planners overseeing these consultations are therefore forced to deprioritize comprehensive examinations of where those comments originate, and thus analyses of their underlying premises. This allows influential actors to skew the discourse's rational foundations to their advantage, at the expense of lower-status stakeholders. It is this dynamic that impedes the pursuit of a more deliberative democracy, in which popular rule is upheld through the equal treatment of arguments. However, given the relatively recent emergence of artificial intelligence's powerful natural-language processing capabilities, a potential solution comes into view: automating the monotonous yet time-consuming categorization of consultation comments, thereby freeing up time that planners can reinvest in deeper analyses of the discursive processes.
    This study has assembled a knowledge base aimed at integrating the key insights needed to develop an AI-driven model capable of streamlining and democratizing planning's consultation processes, with a particular focus on compiling their associated consultation reports. The collected evidence shows that today's categorization workflows can be fully replicated by, and automated with, an AI model, yielding both the sought-after time savings and fundamental quality improvements. Thanks to the scalability of such models, the framework can be extended to handle additional context-enriching tasks, whose results can in turn be visualized via optional graphs in the user interface, thereby helping planners sustain a more deliberative discourse. The discussion proposes that these enhancements be explored further through concrete pilot and research projects to realize the full potential of this type of model. The knowledge base also accounts for the risks inherent to AI-driven systems.
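A minimal sketch of the categorization task the study proposes to automate might look like the supervised text classifier below; the example comments, categories, and model choice are invented for illustration and are not the study's material or recommended method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented consultation comments and topic labels, for illustration only.
comments = [
    "The new road will increase noise near the school",
    "More green space is needed in the square",
    "Parking disappears for residents on this street",
    "The park proposal improves air quality",
]
labels = ["traffic", "green_space", "traffic", "green_space"]

# TF-IDF features plus a linear classifier: a simple baseline for sorting
# incoming comments into categories before deeper analysis.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(comments, labels)
print(clf.predict(["Trees should be planted along the avenue"]))
```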

    Enumerative and matroidal aspects of rook placements

    No full text
    Simply construed, combinatorics entails the counting and classifying of finite objects. Such objects vary from permutations and graphs to posets and matroids; they have in common the idea of a device that represents discrete data. Enumerative combinatorics deals with the exact or asymptotic enumeration of this data. Rook theory is the enumerative study of non-attacking rook placements on a board, and matroid theory is the combinatorial abstraction of the notion of linear independence in linear algebra. In this dissertation, we describe a new connection between rook theory and matroid theory and study some consequences of this connection. In a different direction, we build new tools to study the combinatorial features of the set of all points equidistant from two fixed points, where the notion of distance comes from a polytope. Across this work, three perspectives are employed to understand combinatorial data: (a) discrete and convex geometric objects like polyhedra and polyhedral complexes; (b) bijective methods; and (c) the geometry of polynomials.
    This thesis is divided into two parts. The first part consists of Paper A, on the geometric combinatorial theory of bisectors. A polyhedral norm is a notion of distance arising from a centrally symmetric polytope; such norms have found application in modelling problems in areas ranging from algebraic statistics and topological data analysis to robotics and crystallography. The bisector associated to any polyhedral norm is a polyhedral complex whose maximal cells are labeled by pairs of facets of the polytope. We identify precisely which maximal cells are present, and in doing so systematize the study of the bisection fan of a polytope, a tool that captures fundamental combinatorial information about the structure of the bisector. We focus on four fundamental notions of distance: polygonal norms, the ‖·‖₁ and ‖·‖∞ norms, and the discrete Wasserstein norm. In each of these cases, we give an explicit combinatorial description of the bisection fan, thereby extending work of Criado, Joswig, and Santos (2022).
    The second part of this thesis, spanned by Papers B, C, and D, is concerned with novel enumerative and matroidal properties of rook placements. In Paper B, we introduce a new matroid whose bases are the non-nesting rook placements on a skew Ferrers board; this establishes the first known bridge between rook theory and matroid theory. The structure of rook matroids is interesting: they form a subclass of both transversal matroids and positroids. In this regard, and through the skew shape association, rook matroids bear a striking resemblance to lattice path matroids. We explore this connection in depth by (a) proving a precise criterion for when a rook matroid and a lattice path matroid are isomorphic; (b) proving that, despite the failure of isomorphism in general, rook matroids and lattice path matroids have the same Tutte polynomial; and (c) proving that every lattice path matroid is a certain contraction of a rook matroid, thereby obtaining a new perspective on results of de Mier–Bonin–Noy (2004) and Oh (2011).
    We then apply this matroid structure to two enumerative problems that bear no relation to one another at first glance. The first is determining the precise distributional property satisfied by the generating polynomial of the set of non-nesting rook placements on a skew shape. This question is motivated by the famous Heilmann–Lieb theorem on the real-rootedness of the matching polynomial. In contrast to the case of unrestricted rook placements, the polynomial in question satisfies ultra-log-concavity, but not real-rootedness. The second enumerative problem concerns the log-concavity consequence of the Neggers–Stanley conjecture. The (P, ω)-Eulerian polynomial, the descent-generating polynomial of the set of (inverses of) linear extensions of a labeled poset (P, ω), was introduced by Stanley (1972) in his PhD thesis and was conjectured to be real-rooted, first by Neggers (1978) and later by Stanley (1986) himself. The conjecture was eventually disproven, in one formulation by Brändén (2005) and in another by Stembridge (2007). The natural follow-up question, also a conjecture of Brenti (1989), is whether the (P, ω)-Eulerian polynomial is log-concave in general. We provide an affirmative answer in the case when (P, ω) is a naturally labeled poset of width two, by combining two ideas: a classical bijection known in the theory of distributive lattices, and the Brändén–Huh theory of Lorentzian polynomials.
    In Paper D, the positroidal structure of non-nesting rook placements is explored in greater depth. Consequences include a new proof of the positroid structure and an inequality description of the base polytope of the rook matroid using only the combinatorics of the underlying skew shape. Answering a question of Lam (2024), we characterize rook matroids from the positroidal point of view. In Paper C, we establish the real-rootedness of the generating polynomial of complete rook placements on Ferrers boards, enumerated by the number of ascents. This set of rook placements is interesting in connection with its natural poset-theoretic structure: the lower interval of 312-avoiding permutations in the Bruhat order. This polynomial is yet another generalization of the classical Eulerian polynomials; it is similar to, but distinct from, the generalization introduced by Savage and Visontai (2015).
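For context, the distributional property named above has a standard definition, reproduced here in general notation as background (this is not text or notation from the thesis):

```latex
% A polynomial f(x) = \sum_{k=0}^{n} a_k x^k with nonnegative coefficients
% is ultra-log-concave if, for all 0 < k < n,
\[
  \left(\frac{a_k}{\binom{n}{k}}\right)^{\!2}
  \;\ge\;
  \frac{a_{k-1}}{\binom{n}{k-1}}\cdot\frac{a_{k+1}}{\binom{n}{k+1}}.
\]
% Real-rootedness implies ultra-log-concavity, which in turn implies
% ordinary log-concavity; the abstract places the non-nesting rook
% polynomial strictly between these notions (ULC but not real-rooted).
```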

    Empowering principals with data visualization to improve students' reading skills

    No full text
    School principals play a vital role in guiding schools and shaping teaching practices. While students and teachers use various technologies for teaching and learning, the challenge remains to turn raw data into actionable insights for decision-making. Although teachers receive support for their close work with students, less attention is given to principals, who make higher-level decisions and allocate resources. This thesis explores how data visualization tools can help principals gain insights to improve students' reading skills, which have declined considerably over the last decade, and especially in the last few years. The research examines how data visualizations can support principals in identifying students needing reading assistance and in tracking reading progress, using a mixed-methods approach. Seven school principals with varying levels of computer experience participated in the study, interacting with prototype visualizations across three rounds. Methods included Think Aloud protocols, A/B testing, and follow-up interviews. The findings show that well-designed data visualization tools can help principals better identify students struggling with reading. The study highlights the importance of iterative design in improving these tools and emphasizes the need for clear, informative visualizations. Overall, effective data visualization enhances principals' decision-making, ultimately supporting students' reading development.

    Liquidation cost for centrally cleared concentrated equity derivative portfolios

    No full text
    Central clearing counterparties (CCPs) charge margin to protect against liquidation costs that arise in clearing-member default situations. With unique access to proprietary market data from Nasdaq Stockholm, we develop a quantitative framework to estimate close-out costs in the cash equity market as a proxy for the delta-hedging costs of concentrated derivative portfolios. First, we apply algorithmic metaorder identification to transaction-level data with trader identifiers to measure price impact as a function of traded volume across OMXS30 constituents. Second, we perform a complementary analysis of the closing cross auction dynamics, providing insights into price drift and liquidity availability. Based on these results, we develop a liquidation cost function that quantifies the upper bound of estimated price impact as a function of the share of daily traded volume transacted. While extrapolation to larger volumes introduces uncertainty due to data limitations, our findings establish reasonable estimates based on historical market data, serving as a foundation for calibrating the risk models used in collateral capital calculations.
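A hedged sketch of such a liquidation cost function: fit a power-law impact curve to observed (participation rate, impact) pairs, then take a conservative upper envelope. The synthetic data, functional form, and safety factor are assumptions for illustration, not the thesis's calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def impact_model(q, a, b):
    # q: executed volume as a share of daily traded volume; impact in bps.
    return a * np.power(q, b)

# Synthetic observations in lieu of proprietary metaorder data.
rng = np.random.default_rng(2)
q_obs = rng.uniform(0.01, 0.3, 100)
impact_obs = 30 * np.sqrt(q_obs) * rng.lognormal(0, 0.2, 100)

(a_hat, b_hat), _ = curve_fit(impact_model, q_obs, impact_obs, p0=(10.0, 0.5))

def liquidation_cost_bps(q, safety_factor=1.5):
    # Conservative upper bound on expected impact, as a margin model needs;
    # the multiplier is an arbitrary illustrative choice.
    return safety_factor * impact_model(q, a_hat, b_hat)

print(liquidation_cost_bps(0.2))
```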

    Abstracting Failures from Stateful Dataflow

    No full text
    Systems distributed across several computers are essential for modern infrastructure, and their reliability depends on the correctness of the constituent computers' failure-handling protocols. Correctness in such systems is often understood as failure transparency, a property that makes it possible to use a system as if no failures occur in it; in other words, it states that there is a high-level model of the system from which the failures are abstracted away. This work proves that failure transparency is provided by the Asynchronous Barrier Snapshotting protocol used in Apache Flink, a prominent distributed stateful dataflow system. The protocol is formalized in operational semantics for the first time in this thesis. As no prior definition of failure transparency is suitable for this formalization, a novel definition is proposed, applicable to systems expressed in small-step operational semantics with explicit failure-related rules. The work demonstrates how failure transparency can be proven by reasoning about each execution as a whole, presenting a proof technique convenient for proofs about checkpoint-recovery protocols. The results are a first step towards a verified stateful dataflow programming stack.
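As a schematic of what a small-step definition of failure transparency asserts, under assumed generic notation (this is our paraphrase, not the thesis's exact formulation):

```latex
% Let ->_F be the implementation's step relation including failure rules,
% ->_M the failure-free high-level model, and obs(-) the observable part
% of a configuration. Failure transparency, roughly, asserts:
\[
  \forall\, c_0, c:\quad
  c_0 \rightarrow_F^{*} c
  \;\Longrightarrow\;
  \exists\, c':\; c_0 \rightarrow_M^{*} c'
  \;\land\; \mathrm{obs}(c) \preceq \mathrm{obs}(c').
\]
% Read: every observation the failing implementation can produce is
% explained by some execution of the failure-free model.
```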

    Aspect-ratio effect on the wake of a wall-mounted square cylinder immersed in a turbulent boundary layer

    No full text
    The wake topology behind a wall-mounted square cylinder immersed in a turbulent boundary layer is investigated using high-resolution large-eddy simulations (LES). The boundary-layer thickness at the obstacle location is fixed, with a Reynolds number of 10,000 based on cylinder height h and free-stream velocity U∞, while the aspect ratio (AR), defined as obstacle height divided by width, ranges from 1 to 4. The mesh resolution is comparable to DNS standards used for similar wall-mounted obstacles, though at relatively lower Reynolds numbers. The effects of AR on wake structures, turbulence production, and transport are analyzed via Reynolds stresses, anisotropy-invariant maps (AIM), and the turbulent kinetic energy (TKE) budget. In particular, the transition from a “dipole” to a “quadrupole” wake as AR increases is examined extensively. With increasing AR, the wake shrinks in both the streamwise and spanwise directions, attributed to the occurrence of base vortices (AR = 3 and 4). This change in flow structure also affects the size of the positive-production region that extends from the roof and flanks of the obstacle to the wake core. The AIMs confirm three-dimensional wake features, showing TKE redistribution in all directions (Simonsen and Krogstad, 2005). Stronger turbulence production in the AR = 3 and 4 cases highlights the role of tip and base vortices behind the cylinder. The overall aim is to refine the dipole-to-quadrupole transition as a function of AR while accounting for the incoming TBL properties. The novelty lies in proposing the momentum-thickness-based Reynolds number Reθ as a discriminant for assessing TBL effects on turbulent wake structures.
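The two Reynolds numbers in play are the standard textbook quantities, assuming ν denotes the kinematic viscosity and θ the momentum thickness of the incoming boundary layer (symbols assumed, not quoted from the paper):

```latex
\[
  \mathrm{Re}_h \;=\; \frac{U_\infty\, h}{\nu} \;=\; 10\,000,
  \qquad
  \mathrm{Re}_\theta \;=\; \frac{U_\infty\, \theta}{\nu},
\]
% where h is the cylinder height; Re_theta is the proposed discriminant
% for how the incoming TBL modifies the wake topology at a given AR.
```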

    0 full texts
    56,339 metadata records
    Updated in last 30 days.
    Publikationer från KTH is based in Sweden