11,914 research outputs found
Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important design requirement. In addition to ensuring that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. Many interesting 2-D and 3-D mechanical design problems can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, new experimental equipment, and a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated through a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are not experts in advanced solid mechanics or process-tailored materials.
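The trace-by-trace deposition described above is ultimately expressed as G-code. As a purely illustrative sketch (not the dissertation's own generator; bead width, feed rate, and extrusion factor are assumed values), a single zigzag raster layer of the kind FDM path planners emit could be generated as follows:

```python
# Minimal, hypothetical G-code emitter for one zigzag FDM raster layer.
# Bead width, layer height, feed rate, and extrusion factor are illustrative.

def zigzag_layer(width_mm, height_mm, bead_mm=0.4, z_mm=0.2,
                 feed_mm_min=1800, e_per_mm=0.05):
    lines = [f"G1 Z{z_mm:.2f} F{feed_mm_min}"]  # move to layer height
    e = 0.0                                     # cumulative extrusion
    y, direction = 0.0, 1
    while y <= height_mm:
        x_end = width_mm if direction > 0 else 0.0
        e += width_mm * e_per_mm                # filament for one trace
        lines.append(f"G1 X{x_end:.2f} Y{y:.2f} E{e:.4f}")
        y += bead_mm                            # step over one bead width
        direction *= -1                         # reverse travel direction
    return "\n".join(lines)

print(zigzag_layer(20.0, 10.0))
```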
Rank-based linkage I: triplet comparisons and oriented simplicial complexes
Rank-based linkage is a new tool for summarizing a collection $S$ of objects according to their relationships. These objects are not mapped to vectors, and ``similarity'' between objects need be neither numerical nor symmetrical. All an object needs to do is rank nearby objects by similarity to itself, using a Comparator which is transitive, but need not be consistent with any metric on the whole set. Call this a ranking system on $S$. Rank-based linkage is applied to the $K$-nearest neighbor digraph derived from a ranking system. Computations occur on a 2-dimensional abstract oriented simplicial complex whose faces are among the points, edges, and triangles of the line graph of the undirected $K$-nearest neighbor graph on $S$. It builds an edge-weighted linkage graph in which the weight assigned to a link $\{x, y\}$ is called the in-sway between objects $x$ and $y$. Take $\mathcal{L}_t$ to be the links whose in-sway is at least $t$, and partition $S$ into components of the graph $(S, \mathcal{L}_t)$, for varying $t$. Rank-based linkage is a functor from a category of out-ordered digraphs to a category of partitioned sets, with the practical consequence that augmenting the set of objects in a rank-respectful way gives a fresh clustering which does not ``rip apart'' the previous one. The same holds for single linkage clustering in the metric space context, but not for typical optimization-based methods. Open combinatorial problems are presented in the last section.

Comment: 37 pages, 12 figures
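For illustration, the final thresholding step (keep links with in-sway at least $t$, then take connected components) can be sketched with a small union-find; the in-sway weights are assumed to be already computed, since obtaining them requires the simplicial machinery developed in the paper:

```python
# Toy sketch: partition objects into clusters from given edge weights.
# Links with weight (in-sway) >= t are kept; the clusters are the
# connected components of the resulting graph.

def components_at_threshold(objects, weighted_links, t):
    parent = {x: x for x in objects}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (x, y), w in weighted_links.items():
        if w >= t:
            parent[find(x)] = find(y)      # union the two components

    clusters = {}
    for x in objects:
        clusters.setdefault(find(x), []).append(x)
    return list(clusters.values())

links = {("a", "b"): 3.0, ("b", "c"): 1.0, ("c", "d"): 2.5}
print(components_at_threshold("abcd", links, t=2.0))  # [['a', 'b'], ['c', 'd']]
```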
A Design Science Research Approach to Smart and Collaborative Urban Supply Networks
Urban supply networks face increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Because the field is heterogeneous, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness.
A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become ever more collaborative, complex, and dynamic as interactions in business processes involving information technologies have intensified.
Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. The thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.
Examples of works for practicing staccato technique on the clarinet
The stages of strengthening the clarinet's staccato technique were worked through with repertoire studies. Rhythm and nuance exercises that speed up staccato transitions are included. The most important aim of the study is not staccato practice alone, but also attention to the precision of simultaneous finger-tongue coordination. To make the staccato work more productive, étude practice was incorporated into the repertoire study. Careful attention to these exercises, together with the inspiring effect of staccato practice, added a new dimension to musical identity. Each stage of the study of eight original works is described, with each stage intended to reinforce the next level of performance and technique. The study reports in which contexts the staccato technique is used and what results were obtained. It sets out how the notes are shaped through finger and tongue coordination and within what kind of practice discipline this takes place. Reed, notation, diaphragm, fingers, tongue, nuance, and discipline were found to form an inseparable whole in staccato technique. A literature review was conducted to survey existing work on staccato. The survey showed that repertoire-based staccato studies for clarinet technique are scarce, while the review of method books showed that études predominate. Accordingly, exercises for speeding up and strengthening the clarinet's staccato technique are presented. It was observed that interspersing repertoire work among staccato études relaxes the mind and further increases motivation. Choosing the right reed for staccato practice is also emphasized: a suitable reed was found to increase tongue speed, and choosing the right reed depends on the reed speaking comfortably. If the reed does not support strong tonguing, the need to select a more suitable reed is stressed. Performing a work from beginning to end in staccato can be difficult; in this respect, the study shows that observing the given musical nuances eases tonguing performance. Passing on the acquired knowledge and experience to future generations, in a way that fosters their development, is encouraged. The study explains how new works can be worked out and how the staccato technique can be mastered, with the aim of resolving staccato problems in a shorter time. It is as important to commit the exercises to memory as it is to teach the fingers their positions. A work produced through such determination and patience will raise achievement to even higher levels.
Valtionavustustoiminnan sanasto (Glossary of Government Grant Activities): 2nd expanded edition
The glossary defines the key concepts of government grant activities and provides national term recommendations for them, together with their equivalents in Swedish and English. The aim is to clarify and harmonise the concepts used by more than 90 government grant authorities.
The glossary was prepared in cooperation with experts in government grant activities from different administrative branches and with applicants for government grants. The terminological work was carried out by a terminologist from Sanastokeskus (the Finnish Terminology Centre) together with an expert from CSC – Tieteen tietotekniikan keskus (CSC – IT Center for Science). The translation and language services of the Prime Minister's Office were responsible for the Swedish and English versions. Going forward, the working group's efforts will focus on maintaining and developing the glossary.
The publication supplements the earlier version of the glossary, published in 2021. The glossary has been expanded with chapters 7–8, which define concepts related to activities funded by government grants, to effectiveness and evaluation, and to finances and operational resources. The previously published part of the glossary (chapters 1–6) contains concepts related to the types of government grants and to the interaction between government grant authorities and grant applicants. These concepts were refined and given new concept-relation cross-references during the preparation of the new sections.
Fair Assortment Planning
Many online platforms, ranging from online retail stores to social media
platforms, employ algorithms to optimize their offered assortment of items
(e.g., products and contents). These algorithms tend to prioritize the
platforms' short-term goals by solely featuring items with the highest
popularity or revenue. However, this practice can lead to undesirable outcomes for the remaining items, driving them off the platform and, in turn, hurting the platform's long-term goals. Motivated by this, we introduce and
study a fair assortment planning problem, which requires any two items with
similar quality/merits to be offered similar outcomes. We show that the problem
can be formulated as a linear program (LP), called (FAIR), that optimizes over
the distribution of all feasible assortments. To find a near-optimal solution
to (FAIR), we propose a framework based on the Ellipsoid method, which requires
a polynomial-time separation oracle to the dual of the LP. We show that implementing an exact separation oracle for the dual problem is NP-complete; hence we propose a series of approximate separation oracles, which yield a constant-factor approximation algorithm and a PTAS for the original Problem (FAIR). The
approximate separation oracles are designed by (i) showing the separation
oracle to the dual of the LP is equivalent to solving an infinite series of
parameterized knapsack problems, and (ii) taking advantage of the structure of
the parameterized knapsack problems. Finally, we conduct a case study using the
MovieLens dataset, which demonstrates the efficacy of our algorithms and
further sheds light on the price of fairness.

Comment: 86 pages, 7 figures
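For intuition, the oracle's core subproblem family is knapsack-like; a minimal sketch of a standard 0/1 knapsack dynamic program (not the paper's actual parameterized oracle; the item values, weights, and capacity are hypothetical) looks as follows:

```python
# Standard 0/1 knapsack via dynamic programming, sketched as the kind of
# subproblem the separation oracle described above reduces to.
# Values, weights, and capacity are hypothetical.

def knapsack(values, weights, capacity):
    best = [0] * (capacity + 1)  # best[c] = max value within capacity c
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reverse: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))  # 22
```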
Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC
The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this
final state has yielded some of the most precise measurements of the particle. As
measurements of the Higgs boson become increasingly precise, greater importance is placed on the factors that constitute the uncertainty. Reducing the effects of these uncertainties requires an understanding of their causes. The research presented in this thesis aims to illuminate how uncertainties on simulation modelling are determined, and it proffers novel techniques for deriving them.
An upgrade of the FastCaloSim tool is described; the tool simulates events in the ATLAS calorimeter at a rate far exceeding that of the nominal detector simulation, Geant4. The integration of a method that allows the toolbox to emulate the
accordion geometry of the liquid argon calorimeters is detailed. This tool allows
for the production of larger samples while using significantly fewer computing
resources.
A measurement of the total Higgs boson production cross-section multiplied by the diphoton branching ratio (σ × Bγγ) is presented; this value was determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement with the Standard Model prediction. The signal and background shape modelling is described, and the contribution of the background modelling uncertainty to the total uncertainty ranges from 2.4 % to 18 %, depending on the Higgs boson production mechanism.
A method for estimating the number of events in a Monte Carlo background
sample required to model the shape is detailed. It was found that the nominal γγ background sample needed to be enlarged by a factor of 3.60 to adequately model the background at a 68 % confidence level, or by a factor of 7.20 at a 95 % confidence level. Based on this estimate,
0.5 billion additional simulated events were produced, substantially reducing the
background modelling uncertainty.
A technique is detailed for emulating the effects of Monte Carlo event generator
differences using multivariate reweighting. The technique is used to estimate the
event generator uncertainty on the signal modelling of tHqb events, improving the
reliability of estimating the tHqb production cross-section. Then this multivariate
reweighting technique is used to estimate the generator modelling uncertainties
on background V γγ samples for the first time. The estimated uncertainties were
found to be covered by the currently assumed background modelling uncertainty.
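Multivariate reweighting of one simulated sample to match another is commonly implemented with gradient-boosted trees; the sketch below uses the hep_ml library under that assumption (the thesis's exact tooling is not specified here, and the two-dimensional toy kinematics are hypothetical):

```python
# Sketch: BDT-based multivariate reweighting of one simulated sample to
# mimic another (a common approach via hep_ml); the exact tooling and
# variables used in the thesis are assumptions. Toy 2-D "kinematics".
import numpy as np
from hep_ml.reweight import GBReweighter

rng = np.random.default_rng(0)
nominal = rng.normal(loc=[0.0, 1.0], scale=[1.0, 0.5], size=(10_000, 2))
alternative = rng.normal(loc=[0.2, 1.1], scale=[1.1, 0.5], size=(10_000, 2))

# Learn per-event weights so that 'nominal' matches 'alternative'.
reweighter = GBReweighter(n_estimators=50, max_depth=3)
reweighter.fit(original=nominal, target=alternative)
weights = reweighter.predict_weights(nominal)

# The reweighted nominal mean should move toward the alternative sample.
print(nominal.mean(axis=0), np.average(nominal, axis=0, weights=weights))
```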
Preferentialism and the conditionality of trade agreements. An application of the gravity model
Modern economic growth is driven by international trade, and the preferential trade agreement constitutes the primary mechanism for establishing, facilitating, and governing its flows. However, too little attention has been paid to the differences in content and conditionality across trade agreements, which has led to an under-considered mischaracterisation of the design-flow relationship. Similarly, while the relationship between trade facilitation and trade is clear, the way trade facilitation affects other areas of economic activity, with respect to preferential trade agreements, has received considerably less attention. In particular, in light of an increasingly globalised and interdependent trading system, the interplay between trade facilitation and foreign direct investment is of special importance.
Accordingly, this thesis explores the bilateral trade and investment effects of specific conditionality sets, as established within Preferential Trade Agreements (PTAs).
Chapter one utilises recent content condition-indexes for depth, flexibility, and constraints on flexibility, established by Dür et al. (2014) and Baccini et al. (2015), within a gravity framework to estimate the average treatment effect of trade agreement characteristics across bilateral trade relationships in the Association of Southeast Asian Nations (ASEAN) from 1948 to 2015. This chapter finds that the composition of a given ASEAN trade agreement's characteristic set has significantly determined the concomitant bilateral trade flows. Conditions determining the classification of a trade agreement's depth are positively associated with an increase in bilateral trade, reflecting the furthered removal of trade barriers and frictions facilitated by deeper trade agreements. Flexibility conditions, and constraints on flexibility, are also identified as significant determinants of a given trade agreement's treatment effect on subsequent bilateral trade flows. Given the political nature of their inclusion (i.e., addressing short-term domestic discontent), this influence on trade flows is negative. These results highlight the longer implementation time frames required for trade impediments to be removed in markets with higher domestic uncertainty.
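For reference, a standard log-linearised gravity specification of the kind estimated in such chapters (the exact controls and fixed effects used in the thesis are assumptions) is

\[
\ln X_{ijt} = \beta_0 + \beta_1 \ln \mathrm{GDP}_{it} + \beta_2 \ln \mathrm{GDP}_{jt} + \beta_3 \ln \mathrm{Dist}_{ij} + \beta_4\, \mathrm{PTA}_{ijt} + \varepsilon_{ijt},
\]

where $X_{ijt}$ denotes exports from country $i$ to country $j$ in year $t$, and the treatment dummy $\mathrm{PTA}_{ijt}$ can be replaced by the depth, flexibility, and constraint indexes to recover condition-specific average treatment effects.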
Chapter two explores the incorporation of non-trade issue (NTI) conditions in PTAs. Such conditions are increasing at both the intensive and extensive margins. Developing nations are concerned that this growth of NTI inclusions serves as a way for high-income (HI) nations to dictate the trade agenda, such that developing nations are subject to 'principled protectionism'. There is evidence that NTI provisions are partly driven by protectionist motives, but the effect on trade flows remains largely undiscussed. Utilising the gravity model for trade, I test Lechner's (2016) comprehensive NTI dataset for 202 bilateral country pairs across a 32-year timeframe and find that, on average, NTIs are associated with an increase in bilateral trade. This boost can be attributed primarily to the market access that a PTA utilising NTIs facilitates. These results also align theoretically with the discussions on market harmonisation, shared values, and the erosion of artificial production advantages. Instead of inhibiting trade through burdensome costs, NTIs act to support a more stable production and trading environment, motivated by enhanced market access. Employing a novel classification to capture the power supremacy associated with shaping NTIs, this chapter highlights that the positive impact of NTIs is largely driven by the relationship between HI nations and middle-to-low-income (MTLI) counterparts.
Chapter three employs the gravity model, theoretically augmented for foreign direct investment (FDI), to estimate the effects of trade facilitation conditions, utilising the indexes established by Neufeld (2014) and the bilateral FDI data curated by UNCTAD (2014). The resulting dataset covers 104 countries over a period of 12 years (2001–2012) and contains 23,640 observations. The results highlight the bilateral-FDI-enhancing effects of trade facilitation conditions in the ASEAN context, aligning with the theoretical branch of the FDI-PTA literature which outlines how the ratification of a trade agreement improves economic prospects between partners (Medvedev, 2012), owing to the interrelation between trade and investment within an improving regulatory environment. The results support the expectation that an enhanced trade facilitation landscape (one in which formalities, procedures, information, and expectations around trade facilitation are conditioned for) incentivises and attracts FDI.
Learning disentangled speech representations
A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, some methods capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody.
The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counterfactual questions.
In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed; in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks.
This thesis explores a variety of use cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as producing automatic MOS ratings or detecting deepfakes. The meaning of the term "disentanglement" is not well defined in previous work and has acquired several meanings depending on the domain (e.g. image vs. speech); sometimes it is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
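As a schematic of how disentangled factors can be re-assembled for voice conversion (purely illustrative stubs, not the thesis's models; dimensions are arbitrary):

```python
# Schematic voice conversion with disentangled representations:
# keep speaker A's content embedding, swap in speaker B's identity
# embedding, and decode. Encoders/decoder here are stand-in stubs.
import numpy as np

rng = np.random.default_rng(1)

def content_encoder(wave):   # stub: one content embedding per frame
    return wave.reshape(-1, 4).mean(axis=1, keepdims=True)

def speaker_encoder(wave):   # stub: one utterance-level identity vector
    return wave.mean(keepdims=True)

def decoder(content, speaker):  # stub: recombine the two factors
    return (content + speaker).ravel()

utt_a = rng.normal(size=64)  # speech from speaker A
utt_b = rng.normal(size=64)  # speech from speaker B

converted = decoder(content_encoder(utt_a), speaker_encoder(utt_b))
print(converted.shape)       # content of A, voice identity of B
```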
Limit theorems for non-Markovian and fractional processes
This thesis examines various non-Markovian and fractional processes---rough volatility models, stochastic Volterra equations, Wiener chaos expansions---through the prism of asymptotic analysis.
Stochastic Volterra systems serve as a conducive framework encompassing most rough volatility models used in mathematical finance. In Chapter 2, we provide a unified treatment of pathwise large and moderate deviations principles for a general class of multidimensional stochastic Volterra equations with singular kernels, not necessarily of convolution form. Our methodology is based on the weak convergence approach by Budhiraja, Dupuis and Ellis.
This powerful approach also enables us to investigate the pathwise large deviations of families of white noise functionals characterised by their Wiener chaos expansion $\sum_{n \ge 0} I_n(f_n)$, where $I_n$ denotes the $n$-fold multiple Wiener integral.
In Chapter 3, we provide sufficient conditions for the large deviations principle to hold in path space, thereby revisiting a problem left open by Pérez-Abreu (1993). Hinging on analysis on Wiener space, the proof involves describing, controlling and identifying the limit of perturbed multiple stochastic integrals.
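For readers outside the area, the large deviations principle invoked in Chapters 2 and 3 is the standard one: a family of probability measures $(\mu_\varepsilon)_{\varepsilon > 0}$ on a path space $E$ satisfies an LDP with rate function $I$ if, for every Borel set $A \subseteq E$,

\[
-\inf_{x \in A^\circ} I(x) \;\le\; \liminf_{\varepsilon \to 0} \varepsilon \log \mu_\varepsilon(A) \;\le\; \limsup_{\varepsilon \to 0} \varepsilon \log \mu_\varepsilon(A) \;\le\; -\inf_{x \in \overline{A}} I(x).
\]

This is textbook notation rather than the thesis's own; the moderate deviations regime replaces the speed $\varepsilon$ with an intermediate scaling.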
In Chapter 4, we come back to mathematical finance via the route of Malliavin calculus. We present explicit small-time formulae for the at-the-money implied volatility, skew and curvature in a large class of models, including rough volatility models and their multi-factor versions. Our general setup encompasses both European options on a stock and VIX options. In particular, we develop a detailed analysis of the two-factor rough Bergomi model.
Finally, in Chapter 5, we consider the large-time behaviour of affine stochastic Volterra equations, an under-developed area in the absence of Markovianity.
We leverage a measure-valued Markovian lift introduced by Cuchiero and Teichmann and the associated notion of the generalised Feller property.
This setting allows us to prove the existence of an invariant measure for the lift and hence of a stationary distribution for the affine Volterra process, which features in the rough Heston model.