15,864 research outputs found
China's approach to international law and the Belt and Road Initiative - perspectives from international investment law
This dissertation examines China's approach to international law. In order to do so, it compares the country's stance on international dispute resolution in past and present times. After a first historical chapter outlining China's changeable relationship with international adjudication, the thesis subsequently focuses on contemporary developments. The emphasis here is on international instruments and mechanisms that China uses to protect investments within the Belt and Road Initiative.
This dissertation combines doctrinal analysis with concrete case studies and applies deductive as well as inductive methods. The study of the legal dimension of the initiative leads to the basic assumption that two coexisting regulatory complexes provide investment protection within the initiative. Accordingly, as a first complex, the dissertation analyses China's design of investment protection treaties and China's stance in the reform debate on the future of investment arbitration. As an outcome, the analysis claims that even though the first complex does not relate specifically to the Belt and Road Initiative, this complex nevertheless has inextricable links to China's approach in the initiative's context. Soft law documents, which China has concluded with both state and non-state actors, and informal mechanisms of dispute resolution form the second regulatory complex. The study investigates their functions for investment protection in the Belt and Road Initiative.
In an overall view of the two regulatory complexes, this dissertation finds that China uses both strictly legal and more broadly political methods for investment protection. Synthesising this result with the findings of the historical part, the study concludes that China follows a realist approach to international law.
Data-to-text generation with neural planning
In this thesis, we consider the task of data-to-text generation, which takes non-linguistic
structures as input and produces textual output. The inputs can take the form of
database tables, spreadsheets, charts, and so on. The main application of data-to-text
generation is to present information in a textual format which makes it accessible to
a layperson who may otherwise find it problematic to understand numerical figures.
The task can also automate routine document generation jobs, thus improving human
efficiency. We focus on generating long-form text, i.e., documents with multiple paragraphs. Recent approaches to data-to-text generation have adopted the very successful
encoder-decoder architecture or its variants. These models generate fluent (but often
imprecise) text and perform quite poorly at selecting appropriate content and ordering
it coherently. This thesis focuses on overcoming these issues by integrating content
planning with neural models. We hypothesize data-to-text generation will benefit from
explicit planning, which manifests itself in (a) micro planning, (b) latent entity planning, and (c) macro planning. Throughout this thesis, we assume the inputs to our
generator are tables (with records) in the sports domain, and the outputs are summaries
describing what happened in the game (e.g., who won/lost, ..., scored, etc.).
We first describe our work on integrating fine-grained or micro plans with data-to-text generation. As part of this, we generate a micro plan highlighting which records
should be mentioned and in which order, and then generate the document while taking
the micro plan into account.
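The two-stage micro-planning pipeline described above can be caricatured as "select and order records, then verbalise the plan". The sketch below is a deliberately simplistic, rule-based stand-in (hypothetical record fields, template verbaliser, hand-set salience scores) for what the thesis implements with neural models:

```python
# Toy sketch of two-stage micro planning for data-to-text generation.
# Record fields, salience scores, and the template verbaliser are all
# hypothetical; the thesis learns both stages with neural models.

def micro_plan(records):
    """Stage 1: select salient records and fix their order."""
    salient = [r for r in records if r["salience"] > 0.5]
    return sorted(salient, key=lambda r: -r["salience"])

def generate(plan):
    """Stage 2: verbalise the plan record by record, preserving its order."""
    return " ".join(f"{r['entity']} recorded {r['value']} {r['type']}." for r in plan)

records = [
    {"entity": "Team A", "type": "points", "value": 102, "salience": 0.9},
    {"entity": "Player X", "type": "assists", "value": 3, "salience": 0.2},
    {"entity": "Player Y", "type": "points", "value": 31, "salience": 0.7},
]
plan = micro_plan(records)
print(generate(plan))  # → Team A recorded 102 points. Player Y recorded 31 points.
```

The point of the decomposition is that content selection and ordering become an explicit, inspectable intermediate object rather than an implicit byproduct of decoding.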
We then show how data-to-text generation can benefit from higher-level latent entity planning. Here, we make use of entity-specific representations which are dynamically updated. The text is generated conditioned on entity representations and the
records corresponding to the entities by using hierarchical attention at each time step.
We then combine planning with the high level organization of entities, events, and
their interactions. Such coarse-grained macro plans are learnt from data and given
as input to the generator. Finally, we present work on making macro plans latent
while incrementally generating a document paragraph by paragraph. We infer latent
plans sequentially with a structured variational model while interleaving the steps of
planning and generation. Text is generated by conditioning on previous variational
decisions and previously generated text.
Overall, our results show that planning makes data-to-text generation more interpretable, improves the factuality and coherence of the generated documents, and reduces redundancy in the output document.
Increased lifetime of Organic Photovoltaics (OPVs) and the impact of degradation, efficiency and costs in the LCOE of Emerging PVs
Emerging photovoltaic (PV) technologies such as organic photovoltaics (OPVs) and perovskites (PVKs) have the potential to disrupt the PV market due to their ease of fabrication (compatible with cheap roll-to-roll processing) and installation, as well as their significant efficiency improvements in recent years. However, rapid degradation is still an issue present in many emerging PVs, which must be addressed to enable their commercialisation. This thesis presents an OPV lifetime-enhancing technique, the addition of the insulating polymer PMMA to the active layer, and a novel model for quantifying the impact of degradation (alongside efficiency and cost) upon the levelized cost of energy (LCOE) of real-world emerging PV installations.
The effect of PMMA morphology on the success of a ternary strategy was investigated, leading to device design guidelines. It was found that either increasing the weight percent (wt%) or molecular weight (MW) of PMMA resulted in an increase in the volume of PMMA-rich islands, which provided the OPV protection against water and oxygen ingress. It was also found that adding PMMA can be effective in enhancing the lifetime of different active material combinations, although not to the same extent, and that processing additives can have a negative impact on device lifetime.
A novel model was developed taking into account realistic degradation profiles sourced from a literature review of state-of-the-art OPV and PVK devices. It was found that optimal strategies to improve LCOE depend on the present characteristics of a device, and that panels with a good balance of efficiency and degradation were better than panels with higher efficiency but also higher degradation. Further, it was found that low-cost locations benefited more from reductions in degradation rate and module cost, whilst high-cost locations benefited more from improvements in initial efficiency, lower discount rates, and reductions in install costs.
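The efficiency-versus-degradation trade-off described above can be sketched numerically with the standard LCOE definition (discounted lifetime costs over discounted lifetime energy). The calculation below is a minimal, hypothetical illustration with made-up numbers and a constant annual degradation rate, not the thesis's model, which draws degradation profiles from the literature:

```python
# Minimal LCOE sketch including a constant annual degradation rate.
# LCOE = discounted lifetime costs / discounted lifetime energy.
# All numbers below are illustrative, not data from the thesis.

def lcoe(capex, opex, energy_year1, degradation, discount, years):
    costs = capex + sum(opex / (1 + discount) ** t for t in range(1, years + 1))
    energy = sum(
        energy_year1 * (1 - degradation) ** (t - 1) / (1 + discount) ** t
        for t in range(1, years + 1)
    )
    return costs / energy  # cost per unit energy (e.g. $/kWh)

# Two hypothetical panels with identical costs: higher initial yield but
# faster degradation vs. a balanced panel, echoing the qualitative finding.
fast = lcoe(capex=1000, opex=10, energy_year1=1500, degradation=0.04, discount=0.05, years=20)
balanced = lcoe(capex=1000, opex=10, energy_year1=1350, degradation=0.01, discount=0.05, years=20)
print(round(fast, 4), round(balanced, 4))
```

Under these assumptions the balanced panel achieves the lower LCOE despite its lower initial yield, because slower degradation preserves discounted energy output over the panel lifetime.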
Foundations for programming and implementing effect handlers
First-class control operators provide programmers with an expressive and efficient
means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and
control idioms as shareable libraries. Effect handlers provide a particularly structured
approach to programming with first-class control by naming control-reifying operations
and separating them from their handling.
This thesis is composed of three strands of work in which I develop operational
foundations for programming and implementing effect handlers as well as explore
the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus of a statically
typed programming language with a structural notion of effect types, as opposed to the
nominal notion of effect types that dominates the literature. With the structural approach,
effects need not be declared before use. The usual safety properties of statically typed
programming are retained by making crucial use of row polymorphism to build and
track effect signatures. The calculus features three forms of handlers: deep, shallow,
and parameterised. They each offer a different approach to manipulate the control state
of programs. Traditional deep handlers are defined by folds over computation trees,
and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are
defined by case splits (rather than folds) over computation trees. Parameterised handlers
are deep handlers extended with a state value that is threaded through the folds over
computation trees. To demonstrate the usefulness of effects and handlers as a practical
programming abstraction I implement the essence of a small UNIX-style operating
system complete with multi-user environment, time-sharing, and file I/O.
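The fold-versus-case-split distinction between deep and shallow handlers can be illustrated with a toy encoding. The sketch below (free-monad-style computation trees with handlers as plain dictionaries) is a hypothetical Python illustration of that distinction only, not the thesis's typed calculus with row-polymorphic effect signatures:

```python
# Toy sketch of deep vs. shallow effect handlers over computation trees.
# A computation is ("return", v) or ("op", name, arg, k), where k maps the
# operation's result to the rest of the computation. Entirely illustrative.

def deep_handle(handler, comp):
    """Deep handler: a fold — the handler is re-applied in every resumption."""
    if comp[0] == "return":
        return handler["return"](comp[1])
    _, name, arg, k = comp
    resume = lambda x: deep_handle(handler, k(x))  # re-wrap the continuation
    return handler[name](arg, resume)

def shallow_handle(handler, comp):
    """Shallow handler: a case split — the resumption is left unhandled."""
    if comp[0] == "return":
        return handler["return"](comp[1])
    _, name, arg, k = comp
    return handler[name](arg, k)  # k yields a raw, unhandled computation tree

# Example effect: "get" asks the handler for a value, twice.
prog = ("op", "get", None, lambda x: ("op", "get", None, lambda y: ("return", x + y)))

reader = {"return": lambda v: v, "get": lambda _arg, resume: resume(21)}
print(deep_handle(reader, prog))  # both "get"s handled → 42

# Shallowly, only the first "get" is interpreted; the remaining tree must be
# handled again explicitly (here: deeply).
rest = shallow_handle({"return": lambda v: ("return", v), "get": lambda _a, k: k(21)}, prog)
print(deep_handle(reader, rest))  # → 42
```

A parameterised handler, in this encoding, would simply thread an extra state argument through `resume` in the deep fold.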
The second strand studies continuation passing style (CPS) and abstract machine
semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The
CPS translation is obtained through a series of refinements of a basic first-order CPS
translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually
arriving at the notion of generalised continuations, which admit simultaneous support for
deep, shallow, and parameterised handlers. The initial refinement adds support for deep
handlers by representing stacks of continuations and handlers as a curried sequence of
arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the
CPS translation is refined once more to obtain an uncurried representation of stacks
of continuations and handlers. Finally, the translation is made higher-order in order to
contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for
deep, shallow, and parameterised effect handlers.
The third strand explores the expressiveness of effect handlers. First, I show that
deep, shallow, and parameterised notions of handlers are interdefinable by way of typed
macro-expressiveness, which provides a syntactic notion of expressiveness that affirms
the existence of encodings between handlers, but it provides no information about the
computational content of the encodings. Second, using the semantic notion of expressiveness I show that for a class of programs a programming language with first-class
control (e.g. effect handlers) admits asymptotically faster implementations than possible in a language without first-class control.
Fiabilité de l'underfill et estimation de la durée de vie d'assemblages microélectroniques
Abstract : In order to protect the interconnections in flip-chip packages, an underfill material layer
is used to fill the volumes and provide mechanical support between the silicon chip and
the substrate. Due to the chip corner geometry and the mismatch of coefficient of thermal
expansion (CTE), the underfill suffers from a stress concentration at the chip corners when
the temperature is lower than the curing temperature. This stress concentration leads
to subsequent mechanical failures in flip-chip packages, such as chip-underfill interfacial
delamination and underfill cracking. Local stresses and strains are the most important
parameters for understanding the mechanism of underfill failures. As a result, the industry
currently relies on the finite element method (FEM) to calculate the stress components, but
the FEM may not be accurate enough compared to the actual stresses in underfill. FEM
simulations require a careful consideration of important geometrical details and material
properties. This thesis proposes a modeling approach that can accurately estimate the underfill delamination
areas and crack trajectories, with the following three objectives. The first
objective was to develop an experimental technique capable of measuring underfill deformations
around the chip corner region. This technique combined confocal microscopy and
the digital image correlation (DIC) method to enable tri-dimensional strain measurements
at different temperatures, and was named the confocal-DIC technique. This technique was
first validated by a theoretical analysis on thermal strains. In a test component similar
to a flip-chip package, the strain distribution obtained by the FEM model was in good
agreement with the results measured by the confocal-DIC technique, with relative errors
less than 20% at chip corners. Then, the second objective was to measure the strain near
a crack in underfills. Artificial cracks with lengths of 160 μm and 640 μm were fabricated
from the chip corner along the 45° diagonal direction. The confocal-DIC-measured
maximum hoop strains and first principal strains were located at the crack front area for
both the 160 μm and 640 μm cracks. A crack model was developed using the extended
finite element method (XFEM), and the strain distribution in the simulation had the same
trend as the experimental results. The distribution of hoop strains was in good agreement
with the measured values when the model element size was smaller than 22 μm, small enough to
capture the strong strain gradient near the crack tip. The third objective was to propose
a modeling approach for underfill delamination and cracking with the effects of manufacturing
variables. A deep thermal cycling test was performed on 13 test cells to obtain the
reference chip-underfill delamination areas and crack profiles. An artificial neural network
(ANN) was trained to relate the effects of manufacturing variables and the number of
cycles to first delamination of each cell. The predicted numbers of cycles for all 6 cells in
the test dataset were located in the intervals of experimental observations. The growth
of delamination was simulated with FEM by evaluating the strain energy amplitude at
the interface elements between the chip and underfill. For 5 out of 6 cells in validation,
the delamination growth model was consistent with the experimental observations. The
cracks in bulk underfill were modelled by XFEM without predefined paths. The directions of edge cracks were in good agreement with the experimental observations, with an error
of less than 2.5°. This approach met the thesis's goal of estimating initial underfill delamination, delamination areas, and crack paths in actual industrial flip-chip
assemblies.
After Creation: Intergovernmental Organizations and Member State Governments as Co-Participants in an Authority Relationship
This is a re-amalgamation of what started as one manuscript and became two when the length proved to be more than any publisher wanted to consider. The splitting consisted of removing what are now Parts 3, 4, and 5 so that the manuscript focused on the outcome-related shared beliefs holding an authority relationship together. Those parts were last worked on in 2018. The rest were last worked on in late 2021 but also remain incomplete.
The relational approach adopted in this study treats intergovernmental organizations and the governments of their member states as co-participants in an authority relationship. Authority relationships link two types of actor, defined by their authority-holder or addressee role in the relationship, through a set of shared beliefs about why the relationship exists and how the participants should fulfill their respective roles. The IGO as authority holder has a role that includes a right to instruct other actors about what they should or should not do; the governments of member states as addressees are expected to comply with the instructions. Three sets of shared beliefs provide the conceptual "glue" holding the relationship together. The first defines the goal of the collective effort, providing both the rationale for having the authority relationship and a lodestar for assessments of the collective effort's success or lack of success. The second set defines the shared understanding about allocation of roles and the process of interaction by establishing shared expectations about a) the selection process by which particular actors acquire authority-holder roles, b) the definitions identifying one or more categories of addressees expected to follow instructions, and c) the procedures through which the authority holder issues instructions. The third set focuses on the outcomes of cooperation through the relationship by defining a) the substantive areas in which the authority holder may issue instructions, b) the bases for assessing the relevance of actions mandated in instructions for reaching the goal, and c) the relative efficacy of action paths chosen for reaching the goal as compared to other possible action paths.
Using an authority relationship framework for analyzing cooperation through IGOs highlights the inherently bi-directional nature of IGO-member government activity by viewing their interaction as a three-step process: the IGO as authority holder decides when to issue what instruction; the member state governments as followers react to the instruction with anything from prompt and full compliance through various forms of pushback to outright rejection; and the IGO as authority holder responds to how the followers react with efforts to increase individual compliance with instructions and reinforce continuing acceptance of the authority relationship. Foregrounding the dynamics produced by the interaction of these two streams of perception and action reveals more clearly the extent to which intergovernmental organizations acquire the capacity to operate as independent actors, the dynamic ways they maintain that capacity, and how much they influence member governments' beliefs and actions at different times. The approach fosters better understanding of why, when, and for how long governments choose cooperation through an IGO even in periods of rising unilateralism.
Stochastic maximum principle with control-dependent terminal time and applications
In this thesis we study stochastic control problems with a control-dependent stopping terminal time. We assess which methods and theorems from standard control optimization settings can be applied to this framework, and we introduce new statements where necessary.
In the first part of the thesis we study a general optimal liquidation problem with a control-dependent stopping time which is the first time the stock holding becomes zero or a fixed terminal time, whichever comes first. We prove a stochastic maximum principle (SMP) which is markedly different in its Hamiltonian condition from that of the standard SMP with fixed terminal time. The new version of the SMP involves an innovative definition of the FBSDE associated to the problem and a new type of Hamiltonian. We present several examples in which the optimal solution satisfies the SMP in this thesis but fails the standard SMP in the literature. The generalised version of the SMP Theorem can also be applied to any problem in physics and engineering in which the terminal time of the optimization depends on the control, such as optimal planning problems.
In the second part of the thesis, we introduce an optimal liquidation problem with control-dependent stopping time as before. We analyze the case when an agent is trading on a market with two financial assets correlated with each other. The agent's task is to liquidate via market orders an initial position of shares of one of the two financial assets, without having the possibility of trading the other stock. The main results of this part consist in proving a verification theorem and a comparison principle for the viscosity solution to the Hamilton-Jacobi-Bellman (HJB) equation associated to this problem, and finding an approximation of the classical solution of the HJB equation.
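For contrast with the control-dependent-time result, the standard SMP with a fixed terminal time couples the controlled SDE to an adjoint BSDE through a Hamiltonian. The sketch below is the common textbook form (sign and convention choices vary across references); it is precisely this Hamiltonian condition that the thesis's generalised version modifies:

```latex
% State dynamics:
%   dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t
% Adjoint (BSDE), fixed terminal time T:
%   dp_t = -\partial_x H(t, X_t, u_t, p_t, q_t)\,dt + q_t\,dW_t,
%   \quad p_T = \partial_x g(X_T)
% Hamiltonian:
H(t, x, u, p, q) = b(t, x, u)\cdot p
  + \operatorname{tr}\!\bigl(\sigma(t, x, u)^{\top} q\bigr) + f(t, x, u)
% Maximum condition along the optimal pair (X^*, u^*):
H(t, X^*_t, u^*_t, p_t, q_t) = \max_{u \in U} H(t, X^*_t, u, p_t, q_t)
```

With a control-dependent stopping time, the terminal condition of the adjoint equation itself depends on the control through the stopping rule, which is why the standard maximum condition above can fail and a modified Hamiltonian is needed.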
Migrant Workers' Access to Justice for Wage Theft: A Global Study of Promising Initiatives
Systemic wage theft has long been part of the labour migration landscape in every region of the world. Though every jurisdiction has judicial and/or administrative mechanisms to address wage claims, employers in every country can be confident that very few unpaid migrant workers will ever use those mechanisms to recover their wages. This is because the system is stacked against them at every stage in the wage claim process.
This situation is not inevitable. This report provides a blueprint for improving government and court wage recovery processes for migrant workers. It draws on analysis of select, promising initiatives from around the world that demonstrate how many of the barriers that impede migrant workers' access to justice can be overcome. These innovations shift risks and burdens of wage recovery away from workers and onto government and business, and disrupt employer expectations of impunity. The report proposes specific, evidence-based reform targets that can underpin global, national and local advocacy, and support greater coordination among a community of practice working to achieve labour justice for migrant workers.
International Conference Shaping light for health and wellbeing in cities
The book collects contributions presented during the international conference "Shaping light for health and wellbeing in cities", organized in the framework of the H2020 ENLIGHTENme project. The conference investigated the multifaceted consequences light has on life in cities, adopting a multidisciplinary and integrated approach to explore the complexity of the challenges urban lighting poses to health and wellbeing, the urban realm, and social life. Papers cover several disciplines such as clinical and biomedical sciences, ethics and Responsible Research & Innovation, urban planning and architecture, data accessibility and interoperability, as well as social sciences and economics, and provide multifaceted insights that inspire further explorations. Contributions represent a step towards the development of innovative policies for improving health and wellbeing in our cities, addressing both indoor and outdoor lighting.