9,757 research outputs found

    Corporate Social Responsibility: the institutionalization of ESG

    Understanding the impact of Corporate Social Responsibility (CSR) on firm performance in industries reliant on technological innovation is a complex and continually evolving challenge. To investigate this topic thoroughly, this dissertation adopts an economics-based structure organised around three primary hypotheses. Each hypothesis stands as an essentially self-contained empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis holds that the evolution of CSR into its modern, quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature on the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their non-reporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management: long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social-impact mitigation. Overall, this work contributes to the literature by filling gaps in understanding the nature of the impact that ESG has on firm performance, particularly from a management perspective.

    Preferentialism and the conditionality of trade agreements. An application of the gravity model

    Modern economic growth is driven by international trade, and the preferential trade agreement constitutes the primary fit-for-purpose mechanism for establishing, facilitating, and governing its flows. However, too little attention has been afforded to the differences in content and conditionality associated with different trade agreements. This has led to an under-considered mischaracterisation of the design-flow relationship. Similarly, while the relationship between trade facilitation and trade is clear, the way trade facilitation affects other areas of economic activity, with respect to preferential trade agreements, has received considerably less attention. In light of an increasingly globalised and interdependent trading system, the interplay between trade facilitation and foreign direct investment is of particular importance. Accordingly, this thesis explores the bilateral trade and investment effects of specific conditionality sets, as established within Preferential Trade Agreements (PTAs). Chapter one utilises recent content condition-indexes for depth, flexibility, and constraints on flexibility, established by DĂŒr et al. (2014) and Baccini et al. (2015), within a gravity framework to estimate the average treatment effect of trade agreement characteristics across bilateral trade relationships in the Association of Southeast Asian Nations (ASEAN) over 1948–2015. This chapter finds that the composition of a given ASEAN trade agreement’s characteristic set has significantly determined the concomitant bilateral trade flows. Conditions determining the classification of a trade agreement’s depth are positively associated with an increase in bilateral trade, reflecting the further removal of trade barriers and frictions facilitated by deeper trade agreements. 
Flexibility conditions, and constraints on flexibility conditions, are also identified as significant determinants of a given trade agreement’s treatment effect on subsequent bilateral trade flows. Given the political nature of their inclusion (i.e., addressing short-term domestic discontent), this influence is negative as regards trade flows. These results highlight the longer implementation time frames required for trade impediments to be removed in markets with higher domestic uncertainty. Chapter two explores the incorporation of non-trade issue (NTI) conditions in PTAs. Such conditions are increasing at both the intensive and extensive margins. There is a concern among developing nations that this growth of NTI inclusions serves as a way for high-income (HI) nations to dictate the trade agenda, such that developing nations are subject to ‘principled protectionism’. There is evidence that NTI provisions are partly driven by protectionist motives, but the effect on trade flows remains largely undiscussed. Utilising the gravity model for trade, I test Lechner’s (2016) comprehensive NTI dataset for 202 bilateral country pairs across a 32-year timeframe and find that, on average, NTIs are associated with an increase in bilateral trade. This boost is primarily associated with the market access that a PTA utilising NTIs facilitates. In addition, these results align theoretically with the discussions on market harmonisation, shared values, and the erosion of artificial production advantages. Instead of inhibiting trade through burdensome costs, NTIs are acting to support a more stable production and trading environment, motivated by enhanced market access. Employing a novel classification to capture the power supremacy associated with shaping NTIs, this chapter highlights that the positive impact of NTIs is largely driven by the relationship between HI nations and middle-to-low-income (MTLI) counterparts. 
Chapter three employs the gravity model, theoretically augmented for foreign direct investment (FDI), to estimate the effects of trade facilitation conditions utilising indexes established by Neufeld (2014) and the bilateral FDI data curated by UNCTAD (2014). The resultant dataset covers 104 countries over a period of 12 years (2001–2012), containing 23,640 observations. The results highlight the bilateral-FDI-enhancing effects of trade facilitation conditions in the ASEAN context, aligning with the theoretical branch of the FDI-PTA literature that has outlined how the ratification of a trade agreement results in improved economic prospects between partners (Medvedev, 2012), owing to the interrelation between trade and investment within an improving regulatory environment. The results align with the expectation that an enhanced trade facilitation landscape (one in which such formalities, procedures, information, and expectations around trade facilitation are conditioned for) incentivises and attracts FDI.
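The gravity framework used throughout these chapters relates bilateral flows to the economic mass of each partner, the distance between them, and agreement characteristics. A minimal, illustrative sketch of the log-linearised specification on synthetic data is below; the variable names, coefficient values, and the simple depth index are assumptions for demonstration, not the thesis's actual specification or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # synthetic country-pair-year observations

# Covariates of a log-linearised gravity equation:
# log(trade_ij) = b0 + b1*log(GDP_i) + b2*log(GDP_j) + b3*log(dist_ij) + b4*depth_ij
log_gdp_i = rng.normal(10, 1, n)
log_gdp_j = rng.normal(10, 1, n)
log_dist = rng.normal(8, 0.5, n)
depth = rng.integers(0, 8, n).astype(float)  # hypothetical PTA depth index

beta_true = np.array([-5.0, 1.0, 1.0, -1.2, 0.15])
X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist, depth])
log_trade = X @ beta_true + rng.normal(0, 0.1, n)

# OLS estimate of the gravity coefficients
beta_hat, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(beta_hat.round(2))
```

In applied work the same structure is usually estimated with pair and year fixed effects and, where zero flows matter, by Poisson pseudo-maximum likelihood rather than OLS on logs; the sketch keeps only the core mechanics.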

    Statistical-dynamical analyses and modelling of multi-scale ocean variability

    This thesis aims to provide a comprehensive analysis of multi-scale oceanic variability using various statistical and dynamical tools, and to explore data-driven methods for the accurate statistical emulation of the oceans. We considered the classical, wind-driven, double-gyre ocean circulation model in quasi-geostrophic approximation and obtained its eddy-resolving solutions in terms of the potential vorticity anomaly (PVA) and geostrophic streamfunctions. The reference solutions possess two asymmetric gyres of opposite circulations and a strong meandering eastward jet separating them, with rich eddy activity around it, akin to the Gulf Stream in the North Atlantic and the Kuroshio in the North Pacific. This thesis is divided into two parts. The first part discusses a novel scale-separation method based on local spatial correlations, called correlation-based decomposition (CBD), and provides a comprehensive analysis of mesoscale eddy forcing. In particular, we analyse the instantaneous and time-lagged interactions between the diagnosed eddy forcing and the evolving large-scale PVA using novel 'product integral' characteristics. The product-integral time series uncover robust causality between two drastically different yet interacting flow quantities, termed 'eddy backscatter'. We also show data-driven augmentation of non-eddy-resolving ocean models by feeding them the eddy fields to restore the missing eddy-driven features, such as the merging western boundary currents, their eastward extension, and the low-frequency variability of gyres. In the second part, we present a systematic inter-comparison of linear regression (LR), stochastic, and deep-learning methods to build low-cost reduced-order statistical emulators of the oceans. We obtain forecasts on seasonal and centennial timescales and assess them for their skill, cost, and complexity. 
We found that the multi-level linear stochastic model performs the best, followed by the "hybrid stochastically-augmented deep learning models". The superiority of these methods underscores the importance of incorporating core dynamics, memory effects, and model errors for robust emulation of multi-scale dynamical systems, such as the oceans.
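The backbone of such a linear stochastic emulator is a propagator fitted to the data plus stochastic forcing calibrated from the residuals. A minimal one-dimensional sketch on a synthetic AR(1) series is below; the scalar state, coefficient, and noise level are illustrative assumptions, not the thesis's multi-level model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "large-scale" state time series from a stable AR(1) process
A_true, T = 0.9, 5000
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = A_true * x[t] + rng.normal(0, 0.1)

# Fit the linear propagator by least squares, then calibrate the noise
A_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
sigma_hat = (x[1:] - A_hat * x[:-1]).std()

# Emulate: deterministic linear dynamics plus stochastic forcing
x_em = np.zeros(T)
x_em[0] = x[0]
for t in range(T - 1):
    x_em[t + 1] = A_hat * x_em[t] + rng.normal(0, sigma_hat)

print(round(A_hat, 2))  # close to A_true
```

The emulated series reproduces the climatological variance of the training data at a tiny fraction of the cost of integrating the dynamical model, which is the point of statistical emulation; multi-level extensions add memory by stacking equations for the residuals themselves.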

    Examining the Impact of Personal Social Media Use at Work on Workplace Outcomes

    A noticeable shift is underway in today’s multi-generational workforce. As younger employees propel digital workforce transformation and embrace technology adoption in the workplace, organisations need to show they are forward-thinking in their digital transformation strategies, and the emergent integration of social media in organisations is reshaping internal communication strategies in a bid to improve corporate reputations and foster employee engagement. However, the impact of personal social media use on psychological and behavioural workplace outcomes is still debated, with contrasting results in the literature identifying both positive and negative effects on workplace outcomes among organisational employees. This study examines this debate through the lens of social capital theory and studies personal social media use at work using the distinct variables of social use, cognitive use, and hedonic use. A quantitative analysis of data from 419 organisational employees in Jordan using SEM-PLS reveals that personal social media use at work is a double-edged sword, as its impact differs by usage type. First, the social use of personal social media at work reduces job burnout, turnover intention, presenteeism, and absenteeism; it also increases job involvement and organisational citizenship behaviour. Second, the cognitive use of personal social media at work increases job involvement, organisational citizenship behaviour, and employee adaptability, and decreases presenteeism and absenteeism; however, it also increases job burnout and turnover intention. Finally, the hedonic use of personal social media at work carries only negative effects, increasing job burnout and turnover intention. 
This study contributes to managerial understanding by showing the impact of different types of personal social media usage. It recommends that organisations not limit employee access to personal social media during work time, but rather focus on raising awareness of the negative effects of excessive usage on employee well-being, encouraging low-to-moderate use of personal social media at work and other personal and work-related online interactions associated with positive workplace outcomes. It also highlights the need for further research in regions such as the Middle East, with their distinct cultural and socio-economic contexts.

    FiabilitĂ© de l’underfill et estimation de la durĂ©e de vie d’assemblages microĂ©lectroniques

    In order to protect the interconnections in flip-chip packages, an underfill material layer is used to fill the volumes and provide mechanical support between the silicon chip and the substrate. Due to the chip-corner geometry and the mismatch of coefficients of thermal expansion (CTE), the underfill suffers from a stress concentration at the chip corners when the temperature is lower than the curing temperature. This stress concentration leads to subsequent mechanical failures in flip-chip packages, such as chip-underfill interfacial delamination and underfill cracking. Local stresses and strains are the most important parameters for understanding the mechanism of underfill failures. As a result, the industry currently relies on the finite element method (FEM) to calculate the stress components, but FEM results may not be accurate enough compared to the actual stresses in the underfill. FEM simulations require careful consideration of important geometrical details and material properties. This thesis proposes a modeling approach that can accurately estimate underfill delamination areas and crack trajectories, with the following three objectives. The first objective was to develop an experimental technique capable of measuring underfill deformations around the chip-corner region. This technique combined confocal microscopy and the digital image correlation (DIC) method to enable three-dimensional strain measurements at different temperatures, and was named the confocal-DIC technique. This technique was first validated by a theoretical analysis of thermal strains. In a test component similar to a flip-chip package, the strain distribution obtained by the FEM model was in good agreement with the results measured by the confocal-DIC technique, with relative errors of less than 20% at chip corners. The second objective was to measure the strain near a crack in underfills. 
Artificial cracks with lengths of 160 ÎŒm and 640 ÎŒm were fabricated from the chip corner along the 45° diagonal direction. The confocal-DIC-measured maximum hoop strains and first principal strains were located at the crack-front area for both the 160 ÎŒm and 640 ÎŒm cracks. A crack model was developed using the extended finite element method (XFEM), and the strain distribution in the simulation showed the same trend as the experimental results. The distribution of hoop strains was in good agreement with the measured values when the model element size was smaller than 22 ÎŒm, small enough to capture the strong strain gradient near the crack tip. The third objective was to propose a modeling approach for underfill delamination and cracking that accounts for the effects of manufacturing variables. A deep thermal cycling test was performed on 13 test cells to obtain the reference chip-underfill delamination areas and crack profiles. An artificial neural network (ANN) was trained to relate the effects of manufacturing variables to the number of cycles to first delamination of each cell. The predicted numbers of cycles for all 6 cells in the test dataset fell within the intervals of experimental observations. The growth of delamination was modelled in the FEM by evaluating the strain energy amplitude at the interface elements between the chip and underfill. For 5 of the 6 cells in validation, the delamination growth model was consistent with the experimental observations. The cracks in bulk underfill were modelled by XFEM without predefined paths. The directions of edge cracks were in good agreement with the experimental observations, with an error of less than 2.5°. 
This approach met the thesis goal of estimating initial underfill delamination, delamination areas, and crack paths in actual industrial flip-chip assemblies.
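The ANN step above maps a handful of manufacturing variables to cycles-to-first-delamination. A minimal sketch of that idea, a one-hidden-layer network trained by plain gradient descent on synthetic data, is below; the input variables, the target relationship, and the network size are illustrative assumptions, not the thesis's actual model or measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical manufacturing variables (e.g. fillet height, bond-line
# thickness, voiding) mapped to cycles-to-first-delamination; synthetic data.
X = rng.normal(size=(200, 3))
y = 1000 + 150 * X[:, 0] - 80 * X[:, 1] + rng.normal(0, 20, 200)
y = (y - y.mean()) / y.std()          # standardise the target

# One-hidden-layer tanh network, trained by gradient descent on squared error
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr, losses = 0.05, []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the mean-squared-error gradient
    gW2 = 2 * h.T @ err / len(y); gb2 = 2 * err.mean()
    gh = 2 * np.outer(err, W2) / len(y) * (1 - h ** 2)
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(round(losses[0], 3), "->", round(losses[-1], 3))  # loss decreases
```

In practice one would validate such a surrogate exactly as the thesis does: hold out cells, predict their cycles-to-failure, and check the predictions against experimental intervals.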

    Neural Natural Language Generation: A Survey on Multilinguality, Multimodality, Controllability and Learning

    Developing artificial learning systems that can understand and generate natural language has been one of the long-standing goals of artificial intelligence. Recent decades have witnessed impressive progress on both of these problems, giving rise to a new family of approaches. In particular, advances in deep learning over the past few years have led to neural approaches to natural language generation (NLG). These methods combine generative language learning techniques with neural-network-based frameworks. With a wide range of applications in natural language processing, neural NLG (NNLG) is a new and fast-growing field of research. In this state-of-the-art report, we investigate the recent developments and applications of NNLG in full from a multidimensional view, covering critical perspectives such as multimodality, multilinguality, controllability, and learning strategies. We summarize the fundamental building blocks of NNLG approaches from these aspects and provide detailed reviews of commonly used preprocessing steps and basic neural architectures. This report also focuses on the seminal applications of these NNLG models, such as machine translation, description generation, automatic speech recognition, abstractive summarization, text simplification, question answering and generation, and dialogue generation. Finally, we conclude with a thorough discussion of the described frameworks by pointing out some open research directions. This work has been partially supported by the European Commission ICT COST Action “Multi-task, Multilingual, Multi-modal Language Generation” (CA18231). AE was supported by BAGEP 2021 Award of the Science Academy. EE was supported in part by TUBA GEBIP 2018 Award. BP is in part funded by Independent Research Fund Denmark (DFF) grant 9063-00077B. IC has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 838188. 
EL is partly funded by Generalitat Valenciana and the Spanish Government through projects PROMETEU/2018/089 and RTI2018-094649-B-I00, respectively. SMI is partly funded by UNIRI project uniri-drustv-18-20. GB is partly supported by the Ministry of Innovation and the National Research, Development and Innovation Office within the framework of the Hungarian Artificial Intelligence National Laboratory Programme. COT is partially funded by the Romanian Ministry of European Investments and Projects through the Competitiveness Operational Program (POC) project “HOLOTRAIN” (grant no. 29/221 ap2/07.04.2020, SMIS code: 129077) and by the German Academic Exchange Service (DAAD) through the project “AWAKEN: content-Aware and netWork-Aware faKE News mitigation” (grant no. 91809005). ESA is partially funded by the German Academic Exchange Service (DAAD) through the project “Deep-Learning Anomaly Detection for Human and Automated Users Behavior” (grant no. 91809358).

    Stochastic maximum principle with control-dependent terminal time and applications

    In this thesis we study stochastic control problems with a control-dependent stopping terminal time. We assess which methods and theorems from standard control-optimization settings can be applied to this framework, and we introduce new statements where necessary. In the first part of the thesis we study a general optimal liquidation problem with a control-dependent stopping time, namely the first time the stock holding becomes zero or a fixed terminal time, whichever comes first. We prove a stochastic maximum principle (SMP) which is markedly different in its Hamiltonian condition from that of the standard SMP with fixed terminal time. The new version of the SMP involves an innovative definition of the FBSDE associated with the problem and a new type of Hamiltonian. We present several examples in which the optimal solution satisfies the SMP derived in this thesis but fails the standard SMP in the literature. The generalised version of the SMP theorem can also be applied to any problem in physics and engineering in which the terminal time of the optimization depends on the control, such as optimal planning problems. In the second part of the thesis, we introduce an optimal liquidation problem with a control-dependent stopping time as before. We analyze the case of an agent trading on a market with two financial assets correlated with each other. The agent’s task is to liquidate via market orders an initial position of shares of one of the two financial assets, without the possibility of trading the other stock. The main results of this part consist in proving a verification theorem and a comparison principle for the viscosity solution to the Hamilton-Jacobi-Bellman (HJB) equation, and in finding an approximation of the classical solution of the HJB equation associated with this problem.
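For context, the fixed-horizon SMP that the thesis generalises can be stated compactly. The sketch below uses one common sign convention (conventions for the signs of $f$, $g$, and the terminal adjoint condition vary across the literature) and is a generic statement, not the thesis's modified Hamiltonian condition.

```latex
% Standard SMP with fixed terminal time T (one common sign convention).
% The thesis modifies the Hamiltonian condition and the FBSDE when T
% depends on the control u.
\begin{align*}
  dX_t &= b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t,\\
  J(u) &= \mathbb{E}\Big[\int_0^T f(t, X_t, u_t)\,dt + g(X_T)\Big],\\
  H(t, x, u, p, q) &= b(t, x, u)\,p + \sigma(t, x, u)\,q + f(t, x, u),\\
  dp_t &= -\partial_x H(t, X_t, u_t, p_t, q_t)\,dt + q_t\,dW_t,
  \qquad p_T = \partial_x g(X_T),
\end{align*}
% with the optimal control u* maximising H(t, X*_t, ., p_t, q_t) pointwise
% (for a maximisation problem).
```

When the stopping time itself depends on the control, the terminal condition of the adjoint pair $(p, q)$ is no longer anchored at a deterministic $T$, which is precisely what forces the new FBSDE definition described in the abstract.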

    Industry 4.0: product digital twins for remanufacturing decision-making

    Currently there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as vital to the circular economy (CE) because it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product’s life cycle. The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed, it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision making for operations planning. The model’s architecture was derived using a bottom-up approach in which requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset’s through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed to use Internet of Things (IoT) components, a neural network for remaining-life estimation, and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. 
Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information. Existing IoT components provide rudimentary “smart” capabilities, but their integration is complex, and the durability of the systems over extended product life cycles needs to be explored further.

    MDAO method and optimum designs of hybrid-electric civil airliners

    Hybrid-electric civil airliners (HECAs) are considered a leading candidate for reducing aviation emissions. This paper presents a multidisciplinary design analysis and optimization (MDAO) framework named GENUS, which has been extended to design HECAs. GENUS is a modular, expandable, and flexible design environment with 10 integrated modules for HECA design. Key extensions included hybrid-electric propulsion architectures (HEPAs), the corresponding powertrains, and power management strategies (PMS). In addition, a cost module and an aviation-emission tracking function were developed and integrated into GENUS. GENUS was validated for investigating the design of HECAs by evaluating existing HECA concepts. Furthermore, three conventional turbofans were hybridized within GENUS to analyze the sensitivity of engine performance to the degree of hybridization (DoH) of power. The effects of hybridized engines on aircraft design were evaluated based on the Boeing 737, demonstrating that at least 27.18% fuel saving, 9.97% energy saving, 12.40% cost saving, and 43.56% aviation-emission mitigation can be achieved. Finally, potential directions for applying GENUS to explore the HECA design space are discussed, with a view to maximizing the benefits of HECAs.
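The degree of hybridization drives the basic mass trade that any HECA sizing loop must resolve: battery specific energy is roughly an order of magnitude below that of jet fuel, so shifting energy to the electric chain trades fuel mass for much heavier batteries. A toy energy-split calculation is sketched below; every figure is an illustrative assumption, not a value from the GENUS study.

```python
# Toy energy-split calculation for a hybrid-electric powertrain.
# All numbers are illustrative assumptions, not values from GENUS.
E_total = 50_000          # mission propulsive energy demand, kWh (assumed)
doh = 0.2                 # degree of hybridisation of energy: electric share

e_batt = E_total * doh          # energy delivered via the electric chain
e_fuel = E_total * (1 - doh)    # energy delivered via the gas-turbine chain

BATT_SPEC = 0.4           # usable battery specific energy, kWh/kg (assumed)
FUEL_SPEC = 11.9          # jet fuel specific energy, kWh/kg
ETA_GT = 0.35             # gas-turbine chain efficiency (assumed)
ETA_EL = 0.90             # electric chain efficiency (assumed)

# Source-side masses needed to deliver the propulsive energy
m_batt = e_batt / (ETA_EL * BATT_SPEC)
m_fuel = e_fuel / (ETA_GT * FUEL_SPEC)
print(round(m_batt), round(m_fuel))  # battery vs fuel mass, kg
```

Even at a modest 20% DoH, the battery mass dwarfs the fuel mass under these assumptions, which is why an MDAO framework must close the sizing loop (heavier batteries raise the energy demand itself) rather than apply the split once.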

    Resilience in the supply chain management: understanding critical aspects and how digital technologies can contribute to Brazilian companies in the COVID-19 context

    Purpose: The present study aims to identify the most critical elements of resilience in the supply chain management of Brazilian companies and, subsequently, to debate possible digital technologies mentioned in the literature to enhance them.
    Design/methodology/approach: To identify the most critical elements, information provided by qualified academics was used. Data analysis was performed using Cronbach’s alpha coefficient, hierarchical cluster analysis, and the Fuzzy TOPSIS approach.
    Findings: The results pointed out three elements of resilience as the most critical in managing supply chains: decision-making (understood as definitions ranging from the layout of the chain’s operations network to the choice of warehouse locations, distribution centres, and manufacturing facilities), human resources (understood as management for human-resources development and knowledge management through training), and security (understood as issues related to information technology for data security). For each of these, bibliographic research was performed to identify technologies that enhance these elements of supply chain management resilience.
    Originality/value: The results presented here can significantly contribute to expanding the debates associated with resilience in managing the supply chains of Brazilian companies and to directing researchers in the area.
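Before ranking elements with Fuzzy TOPSIS, the study checks the internal consistency of the expert responses with Cronbach's alpha. A minimal sketch of that computation on synthetic survey data is below; the item count, sample size, and latent-construct setup are illustrative assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=500)
# Four items driven by one latent construct -> high internal consistency
items = latent[:, None] + rng.normal(0, 0.5, (500, 4))
print(round(cronbach_alpha(items), 2))
```

Values above roughly 0.7 are conventionally read as acceptable reliability; with items this strongly tied to one construct, the sketch yields a much higher alpha.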
    • 
