Corporate Social Responsibility: the institutionalization of ESG
Understanding the impact of Corporate Social Responsibility (CSR) on firm performance, as it relates to industries reliant on technological innovation, is a complex and perpetually evolving challenge. To investigate this topic thoroughly, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to stand as an essentially standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis is that the evolution of CSR into its modern, quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their non-reporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this work contributes to the literature by filling gaps in the nature of the impact that ESG has on firm performance, particularly from a management perspective.
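The second hypothesis rests on regressing performance metrics on ESG scores. A minimal sketch of that kind of test, using entirely synthetic data and a hand-rolled ordinary least squares fit (the dissertation's actual data, controls, and estimator are not shown here), is:

```python
# Hypothetical sketch: testing whether an ESG score is associated with a
# long-term performance metric such as ROA via simple least squares.
# All numbers below are synthetic and illustrative only.

def ols_fit(x, y):
    """Return the OLS slope and intercept of y regressed on x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic cross-section: ESG scores and ROA (%) for ten hypothetical firms.
esg = [20, 35, 40, 50, 55, 60, 70, 75, 85, 90]
roa = [2.1, 3.0, 3.4, 4.2, 4.0, 4.8, 5.5, 5.3, 6.4, 6.9]

slope, intercept = ols_fit(esg, roa)
# A positive slope is consistent in sign with the dissertation's finding of a
# significantly positive ESG-ROA relationship (illustration only, not its data).
```

A real analysis would of course add controls, panel structure, and significance tests; the sketch only shows the shape of the long-term-metric regression.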
The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka
It was evident through the literature that the perceived value delivery of the global software engineering industry is low due to various factors. This research therefore examines global software product companies in Sri Lanka, exploring the software engineering methods and practices that increase value addition. The overall aim of the study is to identify the key determinants of value addition in the global software engineering industry and to critically evaluate their impact on software product companies, helping them maximise value addition and ultimately assure the sustainability of the industry.
An exploratory research approach was used initially, since findings would emerge as the study unfolded. A mixed-method design was employed, as the literature alone was inadequate to investigate the problem effectively and formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject matter experts covering all the disciplines in the targeted organisations; these were combined with the literature findings as well as the outcomes of market research conducted by both government and non-government institutes. Data from the interviews were analysed using NVivo 12. The findings of the existing literature were verified through the exploratory study, and the outcomes were used to formulate the questionnaire for the public survey. After cleansing the total responses received, 371 responses were retained for data analysis in SPSS 21 at an alpha level of 0.05. An internal consistency test was performed before the descriptive analysis. After assuring the reliability of the dataset, correlation, multiple regression and analysis of variance (ANOVA) tests were carried out to meet the research objectives.
Five determinants of value addition were identified, along with the key themes for each area: staffing, delivery process, use of tools, governance, and technology infrastructure. Cross-functional, self-organised teams built around value streams, employing a properly interconnected software delivery process with the right governance in the delivery pipelines, the right selection of tools and the right infrastructure, increase value delivery. Conversely, the constraints on value addition are poor interconnection of internal processes, rigid functional hierarchies, inaccurate selection and use of tools, inflexible team arrangements and inadequate focus on technology infrastructure. The findings add to the existing body of knowledge on increasing value addition through effective processes, practices and tools, and on the impact of applying the same inaccurately in the global software engineering industry.
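The internal consistency test run before the descriptive analysis is conventionally Cronbach's alpha. A small self-contained sketch of that check (synthetic survey data; the thesis's actual SPSS workflow is not reproduced here) is:

```python
# Illustrative sketch, not the thesis's actual SPSS 21 workflow: Cronbach's
# alpha, the usual internal-consistency check applied to Likert-style survey
# items before further analysis. All responses below are synthetic.

def cronbach_alpha(items):
    """items: one list per survey item, all of equal length (one entry per respondent)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Five respondents answering three related 5-point items.
responses = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(responses)  # values above ~0.7 are commonly deemed acceptable
```

Here the toy data yield an alpha above 0.8, the kind of value that would let the analysis proceed to the correlation, regression and ANOVA stages the abstract describes.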
Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond
This thesis is framed at the intersection between modern Machine Learning techniques, such as Deep Neural Networks, and reliable probabilistic modeling. In many machine learning applications, we care not only about the prediction made by a model (e.g. this lung image presents cancer) but also about how confident the model is in making this prediction (e.g. this lung image presents cancer with 67% probability). In such applications, the model assists the decision-maker (in this case a doctor) in making the final decision. As a consequence, the probabilities provided by a model must reflect the true proportions present in the set to which those probabilities are assigned; otherwise, the model is useless in practice. When this holds, we say that a model is perfectly calibrated.
In this thesis three ways are explored to provide more calibrated models. First, it is shown how to implicitly calibrate models that are decalibrated by data augmentation techniques. A cost function is introduced that resolves this decalibration, taking as its starting point ideas derived from decision making with Bayes' rule. Second, it is shown how to calibrate models using a post-calibration stage implemented with a Bayesian neural network. Finally, based on the limitations observed in the Bayesian neural network, which we hypothesize stem from a misspecified prior, a new stochastic process is introduced that serves as a prior distribution in a Bayesian inference problem.
Maroñas Molano, J. (2022). Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181582
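The notion of perfect calibration described above can be made operational: among predictions assigned confidence roughly p, a fraction roughly p should be correct. A minimal sketch of one standard way to measure departures from this, the expected calibration error over confidence bins (an illustration, not the thesis's own method), is:

```python
# Illustrative sketch of measuring calibration (not the thesis's method):
# bin predictions by confidence and compare each bin's average confidence
# with its empirical accuracy; a perfectly calibrated model scores 0.

def expected_calibration_error(confs, correct, n_bins=2):
    """confs: predicted probabilities; correct: matching 0/1 outcomes."""
    n = len(confs)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # the last bin is closed on the right so that confidence 1.0 is counted
        idx = [i for i, c in enumerate(confs)
               if lo <= c < hi or (b == n_bins - 1 and c == 1.0)]
        if not idx:
            continue
        avg_conf = sum(confs[i] for i in idx) / len(idx)
        acc = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - acc)
    return ece

# Toy example: four predictions, average confidence 0.7, accuracy 0.75,
# so the single occupied bin contributes a gap of 0.05.
ece = expected_calibration_error([0.5, 0.5, 0.9, 0.9], [1, 0, 1, 1])
```

A post-calibration stage of the kind the thesis studies would aim to transform the model's probabilities so that this gap shrinks toward zero.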
Socio-endocrinology revisited: New tools to tackle old questions
Animals’ social environments impact their health and survival, but the proximate links between sociality and fitness are still not fully understood. In this thesis, I develop and apply new approaches to address an outstanding question within this sociality-fitness link: does grooming (a widely studied, positive social interaction) directly affect glucocorticoid concentrations (GCs; a group of steroid hormones indicating physiological stress) in a wild primate? To date, negative, long-term correlations between grooming and GCs have been found, but the logistical difficulties of studying proximate mechanisms in the wild leave knowledge gaps regarding the short-term, causal mechanisms that underpin this relationship. New technologies, such as collar-mounted tri-axial accelerometers, can provide the continuous behavioural data required to match grooming to non-invasive GC measures (Chapter 1). Using Chacma baboons (Papio ursinus) living on the Cape Peninsula, South Africa as a model system, I identify giving and receiving grooming using tri-axial accelerometers and supervised machine learning methods, with high overall accuracy (~80%) (Chapter 2). I then test what socio-ecological variables predict variation in faecal and urinary GCs (fGCs and uGCs) (Chapter 3). Shorter and rainy days are associated with higher fGCs and uGCs, respectively, suggesting that environmental conditions may impose stressors in the form of temporal bottlenecks. Indeed, I find that short days and days with more rain-hours are associated with reduced giving grooming (Chapter 4), and that this reduction is characterised by fewer and shorter grooming bouts. Finally, I test whether grooming predicts GCs, and find that while there is a long-term negative correlation between grooming and GCs, grooming in the short-term, in particular giving grooming, is associated with higher fGCs and uGCs (Chapter 5). 
I end with a discussion of how the new tools I applied have enabled me to advance our understanding of sociality and stress in primate social systems (Chapter 6).
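Behaviour classification from collar-mounted tri-axial accelerometers, as in Chapter 2, typically starts by summarising raw bursts into per-window features before a supervised classifier is applied. A hedged sketch of that first step (the feature set and data are invented; the thesis's actual pipeline and ~80%-accuracy classifier are not reproduced) is:

```python
# Hypothetical sketch of accelerometer preprocessing for behaviour
# classification: reduce one window of tri-axial samples to simple summary
# features (mean and variance per axis) that a trained classifier would
# consume. Feature choice here is an assumption, not the thesis's method.

def window_features(ax, ay, az):
    """Mean and population variance per axis for one window of samples."""
    feats = []
    for axis in (ax, ay, az):
        m = sum(axis) / len(axis)
        v = sum((s - m) ** 2 for s in axis) / len(axis)
        feats.extend([m, v])
    return feats

# One toy two-sample window per axis.
feats = window_features([0.0, 1.0], [2.0, 2.0], [1.0, 3.0])
```

In practice many more features (e.g. over longer windows) would be extracted and fed to a supervised learner to distinguish giving from receiving grooming.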
Statistical Learning for Gene Expression Biomarker Detection in Neurodegenerative Diseases
In this work, statistical learning approaches are used to detect biomarkers for neurodegenerative diseases (NDs). NDs are becoming increasingly prevalent as populations age, making the understanding of disease and the identification of biomarkers increasingly important for facilitating early diagnosis and the screening of individuals for clinical trials. Advancements in gene expression profiling have enabled the exploration of disease biomarkers at an unprecedented scale. The work presented here demonstrates the value of gene expression data in understanding the underlying processes of NDs and in detecting their biomarkers. The value of applying novel approaches to previously collected -omics data is shown, and it is demonstrated that new therapeutic targets can be identified. Additionally, the importance of meta-analysis in improving the power of multiple small studies is demonstrated. The value of blood transcriptomics data in ND research is shown through network analysis and a novel hub detection method used to understand underlying processes. Finally, after demonstrating the value of blood gene expression data for investigating NDs, a combination of feature selection and classification algorithms is used to identify novel, accurate biomarker signatures for the diagnosis and prognosis of Parkinson's disease (PD) and Alzheimer's disease (AD). Additionally, the use of feature pools based on previous knowledge of disease, and the viability of neural networks for dimensionality reduction and biomarker detection, are demonstrated and discussed. In summary, gene expression data are shown to be valuable for the investigation of NDs, yielding novel gene biomarker signatures for the diagnosis and prognosis of PD and AD.
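A common first pass in the feature-selection step described above is to rank genes by how strongly their expression separates cases from controls. A minimal sketch of such a screen (an assumed, generic approach with synthetic data, not the thesis's exact algorithm) is:

```python
# Illustrative biomarker screen (assumption, not the thesis's exact method):
# rank genes by the absolute value of a Welch-style two-sample t statistic
# between cases and controls, then keep the top-k as candidate features.

def t_stat(a, b):
    """Welch-style t statistic between two sample lists."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, v
    ma, va = mean_var(a)
    mb, vb = mean_var(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def top_genes(expr_cases, expr_controls, k):
    """expr_*: dict of gene -> expression values; returns k genes by |t|."""
    scores = {g: abs(t_stat(expr_cases[g], expr_controls[g])) for g in expr_cases}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Synthetic expression values; gene names are placeholders.
cases = {"GENE_A": [5.1, 5.3, 5.2], "GENE_B": [2.0, 2.1, 1.9]}
controls = {"GENE_A": [3.0, 3.2, 3.1], "GENE_B": [2.0, 2.2, 1.8]}
candidates = top_genes(cases, controls, 1)  # GENE_A separates the groups most
```

The surviving candidates would then feed a classifier, with cross-validation guarding against the multiple-testing pitfalls inherent in screening thousands of genes.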
Baboon (Papio ursinus) group decision making at the urban edge
Social animals need to coordinate their group movements and make group decisions if they are to remain together. The development of urban landscapes has fragmented natural landscapes and resulted in increased human-wildlife interactions, affecting animals' decision-making. Interactions between non-human primates and people are common; high-energy foods found in urban habitats provide rich foraging opportunities for primates, increasing their growth and reproduction, but also resulting in chronic conflict with people that reduces both primates' and people's wellbeing. Understanding the decision-making dynamics of urban-foraging groups will therefore inform management strategies. Here, I use high-resolution 1 Hz GPS data to track the decisions of n=13 adults in a group of chacma baboons (Papio ursinus) to move into urban spaces at the edge of the City of Cape Town, South Africa. Management teams contracted by the city aim to reduce negative baboon-human interactions by herding troops away from urban areas, targeting the males that tend to lead chacma baboon troop decision-making. I find the troop shows high fission-fusion dynamics when moving into urban space. The size and composition of groups entering the urban space vary, suggesting individuals are driven by self-interest. After entering urban space, lower-ranking females spent more time there than higher-ranking individuals. Dominance rank predicted a baboon's importance in the urban association network, and important individuals were more likely to lead larger groups into urban space. However, the alpha male was not as involved in the urban association network as predicted, with the beta-ranked male being the most central. I interpret these patterns as a consequence of the baboons' responses to management interventions, which focus on the alpha and his affiliates when in the urban space.
The high level of fission-fusion in the troop highlights the behavioural flexibility of individuals and the group in response to urban spaces and the management therein.
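Centrality in an association network, as invoked above, can be quantified in several ways. One simple possibility, weighted degree centrality over a toy network (an assumption for illustration; the thesis may use a different measure, and the data here are invented), looks like:

```python
# Hypothetical sketch: ranking individuals by weighted degree centrality in
# an association network. The measure, node names, and association indices
# are illustrative assumptions, not the thesis's data or method.

def weighted_degree(network, node):
    """Sum of association weights on edges incident to `node`."""
    return sum(w for (a, b), w in network.items() if node in (a, b))

# Toy association network: (individual, individual) -> association index.
assoc = {
    ("beta_male", "female_1"): 0.8,
    ("beta_male", "female_2"): 0.6,
    ("alpha_male", "female_1"): 0.3,
}

central = max({"alpha_male", "beta_male"}, key=lambda n: weighted_degree(assoc, n))
# In this invented network the beta male comes out most central, mirroring
# the direction of the abstract's finding (the real analysis is far richer).
```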
Optimal partition recovery in general graphs
We consider a graph-structured change point problem in which we observe a random vector with piece-wise constant but otherwise unknown mean and whose independent, sub-Gaussian coordinates correspond to the n nodes of a fixed graph. We are interested in the localisation task of recovering the partition of the nodes associated with the constancy regions of the mean vector or, equivalently, of estimating the cut separating the sub-graphs over which the mean remains constant. Although graph-valued signals of this type have been previously studied in the literature for the different tasks of testing for the presence of an anomalous cluster and of estimating the mean vector, no localisation results are known outside the classical case of chain graphs. When the partition S consists of only two elements, we characterise the difficulty of the localisation problem in terms of four key parameters: the maximal noise variance σ², the size Δ of the smaller element of the partition, the magnitude κ of the difference in the signal values across contiguous elements of the partition, and the sum of the effective resistance edge weights |∂r(S)| of the corresponding cut, a graph-theoretic quantity quantifying the size of the partition boundary. In particular, we demonstrate an information-theoretic lower bound implying that, in the low signal-to-noise ratio regime κ²Δσ⁻²|∂r(S)|⁻¹ ≲ 1, no consistent estimator of the true partition exists. On the other hand, when κ²Δσ⁻²|∂r(S)|⁻¹ ≳ ζₙ log{r(|E|)}, with r(|E|) being the sum of effective-resistance-weighted edges and ζₙ being any diverging sequence in n, we show that a polynomial-time, approximate ℓ₀-penalised least squares estimator delivers a localisation error, measured by the symmetric difference between the true and estimated partition, of order κ⁻²σ²|∂r(S)| log{r(|E|)}. Aside from the log{r(|E|)} term, this rate is minimax optimal. Finally, we provide discussions of the localisation error for more general partitions of unknown sizes.
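On the classical chain-graph case the abstract contrasts with, the two-element partition problem reduces to locating a single change point in a piecewise-constant sequence. A brute-force least squares stand-in for the paper's penalised estimator (synthetic data; the paper's actual estimator is approximate, penalised, and works on general graphs) is:

```python
# Hedged illustration on the chain-graph special case: recover a single
# change point by minimising the within-segment sum of squared errors over
# all candidate splits. A brute-force stand-in for the paper's approximate
# l0-penalised least squares estimator; data are synthetic.

def best_split(y):
    """Return the split index t minimising SSE(y[:t]) + SSE(y[t:])."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    n = len(y)
    return min(range(1, n), key=lambda t: sse(y[:t]) + sse(y[t:]))

# Noisy piecewise-constant signal: mean jumps from ~0 to ~2 after index 3.
signal = [0.1, -0.2, 0.0, 0.1, 2.0, 2.1, 1.9, 2.2]
change = best_split(signal)  # the estimated boundary of the two-element partition
```

Here the signal-to-noise ratio is high, so the split is recovered exactly; the paper's results characterise precisely when such recovery is and is not possible on general graphs.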