6,068 research outputs found

    Corporate Social Responsibility: the institutionalization of ESG

    Understanding the impact of Corporate Social Responsibility (CSR) on firm performance in industries reliant on technological innovation is a complex and perpetually evolving challenge. To investigate this topic thoroughly, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to stand essentially as a standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis is that the evolution of CSR into its modern, quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive for long-term, strategic metrics (ROA and ROIC) and that there is no correlation for short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their nonreporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this contributes to the literature by filling gaps in our understanding of the nature of the impact that ESG has on firm performance, particularly from a management perspective.
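
    The abstract does not specify the data or model behind the second hypothesis, so the sketch below is only a rough illustration of the kind of test it describes: regressing a long-term performance metric (ROA) on an ESG score with firm and year fixed effects. The file name and column names (esg_score, roa, firm, year) are hypothetical.

        # Illustrative sketch only; the dissertation's actual data and specification are
        # not given in the abstract. File and column names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        panel = pd.read_csv("firm_year_panel.csv")  # hypothetical firm-year panel

        # Two-way fixed effects via dummies; a positive, significant esg_score coefficient
        # would correspond to the reported long-term (ROA) result.
        model = smf.ols("roa ~ esg_score + C(firm) + C(year)", data=panel).fit()
        print(model.params["esg_score"], model.pvalues["esg_score"])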

    Chinese Benteng Women’s Participation in Local Development Affairs in Indonesia: Appropriate means for struggle and a pathway to claim citizens’ rights?

    More than two decades have passed since the devastating Asian Financial Crisis of 1997 and Suharto’s subsequent step down from the presidential throne he had occupied for more than three decades. The financial turmoil turned into a political disaster and led to massive looting that severely impacted Indonesians of Chinese descent, as well as the still-unresolved atrocities of sexual violence against women and the covert killings of students and democracy activists in this country. Since then, and specifically after May 1998, publicly known as “Reformasi”, Indonesia underwent political reform that eventually corresponded positively with its macroeconomic growth. Twenty years later, in 2018, Indonesia captured worldwide attention by successfully hosting two internationally renowned events: the Asian Games 2018, the most prestigious sporting event in Asia, held in Jakarta and Palembang, and the IMF/World Bank Annual Meeting 2018 in Bali. The IMF/World Bank Annual Meeting in particular significantly elevated Indonesia’s credibility and international prestige in the global economic power play as a nation with promising growth and openness. However, narratives about poverty and inequality, as well as rising racial tension, religious conservatism, and sexual violence against women, have been superseded by a friendly climate for foreign investment and, eventually, an excessive glorification of the nation’s economic growth. By projecting the image of a promising new economic power, as rhetorically promised by President Joko Widodo during his presidential terms, Indonesia has swept the growing inequality of this highly stratified society, historically compounded by religious and racial tension, under the carpet of the digital economy.

    Hunting Wildlife in the Tropics and Subtropics

    The hunting of wild animals for their meat has been a crucial activity in the evolution of humans. It continues to be an essential source of food and a generator of income for millions of Indigenous and rural communities worldwide. Conservationists rightly fear that excessive hunting of many animal species will cause their demise, as has already happened throughout the Anthropocene. Many species of large mammals and birds have been decimated or annihilated due to overhunting by humans. If such pressures continue, many other species will meet the same fate. Equally, if those who depend on wildlife resources are to continue using them, sustainable practices must be implemented. These communities need to remain or become custodians of the wildlife resources within their lands, for their own well-being as well as for biodiversity in general. This title is also available via Open Access on Cambridge Core.

    Developing automated meta-research approaches in the preclinical Alzheimer's disease literature

    Alzheimer’s disease is a devastating neurodegenerative disorder for which there is no cure. A crucial part of the drug development pipeline involves testing therapeutic interventions in animal disease models. However, promising findings in preclinical experiments have not translated into clinical trial success. Reproducibility has often been cited as a major issue affecting biomedical research, where experimental results in one laboratory cannot be replicated in another. By using meta-research (research on research) approaches such as systematic reviews, researchers aim to identify and summarise all available evidence relating to a specific research question. By conducting a meta-analysis, researchers can also combine the results from different experiments statistically to understand the overall effect of an intervention and to explore reasons for variation seen across different publications. Systematic reviews of the preclinical Alzheimer’s disease literature could inform decision making, encourage research improvement, and identify gaps in the literature to guide future research. However, due to the vast amount of potentially useful evidence from animal models of Alzheimer’s disease, it remains difficult to make sense of and utilise these data effectively. Systematic reviews are common practice within evidence-based medicine, yet their application to preclinical research is often limited by the time and resources required. In this thesis, I develop, build upon, and implement automated meta-research approaches to collect, curate, and evaluate the preclinical Alzheimer’s disease literature. I searched several biomedical databases to obtain all research relevant to Alzheimer’s disease. I developed a novel deduplication tool to automatically identify and remove duplicate publications found across different databases with minimal human effort. I trained a crowd of reviewers to annotate a subset of the publications identified and used these data to train a machine learning algorithm to screen the remaining publications for relevance. I developed text-mining tools to extract model, intervention, and treatment information from publications, and I improved existing automated tools for extracting the reporting of measures to reduce the risk of bias. Using these tools, I created a categorised database of research in transgenic Alzheimer’s disease animal models and created a visual summary of this dataset on an interactive, openly accessible online platform. Using the techniques described, I also identified relevant publications within the categorised dataset to perform systematic reviews of two key outcomes of interest in transgenic Alzheimer’s disease models: (1) synaptic plasticity and transmission in hippocampal slices and (2) motor activity in the open field test. Over 400,000 publications were identified across biomedical research databases, of which 230,203 were unique. In a performance evaluation across different preclinical datasets, the automated deduplication tool I developed identified over 97% of duplicate citations and had an error rate similar to that of human performance. When evaluated on a test set of publications, the machine learning classifier trained to identify relevant research in transgenic models was highly sensitive (capturing 96.5% of relevant publications) and excluded 87.8% of irrelevant publications.
Tools to identify the model(s) and outcome measure(s) within the full text of publications may reduce the burden on reviewers and were found to be more sensitive than searching only the title and abstract of citations. Automated tools to assess risk-of-bias reporting were highly sensitive and have the potential to monitor research improvement over time. The final dataset of categorised Alzheimer’s disease research contained 22,375 publications, which were then visualised in the interactive web application. Within the application, users can see how many publications report measures to reduce the risk of bias and how many have been classified as using each transgenic model, testing each intervention, and measuring each outcome. Users can also filter to obtain curated lists of relevant research, allowing them to perform systematic reviews at an accelerated pace, with reduced effort required to search across databases and a reduced number of publications to screen for relevance. Both systematic reviews and meta-analyses highlighted failures to report key methodological information within publications. Poor transparency of reporting limited the statistical power available to understand the sources of between-study variation. However, some variables were found to explain a significant proportion of the heterogeneity. The transgenic animal model used had a significant impact on results in both reviews. For certain open field test outcomes, the wall colour of the open field arena and the reporting of measures to reduce the risk of bias were found to impact results. For in vitro electrophysiology experiments measuring synaptic plasticity, several electrophysiology parameters, including the magnesium concentration of the recording solution, were found to explain a significant proportion of the heterogeneity. Automated meta-research approaches and curated web platforms summarising preclinical research have the potential to accelerate the conduct of systematic reviews and maximise the potential of existing evidence to inform translation.
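
    The abstract does not describe how the deduplication tool works internally; as a rough sketch of the general idea (not the thesis’s actual tool), the snippet below flags likely duplicate citations retrieved from different databases by normalising titles and comparing them with a string-similarity ratio. The records, field names, and threshold are hypothetical.

        # Illustrative sketch of fuzzy citation deduplication; not the thesis's actual tool.
        # Records, field names, and the similarity threshold are hypothetical.
        from difflib import SequenceMatcher

        def normalise(title: str) -> str:
            # Lowercase and keep only alphanumeric characters so formatting differences
            # between databases do not hide duplicates.
            return "".join(ch for ch in title.lower() if ch.isalnum())

        def is_duplicate(a: dict, b: dict, threshold: float = 0.95) -> bool:
            # Two citations count as duplicates when their normalised titles are nearly
            # identical and the publication years match.
            same_year = a.get("year") == b.get("year")
            ratio = SequenceMatcher(None, normalise(a["title"]), normalise(b["title"])).ratio()
            return same_year and ratio >= threshold

        records = [
            {"title": "Abeta pathology in APP/PS1 mice", "year": 2015},
            {"title": "Abeta Pathology in APP/PS1 Mice.", "year": 2015},
        ]
        print(is_duplicate(records[0], records[1]))  # True: same title up to case/punctuation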

    The new age of fear: an analysis of crisis framing by right-wing populist parties in Greece and France

    From the 2009 Eurozone economic downturn to the 2015 mass movement of forcibly displaced migrants and the current COVID-19 pandemic, crises have seemingly become a ‘new normal’ feature of European politics. During this decade, rolling crises generated a wave of public discontent that damaged the legitimacy of national governments and the European Union and heralded a renaissance of populism. The central message of populist parties, which helped them rise in popularity or enter parliament for the first time, is simple but very effective: democratic representation has been undermined by national and global elites. This has provoked a wealth of studies seeking to explain the rise or breakthrough of populist fringe parties, yet how crises transform not only the demand side but also the supply of populist arguments has received scarce attention. This thesis seeks to address this imbalance by synthesising insights from the crisis framing literature, which facilitates an understanding and operationalisation of populism as a style of discourse. To assess how far-right parties employ this discourse, and the implications of this for their electoral prospects, a comparative case-study design is employed, exploring the discourse of two parties: the National Rally (RN) in France and Golden Dawn (GD) in Greece. Their ideologically similar profiles but differential electoral performance allow for a more nuanced analysis of their respective framing strategies. The thesis examines the discourse of the two parties’ MPs on a month-by-month basis over a four-year period, 2012-2015 for GD and 2012-2013 and 2016-2017 for RN, using NVivo software. Their respective discourses are quantified and broken down into four key areas associated with Foreign Policy, the Economy, the Political System, and Society, analysing the content, frequency, and salience of key crisis frames. Discourse analysis of excerpts adds a qualitative element that showcases the substantial differences between the two case studies. The analysis demonstrates that references to ‘the people’ and anti-elitism were the centrepieces of each case study’s discourse, with strong nativist and nationalist elements. The two parties were extremely similar in the diagnostic stage of their framing and in the way they attributed blame for the crises. However, their discursive strategies diverge regarding the proposed solutions to the crises. Golden Dawn remained a single-issue party in terms of discourse, since it never presented a comprehensive plan for ending the crises. As a result, Golden Dawn’s discourse remained one-dimensional throughout its brief period of success, centred solely on attributing blame and attacking its political opponents and the European Union. By contrast, the National Rally’s framing was more elaborate and ambitious, both in the variety of issues raised and, especially, in the solutions it proposed. This, it is argued, contributed to the evolution of the RN into a mainstream competitor that is no longer dependent on a niche part of the electoral market, while the inability of GD to develop equally successful crisis frames offers a unique understanding of why the party failed electorally and was unable to enter Parliament in the 2019 elections.
The overall analysis produces a rich framework that maps out the key elements of populist crisis discourse by far-right parties, with implications for electoral politics and for our understanding of populism more broadly.
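
    The thesis quantifies coded crisis frames by party, month, and policy area in NVivo; the coding scheme itself is not given in the abstract. Purely as a hypothetical illustration of how such frame frequencies might be tabulated programmatically, the sketch below counts invented frame codes.

        # Hypothetical sketch: tallying coded crisis frames per party and policy area.
        # The frame labels and records are invented; the thesis's analysis used NVivo.
        from collections import Counter

        coded_statements = [
            {"party": "GD", "month": "2012-06", "area": "Economy", "frame": "blame_elites"},
            {"party": "RN", "month": "2012-06", "area": "Economy", "frame": "propose_exit"},
            {"party": "GD", "month": "2012-07", "area": "Society", "frame": "blame_elites"},
        ]

        # Frequency of each frame within each policy area, per party.
        counts = Counter((s["party"], s["area"], s["frame"]) for s in coded_statements)
        for (party, area, frame), n in sorted(counts.items()):
            print(f"{party:3} {area:10} {frame:15} {n}")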

    Scalable software and models for large-scale extracellular recordings

    The brain represents information about the world through the electrical activity of populations of neurons. By placing an electrode near a neuron that is firing (spiking), it is possible to detect the resulting extracellular action potential (EAP) that is transmitted down an axon to other neurons. In this way, it is possible to monitor the communication of a group of neurons to uncover how they encode and transmit information. As the number of recorded neurons continues to increase, however, so do the data processing and analysis challenges. It is crucial that scalable software and analysis tools are developed and made available to the neuroscience community to keep up with the large amounts of data that are already being gathered. This thesis is composed of three pieces of work which I develop in order to better process and analyze large-scale extracellular recordings. My work spans all stages of extracellular analysis, from the processing of raw electrical recordings to the development of statistical models that reveal underlying structure in neural population activity. In the first work, I focus on developing software to improve the comparison and adoption of different computational approaches for spike sorting. When analyzing neural recordings, most researchers are interested in the spiking activity of individual neurons, which must be extracted from the raw electrical traces through a process called spike sorting. Much development has been directed towards improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, I develop SpikeInterface, an open-source Python framework designed to unify preexisting spike sorting technologies into a single toolkit and to facilitate straightforward benchmarking of different approaches. With this framework, I demonstrate that modern, automated spike sorters have low agreement when analyzing the same dataset, i.e. they find different numbers of neurons with different activity profiles; this result holds true for a variety of simulated and real datasets. I also demonstrate that utilizing a consensus-based approach to spike sorting, where the outputs of multiple spike sorters are combined, can dramatically reduce the number of falsely detected neurons. In the second work, I focus on developing an unsupervised machine learning approach for determining the source location of individually detected spikes recorded by high-density microelectrode arrays. By localizing the source of individual spikes, my method is able to determine the approximate position of the recorded neurons in relation to the microelectrode array. To allow my model to work with large-scale datasets, I utilize deep neural networks, a family of machine learning algorithms that can be trained to approximate complicated functions in a scalable fashion. I evaluate my method on both simulated and real extracellular datasets, demonstrating that it is more accurate than other commonly used methods. I also show that location estimates for individual spikes can be utilized to improve the efficiency and accuracy of spike sorting. After training, my method can localize one million spikes in approximately 37 seconds on a TITAN X GPU, enabling real-time analysis of massive extracellular datasets.
In my third and final presented work, I focus on developing an unsupervised machine learning model that can uncover patterns of activity from neural populations associated with a behaviour being performed. Specifically, I introduce Targeted Neural Dynamical Modelling (TNDM), a statistical model that jointly models the neural activity and any external behavioural variables. TNDM decomposes neural dynamics (i.e. temporal activity patterns) into behaviourally relevant and behaviourally irrelevant dynamics; the behaviourally relevant dynamics constitute all activity patterns required to generate the behaviour of interest, while behaviourally irrelevant dynamics may be completely unrelated (e.g. other behavioural or brain states) or even related to behaviour execution (e.g. dynamics that are associated with behaviour generally but are not task specific). Again, I implement TNDM using a deep neural network to improve its scalability and expressivity. On synthetic data and on real recordings from the premotor (PMd) and primary motor cortex (M1) of a monkey performing a center-out reaching task, I show that TNDM is able to extract low-dimensional neural dynamics that are highly predictive of behaviour without sacrificing its fit to the neural data.
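
    SpikeInterface itself and the thesis’s consensus method are not reproduced here; as a minimal sketch of the underlying agreement idea, the snippet below matches the spike times of a unit found by two different sorters within a 1 ms tolerance and reports the fraction that agree. The spike-time arrays and tolerance are hypothetical.

        # Minimal sketch of the agreement computation behind consensus spike sorting.
        # Not SpikeInterface's implementation; spike times and tolerance are hypothetical.
        import numpy as np

        def agreement(times_a, times_b, tol=0.001):
            # Fraction of spikes matched within `tol` seconds, relative to the larger train.
            times_b = np.sort(times_b)
            idx = np.clip(np.searchsorted(times_b, times_a), 1, len(times_b) - 1)
            nearest = np.minimum(np.abs(times_b[idx] - times_a),
                                 np.abs(times_b[idx - 1] - times_a))
            return np.sum(nearest <= tol) / max(len(times_a), len(times_b))

        sorter1_unit = np.array([0.010, 0.052, 0.110, 0.300])     # spike times (s), sorter 1
        sorter2_unit = np.array([0.0101, 0.0521, 0.1102, 0.450])  # candidate match, sorter 2
        print(agreement(sorter1_unit, sorter2_unit))  # 0.75: 3 of 4 spikes agree within 1 ms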

    Investigating sepsis immunomodulation and the role of vasopressor therapies using monocyte functional assays

    In sepsis, monocytes exhibit the differential immune response states of ‘priming’ (increased responsiveness to secondary stimuli) and ‘deactivation’ (reduced responsiveness, depressed expression of HLA-DR and CD86, and increased PD-L1), which may reflect the opposing hyperinflammatory and immune-suppressive systemic conditions. In an era where the number and phenotype of circulating leucocytes are used to guide the investigation of sepsis immune therapies, there is debate as to whether sepsis immunity is differentially regulated within the blood and tissue compartments of the body. Adding to this uncertainty is the fact that core support therapies, such as vasopressor resuscitation, may have an immunomodulatory effect during sepsis. We hypothesised that monocytes undergo trans-endothelial reprogramming during migration between vascular and tissue compartments and that this is a crucial determinant of sepsis immunity and its clinical monitoring. Further, we hypothesised that noradrenaline and vasopressin have a role in the functional and phenotypic modification of monocytes during sepsis. Our aims were to 1) develop an in-vitro model of priming and deactivation using healthy volunteer (HV) monocytes; 2) test the direct effects of noradrenaline and vasopressin on monocytes in comparison to sera from the VAsopressin versus Noradrenaline as Initial therapy in Septic sHock (VANISH) trial; and 3) develop a human lung microvascular endothelial cell (HLMVEC) transwell model of monocyte migration and associated phenotypic changes during sepsis. The major findings of this work were that, in combination, vasopressin and noradrenaline suppressed LPS-induced TNF release in both non-pretreated and primed HV monocytes. In contrast, when HV monocytes were incubated with VANISH patient sera, we found no difference in surface marker expression or LPS-induced TNF release. Conditioning of monocytes with vasopressin or noradrenaline was found to enhance their migration in an uncoated transwell assay. Lastly, we successfully developed an HLMVEC-coated transwell migration assay able to detect changes in pre- and post-migration monocyte phenotype.