A conceptual framework for changes in Fund Management and Accountability relative to ESG issues
Major developments in socially responsible investment (SRI) and in environmental, social and governance (ESG) issues for fund managers (FMs) have occurred in the past decade. Much positive change has occurred, but problems of disclosure, transparency and accountability remain. This article argues that trustees, FM investors and investee companies all require shared knowledge to overcome, in part, these problems. This involves clear concepts of accountability, and knowledge of fund management and of the associated ‘chain of accountability’, to enhance visibility and transparency. Dealing with the problems also requires the development of an analytic framework based on relevant literature and theory. These empirical and analytic constructs combine to form a novel conceptual framework that is used to identify a clear set of areas in which to change FM investment decision making in a coherent way relative to ESG issues. The constructs and the change strategy are also used together to analyse how one can create favourable conditions for enhanced accountability. Ethical problems and climate change issues are used as the main examples of ESG issues. The article has policy implications for the UK ‘Stewardship Code’ (2010), for the legal responsibilities of key players, and for the ‘Carbon Disclosure Project’.
Machine learning and its applications in reliability analysis systems
In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. In symbolic process learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches have been discussed for creating the RAs. According to the results of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still some improvements which can be made through the application of Machine Learning. By implanting the 'learning element', the MORA will become the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude our thesis, we propose an architecture for La MORA.
Potential of Automated Writing Evaluation Feedback
This paper presents an empirical evaluation of automated writing evaluation (AWE) feedback used for L2 academic writing teaching and learning. It introduces the Intelligent Academic Discourse Evaluator (IADE), a new web-based AWE program that analyzes the introduction section of research articles and generates immediate, individualized, and discipline-specific feedback. The purpose of the study was to investigate the potential of IADE’s feedback. A mixed-methods approach with a concurrent transformative strategy was employed. Quantitative data consisted of responses to Likert-scale, yes/no, and open-ended survey questions; automated and human scores for first and final drafts; and pre-/posttest scores. Qualitative data contained students’ first and final drafts as well as transcripts of think-aloud protocols and Camtasia computer screen recordings, observations, and semistructured interviews. The findings indicate that IADE’s color-coded and numerical feedback possesses potential for facilitating language learning, a claim supported by evidence of focus on discourse form, noticing of negative evidence, improved rhetorical quality of writing, and increased learning gains.
Translating data into narratives. Designing semantic interpretations for reflexive policy practices
Today more than ever, the role that data can play in designing policies is evident. Not only can understandable data inform better strategies, but they can also enable reflexive practices within Public Administrations, giving direction to knowledge management and smarter governance. However, multiple gaps combine to affect data understanding and interpretation, hindering their subsequent translation into policy-valuable information. To tackle challenges related to data interpretation and usage, the article (i) illustrates a narrative approach for building profiles of cities as narrative feedback from sets of data and (ii) investigates their potential as a (self-)evaluation and decision-making support device. The feedback structure relies on the conceptual model built for the DIGISER Project, which investigated multidimensional digital transition processes across European cities. Dynamic feedback retrieves data from the project dataset, translating them into discursive form. The effectiveness of the approach and its device is validated through a qualitative enquiry on a textual excerpt provided to three different departments of one of the cities that participated in the survey. The study corroborates that designing narrative feedback as semantic interpretations can trigger understanding and (self-)reflection and support policy change, informing policy formulation and facilitating cross-silo interactions across administrative units engaged in digital transformation processes.
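The core mechanism described, retrieving indicator values from a dataset and translating them into discursive form, can be illustrated with a minimal template-based sketch. All field names, thresholds, and city values below are invented for illustration and are not taken from the DIGISER dataset:

```python
# Hypothetical sketch of template-based data-to-narrative translation.
# Indicator names and the 0.5 threshold are assumptions for illustration.

def describe_city(profile: dict) -> str:
    """Turn a city's indicator values into a short narrative sentence."""
    level = "advanced" if profile["digital_services_share"] >= 0.5 else "emerging"
    sentences = [
        f"{profile['name']} shows an {level} level of digital transition:",
        f"{profile['digital_services_share']:.0%} of its public services are available online,",
        f"and {profile['staff_trained_share']:.0%} of administrative staff "
        f"received digital-skills training.",
    ]
    return " ".join(sentences)

city = {
    "name": "Exampleville",           # invented city, not from the survey
    "digital_services_share": 0.62,
    "staff_trained_share": 0.45,
}
print(describe_city(city))
```

A production system along these lines would draw thresholds and phrasing from the project's conceptual model rather than hard-coding them, but the data-to-text translation step is structurally the same.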
ConceptExplainer: Understanding the Mental Model of Deep Learning Algorithms via Interactive Concept-based Explanations
Traditional deep learning interpretability methods which are suitable for non-expert users cannot explain network behaviors at the global level and are inflexible at providing fine-grained explanations. As a solution, concept-based explanations are gaining attention due to their human intuitiveness and their flexibility to describe both global and local model behaviors. Concepts are groups of similarly meaningful pixels that express a notion, embedded within the network's latent space; they have primarily been hand-generated, but have recently been discovered by automated approaches. Unfortunately, the magnitude and diversity of discovered concepts make it difficult for non-experts to navigate and make sense of the concept space, and the lack of easy-to-use software also makes concept explanations inaccessible to many non-expert users. Visual analytics can serve a valuable role in bridging these gaps by enabling structured navigation and exploration of the concept space to provide concept-based insights into model behavior. To this end, we design, develop, and validate ConceptExplainer, a visual analytics system that enables non-expert users to interactively probe and explore the concept space to explain model behavior at the instance/class/global level. The system was developed via iterative prototyping to address a number of design challenges that non-experts face in interpreting the behavior of deep learning models. Via a rigorous user study, we validate how ConceptExplainer supports these challenges. Likewise, we conduct a series of usage scenarios to demonstrate how the system supports the interactive analysis of model behavior across a variety of tasks and explanation granularities, such as identifying concepts that are important to classification, identifying bias in training data, and understanding how concepts can be shared across diverse and seemingly dissimilar classes. Comment: 9 pages, 6 figures
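The idea of concepts living in a network's latent space can be sketched concretely: if each concept is represented as a direction vector in that space, an instance can be explained by ranking concepts by their similarity to its activation. The vectors and concept names below are synthetic, and this is a generic similarity sketch, not ConceptExplainer's actual method:

```python
# Minimal sketch of concept-based scoring in a latent space.
# Concept vectors here are random stand-ins; real systems derive them
# from hand-curated examples or automated concept discovery.
import numpy as np

def concept_scores(activation: np.ndarray, concepts: dict) -> dict:
    """Rank concepts by cosine similarity to an instance's activation."""
    scores = {}
    for name, vec in concepts.items():
        cos = float(activation @ vec
                    / (np.linalg.norm(activation) * np.linalg.norm(vec)))
        scores[name] = cos
    # Highest-similarity concepts first
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

rng = np.random.default_rng(0)
concepts = {"stripes": rng.normal(size=8), "fur": rng.normal(size=8)}
# An activation lying close to the "stripes" direction
activation = concepts["stripes"] + 0.1 * rng.normal(size=8)
print(concept_scores(activation, concepts))
```

Aggregating such per-instance scores over a class, or over the whole dataset, is one way to move from local to global explanations, which is the instance/class/global spectrum the abstract refers to.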