
    Corporate Social Responsibility: the institutionalization of ESG

    Understanding the impact of Corporate Social Responsibility (CSR) on firm performance, particularly in industries reliant on technological innovation, is a complex and perpetually evolving challenge. To investigate this topic thoroughly, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to stand as an essentially standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis posits that the evolution of CSR into its modern, quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their non-reporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social-impact mitigation. Overall, this work contributes to the literature by filling gaps concerning the nature of the impact that ESG has on firm performance, particularly from a management perspective.
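    The second hypothesis amounts to regressing performance metrics on ESG scores. A minimal sketch of that kind of test, on synthetic data with an assumed variable naming and effect size (the dissertation's actual panel specification and data are not reproduced here), could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firm-year data: ESG score and a long-term metric (ROA).
# Both the 0-100 ESG scale and the 0.0005 effect size are made-up values.
n = 500
esg = rng.uniform(0, 100, n)                          # ESG score, 0-100
roa = 0.02 + 0.0005 * esg + rng.normal(0, 0.01, n)    # assumed positive link

# OLS of ROA on ESG with an intercept, via least squares.
X = np.column_stack([np.ones(n), esg])
(intercept, slope), *_ = np.linalg.lstsq(X, roa, rcond=None)
print(f"estimated ESG coefficient on ROA: {slope:.5f}")
```

A significantly positive slope here corresponds to the positive long-term relationship the abstract reports; the same regression run on ROE or ROS would, per the findings, yield a coefficient indistinguishable from zero.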

    Investigating the impact of visual perspective in a motor imagery-based brain-robot interaction: A pilot study with healthy participants

    Introduction: Motor Imagery (MI)-based Brain-Computer Interfaces (BCI) have gained attention for their use in rehabilitation therapies, since they allow controlling an external device using brain activity, thereby promoting brain plasticity mechanisms that could lead to motor recovery. Specifically, rehabilitation robotics can provide precision and consistency for movement exercises, while embodied robotics could provide sensory feedback that helps patients improve their motor skills and coordination. However, it is still not clear whether different types of visual feedback affect the elicited brain response and hence the effectiveness of MI-BCI for rehabilitation. Methods: In this paper, we compare two visual feedback strategies based on controlling the movement of robotic arms through an MI-BCI system: 1) first-person perspective, with the visual information that the user receives when viewing the robot arms from their own perspective; and 2) third-person perspective, whereby the subjects observe the robot from an external perspective. We studied 10 healthy subjects over three consecutive sessions. The electroencephalographic (EEG) signals were recorded and evaluated in terms of the power of the sensorimotor rhythms, as well as their lateralization and spatial distribution. Results: Our results show that both feedback perspectives can elicit motor-related brain responses, but without any significant differences between them. Moreover, the evoked responses remained consistent across all sessions, showing no significant differences between the first and the last session. Discussion: Overall, these results suggest that the type of perspective may not influence the brain responses during an MI-BCI task based on robotic feedback, although, due to the limited sample size, more evidence is required. Finally, this study resulted in the production of 180 labeled MI EEG datasets, publicly available for research purposes.
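    The two EEG quantities the evaluation rests on, sensorimotor-rhythm band power and its lateralization, can be sketched on synthetic signals. The sampling rate, the 8-12 Hz mu band, the C3/C4 channel pair, and the lateralization index formula below are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

fs = 250  # assumed EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic C3/C4 channels: a stronger 10 Hz mu rhythm over C4 (illustrative).
c3 = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.2, t.size)
c4 = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.2, t.size)

def band_power(x, fs, lo=8.0, hi=12.0):
    """Mean power in [lo, hi] Hz from a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

p3, p4 = band_power(c3, fs), band_power(c4, fs)
lateralization = (p4 - p3) / (p4 + p3)  # one common lateralization index
print(f"mu power C3={p3:.2f}, C4={p4:.2f}, LI={lateralization:.2f}")
```

Comparing such band-power and lateralization values between the first- and third-person feedback conditions is the kind of contrast in which the study found no significant differences.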

    Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review

    Globally, the external Internet is increasingly being connected to contemporary industrial control systems. As a result, there is an immediate need to protect these networks from several threats. The key infrastructure of industrial activity can be protected from harm by using an intrusion detection system (IDS), a preventive mechanism that recognizes new kinds of dangerous threats and hostile activities. This study examines the most recent artificial intelligence (AI) techniques used to create IDSs for many kinds of industrial control networks, with a particular emphasis on IDSs based on deep transfer learning (DTL). The latter can be seen as a type of information fusion that merges and/or adapts knowledge from multiple domains to enhance performance on the target task, particularly when labeled data in the target domain are scarce. Publications issued after 2015 were taken into account. The selected publications were divided into three categories: DTL-only and IDS-only papers are covered in the introduction and background, while DTL-based IDS papers form the core of this review. By reading this review, researchers will gain a better grasp of the current state of DTL approaches used in IDSs across many different types of networks. Other useful information is also covered, such as the datasets used, the kind of DTL employed, the pre-trained network, the IDS techniques, the evaluation metrics (including accuracy/F-score and false alarm rate (FAR)), and the improvement gained. The algorithms and methods used in several studies, which clearly illustrate the principles of each DTL-based IDS subcategory, are also presented to the reader.
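    The core DTL idea the review surveys, reusing a representation learned on a data-rich source domain and fitting only a small classifier on scarce target-domain labels, can be sketched in plain numpy. Everything here is illustrative: the "pre-trained" weights are stand-ins, the flow features and labels are synthetic, and no specific IDS dataset or network from the review is reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a feature extractor pre-trained on a large source-domain
# traffic dataset (here just fixed random weights, kept frozen below).
W_src = rng.normal(0, 0.1, (20, 8))

def features(x):
    """Frozen feature extractor reused from the source domain."""
    return np.tanh(x @ W_src)

# Scarce labeled target-domain data (toy flow records: attack vs. benign).
X_tgt = rng.normal(0, 1, (200, 20))
y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0).astype(float)  # synthetic labels

# Fine-tune only a small logistic-regression head on the target task.
Z = features(X_tgt)
w, b = np.zeros(8), 0.0
for _ in range(500):  # plain gradient descent on the logistic loss
    p = 1 / (1 + np.exp(-(Z @ w + b)))
    grad = Z.T @ (p - y_tgt) / len(y_tgt)
    w -= 0.5 * grad
    b -= 0.5 * (p - y_tgt).mean()

acc = ((Z @ w + b > 0) == (y_tgt == 1)).mean()
print(f"target-task accuracy with frozen source features: {acc:.2f}")
```

Freezing the extractor and training only the head is one of the simplest DTL variants; the surveyed papers also cover fine-tuning some or all of the pre-trained layers.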

    Machine Learning Research Trends in Africa: A 30 Years Overview with Bibliometric Analysis Review

    In this paper, a critical bibliometric analysis study is conducted, coupled with an extensive literature survey of recent developments and associated applications in machine learning research from an African perspective. The bibliometric analysis covers 2,761 machine learning-related documents, of which 98% were articles, with at least 482 citations, published in 903 journals during the past 30 years. The documents were retrieved from the Science Citation Index EXPANDED and comprise research publications from 54 African countries between 1993 and 2021. The study visualizes the current landscape and future trends in machine learning research and its applications, in order to facilitate future collaborative research and knowledge exchange among authors from research institutions across the African continent.

    A direct-laser-written heart-on-a-chip platform for generation and stimulation of engineered heart tissues

    In this dissertation, we first develop a versatile microfluidic heart-on-a-chip model to generate 3D-engineered human cardiac microtissues in highly-controlled microenvironments. The platform, which is enabled by direct laser writing (DLW), has tailor-made attachment sites for cardiac microtissues and comes with integrated strain actuators and force sensors. Application of external pressure waves to the platform results in controllable time-dependent forces on the microtissues. Conversely, oscillatory forces generated by the microtissues are transduced into measurable electrical outputs. After characterization of the responsivity of the transducers, we demonstrate the capabilities of this platform by studying the response of cardiac microtissues to prescribed mechanical loading and pacing. Next, we tune the geometry and mechanical properties of the platform to enable parametric studies on engineered heart tissues. We explore two geometries: a rectangular seeding well with two attachment sites, and a stadium-like seeding well with six attachment sites. The attachment sites are placed symmetrically in the longitudinal direction. The former geometry promotes uniaxial contraction of the tissues; the latter additionally induces diagonal fiber alignment. We systematically increase the length for both configurations and observe a positive correlation between fiber alignment at the center of the microtissues and tissue length. However, progressive thinning and “necking” is also observed, leading to the failure of longer tissues over time. We use the DLW technique to improve the platform, softening the mechanical environment and optimizing the attachment sites for generation of stable microtissues at each length and geometry. Furthermore, electrical pacing is incorporated into the platform to evaluate the functional dynamics of stable microtissues over the entire range of physiological heart rates. 
Here, we typically observe a decrease in active force and contraction duration as a function of frequency. Lastly, we use a more traditional μTUG platform to demonstrate the effects of subthreshold electrical pacing on the rhythm of spontaneously contracting cardiac microtissues. Here, we observe periodic M:N patterns, in which there are M cycles of stimulation for every N tissue contractions. Using the electric field amplitude, the pacing frequency, and the homeostatic beating frequencies of the tissues, we provide an empirical map for predicting the emergence of these rhythms.
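    The M:N labeling itself is just a small-rational description of the ratio between pacing and contraction rates. A toy sketch of that bookkeeping (this is not the dissertation's empirical map, which depends on field amplitude and intrinsic rate; the frequencies below are made up):

```python
from fractions import Fraction

def predict_mn(f_stim, f_beat, max_den=6):
    """Smallest rational M:N approximating the stimulation/contraction ratio."""
    r = Fraction(f_stim / f_beat).limit_denominator(max_den)
    return r.numerator, r.denominator

# e.g. pacing at 3 Hz against a 2 Hz contraction rhythm gives a 3:2 pattern
print(predict_mn(3.0, 2.0))  # → (3, 2)
```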

    Teaching strategies and the problems faced by EFL teachers during the COVID-19 outbreak at junior high school

    The education system had to switch from face-to-face to online teaching due to the pandemic. This situation was new in Indonesia, and teachers had to adapt themselves to it, for example by learning to use technology in online teaching and by making lesson plans that could interest students in online learning. This research aimed to identify the teaching strategies used by EFL teachers and the problems teachers faced in online teaching at Junior High School 98 during the pandemic. The research used a qualitative design with a narrative-descriptive approach. Data were collected through observation, interviews, and documentation. The objects of this research were EFL teachers; the researcher interviewed 5 EFL teachers. The results of this research are: 1) the teaching strategy used in online teaching during the pandemic was synchronous, with teachers using the platforms WhatsApp, Google Classroom, and Google Meet for online classes. In addition, for creating tasks, teachers gave students the chance to use other platforms such as Canva, YouTube, Video Maker, etc. Teachers also had strategies to overcome problems in online teaching: for example, when students had trouble following the class online through Google Meet, the teacher shared the material in Google Classroom, and regarding student motivation, teachers worked together with students' parents to monitor the students at home; 2) the online teaching problems found in this research were a lack of data quota, lack of internet access, lack of motivation, and lack of facilities.

    A model for automated support of the recognition, extraction, customization, and reconstruction of static charts

    Data charts are widely used in our daily lives, being present in regular media such as newspapers, magazines, web pages, books, and many others. A well-constructed data chart leads to an intuitive understanding of its underlying data; in the same way, when a data chart reflects poor design choices, a redesign of the representation might be needed. However, in most cases these charts are available only as static images, which means that the original data are not usually accessible. Automatic methods could therefore be applied to extract the underlying data from chart images to allow such changes. The task of recognizing charts and extracting data from them is complex, largely due to the variety of chart types and their visual characteristics. Computer vision techniques for image classification and object detection are widely used for the problem of recognizing charts, but only on images without any disturbance. Other features of real-world images that can make this task difficult, such as photographic distortion, noise, and misalignment, are not addressed in most of the literature. Two computer vision techniques that can assist this task, and that have been little explored in this context, are perspective detection and correction. These methods transform a distorted, noisy chart into a clean chart whose type is ready for data extraction or other uses. The task of reconstructing data is straightforward: as long as the data are available, the visualization can be reconstructed; reconstructing it in the same context, however, is complex. A visualization grammar is a key component for this scenario, as such grammars usually have extensions for interaction, chart layers, and multiple views without requiring extra development effort. This work presents a model for automated support for the custom recognition and reconstruction of charts in images.
The model automatically performs the process steps, such as reverse engineering, turning a static chart back into its data table for later reconstruction, while allowing the user to make modifications in case of uncertainties. This work also features a model-based architecture along with prototypes for various use cases. Validation is performed step by step, with methods inspired by the literature. Three use cases provide proof of concept and validation of the model. The first use case applies chart recognition methods to documents in the real world; the second focuses on the vocalization of charts, using a visualization grammar to reconstruct a chart in audio format; and the third presents an Augmented Reality application that recognizes and reconstructs charts in the same context (a piece of paper), overlaying the new chart and interaction widgets. The results showed that, with slight changes, chart recognition and reconstruction methods are now ready for real-world charts when time, accuracy, and precision are taken into consideration.
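    The perspective-correction step highlighted above boils down to estimating a homography that maps the photographed chart's corners onto an axis-aligned canvas. A self-contained numpy sketch of the standard direct linear transform (the corner coordinates are invented, and the work's actual detection and warping pipeline is not reproduced):

```python
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: 3x3 homography mapping src -> dst (4+ points)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)         # null-space vector of the DLT system
    return H / H[2, 2]

def warp(H, pt):
    """Apply the homography to one 2D point (homogeneous coordinates)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical corners of a photographed, distorted chart, mapped onto a
# 400x300 canvas before data extraction.
corners = [(12, 40), (380, 10), (400, 290), (5, 310)]
target = [(0, 0), (400, 0), (400, 300), (0, 300)]
H = homography(corners, target)
print(warp(H, (12, 40)))  # ≈ [0, 0]
```

Once the image is rectified this way, chart-type classification and data extraction can run on a clean, axis-aligned view.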

    From wallet to mobile: exploring how mobile payments create customer value in the service experience

    This study explores how mobile proximity payments (MPP) (e.g., Apple Pay) create customer value in the service experience compared to traditional payment methods (e.g., cash and card). The main objectives were, first, to understand how customer value manifests as an outcome in the MPP service experience and, second, to understand how the customer's activities in the process of using MPP create customer value. To achieve these objectives, a conceptual framework is built upon the Grönroos-Voima Value Model (Grönroos and Voima, 2013) and uses the Theory of Consumption Value (Sheth et al., 1991) to determine the customer value constructs for MPP, complemented by Script Theory (Abelson, 1981) to determine the value-creating activities the consumer performs in the process of paying with MPP. The study uses a sequential exploratory mixed-methods design: the first, qualitative stage uses two methods, self-observations (n=200) and semi-structured interviews (n=18); the second, quantitative stage uses an online survey (n=441) and Structural Equation Modelling to further examine the relationships and effects between the value-creating activities and the customer value constructs identified in stage one. The academic contributions include the development of a model of mobile payment service value creation in the service experience, the introduction of the concept of in-use barriers, which occur after adoption and constrain consumers' existing use of MPP, and the revelation of the importance of the mobile-in-hand momentary condition as an antecedent state. Additionally, the customer value perspective of this thesis demonstrates an alternative to the dominant information technology approaches to researching mobile payments and broadens the view of technology from purely an object a user interacts with to an object that is immersed in consumers' daily lives.

    Search for third generation vector-like leptons with the ATLAS detector

    The Standard Model of particle physics provides a concise description of the building blocks of our universe in terms of fundamental particles and their interactions. It is an extremely successful theory, providing a plethora of predictions that precisely match experimental observation. In 2012, the Higgs boson, the last particle predicted by the Standard Model that had yet to be discovered, was observed at CERN. While this added further credibility to the theory, the Standard Model appears incomplete. Notably, it accounts for only 5% of the energy density of the universe (the rest being "dark matter" and "dark energy"), it cannot reconcile gravity with quantum theory, it does not explain the origin of neutrino masses, and it cannot account for matter/antimatter asymmetry. The most plausible explanation is that the theory is an approximation and that new physics remains to be found. Vector-like leptons are well motivated by a number of theories that seek to complete the Standard Model. They are a simple addition to the Standard Model and can help resolve a number of discrepancies without disturbing precisely measured observables. This thesis presents a search for vector-like leptons that preferentially couple to tau leptons. The search was performed using proton-proton collision data from the Large Hadron Collider collected by the ATLAS experiment from 2015 to 2018 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 139 inverse femtobarns. Final states of various lepton multiplicities were considered to isolate the vector-like lepton signal from Standard Model and instrumental backgrounds. The major backgrounds mimicking the signal come from WZ, ZZ, and tt+Z production and from mis-identified leptons. A number of boosted decision trees were used to improve rejection power against background, and the signal was measured using a binned-likelihood estimator. No excess relative to the Standard Model was observed.
Exclusion limits were placed on vector-like leptons in the mass range of 130 to 898 GeV.
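    The binned-likelihood machinery behind such exclusion limits can be illustrated with a toy Poisson model. The bin counts and templates below are invented, and the asymptotic 3.84 crossing is a textbook approximation, not the ATLAS statistical treatment (which uses CLs with full systematics):

```python
import numpy as np

# Toy binned likelihood: observed counts vs. background plus mu * signal.
obs = np.array([12, 9, 4, 2])          # invented observed yields per bin
bkg = np.array([11.0, 8.5, 4.2, 1.8])  # invented background expectation
sig = np.array([0.5, 1.0, 1.5, 2.0])   # invented signal template

def nll(mu):
    """Poisson negative log-likelihood (constant terms dropped)."""
    lam = bkg + mu * sig
    return np.sum(lam - obs * np.log(lam))

mus = np.linspace(0, 5, 501)
nlls = np.array([nll(m) for m in mus])
q = 2 * (nlls - nlls.min())            # profile-likelihood-ratio test statistic
upper = mus[q < 3.84].max()            # ~95% CL crossing (asymptotic approx.)
print(f"approximate 95% CL upper limit on signal strength: {upper:.2f}")
```

Scanning such a limit on the signal strength as a function of the hypothesized lepton mass is what produces a mass exclusion range like the one quoted above.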

    Synthesis and Characterisation of Low-cost Biopolymeric/mineral Composite Systems and Evaluation of their Potential Application for Heavy Metal Removal

    Heavy metal pollution and waste management are two major environmental problems faced in the world today. Anthropogenic sources of heavy metals, especially industrial effluents, pose serious environmental and health concerns by polluting surface water and groundwater. Similarly, on a global scale, thousands of tonnes of industrial and agricultural waste are discarded into the environment annually. There are several conventional methods to treat industrial effluents, including reverse osmosis, oxidation, filtration, flotation, chemical precipitation, ion-exchange resins, and adsorption. Among them, adsorption and ion exchange are known to be effective mechanisms for removing heavy metal pollution, especially when low-cost materials can be used. This thesis studied materials that can be used to remove heavy metals from water using low-cost feedstocks. Low-cost composite matrices were synthesized from agricultural and industrial by-products and from low-cost organic and mineral sources. The feedstock materials considered include chitosan (generated from industrial seafood waste), coir fibre (an agricultural by-product), spent coffee grounds (a by-product of coffee machines), hydroxyapatite (from bovine bone), and naturally sourced aluminosilicate minerals such as zeolite. The novel composite adsorbents were prepared using commercially sourced HAp and bovine-sourced HAp, with two types of adsorbents being synthesized: two- and three-component composites. Standard synthetic methods such as precipitation were developed to synthesize these materials, followed by characterization of their structural, physical, and chemical properties (using FTIR, TGA, SEM, EDX, and XRD).
The synthesized materials were then evaluated for their ability to remove metal ions from solutions of heavy metals using single-metal ion type and two-metal ion type solution systems, using the model ion solutions, with quantification of their removal efficiency. It was followed by experimentation using the synthesized adsorbents for metal ion removal in complex systems such as an industrial input stream solution system obtained from a local timber treatment company. Two-component composites were considered as control composites to compare the removal efficiency of the three-component composites against. The heavy metal removal experiments were conducted under a range of experimental conditions (e.g., pH, sorbent dose, initial metal ion concentration, time of contact). Of the four metal ion systems considered in this study (Cd2+, Pb2+, Cu2+ and Cr as chromate ions), Pb2+ ion removal by the composites was found to be the highest in single-metal and two-metal ion type solution systems, while chromate ion removal was found to be the lowest. The bovine bone-based hydroxyapatite (bHAp) composites were more efficient at removing the metal cations than composites formed from a commercially sourced hydroxyapatite (cHAp). In industrial input stream solution systems (containing Cu, Cr and As), the Cu2+ ion removal was the highest, which aligned with the observations recorded in the single and two-metal ion type solution systems. Arsenate ion was removed to a higher extent than chromate ion using the three-component composites, while the removal of chromate ion was found to be higher than arsenate ion when using the two-component composites (i.e., the control system). The project also aimed to elucidate the removal mechanisms of these synthesized composite materials by using appropriate adsorption and kinetic models. 
    The adsorption of metal ions exhibited a range of adsorption behaviours, as both models (Langmuir and Freundlich) were found to fit most of the data recorded in the different adsorption systems studied. The pseudo-second-order model was found to best describe the kinetics of heavy metal ion adsorption in all the composite adsorbent systems studied, in both single-metal and two-metal ion type solution systems. Ion exchange was considered one of the dominant mechanisms for the removal of cations (in single-metal and two-metal ion type solution systems) and of arsenate ions (in industrial input stream solution systems), along with other adsorption mechanisms. In contrast, electrostatic attraction was considered the dominant removal mechanism for chromate ions.
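    Fitting a Langmuir isotherm, one of the two models used above, is commonly done through its linearized form Ce/qe = Ce/qmax + 1/(KL*qmax). A minimal sketch on noiseless, invented equilibrium data (the thesis's measured values are not reproduced, and a Pb2+ system is assumed purely for illustration):

```python
import numpy as np

# Illustrative equilibrium data for a Pb2+ system (hypothetical values):
# Ce = equilibrium concentration (mg/L), qe = uptake (mg/g).
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qmax_true, KL_true = 50.0, 0.1
qe = qmax_true * KL_true * Ce / (1 + KL_true * Ce)  # noiseless Langmuir curve

# Linearized Langmuir: Ce/qe = Ce/qmax + 1/(KL*qmax); fit a line in Ce.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax = 1 / slope            # maximum adsorption capacity (mg/g)
KL = slope / intercept      # Langmuir affinity constant (L/mg)
print(f"qmax ≈ {qmax:.1f} mg/g, KL ≈ {KL:.3f} L/mg")
```

The Freundlich model is fitted analogously from log(qe) vs. log(Ce), and comparing the two fits' goodness is how the range of adsorption behaviours reported above is established.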