    Technofixing the Future: Ethical Side Effects of Using AI and Big Data to meet the SDGs

    While the use of smart information systems (SIS; the combination of AI and Big Data) offers great potential for meeting many of the UN’s Sustainable Development Goals (SDGs), these systems also raise a number of ethical challenges in their implementation. Through six empirical case studies, this paper examines potential ethical issues relating to the use of SIS to meet the challenges of six of the SDGs (2, 3, 7, 8, 11, and 12). The paper shows that a simple “technofix”, such as the use of SIS, is often not sufficient and may exacerbate existing issues, or create new ones, for the development community.

    Integrating Bilingualism, Verbal Fluency, and Executive Functioning across the Lifespan

    Published online: 29 Aug 2019
    Bilingual experience has an impact on an individual’s linguistic processing and general cognitive abilities. The relation between these linguistic and non-linguistic domains, in turn, is mediated by individual linguistic proficiency and developmental changes that take place across the lifespan. This study evaluated this relationship by assessing inhibition skills and verbal fluency in monolingual and bilingual school-aged children (Experiment 1), young adults (Experiment 2), and older adults (Experiment 3). Results showed that bilinguals outperformed monolinguals on the measure of inhibition, but only in the child and older adult age groups. With regard to verbal fluency, bilingual children outperformed their monolingual peers in the letter verbal fluency task, but no group differences were observed for the young and older adults. These findings suggest that bilingual experience leads to significant advantages in linguistic and non-linguistic domains, but only at the time points when these skills undergo developmental changes. This work was supported by the Australian Research Council [DE150101053].

    Comparison of UVC/S₂O₈²⁻ with UVC/H₂O₂ in terms of efficiency and cost for the removal of micropollutants from groundwater

    This study compared the UVC/S2O82- system with the AOP more commonly used in the water industry, UVC/H2O2, and examined whether the former can be an economically feasible alternative technology. Atrazine and four volatile compounds (methyl tert-butyl ether, cis-dichloroethene, 1,4-dioxane and 1,1,1-trichloroethane) were chosen as model contaminants because they exhibit different susceptibility to UVC photolysis and AOPs. A collimated beam apparatus was utilized for the majority of the experiments (controlled environment, without mass transfer phenomena), while selected experiments were performed in a flow-through reactor to simulate industrial applications. Initial experiments on the activation of the oxidants with a low-pressure (LP) lamp indicated that S2O82- is photolysed about 2.3 times faster than H2O2 and that the applied treatment times were not sufficient to utilize the majority of the oxidant. The effect of the oxidants' concentrations was tested with atrazine alone and in the micropollutant mixture, and 11.8 mg L-1 S2O82- and 14.9 mg L-1 H2O2 were chosen for further testing, since these doses are closer to industrial applications and minimize the residual oxidant concentration. Changes in the matrix composition of the treated water were investigated by adding chloride, bicarbonate and humic acids at concentrations relevant to a well-water sample; the results showed that UVC/H2O2 was the system least affected, and only when bicarbonate was added did UVC/S2O82- perform better. Overall, testing these systems with the mixture of micropollutants gave better insight into their efficiency than atrazine alone, and UVC/S2O82- is recommended for selective oxidation of challenging matrices.
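
    For orientation only (the abstract does not state a rate law), direct UVC photolysis of an oxidant is commonly treated as pseudo-first-order in the delivered UV fluence H, so the reported factor of roughly 2.3 can be read as a ratio of fluence-based rate constants; the kinetic form below is an assumption, not a result from the paper:

    \[ \ln\frac{[\mathrm{Ox}]_0}{[\mathrm{Ox}]_H} = k'\,H, \qquad \frac{k'_{\mathrm{S_2O_8^{2-}}}}{k'_{\mathrm{H_2O_2}}} \approx 2.3 \]

    Under this reading, at any given fluence a larger fraction of the persulfate dose has been photolysed than of the hydrogen peroxide dose.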

    Understanding preschoolers’ word learning success in different scenarios: disambiguation meets statistical learning and eBook reading

    Children’s ability to learn new words during their preschool years is crucial for further academic success. Previous research suggests that children rely on different learning mechanisms to acquire new words depending on the available context and linguistic information. To date, there is limited research integrating different paradigms to provide a cohesive view of the mechanisms and processes involved in preschool children’s word learning. We presented 4-year-old children (n = 47) with one of three different novel word-learning scenarios to test their ability to connect novel words to their corresponding referents without explicit instruction to do so. The scenarios were tested with three exposure conditions of different nature: (i) mutual exclusivity–target novel word-referent pair presented with a familiar referent, prompting fast-mapping via disambiguation; (ii) cross-situational–target novel word-referent pair presented next to an unfamiliar referent, prompting statistical tracking of the target pairs across trials; and (iii) eBook–target word-referent pairs presented within an audio-visual electronic storybook (eBook), prompting incidental inference of meaning. Results show that children learned the new words above chance in all three scenarios, with higher performance in the eBook and mutual exclusivity conditions than in cross-situational word learning. This illustrates children’s astounding ability to learn while coping with uncertainty and varying degrees of ambiguity, which are common in real-world situations. The findings extend our understanding of how preschoolers learn new words more or less successfully depending on the specific word-learning scenario, which should be taken into account when working on vocabulary development for school readiness in the preschool years.

    The Ethical Balance of Using Smart Information Systems for Promoting the United Nations’ Sustainable Development Goals

    The Sustainable Development Goals (SDGs) are internationally agreed goals that allow us to determine what humanity, as represented by 193 member states, finds acceptable and desirable. The paper explores how technology, and in particular Smart Information Systems (SIS), can be used to address the SDGs. SIS, the technologies that build on big data analytics, typically facilitated by AI techniques such as machine learning, are expected to grow in importance and impact. Some of these impacts are likely to be beneficial, notably the growth in efficiency and profits, which will contribute to societal wellbeing. At the same time, there are significant ethical concerns about the consequences of algorithmic biases, job loss, power asymmetries and surveillance resulting from SIS use. Left uncontrolled, SIS have the potential to exacerbate inequality and further entrench the market dominance of big tech companies. Measuring the impact of SIS on the SDGs thus provides a way of assessing whether an SIS, or an application of such a technology, is acceptable in terms of balancing foreseeable benefits and harms. One possible approach is to use the SDGs as guidelines to determine the ethical nature of SIS implementation. While the idea of using the SDGs as a yardstick to measure the acceptability of emerging technologies is conceptually strong, empirical evidence is needed to support such approaches. The paper describes the findings of six case studies of SIS across a broad range of application areas, such as smart cities, agriculture, finance, insurance and logistics, focusing explicitly on the ethical issues that SIS commonly raise and on empirical insights from organisations using these technologies.

    Keeping Off the Weight with DCs

    Long studied as modulators of insulin sensitivity, adipose tissue immune cells have recently been implicated in regulating fat mass and weight gain. In this issue of Immunity, Reisner and colleagues (2015) report that ablation of perforin-expressing dendritic cells induces T cell expansion, worsening autoimmunity and, surprisingly, increasing adiposity.

    Optimization-based assisted calibration of traffic simulation models

    Use of traffic simulation has increased in recent decades, and this high-fidelity modelling, along with moving vehicle animation, has allowed transportation decisions to be made with better confidence. During this time, traffic engineers have been encouraged to embrace the process of calibration, in which steps are taken to reconcile simulated and field-observed performance. According to international surveys, experts, and conventional wisdom, existing (non-automated) methods of calibration have been difficult or inadequate. There has been extensive research on improved calibration methods, but many of these efforts have not produced the flexibility and practicality required by real-world engineers. With this in mind, a patent-pending (US 61/859,819) architecture for software-assisted calibration, called SASCO (Sensitivity Analysis, Self-Calibration, and Optimization), was developed to maximize practicality, flexibility, and ease of use. The original optimization method within SASCO was based on "directed brute force" (DBF) searching, performing exhaustive evaluation of alternatives in a discrete, user-defined search space. Simultaneous Perturbation Stochastic Approximation (SPSA) has also gained favor as an efficient method for optimizing computationally expensive, "black-box" traffic simulations, and was also implemented within SASCO. This paper uses synthetic and real-world case studies to assess the qualities of DBF and SPSA, so they can be applied in the right situations. SPSA was found to be the fastest method, which is important when calibrating numerous inputs, but DBF was more reliable. Additionally, DBF was better than SPSA for sensitivity analysis and for calibrating complex inputs. Regardless of which optimization method is selected, the SASCO architecture appears to offer a new and practice-ready level of calibration efficiency. (C) 2015 Elsevier Ltd. All rights reserved.
    Hale, D.K.; Antoniou, C.; Brackstone, M.; Michalaka, D.; Moreno Chou, A.T.; Parikh, K. (2015). Optimization-based assisted calibration of traffic simulation models. Transportation Research Part C: Emerging Technologies, 55, 100-115. doi:10.1016/j.trc.2015.01.018
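
    To make the two search strategies concrete: DBF simply evaluates every parameter combination on a discrete, user-defined grid, whereas SPSA estimates a descent direction from only two simulation runs per iteration, regardless of how many calibration parameters there are. The Python sketch below illustrates the generic SPSA update only; the objective function, parameter values, and coefficients are hypothetical stand-ins and are not taken from the SASCO implementation described in the paper.

    import numpy as np

    def spsa_minimize(loss, theta0, n_iter=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
        """Generic SPSA: two loss evaluations per iteration approximate the gradient."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        for k in range(1, n_iter + 1):
            ak = a / k**alpha                                   # decaying step size
            ck = c / k**gamma                                   # decaying perturbation size
            delta = rng.choice([-1.0, 1.0], size=theta.shape)   # simultaneous +/-1 perturbation
            g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck * delta)
            theta = theta - ak * g_hat
        return theta

    # Hypothetical calibration objective: relative squared error between
    # "simulated" and field-observed link counts, with the traffic simulator
    # stubbed out by a cheap placeholder so the sketch runs on its own.
    observed = np.array([950.0, 620.0])

    def calibration_loss(params):
        simulated = np.array([1000.0, 600.0]) * params          # stand-in for one simulator run
        return float(np.sum(((simulated - observed) / observed) ** 2))

    print(spsa_minimize(calibration_loss, theta0=[1.0, 1.0]))   # minimiser is at [0.95, 620/600]

    In a real calibration the placeholder would be replaced by a full simulation run, which is exactly the expensive step SPSA economises on relative to exhaustive grid evaluation.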

    An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees

    Artificial intelligence ethics requires a united approach from policymakers, AI companies, and individuals, in the development, deployment, and use of these technologies. However, sometimes discussions can become fragmented because of the different levels of governance (Schmitt in AI Ethics 1–12, 2021) or because of different values, stakeholders, and actors involved (Ryan and Stahl in J Inf Commun Ethics Soc 19:61–86, 2021). Recently, these conflicts became very visible, with such examples as the dismissal of AI ethics researcher Dr. Timnit Gebru from Google and the resignation of whistle-blower Frances Haugen from Facebook. Underpinning each debacle was a conflict between the organisation’s economic and business interests and the morals of their employees. This paper will examine tensions between the ethics of AI organisations and the values of their employees, by providing an exploration of the AI ethics literature in this area, and a qualitative analysis of three workshops with AI developers and practitioners. Common ethical and social tensions (such as power asymmetries, mistrust, societal risks, harms, and lack of transparency) will be discussed, along with proposals on how to avoid or reduce these conflicts in practice (e.g., building trust, fair allocation of responsibility, protecting employees’ autonomy, and encouraging ethical training and practice). Altogether, we suggest the following steps to help reduce ethical issues within AI organisations: improved and diverse ethics education and training within businesses; internal and external ethics auditing; the establishment of AI ethics ombudsmen, AI ethics review committees and an AI ethics watchdog; as well as access to trustworthy AI ethics whistle-blower organisations.

    Learning to perceive non-native tones via distributional training: effects of task and acoustic cue weighting

    As many distributional learning (DL) studies have shown, adult listeners can achieve discrimination of a difficult non-native contrast after a short repetitive exposure to tokens falling at the extremes of that contrast. Such studies have shown, using behavioural methods, that short distributional training can induce perceptual learning of vowel and consonant contrasts. However, much less is known about the neurological correlates of DL, and few studies have examined non-native lexical tone contrasts. Here, Australian-English speakers underwent DL training on a Mandarin tone contrast using behavioural (discrimination, identification) and neural (oddball-EEG) tasks, with listeners hearing either a bimodal or a unimodal distribution. Behavioural results show that listeners learned to discriminate the tones after both unimodal and bimodal training, while EEG responses revealed more learning for listeners exposed to the bimodal distribution. Thus, perceptual learning through exposure to brief sound distributions (a) extends to non-native tonal contrasts, and (b) is sensitive to task, phonetic distance, and acoustic cue weighting. Our findings have implications for models of how auditory and phonetic constraints influence speech learning.
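
    As a purely hypothetical illustration of the training manipulation, distributional learning studies typically expose listeners to tokens drawn from an acoustic continuum whose frequency distribution has either two peaks (bimodal, consistent with a two-category interpretation) or a single central peak (unimodal, consistent with one category). The continuum steps and presentation counts in the Python sketch below are invented for illustration and are not the stimuli or frequencies used in this study.

    import random

    STEPS = list(range(1, 9))                  # an 8-step pitch-contour continuum (illustrative)
    BIMODAL = [1, 4, 6, 2, 2, 6, 4, 1]         # two frequency peaks away from the centre
    UNIMODAL = [1, 2, 4, 6, 6, 4, 2, 1]        # a single central peak; same total as BIMODAL

    def build_exposure(weights, n_repeats=4, seed=0):
        """Expand per-step presentation counts into a shuffled exposure sequence."""
        tokens = [step for step, w in zip(STEPS, weights) for _ in range(w * n_repeats)]
        random.Random(seed).shuffle(tokens)
        return tokens

    bimodal_exposure = build_exposure(BIMODAL)
    unimodal_exposure = build_exposure(UNIMODAL)
    print(len(bimodal_exposure), len(unimodal_exposure))   # equal exposure length per condition

    Only the shape of the distribution differs between conditions; total exposure is matched, which is what allows any group difference in learning to be attributed to distributional shape rather than amount of exposure.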