
    AI report

    We have seen in the previous discussion and scenarios that AI has the potential to deliver great benefits for education. However, we have also seen that there are risks associated with its use. In many cases, we may determine that these risks are minimal; examples we have discussed include the provision of formative feedback, help for teachers in creating lesson plans, and assistance with some of the administrative functions of schools. As we move away from using AI as a support system, the risk increases. Using AI for learning analytics may help teachers adjust their teaching strategies to cater to individual needs; however, using learning analytics without adequate teacher oversight may disadvantage students dealing with adverse life circumstances that are affecting their performance, thus raising the risk level. When it comes to relying on AI for decisions that may affect a learner’s future opportunities, we move into ‘high’ and perhaps ‘unacceptable’ risk territory. The level of risk therefore resides not so much within the tool as within the contexts in which it is used.

    While human oversight may help to mitigate some of these risks, we should be aware of the danger of dependence lock-in, in which humans become increasingly dependent on AI to make decisions. All this underscores the importance of developing Explainable AI, as discussed above. To ensure the responsible use of AI in educational settings, we must remain aware of the balance that needs to be struck between leveraging AI’s benefits, evaluating and mitigating potential risks, and ensuring that human oversight is included and human values are served.