150 research outputs found

    A Metacognition-Based Digital Problem-Solving Worksheet: a Design-Based Research: An Empirical Study Focused on Automotive Fault Diagnosis Learning for Indonesian Prospective Automotive Vocational Teachers

    Get PDF
    Vocational teachers need to equip their students with the meaningful, relevant competencies required in the workplace. As a result, vocational teachers should stay up to date with knowledge and skills that track the development of science and technology in the world of work. More specifically, in the automotive vocational expertise domain, problem-solving abilities are crucial skills students need to master. Hence, equipping prospective automotive vocational teachers with sustainable learning and problem-solving abilities is indispensable. In this case, metacognition theory can facilitate students with learning-how-to-learn activities, an essential skill for sustainable learning and learning to teach, and equip them with problem-solving abilities. Therefore, bringing metacognition theory, supported by other relevant theories, into teaching and learning activities would be beneficial in dealing with these issues. This study aimed to design and develop a metacognition-based digital problem-solving worksheet. The digital worksheet was expected to facilitate students with learning-how-to-learn activities and to equip them with problem-solving abilities effectively. There were four main research objectives and questions in this study, related to: 1) the practical problem that needs to be addressed, 2) the didactic design, 3) the usability, and 4) the effectiveness of the digital worksheet. Design-based research was used to answer the research questions. This is a multi-method research design, meaning that several methods were combined to achieve the research aim and objectives. The design comprised six stages: analysis and exploration (stage 1), design and construction (stage 2), evaluation and reflection (stage 3), analysis and exploration (stage 4), design and construction (stage 5), and evaluation and reflection (stage 6). Stage 1 was used to explore the practical problem as the answer to the first research question. Stages 2 through 5 were used to formulate the digital worksheet's didactic design as the answer to the second research question. Stage 6 was used to evaluate the usability and effectiveness of the digital worksheet as the answers to the third and fourth research questions, respectively.
    Firstly, in answering the first research question, three semi-structured interviews were used as the data collection technique in the first research stage. The findings of this stage indicated that sustainable learning, learning to teach, and problem-solving abilities were the key competencies prospective automotive vocational teachers need to master. Additionally, the findings indicated that automotive fault diagnosis learning was the highest-order-thinking subject that presented a practical problem. The quality of the instructional toolkit for this subject needed to be improved, since the existing toolkit was a conventional observation sheet that still allowed students to proceed by trial and error. This was the practical problem that this research set out to address. Secondly, in answering the second research question, a focus group discussion, expert-based evaluations, user-based evaluations (formative usability evaluation), and final revisions were used in the second, third, fourth, and fifth research stages, respectively. The focus group discussion aimed to discuss the materials needed to develop the digital worksheet.
    Following that, expert-based evaluations and user-based evaluations were conducted to evaluate the initial digital worksheet from the experts' and users' perspectives, respectively. Several revisions were made based on those evaluation results, and the digital worksheet's final didactic design was realized. The findings on the didactic design indicated that the worksheet was in digital form and used blended learning with a flipped classroom strategy, so students have three distinct learning phases: 1) before, 2) during, and 3) after classroom activities. Furthermore, constructivism learning theory, adult learning theory, metacognition theory, experiential learning theory, and reflection theory formed the fundamental theoretical knowledge basis for developing the digital worksheet, while problem-based learning, automotive fault diagnosis procedures, and worksheet stages formed its fundamental practical knowledge basis. There were seven stages students needed to complete during problem-solving learning: 1) introduction, 2) observing, 3) collecting information, 4) analyzing, 5) testing, 6) rectifying, and 7) checking all systems. Each stage comprised several steps, and each step contained instructions and self-reflection questions. Additionally, at every step, teachers had the opportunity to give feedback on the students' work, and students could discuss with other students at the end of every stage. The self-reflection questions on every instruction, the teacher's feedback on every step, and the discussion results at every stage were used to regulate the students' self-cognition. Thirdly, in answering the third and fourth research questions, a questionnaire survey and an experimental study were used in the final research stage, respectively. First, the summative usability evaluation survey consisted of four elements: usefulness, ease of use, ease of learning, and satisfaction. The findings of this evaluation indicated that overall usability and the usability of each element were all in the very-high category. Additionally, usefulness, ease of use, and ease of learning jointly had a significant influence on students' satisfaction; individually, usefulness and ease of use were significant predictors, while ease of learning was not. Second, the effectiveness findings indicated that the digital worksheet effectively facilitated students' learning-how-to-learn activities and equipped them with problem-solving abilities.
    Table of contents: ABSTRACT (EXECUTIVE SUMMARY) -- ABSTRAKT (ZUSAMMENFASSUNG) -- ACKNOWLEDGMENTS AND DEDICATION -- TABLE OF CONTENT -- LIST OF FIGURES -- LIST OF TABLES -- LIST OF ABBREVIATIONS.
    CHAPTER 1. INTRODUCTION TO THE STUDY: 1.1. Introduction to the Chapter; 1.2. Research Background, State of the Art, and Motivation; 1.3. Research Empirical Problems and Context Justification; 1.4. Research Rationale; 1.5. Research Aim and Objectives; 1.6. Research Questions; 1.7. Research Scope and Context Limitations; 1.8. Research Significance; 1.9. Definitions of the Important Terms; 1.10. List of the Research Project Publication; 1.11. Summary of the Chapter.
    CHAPTER 2. LITERATURE REVIEW: 2.1 Introduction to the Chapter; 2.2 Literature Review – Contextual Domain (2.2.1 Vocational Education; 2.2.2 Education System in Indonesia; 2.2.3 Vocational Education (SMK-MAK) in Indonesia; 2.2.4 Problems and Challenges of Vocational Education in Indonesia; 2.2.5 Vocational Teachers; 2.2.6 Vocational Teacher Education); 2.3 Literature Review – Theoretical and Conceptual Domain (2.3.1 Constructivism and Adult Learning Theory; 2.3.2 Metacognition Theory – Metacognitive Learning Strategies (Learning-How-to-Learn); 2.3.3 Experiential Learning Theory – Reflection Theory; 2.3.4 Problem-Based Learning Method – Problem-Solving Ability; 2.3.5 Blended Learning Technique – Flipped Classroom Learning Strategy; 2.3.6 Instructional Media and Technology – Learning Worksheet; 2.3.7 Usability Evaluation in Instructional Media and Technology; 2.3.8 The Research Theoretical and Conceptual Framework); 2.4 Literature Review – Methodological Domain (2.4.1 Research Methodologies in Instructional Media and Technology Development; 2.4.2 Design-Based Research); 2.5 Research Hypotheses; 2.6 Summary of the Chapter.
    CHAPTER 3. RESEARCH METHODOLOGY: 3.1. Introduction to the Chapter; 3.2. Research Paradigm, Philosophy, and Research Type; 3.3. Research Design, Strategies, and Methods; 3.4. Research Context and Participants; 3.5. Research Data Collection Techniques and the Tools (3.5.1. Stage 1 – Semi-Structured Interview and the Protocol; 3.5.2. Stages 2 & 4 – Focus Group and the Protocols; 3.5.3. Stage 3 – Expert-Based Evaluation and the Questionnaires; 3.5.4. Stages 4 & 6 – Survey and the USE Questionnaire; 3.5.5. Stage 6 – Experimental Study and the Assessment Tools); 3.6. Research Data Analysis Techniques (3.6.1. Stage 1 – Semi-Structured Interview; 3.6.2. Stage 2 – Focus Group Discussion; 3.6.3. Stage 3 – Expert-Based Evaluation (Survey Questionnaire); 3.6.4. Stage 4 – User-Based Evaluation (Survey Questionnaire and Focus Group Interview); 3.6.5. Stage 6 – Usability Evaluation (Survey Questionnaire); 3.6.6. Stage 6 – Effectiveness Evaluation (Experimental Study)); 3.7. Summary of the Chapter.
    CHAPTER 4. RESEARCH FINDINGS: 4.1. Introduction to the Chapter; 4.2. Finding 1: The Practical Problem (4.2.1. Stage 1 – First Semi-Structured Interview; 4.2.2. Stage 1 – Second Semi-Structured Interview; 4.2.3. Stage 1 – Third Semi-Structured Interview); 4.3. Finding 2: The Didactic Design (4.3.1. Stage 2 – Focus Group Discussion; 4.3.2. Stage 3 – Expert-Based Evaluation; 4.3.3. Stage 4 – User-Based Evaluation; 4.3.4. Stage 5 – Final Revision (The Didactic Design)); 4.4. Finding 3: The Usability; 4.5. Finding 4: The Effectiveness (4.5.1. Stage 6 – The Effectiveness Evaluation in Facilitating Students with Learning-How-to-Learn Activities; 4.5.2. Stage 6 – The Effectiveness Evaluation in Equipping Students with Problem-Solving Abilities); 4.6. Summary of the Chapter.
    CHAPTER 5. RESEARCH DISCUSSION AND CONCLUSION: 5.1. Introduction to the Chapter; 5.2. Discussion 1 – The Practical Problem; 5.3. Discussion 2 – The Didactic Design; 5.4. Discussion 3 – The Usability; 5.5. Discussion 4 – The Effectiveness; 5.6. Overall Discussion – The Research Findings' Interpretations and Implications in Intercultural-Global Contexts and Theoretical Design Principles (5.6.1. The Research Findings' Interpretations and Implications in Intercultural-Global Contexts; 5.6.2. The Research Findings' Interpretations and Implications in Theoretical Insights and Design Principles); 5.7. Research Conclusion; 5.8. Research Limitations and Further Research; 5.9. Summary of the Chapter.
    REFERENCES -- STATEMENT OF AUTHORSHIP -- APPENDICES.
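    To make the worksheet structure described above concrete, here is a minimal, purely illustrative Python sketch of stages composed of steps, where each step carries an instruction, self-reflection questions, and room for teacher feedback; all names are hypothetical, not taken from the thesis:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    instruction: str
    self_reflection_questions: list[str]
    teacher_feedback: str = ""          # teachers may comment on every step

@dataclass
class Stage:
    name: str
    steps: list[Step] = field(default_factory=list)
    discussion_notes: str = ""          # peer discussion closes every stage

# The seven problem-solving stages named in the abstract.
worksheet = [Stage(name) for name in (
    "introduction", "observing", "collecting information",
    "analyzing", "testing", "rectifying", "checking all systems",
)]
worksheet[1].steps.append(Step(
    instruction="Record the fault symptoms reported by the customer.",
    self_reflection_questions=["What do I already know about this symptom?"],
))
```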

    The impact of Artificial Intelligence (AI) technologies on legal practitioners in law firms and legal publishers.

    Get PDF
    Masters Degree. University of KwaZulu-Natal, Durban.
    Artificial Intelligence (AI) solutions currently have the capability to perform tasks more quickly, accurately, and consistently than legal professionals. This could lead employers at private law firms and legal publishers to believe that AI software may offer a quicker return on investment and a lower total cost of ownership. The purpose of this study is to discover whether the availability of yield-producing, affordable AI technologies in the legal industry could lead to legal practitioners and their roles becoming redundant. An explanatory quantitative study was established using a cross-sectional descriptive survey design to achieve the objectives of the research. A self-administered structured questionnaire was developed and delivered via hardcopy and e-mail to 102 legal professionals by means of snowball sampling. These respondents were drawn from 19 different private law firms, legal publishers, and legal departments at private corporations. The data collected were analysed and interpreted using descriptive and inferential statistics. The results revealed a general awareness of advancements in certain legal AI solutions, and general agreement that legal professionals would advocate for their companies to invest in AI solutions if they produced additional accurate work yield while being cost-effective. The final revelation was that legal professionals agreed that AI solutions were not yet mature enough to replace human legal professionals. Regardless of this sentiment, they felt that they and their companies would hire fewer legal professionals when presented with the opportunity of value-adding legal AI solutions. Recommendations include legal professionals investigating the advancement and availability of AI solutions for the purpose of utilising them to strategically augment and bolster their job functions. Further recommendations include investigating their company's current capability and strength in comparison to competitors, and understanding how AI would augment company performance to provide additional value in terms of insight and improved turn-around times. The final recommendation was for South African tertiary institutions of higher learning to begin incorporating the topics of AI and law into their Law Degree curricula, in an effort to make students aware of the advancement of AI in the area of law and how it will affect their lives. The importance of this study lies in the opinion of the professionals surveyed, who believed there was a strong possibility that they and their companies would hire fewer legal professionals given the availability of an economically beneficial legal AI solution producing accurate, consistent, yield-producing output.

    Patent's New Salience

    Get PDF
    The vast majority of patents do not matter. They are almost never enforced or licensed and, in consequence, are almost always ignored. This is a well-accepted feature of the patent system and has a tremendous impact on patent policy. In particular, while there are many aspects of patent law that are potentially troubling—including grants of unmerited patents, high transaction costs in obtaining necessary patent licenses, and patents’ potential to block innovation and hinder economic growth—these problems may be insignificant in practice because patents are under-enforced and routinely infringed without consequence. This Article argues that technological developments are greatly increasing the salience of patents by making patents easier and cheaper to find and enforce. These developments—including private platforms’ adjudication systems and AI-driven patent analytics—profoundly impact how the patent system functions and upend the system’s present dependence on under-enforcement and ignorance. Where most patents could previously be safely disregarded, formerly forgotten patents now matter. This Article makes four contributions to the literature. First, this Article explores the technology that is rendering patents newly salient and explains how this alters basic assumptions underlying the patent system. Second, this Article demonstrates that although new technology is increasing the number of patents that can be reviewed and enforced, this transformation sometimes decreases the depth of patent analysis. Because it is difficult to draw conclusions about patent scope or validity without in-depth analysis, this omission means that technological review of patents may give patents unmerited influence. Third, this Article shows a sharp divergence between public policy goals and private use of patents. For several decades, the courts and Congress have been reforming patent policy to decrease the impact of patents to alleviate concerns that patent owners hinder innovation by others. This Article demonstrates, in clear contrast to this goal, an increase in patent salience that is due exclusively to the use of private platforms and technologies. Further, the use of private platforms to find, analyze, and enforce patents creates the risk that choices made by companies and software developers will displace substantive patent law. Finally, this Article suggests policy reform, including ways to improve technology and patents and adjusted approaches to patent doctrine and theory.

    Towards Unstructured Knowledge Integration in Natural Language Processing

    Get PDF
    In recent decades, Artificial Intelligence has witnessed multiple breakthroughs in deep learning. In particular, purely data-driven approaches have opened the door to a wide variety of successful applications due to the large availability of data. Nonetheless, the integration of prior knowledge is still required to compensate for specific issues such as lack of generalization from limited data, fairness, robustness, and biases. In this thesis, we analyze the methodology of integrating knowledge into deep learning models in the field of Natural Language Processing (NLP). We start by remarking on the importance of knowledge integration. We highlight the possible shortcomings of these approaches and investigate the implications of integrating unstructured textual knowledge. We introduce Unstructured Knowledge Integration (UKI) as the process of integrating unstructured knowledge into machine learning models. We discuss UKI in the field of NLP, where knowledge is represented in natural language format. We identify UKI as a complex process comprising multiple sub-processes, different knowledge types, and knowledge integration properties to guarantee. We remark on the challenges of integrating unstructured textual knowledge and draw connections with well-known research areas in NLP. We provide a unified vision of structured knowledge extraction (KE) and UKI by identifying KE as a sub-process of UKI. We investigate some challenging scenarios where structured knowledge is not a feasible prior assumption and formulate each task from the point of view of UKI. We adopt simple yet effective neural architectures and discuss the challenges of such an approach. Finally, we identify KE as a form of symbolic representation. From this perspective, we remark on the need to define sophisticated UKI processes to verify the validity of knowledge integration. To this end, we foresee frameworks capable of combining symbolic and sub-symbolic representations for learning as a solution.
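    As a toy illustration of the retrieval-then-integrate pattern that unstructured knowledge integration generalizes (a hedged sketch, not a method from this thesis), the snippet below scores free-text knowledge passages against a query by token overlap and prepends the best match to the model input:

```python
def token_overlap(a: str, b: str) -> int:
    # Crude relevance score: number of shared lowercase tokens.
    return len(set(a.lower().split()) & set(b.lower().split()))

def integrate_unstructured_knowledge(query: str, passages: list[str]) -> str:
    # Select the most relevant free-text passage and prepend it to the
    # query so a downstream NLP model can condition on it.
    best = max(passages, key=lambda p: token_overlap(query, p))
    return f"knowledge: {best} question: {query}"

knowledge_base = [
    "Whales are mammals and breathe air through blowholes.",
    "The Amazon river flows through South America.",
]
print(integrate_unstructured_knowledge("Where is the Amazon river?", knowledge_base))
```

    In practice the scoring function would be a learned retriever and the concatenation a model-specific input format, but the two steps, selecting knowledge and then conditioning on it, are in the spirit of the sub-process view of UKI sketched above.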

    From Fully-Supervised Single-Task to Semi-Supervised Multi-Task Deep Learning Architectures for Segmentation in Medical Imaging Applications

    Get PDF
    Medical imaging is routinely performed in clinics worldwide for the diagnosis and treatment of numerous medical conditions in children and adults. With the advent of these imaging modalities, radiologists can visualize both the structure of the body and the tissues within it. However, analyzing these high-dimensional (2D/3D/4D) images demands a significant amount of time and effort from radiologists. Hence, there is an ever-growing need for medical image computing tools that extract relevant information from the image data to help radiologists work efficiently. Image analysis based on machine learning has pivotal potential to improve the entire medical imaging pipeline, providing support for clinical decision-making and computer-aided diagnosis. To be effective in addressing challenging image analysis tasks such as classification, detection, registration, and segmentation, specifically for medical imaging applications, deep learning approaches have shown significant improvement in performance. While deep learning has shown its potential in a variety of medical image analysis problems including segmentation and motion estimation, generalizability is still an unsolved problem, and many of these successes come at the cost of requiring large pools of data. For most practical applications, getting access to a copious dataset can be very difficult, often impossible. Annotation is tedious and time-consuming, and this cost is further amplified when annotation must be done by a clinical expert. Additionally, the applications of deep learning in real-world clinical settings are still limited due to the lack of reliability caused by the limited prediction capabilities of some deep learning models. Moreover, when using a CNN in an automated image analysis pipeline, it is critical to understand which segmentation results are problematic and require further manual examination; yet the estimation of uncertainty calibration in a semi-supervised setting for medical image segmentation is still rarely reported. This thesis focuses on developing and evaluating optimized machine learning models for a variety of medical imaging applications, ranging from fully-supervised, single-task learning to semi-supervised, multi-task learning that makes efficient use of annotated training data. The contributions of this dissertation are as follows: (1) developing fully-supervised, single-task transfer learning for surgical instrument segmentation from laparoscopic images; (2) utilizing supervised, single-task transfer learning for segmenting and digitally removing surgical instruments from endoscopic/laparoscopic videos to allow visualization of the anatomy obscured by the tool.
The tool removal algorithms use a tool segmentation mask and either instrument-free reference frames or previous instrument-containing frames to fill in (inpaint) the instrument segmentation mask; (3) developing fully-supervised, single-task learning via efficient weight pruning and learned group convolution for accurate left ventricle (LV) and right ventricle (RV) blood pool and myocardium localization and segmentation from 4D cine cardiac MR images; (4) demonstrating the use of our fully-supervised, memory-efficient model to generate dynamic patient-specific right ventricle (RV) models from a cine cardiac MRI dataset via an unsupervised learning-based deformable registration field; (5) integrating Monte Carlo dropout into our fully-supervised, memory-efficient model for inherent uncertainty estimation, with the overall goal of estimating the uncertainty associated with the obtained segmentation and error, as a means to flag regions that feature less-than-optimal segmentation results; (6) developing semi-supervised, single-task learning via self-training (through meta pseudo-labeling) in concert with a Teacher network that instructs the Student network by generating pseudo-labels given unlabeled input data; (7) proposing largely-unsupervised, multi-task learning to demonstrate the power of a simple combination of a disentanglement block, variational autoencoder (VAE), generative adversarial network (GAN), and a conditioning-layer-based reconstructor for performing two of the most critical tasks in medical imaging — segmentation of cardiac structures and reconstruction of cine cardiac MR images; and (8) demonstrating the use of 3D semi-supervised, multi-task learning for jointly learning multiple tasks in a single backbone module – uncertainty estimation, geometric shape generation, and segmentation of the left atrial cavity from 3D gadolinium-enhanced magnetic resonance (GE-MR) images. This dissertation summarizes the impact of these contributions by demonstrating the adaptation and use of deep learning architectures featuring different levels of supervision to build a variety of image segmentation tools and techniques that can be used across a wide spectrum of medical image computing applications, facilitating and promoting widespread data science for computer-integrated diagnosis and therapy.
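    As a hedged sketch of contribution (5), Monte Carlo dropout for segmentation uncertainty, the PyTorch fragment below keeps dropout active at inference and uses the per-pixel spread across stochastic forward passes as an uncertainty map; the tiny network is a stand-in, not the dissertation's architecture:

```python
import torch
import torch.nn as nn

# Stand-in segmentation network; any model containing dropout layers works.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),
    nn.Conv2d(8, 2, kernel_size=1),  # two-class logits
)

def mc_dropout_predict(model, x, n_samples=20):
    # Run several stochastic forward passes with dropout enabled and
    # return the mean softmax prediction plus a per-pixel uncertainty map.
    model.train()  # keeps dropout sampling active; no gradients are taken
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

mean_pred, uncertainty = mc_dropout_predict(model, torch.randn(1, 1, 64, 64))
print(uncertainty.shape)  # high values flag pixels needing manual review
```

    Thresholding the uncertainty map is one simple way to flag the less-than-optimal segmentation regions that the dissertation aims to surface for manual examination.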

    Utopian Literature And Imperialism

    Get PDF
    This dissertation argues that the utopian literary genre is an imperial construct contingent upon imperial discourse. I argue that imperialism and utopian literature are intertwined not only because of the imperial themes present in utopian literature, but also because utopian literature can only speak through imperial tropes and language. This dissertation traces the relationship between utopian literature and imperialism through the 16th, 19th, and late 20th centuries. The texts it discusses are More’s Utopia, Bacon’s New Atlantis, Harrington’s Commonwealth of Oceana, Bulwer-Lytton’s The Coming Race, Bellamy’s Looking Backward, Morris’ News From Nowhere, Roddenberry’s Star Trek, and Le Guin’s The Dispossessed.

    Out-of-Distribution Generalization of Gigapixel Image Representation

    Get PDF
    This thesis addresses the significant challenge of improving the generalization capabilities of artificial deep neural networks in the classification of whole slide images (WSIs) in histopathology across different and unseen hospitals. This is a critical issue in AI applications for vision-based healthcare tasks, given that current standard methodologies struggle with out-of-distribution (OOD) data from varying hospital sources. In histopathology, distribution shifts can arise from image acquisition variances across scanner vendors, differences in laboratory routines and staining procedures, and diversity in patient demographics. This work investigates two critical forms of generalization within histopathology: magnification generalization and OOD generalization towards different hospitals. One chapter of this thesis is dedicated to the exploration of magnification generalization, acknowledging the variability in histopathological images due to distinct magnification levels and seeking to enhance the model's robustness by learning invariant features across these levels. However, the major part of this work focuses on OOD generalization, specifically unseen hospital data. The objective is to leverage knowledge encapsulated in pre-existing models to help new models adapt to diverse data scenarios and ensure their efficient operation in different hospital environments. Additionally, the concept of Hospital-Agnostic (HA) learning regimes is introduced, focusing on invariant characteristics across hospitals and aiming to establish a learning model that sustains stable performance in varied hospital settings. The culmination of this research introduces a comprehensive method, termed ALFA (Exploiting All Levels of Feature Abstraction), that not only considers invariant features across hospitals but also extracts a broader set of features from input images, thus maximizing the model's generalization potential. The findings of this research are expected to have significant implications for the deployment of medical image classification systems using deep models in clinical settings. The proposed methods allow for more accurate and reliable diagnostic support across various hospital environments, paving the way for enhanced generalization in histopathology diagnostics using deep learning techniques. Future research directions may build on expanding these investigations to further improve generalization in histopathology.
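    To make the hospital-agnostic training idea concrete, here is a minimal sketch assuming per-hospital batches and a variance-of-risks penalty in the spirit of invariance-penalty methods; this illustrates the general idea only, not the thesis's ALFA method:

```python
import torch

def hospital_agnostic_loss(model, batches_by_hospital, criterion, lam=1.0):
    # Average the per-hospital risks and penalize their variance, pushing
    # the model toward features whose predictive quality is stable across
    # hospitals. batches_by_hospital maps hospital id -> (inputs, labels).
    risks = torch.stack([
        criterion(model(x), y) for x, y in batches_by_hospital.values()
    ])
    return risks.mean() + lam * risks.var()

# Toy usage with a linear classifier over pre-extracted WSI patch features.
model = torch.nn.Linear(128, 2)
criterion = torch.nn.CrossEntropyLoss()
batches = {h: (torch.randn(8, 128), torch.randint(0, 2, (8,)))
           for h in ("hospital_A", "hospital_B")}
hospital_agnostic_loss(model, batches, criterion).backward()
```

    The lam coefficient trades average accuracy against cross-hospital stability; setting it to zero recovers ordinary pooled training.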

    Bias and Fairness in Large Language Models: A Survey

    Full text link
    Rapid advancements of large language models (LLMs) have enabled the processing, understanding, and generation of human-like text, with increasing integration into systems that touch our social sphere. Despite this success, these models can learn, perpetuate, and amplify harmful social biases. In this paper, we present a comprehensive survey of bias evaluation and mitigation techniques for LLMs. We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing, defining distinct facets of harm and introducing several desiderata to operationalize fairness for LLMs. We then unify the literature by proposing three intuitive taxonomies, two for bias evaluation, namely metrics and datasets, and one for mitigation. Our first taxonomy of metrics for bias evaluation disambiguates the relationship between metrics and evaluation datasets, and organizes metrics by the different levels at which they operate in a model: embeddings, probabilities, and generated text. Our second taxonomy of datasets for bias evaluation categorizes datasets by their structure as counterfactual inputs or prompts, and identifies the targeted harms and social groups; we also release a consolidation of publicly available datasets for improved access. Our third taxonomy of techniques for bias mitigation classifies methods by their intervention during pre-processing, in-training, intra-processing, and post-processing, with granular subcategories that elucidate research trends. Finally, we identify open problems and challenges for future work. Synthesizing a wide range of recent research, we aim to provide a clear guide to the existing literature that empowers researchers and practitioners to better understand and prevent the propagation of bias in LLMs.
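    As one concrete instance of the probability-level metrics covered by the first taxonomy, the sketch below compares a causal language model's log-likelihood on a counterfactual sentence pair that differs only in the social group mentioned; a systematic gap across many such pairs suggests bias. This is a generic illustration using GPT-2 via Hugging Face transformers, not a specific metric from the survey:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(text: str) -> float:
    # Total log-probability the model assigns to the sentence.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss is the mean NLL per predicted token
    return -out.loss.item() * (ids.size(1) - 1)

# Counterfactual pair: identical except for the group term.
gap = (sentence_logprob("The doctor said he would arrive soon.")
       - sentence_logprob("The doctor said she would arrive soon."))
print(f"log-likelihood gap: {gap:.3f}")  # aggregate over many pairs in practice
```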

    Training curriculum for internet-based event-based surveillance and event-based surveillance in health facilities and communities

    Get PDF
    Event-based surveillance is the organized collection, monitoring, assessment, and interpretation of mainly unstructured ad hoc information regarding health events that may represent an acute public health risk. Event-based surveillance is an essential component of early warning within public health surveillance systems and can expedite the detection and notification of health events. The Training Curriculum for Event-Based Surveillance in Health Facilities and Communities and Internet Event-Based Surveillance offers guidance and instructions to public health practitioners to facilitate training for event-based surveillance implementation in health facilities and communities within a country. This curriculum focuses on training relevant public health practitioners at the intermediate administrative level, in health facilities, and in communities. Additionally, it provides trainers with guidance on how best to facilitate training and to mentor those who have been trained, as well as training evaluation tools, to ensure that the knowledge and skills required for event-based surveillance are transferred sustainably.
    Contents: Acknowledgements -- Glossary of Terms -- Executive Summary -- Module 1: Overview of Event-Based Surveillance -- Module 2: Facilitation and Mentorship (Module 2.1: The Role of Facilitation in Event-Based Surveillance Training; Module 2.2: The Role of Mentorship in Event-Based Surveillance) -- Module 3: Intermediate Level Event-Based Surveillance (Module 3.1: Intermediate Level Event-Based Surveillance Facilitator Guide; Module 3.2: Intermediate Level Event-Based Surveillance Participant Guide) -- Module 4: Health Facility Event-Based Surveillance (Module 4.1: Health Facility Event-Based Surveillance Facilitator Guide; Module 4.2: Health Facility Event-Based Surveillance Participant Guide) -- Module 5: Community-Based Surveillance (Module 5.1: Community-Based Surveillance Facilitator Guide; Module 5.2: Community-Based Surveillance Participant Guide) -- Module 6: Internet Event-Based Surveillance Training Module (Module 6.1: Internet Event-Based Surveillance Training Module Facilitator Guide; Module 6.2: Internet Event-Based Surveillance Training Module Participant Guide) -- Module 7: Event-Based Surveillance Training Evaluation Tools -- References.