
    Facets of metacognition and collaborative complex problem-solving performance

    Abstract. Metacognition refers to students’ ability to reflect upon, understand, and control their own learning. Previous accounts of metacognition have distinguished between two major facets: metacognitive knowledge and metacognitive regulation, each of which comprises several sub-facets. Although many studies on metacognition facets have examined their relationship with problem-solving performance, few have investigated their relationship with non-routine, complex problem-solving performance in a collaborative context. In light of this, the current study investigated the impact of different facets of metacognition on perceived and objective complex problem-solving (CPS) task performance in a collaborative setting. Data were collected from 77 students at the University of Oulu, Finland. The Metacognitive Awareness Inventory (MAI) self-report was used to measure participants’ beliefs about the facets of their metacognition before the task. After completing the MAI individually, participants gathered in groups of three to carry out the collaborative CPS task. The Tailorshop Microworld simulation was employed as the CPS task and used to measure objective group performance. Perceived individual and group performance was measured with self-reports. A generalized estimating equation was used to model the relationships between individuals’ awareness of metacognition facets and perceived individual CPS performance. Best Linear Unbiased Prediction (BLUP) was used to yield groups’ unbiased MAI scores and unbiased perceived group performance. Pearson correlation coefficients were calculated to examine the relationships between group MAI scores and objective group CPS performance, as well as between perceived group performance and objective group CPS performance. In general, the results showed significant correlations between several regulatory facets of metacognition and perceived individual CPS performance as well as objective group CPS performance. Since the majority of the significant correlations were negative, the results reinforce previous findings on students’ overconfidence in their skills relative to their perceived and objective performance, and they contribute to the overall understanding of the impact metacognitive facets have on collaborative CPS performance. The study also discusses these findings and outlines limitations and directions for future research.
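    The abstract names a generalized estimating equation for the individual-level analysis. A minimal sketch of such a model in Python with statsmodels, assuming a long-format table with hypothetical columns perceived_perf, mai_facet_score, and group_id and synthetic data (none of these names or numbers come from the study; the negative slope merely echoes the negative correlations reported):

```python
# Minimal GEE sketch: regress perceived individual CPS performance on one
# metacognition facet score while accounting for clustering in 3-person groups.
# Column names, synthetic data, and the exchangeable working correlation are
# assumptions for illustration, not details taken from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups = 26                                  # ~77 students in groups of 3
df = pd.DataFrame({
    "group_id": np.repeat(np.arange(n_groups), 3),
    "mai_facet_score": rng.normal(3.5, 0.5, n_groups * 3),
})
df["perceived_perf"] = 4 - 0.3 * df["mai_facet_score"] + rng.normal(0, 0.5, len(df))

model = smf.gee(
    "perceived_perf ~ mai_facet_score",        # facet score predicts perceived performance
    groups="group_id",                         # students clustered within groups
    data=df,
    family=sm.families.Gaussian(),             # continuous outcome
    cov_struct=sm.cov_struct.Exchangeable(),   # equal within-group correlation
)
print(model.fit().summary())
```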

    Scalable factorization model to discover implicit and explicit similarities across domains

    University of Technology Sydney, Faculty of Engineering and Information Technology.
    E-commerce businesses increasingly depend on recommendation systems to introduce personalized services and products to their target customers. Achieving accurate recommendations requires a sufficient understanding of user preferences and item characteristics. Given the current innovations on the Web, coupled datasets are abundantly available across domains. Analyzing these datasets can provide the broader knowledge needed to understand the underlying relationship between users and items. This thorough understanding yields more collaborative filtering power and leads to higher recommendation accuracy. However, how to use this knowledge effectively for recommendation is still a challenging problem. In this research, we propose to exploit both explicit and implicit similarities extracted from latent factors across domains with matrix tri-factorization. On the coupled dimensions, the common parts of the coupled factors are shared across domains while their domain-specific parts are preserved. We show that such a configuration of both common and domain-specific parts benefits cross-domain recommendations significantly. Moreover, on the non-coupled dimensions, we propose using the middle factor of the tri-factorization to match closely related clusters across datasets and align the matched ones, transferring cross-domain implicit similarities and further improving the recommendation. Furthermore, when dealing with data coupled from different sources, the scalability of the analytical method is another significant concern. We design a distributed factorization model that can scale up as the observed data across domains grows. Our data parallelism, based on Apache Spark, keeps the model’s communication cost minimal, and the model is equipped with an optimized solver that converges faster. We demonstrate that these key features stabilize the model’s performance as the data grows. Validated on real-world datasets, our model outperforms existing algorithms in both recommendation accuracy and scalability. These empirical results illustrate the potential of exploiting both explicit and implicit similarities across domains to improve recommendation performance.
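    As a rough illustration of the factor structure described above, the following toy sketch fits two coupled matrices with a tri-factorization whose user factor concatenates a shared part and a domain-specific part. The sizes, initialization, and plain clipped gradient descent are assumptions for illustration, not the paper's distributed Spark solver:

```python
# Toy sketch of coupled matrix tri-factorization, X_d ~= U_d @ S_d @ V_d.T,
# where the user factor U_d = [C | D_d] concatenates a part C shared by both
# domains with a domain-specific part D_d. Dimensions, step size, and the use
# of clipped gradient descent are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k_c, k_s, k_v = 30, 20, 2, 2, 4

X = [rng.random((n_users, n_items)) for _ in range(2)]      # two coupled domains
C = 0.1 * rng.random((n_users, k_c))                        # shared user factor
D = [0.1 * rng.random((n_users, k_s)) for _ in range(2)]    # domain-specific parts
S = [0.1 * rng.random((k_c + k_s, k_v)) for _ in range(2)]  # middle (cluster) factors
S_dims = k_c + k_s
V = [0.1 * rng.random((n_items, k_v)) for _ in range(2)]    # item factors

def clip(g):
    return np.clip(g, -1.0, 1.0)  # keep the toy optimization stable

lr = 0.01
for _ in range(2000):
    grad_C = np.zeros_like(C)
    for d in range(2):
        U = np.hstack([C, D[d]])
        R = U @ S[d] @ V[d].T - X[d]           # residual for domain d
        grad_U = R @ V[d] @ S[d].T
        grad_C += grad_U[:, :k_c]              # shared part accumulates both domains
        D[d] -= lr * clip(grad_U[:, k_c:])
        S[d] -= lr * clip(U.T @ R @ V[d])
        V[d] -= lr * clip(R.T @ U @ S[d])
    C -= lr * clip(grad_C)

for d in range(2):
    err = np.linalg.norm(np.hstack([C, D[d]]) @ S[d] @ V[d].T - X[d])
    print(f"domain {d}: residual norm {err:.3f}")
```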

    Candidate gene prioritization using graph embedding

    Candidate gene prioritization makes it possible to rank, among a large number of genes, those that are strongly associated with a phenotype or a disease. Because of the large amount of data that needs to be integrated and analysed, gene-to-phenotype association is still a challenging task. In this paper, we evaluated a knowledge graph approach combined with embedding methods to overcome these challenges. We first introduced a dataset of rice genes created from several open-access databases. Then, we used the Translating Embedding (TransE) and Convolution Knowledge Base (ConvKB) models to vectorize gene information. Finally, we evaluated the results using link prediction performance and assessed the vector representations with several unsupervised learning techniques.
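    For context, the Translating Embedding model scores a knowledge graph triple (head, relation, tail) by how close head + relation lies to tail in the embedding space. A minimal scoring sketch with toy, hypothetical identifiers (real embeddings would be learned from the rice gene dataset):

```python
# Minimal TransE scoring sketch: a triple (head, relation, tail) is plausible
# when head + relation lands near tail. The tiny vocabulary and the random
# embeddings below are illustrative assumptions, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = {"gene:Os01g0100100": 0, "trait:grain_size": 1}   # hypothetical IDs
relations = {"associated_with": 0}

E = rng.normal(size=(len(entities), dim))    # entity embeddings (trained in practice)
R = rng.normal(size=(len(relations), dim))   # relation embeddings

def transe_score(h, r, t, norm=1):
    """Higher is better: negative distance between h + r and t."""
    return -np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]], ord=norm)

print(transe_score("gene:Os01g0100100", "associated_with", "trait:grain_size"))
```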

    Generalization in an evidence accumulation task

    https://2023.ccneuro.org/proceedings/0000314.pdf
    Published version

    Automatic algorithm for determining bone and soft-tissue factors in dual-energy subtraction chest radiography

    Lung cancer is currently the leading cause of cancer deaths worldwide, partly because early-stage detection is still a challenge. In lung diagnosis, nodules sometimes overlap with ribs and tissues on chest radiographic images, which makes reading them complex for doctors and radiologists. Dual-energy subtraction (DES) is a suitable solution to these issues. This article develops an efficient iterative DES for lung chest radiographic images. Moreover, we propose an automatic algorithm for accurately determining the bone and soft-tissue factors used in the subtraction. The proposed algorithm is based on the window/level ratio and radiographic histogram analysis. First, we downsample the image from the original size of 3072 × 3072 to 512 × 512 to reduce processing time while still determining the bone and soft-tissue factors. Next, we compute the window/level ratio on the soft-tissue image. Finally, we find the factor values that minimize this ratio to obtain the optimal soft-tissue and bone factors. Our experimental results show that the proposed algorithm achieves a runtime of 200 ms, outperforming the GE algorithm’s 4 s. The 6.066 s runtime of our DES is shorter than the Fujifilm algorithm’s 10 s, while still visualizing nodules on soft-tissue images and achieving soft-tissue image quality similar to the other algorithms. The academic contributions include the proposed algorithm for determining bone and soft-tissue factors and the optimized iterative DES algorithm that minimizes time and dose consumption.
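    The abstract specifies only that the factor minimizing a window/level ratio on the soft-tissue image is selected. A hedged sketch of that selection loop follows, where the weighted-subtraction model and the percentile-based window/level estimate are assumptions rather than the paper's exact definitions:

```python
# Hedged sketch of a dual-energy subtraction factor search. Only the idea of
# "pick the factor that minimizes a window/level ratio on the soft-tissue
# image" comes from the abstract; the subtraction model and the ratio defined
# here (window width over window level from histogram percentiles) are assumed.
import numpy as np

def soft_tissue_image(high, low, w):
    """Generic weighted subtraction: suppress bone with factor w."""
    return high - w * low

def window_level_ratio(img, p_lo=1, p_hi=99):
    """Window width over window level, estimated from histogram percentiles."""
    lo, hi = np.percentile(img, [p_lo, p_hi])
    level = (hi + lo) / 2.0
    return (hi - lo) / max(abs(level), 1e-6)

def best_factor(high, low, candidates=np.linspace(0.1, 1.0, 91)):
    # Downsampling (e.g. 3072x3072 -> 512x512) would happen before this step.
    return min(candidates, key=lambda w: window_level_ratio(soft_tissue_image(high, low, w)))

rng = np.random.default_rng(0)
high, low = rng.random((512, 512)), rng.random((512, 512))  # stand-in exposures
print(f"selected soft-tissue factor: {best_factor(high, low):.2f}")
```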

    Understanding School Shootings Using Qualitatively-Informed Natural Language Processing

    Prior literature has investigated the connection between school shootings and factors of familial trauma and mental health. Specifically, experiences related to parental suicide, physical or sexual abuse, neglect, marital violence, or severe bullying have been associated with a propensity for carrying out a mass shooting. Given that prior research has shown common histories among school shooters, it follows that a person's violent tendencies may be revealed by their previous communications with others, thus aiding in predicting an individual's proclivity for school shootings. However, previous literature drew no conclusions from online posts made by the shooters prior to the mass shootings. This thesis applies NVivo-supported thematic analysis and Natural Language Processing (NLP) to study school shootings by comparing the online speech patterns of known school shooters with those of non-violent extremists and ordinary teenagers online. Findings indicate that, of all the possible NLP indicators, conversation, HarmVice, negative tone, and conflict are the most suitable indicators of school shootings. Ordinary people score eight times higher than known school shooters and online extremists in conversation. Known shooters score more than 14 times higher in HarmVice than both ordinary people and online extremists. Known shooters also score higher in negative tone (1.37 times higher than ordinary people and 1.78 times higher than online extremists) and conflict (more than three times higher than ordinary people and 1.8 times higher than online extremists). The implications for violence prediction and prevention include protecting people inside educational infrastructure by linking flagged accounts to the schools or colleges that they attend. Further research is needed to determine the severity of emotional coping displayed in online posts, as well as the amount of information about, and frequency with which, weapons and killing are discussed.
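    The indicators reported above are dictionary-style NLP category scores. As a loose illustration only, the sketch below computes a per-category rate of lexicon hits per 100 words; the tiny lexicon, tokenizer, and normalization are invented for the example and are not the thesis's actual dictionaries:

```python
# Hedged sketch of a dictionary-based indicator score, in the spirit of
# LIWC-style categories such as "negative tone" or "conflict". The miniature
# lexicon and hits-per-100-words normalization are assumptions; the thesis's
# real categories and dictionaries are not given in the abstract.
import re
from collections import Counter

LEXICON = {
    "negative_tone": {"hate", "angry", "worthless", "awful"},
    "conflict": {"fight", "attack", "enemy", "war"},
}

def indicator_scores(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {cat: 100.0 * sum(counts[w] for w in words) / total
            for cat, words in LEXICON.items()}

print(indicator_scores("They treat me like an enemy and I hate it."))
```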

    Life Cycle Carbon Dioxide Emissions Assessment in the Design Phase: A Case of a Green Building in Vietnam

    Buildings are responsible for about 30% of total CO2 emissions globally. Developing green buildings is one of the best approaches to reducing these emissions. However, this approach is underdeveloped in Vietnam due to a lack of methods for evaluating whether design alternatives meet green building criteria. This paper presents a life-cycle CO2 analysis (LCCO2A) as a tool to support the decision-making process in the design phase of a 75-year-lifespan green building in Vietnam. The study conducts an LCCO2A for two design alternatives (with different brick usage and glass types) and identifies the reasons for the differences. Comparing the second alternative with the first, the results show slight variations in CO2 emissions in the erection and demolition phases (an increase of 21.81 tons and a reduction of 106.1 tons of CO2-eq, respectively) and a significant difference in the operation phase (a reduction of 10,631.52 tons of CO2-eq, or 58.34%). Over the whole life cycle, the second design scenario, which uses “greener” materials, shows a large decrease of 10,715.81 tons of CO2-eq, or 37.54%. By comparing these results with findings in the literature, the research demonstrates the environmental advantage of green buildings over other building categories.
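    The phase-level differences quoted above add up to the reported life-cycle total, which can be checked directly (second alternative minus first; negative values are reductions):

```python
# Worked check of the phase-level CO2 differences reported in the abstract
# (alternative 2 minus alternative 1, in tons of CO2-eq).
phase_delta = {
    "erection": +21.81,       # slight increase
    "operation": -10631.52,   # the 58.34% reduction
    "demolition": -106.10,    # slight reduction
}

total = sum(phase_delta.values())
print(f"life-cycle difference: {total:.2f} t CO2-eq")  # -10715.81, the 37.54% cut
```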

    A Generalization Bound of Deep Neural Networks for Dependent Data

    Existing generalization bounds for deep neural networks require data to be independent and identically distributed (i.i.d.). This assumption may not hold in real-life applications such as evolutionary biology, infectious disease epidemiology, and stock price prediction. This work establishes a generalization bound for feed-forward neural networks on non-stationary φ-mixing data.
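    The abstract does not define φ-mixing; for context, the standard φ-mixing coefficient of a process (X_t) is

```latex
% Standard phi-mixing coefficient (given here for context; the abstract does
% not state it). The process is phi-mixing when \phi(n) \to 0 as n \to \infty.
\[
  \phi(n) \;=\; \sup_{k \ge 1}\;
  \sup_{\substack{A \in \sigma(X_1,\dots,X_k),\; \mathbb{P}(A) > 0 \\
                  B \in \sigma(X_{k+n}, X_{k+n+1}, \dots)}}
  \bigl|\,\mathbb{P}(B \mid A) - \mathbb{P}(B)\,\bigr|
\]
```

    so the process is φ-mixing when φ(n) → 0, i.e., the dependence between the past and the far future vanishes.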

    A Feature-based Generalizable Prediction Model for Both Perceptual and Abstract Reasoning

    A hallmark of human intelligence is the ability to infer abstract rules from limited experience and apply these rules to unfamiliar situations. This capacity is widely studied in the visual domain using Raven's Progressive Matrices. Recent advances in deep learning have produced multiple artificial neural network models that match or even surpass human performance. However, while humans can identify and express the rule underlying these tasks with little to no exposure, contemporary neural networks often rely on massive pattern-based training and cannot express or extrapolate the rule inferred from the task. Furthermore, most Raven's Progressive Matrices or Raven-like tasks used for neural network training use symbolic representations, whereas humans can flexibly switch between symbolic and continuous perceptual representations. In this work, we present an algorithmic approach to rule detection and application using feature detection, affine transformation estimation, and search. We applied our model to a simplified Raven's Progressive Matrices task previously designed for behavioral testing and neuroimaging in humans. The model exhibited one-shot learning and achieved near human-level performance in the symbolic reasoning condition of the simplified task. Furthermore, the model can express the relationships it discovers and generate multi-step predictions in accordance with the underlying rule. Finally, the model can reason using continuous patterns. We discuss our results and their relevance to studying abstract reasoning in humans, as well as their implications for improving intelligent machines.
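    To make the affine-transformation-estimation step concrete, here is a minimal sketch that fits an affine map between matched 2D feature coordinates by least squares and then extrapolates it; the synthetic point sets and the absence of a feature detector are simplifications, not the paper's pipeline:

```python
# Minimal sketch of the affine-estimation step: given matched 2D feature
# coordinates from two Raven-style panels, fit x' = A x + b by least squares,
# then apply the same transform to a third panel to predict the missing one.
# The point sets here are synthetic; the paper's feature detector is not shown.
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map sending src points (N,2) to dst points (N,2)."""
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3,2): stacked [A; b]
    return M

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = 0.5 * src + np.array([0.2, 0.1])             # rule: shrink and translate
M = fit_affine(src, dst)
print(apply_affine(M, np.array([[2.0, 2.0]])))     # extrapolates to ~[1.2, 1.1]
```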