
    The Measurement of Task Complexity and Cognitive Ability: Relational Complexity in Adult Reasoning

    The theory of relational complexity (RC) developed by Halford and his associates (Halford et al., 1998a) proposes that, in addition to the number of unique entities that can be processed in parallel, it is the structure (complexity) of the relations between these entities that most appropriately captures the essence of processing capacity limitations. Halford et al. propose that the relational complexity metric forms an ordinal scale along which both task complexity and an individual's processing capacity can be ranked. However, the underlying quantitative structure of the RC metric is largely unknown. It is argued that an assessment of the measurement properties of the RC metric is necessary to first demonstrate that the scale is able to rank order task complexity and cognitive capacity in adults. If, in addition to ordinal ranking, it can be demonstrated that a continuous monotonic scale underlies the ranking of capacity (the natural extension of the complexity classification), then the potential to improve our understanding of adult cognition is further realised.

    Using a combination of cognitive psychology and individual differences methodologies, this thesis explores the psychometric properties of RC in three high-level reasoning tasks. The Knight-Knave Task and the Sentence Comprehension Task come from the psychological literature. The third task, the Latin Square Task, was developed especially for this project to test the RC theory. An extensive RC analysis of the Knight-Knave Task is conducted using the Method for Analysis of Relational Complexity (MARC). Processing in the Knight-Knave Task has been previously explored using deduction rules and mental models. We have taken this work as the basis for applying MARC and attempted to model the substantial demands these problems make on limited working memory resources in terms of their relational structure. The RC of the Sentence Comprehension Task has been reported in the literature, and we further review and extend the empirical evidence for this task. The primary criterion imposed for developing the Latin Square Task was to minimise confounds that might weaken the identification and interpretation of a RC effect. Factors such as storage load and prior experience were minimised by specifying that the task should be novel, have a small number of general rules that could be mastered quickly by people of differing ages and abilities, and have no rules that are complexity-level specific.

    The strength of MARC lies in using RC to explicitly link the cognitive demand of a task with the capacity of the individual. The cognitive psychology approach predicts performance decrements with increased task complexity and primarily deals with data aggregated across task conditions (comparison of means). It is argued, however, that to minimise the subtle circularity created by validating a task's complexity using the same information that is used to validate the individual's processing capacity, an integration of the individual differences approach is necessary. The first major empirical study of the project evaluates the utility of the traditional dual-task approach to analyse the influence of the RC manipulation on the dual-task deficit. The Easy-to-Hard paradigm, a modification of the dual-task methodology, is used to explore the influence of individual differences in processing capacity as a function of RC. The second major empirical study explores the psychometric approach to cognitive complexity. The basic premise is that if RC is a manipulation of cognitive complexity in the traditional psychometric sense, then it should display similar psychometric properties. That is, increasing RC should result in an increasing monotonic relationship between task performance and Fluid Intelligence (Gf), the complexity-Gf effect. Results from the comparison-of-means approach indicate that, as expected, mean accuracy and response times differed reliably as a function of RC. An interaction between RC and Gf on task performance was also observed. The pattern of correlations was generally not consistent across RC tasks and is qualitatively different from the complexity-Gf effect in important ways. It is concluded that the Latin Square Task has sufficient measurement properties to allow us to discuss (i) how RC differs from complexity in tasks in which expected patterns of correlations are observed, (ii) what additional information needs to be considered to assist with the a priori identification of task characteristics that impose high cognitive demand, and (iii) the implications for understanding reasoning in dynamic and unconstrained environments outside the laboratory. We conclude that relational complexity theory provides a strong foundation from which to further explore the influence of individual differences in performance.
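
    To make the Latin Square Task's design concrete, the sketch below illustrates the single general rule the task is built on: each symbol must occur exactly once in every row and column. This is an illustrative Python reconstruction, not the task materials themselves; the grid, symbols and helper names are hypothetical.

        def is_latin_square(grid):
            """True when every row and column contains each symbol exactly once."""
            n = len(grid)
            symbols = set(range(1, n + 1))
            rows_ok = all(set(row) == symbols for row in grid)
            cols_ok = all({grid[r][c] for r in range(n)} == symbols for c in range(n))
            return rows_ok and cols_ok

        def fill_blank(grid, r, c):
            """Infer the value of the blank cell (r, c) from its row and column."""
            n = len(grid)
            used = set(grid[r]) | {grid[i][c] for i in range(n)}
            candidates = set(range(1, n + 1)) - used
            return candidates.pop() if len(candidates) == 1 else None

        # 4x4 example with one blank cell (marked 0); only one value fits.
        example = [[1, 2, 3, 4],
                   [2, 3, 4, 1],
                   [3, 4, 1, 2],
                   [4, 1, 2, 0]]
        print(fill_blank(example, 3, 3))  # -> 3

    Broadly, higher-RC items require integrating constraints from more rows and columns simultaneously to determine the blank cell, while the rule itself stays constant across complexity levels.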

    Assessing schematic knowledge of introductory probability theory

    The ability to identify schematic knowledge is an important goal for both assessment and instruction. In the current paper, schematic knowledge of statistical probability theory is explored from the declarative-procedural framework using multiple methods of assessment. A sample of 90 undergraduate introductory statistics students was required to classify 10 pairs of probability problems as similar or different; to identify whether 15 problems contained sufficient, irrelevant, or missing information (text-edit); and to solve 10 additional problems. The complexity of the schema on which the problems were based was also manipulated. Detailed analyses compared text-editing and solution accuracy as a function of text-editing category and schema complexity. Results showed that text-editing tends to be easier than solution and is differentially sensitive to schema complexity. While text-editing and classification were correlated with solution, only text-editing problems with missing information uniquely predicted success. In light of previous research, these results suggest that text-editing is suitable for supplementing the assessment of schematic knowledge in development.

    Intelligence IS Cognitive Flexibility: Why Multilevel Models of Within-Individual Processes Are Needed to Realise This

    Despite substantial evidence for the link between an individual's intelligence and successful life outcomes, questions about what defines intelligence have remained the focus of heated dispute. The most common approach to understanding intelligence has been to investigate what performance on tests of intellect is and is not associated with. This psychometric approach, based on correlations and factor analysis, is deficient. In this review, we aim to substantiate why classic psychometrics, which focus on between-person accounts, will necessarily provide a limited account of intelligence until theoretical considerations of within-person accounts are incorporated. First, we consider the impact of entrenched psychometric presumptions that support the status quo and impede alternative views. Second, we review the importance of process theories, which are critical for any serious attempt to build a within-person account of intelligence. Third, features of dynamic tasks are reviewed, and we outline how static tasks can be modified to target within-person processes. Finally, we explain how multilevel models are conceptually and psychometrically well suited to building and testing within-individual notions of intelligence, which, we argue, is at its core cognitive flexibility. We conclude by describing an application of these ideas in the context of microworlds as a case study.
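
    As a concrete illustration of the multilevel approach advocated above, the following sketch fits a random-slopes model to hypothetical trial-level data, so that each person's sensitivity to task complexity is estimated as a within-person quantity rather than averaged into a between-person correlation. The data, variable names and effect sizes are invented for illustration; statsmodels' mixedlm is used here, though any mixed-model package would do.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulate trial-level data: each subject has their own baseline
        # (between-person) and their own complexity slope (within-person).
        rng = np.random.default_rng(42)
        rows = []
        for s in range(50):
            ability = rng.normal()                    # between-person differences
            slope = -0.5 + rng.normal(scale=0.2)      # person-specific complexity effect
            for _ in range(40):
                complexity = int(rng.integers(1, 5))
                accuracy = 2.0 + ability + slope * complexity + rng.normal(scale=0.5)
                rows.append({"subject": s, "complexity": complexity, "accuracy": accuracy})
        data = pd.DataFrame(rows)

        # Random intercepts and random complexity slopes per subject: the random
        # slopes are exactly the within-person quantities that a between-person
        # factor analysis would average away.
        model = smf.mixedlm("accuracy ~ complexity", data,
                            groups=data["subject"], re_formula="~complexity")
        print(model.fit().summary())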

    A Novel Approach to Measuring an Old Construct: Aligning the Conceptualisation and Operationalisation of Cognitive Flexibility

    A successful adjustment to dynamic changes in one's environment requires contingent adaptive behaviour. Such behaviour is underpinned by cognitive flexibility, which conceptually is part of fluid intelligence. We argue, however, that conventional approaches to measuring fluid intelligence are insufficient for capturing cognitive flexibility. We address the discrepancy between conceptualisation and operationalisation by introducing two newly developed tasks that aim at capturing within-person processes of dealing with novelty. In an exploratory proof-of-concept study, the two flexibility tasks were administered to 307 university students, together with a battery of conventional measures of fluid intelligence. Participants also provided information about their Grade Point Averages obtained in high school and in their first year at university. We tested (1) whether an experimental manipulation of a requirement for cognitive inhibition resulted in systematic differences in difficulty, (2) whether these complexity differences reflect psychometrically differentiable effects, and (3) whether these newly developed flexibility tasks show incremental value in predicting success in the transition from high school to university over conventional operationalisations of fluid intelligence. Our findings support the notion that cognitive flexibility, when conceptualised and operationalised as individual differences in within-person processes of dealing with novelty, more appropriately reflects the dynamics of individuals' behaviour when attempting to cope with changing demands.
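
    The incremental-validity claim in point (3) is typically tested by comparing nested regression models. The sketch below shows that logic on simulated data; the variable names, effect sizes and use of statsmodels are illustrative assumptions, not the study's actual analysis.

        import numpy as np
        import statsmodels.api as sm

        # Simulated scores: Gf composite, flexibility task, and GPA criterion.
        rng = np.random.default_rng(0)
        n = 307
        gf = rng.normal(size=n)
        flex = 0.5 * gf + rng.normal(scale=0.9, size=n)
        gpa = 0.4 * gf + 0.2 * flex + rng.normal(scale=0.8, size=n)

        # Nested models: Gf alone versus Gf plus the flexibility task.
        base = sm.OLS(gpa, sm.add_constant(gf)).fit()
        full = sm.OLS(gpa, sm.add_constant(np.column_stack([gf, flex]))).fit()

        # Incremental value = gain in explained variance from adding flexibility.
        print(f"R^2 (Gf only):   {base.rsquared:.3f}")
        print(f"R^2 (Gf + flex): {full.rsquared:.3f}")
        print(f"Delta R^2:       {full.rsquared - base.rsquared:.3f}")
        f_stat, p_value, df_diff = full.compare_f_test(base)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")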

    Ensembl regulation resources

    New experimental techniques in epigenomics allow researchers to assay a diversity of highly dynamic features such as histone marks, DNA modifications or chromatin structure. The study of their fluctuations should provide insights into gene expression regulation, cell differentiation and disease. The Ensembl project collects and maintains the Ensembl regulation data resources on epigenetic marks, transcription factor binding and DNA methylation for human and mouse, as well as microarray probe mappings and annotations for a variety of chordate genomes. From this data, we produce a functional annotation of the regulatory elements along the human and mouse genomes, with plans to expand to other species as data becomes available. Starting from well-studied cell lines, we will progressively expand our library of measurements to a greater variety of samples. Ensembl's regulation resources provide a central and easy-to-query repository for reference epigenomes. As with all Ensembl data, it is freely available at http://www.ensembl.org, from the Perl and REST APIs and from the public Ensembl MySQL database server at ensembldb.ensembl.org. Database URL: http://www.ensembl.org. Funding: Wellcome Trust grant (WT098051); National Human Genome Research Institute grants (U41HG007234, 1U01HG004695); Biotechnology and Biological Sciences Research Council grant (BB/L024225/1); European Molecular Biology Laboratory; European Union's Seventh Framework Programme; European Research Council.
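
    As a concrete example of the programmatic access mentioned above, the following sketch queries the public REST service for regulatory features overlapping a genomic region. The endpoint and field names follow the publicly documented REST API at rest.ensembl.org rather than anything specified in the abstract, and the example region is arbitrary.

        import requests

        SERVER = "https://rest.ensembl.org"
        region = "1:1000000-1100000"  # example region on human chromosome 1

        # Overlap endpoint with feature=regulatory returns regulatory features
        # (promoters, enhancers, etc.) annotated in the queried interval.
        resp = requests.get(
            f"{SERVER}/overlap/region/human/{region}",
            params={"feature": "regulatory"},
            headers={"Content-Type": "application/json"},
        )
        resp.raise_for_status()

        for feat in resp.json():
            # .get() is used because returned fields can vary by feature type.
            print(feat.get("id"), feat.get("feature_type"),
                  feat.get("start"), feat.get("end"))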

    Finding and sharing: new approaches to registries of databases and services for the biomedical sciences

    The recent explosion of biological data and the concomitant proliferation of distributed databases make it challenging for biologists and bioinformaticians to discover the best data resources for their needs, and the most efficient way to access and use them. Despite a rapid acceleration in uptake of syntactic and semantic standards for interoperability, it is still difficult for users to find which databases support the standards and interfaces that they need. To solve these problems, several groups are developing registries of databases that capture key metadata describing the biological scope, utility, accessibility, ease-of-use and existence of web services allowing interoperability between resources. Here, we describe some of these initiatives, including a novel formalism, the Database Description Framework, for describing database operations and functionality and encouraging good database practice. We expect such approaches will result in improved discovery, uptake and utilization of data resources. Database URL: http://www.casimir.org.uk/casimir_dd

    Ensembl’s 10th year

    Ensembl (http://www.ensembl.org) integrates genomic information for a comprehensive set of chordate genomes with a particular focus on resources for human, mouse, rat, zebrafish and other high-value sequenced genomes. We provide complete gene annotations for all supported species in addition to specific resources that target genome variation, function and evolution. Ensembl data is accessible in a variety of formats, including via our genome browser, API and BioMart. This year marks the tenth anniversary of Ensembl, and in that time the project has grown with advances in genome technology. As of release 56 (September 2009), Ensembl supports 51 species including marmoset, pig, zebra finch, lizard, gorilla and wallaby, which were added in the past year. Major additions and improvements to Ensembl since our previous report include the incorporation of the human GRCh37 assembly, enhanced visualisation and data-mining options for the Ensembl regulatory features and continued development of our software infrastructure.
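
    For readers wanting programmatic access to the gene annotations described above, here is a minimal sketch of retrieving one gene record. Note that the API referred to in this 2009-era report was the Perl API; the REST service shown below came later, so this illustrates the access model rather than the interface available at the time, and the gene symbol is an arbitrary example.

        import requests

        # Look up a gene annotation by symbol via the present-day REST service.
        resp = requests.get(
            "https://rest.ensembl.org/lookup/symbol/homo_sapiens/BRCA2",
            headers={"Content-Type": "application/json"},
        )
        resp.raise_for_status()
        gene = resp.json()

        # Print the stable ID, genomic coordinates and biotype of the gene.
        print(gene.get("id"), gene.get("seq_region_name"),
              gene.get("start"), gene.get("end"), gene.get("biotype"))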