5,590 research outputs found

    Outcomes from Institutional Audit: 2009-11 Student engagement Third series


    Modeling and improving Spatial Data Infrastructure (SDI)

    Spatial Data Infrastructure (SDI) development is widely known to be a challenging process owing to its complex and dynamic nature. Although great effort has been made to conceptually explain the complexity and dynamics of SDIs, few studies thus far have actually modeled these complexities. Better modeling of SDI complexities will lead to more reliable plans for their development. A state-of-the-art simulation model of SDI development, hereafter referred to as SMSDI, was created using the system dynamics (SD) technique. The SMSDI enables policy-makers to test various investment scenarios in different aspects of SDI and helps them determine the optimum policy for further development of an SDI. This thesis begins with the adaptation of the SMSDI to a new case study in Tanzania using the community-of-participants concept; the model is then developed further using fuzzy logic. It is argued that the techniques and models proposed in this part of the study enable SDI planning to be conducted in a more reliable manner, which helps secure the support of stakeholders for the development of the SDI.

    Developing a collaborative platform such as an SDI highlights the differences among stakeholders, including the heterogeneous data they produce and share. This makes the reuse of spatial data difficult, mainly because the shared data need to be integrated with other datasets and used in applications that differ from those for which they were originally produced. The integration of authoritative data and Volunteered Geographic Information (VGI), which has a lower level of structure and less rigorous production standards, is a new and challenging area. The second part of this study therefore focuses on proposing techniques to improve the matching and integration of spatial datasets. It is shown that the proposed solutions, which are based on pattern recognition and ontology, can considerably improve the integration of spatial data in SDIs and enable the reuse or multipurpose usage of available data resources.
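    As a rough illustration of the system dynamics (SD) technique mentioned in this abstract, the sketch below integrates a single hypothetical stock ("SDI maturity") whose inflow depends on an investment rate, using simple Euler steps in Python. The stock, flows, and parameter values are illustrative assumptions only and are not taken from the SMSDI itself.

        # Minimal, hypothetical system-dynamics sketch: one stock ("SDI maturity")
        # driven by an investment-dependent inflow, integrated with Euler steps.
        # Structure and parameters are illustrative, not the actual SMSDI model.

        def simulate_sdi(investment_rate: float, years: int = 20, dt: float = 0.25):
            maturity = 0.1      # stock: fraction of "full" SDI development (0..1)
            capacity = 1.0      # saturation level of the stock
            decay = 0.03        # annual erosion of maturity if not maintained
            history = []
            for _ in range(int(years / dt)):
                inflow = investment_rate * maturity * (1 - maturity / capacity)  # S-shaped growth
                outflow = decay * maturity
                maturity += (inflow - outflow) * dt   # Euler integration of the stock
                history.append(maturity)
            return history

        # Compare two investment scenarios, as a policy-maker might with SMSDI.
        low = simulate_sdi(investment_rate=0.2)
        high = simulate_sdi(investment_rate=0.5)
        print(f"maturity after 20y: low={low[-1]:.2f}, high={high[-1]:.2f}")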

    Metacognition and Reflection by Interdisciplinary Experts: Insights from Cognitive Science and Philosophy

    Interdisciplinary understanding requires the integration of insights from different perspectives, yet it appears questionable whether disciplinary experts are well prepared for this. Indeed, psychological and cognitive-scientific studies suggest that expertise can be disadvantageous: experts are often more biased than non-experts, for example, or fixated on certain approaches and less flexible in novel situations or situations outside their domain of expertise. One explanation is that experts' conscious and unconscious cognition and behavior depend upon their learning and acquisition of a set of mental representations or knowledge structures. Compared to beginners in a field, experts have assembled a much larger set of representations that are also more complex, facilitating fast and adequate perception and response in relevant situations. This article argues that metacognition should be employed to mitigate such disadvantages of expertise: by metacognitively monitoring and regulating their own cognitive processes and representations, experts can prepare themselves for interdisciplinary understanding. Interdisciplinary collaboration is further facilitated by team metacognition about the team, its tasks, process, goals, and the representations developed within the team. Drawing attention to the need for metacognition, the article explains how philosophical reflection on the assumptions involved in different disciplinary perspectives must also be considered, in a process that is complementary to metacognition and not completely overlapping with it. (Disciplinary assumptions are here understood as determining and constraining how the complex mental representations of experts are chunked and structured.) The article concludes with a brief reflection on how the process of Reflective Equilibrium should be added to the processes of metacognition and philosophical reflection in order for experts involved in interdisciplinary collaboration to reach a justifiable and coherent form of interdisciplinary integration. An appendix of "Prompts or Questions for Metacognition" that can elicit metacognitive knowledge, monitoring, or regulation in individuals or teams is included at the end of the article.

    Catalytic Change: Lessons Learned from the Racial Justice Grantmaking Assessment

    ARC and PRE designed the Racial Justice Grantmaking Assessment to help foundation staff and leaders understand the benefits of being explicit about racial equity, and to determine the degree to which their work is advancing racial justice. This report is based on the pilot process and is intended to share insights into some of the barriers within the philanthropic sector that stand in the way of achieving racial justice outcomes. It is organized into five segments: this introduction, which provides brief profiles of ARC and PRE and of the assessment team; a description of the assessment process, including definitions, assumptions, and methodology; an overview of the assessments of the Consumer Health Foundation and the Barr Foundation, including brief profiles of each, summary findings, recommendations, and impacts to date; lessons learned from the pilot process by the ARC-PRE assessment team; and appendices with more detailed findings, recommendations, and initial impacts for each foundation.

    Digital Image

    This paper considers the ontological significance of invisibility in relation to the question 'what is a digital image?' Its argument, in a nutshell, is that the emphasis on visibility comes at the expense of latency and is symptomatic of the style of thinking that has dominated Western philosophy since Plato. This privileging of visible content necessarily binds images to linguistic (semiotic and structuralist) paradigms of interpretation which promote representation, subjectivity, identity and negation over multiplicity, indeterminacy and affect. Photography is the case in point because, until recently, critical approaches to photography had one thing in common: they all shared in the implicit and incontrovertible understanding that photographs are a medium that must be approached visually; they took it as a given that photographs are there to be looked at; and they all agreed that it is only through the practices of spectatorship that the secrets of the image can be unlocked. Whatever subsequent interpretations followed, the priority of vision in relation to the image remained unperturbed. This undisputed belief in the visibility of the image has such a strong grasp on theory that it imperceptibly bonds together otherwise dissimilar and sometimes contradictory methodologies, preventing them from noticing that which is the most unexplained about images: the precedence of looking itself. This self-evident truth of visibility casts a long shadow on image theory because it blocks the possibility of inquiring after everything that is invisible, latent and hidden.

    The World According to, and After, McCutcheon v. FEC, and Why It Matters


    Understanding and Comparing Scalable Gaussian Process Regression for Big Data

    As a non-parametric Bayesian model that produces informative predictive distributions, the Gaussian process (GP) has been widely used in various fields, such as regression, classification and optimization. The cubic complexity of the standard GP, however, leads to poor scalability, which poses challenges in the era of big data. Hence, various scalable GPs have been developed in the literature in order to improve scalability while retaining desirable prediction accuracy. This paper investigates the methodological characteristics and performance of representative global and local scalable GPs, including sparse approximations and local aggregations, from four main perspectives: scalability, capability, controllability and robustness. Numerical experiments on two toy examples and five real-world datasets with up to 250K points offer the following findings. In terms of scalability, most of the scalable GPs have a time complexity that is linear in the training size. In terms of capability, the sparse approximations capture long-term spatial correlations, whereas the local aggregations capture local patterns but suffer from over-fitting in some scenarios. In terms of controllability, the performance of sparse approximations can be improved by simply increasing the inducing size, but this is not the case for local aggregations. In terms of robustness, local aggregations are robust to various initializations of hyperparameters owing to the local attention mechanism. Finally, we highlight that a proper hybrid of global and local scalable GPs may be a promising way to improve both model capability and scalability for big data.
    Comment: 25 pages, 15 figures, preprint submitted to KB
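    As a rough illustration of the sparse-approximation family of scalable GPs discussed in this abstract, the NumPy sketch below implements Subset-of-Regressors (SoR) prediction with a small set of inducing points, so that the linear system solved is m x m rather than n x n. The kernel choice, inducing-point placement, noise level and toy data are illustrative assumptions and are not the code evaluated in the paper.

        import numpy as np

        def rbf(A, B, ls=1.0, var=1.0):
            """Squared-exponential kernel matrix between row sets A and B."""
            d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return var * np.exp(-0.5 * d2 / ls**2)

        def sor_predict(X, y, Xs, Z, noise=0.1):
            """Subset-of-Regressors sparse GP prediction with inducing points Z."""
            Kmm = rbf(Z, Z) + 1e-6 * np.eye(len(Z))   # inducing-inducing kernel (jittered)
            Kmn = rbf(Z, X)                           # inducing-training kernel
            Ksm = rbf(Xs, Z)                          # test-inducing kernel
            A = noise**2 * Kmm + Kmn @ Kmn.T          # m x m system instead of n x n
            mean = Ksm @ np.linalg.solve(A, Kmn @ y)
            var = noise**2 * np.einsum('ij,ji->i', Ksm, np.linalg.solve(A, Ksm.T))
            return mean, var

        # Toy 1-D regression problem with 500 points and 20 inducing points.
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, (500, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
        Z = np.linspace(-3, 3, 20)[:, None]
        Xs = np.linspace(-3, 3, 5)[:, None]
        mean, var = sor_predict(X, y, Xs, Z)
        print(np.round(mean, 2), np.round(var, 3))

    Increasing the number of inducing points in Z is the "controllability" knob referred to above: a larger inducing set generally improves accuracy at higher cost.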