
    Aspect-Oriented Programming with Type Classes

    We consider the problem of adding aspects to a strongly typed language which supports type classes. We show that type classes, as supported by the Glasgow Haskell Compiler, can model an AOP style of programming via a simple syntax-directed transformation scheme in which AOP programming idioms are mapped to type classes. The drawback of this approach is that we cannot easily advise functions in programs which carry type annotations. We sketch a more principled approach which is free of such problems by combining ideas from intensional type analysis with advanced overloading resolution strategies. Our results show that type-directed static weaving is closely related to type class resolution, the process of typing and translating type class programs.
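    The flavor of this encoding can be sketched in a few lines of Haskell. This is a minimal illustration of modeling advice with a type class, not the paper's actual transformation scheme; the names Advised, around, double, and advisedDouble are invented for the sketch.

```haskell
-- The base function we would like to advise.
double :: Int -> Int
double x = x * 2

-- An aspect is modeled as a type class; instance selection plays the
-- role of pointcut matching during type-directed static weaving.
class Advised a where
  around :: (a -> a) -> a -> a
  around proceed = proceed  -- default: no advice applies at this type

-- "Around" advice at type Int: run the base computation ("proceed"),
-- then cap its result, mimicking advice that intercepts the call.
instance Advised Int where
  around proceed x = min 100 (proceed x)

-- The woven function: type class resolution selects the advice (or the
-- no-op default) according to the type at the call site.
advisedDouble :: Int -> Int
advisedDouble = around double

main :: IO ()
main = print (advisedDouble 3, advisedDouble 80)  -- (6,100)
```

    Here GHC's ordinary overloading resolution does the weaving: changing the instance changes the advice without touching the base function.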

    Transformative Learning: Lessons from First-Semester Honors Narratives

    Although the National Collegiate Honors Council has clearly articulated the common characteristics of “fully developed” honors programs and colleges, these elements describe the structures and processes that frame honors education but do not directly describe the intended honors outcomes for student learners (Spurrier). Implicitly, however, the intended outcomes of distinct curricula, smaller course sizes, honors living communities, international programming, capstone or thesis requirements, and any number of other innovative forms of pedagogy are qualitatively different from faster degree completion, better jobs, or higher recognition at graduation. When intentionally directed, honors education promotes the full transformation of the student (Mihelich, Storrs, & Pellet). Both the potential and challenges inherent in promoting transformative learning have a long and rich tradition in the scholarship of pedagogy, with different theorists prioritizing distinct features of the process and targeting different outcomes. Dewey, Freire, and Mezirow (in Transformative Dimensions), for instance, each argue, independent of the specifics of their models, that transformation is best accomplished when it is the explicit goal and attention is given to facilitating key learning processes. While honors programs may be well positioned to support these learning processes and while transformation may be an implicit goal of honors education, few honors mission statements frame learning goals in these terms (Bartelds, Drayer, & Wolfensberger). Working from the premise that honors education is well-situated to make transformative learning a higher-order goal in an era of debates about learning outcomes and metrics of change (e.g., Digby), we examine the personal transformation experiences of first-semester honors students and explore how the intentional processes integrated into these experiences played a role in that transformation (Camarena & Pauley). To put this work in context, we first describe the transformative learning models and identify the intentional structures built into the first-semester honors experience.

    Non-human Intention and Meaning-Making: An Ecological Theory

    © Springer Nature Switzerland AG 2019. The final publication is available at Springer via https://doi.org/10.1007/978-3-319-97550-4_12
    Social robots have the potential to problematize many attributes that have previously been considered, in philosophical discourse, to be unique to human beings. Thus, if one construes the explicit programming of robots as constituting specific objectives, and the overall design and structure of AI as having aims in the sense of embedded directives, one might conclude that social robots are motivated to fulfil these objectives and therefore act intentionally towards fulfilling those goals. The purpose of this paper is to consider the impact of this description of social robotics on traditional notions of intention and meaning-making, and, in particular, to link meaning-making to a social ecology that is being impacted by the presence of social robots. To the extent that intelligent non-human agents occupy our world alongside us, this paper suggests that there is no benefit in differentiating them from human agents, because they are actively changing the context that we share with them and therefore influencing our meaning-making like any other agent. This is not suggested as some kind of Turing Test, in which we can no longer differentiate between humans and robots, but rather as an observation that the argument in which human agency is defined in terms of free will, motivation, and intention can equally be used as a description of the agency of social robots. Furthermore, all of this occurs within a shared context in which the actions of the human impinge upon the non-human, and vice versa, thereby problematising Anscombe's classic account of intention.