
    Mathematics teachers’ work with resources: four cases of secondary teachers using technology

    This study examines teachers’ work with paper-based, technological and social resources through two theoretical frameworks: the Documentational approach and the Knowledge Quartet. The former affords a look at teachers’ resources and resource systems and how these are utilized under schemes of work. The latter affords a closer look at teachers’ work during lessons and at their knowledge-in-action. Specifically, the study investigates how four upper secondary teachers use, re-use and balance their resources by looking at their schemes of work in class through lesson observations, and by reflecting on the details of their work and knowledge-in-action in pre- and post-observation interviews. The analysis examines five themes in relation to teachers’ work. First, teachers use students’ contributions as a resource during lessons. Second, teachers connect (or do not connect) different resources. Third, institutional factors, such as examination requirements and school policy, have an impact on teachers’ decisions and on how they balance their resource use. Fourth, when mathematics-education software is used, teacher knowledge of the software comes into play. Fifth, there is ambiguity in the identification of contingency moments, particularly regarding whether these moments were anticipated (or not) or provoked by the teacher. These five themes also suggest theoretical findings. In relation to the Knowledge Quartet, the findings indicate the value of adding a few new codes or extending existing ones; this is especially pertinent in the context of teaching upper secondary mathematics with technology resources. In relation to the Documentational approach, this study introduces two constructs: scheme-in-action and re-scheming. A scheme-in-action is the scheme followed in class and documented from the classroom. Re-scheming is scheming again, or differently, from one lesson to another.
Finally, the study discusses implications for practice and proposes the use of key incidents extracted from classroom observations towards the development of teacher education resources (e.g. for the MathTASK programme).

    Clafer: Lightweight Modeling of Structure, Behaviour, and Variability

    Embedded software is growing fast in size and complexity, leading to an intimate mixture of complex architectures and complex control. Consequently, software specification requires modeling both the structure and the behaviour of systems. Unfortunately, existing languages do not integrate these aspects well, usually prioritizing one of them, and it is common to develop a separate language for each facet. In this paper, we contribute Clafer: a small language that attempts to tackle this challenge. It combines rich structural modeling with state-of-the-art behavioural formalisms. We are not aware of any other modeling language that seamlessly combines these facets common to system and software modeling. We show how Clafer, in a single unified syntax and semantics, allows capturing feature models (variability), component models, discrete control models (automata) and variability encompassing all these aspects. The language is built on top of first-order logic with quantifiers over basic entities (for modeling structure) combined with linear temporal logic (for modeling behaviour). On top of this semantic foundation we build a simple but expressive syntax, enriched with carefully selected syntactic expansions that cover hierarchical modeling, associations, automata, scenarios, and Dwyer's property patterns. We evaluate Clafer on a power window case study, comparing it against other notations that substantially overlap with its scope (SysML, AADL, Temporal OCL and Live Sequence Charts), and discuss the benefits and perils of using a single notation for this purpose.
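    The structural half of the semantics described above (first-order constraints over feature configurations) can be illustrated with a small brute-force sketch. The feature names and constraints below are a hypothetical power-window variant loosely inspired by the case study, not the paper's actual model, and the sketch is plain Python rather than Clafer syntax.

```python
from itertools import product

# Hypothetical feature model: each feature is a boolean, and the
# constraints in valid() play the role of the structural (first-order)
# part of a Clafer-style specification.
FEATURES = ["manual", "express", "pinch_protection"]

def valid(cfg):
    # express movement implies manual movement is also supported
    if cfg["express"] and not cfg["manual"]:
        return False
    # pinch protection only makes sense together with express movement
    if cfg["pinch_protection"] and not cfg["express"]:
        return False
    return True

def configurations():
    # enumerate all assignments and keep those satisfying the constraints
    for bits in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, bits))
        if valid(cfg):
            yield cfg

configs = list(configurations())
print(len(configs))  # → 4 valid variants out of 8 candidates
```

    A real Clafer model would express these constraints declaratively and add the temporal (LTL) layer on top; the enumeration above only shows why integrating variability and constraints in one notation is convenient.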

    A Conformally Invariant Holographic Two-Point Function on the Berger Sphere

    We apply our previous work on Green's functions for the four-dimensional quaternionic Taub-NUT manifold to obtain a scalar two-point function on the homogeneously squashed three-sphere (otherwise known as the Berger sphere), which lies at its conformal infinity. Using basic notions from conformal geometry and the theory of boundary value problems, in particular the Dirichlet-to-Robin operator, we establish that our two-point correlation function is conformally invariant and corresponds to a boundary operator of conformal dimension one. It is plausible that the methods we use could have more general applications in an AdS/CFT context. Comment: 1+49 pages, no figures. v2: Several typos corrected.
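    The statement that the boundary operator has conformal dimension one fixes the expected form of the correlator. As a hedged reminder of the standard CFT result (not a formula quoted from this paper):

```latex
\langle \mathcal{O}(x)\,\mathcal{O}(y) \rangle \;\propto\; \frac{1}{d(x,y)^{2\Delta}}, \qquad \Delta = 1,
```

    where $d(x,y)$ denotes a conformally covariant distance between boundary points on the Berger sphere. Conformal invariance determines such a two-point function up to overall normalization once $\Delta$ is known, which is why establishing the dimension is the essential step.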

    Object detection and activity recognition in digital image and video libraries

    This thesis is a comprehensive study of object-based image and video retrieval, specifically for car and human detection and activity recognition purposes. The thesis focuses on the problem of connecting low level features to high level semantics by developing relational object and activity representations. With the rapid growth of multimedia information in the form of digital image and video libraries, there is an increasing need for intelligent database management tools. Traditional text-based query systems built on a manual annotation process are impractical for today's large libraries, which require an efficient information retrieval system. For this purpose, a hierarchical information retrieval system is proposed where shape, color and motion characteristics of objects of interest are captured in compressed and uncompressed domains. The proposed retrieval method provides object detection and activity recognition at different resolution levels, from low complexity to low false rates. The thesis first examines extraction of low level features from images and videos using intensity, color and motion of pixels and blocks. Local consistency based on these features and geometrical characteristics of the regions is used to group object parts. The problem of managing the segmentation process is solved by a new approach that uses object-based knowledge in order to group the regions according to a global consistency. A new model-based segmentation algorithm is introduced that uses feedback from the relational representation of the object. The selected unary and binary attributes are further extended for application-specific algorithms. Object detection is achieved by matching the relational graphs of objects with the reference model. The major advantages of the algorithm are that it improves object extraction by reducing the dependence on the low level segmentation process and that it combines boundary and region properties.
The thesis then addresses the problem of object detection and activity recognition in the compressed domain in order to reduce computational complexity. New algorithms for object detection and activity recognition in JPEG images and MPEG videos are developed. It is shown that significant information can be obtained from the compressed domain in order to connect to high level semantics. Since our aim is to retrieve information from images and videos compressed using standard algorithms such as JPEG and MPEG, our approach differs from previous compressed-domain object detection techniques where the compression algorithms are governed by characteristics of the objects of interest to be retrieved. An algorithm is developed using principal component analysis of MPEG motion vectors to detect human activities, namely walking, running, and kicking. Object detection in JPEG compressed still images and MPEG I frames is achieved by using DC-DCT coefficients of the luminance and chrominance values in the graph-based object detection algorithm. The thesis finally addresses the problem of object detection in lower resolution and monochrome images. Specifically, it is demonstrated that the structural information of human silhouettes can be captured from AC-DCT coefficients.
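    The PCA step described above can be sketched in a few lines: flatten each frame's field of motion vectors into one row, project the rows onto the leading principal directions, and classify activities in the reduced space. The array shapes, number of components, and synthetic "walking"/"running" data below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_features(motion_vectors, k=2):
    """motion_vectors: (n_frames, n_blocks*2) array of flattened (dx, dy)
    per macroblock; returns the projection onto the top-k principal axes."""
    centered = motion_vectors - motion_vectors.mean(axis=0)
    # principal directions come from the SVD of the centered data matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T          # (n_frames, k)

# toy clips: "running" has larger motion magnitudes than "walking"
walking = rng.normal(0.0, 0.5, size=(30, 16))
running = rng.normal(0.0, 2.0, size=(30, 16))
clips = np.vstack([walking, running])

feats = pca_features(clips)
print(feats.shape)  # → (60, 2)
```

    A nearest-mean or similar classifier on `feats` would then separate the activity classes; the real system extracts the motion vectors directly from the MPEG stream instead of decoding the frames.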

    Purposive three-dimensional reconstruction by means of a controlled environment

    Retrieving 3D data using imaging devices is a relevant task for many applications in medical imaging, surveillance, industrial quality control, and other fields. As soon as we gain procedural control over the parameters of the imaging device, we encounter the necessity of well-defined reconstruction goals and need methods to achieve them; hence, we enter next-best-view planning. In this work, we present a formalization of the abstract view planning problem and deal with different planning aspects, with a focus on using an intensity camera without active illumination. As one aspect of view planning, employing a controlled environment also provides the planning and reconstruction methods with additional information. We incorporate the additional knowledge of camera parameters into the Kanade-Lucas-Tomasi method used for feature tracking. The resulting Guided KLT tracking method benefits from a constrained optimization space and yields improved accuracy while accounting for the uncertainty of the additional input. Serving other planning tasks dealing with known objects, we propose a method for coarse registration of 3D surface triangulations. By means of exact surface moments of surface triangulations we establish invariant surface descriptors based on moment invariants. These descriptors allow us to tackle tasks of surface registration, classification, retrieval, and clustering, which are also relevant to view planning. In the main part of this work, we present a modular, online approach to view planning for 3D reconstruction. Based on the outcome of the Guided KLT tracking, we design a planning module for accuracy optimization with respect to an extended E-criterion. Further planning modules provide non-discrete surface estimation and visibility analysis. The modular nature of the proposed planning system allows it to address a wide range of specific instances of view planning. The theoretical findings in this work are underpinned by experiments evaluating the relevant terms.
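    The exact surface moments mentioned above can be computed in closed form on a triangle mesh. As a minimal sketch, the zeroth-order example is the signed volume of a closed, consistently oriented triangulation, obtained via the divergence theorem as a sum of signed tetrahedron volumes against the origin; the descriptors in the work itself are more elaborate combinations of higher-order moments. The tetrahedron test mesh is an illustration.

```python
import numpy as np

def signed_volume(vertices, faces):
    """Signed volume of a closed triangle mesh with outward-oriented
    faces: sum of det(v0, v1, v2) / 6 over all triangles."""
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for a, b, c in faces:
        total += np.linalg.det(np.stack([v[a], v[b], v[c]]))
    return total / 6.0

# unit right tetrahedron; only the face not touching the origin
# contributes, giving volume 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(signed_volume(verts, faces))  # → 0.1666... (= 1/6)
```

    Ratios of suitably normalized moments are invariant under similarity transforms, which is what makes them usable as pose-independent descriptors for coarse registration.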

    Capture-based Automated Test Input Generation

    Testing object-oriented software is critical because object-oriented languages are commonly used in developing modern software systems. Many efficient test input generation techniques for object-oriented software have been proposed; however, state-of-the-art algorithms yield very low code coverage (e.g., less than 50%) on large-scale software. Therefore, one important and yet challenging problem is to generate desirable input objects for receivers and arguments that can achieve high code coverage (such as branch coverage) or help reveal bugs. Desirable objects help tests exercise new parts of the code. However, generating desirable objects has been a significant challenge for automated test input generation tools, partly because the search space for such objects is huge. To address this challenge, we propose a novel approach called Capture-based Automated Test Input Generation for Object-Oriented Unit Testing (CAPTIG). The contributions of this proposed research are the following. First, CAPTIG enhances method-sequence generation techniques. Our approach introduces a set of new algorithms for guided input and method selection that increase code coverage. In addition, CAPTIG efficiently reduces the amount of generated input. Second, CAPTIG captures objects dynamically from program execution during either system testing or real use. These captured inputs can support existing automated test input generation tools, such as the random testing tool Randoop, to achieve higher code coverage. Third, CAPTIG statically analyzes the observed branches that have not been covered and attempts to exercise them by mutating existing inputs, based on weakest-precondition analysis. This technique also contributes to achieving higher code coverage. Fourth, CAPTIG can be used to reproduce software crashes, based on crash stack traces. This feature can considerably reduce the cost of analyzing and removing the causes of crashes.
In addition, each CAPTIG technique can be independently applied to leverage existing testing techniques. We anticipate our approach can achieve higher code coverage in less time and with a smaller amount of test input. To evaluate this new approach, we performed experiments with well-known large-scale open-source software and found that our approach helps achieve higher code coverage in less time and with fewer test inputs.
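    The capture-and-replay idea at the heart of the second contribution can be sketched in miniature: record the concrete arguments a function receives during normal execution, then replay them later as generated test inputs. CAPTIG itself instruments Java programs; everything here (the decorator, the `CAPTURED` store, the example function) is a hypothetical plain-Python analogue, not the tool's API.

```python
import functools

# global store of captured invocations, keyed by function name
CAPTURED = {}

def capture(fn):
    """Record every (args, kwargs) pair the wrapped function is called with."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        CAPTURED.setdefault(fn.__name__, []).append((args, kwargs))
        return fn(*args, **kwargs)
    return wrapper

@capture
def normalize(path):
    return path.rstrip("/").lower()

# "real use" phase: realistic inputs are captured as a side effect
normalize("/Tmp/Data/")
normalize("/VAR/log")

# replay phase: reuse the captured inputs to drive the unit under test
# (iterate over a copy, since replaying re-captures each call)
for args, kwargs in list(CAPTURED["normalize"]):
    print(normalize(*args, **kwargs))
```

    The point of capturing is that inputs observed in real executions are far more likely to satisfy implicit invariants (valid paths, consistent object states) than randomly constructed ones, which is why they help random tools like Randoop reach deeper branches.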

    Two Steps Towards Kairos-Awareness

    This thesis describes research inspired by a concept from the classical discipline of rhetoric: kairos, the right moment to deliver a message in order to maximize its effect. The research followed two threads that ultimately led to the same end: maximizing the potential of technology to deliver the right interaction at the right time. The first research thread is an operationalization of the concept of kairos. It entailed the development of EveWorks and EveXL, a framework for capturing daily life events in mobile devices and a domain-specific language to express them, respectively. The widespread use of mobile devices and their proximity to their owners offer exceptional potential for capturing opportunities for interaction. Leveraging this potential, the EveWorks-EveXL dyad was developed to allow mobile application programmers to specify the precise delivery circumstances of an interaction in order to maximize its potential, i.e., to specify its kairos. In contrast to most event processing engines found in the literature, which implement data-based event models, the EveWorks-EveXL dyad proposes a model based on temporality, through the articulation of intervals of time. This is a more natural way of representing a concept as broad as “daily life events” since, across cultures, temporal concepts like duration and time intervals are fundamental to the way people make sense of their experience. The results of the present work demonstrate that the EveWorks-EveXL dyad makes for an adequate and interesting way to express contextual events, in a way that is “closer” to our everyday understanding of daily life. Ultimately, in user-centered applications, kairos can be influenced by the user’s emotional state, thereby making emotion assessment relevant.
Addressing this, as well as the growing interest in the topic of emotions in the scientific community, the second research thread of the present thesis led to the development of the CAAT, a widget designed to perform quick and reliable assessments of affective states – a paramount task in a variety of scientific fields, including HCI. While there are already a number of tools for this purpose, in psychology, emotion assessments are largely conducted through pen-and-paper questionnaires applied after the affective experience has occurred. As emotional states vary significantly over time, this entails the loss of important details, warranting the need for immediate, in situ measurements of affect. In line with this requirement, the CAAT enables quick emotion assessment in a reliable fashion, as attested by the results of the validation studies conducted to assess its overall viability along relevant dimensions of usability and psychometrics. As such, aside from being a good fit for longitudinal studies and applications wherever quick assessment of emotions is required, the CAAT has the potential to be integrated as one of EveWorks’ sensors, enhancing its ability to find that sometimes elusive opportunity for interaction, i.e., its kairos. In this way, it becomes apparent how the two threads of research of the current work may be intertwined into a consolidated contribution to the HCI field.
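    The interval-based event model described above can be sketched with Allen-style relations between time intervals: an interaction is delivered only when one interval (e.g. the phone being idle) falls inside another (e.g. a lunch break). This is a plain-Python analogue of the idea; the EveXL syntax, the interval names, and the time values below are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float  # e.g. minutes since midnight
    end: float

    def during(self, other):
        """Allen's 'during' relation: self lies strictly inside other."""
        return other.start < self.start and self.end < other.end

    def overlaps(self, other):
        """True if the two intervals share any time."""
        return self.start < other.end and other.start < self.end

lunch_break = Interval(720, 780)   # 12:00-13:00
phone_idle = Interval(730, 750)    # 12:10-12:30

# deliver the notification only while the phone is idle during lunch
if phone_idle.during(lunch_break):
    print("kairos: deliver now")   # → printed for these intervals
```

    Composing such relations over intervals captured from device sensors is what lets a rule express "the right moment" in terms of everyday temporal structure rather than raw data-change events.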