
    More-than-words: Reconceptualising Two-year-old Children’s Onto-epistemologies Through Improvisation and the Temporal Arts

    This thesis project takes place at a time of increasing focus upon two-year-old children and the words they speak. On the one hand, there is mounting pressure, driven by the school readiness agenda, to make children talk as early as possible. On the other hand, there is an increased interest in understanding children’s communication in order to create effective pedagogies. More-than-words (MTW) examines an improvised art-education practice that combines heterogeneous elements: sound, movement and materials (such as silk, string, light) to create encounters for young children, educators and practitioners from diverse backgrounds. During these encounters, adults adopt a practice of stripping back their words in order to tune into the polyphonic ways that children are becoming-with the world. For this research-creation, two MTW sessions for two-year-old children and their carers took place in a specially created installation. These sessions were filmed on a 360˚ camera, a nursery school iPad and a specially made child-friendly Toddler-cam (Tcam) that rolled around in the installation-event with the children. Through using the frameless technology of 360˚ film, I hoped to make tangible the relation and movement of an emergent and improvised happening and the way in which young children operate fluidly through multiple modes. Travelling with posthuman, Deleuzio-Guattarian and feminist vital material philosophy, I wander and wonder speculatively through practice, memory, and film data as a bag lady, a Haraway-ian writer/artist/researcher-creator who resists the story of the wordless child as lacking and tragic; the story that positions the word as heroic. Instead, through returning to the uncertainty of improvisation, I attempt to tune into the savage, untamed and wild music of young children’s animistic onto-epistemologies.

    SMEs in the (food) global value chain : a European private law perspective

    Defence date: 28 January 2020. Examining Board: Professor Hans-W. Micklitz (supervisor), European University Institute; Professor Martijn Hesselink, European University Institute; Professor Antonina Bakardjieva Engelbrekt, Stockholm University; Professor Sergio Cámara Lapuente, University of La Rioja. This dissertation is about the approach of EU private law towards the regulation of fair trading practices along the global value chain and about the parallel development of SMEs as a new legal status. The thesis starts from the assumption that the transformation of the global economy into global supply chains has undermined traditional private laws as historically embodying the diverse cultural traditions and socioeconomic realities of the member states. These traditions portray the socioeconomic role of small businesses in various ways. However, the conventional schemas of national private laws struggle, both in their substance and enforcement dimensions, with the destabilizing effect brought about by the global chain. At the same time, the supply chain has provided leeway for innovative forms of private regulation by means of contract. The EU uses this leeway to manage persistent national differences in B2b trading practices. By means of co-regulation, the EU transforms national fair trading laws through three parallel mechanisms: the re-definition of SMEs as actors in the internal market; the establishment of new mechanisms for enforcement; and the promotion of new substantive standards for trading practices.

    Forest planning utilizing high spatial resolution data

    This thesis presents planning approaches adapted for high spatial resolution data from remote sensing and evaluates whether such approaches can enhance the provision of ecosystem services from forests. The presented methods are compared with conventional, stand-level methods. The main focus lies on the planning concept of dynamic treatment units (DTU), where treatments in small units for modelling ecosystem processes and forest management are clustered spatiotemporally to form treatment units realistic in practical forestry. The methodological foundation of the thesis is mainly airborne laser scanning data (12.5 × 12.5 m raster cells), different optimization methods and the forest decision support system Heureka. Paper I demonstrates a mixed-integer programming model for DTU planning, and the results highlight the economic advantages of clustering harvests. Papers II and III present an addition to a DTU heuristic from the literature and further evaluate its performance. Results show that direct modelling of fixed costs for harvest operations can improve plans and that DTU planning enhances the economic outcome of forestry. The higher spatial resolution of data in the DTU approach enables the planning model to assign management with higher precision than if stand-based planning is applied. Paper IV evaluates whether this phenomenon is also valid for ecological values. Here, an approach adapted for cell-level data is compared to a schematic approach, dealing with stand-level data, for the purpose of allocating retention patches. The evaluation of economic and ecological values indicates that high spatial resolution data and an adapted planning approach increased the ecological values, while differences in economy were small.
In conclusion, the studies in this thesis demonstrate how forest planning can utilize high spatial resolution data from remote sensing, and the results suggest that there is a potential to increase the overall provision of ecosystem services if such methods are applied.
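The spatiotemporal clustering at the heart of the DTU concept can be pictured with a deliberately simplified sketch. Everything here is an assumption for illustration: a toy grid of per-cell harvest values, 4-neighbour adjacency, and a greedy depth-first growth rule. The thesis itself relies on mixed-integer programming, a published heuristic and the Heureka system, not this naive procedure.

```python
def cluster_cells(values, min_value):
    """Group adjacent raster cells (4-neighbourhood) into treatment units,
    growing each unit greedily until its summed value reaches min_value.
    Returns a list of (member_cells, total_value) tuples."""
    rows, cols = len(values), len(values[0])
    unit = [[None] * cols for _ in range(rows)]  # unit id per cell
    units = []
    for r in range(rows):
        for c in range(cols):
            if unit[r][c] is not None:
                continue  # cell already belongs to a unit
            stack, members, total = [(r, c)], [], 0.0
            while stack and total < min_value:
                cr, cc = stack.pop()
                if not (0 <= cr < rows and 0 <= cc < cols):
                    continue  # outside the grid
                if unit[cr][cc] is not None:
                    continue  # already assigned
                unit[cr][cc] = len(units)
                members.append((cr, cc))
                total += values[cr][cc]
                # expand to the four orthogonal neighbours
                stack.extend([(cr + 1, cc), (cr - 1, cc),
                              (cr, cc + 1), (cr, cc - 1)])
            units.append((members, total))
    return units
```

A unit stops growing once its summed value reaches the threshold, loosely mimicking how cell-level treatments are aggregated until a unit becomes realistic for practical forestry.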

    The development of an international model for technology adoption: the case of Hong Kong

    The purpose of this study is to examine the causal relationships between the formation of a decision-maker’s internal beliefs about technology adoption and the extent of the development of a technology adoptive behaviour. In particular, this study aims to develop an International Model For Technology Adoption (IMTA), which builds upon the Theory of Planned Behaviour (Ajzen 1992) and improves on the framework of the Technology Acceptance Model (Davis 1986). The development of such a model requires an understanding of the environmental factors which shape the cognitive processes of the decision-maker. Hence, this is a behavioural model which investigates the constructs influencing adoption behaviour and how the interaction between these constructs and external variables can impact the decision-making process at the level of the firm. Previous research on technology transfer and innovation diffusion has classified factors affecting the diffusion process into two dimensions: 1) external-influence and 2) internal-influence. Hence, in this research, the International Model For Technology Adoption looks at how the endogenous and exogenous factors enter into the cognitive process of a technology adoption decision through which attitudes and behavioural intentions are shaped. Under the IMTA, the behavioural intention to adopt is a function of two exogenous variables: 1) Strategic Choice, and 2) Environmental Control. The Environmental Control factor is further categorised by two exogenous factors, namely: 1) Government Influence, and 2) Competitive Influence. In addition, the Competitive Influence factor is, in turn, classified into five forces, namely: 1) Industry Structure, 2) Price Intensity, 3) Demand Uncertainty, 4) Information Exposure, and 5) Domestic Availability.
Regarding the cognitive process which forms the attitude to adopt, it is hypothesised to be affected by six other endogenous beliefs: 1) Compatibility; 2) Enhanced Value; 3) Perceived Benefits; 4) Adaptive Experiences; 5) Perceived Difficulty; and 6) Suppliers’ Commitment. A survey research method was utilised in this study and the research instrument was developed after a comprehensive review of the relevant literature and an expert interview. A total of 298 completed questionnaires were returned, giving a response rate of 13.56%. Of the 298 questionnaires, 39 of the responses were unusable due to missing data. This gives a total of 259 usable questionnaires and an effective response rate of 11.78%. The results of the analysis suggested that the fit of the International Model For Technology Adoption was good and the data of this study supported the overall structure of the IMTA. When compared with the null model, which was used by EQS as a baseline model to judge the overall fit of the IMTA, the IMTA yielded a value of 0.914 in the Comparative Fit Index, hence an indication of a well-fitting model. In addition, the results of the principal component analysis also illustrated that the 16-factor International Model For Technology Adoption was an adequate model to capture the information collected during the survey. The results showed that this 16-factor structure represented nearly 77% of the total variance of all items. A further analysis of the factor structure, again, revealed a close match between the conceptual dimensionality of the International Model For Technology Adoption and the empirical data collected in the survey. However, the results of the hypotheses testing on the individual constructs were mixed. While not all of the ten hypothesised effects were statistically significant, almost all pointed in the direction conceptualised by the IMTA.
From these results, it can be interpreted that while the structural equation modelling analysis provided overall support for the International Model For Technology Adoption, the results for individual constructs of the Model revealed that some constructs had a larger impact than others in the decision-making process to adopt foreign technology. In particular, the intention to adopt was greatly affected by the attitude of the prospective adopters, the influence of the government and the degree of industry rivalry. However, the impact of the overall competitive influence factor on the intention to adopt was not supported by the results. Likewise, the existence of investment alternatives was not a serious concern for the prospective adopters.
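The Comparative Fit Index cited above measures how much better the fitted model does than the baseline null model. The following is a minimal sketch of the standard CFI computation; the chi-square and degrees-of-freedom values in the test are invented for illustration and are not the study’s actual statistics.

```python
def comparative_fit_index(chi2_model, df_model, chi2_null, df_null):
    """Bentler's CFI: one minus the ratio of the fitted model's
    non-centrality (chi-square minus degrees of freedom, floored at 0)
    to the null model's non-centrality."""
    d_model = max(chi2_model - df_model, 0.0)
    # the denominator is constrained to be at least as large as the numerator
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0
```

Values at or above roughly 0.90, such as the 0.914 reported here, are conventionally read as an acceptable fit.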

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Spectrum auctions: designing markets to benefit the public, industry and the economy

    Access to the radio spectrum is vital for modern digital communication. It is an essential component for smartphone capabilities, the Cloud, the Internet of Things, autonomous vehicles, and multiple other new technologies. Governments use spectrum auctions to decide which companies should use what parts of the radio spectrum. Successful auctions can fuel rapid innovation in products and services, unlock substantial economic benefits, build comparative advantage across all regions, and create billions of dollars of government revenues. Poor auction strategies can leave bandwidth unsold and delay innovation, sell national assets to firms too cheaply, or create uncompetitive markets with high mobile prices and patchy coverage that stifles economic growth. Corporate bidders regularly complain that auctions raise their costs, while government critics argue that insufficient revenues are raised. The cross-national record shows many examples of both highly successful auctions and miserable failures. Drawing on experience from the UK and other countries, senior regulator Geoffrey Myers explains how to optimise the regulatory design of auctions, from initial planning to final implementation. Spectrum Auctions offers unrivalled expertise for regulators and economists engaged in practical auction design or company executives planning bidding strategies. For applied economists, teachers, and advanced students this book provides deep insights into market design and public management. Providing clear analytical frameworks, case studies of auctions, and stage-by-stage advice, it is essential reading for anyone interested in designing successful spectrum auctions that serve the public interest.

    A Critical Review Of Post-Secondary Education Writing During A 21st Century Education Revolution

    Educational materials are effective instruments which provide information and report new discoveries uncovered by researchers in specific areas of academia. Higher education, like other education institutions, relies on instructional materials to inform its practice of educating adult learners. In post-secondary education, developmental English programs are tasked with meeting the needs of dynamic populations, thus there is a continuous need for research in this area to support its changing landscape. However, the majority of scholarly thought in this area centers on K-12 reading and writing. This paucity presents a challenge for the post-secondary community. This research study uses a qualitative content analysis to examine peer-reviewed journals from 2003-2017, developmental online websites, and a government-issued document directed toward reforming post-secondary developmental education programs. These highly relevant sources aid educators in discovering informational support to apply best practices for student success. Developmental education serves the purpose of addressing literacy gaps for students transitioning to college-level work. The findings here illuminate the dearth of material offered to developmental educators. This study suggests the field of literacy research is fragmented and highlights an apparent blind spot in scholarly literature with regard to English writing instruction. This poses a quandary for post-secondary literacy researchers in the 21st century and establishes the necessity for the literacy research community to commit future scholarship toward equipping college educators teaching writing to underprepared adult learners.

    2023-2024 Catalog

    The 2023-2024 Governors State University Undergraduate and Graduate Catalog is a comprehensive listing of current information regarding: Degree Requirements, Course Offerings, and Undergraduate and Graduate Rules and Regulations.

    Undergraduate Catalog of Studies, 2022-2023


    Modular lifelong machine learning

    Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge. Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.
First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures. Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations. Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improvement in anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods. Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
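The search over module combinations that frameworks like PICLE are designed to make tractable can be pictured with a toy brute-force version. The module names, the two-layer library and the scoring function below are invented for illustration; an exhaustive enumeration like this grows exponentially with the number of layers, which is exactly why guided, probabilistic search over module combinations is needed.

```python
from itertools import product

def best_module_path(library, score):
    """Score every combination of one module per layer (pre-trained
    modules alongside new ones) and return the best-scoring path.
    `library` is a list of per-layer module lists."""
    best_path, best_score = None, float("-inf")
    for path in product(*library):
        s = score(path)  # e.g. validation accuracy of the assembled network
        if s > best_score:
            best_path, best_score = path, s
    return best_path, best_score
```

In a real modular LML system the score would come from training and validating the assembled network on the new problem, and the search would be guided rather than exhaustive.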