
    The use of technology for improving throughput rates in an ODL context by lecturers in the School of Computing

    The improvement of throughput rates is a crucial factor at higher education institutions; hence, university departments focus on improving pass rates per module. This study investigated how lecturers in the School of Computing (SoC) at the University of South Africa use technology to improve throughput rates in an Open Distance Learning (ODL) context. The study sought answers to the main research question of how lecturers in the SoC use technology to improve throughput rates in an ODL institution. A mixed-methods approach was used, in which quantitative data extracted from the university systems was integrated with qualitative data collected from interviews. Thirteen lecturers responsible for the thirty modules under investigation were interviewed. A thematic analysis was applied to the qualitative data, and the quantitative data was analysed using rankings and correlation coefficients, leading to the interpretation that the use of myUnisa technology helped to improve throughput on the modules.
    Mathematics Education; M. Sc. (Computing Education)
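    The ranking-and-correlation analysis described above can be illustrated with a Spearman rank correlation between per-module technology use and pass rates. The figures below are hypothetical, not the study's data; this is a minimal sketch of the technique, not the authors' actual analysis:

```python
# Hypothetical per-module data: a myUnisa activity score and a pass rate (%).
activity = [72, 55, 90, 40, 66, 81]
pass_rate = [68, 52, 75, 45, 60, 71]

def ranks(xs):
    """Rank values from 1 (smallest); ties are not handled, for simplicity."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman's rho via the squared rank-difference formula."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

rho = spearman(activity, pass_rate)
```

    A rho near +1 would indicate that modules ranking high on technology use also rank high on pass rate, which is the kind of evidence the abstract's interpretation rests on.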

    OSS architecture for mixed-criticality systems – a dual view from a software and system engineering perspective

    Computer-based automation in industrial appliances has led to a growing number of logically dependent but physically separated embedded control units per appliance. Many of those components are safety-critical systems and require adherence to safety standards, which conflicts with the relentless demand for features in those appliances. Features lead to a growing number of control units per appliance and to an increasing complexity of the overall software stack, both unfavourable for safety certifications. Modern CPUs provide means to revise the traditional separation-of-concerns design primitive: the consolidation of systems, which yields new engineering challenges that concern the entire software and system stack. Multi-core CPUs favour the economic consolidation of formerly separated systems into a single efficient hardware unit. Nonetheless, the system architecture must provide means to guarantee freedom from interference between domains of different criticality. System consolidation demands architectural and engineering strategies to fulfil requirements (e.g., real-time or certifiability criteria) in safety-critical environments. In parallel, there is an ongoing trend to replace ordinary proprietary base-platform software components with mature OSS variants for economic and engineering reasons. However, there are fundamental differences in the processual properties of OSS and proprietary development processes. Using OSS in safety-critical systems therefore requires development-process assessment techniques that build an evidence-based foundation for certification efforts, grounded in empirical software engineering methods. In this thesis, I approach the problem from both sides: the software and the system engineering perspective. In the first part of this thesis, I focus on the assessment of OSS components: I develop software engineering techniques that quantify characteristics of distributed OSS development processes.
    I show that ex-post analyses of software development processes can serve as a foundation for certification efforts, as required for safety-critical systems. In the second part of this thesis, I present a system architecture based on OSS components that allows for the consolidation of mixed-criticality systems on a single platform. To that end, I exploit the virtualisation extensions of modern CPUs to strictly isolate domains of different criticality. The proposed architecture eliminates any remaining hypervisor activity in order to preserve the real-time capabilities of the hardware by design, while guaranteeing strict isolation across domains.
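    An ex-post analysis of an OSS development process, as described above, might for example quantify how consistently commits carry review evidence. The commit records below are hypothetical (a real analysis would parse `git log` trailers such as `Reviewed-by:`); this sketch only illustrates the general idea of deriving a process metric from repository history:

```python
# Hypothetical commit history; each record lists the commit's trailers.
commits = [
    {"author": "alice", "trailers": ["Reviewed-by: bob"]},
    {"author": "bob",   "trailers": []},
    {"author": "carol", "trailers": ["Reviewed-by: alice", "Tested-by: bob"]},
    {"author": "alice", "trailers": ["Signed-off-by: alice"]},
]

def review_coverage(commits):
    """Fraction of commits carrying at least one Reviewed-by trailer."""
    reviewed = sum(
        any(t.startswith("Reviewed-by:") for t in c["trailers"])
        for c in commits
    )
    return reviewed / len(commits)

coverage = review_coverage(commits)
```

    Metrics of this kind, computed retrospectively over a project's full history, are one way such an empirical, evidence-based foundation for certification arguments could be assembled.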

    Pedagogical Innovation in New Learning Communities: An In-depth Study of Twelve Online Learning Communities

    The main aim of this study is to collect evidence on the learning innovation emerging in online communities and to draw conclusions on the lessons learnt and on emerging models and features that could eventually be transferred to Education and Training systems to support Lifelong Learning, innovation and change in Europe. The results presented are based on an in-depth analysis of the pedagogical and organisational innovation emerging from twelve online communities, each belonging to one (or more) of the following categories: organization-driven communities; production-driven communities; topic-driven communities; socially driven communities.
    JRC.DDG.J.4 - Information Society

    Achieving Justice Through Public Participation: Measuring the Effectiveness of New York's Enhanced Public Participation Plan for Environmental Justice Communities

    Public participation is at the heart of democracy and of the environmental justice movement. Most state-level environmental justice policies and regulations focus on improving public participation within administrative processes to ensure that communities have a voice in the environmental decisions that affect them. New York has adopted an environmental justice policy that follows this model and requires enhanced notice, accessible comment opportunities, and improved access to technical information for new major environmental permits issued to facilities proposed in low-income or minority communities. However, New York's policy, like other state participation-focused environmental justice policies, has yet to be evaluated. To address that gap, I develop six theoretically-tethered criteria of effective public participation (access, fair process, voice, dialogue, recognition, and legitimacy) through a review of the literature on relevant democracy and justice theory, particularly procedural justice and justice as recognition; the role of the administrative state; and the theory and history of environmental justice. I then refine and ground those measures through interviews with community activists, environmental justice advocates and regulatory agency staff and apply the grounded measures in a comparative case study of permitting processes that did or did not trigger New York's environmental justice policy. The data, collected through participant interviews, document review, and survey work and analyzed qualitatively, suggest that New York's policy improves the external framework for participation with marked improvements in objective measures of access and, to a lesser extent, social recognition. The policy creates the space for improvements in voice, dialogue, and institutional recognition, but does not ensure the internal changes to the decision-making structure necessary to guarantee these improvements.
    Organizational culture of the applicant and/or agency, community identity and composition, source and content of notice, and public meeting structure may also have significant impacts on the effectiveness of public participation and merit further investigation.

    E-mentoring in Online Programming Communities : Opportunities, Challenges, Activities and Strategies

    Mentoring is known to effectively improve professional development. Advancements in information technology have positively impacted the process of mentoring through a more technology-mediated form known as e-mentoring or online mentoring. Online mentoring has had a particularly strong effect in improving learning opportunities in online programming communities, where mentees and mentors interact with each other from around the world in a mutually beneficial learning experience and collaboration. Yet the lack of a coherent understanding of the different characteristics (e.g., opportunities, challenges, activities, and strategies employed by mentees and mentors) of e-mentoring in online programming communities, and the lack of knowledge about the mentoring aspects of applying e-mentoring on different types of online programming platforms, inhibit an informed design or redesign of systems for e-mentoring in such communities. With a specific focus on those shortcomings, this research presents several empirical studies to advance the understanding of e-mentoring in online programming communities. First, we investigate the emerging opportunities and challenges faced by e-mentoring in online programming communities. Next, we identify and classify the e-mentoring activities carried out in this context. We then investigate the strategies employed to overcome e-mentoring challenges in online programming communities. Finally, based on our findings, this dissertation proposes a conceptual framework for augmenting socio-technical systems with e-mentoring. The dissertation also provides comprehensive contributions that enhance the understanding of e-mentoring in online communities and offers improvement recommendations (e.g., encouraging academic members to offer their services to online communities as part of their university work, using chatbots for automated responses to queries, and improving features to manage e-mentoring tasks and projects).

    Naturalistic Allocation: Working Memory and Cued-Attention Effects on Resource Allocation

    The allocation of resources is a ubiquitous decision making task. In the workplace, resource allocation, in the context of multiple task and/or work demands, is significantly related to task performance, as the commitment of more resources generally results in better performance on a given task. I apply both resource and naturalistic decision making theories to better understand resource allocation behavior and related performance. Resource theories suggest that individuals have limited cognitive capacity: limited capacity may limit performance in dynamic situations, such as those that involve the allocation of attentional resources. Additionally, the naturalistic decision making framework highlights the role of context cues as key aids to effective decision making. Therefore, I proposed an interactive relationship between working memory, a cognitive resource, and allocation cue, a contextual variable. Specifically, I conducted an experimental study in which I manipulated allocation cue type and examined the individual difference of working memory on allocation behavior and task performance. I hypothesized a moderated-mediated effect including cue type, working memory, and proportion of time on task on task performance (i.e., accuracy and efficiency). The effect of cue type on both the proportion of time spent on task and task performance was expected to be contingent on working memory capacity. As working memory increased, both time on task and performance were expected to increase for participants exposed to either goal- or both task- and goal-related cues, as opposed to task cues. Conversely, as working memory decreased, both time on task and performance were expected to increase for participants exposed to task cues in comparison to those exposed to either goal- or both task- and goal-related cues. Additionally, as proportion of time on task increased, performance was expected to improve.
    Results from this study did not support the hypothesized moderated-mediated effect. However, results indicated an effect of task cue on task efficiency. Specifically, individuals cued to allocate their attention based on stimulus-related features (i.e., task cue) completed the task more quickly. Theoretical and practical implications, as well as study limitations, are discussed in detail.

    A QUALITATIVE STUDY OF WOODLAND COUNTY PUBLIC SCHOOLS’ SOCIAL MEDIA POLICY FOR EMPLOYEES: ITS DEVELOPMENT, INTERPRETATION, AND SIGNIFICANCE

    The popularity of social networking sites on the World Wide Web has exploded during the past two decades. As more and more K-12 public school teachers choose to actively participate on social networking sites, school leaders and school boards face the increasingly difficult decision about whether or not to enact policies which will enable them to discipline teachers for their online behavior. The purpose of this qualitative case study was to explore the development, interpretation, and significance of one such policy.