
    Making the third mission possible: investigating academic staff experiences of community-engaged learning

    Community-engaged learning (CEL) is an intentional and structured pedagogical approach that links learning objectives with community needs. Most of the existing literature centres on service-learning practice in the United States. To date, there have been no in-depth studies of the experiences and perspectives of practitioners who engage with CEL in a UK, or more specifically a Scottish, higher education context. The thesis presents data from a qualitative study combining documentary analysis of government and institutional literature with 23 in-depth interviews with university practitioners, managers, and leaders. I explored factors that influence the perspectives and experiences of CEL practitioners at one Scottish, research-intensive Russell Group university. Adopting a research ontology informed by Margaret Archer's morphogenetic, critical realist approach, I analyse the data through the lens of an emancipatory neo-Aristotelian virtue-ethics framework and argue that CEL practice at this university contributes to what the evidence suggests is its ultimate purpose: promoting and cultivating individual flourishing and emancipatory critical thinking for the common good. Focusing on university-community engagement, the findings suggest inconsistencies between how the university is portrayed in public-facing literature and the level of institutional support individual CEL practitioners report receiving. I conclude that failure to adequately support CEL activity in the future could negatively impact the sustainability and quality of community engagement at Alba University.

    TANDEM: taming failures in next-generation datacenters with emerging memory

    The explosive growth of online services, reaching unforeseen scales, has made modern datacenters highly prone to failures. Taming these failures hinges on fast and correct recovery, minimizing service interruptions. To remain recoverable, applications must take additional measures to maintain a recoverable state of data and computation logic during failure-free execution. However, these precautionary measures have severe implications for performance, correctness, and programmability, making recovery incredibly challenging to realize in practice. Emerging memory, particularly non-volatile memory (NVM) and disaggregated memory (DM), offers a promising opportunity to achieve fast recovery with maximum performance. However, incorporating these technologies into datacenter architecture presents significant challenges: their distinct architectural attributes, differing significantly from traditional memory devices, introduce new semantic challenges for implementing recovery, complicating correctness and programmability. Can emerging memory enable fast, performant, and correct recovery in the datacenter? This thesis aims to answer this question while addressing the associated challenges. When architecting datacenters with emerging memory, system architects face four key challenges: (1) how to guarantee correct semantics; (2) how to efficiently enforce correctness with optimal performance; (3) how to validate end-to-end correctness, including recovery; and (4) how to preserve programmer productivity (programmability).
This thesis addresses these challenges through the following approaches: (a) defining precise consistency models that formally specify correct end-to-end semantics in the presence of failures (consistency models also play a crucial role in programmability); (b) developing new low-level mechanisms to efficiently enforce the prescribed models given the capabilities of emerging memory; and (c) creating robust testing frameworks to validate end-to-end correctness and recovery. We start our exploration with non-volatile memory (NVM), which offers fast persistence capabilities directly accessible through the processor's load-store (memory) interface. Notably, these capabilities can be leveraged to enable fast recovery for Log-Free Data Structures (LFDs) while maximizing performance. However, due to the complexity of modern cache hierarchies, data rarely persists in any specific order, jeopardizing recovery and correctness. Recovery therefore needs primitives that explicitly control the order of updates to NVM (known as persistency models). We outline the precise specification of a novel persistency model, Release Persistency (RP), which provides a consistency guarantee for LFDs on what remains in non-volatile memory upon failure. To efficiently enforce RP, we propose a novel microarchitectural mechanism, lazy release persistence (LRP). Using standard LFD benchmarks, we show that LRP achieves fast recovery while incurring minimal overhead on performance. We continue our discussion with memory disaggregation, which decouples memory from traditional monolithic servers, offering a promising pathway to very high availability in replicated in-memory data stores. Achieving such availability hinges on transaction protocols that can efficiently handle recovery in this setting, where compute and memory are independent. However, there is a challenge: disaggregated memory (DM) fails to work with RPC-style protocols, mandating one-sided transaction protocols.
Exacerbating the problem, one-sided transactions expose critical low-level ordering to architects, posing a threat to correctness. We present a highly available transaction protocol, Pandora, specifically designed to achieve fast recovery in disaggregated key-value stores (DKVSes). Pandora is the first one-sided transactional protocol that ensures correct, non-blocking, and fast recovery in a DKVS. Our experimental implementation artifacts demonstrate that Pandora achieves fast recovery and high availability while causing minimal disruption to services. Finally, we introduce a novel targeted litmus-testing framework, DART, to validate the end-to-end correctness of transactional protocols with recovery. Using DART's targeted testing capabilities, we have found several critical bugs in Pandora, highlighting the need for robust end-to-end testing methods in the design loop to iteratively fix correctness bugs. Crucially, DART is lightweight and black-box, requiring no intervention from programmers.
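The two ideas above, ordering persists so that recovery always finds a consistent state, and validating that claim by injecting crashes, can be sketched in a toy simulation. Everything below (the `ToyNVM` class, the append logic, the crash loop) is invented for illustration; the thesis's actual mechanisms operate at the microarchitecture level, not in Python.

```python
# Toy model: NVM as a dict that only retains flushed writes, with a
# simulated power failure after a configurable number of flushes.
class ToyNVM:
    def __init__(self, crash_after):
        self.persisted = {}        # survives the crash
        self.cache = {}            # volatile; lost on crash
        self.crash_after = crash_after
        self.flushes = 0

    def store(self, key, value):
        self.cache[key] = value    # write lands in the cache hierarchy

    def flush(self, key):          # explicit persist (flush + fence)
        if self.flushes >= self.crash_after:
            raise RuntimeError("simulated power failure")
        self.persisted[key] = self.cache[key]
        self.flushes += 1

def append(nvm, slot, value):
    # Log-free append: persist the payload first, then publish it by
    # persisting the tail index -- the release-style ordering constraint.
    nvm.store(f"data{slot}", value)
    nvm.flush(f"data{slot}")
    nvm.store("tail", slot + 1)
    nvm.flush("tail")              # commit point

def consistent(persisted):
    # Recovery invariant: every slot below the persisted tail holds data.
    tail = persisted.get("tail", 0)
    return all(f"data{i}" in persisted for i in range(tail))

def litmus():
    # Crash at every possible point during two appends, then check that
    # whatever survived still satisfies the recovery invariant.
    results = []
    for crash_after in range(5):
        nvm = ToyNVM(crash_after)
        try:
            append(nvm, 0, "a")
            append(nvm, 1, "b")
        except RuntimeError:
            pass                   # power lost; fall through to recovery
        results.append(consistent(nvm.persisted))
    return results
```

Here `litmus()` returns all-True; if the two flushes in `append` were swapped, a crash between them could persist a tail pointing at unwritten data, and the loop would flag it.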

    Understanding the complexities of supporting children’s career aspirations and preparedness for the changing world of work

    Aspiring to and preparing for a career is becoming an increasingly complex activity for children and adolescents. A young person entering the job market is now more likely to experience multiple career transitions. Recent technological advances are making many routine and some non-routine jobs more susceptible to automation, as well as changing the skill and task requirements within occupations. These changing career conditions are raising complexities for those who support children’s and adolescents’ career aspirations and preparedness. Career theories and interventions incorporating person-occupation matching approaches may increasingly encounter problems as it may become more difficult to explore rapidly evolving occupational requirements with young people and achieve good-fitting or sustained matches between their career aspirations/choices and future job opportunities. Possible mismatches could mean that more young people experience unemployment and/or various opportunity costs from pursuing or failing to adequately prepare for careers affected by automation. To contribute to an enhanced understanding of these problems, this thesis investigated the complexities of supporting children’s and adolescents’ career aspirations and preparedness for the changing world of work. To understand different aspects of the problem, this thesis reports three consecutive and interrelated studies carried out using a mixed methods design. These studies were conducted to address the following overarching research question: what are the complexities of supporting children’s career aspirations and preparedness for automation and job change? The research studies were informed by Social Cognitive Career Theory. 
First, to estimate the automation-related career risks different groups of school students may encounter over the coming decades, an analysis was conducted using large-scale survey data on primary and secondary school students' career aspirations together with probability statistics on job automation. Results revealed that being an adolescent, being male, coming from a lower-income group, and parental occupation were each associated with a greater likelihood of aspiring to an occupation at higher risk of automation. While the results from Study One highlighted possible automation-related risks for school children and different subgroups, these risks are potentially nuanced, affecting some higher-status and non-routine occupations as well as routine roles. Given this emerging complexity, it was important to review how recent career aspiration interventions have approached the changing career conditions, in order to critically evaluate a range of possible approaches to the problem. The second study comprised a systematic review of career aspiration intervention studies involving children (aged 5-18) to gain insights and identify gaps in how recent intervention approaches have or have not addressed job change. Review findings showed that the interventions often focused on select demographic groups and job sectors, with STEM occupations, females, and adolescents targeted most frequently. The intervention objectives and learning content largely did not address changes within occupations or job markets. Because of the limited approaches to addressing job change identified in Study Two, along with the automation-related career risks estimated in Study One, there was reason to explore how a contemporary career education provision could address automation and job change with children.
Considering the context-specific complexities involved in supporting children's preparedness for automation and job change, it was important to examine the perspectives of stakeholders who contribute to children's career education to reveal possible conceptual issues and the practical opportunities and challenges they encounter. Focusing on the Scottish career education system, a case study was conducted using a thematic analysis of career documentation, a focus group, and interviews with career policymakers, practitioners, and a sample of primary and secondary schoolteachers. Findings revealed that the career education stakeholders conceptualised automation as creating new occupations rather than resulting in the mass displacement of jobs. However, despite recognising that specific job types were becoming more susceptible to automation and that different groups of children tend to aspire to certain types of occupations, stakeholders did not infer that automation may contribute to differential career impacts across groups. Several practical challenges were raised by stakeholders, including managing some children's anxiety due to future career uncertainty. Consistent with insights from Study Two, Study Three findings showed stakeholders focused on general career skills and adaptability without also exploring the reasons and principles underlying automation and job change. Fostering this meta-understanding of job change could aid young people in their career preparedness and decision making by enabling them to discern likely changes to occupations and job markets. After synthesising findings from the three studies, recommendations for advancing career theory and practice were provided. A key contribution of this research was highlighting potential limitations of aspiration-occupation matching theories and interventions by identifying automation-related career risks and supplementary approaches to address the nuanced changes within occupations and job markets.
In sum, this thesis revealed how automation and job change may serve as both an environmental barrier and a potential source of inspiration for children as they pursue and prepare for their future careers.

    Resource-aware scheduling for 2D/3D multi-/many-core processor-memory systems

    This dissertation addresses the complexities of 2D/3D multi-/many-core processor-memory systems, focusing on two key areas: enhancing timing predictability in real-time multi-core processors and optimizing performance within thermal constraints. The integration of an increasing number of transistors into compact chip designs, while boosting computational capacity, presents challenges in resource contention and thermal management. The first part of the thesis improves timing predictability. We enhance shared cache interference analysis for set-associative caches, advancing the calculation of Worst-Case Execution Time (WCET). This development enables accurate assessment of cache interference and the effectiveness of partitioned schedulers in real-world scenarios. We introduce TCPS, a novel task- and cache-aware partitioned scheduler that optimizes cache partitioning based on task-specific WCET sensitivity, leading to improved schedulability and predictability. Our research explores various cache and scheduling configurations, providing insights into their performance trade-offs. The second part focuses on thermal management in 2D/3D many-core systems. Recognizing the limitations of Dynamic Voltage and Frequency Scaling (DVFS) in S-NUCA many-core processors, we propose synchronous thread migrations as a thermal management strategy. This approach culminates in the HotPotato scheduler, which balances performance and thermal safety. We also introduce 3D-TTP, a transient temperature-aware power budgeting strategy for 3D-stacked systems, reducing the need for Dynamic Thermal Management (DTM) activation. Finally, we present 3QUTM, a novel method for 3D-stacked systems that combines core DVFS and memory bank Low Power Modes with a learning algorithm, optimizing response times within thermal limits. This research contributes significantly to enhancing performance and thermal management in advanced processor-memory systems.
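The cache-partitioning idea, giving each task a share of the cache sized by how sensitive its WCET is to cache space, can be sketched with a greedy heuristic. The heuristic and the WCET curves below are hypothetical illustrations, not the TCPS algorithm itself.

```python
def partition_cache(wcet_curves, total_ways):
    """Greedily hand out cache ways to whichever task's WCET drops most.

    wcet_curves[t][w] gives task t's WCET when allotted w ways (index 0
    is unused); total_ways is the number of ways in the shared cache.
    """
    alloc = {t: 1 for t in wcet_curves}            # each task needs >= 1 way
    for _ in range(total_ways - len(wcet_curves)):
        # marginal WCET reduction from one extra way, per task
        gains = {
            t: wcet_curves[t][alloc[t]] - wcet_curves[t][alloc[t] + 1]
            for t in wcet_curves
            if alloc[t] + 1 < len(wcet_curves[t])
        }
        if not gains:
            break
        best = max(gains, key=gains.get)           # most cache-sensitive task
        alloc[best] += 1
    return alloc

# Hypothetical WCET curves (cycles) for two tasks sharing a 4-way cache:
# task A is cache-sensitive, task B barely benefits from extra ways.
curves = {"A": [None, 100, 60, 50, 48],
          "B": [None, 80, 75, 72, 71]}
```

With these numbers the extra ways go to the cache-sensitive task A, illustrating why partitioning by WCET sensitivity can improve schedulability over an even split.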

    Research Assessment Exercise: Report 2023: International evaluation of research at the University of Vaasa

    The University of Vaasa is a business-oriented and multidisciplinary science university established in 1968. The university’s strategy focuses on three areas of research: management and change, finance and economic decision-making, and energy and sustainable development. It highlights multidisciplinary research with strong disciplinary knowledge integrated through research platforms to support solving important global challenges. The core mission is to advance new knowledge and to “Energise Business and Society.” The University of Vaasa has a core faculty of 584 and 5,203 students with 190 international students and 296 PhD students. International accreditations, unique research infrastructure, and partnerships with global businesses and organisations make the University of Vaasa a trusted and valued partner within both regional and international innovation ecosystems. The Universities Act (Section 87. Evaluation (Amendment 1302/2013)) stipulates that universities must evaluate their research activities. In line with the strategy of the University of Vaasa, the university evaluates its research activities every five years in order to strengthen the quality of the research internationally, to advance academic and societal impacts of the research, and to further develop the research activities and environment. The previous research evaluations were carried out in 2010 and in 2015. This third research evaluation covered research activities from 2015 to 2020. Diversity, meaningfulness, and focus on future were important features of the research assessment exercise (RAE). The RAE was carried out as a multilevel and multidimensional evaluation targeting research environment, research cooperation and funding, publications, and scientific activities including societal impact. In addition to research groups and the university as a whole, it focused on schools and platforms. 
The evaluation material and the expert panels' interviews thus covered three different levels of the university organisation. A Steering Committee consisting of members of the Research Council of the University of Vaasa (2021–2023) was nominated to support and guide the research evaluation. The RAE Univaasa 2022 followed practices of responsible evaluation. Engagement of the research units and researchers was an important aspect of the evaluation process. The evaluation team designed, organised, and implemented the different phases of the RAE in collaboration with the heads of the schools, platforms, and research group leaders. All evaluated units received basic summaries of their research output and bibliometric reports before preparing their self-evaluation reports. The material and the bibliometric reports aimed to provide the units with tools for self-reflection and further development of their research. In addition to the CWTS analysis prepared by Leiden University, SciVal analyses on Scopus publications were performed for each unit by the Tritonia Academic Library. Bibliometric analyses also included results from an AI analysis of the themes of open access publications (OSUVA, 2018-2021). The external evaluation was performed by five panels of independent scientific experts. Four of the panels were discipline-specific (based on the schools' disciplines). These school-based panels were asked to provide written comments comparing each research group's research to the international and national level of research in the respective field. Based on the research group level evaluations, each school-based panel was asked to offer an overall assessment of the school's research activities and quality of research. A separate team of panellists was responsible for the assessment of the three research platforms.
The University Panel, consisting of the panel chair and the chairs of the school-based panels, was asked to provide an integrating evaluation of the quality of research activities and environment at the University of Vaasa and to offer recommendations for how the university should develop its research. The results of the assessment and the expert panels' reports and recommendations will inform the strategic development of research within the university from 2023 onwards. The evaluation indicated that several research groups are currently at a high international level. The areas represented at the University of Vaasa are ones where excellent researchers have many possibilities. The societal impact of research, the industrial cooperation with regional businesses, and the wider interaction with society work very well at the University of Vaasa. The flexibility of this cooperation seems to be far greater than in many other universities. Many of the projects contribute clearly to the research and the education of the university and provide useful information for the companies the research groups partner with. However, building international research capacity will remain challenging. This is partly a product of the size of the university and of the research groups, most of which are relatively small and rely on a small number of high-performing professors. The international experts gave several recommendations on how to improve the quality of research at the University of Vaasa. Externally funded projects that support the university's aim to become an international research university should be encouraged. The experts suggested that the strategy be augmented with more concrete goals on research focus, quality, and volume. The implementation plan should specify, at some level, the areas or modes of operation in which the university wants to excel, and how this excellence is going to be measured.
Recruitment should be prioritised based on the strategy of the university and the availability of excellent people. The university should also consider using international Professors of Practice and inviting more international Visiting Professors. Moreover, increased possibilities for faculty and PhD students to engage in international activities could boost production of top-level research. The panels also assessed the role of the evaluated units and the internal cooperation within the university. The research groups vary greatly not only in size but also in cohesion. The panellists saw that, in terms of organisation, some groups were tight clusters, while other groups did not seem to have a clear structure. They considered that it would be very useful if each researcher had an intellectual home base at the university. The panellists perceived the relationship between research groups and platforms to be unclear. The model was considered complicated relative to the size of the schools and the university. The panellists suggested reviewing the role and form of the platforms. In particular, they suggested that, in serving the schools and their research groups, the platforms should have a supporting role instead of trying to form research identities of their own. However, the panellists also considered that there is no definite need to have all the platforms operate in the same way.

    Auditable and performant Byzantine consensus for permissioned ledgers

    Permissioned ledgers allow users to execute transactions against a data store and retain proof of their execution in a replicated ledger. Each replica verifies the transactions' execution and ensures that, in perpetuity, a committed transaction cannot be removed from the ledger. Unfortunately, this is not guaranteed by today's permissioned ledgers, which can be re-written if an arbitrary number of replicas collude. In addition, the transaction throughput of permissioned ledgers is low because they do not take advantage of multi-core CPUs and hardware accelerators, hampering real-world deployments. This thesis explores how permissioned ledgers and their consensus protocols can be made auditable in perpetuity, even when all replicas collude and re-write the ledger. It also addresses how Byzantine consensus protocols can be changed to increase the execution throughput of complex transactions. This thesis makes the following contributions: 1. Always-auditable Byzantine consensus protocols. We present a permissioned ledger system that can assign blame to individual replicas regardless of how many of them misbehave. This is achieved by signing and storing consensus protocol messages in the ledger and providing clients with signed, universally verifiable receipts. 2. Performant transaction execution with hardware accelerators. Next, we describe a cloud-based ML inference service that provides strong integrity guarantees while staying compatible with current inference APIs. We change the Byzantine consensus protocol to execute machine learning (ML) inference computation on GPUs, optimizing the throughput and latency of ML inference. 3. Parallel transaction execution on multi-core CPUs. Finally, we introduce a permissioned ledger that executes transactions in parallel on multi-core CPUs. We separate the execution of transactions between the primary and secondary replicas.
The primary replica executes transactions on multiple CPU cores and creates a dependency graph of the transactions, which the backup replicas use to execute those transactions in parallel.
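The dependency-graph scheme described above can be sketched as follows. The conflict rule (two transactions conflict when one writes a key the other reads or writes) and the wave-based replay are illustrative assumptions, not the ledger's actual implementation.

```python
def build_dependency_graph(txns):
    """txns: list of (read_set, write_set) in primary execution order.
    Returns deps[i]: the set of earlier transactions that i must follow."""
    deps = {i: set() for i in range(len(txns))}
    for i, (reads_i, writes_i) in enumerate(txns):
        for j in range(i):
            reads_j, writes_j = txns[j]
            # conflict: write-write, write-read, or read-write overlap
            if writes_i & writes_j or writes_i & reads_j or reads_i & writes_j:
                deps[i].add(j)
    return deps

def parallel_waves(deps):
    """Group transactions into waves; a backup replica may execute all
    transactions within one wave concurrently."""
    level = {}
    for i in sorted(deps):                         # original commit order
        level[i] = max((level[j] + 1 for j in deps[i]), default=0)
    waves = {}
    for i, lv in level.items():
        waves.setdefault(lv, []).append(i)
    return [waves[lv] for lv in sorted(waves)]

# T0 writes x; T1 reads x (must follow T0); T2 touches unrelated keys.
txns = [({"a"}, {"x"}), ({"x"}, {"y"}), ({"b"}, {"z"})]
```

Here `parallel_waves(build_dependency_graph(txns))` yields `[[0, 2], [1]]`: T0 and T2 replay concurrently, while T1 waits for T0's write to x.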

    A BIM-GIS Integrated Information Model Using Semantic Web and RDF Graph Databases

    In recent years, 3D virtual indoor and outdoor urban modelling has become an essential geospatial information framework for civil and engineering applications such as emergency response, evacuation planning, and facility management. Building multi-sourced and multi-scale 3D urban models is in high demand among architects, engineers, and construction professionals to achieve these tasks and provide relevant information to decision support systems. Spatial modelling technologies such as Building Information Modelling (BIM) and Geographical Information Systems (GIS) are frequently used to meet such demands. However, sharing data and information between these two domains is still challenging, and the existing semantic and syntactic strategies for inter-communication between BIM and GIS do not fully support rich semantic and geometric information exchange from BIM to GIS or vice versa. This research study proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resource Description Framework (RDF) graph databases. The suggested solution's originality and novelty come from combining the advantages of integrating BIM and GIS models into a semantically unified data model using a semantic framework and ontology engineering approaches. The new model is named the Integrated Geospatial Information Model (IGIM). It is constructed in three stages. The first stage generates BIMRDF and GISRDF graphs from the BIM and GIS datasets. The second integrates the BIM and GIS semantic models into a unified IGIMRDF graph. The third filters information from the unified IGIMRDF graph using a graph query language and graph data analytics tools. The linkage between BIMRDF and GISRDF is completed through SPARQL endpoints, with queries defined over elements and entity classes that carry similar or complementary information from properties, relationships, and geometries identified by an ontology-matching process during model construction.
The resulting model (or sub-model) can be managed in a graph database system and used as a backend data tier serving web services that feed a front-end, domain-oriented application. A case study was designed, developed, and tested using the semantic integrated information model to validate the newly proposed solution, its architecture, and its performance.
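The three construction stages can be illustrated with a minimal pure-Python model of RDF triples. The triples, prefixes, and the `igim:sameFeatureAs` link below are invented for illustration; the thesis uses actual RDF graph databases and SPARQL rather than Python sets.

```python
# Stage 1: BIMRDF and GISRDF as sets of (subject, predicate, object).
bim_rdf = {
    ("bim:Wall_12", "rdf:type", "bim:Wall"),
    ("bim:Wall_12", "bim:globalId", "GUID-42"),
}
gis_rdf = {
    ("gis:Feature_7", "rdf:type", "gis:BuildingPart"),
    ("gis:Feature_7", "gis:sourceId", "GUID-42"),
    ("gis:Feature_7", "gis:coords", "(55.95, -3.19)"),
}

# Stage 2: union the graphs, then link entities sharing an identifier
# (the ontology-matching step reduced to an exact-match join).
igim = bim_rdf | gis_rdf
by_id = {}
for s, p, o in igim:
    if p in ("bim:globalId", "gis:sourceId"):
        by_id.setdefault(o, []).append(s)
for subjects in by_id.values():
    for a in subjects:
        for b in subjects:
            if a != b:
                igim.add((a, "igim:sameFeatureAs", b))

# Stage 3: query the unified graph, e.g. GIS coordinates of BIM walls
# (what a SPARQL SELECT over the merged model would express).
def coords_of_walls(graph):
    walls = {s for s, p, o in graph if p == "rdf:type" and o == "bim:Wall"}
    linked = {o for s, p, o in graph
              if p == "igim:sameFeatureAs" and s in walls}
    return {s: o for s, p, o in graph if p == "gis:coords" and s in linked}
```

Calling `coords_of_walls(igim)` recovers the GIS coordinates for the BIM wall through the cross-domain link, which is the essence of the filtered IGIM sub-model.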

    Microcredentials to support PBL
