Optimisation for Optical Data Centre Switching and Networking with Artificial Intelligence
Cloud and cluster computing platforms have become standard across almost every domain of business, and their scale is quickly approaching that of an entire warehouse of servers. However, the tier-based, opto-electronically packet-switched network infrastructure that is standard across these systems gives rise to several scalability bottlenecks, including resource fragmentation and high energy requirements. Experimental results show that optical circuit switched networks are a promising alternative that could avoid these bottlenecks.
However, optimality challenges arise at realistic commercial scales. Exhaustive optimisation techniques are not applicable to problems at the scale of cloud computer networks, and expert-designed heuristics are performance-limited and typically biased in their design; artificial intelligence can discover more scalable and better-performing optimisation strategies.
This thesis demonstrates these benefits through experimental and theoretical work spanning the component-level, system-level and commercial optimisation problems which stand in the way of practical cloud-scale computer network systems. Firstly, optical components are optimised for fast gating and are demonstrated in a proof-of-concept switching architecture for optical data centres with better wavelength and component scalability than previous demonstrations. Secondly, network-aware resource allocation schemes for optically composable data centres are learnt end-to-end with deep reinforcement learning and graph neural networks, requiring fewer networking resources to achieve the same resource efficiency as conventional methods. Finally, a deep reinforcement learning based method for optimising PID-control parameters is presented which generates tailored parameters for unseen devices. This method is demonstrated on a market-leading optical switching product based on piezoelectric actuation, where switching speed is improved with no compromise to optical loss and the manufacturing yield of actuators is improved. The method has been licensed to and integrated within the manufacturing pipeline of the company concerned; as such, crucial public and private infrastructure utilising these products will benefit from this work.
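As background to the PID result above, here is a minimal discrete-time PID controller sketch in Python. It is purely illustrative: the gains kp, ki and kd, the setpoint, and the first-order actuator model are hypothetical stand-ins, and the thesis's deep reinforcement learning tuner that generates such gains for unseen devices is not reproduced here.

def pid_step(setpoint, measured, state, kp, ki, kd, dt):
    """One PID update; state carries the integral and the previous error."""
    integral, prev_error = state
    error = setpoint - measured
    integral += error * dt                   # accumulate the I term
    derivative = (error - prev_error) / dt   # finite-difference D term
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy closed loop against an assumed first-order actuator (time constant 10 ms).
# prev_error starts at the initial error to avoid a derivative kick.
position, state, dt, tau = 0.0, (0.0, 1.0), 1e-3, 1e-2
for _ in range(500):
    drive, state = pid_step(1.0, position, state, kp=2.0, ki=50.0, kd=0.001, dt=dt)
    position += (drive - position) * dt / tau

print(f"settled position: {position:.3f}")  # approaches the setpoint 1.0

In this framing, improving switching speed without compromising optical loss amounts to finding gains that minimise settling time while avoiding overshoot, which is the search the deep reinforcement learning method automates per device.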
A Survey of Graph Pre-processing Methods: From Algorithmic to Hardware Perspectives
Graph-related applications have experienced significant growth in academia and industry, driven by the powerful representation capabilities of graphs. However, efficiently executing these applications faces various challenges, such as load imbalance and random memory access. To address these challenges, researchers have proposed various acceleration systems, including software frameworks and hardware accelerators, all of which incorporate graph pre-processing (GPP). GPP serves as a preparatory step before the formal execution of applications, involving techniques such as sampling and reordering. However, GPP execution often remains overlooked, as the primary focus is directed towards enhancing graph applications themselves. This oversight is concerning, especially considering the explosive growth of real-world graph data, where GPP becomes essential and can even dominate system running overhead. Furthermore, GPP methods exhibit significant variations across devices and applications due to high customization. Unfortunately, no comprehensive work systematically summarizes GPP. To address this gap and foster a better understanding of GPP, we present a comprehensive survey dedicated to this area. We propose a double-level taxonomy of GPP, considering both algorithmic and hardware perspectives. By listing relevant works, we illustrate our taxonomy and conduct a thorough analysis and summary of diverse GPP techniques. Lastly, we discuss challenges in GPP and potential future directions.
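To make one of these pre-processing techniques concrete, the sketch below shows degree-based vertex reordering, a common GPP step: vertices are relabelled in descending degree order so that frequently accessed (high-degree) vertices end up close together in memory. The function and the toy graph are hypothetical illustrations, not examples taken from the survey.

def degree_reorder(edges, num_vertices):
    """Relabel vertices by descending degree, a common GPP reordering."""
    degree = [0] * num_vertices
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Old ids sorted so the highest-degree vertex receives new id 0.
    order = sorted(range(num_vertices), key=lambda v: -degree[v])
    new_id = {old: new for new, old in enumerate(order)}
    return [(new_id[u], new_id[v]) for u, v in edges]

# Toy usage: in a star graph centred on vertex 3, vertex 3 becomes vertex 0.
print(degree_reorder([(3, 0), (3, 1), (3, 2)], 4))  # [(0, 1), (0, 2), (0, 3)]

Reorderings like this improve cache locality on skewed (power-law) graphs, which is one reason GPP cost matters: applied naively at scale, the pre-processing itself can dominate end-to-end running time.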
Agency and professionalism in translation and interpreting: navigating conflicting role identities among translation and interpreting practitioners working for local government in Japan
This thesis investigates the ethical choices of Coordinators for International Relations (CIRs), a group of largely non-professional translators and interpreters working for local government bodies in Japan. In addition to translation and interpreting (T&I), CIRs are tasked with engaging in intercultural relations, "internationalising" their local areas, and working with the public as members of the civil service. The thesis examines the different roles and particular circumstances of CIRs to describe and explain how they make ethical decisions in T&I.
This was explored using an ethnographic methodology featuring both traditional and online sites. Specifically, data was collected through participant observation of an internet forum created by CIRs, through online surveys, and through focus groups and interviews held with CIRs in Japan. Analysis of forum and survey data illuminated the ethical struggles experienced by CIRs in T&I and indicated that professionalism and agency were of particular concern for these CIRs when dealing with questions of ethics. Through focus groups, more detailed data was elicited surrounding the ethical struggles faced by CIRs, with a particular focus on professionalism and agency. Forum and focus group data were combined to create a set of hypothetical ethical scenarios discussed during semi-structured interviews held to understand the factors that influence CIR decision making.
A theoretical framework combining Agency Theory (Mitnick, 1975) and Role Identity Theory (Stryker, 1968) was used to describe and explain CIR ethical decision making, foregrounding the CIRs' potential to effect change in their workplaces (agency) and the prioritisation afforded to the different roles with which they identify in their work (role identity). Ultimately, CIRs were most disposed to translate or interpret in a manner that they believed was in keeping with the wishes of their employers, based on the employers' superior ability to monitor and control the CIRs. However, in instances where CIRs operated with free will, their choices were the result of the complex structuring of the various identities that they had normalised within themselves.
Keywords: translation, interpreting, Coordinator for International Relations (CIR), Japan Exchange and Teaching (JET) Programme, agency, professionalism, role identity
Rethinking FPGA Architectures for Deep Neural Network applications
The prominence of machine learning-powered solutions has instituted an unprecedented trend of integration into virtually all applications, with a broad range of deployment constraints from tiny embedded systems to large-scale warehouse computing machines. While recent research confirms the advantages of using contemporary FPGAs to deploy or accelerate machine learning applications, especially where latency and energy consumption are strictly limited, their architectures, optimised before the machine learning era, remain a barrier to overall efficiency and performance.
Recognising this shortcoming, this thesis presents an architectural study aimed at solutions that unlock hidden potential in FPGA technology, primarily for machine learning algorithms. In particular, it shows how slight alterations to state-of-the-art architectures can significantly enhance FPGAs toward becoming more machine learning-friendly while maintaining near-promised performance for all other applications. Finally, it presents a novel systematic approach to deriving new block architectures, guided by design limitations and machine learning algorithm characteristics, through benchmarking.
First, through three modifications to Xilinx DSP48E2 blocks, an enhanced digital signal processing (DSP) block for important computations in embedded deep neural network (DNN) accelerators is described. Then, two tiers of modifications to the FPGA logic cell architecture are explained that deliver a variety of performance and utilisation benefits with only minor area overheads. Finally, with the goal of exploring this new design space in a methodical manner, a problem formulation involving computing nested loops over multiply-accumulate (MAC) operations is first proposed. A quantitative methodology for deriving efficient coarse-grained compute block architectures from benchmarks is then suggested, together with a family of new embedded blocks, called MLBlocks.
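As a reference for the problem formulation mentioned above, the snippet below spells out a generic loop nest over multiply-accumulate (MAC) operations, here computing a matrix product. It is an illustrative reconstruction of the general pattern, not the thesis's actual formulation or benchmark.

def mac_loop_nest(A, B, M, N, K):
    """Nested loops over MAC operations computing C = A x B."""
    C = [[0.0] * N for _ in range(M)]
    for i in range(M):              # output rows
        for j in range(N):          # output columns
            acc = 0.0
            for k in range(K):      # reduction dimension
                acc += A[i][k] * B[k][j]  # one multiply-accumulate
            C[i][j] = acc
    return C

# Toy usage with 2x2 matrices.
print(mac_loop_nest([[1, 2], [3, 4]], [[5, 6], [7, 8]], 2, 2, 2))  # [[19.0, 22.0], [43.0, 50.0]]

Coarse-grained compute blocks like the proposed MLBlocks target exactly this kind of loop nest, which is why the formulation is a useful lens for deriving block architectures from benchmarks.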
Examining the Relationships Between Distance Education Students’ Self-Efficacy and Their Achievement
This study aimed to examine the relationships between students' self-efficacy (SSE) and students' achievement (SA) in distance education. The instruments were administered to 100 undergraduate students at a distance university in Taiwan who work as migrant workers, while their SA scores were obtained from the university. Semi-structured interviews with 8 participants consisted of questions exploring the specific conditions of SSE and SA. The findings were as follows: there was a significantly positive correlation between targeted SSE (overall scales and general self-efficacy) and SA, and targeted students' self-efficacy effectively predicted their achievement, with general self-efficacy having the most significant influence. In the qualitative findings, four themes were extracted for students with lower self-efficacy but higher achievement: physical and emotional condition, teaching and learning strategy, positive social interaction, and intrinsic motivation. Moreover, three themes were extracted for students with moderate or higher self-efficacy but lower achievement: more time for leisure (not hard-working), less social interaction, and external excuses. Providing effective learning environments, social interactions, and teaching and learning strategies is suggested for distance education.
Application of knowledge management principles to support maintenance strategies in healthcare organisations
Healthcare is a vital service that touches people's lives on a daily basis by providing treatment and resolving patients' health problems through its staff. Human lives ultimately depend on the skilled hands of the staff and of those who manage the infrastructure that supports the daily operations of the service, which makes this a compelling subject for dedicated research. However, the UK healthcare sector is undergoing rapid changes, driven by rising costs, technological advancements, changing patient expectations, and increasing pressure to deliver sustainable healthcare. With the global rise in healthcare challenges, the need for sustainable healthcare delivery has become imperative. Sustainable healthcare delivery requires the integration of various practices that enhance the efficiency and effectiveness of healthcare infrastructural assets. One critical area that requires attention is the management of healthcare facilities.
Healthcare facilities are considered one of the core elements in the delivery of effective healthcare services, as shortcomings in the provision of facilities management (FM) services in hospitals may have far more drastic negative effects than in other types of buildings. An essential element in healthcare FM is the relationship between action and knowledge. With a full understanding of infrastructural assets, it is possible to improve and manage buildings, make them suitable to the needs of users, and ensure the functionality of the structure and its processes.
The premise of FM is that an organisation's effectiveness and efficiency are linked to the physical environment in which it operates, and that improving the environment can result in direct benefits in operational performance. The goal of healthcare FM is to support the achievement of the organisational mission and goals by designing and managing space and infrastructural assets in the best combination of suitability, efficiency, and cost. In operational terms, performance refers to how well a building contributes to fulfilling its intended functions.
Therefore, comprehensive deployment of efficient FM approaches is essential for ensuring quality healthcare provision while positively impacting overall patient experiences. In this regard, incorporating knowledge management (KM) principles into hospitals' FM processes contributes significantly to ensuring sustainable healthcare provision and the enhancement of patient experiences. Organisations implementing KM principles are better positioned to navigate the constantly evolving business ecosystem. Furthermore, KM is vital in process and service improvement, strategic decision-making, and organisational adaptation and renewal.
In this regard, KM principles can be applied to improve hospital FM, thereby ensuring sustainable healthcare delivery. Knowledge management assumes that organisations that manage their organisational and individual knowledge more effectively will cope more successfully with the challenges of the new business ecosystem. There is also the argument that KM plays a crucial role in improving processes and services, strategic decision-making, and the adaptation and renewal of an organisation. The goal of KM is to aid action, providing "a knowledge pull" rather than the information overload most people experience in healthcare FM. Other motivations for seeking better KM in healthcare FM include patient safety, evidence-based care, and cost efficiency as the dominant drivers. The strongest evidence for the success of such approaches exists at knowledge bottlenecks, such as infection prevention and control, working safely, compliance, automated systems and reminders, and recall based on best practices. The ability to cultivate, nurture and maximise knowledge at multiple levels and in multiple contexts is one of the most significant challenges for those responsible for KM. However, despite the potential benefits, the application of KM principles in hospital facilities is still limited. There is a lack of understanding of how KM can be effectively applied in this context, and few studies have explored the potential challenges and opportunities associated with implementing KM principles in hospital facilities for sustainable healthcare delivery.
This study explores the application of KM principles to support maintenance strategies in healthcare organisations. It also explores the challenges and opportunities, for healthcare organisations and FM practitioners, in operationalising a framework which draws out the interconnectedness between healthcare FM and KM. The study begins by defining healthcare FM and its importance in the healthcare industry. It then discusses the concept of KM and the different types of knowledge that are relevant in the healthcare FM sector. The study also examines the challenges that healthcare FM faces in managing knowledge and how the application of KM principles can help to overcome them. The study then explores the different KM strategies that can be applied in healthcare FM. The benefits of KM include improved patient outcomes, reduced costs, increased efficiency, and enhanced collaboration among healthcare professionals. Additionally, issues such as creating a culture of innovation, technology, and benchmarking are considered. Finally, a framework that integrates the essential concepts of KM in healthcare FM is presented and discussed.
The field of KM is introduced as a complex adaptive system with numerous possibilities and challenges. In this context, and in consideration of healthcare FM, five objectives have been formulated to achieve the research aim, including appraising the concept of KM and how knowledge is created, stored, transferred, and utilised in healthcare FM; evaluating the impact of organisational structure on job satisfaction; and exploring how cultural differences impact knowledge sharing and performance in healthcare FM organisations.
This study uses a combination of qualitative methods, namely meetings, observations, document analysis (internal and external), and semi-structured interviews, to uncover the subjective experiences and attitudes of healthcare FM employees and to understand the phenomenon within a real-world context. Open questions were used to allow probing where appropriate, facilitating KM development in the delivery and practice of healthcare FM.
The study describes the research methodology using the theoretical concept of the "research onion". The qualitative research was conducted in NHS acute and non-acute hospitals in Northwest England. Findings from the research revealed that, while the concept of KM has grown significantly in recent years, KM in healthcare FM has received little or no attention. The target population was fifty (five FM directors, five academics, five industry experts, ten managers, ten supervisors, five team leaders and ten operatives). These seven groups were purposively selected because they play a crucial role in KM enhancement in healthcare FM. Face-to-face interviews were conducted with participants based on their pre-determined availability. Of the target population of 50, 25 were successfully interviewed, at which point saturation was reached. Data collected from the interviews were coded and analysed using NVivo to identify themes and patterns related to KM in healthcare FM.
The study is divided into eight major sections. First, it discusses literature findings regarding healthcare FM and KM, including underlying trends in FM, KM in general, and KM in healthcare FM. Second, it establishes the study's methodology, introducing the five research objectives, questions and hypotheses; this chapter also introduces the literature on methodology elements, including philosophical views and inquiry strategies. The interview and data analysis chapters examine the feedback from the interviews. Lastly, a conclusion and recommendations summarise the research objectives and suggest further research.
Overall, this study highlights the importance of KM in healthcare FM and provides insights for healthcare FM directors, managers, supervisors, academics, researchers and operatives on effectively leveraging knowledge to improve patient care and organisational effectiveness.
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Low Power Memory/Memristor Devices and Systems
This reprint focusses on achieving low-power computation using memristive devices. It was designed as a convenient reference point: it contains a mix of techniques, from the fundamental manufacturing of memristive devices all the way to applications such as physically unclonable functions, and also covers perspectives on, e.g., in-memory computing, which is inextricably linked with emerging memory devices such as memristors. Finally, the reprint contains a few articles representing how other communities (from typical CMOS design to photonics) are fighting on their own fronts in the quest towards low-power computation, as a comparison with the memristor literature. We hope that readers will enjoy discovering the articles within.
LASSO – an observatorium for the dynamic selection, analysis and comparison of software
Mining software repositories at the scale of 'big code' (i.e., big data) is a challenging activity. As well as finding a suitable software corpus and making it programmatically accessible through an index or database, researchers and practitioners have to establish an efficient analysis infrastructure and precisely define the metrics and data extraction approaches to be applied. Moreover, for analysis results to be generalisable, these tasks have to be applied at a large enough scale to have statistical significance, and if they are to be repeatable, the artefacts need to be carefully maintained and curated over time. Today, however, a lot of this work is still performed by human beings on a case-by-case basis, with the level of effort involved often having a significant negative impact on the generalisability and repeatability of studies, and thus on their overall scientific value.
The general purpose, 'code mining' repositories and infrastructures that have emerged in recent years represent a significant step forward because they automate many software mining tasks at an ultra-large scale and allow researchers and practitioners to focus on defining the questions they would like to explore at an abstract level. However, they are currently limited to static analysis and data extraction techniques, and thus cannot support (i.e., help automate) any studies which involve the execution of software systems. This includes experimental validations of techniques and tools that hypothesise about the behaviour (i.e., semantics) of software, or data analysis and extraction techniques that aim to measure dynamic properties of software.
In this thesis a platform called LASSO (Large-Scale Software Observatorium) is introduced that overcomes this limitation by automating the collection of dynamic (i.e., execution-based) information about software alongside static information. It features a single, ultra-large-scale corpus of executable software systems, created by amalgamating existing Open Source software repositories, and a dedicated DSL for defining abstract selection and analysis pipelines. Its key innovations are integrated capabilities for searching for and selecting software systems based on their exhibited behaviour, and an 'arena' that allows their responses to software tests to be compared in a purely data-driven way. We call the platform a 'software observatorium' since it is a place where the behaviour of large numbers of software systems can be observed, analysed and compared.
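The 'arena' idea, comparing candidate implementations purely by their observed responses to the same tests, can be sketched generically. The snippet below is a conceptual Python illustration under assumed names; it is not LASSO's actual DSL or API.

from collections import defaultdict

def arena(candidates, test_inputs):
    """Group candidate functions by the tuple of outputs they produce."""
    by_behaviour = defaultdict(list)
    for name, func in candidates.items():
        responses = tuple(func(x) for x in test_inputs)
        by_behaviour[responses].append(name)
    return dict(by_behaviour)

# Toy usage: two behaviourally equivalent implementations and one outlier.
candidates = {
    "impl_a": lambda s: s.upper(),
    "impl_b": lambda s: "".join(c.upper() for c in s),
    "impl_c": lambda s: s.lower(),
}
print(arena(candidates, ["Hello", "LASSO"]))

Grouping by identical responses is what makes the comparison purely data-driven: no static analysis of the candidates is needed, only their behaviour on the chosen tests.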
Deployment of Deep Neural Networks on Dedicated Hardware Accelerators
Deep Neural Networks (DNNs) have established themselves as powerful tools for a wide range of complex tasks, for example computer vision or natural language processing. DNNs are notoriously demanding on compute resources and, as a result, dedicated hardware accelerators are developed for all kinds of use cases. Different accelerators provide solutions ranging from hyperscale cloud environments for the training of DNNs to inference devices in embedded systems. They implement intrinsics for complex operations directly in hardware; a common example is intrinsics for matrix multiplication. However, there exists a gap between the ecosystems of applications for deep learning practitioners and hardware accelerators. How DNNs can efficiently utilize the specialized hardware intrinsics is still mainly determined by human hardware and software experts.
Methods to automatically utilize hardware intrinsics in DNN operators are a subject of active research. The existing literature often works with transformation-driven approaches, which aim to establish a sequence of program rewrites and data-layout transformations such that the hardware intrinsic can be used to compute the operator. However, the complexity of this task has not yet been explored, especially for less frequently used operators like Capsule Routing. Not only is the implementation of DNN operators with intrinsics challenging; their optimization on the target device is also difficult. Hardware-in-the-loop tools are often used for this problem: they use latency measurements of implementation candidates to find the fastest one. However, specialized accelerators can have memory and programming limitations, so that not every arithmetically correct implementation is a valid program for the accelerator. These invalid implementations can lead to unnecessarily long optimization times.
This work investigates the complexity of transformation-driven processes to automatically embed hardware intrinsics into DNN operators. It is explored with a custom, graph-based intermediate representation (IR). While operators like Fully Connected Layers can be handled with reasonable effort, increasing operator complexity or advanced data-layout transformations can lead to scaling issues.
Building on these insights, this work proposes a novel method to embed hardware intrinsics into DNN operators, based on a dataflow analysis. The dataflow embedding method allows the exploration of how intrinsics and operators match without explicit transformations. From the results it can derive the data layout and program structure necessary to compute the operator with the intrinsic. A prototype implementation for a dedicated hardware accelerator demonstrates state-of-the-art performance for a wide range of convolutions, while being agnostic to the data layout. For some operators in the benchmark, the presented method can also generate alternative implementation strategies to improve hardware utilization, resulting in a geometric-mean speed-up of 2.813× while reducing the memory footprint. Lastly, by curating the initial set of possible implementations for the hardware-in-the-loop optimization, the median time-to-solution is reduced by a factor of 2.40. At the same time, the possibility of prolonged searches due to a bad initial set of implementations is reduced, improving the optimization's robustness by a factor of 2.35.
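To illustrate what embedding a matrix-multiplication intrinsic into an operator involves, the sketch below tiles a general matrix product onto a hypothetical fixed-size 4x4 hardware intrinsic, emulated here in software. The tile size, the emulation, and the zero-padding scheme are assumptions for illustration; the thesis's dataflow-based embedding method is not reproduced.

import numpy as np

TILE = 4  # assumed intrinsic shape: 4x4 tiles

def intrinsic_4x4(acc, a_tile, b_tile):
    """Software stand-in for a hardware intrinsic: acc += a_tile @ b_tile."""
    return acc + a_tile @ b_tile

def tiled_matmul(A, B):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    # Pad every dimension up to a multiple of the tile size.
    Mp, Kp, Np = (-(-d // TILE) * TILE for d in (M, K, N))
    Ap = np.zeros((Mp, Kp)); Ap[:M, :K] = A
    Bp = np.zeros((Kp, Np)); Bp[:K, :N] = B
    C = np.zeros((Mp, Np))
    for i in range(0, Mp, TILE):
        for j in range(0, Np, TILE):
            for k in range(0, Kp, TILE):  # reduction over tile-sized chunks
                C[i:i+TILE, j:j+TILE] = intrinsic_4x4(
                    C[i:i+TILE, j:j+TILE],
                    Ap[i:i+TILE, k:k+TILE],
                    Bp[k:k+TILE, j:j+TILE])
    return C[:M, :N]

# Check the tiled version against a reference multiplication.
A, B = np.random.rand(5, 7), np.random.rand(7, 6)
assert np.allclose(tiled_matmul(A, B), A @ B)

The padding and fixed loop structure hint at why invalid or wasteful implementations arise in practice: memory limits and layout constraints on real accelerators rule out many arithmetically correct variants, which is exactly what the curated hardware-in-the-loop search described above addresses.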
- …