University of Edinburgh

Edinburgh Research Archive
    41201 research outputs found

    Addressing microarchitectural implications of serverless functions

    Serverless computing has emerged as a widely-used paradigm for running services in the cloud. In this model, developers organize applications as a set of functions invoked on-demand in response to events, such as HTTP requests. Developers are charged for CPU time and memory footprint during function execution, incentivising them to reduce runtime and memory consumption. Furthermore, to avoid long start-up delays, cloud providers keep recently-triggered instances idle (or warm) for some time, anticipating future invocations. Consequently, a server may host thousands of warm instances of various functions, their executions interleaved based on incoming invocations. This thesis investigates the workload characteristics of serverless computing and observes that: (1) there is high interleaving among warm instances on a given server; (2) individual warm functions are invoked relatively infrequently, often at intervals of seconds or minutes; and (3) many function invocations complete within milliseconds. This interleaved execution of rarely invoked functions leads to thrashing of each function's microarchitectural state between invocations. Meanwhile, the short execution time of functions impedes the amortization of warming up on-chip microarchitectural state. As a result, when a given memory-resident function is re-invoked, it commonly finds its on-chip microarchitectural state completely cold due to thrashing by other functions, a phenomenon we term lukewarm execution. Our analysis reveals that the cold microarchitectural state severely affects CPU performance, with the main source of degradation being the core front-end, comprising instruction delivery, branch identification via the BTB, and conditional branch prediction. Based on our analysis, we propose two mechanisms to address performance degradation due to lukewarm invocations.
The first technique is Jukebox, a record-and-replay instruction prefetcher specifically designed to mitigate the high cost of off-chip instruction misses. We demonstrate that Jukebox's simple design effectively eliminates more than 95% of long-latency off-chip instruction misses. The second technique is Ignite, which builds on Jukebox to offer a comprehensive solution for restoring front-end microarchitectural state, including instructions, BTB, and branch predictor state, via unified metadata. Ignite records an invocation's control flow graph in compressed format and uses it to restore the state of the front-end structures the next time the function is invoked. Ignite significantly reduces instruction misses, BTB misses, and branch mispredictions, resulting in an average performance improvement of 43%. In summary, this thesis demonstrates that serverless systems present distinct workload characteristics that are a poor match for traditional CPU designs, severely impacting performance. Two simple techniques can overcome these bottlenecks by preserving microarchitectural state across function invocations.
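The record-and-replay idea behind Jukebox can be illustrated with a toy simulation. All names, the block size, and the data structures below are illustrative assumptions, not the hardware design evaluated in the thesis: record the instruction-cache blocks that miss during one invocation, then prefetch that trace at the start of the next invocation so the warm instance no longer begins with a cold instruction cache.

```python
# Toy sketch of a record-and-replay instruction prefetcher (illustrative,
# not the thesis design). During a "recording" invocation we log the
# cache-block addresses of instruction misses; before the next invocation
# we replay the log as prefetches, warming the instruction cache up front.

BLOCK = 64  # assumed bytes per instruction cache block

def record_trace(miss_addresses):
    """Deduplicate and block-align the miss addresses seen while recording."""
    seen, trace = set(), []
    for addr in miss_addresses:
        blk = addr // BLOCK * BLOCK
        if blk not in seen:
            seen.add(blk)
            trace.append(blk)
    return trace

def replay(trace, cache):
    """Prefetch every recorded block into the (simulated) cache."""
    for blk in trace:
        cache.add(blk)

# First invocation: misses are recorded (0x1000 and 0x1004 share a block).
trace = record_trace([0x1000, 0x1004, 0x2040, 0x1000, 0x3080])
assert trace == [0x1000, 0x2040, 0x3080]

# Second invocation: replaying the trace means these blocks hit immediately.
cache = set()
replay(trace, cache)
assert 0x2040 in cache
```

The point of the sketch is the amortisation argument from the abstract: the one-time cost of recording is paid back on every subsequent warm invocation, which would otherwise find the cache thrashed by interleaved functions.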

    The Story of Ethiopian Governance: From a centralized public service monopoly to ‘big man’ politics

    This report discusses how the EPRDF, from operating on the logic of a centralized political market, degenerated into an oligopolistic monopoly and later into a neo-patrimonial state that operates on patronage and sustaining disorder while maintaining a simulacrum of an institutionalized state. The report is divided into four parts. The first part discusses the genesis of the EPRDF and its highly centralized public service monopoly in government. The second part discusses the beginning of the end of the EPRDF’s centralized public service monopoly. The third part discusses the complete transformation of the state into what is typically described as African ‘big man’ politics. The fourth and final part is the conclusion.

    Parental care and cooperation in the burying beetle Nicrophorus vespilloides

    In species that provide parental care, individuals must choose how to split their resources between caring for their current offspring and investing in their own reproductive potential. These decisions are made based on factors that shift the balance of costs and benefits associated with allocating resources to current or future reproduction. For parents providing uniparental care, such factors relate to the value of the current brood and the likelihood of future reproduction. Females and males that cooperate to provide biparental care must also consider factors that may influence the contribution of their partner. In this thesis, I explore what affects the level of care parents provide for their offspring and how females and males that provide biparental care balance their relative contributions in the burying beetle Nicrophorus vespilloides. I focus on four factors: previous reproductive allocation, nutritional state, social environment, and synchrony in the onset of care. First, I found that females provided the same level of care to a subsequent brood regardless of previous reproductive allocation and resource access, which suggests that neither affected future ability to provide care. Next, I found that females adjusted their level of care in response to both their own nutritional state and that of their partner, and that these decisions were independent of their partner’s contribution, while males only responded to the contribution of their partner. Then, I found that parents provided a similar level of care regardless of the presence of female or male intruders. Finally, I found that males provided more care when the female and male started providing care asynchronously in comparison to when they started synchronously, while females provided a similar level of care regardless of synchrony.

    High precision tools for intraoperative brain cancer therapy

    Glioblastoma remains the most aggressive brain malignancy to affect adults. The disease has a median survival of only 15 months, owing to tumour recurrence within 2 cm of the primary tumour site. Despite the poor prognosis, standard treatment still relies heavily on surgical resection of the tumours, followed by chemotherapy with temozolomide and radiotherapy. This treatment is hindered by the vast heterogeneity within and between glioblastoma subtypes. In this work, the author has proposed the use of photodynamic therapy as an adjuvant to surgical resection. Photodynamic therapy utilises light to induce a reaction between a photosensitiser and molecular oxygen in order to initiate cell death. This technique centres on three main components: the photosensitiser, light and oxygen. Each of these is, in itself, non-toxic, and cell death is elicited only when all three are combined. While there are promising candidates for the treatment of glioblastoma through photodynamic therapy, these photosensitisers are often bulky molecules which lack specific targeting groups. This results in side- and off-target effects as well as patient-dependent differences in dosage in the target tissues at the time of treatment. Therefore, a library of photosensitisers based on the nitrobenzoselenadiazole (SeNBD) scaffold was designed. By taking advantage of the low molecular weight of SeNBD, this photosensitiser was readily conjugated to other small molecules, with the overarching aim of targeting glioblastoma metabolism. While differential uptake of metabolites was not effective enough to discriminate between healthy and cancer tissue, a further level of control was added through the design of an activatable photosensitiser. This was achieved by disrupting the electron density within the scaffold, thus quenching the absorbance of the photosensitiser. The nature of these quenching groups can be altered to tune the stimulus which uncages the photosensitiser.
As a result, a library of orthogonal and environmentally sensitive photosensitisers based on SeNBD has been created, and this approach has been applied to other photosensitive scaffolds (i.e. 2-thioxocoumarin, thionaphthalimide, Nile Blue and methylene blue). This highlights the modularity of the approach and its wider implications for developing activatable photodynamic therapy agents. Combining these two strategies paves the way for the development of some of the first enzyme-activatable, metabolically targeted photosensitisers. As a model, a cathepsin B-sensitive pro-fluorophore was synthesised based on the ONBD scaffold. Transcriptomic data from glioblastoma stem cells (GCGR-E17, GCGR-E31, GCGR-E57) and healthy foetal controls (GCGR-NS12ST_A, GCGR-NS17ST_A, GCGR-NS9FB_B) suggest that this approach can provide an effective route to selective ablation of glioblastoma cells without significant damage to surrounding healthy tissues.

    Breeding for reduced methane emissions in livestock

    This project examined the potential reductions in livestock methane emissions through breeding, and the policy levers that could motivate these changes. We explored the technologies used to detect and measure methane, manage data and support the breeding process, and examined their potential availability in Scotland in 2030 and 2045. We also identified the relevant policy levers and behaviour changes, and considered what Government, the post-farm market, pre-farm gate actors and farmers can do differently to encourage methane reductions through breeding.

    Communication and complexity: layperson decision-making in the Scottish Children’s Hearings system

    The Children’s Hearings system is a unitary, non-court-based legal tribunal system responding to child protection and youth justice concerns in Scotland. Decision-making is undertaken by three lay volunteers called panel members, and the participation of children and young people is viewed as central to the process. Despite the importance of the decisions being made, there has been little examination of how panel members experience the decision-making process or how it can be theorised. An ethnographic research methodology was employed to explore how panel members make decisions and what helps or hinders the process. Observations of 67 children’s hearings and pre-hearing panels, and qualitative interviews with 20 panel members, were undertaken. Panel member decision-making involves two key processes: making sense of and forming judgements based on information presented in advance of a hearing, and exploring this information in the context of a face-to-face meeting with the child, parents, social worker and other relevant people. Findings highlight the crucial role of written reports in preparing panel members for a hearing and the barriers to effective communication, the centrality of emotion and the need for panel members to manage their own and others’ emotional responses, and the importance of language use and interactional competence in managing hearings skilfully. These features affect how children’s views are heard and considered in hearings. This study addresses a gap in the literature by providing a detailed examination of the communicative and interactional processes central to panel member decision-making and locating these findings within wider judgement and decision-making frameworks. Recommendations aimed at supporting the decision-making process are proposed.
The findings contribute to current knowledge and understanding of decision-making in child welfare proceedings and of how lay decision-makers make sense of and reach decisions regarding the care of a child.

    Estimating genetic and environmental sources of variance for depression

    Major depressive disorder (MDD) is a highly prevalent psychiatric disorder that is now the leading cause of worldwide disability in terms of years lived with disability. In the majority of Western countries, the lifetime prevalence of MDD typically varies between 8% and 12%. There are consistently established associations with female gender, alcohol abuse, diabetes, and poor social relationships. The high prevalence and disability associated with MDD make research aimed at understanding its aetiology and developing effective treatments a priority. MDD aggregates within families, and the heritability of MDD has been estimated as 37% (SE 5%) in a meta-analysis of twin studies and 32% (SE 9%) using genomic similarity among unrelated individuals. Given the genetic contribution to MDD, genetic studies are a potential means of understanding its aetiology as well as identifying new drug targets. Despite this substantial genetic contribution to its aetiology, candidate gene and genome-wide association studies, including a mega-analysis of more than 20,000 individuals, with 9240 cases and 9519 controls in the discovery sample, have failed to identify significantly associated specific genetic variants. Nonetheless, genome-wide association and related studies have shown that MDD is a genetically complex disorder in which risk is proposed to result from the cumulative effects of many low-penetrance genetic variants. Increasingly it is also recognised that a diagnosis of MDD may group together individuals who suffer from causally distinct conditions. Some studies indicate that the heritability estimates of MDD differ by sex, with female MDD showing higher heritability than male MDD, suggesting that the genetic causes may be somewhat distinct. Further, it has been suggested that both age of onset and single versus recurrent episode illness course may have somewhat differing genetic aetiologies.
These findings highlight the substantial heterogeneity of MDD, which may further impede the search for genetic causes. There is therefore an urgent need to increase sample sizes and to refine and stratify the phenotype, reducing the heterogeneity and measurement error of MDD phenotypes with the aim of identifying more genetically homogeneous targets for better-powered association studies. Pedigree-based genetic studies are an efficient means of dissecting trait heterogeneity because they are able to capture all additive heritability whilst matching for key confounds present in studies of unaffected subjects.
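The twin-based heritability figure quoted above can be illustrated with Falconer's classic ACE decomposition, a standard textbook method rather than the thesis's own analysis. The twin correlations below are hypothetical values chosen to reproduce a heritability of roughly 37%, not figures from the thesis:

```python
# Falconer-style ACE decomposition from twin correlations (standard
# textbook formulas; r_mz and r_dz here are illustrative, not real data).

def ace_from_twins(r_mz, r_dz):
    """Return (A, C, E) variance shares from MZ and DZ twin correlations."""
    a2 = 2 * (r_mz - r_dz)   # A: additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # C: shared environmental variance
    e2 = 1 - r_mz            # E: unique environment + measurement error
    return a2, c2, e2

a2, c2, e2 = ace_from_twins(r_mz=0.43, r_dz=0.245)
print(f"h2 = {a2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")
# prints: h2 = 0.37, c2 = 0.06, e2 = 0.57
```

Because E absorbs measurement error, noisier phenotyping deflates both twin correlations and hence the heritability estimate, which is one way to see why the refined, less error-prone phenotypes called for above should yield better-powered genetic studies.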

    Classification and quantification of uncertainties for a tidal turbine power performance assessment

    No full text
    As part of the worldwide initiative to lower carbon emissions, renewable energy sources are playing a crucial role in decarbonising our energy systems. Tidal energy, known for its reliability and predictability, is emerging as a contributor to the expanding array of renewable energy sources. As the tidal industry advances towards commercialisation and larger arrays, reducing uncertainties is increasingly important: uncertainties around power production increase commercial risk. This thesis classifies and quantifies uncertainties in Power Performance Assessments (PPAs) and offers guidance for future deployments. It examines uncertainties arising from data processing and flow variations, using in-situ measurements obtained from Acoustic Doppler Current Profilers (ADCPs) situated in close proximity to an operational Tidal Energy Converter (TEC). At present, developers often follow guidance from International Electrotechnical Commission (IEC) Technical Specification 62600-200, which recommends two types of instrument positioning relative to the turbine: in-line and adjacent. In-line positioning involves deploying separate instruments upstream and downstream of the turbine, while adjacent positioning involves placing an instrument to either side of the turbine, across the rotor plane. The performance of two closely-located in-line ADCPs (spaced 45 m apart) was assessed, demonstrating their ability to gather usable data in this layout. However, this work identified, for the first time, interference between the ADCPs throughout the campaign and quantified its subsequent impact on Annual Energy Prediction (AEP) estimates. A method to remove data anomalies caused by interference between closely positioned ADCPs has been developed and demonstrated, resulting in a 7% variation in estimated AEP. It was found that instrument placement is critical.
Whilst small differences in velocity were found for in-line ADCPs, for adjacent ADCPs an uncertainty in AEP of 2.6% and 7.3% was measured for flood and ebb tides respectively, where the difference stems from flow structures causing measurement bias during the ebb tide. Results show that for regions of high vertical shear, AEP estimates can be misrepresented by up to 2.3% and 5.5% under an imposed vertical misalignment of 1 and 2 metres respectively. TEC developers require knowledge of flow direction to design and operate turbines. A study on methods to estimate characteristic flow directions found a variation in direction of 1.2◦ when averages across the rotor plane were considered and 4◦ when only operational velocities were considered. The flow direction naturally evolves over a tidal cycle, leaving non-yawing TEC concepts (those without a nacelle rotation mechanism) susceptible to off-axis currents. A sensitivity study to misalignment was conducted: under 5◦, 10◦ and 15◦ misalignment, the AEP differed by 2%, 6% and 13% respectively. The work highlights that the impact of TEC misalignment on AEP estimates is influenced by the rated velocity of the TEC and the maximum velocity at the site, particularly affecting TECs rated close to or over the maximum flow velocity. However, the work also highlights that non-yawing TECs rated at least 10% below the maximum flow velocity can still operate at full capacity even with a 25◦ misalignment. During periods of slow speeds (tidal velocities at the Fall of Warness were found to exhibit a difference of approximately 30% between neap and spring tides), misalignment is shown to become more important. The methodology of uncertainty analysis demonstrated in this thesis gives comprehensive instructions for the PPA of tidal devices, including instrument setup, data processing and accurate uncertainty estimates. Recognising and dealing with uncertainties in measurements remains vital for tidal energy projects.
By adopting the methods outlined in this work, data integrity, accuracy, and dependability can be enhanced, resulting in better decision-making, improved evaluations of performance, and heightened investor trust within the tidal energy sector.
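The qualitative interaction between yaw misalignment and rated velocity can be sketched with a back-of-the-envelope model. This is an illustrative assumption, not the thesis methodology: power is taken to scale with the cube of the rotor-normal velocity component and to saturate at the turbine's rated velocity, and the velocity series below is invented.

```python
# Toy sensitivity of relative energy yield to yaw misalignment, assuming
# (as a common simplification) power ~ (rotor-normal velocity)^3, capped
# at rated power. Flow series and rated velocity are illustrative only.
import math

def energy(velocities, rated_v, misalign_deg=0.0):
    """Relative energy over a velocity series, capped at rated velocity."""
    cos = math.cos(math.radians(misalign_deg))
    return sum(min(v * cos, rated_v) ** 3 for v in velocities)

# Invented tidal velocity series (m/s); turbine rated below the 3.0 peak.
flows = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 2.5, 2.0, 1.5, 1.0]
base = energy(flows, rated_v=2.5)
for theta in (5, 10, 15):
    loss = 1 - energy(flows, rated_v=2.5, misalign_deg=theta) / base
    print(f"{theta:2d} deg misalignment -> {loss:.1%} relative energy loss")
```

Raising or lowering `rated_v` relative to the peak flow in this toy series reproduces the qualitative effect reported above: a turbine rated well below the site's maximum velocity loses little energy at small misalignments, because the capped peak flows still reach rated power even with a reduced rotor-normal component.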

    First Scottish Forum on Future Electricity Markets - Report

    No full text
    This report summarises the presentations and discussions at the first Scottish forum on future electricity markets, held in December 2024. The main topic was options for reform of the UK electricity market, including zonal pricing and a reformed national market.

    Self-supervised machine learning algorithms for hyperspectral image inpainting

    No full text
    Hyperspectral images (HSIs) are captured by satellite sensors in very narrow bands of the electromagnetic spectrum, with samples acquired at different time slots, for example using push-broom strategies, which are the state-of-the-art practical technology. Different from RGB images, the nature of the HSI acquisition system makes an HSI a 3D data cube that covers hundreds or thousands of narrow spectral bands, conveying a wealth of spatial and spectral information. However, due to instrumental errors and atmospheric changes, the HSIs obtained in practice are often contaminated by noise and dark pixels, which may severely compromise subsequent processing. It is thus of vital importance to improve the quality of the image in the first place. This PhD thesis focuses on the design and analysis of HSI inpainting algorithms that accurately recover images from incomplete observations. Existing solutions either fail or behave badly in the most challenging scenarios, where all the spectral bands of a pixel are missing, which may happen in practice due to instrumental or downlink failures. This study aims to solve this issue by exploiting recent deep learning techniques. We hope this thesis is able to encourage fruitful discussions and stimulate future research on the exploration of more powerful deep models for solving HSI inpainting problems. Firstly, we introduce a novel HSI missing pixel prediction algorithm, called Low Rank and Sparsity Constraint Plug-and-Play (LRS-PnP). It is shown that LRS-PnP can effectively cope with the aforementioned difficulties encountered by traditional methods. The proposed LRS-PnP algorithm is further extended to a self-supervised model by combining LRS-PnP with the Deep Image Prior (DIP), called LRS-PnP-DIP.
We show that the proposed LRS-PnP-DIP algorithm enjoys the specific learning capability of deep networks, called inductive bias, without needing any external training data, i.e. self-supervised learning. In a series of experiments with real data, we show that LRS-PnP-DIP matches or outperforms the state-of-the-art learning-based inpainting methods. However, it is found that the instability inherited from the conventional DIP model makes the LRS-PnP-DIP algorithm sometimes diverge. This observation motivates us to conduct a theoretical analysis of the convergence of the proposed method. Secondly, we explore LRS-PnP and LRS-PnP-DIP in more depth by showing that their potential instability can be resolved by slightly modifying both the deep hyperspectral prior and the plug-and-play denoiser. Under some mild assumptions, we give a fixed-point convergence proof for the LRS-PnP-DIP algorithm and introduce a variant of LRS-PnP-DIP. We show through extensive experiments that the proposed solution produces visually and qualitatively superior inpainting results, achieving competitive performance compared to the original algorithm. Thirdly, we present a powerful HSI inpainting algorithm that dynamically combines self-supervised learning with the recently popular diffusion models. The proposed Hyperspectral Diffusion based on Equivariant Imaging (HyDiff-EI) algorithm exploits the strong learning capability of the neural network prior and leverages the high-level hierarchical information of diffusion models. We empirically demonstrate the effectiveness of the proposed method on HSI datasets, showing a large performance gain over existing methods based on deep priors or diffusion models, and establishing a new state of the art for the self-supervised HSI inpainting task.
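The low-rank ingredient of such inpainting methods can be illustrated with a Soft-Impute-style completion loop. This is a deliberately simplified stand-in for LRS-PnP: the sparsity constraint and the learned plug-and-play denoiser are omitted, and the data and parameter values are invented for illustration.

```python
# Minimal low-rank matrix completion by iterative singular-value
# soft-thresholding (Soft-Impute style) -- a simplified stand-in for the
# low-rank term in LRS-PnP; sparsity and learned denoisers are omitted.
import numpy as np

def inpaint_low_rank(X, mask, tau=0.5, iters=200):
    """Fill the False entries of `mask` via singular-value soft-thresholding."""
    Y = np.where(mask, X, 0.0)  # start with missing entries zeroed
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        Z = (U * np.maximum(s - tau, 0.0)) @ Vt  # low-rank proximal step
        Y = np.where(mask, X, Z)  # keep observed pixels, update missing ones
    return Y

rng = np.random.default_rng(0)
X = np.outer(rng.standard_normal(20), rng.standard_normal(30))  # rank-1 toy slice
mask = rng.random(X.shape) > 0.2  # ~20% of entries missing at random
rec = inpaint_low_rank(X, mask)
rel_err = np.linalg.norm(rec[~mask] - X[~mask]) / np.linalg.norm(X[~mask])
```

Note that plain completion like this has nothing to constrain a pixel whose bands are all missing, which is exactly the hard scenario described above and the reason the thesis turns to learned deep priors and diffusion models.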

    31,717 full texts
    41,210 metadata records
    Updated in last 30 days.
    Edinburgh Research Archive is based in the United Kingdom.