Configuration Management of Distributed Systems over Unreliable and Hostile Networks
The economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially its command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems.
This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication: communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which grants full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of the minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration.
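To make the last point concrete, the following is a minimal sketch, in Python, of how an idempotent base resource can be expressed in an imperative general-purpose language; the `FileResource` class and its `check`/`apply`/`ensure` methods are illustrative names, not the API of the thesis prototypes.

```python
# Minimal sketch of an idempotent "base resource" in an imperative
# general-purpose language (Python). Names are illustrative, not the
# thesis prototype's API: the resource checks current state first and
# applies changes only when needed, so repeated runs converge to the
# same end state.
import os


class FileResource:
    """Ensure a file exists with the given content."""

    def __init__(self, path: str, content: str):
        self.path = path
        self.content = content

    def check(self) -> bool:
        # True when the system already matches the desired state.
        if not os.path.exists(self.path):
            return False
        with open(self.path) as f:
            return f.read() == self.content

    def apply(self) -> None:
        with open(self.path, "w") as f:
            f.write(self.content)

    def ensure(self) -> bool:
        # Idempotent entry point: returns True if a change was made.
        if self.check():
            return False
        self.apply()
        return True


if __name__ == "__main__":
    motd = FileResource("/tmp/motd", "managed by configuration management\n")
    print("changed:", motd.ensure())   # first run: True
    print("changed:", motd.ensure())   # second run: False (no-op)
```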
Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies, and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools under high load and low memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without a stable topology, due to the asynchronous nature of the Hidden Master architecture.
The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of an organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life, both for defining new configurations in unexpected situations using the base resources and for abstracting those using standard language features, and that such a system seems easy to learn.
Potential business benefits were identified and evaluated through individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers of improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
Novel neural architectures & algorithms for efficient inference
In the last decade, the machine learning universe wholeheartedly embraced deep neural networks (DNNs) with the advent of neural architectures such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), and transformers. These models have empowered many applications, such as ChatGPT and Imagen, and have achieved state-of-the-art (SOTA) performance on many vision, speech, and language modeling tasks. However, SOTA performance comes at a cost: large model size, compute-intensive training, increased inference latency, and higher working memory. This thesis aims to improve the resource efficiency of neural architectures, i.e., to significantly reduce the computational, storage, and energy consumption of a DNN without any significant loss in performance.
Towards this goal, we explore novel neural architectures as well as training algorithms that allow low-capacity models to achieve near-SOTA performance. We divide this thesis into two dimensions: Efficient Low Complexity Models, and Input Hardness Adaptive Models.
Along the first dimension, i.e., Efficient Low Complexity Models, we improve DNN performance by addressing instabilities in the existing architectures and training methods. We propose novel neural architectures inspired by ordinary differential equations (ODEs) to reinforce input signals and attend to salient feature regions. In addition, we show that carefully designed training schemes improve the performance of existing neural networks. We divide this exploration into two parts:
(a) Efficient Low Complexity RNNs. We improve RNN resource efficiency by addressing poor gradients, noise amplification, and the issues of Backpropagation Through Time (BPTT) training. First, we improve RNNs by solving ODEs that eliminate vanishing and exploding gradients during training. To do so, we present Incremental Recurrent Neural Networks (iRNNs), which keep track of increments on the equilibrium surface. Next, we propose Time Adaptive RNNs, which mitigate noise propagation in RNNs by modulating the time constants in the ODE-based transition function. We empirically demonstrate the superiority of ODE-based neural architectures over existing RNNs. Finally, we propose the Forward Propagation Through Time (FPTT) algorithm for training RNNs, and show that FPTT yields significant gains compared to the more conventional BPTT scheme.
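As a rough illustration of the ODE view (not the exact iRNN or Time Adaptive RNN formulation from this work), the sketch below discretises a hidden-state ODE with explicit Euler steps so that the state moves incrementally toward an input-dependent equilibrium rather than being overwritten each step:

```python
# Toy sketch of an ODE-inspired recurrent cell in NumPy. Illustrative
# only: the update h <- h + dt * (phi(W x + U h + b) - h) drives the
# hidden state toward an input-dependent equilibrium, which is the
# general mechanism that tempers vanishing/exploding gradients.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, T = 4, 8, 10

W = rng.normal(0, 0.5, (d_h, d_in))   # input weights
U = rng.normal(0, 0.5, (d_h, d_h))    # recurrent weights
b = np.zeros(d_h)
dt = 0.1                              # Euler step size (could be learned
                                      # per unit, as in time-adaptive RNNs)

h = np.zeros(d_h)
xs = rng.normal(size=(T, d_in))
for x in xs:
    target = np.tanh(W @ x + U @ h + b)  # input-dependent equilibrium
    h = h + dt * (target - h)            # incremental Euler update
print(h)
```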
(b) Efficient Low Complexity CNNs. Next, we improve CNN architectures by reducing their resource usage. CNNs require greater depth to generate high-level features, resulting in computationally expensive models. We design a novel residual block, the Global layer, that constrains the input and output features by approximately solving partial differential equations (PDEs). It yields better receptive fields than traditional convolutional blocks and thus results in shallower networks. Further, we reduce the model footprint by enforcing a novel inductive bias that formulates the output of a residual block as a spatial interpolation between high-compute anchor pixels and low-compute cheaper pixels. This results in spatially interpolated convolutional blocks (SI-CNNs) with better compute-performance trade-offs. Finally, we propose an algorithm that enforces various distributional constraints during training in order to achieve better generalization. We refer to this scheme as distributionally constrained learning (DCL).
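The spatial-interpolation idea can be sketched as follows. This is a hypothetical PyTorch block, not the thesis code: the expensive convolution runs only on a subsampled grid of anchor pixels, and the remaining cheap pixels are filled by bilinear interpolation.

```python
# Illustrative sketch of spatial interpolation between high-compute
# anchor pixels and cheap interpolated pixels in a residual block.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InterpolatedConvBlock(nn.Module):
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        anchors = x[..., ::self.stride, ::self.stride]  # cheap subsample
        y = F.relu(self.conv(anchors))        # expensive path on anchors only
        y = F.interpolate(y, size=(h, w), mode="bilinear",
                          align_corners=False)  # fill the cheap pixels
        return x + y                            # residual connection


x = torch.randn(1, 16, 32, 32)
print(InterpolatedConvBlock(16)(x).shape)  # torch.Size([1, 16, 32, 32])
```

With a stride of 2, the convolution touches a quarter of the pixels, which is where the compute saving of such a block would come from.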
In the second dimension, i.e., Input Hardness Adaptive Models, we introduce the notion of the hardness of an input relative to a given architecture. In the first dimension, a neural network allocates the same resources, such as compute, storage, and working memory, to all inputs; it inherently assumes that all examples are equally hard for a model. Here we challenge this assumption, reasoning that some inputs are relatively easy for a network to predict compared to others. Input hardness enables us to create selective classifiers wherein a low-capacity network handles simple inputs while abstaining from predicting on complex inputs. Next, we create hybrid models that route the hard inputs from the low-capacity abstaining network to a high-capacity expert model, and we design various architectures that adhere to this hybrid inference style. Further, input hardness enables us to selectively distill the knowledge of a high-capacity model into a low-capacity model by discarding hard inputs during the distillation procedure.
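A minimal sketch of this hybrid inference style follows, with illustrative models and a hypothetical confidence threshold standing in for the more elaborate abstention architectures designed in the thesis:

```python
# Hypothetical sketch of hardness-adaptive hybrid inference: a small
# model predicts when its confidence clears a threshold and abstains
# otherwise; abstained ("hard") inputs are routed to a large expert.
import torch
import torch.nn as nn

small = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
large = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
THRESHOLD = 0.9  # abstention threshold on the small model's confidence


@torch.no_grad()
def hybrid_predict(x: torch.Tensor) -> torch.Tensor:
    probs = small(x).softmax(dim=-1)
    conf, preds = probs.max(dim=-1)
    hard = conf < THRESHOLD                  # inputs the small model abstains on
    if hard.any():                           # expert handles only the hard ones
        preds[hard] = large(x[hard]).argmax(dim=-1)
    return preds


x = torch.randn(8, 32)
print(hybrid_predict(x))
```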
Finally, we conclude this thesis by sketching out various interesting future research directions that emerge as extensions of the different ideas explored in this work.
Development of a Bayesian calibration framework for archetype-based housing stock models of summer indoor temperature
Adverse effects on health and wellbeing from increased exposure to heat at home have been repeatedly identified as a major climate change adaptation risk in the United Kingdom by the Climate Change Committee and others. Despite recent progress, policy gaps remain in the adaptation of the housing stock. The development of such policies can be guided by housing stock models, which enable the assessment of the impact of climate change adaptation and energy efficiency measures on building performance under different climate scenarios. To ensure well-informed decision-making, uncertainties in these models should be considered. Motivated by the lack of work on this topic, this thesis aims to quantify and reduce the uncertainties of archetype-based housing stock models of summer indoor temperature through a Bayesian calibration framework.
The framework includes the data-driven classification of dwellings into homogeneous groups, the characterisation of model input uncertainty in the form of probability distributions (which can be used as calibration priors), and their reduction through Bayesian inference. The framework's implementation was demonstrated using the ‘UK Housing Stock Model’ (a bottom-up model based on EnergyPlus), the 2011 English Housing Survey and Energy Follow-Up Survey (EHS-EFUS), and the 2009 4M survey in Leicester. The model's root-mean-square error was reduced from 2.5 °C (pre-calibration) to 0.6 °C (post-calibration), while input and structural uncertainties were quantified.
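As a toy illustration of the calibration step (not the thesis framework itself, which calibrates an EnergyPlus-based stock model, typically through a Gaussian process emulator), the sketch below uses a random-walk Metropolis sampler to update a prior on a single input of a stand-in model against synthetic temperature observations:

```python
# Toy Bayesian calibration sketch: infer one model input so that the
# model's predicted indoor temperature matches observations. The
# "model" is a stand-in linear function, not a building simulator.
import numpy as np

rng = np.random.default_rng(1)

def model(theta):            # stand-in for the building simulator
    return 20.0 + 2.0 * theta

obs = model(1.5) + rng.normal(0, 0.5, size=20)  # synthetic observations
sigma = 0.5                                      # observation error (assumed known)

def log_post(theta):
    log_prior = -0.5 * theta**2                  # N(0, 1) calibration prior
    resid = obs - model(theta)
    log_like = -0.5 * np.sum((resid / sigma) ** 2)
    return log_prior + log_like

theta, samples = 0.0, []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.2)            # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[5000:])                  # drop burn-in
print(f"posterior mean {post.mean():.2f}, sd {post.std():.2f}")  # near 1.5
```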
This work offers several novel contributions, including a modular framework that can be adapted to improve other archetype-based housing stock models, an open-source method for identifying model input probability distributions, and an alternative formulation of Gaussian processes that substantially reduces the computational cost of Bayesian calibration. Learnings from this first calibration of its type can inform future academic research. Finally, the analysis of the 2011 EHS-EFUS provides evidence to building designers and policymakers on the dwelling and household characteristics associated with high summer indoor temperatures.
Industrial insights on digital twins in manufacturing: application landscape, current practices, and future needs
The digital twin (DT) research field is expanding rapidly, yet industrial practice in this area remains poorly understood. This paper aims to address this knowledge gap by sharing feedback and future requirements from the manufacturing industry. The study draws on a survey that received 99 responses and on interviews with 14 experts from 10 prominent UK organisations, most of which are involved in the defence industry. The survey and interviews explored topics such as DT design, return on investment, drivers, inhibitors, and future directions for DT development in manufacturing. The findings indicate that DTs should possess characteristics such as adaptability, scalability, interoperability, and the ability to support assets throughout their entire life cycle. On average, completed DT projects reach the break-even point in less than two years. The primary motivators behind DT development were identified as autonomy, customer satisfaction, safety, awareness, optimisation, and sustainability, while the main obstacles include a lack of expertise, funding, and interoperability. This study concludes that the federation of twins and a paradigm shift in industrial thinking are essential for the future of DT development.
A novel methodology for the assessment of wave energy options at early stages
Increasing the share of electricity generated from renewable sources is key to ensuring a fully decarbonised energy system and fighting climate change. Wave energy is an abundant resource but, at the same time, it is the least developed of all renewable technologies. The common assessment framework developed in this thesis is based on sound systems engineering principles and covers the external context, system requirements, and evaluation criteria. It can be applied at different levels of technological maturity and captures the qualitative aspects related to stakeholder expectations. The novel approach guides design decisions throughout the development process for the proper management of risk and uncertainty, and facilitates the selection and benchmarking of wave energy technology at different maturity levels in a controlled manner. The methods proposed in this research provide valuable information for focusing innovation efforts on the areas with the greatest influence on the technology's performance. Incorporating effective innovation strategies into wave energy development helps to manage system complexity and channel innovation towards useful improvements.
Packaging cost-effectiveness models in R: a tutorial.
Background: The use of programming languages such as R in health economics and decision science is increasing, bringing numerous benefits, including greater model development efficiency, improved transparency, and reduced human error. However, there is limited guidance on how best to develop models using R, and no clear consensus has emerged so far.
Methods: We present the advantages of creating health economic models as R packages - structured collections of functions, data sets, tests, and documentation. Assuming an intermediate understanding of R, we provide a tutorial to demonstrate how to construct a basic R package for health economic evaluation. All source code used in or referenced by this paper is available under an open-source licence.
Case Study: We use the Sick Sicker Model as a case study, applying the steps from the tutorial to standardise model development and documentation and to aid review. This can improve the distribution of code, thereby streamlining model development and improving methods in health economic evaluation.
Conclusion: R packages offer a valuable framework for enhancing the quality and transparency of health economic evaluation models. Embracing better, more standardised software development practices, while fostering a collaborative culture, has the potential to significantly improve the quality of health economic models and, ultimately, support better decision-making in healthcare.
Optimization of 5G Second Phase Heterogeneous Radio Access Networks with Small Cells
Due to the exponential increase in high data-demanding applications and services per coverage area, it is becoming challenging for existing cellular networks to handle the massive number of users and their demands. Network operators recognise that the current wireless network may not be able to carry future traffic demands. To overcome these challenges, operators are taking an interest in efficiently deploying heterogeneous networks. Currently, 5G is in the commercialisation phase. Network evolution through the addition of small cells will extend the existing wireless network with enriched capabilities and innovative features. Global 5G standardisation has introduced 5G New Radio (NR) under the 3rd Generation Partnership Project (3GPP), which can support a wide range of frequency bands (<6 GHz to 100 GHz).

For the different trends and verticals that 5G NR encounters, functional splitting and its cost evaluation are considered. Aspects ranging from network slicing to the assessment of business opportunities and allied standardisation efforts are illustrated. The study explores the carrier aggregation (CA) technique with Picocells in 4G to achieve high spectral efficiency, supported by small-cell massification while benefiting from statistical multiplexing gain. With CA in LTE-Sim (4G), goodput values of 40 Mbps for a cell radius of 500 m and 29 Mbps for a cell radius of 50 m were obtained, three times higher than in the scenario without CA (2.6 GHz plus 3.5 GHz frequency bands).

Heterogeneous networks have been under investigation for many years. They can improve users' service quality and resource utilisation compared to homogeneous networks. Quality of service can be enhanced by placing small cells (Femtocells or Picocells) inside the coverage area of Microcells or Macrocells. Deploying indoor Femtocells for 5G inside the Macro cellular network can reduce network cost. Some service providers have launched solutions for indoor users, but many challenges remain. The 5G air-simulator was updated to deploy indoor Femtocells under the proposed assumptions with a uniform spatial distribution. For all possible combinations of apartment side length and transmitter power, the maximum number of supported users surpassed the numbers reported in the literature by more than a factor of two. For outdoor environments, this study also proposed small-cell optimisation by placing Picocells within a Macrocell to obtain low latency and high data rates with the statistical multiplexing gain of the associated users.

Results are presented for 5G NR functional splits six and seven, for three frequency bands (2.6 GHz, 3.5 GHz and 5.62 GHz). The analysis shows that, for shorter cell radii, the best choice is the 2.6 GHz band, which achieves a lower packet loss ratio (PLR) and supports a higher number of users, with better goodput and higher profit (for cell radii up to 400 m). In 4G with CA, the analysis of the economic trade-off with Picocells shows that the Enhanced Multi-band Scheduler (EMBS) provides higher revenue than the scenario without CA. The profit with CA is more than four times that of the scenario without CA, meaning that the slight increase in cost with CA returns more than a four-fold profit relative to the "without CA" scenario.
Advanced manufacturing applied to nuclear fusion – challenges and solutions
Materials needed to achieve the designed performance will require formulations and processing methods capable of delivering a compendium of metallic, ceramic and cermet chemistries, which must be finely tuned at source and tolerant to downstream thermomechanical adjustment. Structural steels and cermets are continuously being developed by researchers using computational thermodynamics modelling and modified thermomechanical treatments, with oxide dispersion strengthened (ODS) reduced-activation ferritic-martensitic (RAFM) steels based on 8–16 wt.% Cr now being assessed. The combination of SiCf and CuCrZr as a metal matrix composite containing an active coolant would be seen as a major opportunity; furthermore, composite ceramic materials consisting of SiC fibres reinforcing a SiC matrix, capable of being joined to metallic structures, offer great potential in the development of advanced heat exchangers. Continuing the theme of advanced manufacturing, the use of solid-state processing technologies involving powder metallurgy hot isostatic pressing and spark plasma sintering to produce near-net-shaped products in metallics, ceramics and cermets is a critical manufacturing research theme. Additive manufacturing (AM) to produce metallic and ceramic components is now becoming a feasible manufacturing route, and through the combination of AM and subtractive machining, the capability exists to produce efficient fluid-carrying structures that could not be manufactured by any other process. Extending this to electron beam welding and advanced heat treatments to improve homogeneity and provide modularity, a two-pronged solution is now available to improve capability and integrity, while concurrently offering increased degrees of freedom for designers.