The Fifth NASA Symposium on VLSI Design
The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data system performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.
Complex scheduling models and analyses for property-based real-time embedded systems
Modern multi-core architectures and parallel applications
pose a significant challenge to worst-case-centric real-time system verification
and design efforts.
The involved model and parameter uncertainty challenge the fidelity of formal real-time analyses,
which are mostly based on exact model assumptions.
In this dissertation, various approaches that can accept parameter and model uncertainty
are presented.
In an attempt to improve predictability in worst-case-centric analyses, timing-predictable protocols
are explored for parallel task scheduling on multiprocessors and for network-on-chip arbitration.
A novel scheduling algorithm, called stationary rigid gang scheduling, for gang tasks on multiprocessors is proposed.
For fixed-priority wormhole-switched networks-on-chip, a more restrictive family of transmission protocols, called
simultaneous progression switching protocols, is proposed with predictability-enhancing properties.
Moreover, hierarchical scheduling for parallel DAG tasks under parameter
uncertainty is studied to achieve temporal and spatial isolation.
Fault tolerance is examined as a supplementary reliability aspect of real-time systems
in the presence of dynamic external causes of faults.
Using various job variants, which trade off increased execution-time demand against increased error protection,
a state-based policy selection strategy is proposed that provably assures an acceptable quality of service (QoS).
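A minimal sketch of this trade-off between execution-time demand and error protection (the variant names, numbers, and selection rule below are invented for illustration; the dissertation's actual policy is state-based and provides provable QoS guarantees):

```python
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    wcet: float        # worst-case execution-time demand
    error_rate: float  # residual probability that a fault corrupts the output

def select_variant(variants, slack):
    """Pick the most protective variant whose time demand fits the slack."""
    for v in sorted(variants, key=lambda v: v.error_rate):
        if v.wcet <= slack:
            return v
    # Nothing fits: fall back to the cheapest, least protected variant.
    return min(variants, key=lambda v: v.wcet)

# Hypothetical variants of one job:
variants = [
    Variant("unprotected", wcet=2.0, error_rate=0.10),
    Variant("checksummed", wcet=3.0, error_rate=0.01),
    Variant("re-executed", wcet=4.5, error_rate=0.001),
]

print(select_variant(variants, slack=3.5).name)  # -> checksummed
```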
Lastly, the temporal misalignment of sensor data in sensor fusion applications
in cyber-physical systems is examined. A modular analysis based on minimal properties is proposed to obtain an upper bound on the
maximal sensor data time-stamp difference.
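One simple way such an upper bound can follow from minimal properties (the periods, latency bounds, and the particular bound below are illustrative assumptions, not the thesis's analysis):

```python
def timestamp_diff_bound(periods, lmin, lmax):
    """Conservative bound on the maximal pairwise time-stamp difference.

    Assumes sensor i samples with period periods[i] and delivers with
    end-to-end latency in [lmin[i], lmax[i]], and that fusion always uses
    the most recently delivered sample of each sensor.
    """
    worst_lag = max(t + l for t, l in zip(periods, lmax))  # oldest sample possible
    best_lag = min(lmin)                                   # freshest sample possible
    return worst_lag - best_lag

# Hypothetical sensors (all times in ms): camera, lidar, IMU.
print(timestamp_diff_bound(periods=[33, 100, 10],
                           lmin=[1, 2, 0],
                           lmax=[5, 20, 1]))  # -> 120
```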
Contribution to the convergence of infrastructure between high-performance computing and large-scale data processing
The amount of data produced, whether in the scientific community or the commercial world, is constantly growing. The field of Big Data has emerged to handle large amounts of data on distributed computing infrastructures. High-Performance Computing (HPC) infrastructures are traditionally used for the execution of compute-intensive workloads. However, the HPC community also faces an increasing need to process large amounts of data derived from high-definition sensors and large physics apparatus. The convergence of the two fields, HPC and Big Data, is currently taking place. In fact, the HPC community already uses Big Data tools, which are not always integrated correctly, especially at the level of the file system and the Resource and Job Management System (RJMS). In order to understand how we can leverage HPC clusters for Big Data usage, and what the challenges are for HPC infrastructures, we have studied multiple aspects of the convergence. We initially provide a survey of software provisioning methods, with a focus on data-intensive applications. We contribute a new RJMS collaboration technique called BeBiDa, which is based on 50 lines of code whereas similar solutions use at least 1000 times more. We evaluate this mechanism in real conditions and in a simulated environment with our simulator Batsim. Furthermore, we provide extensions to Batsim to support I/O, and showcase the development of a generic file system model along with a Big Data application model. This allows us to complement the BeBiDa real-conditions experiments with simulations while enabling us to study file system dimensioning and trade-offs. All the experiments and analyses in this work have been done with reproducibility in mind.
Based on this experience, we propose to integrate the development workflow and data analysis into the reproducibility mindset, and give feedback on our experiences with a list of best practices.
Compilation Techniques for High-Performance Embedded Systems with Multiple Processors
Institute for Computing Systems Architecture
Despite the progress made in developing more advanced compilers for embedded systems,
programming of embedded high-performance computing systems based on Digital
Signal Processors (DSPs) is still a highly skilled manual task. This is true for
single-processor systems, and even more for embedded systems based on multiple
DSPs. Compilers often fail to optimise existing DSP codes written in C due to the
employed programming style. Parallelisation is hampered by the complex multiple address
space memory architecture, which can be found in most commercial multi-DSP
configurations.
This thesis develops an integrated optimisation and parallelisation strategy that can
deal with low-level C codes and produces optimised parallel code for a homogeneous
multi-DSP architecture with distributed physical memory and multiple logical address
spaces. In a first step, low-level programming idioms are identified and recovered. This
enables the application of high-level code and data transformations well-known in the
field of scientific computing. Iterative feedback-driven search for "good" transformation
sequences is being investigated. A novel approach to parallelisation based on a
unified data and loop transformation framework is presented and evaluated. Performance
optimisation is achieved through exploitation of data locality on the one hand,
and utilisation of DSP-specific architectural features such as Direct Memory Access
(DMA) transfers on the other hand.
The proposed methodology is evaluated against two benchmark suites (DSPstone
& UTDSP) and four different high-performance DSPs, one of which is part of a commercial
four processor multi-DSP board also used for evaluation. Experiments confirm
the effectiveness of the program recovery techniques as enablers of high-level transformations
and automatic parallelisation. Source-to-source transformations of DSP
codes yield an average speedup of 2.21 across four different DSP architectures. The
parallelisation scheme is, in conjunction with a set of locality optimisations, able to
produce linear and even super-linear speedups on a number of relevant DSP kernels
and applications.
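For context on how an "average speedup across architectures" is typically aggregated, a small sketch (the per-architecture times below are invented; the abstract does not state which mean was used, and the geometric mean shown here is merely the common choice for ratios):

```python
import math

def geometric_mean(xs):
    """Geometric mean, the usual aggregate for speedup ratios."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Invented (before, after) execution times in seconds on four architectures:
times = [(10.0, 5.0), (8.0, 4.0), (9.0, 3.0), (6.0, 3.0)]
speedups = [before / after for before, after in times]

print(round(geometric_mean(speedups), 2))  # -> 2.21
```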
Containerization in Cloud Computing: performance analysis of virtualization architectures
The growing adoption of the cloud is strongly influenced by the emergence of technologies that aim to improve the development and deployment processes of enterprise-grade applications. The goal of this thesis is to analyse one of these solutions, called "containerization", and to evaluate in detail how this technology can be adopted in cloud infrastructures as an alternative to complementary solutions such as virtual machines. Until now, the traditional "virtual machine" model has been the predominant solution in the market. The important architectural difference that containers offer has led to the rapid adoption of this technology, since it greatly improves resource management and sharing and guarantees significant improvements in the provisioning of individual instances.
In this thesis, containerization is examined from both the infrastructural and the application point of view. Regarding the first aspect, performance is analysed by comparing LXD, Docker and KVM as hypervisors for the OpenStack cloud infrastructure, while the second point concerns the development of enterprise-grade applications that must be deployed on a set of distributed servers. In that case, we need high-level services such as orchestration. Therefore, the performance of the following solutions is compared: Kubernetes, Docker Swarm, Apache Mesos and Cattle.
The Self-Organisation of Biological Soft Matter Systems at Different Length-scales
Spontaneous self-organisation occurs in physical, chemical and biological systems throughout the natural world when the components of an initially unstructured system arrange to form ordered structures. Research into the mechanisms underlying these systems has led to exciting developments in materials chemistry, where a bottom-up approach based on directed self-organisation has the potential to yield novel materials with a wide range of technological and scientific applications. Owing to their high specificity and potency, biopharmaceutical therapeutics are often favoured over small-molecule drugs. However, protein-based biopharmaceuticals are prone to degradation as a result of physical and chemical instability, a process leading to devastating financial and safety outcomes. Accordingly, understanding and quantifying the adverse effects of protein degradation is imperative. One such form of degradation is protein self-organisation in the form of aggregation. For certain solution conditions, aggregated unfolded protein leads to the formation of gels. Hydrogels are a class of gel formed from hydrophilic polymer chains capable of holding large amounts of water in their three-dimensional network, and have numerous medical and pharmaceutical uses. Self-organisation drives gel formation. Therefore, understanding the principles of self-organisation is a prerequisite for the development of novel hydrogels with increased functionality. At longer length-scales, cells self-associate to form tissues. Spheroids are self-organised entities composed of a single cell type. They are the archetypal model for tumours and are an ideal system to study the biophysical phenomena associated with self-organisation. Unlike tissues, when a single cell type is used to form the spheroid, compositionally identical replicates can easily be grown.
Furthermore, unlike with explants, other factors, including age and the biochemical environment, which have been shown to alter the mechanical characteristics of cells and tissues, can be rigorously controlled. Here, the experimental techniques of the wider soft matter field are used to investigate the biophysical properties of systems that span the biologically relevant spectrum of length-scales in which soft matter contributions are important.
Differential scanning calorimetry analysis was used to quantify the reversibility of unfolding following thermal denaturation of lysozyme. Solution conditions (pH, ionic
strength and the presence/absence of disaccharides) were varied to systematically alter the temperature at which the protein unfolds, Tm. The enthalpies of unfolding during successive heating and cooling cycles were compared to quantify the degree of reversible unfolding that occurs following thermal denaturation. The sugars were used to evaluate whether a disaccharide induced increase in Tm affects the reversibility of thermally induced denaturation. It was shown that there was considerable overlap between the Tm values where reversible and non-reversible thermal denaturation occurred. Indeed, at the highest and lowest Tm no refolding was observed whereas at intermediate values refolding occurred. Furthermore, similar Tm values had different proportions of refolded protein. Using this novel analysis, it was possible to quantify the degree to which protein is lost to irreversible aggregation and show that an increase in the melt transition temperature does not necessarily confer an increase in reversibility. This type of analysis may be a useful tool for the biopharmaceutical and food industries to assess the stability of protein solutions.
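The enthalpy comparison described above reduces, in its simplest form, to a ratio of scan enthalpies; a minimal sketch with invented values (the thesis's analysis is more involved):

```python
def reversibility(dh_first_scan, dh_second_scan):
    """Fraction of the first-scan unfolding enthalpy recovered on rescan.

    A second-scan enthalpy close to the first indicates largely reversible
    unfolding; a much smaller value indicates loss to irreversible aggregation.
    """
    return dh_second_scan / dh_first_scan

# Invented enthalpies of unfolding (kJ/mol) for two successive heating scans:
frac = reversibility(dh_first_scan=450.0, dh_second_scan=315.0)
print(frac)  # -> 0.7 (70 % of the protein refolded)
```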
Bigels are an emerging class of tuneable soft materials composed of two discrete but interpenetrating networks, both of which contribute to the physical and mechanical properties of the material. A bigel network was formed from two proteins, BSA and gelatin. Thorough control of the solution conditions and kinetics ensured that the inter-species attraction between the two protein systems was weak compared to the intra-protein attraction, leading to bigel formation. The protein bigel was shown to have an elastic modulus four times greater than the combined elastic moduli of the parent gels. Furthermore, the elastic response was maintained over several deformation cycles, and the gel is both thermo- and chemo-responsive. These gels have the potential to be used in drug delivery, for biomedical applications such as wound healing, or as a biomimetic in tissue culture.
Cavitation rheology was used to show that for spheroids formed from HEK293 cells the interfacial tension was dominated by cortical tension at length-scales < 30 µm. It was found that the elastic modulus could be related quantitatively to the disruption of cell-cell adhesion molecules, which facilitates the formation of the cavity. A cascade of cadherin-cadherin dissociation events, totalling a disrupted surface area equivalent to 3, 8 and 117 cells, was calculated for 5, 10 and 30 µm needles, respectively. Furthermore, the process involved was shown to be largely elastic, and a mechanism
involving a rapid cycle of "unzipping" and "re-zipping" the cadherin bonds was proposed to account for this elasticity. Since changes in cortical tension and cell-cell adhesion are associated with the transition from healthy to malignant cells, cavitation rheology may prove a useful addition to the oncologists' toolbox.
The emergence and early growth of new niche organisational forms in a highly regulated context: the role of regulatory architecture and analogy in the case of loan-based crowdfunding in the UK
What we are now observing across a variety of economic sectors is the emergence and development of new organisational forms based on technological infrastructures, algorithms and platform-based systems. While these organisational actors offer new services and enable new practices through technological developments, these processes do not happen in isolation from the wider institutional context (e.g. regulation), and the underlying characteristics of these innovations are not entirely new. These observations are particularly relevant to current developments in the highly regulated financial services context, where financial technology organisations (known as FinTech) have started to co-exist with traditional, incumbent financial institutions. The general purpose of the thesis, drawing on theories of organisations and institutions and on the sociology of law and legal studies, is to uncover the elements and dynamics of the emergence process and early growth stages within the wider institutional environment.
The thesis is split into two main parts to research the empirical context of the UK crowdfunding market, with a specific focus on the first generation of the regulated form of loan-based crowdfunding from 2004 to 2018. This organisational form became the first set of small, niche, innovative actors based on platform architecture that brought together different elements from financial and non-financial domains, creating a form at the intersection of financial, technological and social areas that enabled a new form of lending. The first part of the thesis therefore extends the analysis to the era before the emergence in the UK, from the 1980s onwards, through rich historical materials on the regulatory transformations of financial services, to position the phenomenon of emergence within time. The second part of the thesis zooms into the emergence and early growth of loan-based crowdfunding. The main data sources for this longitudinal study are archival materials, interviews, reports, observations of events and online sources. One of my findings is the role of analogies as a cognitive device that brings disparate elements into a recognisable form, maintaining a balance between novelty and tradition, and between legality and illegality. My next contribution is the development of a concept of collective analogising, which enabled the process of emergence and by which different actors played certain roles in resolving the indeterminacy of the product side and legal ambiguity. My aim is that the findings and models from this study of financial services clarify similar processes in many other important societal sectors.