17 research outputs found

    Distributed memory compiler design for sparse problems

    A compiler and runtime support mechanism is described and demonstrated. The methods presented can solve a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and outputs a message-passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
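Runtime primitives for irregular distributed array accesses are commonly organized as an inspector/executor pair. The Python sketch below is a hypothetical rendering of that pattern, not the actual FORTRAN/message-passing library interface described above; the function names and data layout are assumptions.

```python
# Hypothetical sketch of the inspector/executor pattern behind such
# runtime primitives; names and data layout are illustrative only.

def inspector(global_indices, owner_of, my_rank):
    """Inspect an irregular access pattern and build a communication
    schedule: for each remote owner, the indices to fetch from it."""
    schedule = {}
    for i in global_indices:
        owner = owner_of(i)
        if owner != my_rank:
            schedule.setdefault(owner, []).append(i)
    return schedule

def executor(schedule, fetch, local_copy):
    """Carry out the gathers described by the schedule, caching the
    fetched off-processor values in a local buffer for reuse."""
    for owner, idxs in schedule.items():
        for i, value in zip(idxs, fetch(owner, idxs)):
            local_copy[i] = value
    return local_copy
```

Building the schedule once and reusing it across iterations is what makes this approach efficient for loops whose access pattern is irregular but repeated.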

    Performance Optimization and Dynamics Control for Large-scale Data Transfer in Wide-area Networks

    Transport control plays an important role in the performance of large-scale scientific and media streaming applications involving the transfer of large data sets, media streaming, online computational steering, interactive visualization and remote instrument control. In general, these applications have two distinctive classes of transport requirements: large-scale scientific applications require high bandwidths to move bulk data across wide-area networks, while media streaming applications require stable bandwidths to ensure smooth media playback. Unfortunately, the widely deployed Transmission Control Protocol is inadequate for such tasks due to its performance limitations. The purpose of this dissertation is to conduct a rigorous analytical study of the design and performance of transport solutions, and to develop an integrated transport solution in a systematic way to overcome the limitations of current transport methods. One of the primary challenges is to explore and compose a set of feasible route options under multiple constraints. Another challenge arises from the randomness inherent in wide-area networks, particularly the Internet. This randomness must be explicitly accounted for to achieve both goodput maximization and stabilization over the constructed routes by suitably adjusting the source rate in response to both network and host dynamics. The superior and robust performance of the proposed transport solution is extensively evaluated in a simulated environment and further verified through real-life implementations and deployments over both Internet and dedicated connections under disparate network conditions, in comparison with existing transport methods.
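Adjusting a source rate from noisy goodput measurements is commonly done with stochastic-approximation loops. The sketch below is an illustrative Python stand-in for that idea, not the dissertation's actual rate-control algorithm; `measure_goodput`, the probing offsets and the gain schedule are all assumptions.

```python
def stabilize_rate(measure_goodput, r0, steps=50, gain=1.0, probe=0.5):
    """Illustrative stochastic-approximation loop: nudge the source
    rate toward the goodput-maximizing rate using finite-difference
    probes, with diminishing step sizes so random network fluctuations
    average out and the rate stabilizes."""
    r = r0
    for n in range(1, steps + 1):
        g_hi = measure_goodput(r + probe)   # probe slightly above r
        g_lo = measure_goodput(r - probe)   # probe slightly below r
        r += (gain / n) * (g_hi - g_lo)     # move along estimated gradient
    return r
```

With a goodput curve that peaks at some rate, the diminishing 1/n gains give both maximization (the loop climbs toward the peak) and stabilization (late-stage adjustments shrink toward zero).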

    A multi-modal corpus approach to the analysis of backchanneling behaviour

    Current methodologies in corpus linguistics have revolutionised the way we look at language. They allow us to make objective observations about written and spoken language in use. However, most corpora are limited in scope because they are unable to capture language and communication beyond the word. This is problematic given that interaction is in fact multi-modal: meaning is constructed through the interplay of text, gesture and prosody, a combination of verbal and non-verbal characteristics. This thesis outlines, then utilises, a multi-modal approach to corpus linguistics, and examines how it can be used to facilitate our explorations of backchanneling phenomena in conversation, such as gestural and verbal signals of active listenership. Although backchannels are highly conventionalised, they differ considerably in form, function, interlocutor and location (in context and co-text), so their relevance at any given time in a given conversation is highly conditional. The thesis provides an in-depth investigation of the use of, and the relationship between, spoken and non-verbal forms of this behaviour, focusing on a particular sub-set of gestural forms: head nods. This investigation is undertaken by analysing the patterned use of specific forms and functions of backchannels within and across sentence boundaries, as evidenced in a five-hour sub-corpus of dyadic multi-modal conversational episodes taken from the Nottingham Multi-Modal Corpus (NMMC). The results from this investigation reveal 22 key findings regarding the collaborative and cooperative nature of backchannels, which both support and extend what is already known about such behaviours. Using these findings, the thesis presents an adapted pragmatic-functional linguistic coding matrix for the classification and examination of backchanneling phenomena. This matrix fuses the different, dynamic properties of spoken and non-verbal forms of this behaviour into a single, integrated conceptual model, providing the foundations, a theoretical point of entry, for future research of this nature.

    Computation of dynamic slices of aspect oriented programs

    This thesis presents our work on the computation of dynamic slices of aspect-oriented programs. Program slicing is a decomposition technique which extracts program elements related to a particular computation from a program. A program slice consists of those parts of a program that may directly or indirectly affect the values computed at some program point of interest, referred to as a slicing criterion. A program slice can be static or dynamic. A static slice contains all the statements that may affect the slicing criterion for every possible input to the program. A dynamic slice contains only those statements that actually affect the slicing criterion for a particular input to the program. Aspect-Oriented Programming (AOP) is a programming technique proposed for cleanly modularizing the cross-cutting structure of concerns. An aspect is an area of concern that cuts across the structure of a program. The main idea behind AOP is to allow a program to be constructed by describing each concern separately. AspectJ is an aspect-oriented extension to the Java programming language. AspectJ adds new concepts and associated constructs, called join points, pointcuts, advice, introductions and aspects, to Java. We first store the statements executed for a particular execution in an execution trace file. Next, we develop a dependence-based representation called the Dynamic Aspect-Oriented Dependence Graph (DADG) as the intermediate program representation. The DADG is an arc-classified digraph which represents the various dynamic dependences between the statements of an aspect-oriented program for a particular execution. Then, we present an efficient dynamic slicing technique for aspect-oriented programs using the DADG. Taking any vertex as the starting point, our algorithm performs a breadth-first or depth-first traversal of the DADG. The traversed vertices are then mapped to the original program to compute the dynamic slice. We have shown that our proposed algorithm efficiently computes dynamic slices: its space complexity is O(S) and its run-time complexity is O(S^2). We have also shown that our dynamic slicing algorithm computes correct dynamic slices.
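The traversal step described above can be sketched in a few lines. This Python sketch assumes the DADG is given as an adjacency map from each vertex to the vertices it dynamically depends on; the final mapping of traversed vertices back to source statements is omitted.

```python
from collections import deque

def dynamic_slice(dadg, criterion):
    """Breadth-first traversal of a DADG adjacency map
    (vertex -> vertices it dynamically depends on). Returns every
    vertex reachable from the slicing-criterion vertex."""
    in_slice = {criterion}
    worklist = deque([criterion])
    while worklist:
        v = worklist.popleft()
        for dep in dadg.get(v, ()):
            if dep not in in_slice:
                in_slice.add(dep)
                worklist.append(dep)
    return in_slice
```

Each vertex enters the worklist at most once, which is consistent with the O(S) space bound stated in the abstract.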

    AC/DC Smart Control and Power Sharing of DC Distribution Systems

    The purpose of this research is to develop a grid-connected DC distribution system that ensures efficient integration of different alternate sources into the power system. An investigation of different AC and DC converter topologies and their control is conducted. A new converter topology for sharing DC power was developed to enhance the efficiency and stability of the alternate sources connected to the DC distribution system. The mathematical model and control-system design of the developed converters are included in the thesis. A novel smart-PID controller for optimal control of a DC-DC converter was used as the voltage controller in PV systems. This controller maximizes the stable operating range by using a genetic algorithm (GA) to tune the PID parameters at various loading conditions. A fuzzy logic approach was then used to add a factor of intelligence to the controller so that it can move among different values of proportional, derivative and integral gain based on the system conditions. This controller allows optimal control of a boost converter at any loading condition with no need to retune the parameters and no possibility of failure. Moreover, a novel technique was developed to move between the PI and PID configurations of the controller such that minimum overshoot and ripple are achieved. This increases the controller's applicability for utilizing PV systems to supply sensitive loads. An effective algorithm for optimizing distribution system operation in a smart grid, from the points of view of cost and system stability, was developed. This algorithm mainly aims to control the power available from different sources so that they satisfy the load demand at the least possible cost while giving the highest priority to renewable energy sources. Moreover, a smart battery charger was designed to control the batteries and allow them to discharge only when a small load is predicted. While available, they act as a buffer for predicted large loads, increasing the stability of the system and reducing voltage dips.
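The idea of switching between PI and PID configurations depending on operating conditions can be sketched as a gain-scheduled discrete PID step. The gains, threshold and time step below are illustrative placeholders, not the GA-tuned values or the fuzzy rules from the thesis.

```python
def pid_step(state, error, gains, dt=0.01):
    """One discrete PID update. `state` carries the integral term and
    the previous error between calls."""
    kp, ki, kd = gains
    state["i"] += error * dt                 # accumulate integral term
    d = (error - state["e_prev"]) / dt       # finite-difference derivative
    state["e_prev"] = error
    return kp * error + ki * state["i"] + kd * d

def schedule_gains(error, threshold=1.0):
    """Toy stand-in for the fuzzy scheduler: a full PID gain set far
    from the set point for fast transients, and a PI set (kd = 0)
    near it to suppress ripple. Gains are illustrative placeholders."""
    return (2.0, 0.5, 0.1) if abs(error) > threshold else (1.0, 0.5, 0.0)
```

Dropping the derivative term near the set point avoids amplifying measurement ripple, which is the motivation the abstract gives for moving between the two configurations.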

    Sensor and model integration for the rapid prediction of concurrent flow flame spread

    Fire Safety Engineering is required at every stage in the life cycle of modern-day buildings. Fire safety design, detection and suppression, and emergency response are all vital components of structural fire safety but are usually perceived as independent issues. Sensor deployment and exploitation is now commonplace in modern buildings for purposes such as temperature, air quality and security management. Despite the potential wealth of information these sensors could afford fire fighters, the design of sensor networks within buildings is entirely detached from the procedures associated with emergency management. The experiences of Dalmarnock Fire Test Two showed that streams of raw data emerging from sensors lead to a rapid information overload and do little to improve the understanding of the complex phenomenon and likely future events during a real fire. Although current sensor technology in other fields is far more advanced than that of fire, there is no justification for more complex and expensive sensors in this context. In isolation, therefore, sensors are not sufficient to aid emergency response. Fire modelling follows a similar path. Two studies of Dalmarnock Fire Test One demonstrate clearly the current state of the art of fire modelling. A priori studies by Rein et al. 2009 showed that blind prediction of the evolution of a compartment fire is currently beyond the state of the art of fire modelling practice. A posteriori studies by Jahn et al. 2007 demonstrated that even with the provision of large quantities of sensor data, video footage and prior knowledge of the fire, producing a CFD reconstruction was an incredibly difficult, laborious, intuitive and repetitive task. Fire fighting is therefore left as an isolated activity that does not benefit from sensor data or the potential of modelling the event. In isolation, sensors and fire modelling are found lacking; together, they appear to form the perfect complement. Sensors provide a plethora of information which lacks interpretation. Models provide a method of interpretation but lack the necessary information to make their output robust. Thus a mechanism is proposed to achieve accurate, timely predictions by means of theoretical models steered by continuous calibration against sensor measurements. Issues of accuracy aside, these models demand heavy resources and computational time periods that are far greater than the time associated with the processes being simulated. To be of use to emergency responders, the output would need to be produced faster than the event itself, with enough lead time to enable planning of an intervention strategy. Therefore, in isolation, model output is not robust or fast enough to be implemented in an emergency response scenario. The concept of super-real-time predictions steered by measurements is studied in the simple yet meaningful scenario of concurrent flow flame spread. Experiments have been conducted with PMMA slabs to feed sensor data into a simple analytical model. Numerous sensing techniques have been adapted to feed a simple algebraic expression from the literature linking flame spread, flame characteristics and pyrolysis evolution in order to model upward flame spread. The measurements are continuously fed to the computations so that projections of the flame spread velocity and flame characteristics can be established at each instant in time, ahead of the real flame. It was observed that as the input parameters in the analytical models were optimised to the scenario, rapid convergence between the evolving experiment and the predictions was attained.
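The steer-by-measurement loop can be illustrated with a toy assimilation step: repeatedly fit the free parameters of a simple spread model to the sensor history so far, then project the front position ahead of the real flame. The exponential model form and the log-space least-squares fit below are assumptions for illustration, not the algebraic expression used in the thesis.

```python
import math

def calibrate_spread_rate(times, fronts):
    """Least-squares fit (in log space) of x0 and a in the assumed
    spread model x(t) = x0 * exp(a * t), using the pyrolysis-front
    positions measured so far."""
    n = len(times)
    logs = [math.log(x) for x in fronts]
    t_bar = sum(times) / n
    l_bar = sum(logs) / n
    num = sum((t - t_bar) * (l - l_bar) for t, l in zip(times, logs))
    den = sum((t - t_bar) ** 2 for t in times)
    a = num / den                       # growth rate fitted to the data
    x0 = math.exp(l_bar - a * t_bar)    # intercept consistent with the fit
    return x0, a

def predict_front(x0, a, t_future):
    """Project the front position ahead of the real flame."""
    return x0 * math.exp(a * t_future)
```

Re-running the calibration every time a new measurement arrives is what drives the convergence between the evolving experiment and the predictions.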

    A distributed information sharing collaborative system (DISCS)


    Worst-case execution time analysis of real-time tasks executed on a multi-core architecture

    Software failures in hard real-time systems may have hazardous effects (industrial disasters, endangering of human lives). The verification of timing constraints in a hard real-time system depends on knowledge of the worst-case execution times (WCETs) of the tasks that constitute the embedded application. Using multicore processors is a means of improving the performance of embedded systems. However, determining worst-case execution time estimates on these architectures is made difficult by the sharing of some resources among cores, especially the interconnection bus that enables access to the shared memory. This document proposes a new two-level bus arbitration scheme that makes it possible to improve the performance of the executed task sets while complying with timing constraints. The methods described assign an optimal bus-access priority level to each task. They also make it possible to find an optimal allocation of tasks to cores when there are more tasks to execute than available cores. Experimental results show a significant reduction in worst-case execution time estimates and processor utilization.
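The task-to-core allocation problem mentioned above can be illustrated with a simple heuristic: assign tasks, longest WCET first, to the currently least-loaded core. This greedy sketch is only a stand-in for the optimal allocation method described in the document, but it shows why balancing per-core load reduces the worst-case schedule length.

```python
def allocate_tasks(wcets, n_cores):
    """Greedy longest-processing-time heuristic: place each task, in
    decreasing WCET order, on the currently least-loaded core."""
    loads = [0.0] * n_cores
    placement = {}
    for task, wcet in sorted(wcets.items(), key=lambda kv: -kv[1]):
        core = loads.index(min(loads))  # pick the least-loaded core
        placement[task] = core
        loads[core] += wcet
    return placement, loads
```

An exact method would additionally model bus-access priorities and their effect on each task's WCET, which is where the two-level arbitration scheme comes in.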