4,235 research outputs found

    Utilising Located Functions to Model and Optimise Distributed Computations

    No full text
    With developments in Grid computing and Web-based data storage, the task of orchestrating computations is becoming ever more difficult. Identifying which of the available computation resources and datasets to use is not trivial: it requires reasoning about the problem itself and about the cost of moving data in order to complete the computation efficiently. This paper presents a conceptual notation and performance model that enable e-researchers to reason about their computations and make choices about the best use of resources.
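
    The trade-off the notation is meant to expose can be sketched as a small cost calculation. The snippet below is a minimal Python illustration of that idea, not code from the paper; the resource figures and function names are our own assumptions: for each candidate resource, total time is data-transfer time plus execution time, and the cheapest candidate wins.

        # Hypothetical cost model in the spirit of the paper's performance model:
        # moving data to a faster remote resource only pays off when the compute
        # saving outweighs the transfer time. All numbers are illustrative.

        def transfer_time(data_size_gb: float, bandwidth_gbps: float) -> float:
            """Seconds needed to move the input dataset to the compute resource."""
            if data_size_gb == 0:
                return 0.0
            return (data_size_gb * 8) / bandwidth_gbps

        def total_cost(data_size_gb, bandwidth_gbps, work_gflop, speed_gflops):
            """Transfer time plus execution time, in seconds."""
            return transfer_time(data_size_gb, bandwidth_gbps) + work_gflop / speed_gflops

        # Two candidates: one co-located with the data, one remote but ten times faster.
        candidates = {
            "local":  total_cost(0.0,  bandwidth_gbps=1.0, work_gflop=5000, speed_gflops=10),
            "remote": total_cost(50.0, bandwidth_gbps=1.0, work_gflop=5000, speed_gflops=100),
        }
        best = min(candidates, key=candidates.get)
        print(best, candidates)   # here the remote resource wins (450 s vs 500 s)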

    Synthesis of Topological Quantum Circuits

    Full text link
    Topological quantum computing has recently proven itself to be a very powerful model when considering large-scale, fully error-corrected quantum architectures. In addition to its robust nature under hardware errors, it is a software-driven method of error-corrected computation, with the hardware responsible only for creating a generic quantum resource (the topological lattice). Computation in this scheme is achieved by the geometric manipulation of holes (defects) within the lattice. Interactions between logical qubits (quantum gate operations) are implemented by using particular arrangements of the defects, such as braids and junctions. We demonstrate that junction-based topological quantum gates allow highly regular and structured implementation of large CNOT (controlled-not) gate networks, which ultimately form the basis of the error-corrected primitives that must be used for an error-corrected algorithm. We present a number of heuristics to optimise the area of the resulting structures and therefore the number of required hardware resources. (Comment: 7 pages, 10 figures, 1 table)
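
    One way to picture the kind of area heuristic described is as a placement problem: logical qubits that interact in many CNOTs should sit close together so their defect braids stay short. The toy Python sketch below is our own illustration of that idea and not the authors' algorithm; the gate list and cost function are made up.

        # Toy placement heuristic: order logical qubits on a line so that the total
        # separation between CNOT control/target pairs (a rough proxy for braid
        # length, and hence layout area) is minimised. Purely illustrative.
        from itertools import permutations

        cnots = [(0, 1), (0, 2), (1, 2), (2, 3), (0, 3)]   # hypothetical (control, target) pairs

        def layout_cost(order, gates):
            """Sum of control-target distances for a given left-to-right qubit order."""
            pos = {q: i for i, q in enumerate(order)}
            return sum(abs(pos[c] - pos[t]) for c, t in gates)

        qubits = sorted({q for gate in cnots for q in gate})
        best = min(permutations(qubits), key=lambda order: layout_cost(order, cnots))
        print("ordering:", best, "cost:", layout_cost(best, cnots))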

    BCAS: A Web-enabled and GIS-based Decision Support System for the Diagnosis and Treatment of Breast Cancer

    Get PDF
    For decades, geographical variations in cancer rates have been observed, but the precise determinants of such geographic differences in breast cancer development remain unclear. Various statistical models have been proposed. Applications of these models, however, require that the data be assembled from a variety of sources, converted into the statistical models’ parameters and delivered effectively to researchers and policy makers. A web-enabled and GIS-based system can be developed to provide the needed functionality. This article overviews the conceptual web-enabled and GIS-based system (BCAS), illustrates the system’s use in diagnosing and treating breast cancer, and examines the potential benefits and implications for breast cancer research and practice.
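
    As a concrete, if simplified, example of the statistics such a system assembles and maps, the sketch below computes a standardised incidence ratio (observed over expected cases) per region. It is our own illustration, not BCAS code, and the figures are invented.

        # Hypothetical per-region standardised incidence ratios (SIR = observed / expected).
        # A GIS layer could then shade each region by its SIR value.
        regions = {
            # region: (observed cases, expected cases from reference rates)
            "A": (120, 100.0),
            "B": (80, 95.0),
            "C": (45, 40.0),
        }

        for name, (observed, expected) in regions.items():
            print(f"region {name}: SIR = {observed / expected:.2f}")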

    A service oriented architecture for engineering design

    Get PDF
    Decision making in engineering design can be effectively addressed by using genetic algorithms to solve multi-objective problems. These multi-objective genetic algorithms (MOGAs) are well suited to implementation in a Service Oriented Architecture. Often the evaluation process of the MOGA is compute-intensive, due to the use of a complex computer model to represent the real-world system. The emerging paradigm of Grid Computing offers a potential solution to the compute-intensive nature of this objective function evaluation, by allowing access to large amounts of compute resources in a distributed manner. This paper presents a grid-enabled framework for multi-objective optimisation using genetic algorithms (MOGA-G) to aid decision making in engineering design.
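
    The central idea of such a framework can be sketched in a few lines: the MOGA farms its expensive objective evaluations out to remote services and only gathers the results. In the Python sketch below, which is our own illustration rather than the paper's code, a process pool stands in for the grid-hosted evaluation services, and the two-objective test function is invented.

        # Parallel objective evaluation for a toy two-objective problem. In the real
        # framework each evaluate() call would be an invocation of a grid service
        # wrapping a complex simulation model.
        import random
        from concurrent.futures import ProcessPoolExecutor

        def evaluate(design):
            """Stand-in for a compute-intensive simulation returning two objectives."""
            x, y = design
            return (x ** 2 + y ** 2, (x - 2) ** 2 + (y - 2) ** 2)

        def random_population(size):
            return [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(size)]

        if __name__ == "__main__":
            population = random_population(20)
            with ProcessPoolExecutor() as pool:          # evaluations run concurrently
                objectives = list(pool.map(evaluate, population))
            # A full MOGA would now apply non-dominated sorting, selection and variation.
            print(min(objectives))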

    Hierarchical and distributed control concept for distribution network congestion management

    Get PDF
    Congestion management is one of the core enablers of smart distribution systems, where distributed energy resources are utilised in network control to enable cost-effective network interconnection of distributed generation (DG) and better utilisation of network assets. The primary aim of congestion management is to prevent voltage violations and network overloading; congestion management algorithms can also be used to optimise the network state. This study proposes a hierarchical and distributed congestion management concept for future distribution networks with large-scale DG and other controllable resources in MV and LV networks. The control concept aims at operating the network at minimum cost while retaining an acceptable network state. The hierarchy consists of three levels: primary controllers operate on local measurements; secondary control optimises the set points of the primary controllers in real time; and tertiary control uses load and production forecasts as its inputs and realises the network reconfiguration algorithm and the connection to the market. Primary controllers are located at the connection points of the controllable resources, secondary controllers at primary and secondary substations, and tertiary control at the control centre. Hence, the control is spatially distributed and operates in different time frames. The research leading to these results has received funding from the European Union Seventh Framework Programme FP7-SMARTCITIES-2013 under grant agreement 608860 IDE4L – Ideal grid for all.
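
    The lowest level of such a hierarchy can be illustrated with a simple local controller. The Python sketch below is our own example, not the paper's design: a primary controller turns its local voltage measurement into a reactive-power order through a Q(U) droop curve, around a set point that secondary control could periodically re-optimise and push down.

        def primary_q_control(v_measured_pu, v_setpoint_pu=1.0,
                              droop_mvar_per_pu=5.0, q_min=-2.0, q_max=2.0):
            """Reactive-power order (MVAr) from a local voltage measurement (per unit).
            All parameter values are illustrative."""
            q_order = -droop_mvar_per_pu * (v_measured_pu - v_setpoint_pu)
            return max(q_min, min(q_max, q_order))       # respect converter limits

        # Overvoltage at the connection point -> the unit absorbs reactive power.
        print(primary_q_control(1.04))                   # -0.2 MVAr
        # Secondary control could later shift v_setpoint_pu for all units in a feeder.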

    SZTAKI desktop grid: a modular and scalable way of building large computing grids

    Get PDF
    So far, BOINC-based desktop grid systems have been applied at the global computing level. This paper describes an extended version of BOINC called SZTAKI Desktop Grid (SZDG) that aims at using desktop grids (DGs) at the local (enterprise/institution) level. The novelty of SZDG is that it enables the hierarchical organisation of local DGs, i.e., clients of a DG can themselves be DGs at a lower level that take work units from their higher-level DG server. Moreover, even clusters can be connected at the client level, and hence work units can contain complete MPI programs to be run on the client clusters. In order to easily create master/worker-type DG applications, a new API, called the DC-API, has been developed. SZDG and the DC-API have been successfully applied at both the global and local levels, both in academic institutions and in companies, to solve problems requiring large computing power.
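
    The programming model the DC-API supports is the classic master/worker pattern: the master cuts a job into work units, and workers return results. The Python sketch below illustrates that pattern only; the function names are ours and do not reproduce the real DC-API, and threads stand in for desktop-grid clients that might themselves be lower-level DGs or clusters.

        import queue
        import threading

        work_units = queue.Queue()
        results = queue.Queue()

        def worker():
            """Stand-in for a desktop-grid client processing work units."""
            while True:
                unit = work_units.get()
                if unit is None:                  # poison pill: no more work
                    break
                results.put((unit, unit ** 2))    # stand-in for the real computation

        # Master: submit work units, start workers, then collect results.
        for n in range(10):
            work_units.put(n)
        workers = [threading.Thread(target=worker) for _ in range(4)]
        for t in workers:
            t.start()
        for _ in workers:
            work_units.put(None)
        for t in workers:
            t.join()
        print(sorted(results.queue))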

    Performance Modelling and Optimisation of Multi-hop Networks

    Get PDF
    A major challenge in the design of large-scale networks is to predict and optimise the total time and energy consumption required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks, which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, that are susceptible to random failures, security threats and malicious behaviours which compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques on both QoS and resource utilisation. Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity in routing and in the loss rate that a packet experiences as it passes successive segments of a source-to-destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rates in areas remote from the source and destination, of a variable rate of advancement towards the destination over the route, as well as of defending against malicious packets within a certain distance of the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium and obtain the time-dependent properties of the packet’s travel process, allowing us to compare the merits and limitations of coding in terms of both delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
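
    The effect the non-homogeneous travel model captures can also be seen in a small simulation. The Python sketch below is our own Monte Carlo toy, not the thesis's analytical jump-diffusion solution: a packet crosses a fixed number of hops, the per-hop loss probability is highest mid-route, and a lost packet is retransmitted from the source, so the end-to-end delay grows nonlinearly with route length.

        import random

        def loss_prob(hop, hops):
            """Per-hop loss probability, peaking in the middle of the route (illustrative)."""
            mid = hops / 2
            return 0.02 + 0.10 * (1 - abs(hop - mid) / mid)

        def delivery_time(hops=20, hop_delay=1.0):
            """Time until one copy of the packet finally reaches the destination."""
            time = 0.0
            while True:
                for hop in range(hops):
                    time += hop_delay
                    if random.random() < loss_prob(hop, hops):
                        break                     # packet lost; the source retransmits
                else:
                    return time                   # traversed every hop: delivered

        # Average delivery time over 2,000 simulated packets.
        print(sum(delivery_time() for _ in range(2000)) / 2000)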