
    A Simplified Method for Optimising Sequentially Processed Access Control Lists

    Among the various options for implementing Internet packet filters in the form of Access Control Lists (ACLs) is the intuitive – but potentially crude – method of processing the ACL rules in sequential order. Although such an approach leads to variable processing times for each packet matched against the ACL, it also offers the opportunity to reduce this time by reordering the rules in response to changing traffic characteristics. A number of heuristics exist for optimising rule order in sequentially processed ACLs, and the most efficient of these can be shown to have a beneficial effect in a majority of cases for ACLs with relatively small numbers of rules. This paper presents an enhancement to this algorithm that reduces part of its complexity. Although the simplification sacrifices some instantaneous accuracy, experimentation shows the long-term trade-off between processing speed and performance to be positive. This improvement, though small, is consistent and worthwhile, and can be observed in the majority of cases.
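    To make the reordering idea concrete, below is a minimal sketch of a hit-count-driven adjacent-swap heuristic of the kind such optimisers use. The rule representation and the overlap test are simplified assumptions for illustration, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action: str          # "permit" or "deny"
    matches: frozenset   # illustrative: the set of packet keys this rule matches
    hits: int = 0        # observed match count for recent traffic

def overlaps(a: Rule, b: Rule) -> bool:
    # Order matters only if some packet matches both rules and the two
    # rules would treat it differently.
    return a.action != b.action and bool(a.matches & b.matches)

def reorder(rules: list) -> list:
    """Bubble hot rules forward wherever the swap is semantics-preserving,
    so frequently matched rules are tested earlier."""
    changed = True
    while changed:
        changed = False
        for i in range(len(rules) - 1):
            a, b = rules[i], rules[i + 1]
            if b.hits > a.hits and not overlaps(a, b):
                rules[i], rules[i + 1] = b, a
                changed = True
    return rules

acl = [Rule("permit", frozenset({"tcp/80"}), hits=3),
       Rule("deny", frozenset({"udp/53"}), hits=90)]
acl = reorder(acl)   # the hot DNS rule is now tested first
```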

    AT-GIS: highly parallel spatial query processing with associative transducers

    Users in many domains, including urban planning, transportation, and environmental science, want to execute analytical queries over continuously updated spatial datasets. Current solutions for large-scale spatial query processing either rely on extensions to RDBMS, which entail expensive loading and indexing phases when the data changes, or on distributed map/reduce frameworks running on resource-hungry compute clusters. Both solutions struggle with the sequential bottleneck of parsing complex, hierarchical spatial data formats, which frequently dominates query execution time. Our goal is to fully exploit the parallelism offered by modern multicore CPUs for parsing and query execution, thus providing the performance of a cluster with the resources of a single machine. We describe AT-GIS, a highly parallel spatial query processing system that scales linearly to a large number of CPU cores. AT-GIS integrates the parsing and querying of spatial data using a new computational abstraction called associative transducers (ATs). ATs can form a single data-parallel pipeline for computation without requiring the spatial input data to be split into logically independent blocks. Using ATs, AT-GIS can execute spatial query operators in parallel on the raw input data, in multiple formats, without any pre-processing. On a single 64-core machine, AT-GIS provides 3× the performance of an 8-node Hadoop cluster with 192 cores for containment queries, and 10× for aggregation queries.
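    The key property claimed for ATs – that arbitrarily split chunks of raw input can be parsed independently and their partial results combined – can be illustrated with a toy associative merge over newline-delimited records. Everything below (the record format, function names) is a deliberate simplification; the real system handles hierarchical spatial formats.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def scan_chunk(chunk: str):
    """Parse one arbitrary slice of the input. Returns ("frag", text) when
    the slice contains no record boundary, else ("parsed", head, records,
    tail): the partial record cut off at the front, the complete records,
    and the partial record cut off at the end."""
    parts = chunk.split("\n")
    if len(parts) == 1:
        return ("frag", chunk)
    return ("parsed", parts[0], parts[1:-1], parts[-1])

def merge(a, b):
    """Combine two partial parses. Because merge is associative, chunks may
    be scanned in parallel and folded in any grouping."""
    if a[0] == "frag" and b[0] == "frag":
        return ("frag", a[1] + b[1])
    if a[0] == "frag":
        _, h, r, t = b
        return ("parsed", a[1] + h, r, t)
    if b[0] == "frag":
        _, h, r, t = a
        return ("parsed", h, r, t + b[1])
    _, h1, r1, t1 = a
    _, h2, r2, t2 = b
    return ("parsed", h1, r1 + [t1 + h2] + r2, t2)

text = "p1\np2\np3\np4\n"
chunks = [text[i:i + 5] for i in range(0, len(text), 5)]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(scan_chunk, chunks))
print(reduce(merge, partials))
# ('parsed', 'p1', ['p2', 'p3', 'p4'], '') -- head/tail fragments are
# resolved once the true start and end of the input are known.
```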

    A collaborative platform for integrating and optimising Computational Fluid Dynamics analysis requests

    A Virtual Integration Platform (VIP) is described which provides support for the integration of Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) analysis tools into an environment that supports the use of these tools in a distributed, collaborative manner. The VIP evolved through previous EU research conducted within the VRShips-ROPAX 2000 (VRShips) project; the current version discussed here was developed predominantly within the VIRTUE project, but also within the SAFEDOR project. The VIP is described with respect to the support it provides to designers and analysts in coordinating and optimising CFD analysis requests. Two case studies illustrate the application of the VIP within HSVA: the use of a panel code to evaluate geometry variations in order to improve propeller efficiency, and the use of a dedicated maritime RANS code (FreSCo) to improve the wake distribution of the VIRTUE tanker. A discussion details the background, application, and results of using the VIP in these two case studies, the benefits the platform provided during development, and how it can benefit HSVA in the future.

    Improving the Performance of IP Filtering using a Hybrid Approach to ACLs

    With policy-based security implemented in Access Control Lists (ACLs) at the distribution layer, and with the increased speed of interfaces, the delays that routers introduce into networks are becoming significant. This paper investigates the size of the problem encountered in a typical network installation. Additionally, since specialized hardware is not always available, a hybrid approach to optimizing the order of rules in an ACL is put forward. This approach is based on the off-line pre-processing of lists to enable them to be reordered dynamically according to the type of traffic being processed by the router.
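    A hedged sketch of the offline/online split described here: an offline stage precomputes which rule pairs may legally exchange places, so the online stage can reorder by traffic counters using only cheap table lookups. The data layout and the pairwise test are assumptions for illustration.

```python
def build_swap_table(rules, may_swap):
    """Offline stage: evaluate the (potentially expensive) pairwise test
    may_swap(r_i, r_j) once for every pair, off the critical path."""
    n = len(rules)
    return [[may_swap(rules[i], rules[j]) for j in range(n)] for i in range(n)]

def online_reorder(order, hits, swap_ok):
    """Online stage: one cheap bubble pass over the current rule order
    (a list of rule indices), using only counter comparisons and
    precomputed lookups."""
    for i in range(len(order) - 1):
        a, b = order[i], order[i + 1]
        if hits[b] > hits[a] and swap_ok[a][b]:
            order[i], order[i + 1] = b, a
    return order
```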

    Multi-objective decision-making for dietary assessment and advice

    Unhealthy diets contribute substantially to the worldwide burden of non-communicable diseases, such as cardiovascular diseases, cancers, and diabetes. Globally, non-communicable diseases are the leading cause of death, and numbers are still rising, which makes healthy diets a global priority. In Nutrition Research, two fields are particularly relevant for formulating healthier diets: dietary assessment, which assesses food and nutrient intake in order to investigate the relation between diet and disease, and dietary advice, which translates food and nutrient recommendations into realistic food choices. Both fields face complex decision problems: which foods to include in dietary assessment or advice in order to pursue the multiple objectives of the researcher or fulfil the requirements of the consumer. This thesis connects the disciplines of Nutrition Research and Operations Research in order to contribute to formulating healthier diets. In the context of dietary assessment, the thesis proposes a mixed-integer linear programming (MILP) model for the selection of food items for food frequency questionnaires (a crucial tool in dietary assessment) that speeds up the selection process and increases standardisation, transparency, and reproducibility. An extension of this model gives rise to a 0-1 fractional programming problem with more than 200 fractional terms, of which, in every feasible solution, only a subset is actually defined. The thesis shows how this problem can be reformulated to eliminate the undefined fractional terms; the resulting MILP model can be solved with standard software. In the context of dietary advice, the thesis proposes a diet model in which food and nutrient requirements are formulated via fuzzy sets. With this model, the impact of various achievement functions is demonstrated. The preference structures modelled via these achievement functions represent various ways in which multiple nutritional characteristics of a diet can be aggregated into an overall indicator of diet quality. Furthermore, for Operations Research, the thesis provides new insights into a novel preference structure from the literature that combines equity and utilitarianism in a single model. Finally, the thesis presents conclusions and a general discussion covering, amongst other topics, the main modelling choices encountered when using multi-objective decision-making (MODM) methods for optimising diet quality. Summarising, this thesis explores the use of MODM approaches to improve decision-making for dietary assessment and advice. It provides opportunities for better decision-making in research on dietary assessment and advice, and it contributes to model building and solving in Operations Research. Considering the added value for Nutrition Research and the new models and solutions generated, we conclude that the combination of both fields has resulted in synergy between Nutrition Research and Operations Research.
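    As an illustration of the fuzzy-set diet model, the sketch below scores a diet against fuzzy nutrient targets and aggregates the memberships with two contrasting achievement functions. All numbers and ranges are invented for the example; they are not recommendations from the thesis.

```python
def membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside (a, d), 1 on [b, c],
    linear in between. Encodes 'an intake of roughly b..c is fully adequate'."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

intake = {"protein_g": 55.0, "fibre_g": 22.0, "sodium_mg": 2600.0}
fuzzy_target = {  # invented ranges for illustration only
    "protein_g": (45, 55, 90, 110),
    "fibre_g": (20, 30, 45, 55),
    "sodium_mg": (500, 1000, 2000, 2400),
}
mu = {k: membership(intake[k], *fuzzy_target[k]) for k in intake}

maximin = min(mu.values())              # worst nutrient dominates: 0.0 here
additive = sum(mu.values()) / len(mu)   # average adequacy: 0.4 here
# The two achievement functions disagree: the average hides the sodium
# violation that the maximin aggregation flags.
```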

    The LOFAR Transients Pipeline

    Current and future astronomical survey facilities provide a remarkably rich opportunity for transient astronomy, combining unprecedented fields of view with high sensitivity and the ability to access previously unexplored wavelength regimes. This is particularly true of LOFAR, a recently commissioned, low-frequency radio interferometer, based in the Netherlands and with stations across Europe. The identification of and response to transients is one of LOFAR's key science goals. However, the large data volumes which LOFAR produces, combined with the scientific requirement for rapid response, make automation essential. To support this, we have developed the LOFAR Transients Pipeline, or TraP. The TraP ingests multi-frequency image data from LOFAR or other instruments and searches it for transients and variables, providing automatic alerts of significant detections and populating a lightcurve database for further analysis by astronomers. Here, we discuss the scientific goals of the TraP and how it has been designed to meet them. We describe its implementation, including both the algorithms adopted to maximize performance and the development methodology used to ensure it is robust and reliable, particularly in the presence of artefacts typical of radio astronomy imaging. Finally, we report on a series of tests of the pipeline carried out using simulated LOFAR observations with a known population of transients.
    Comment: 30 pages, 11 figures; accepted for publication in Astronomy & Computing; code at https://github.com/transientskp/tk
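    By way of illustration, a transient and variability search of this kind reduces each source's lightcurve to summary statistics and flags outliers. The sketch below computes two generic measures (a modulation index and a weighted reduced chi-square against a constant-flux model); it is an illustration of the approach, not TraP's actual code.

```python
import numpy as np

def variability_stats(flux, flux_err):
    """Per-source lightcurve statistics: V, the modulation index (scatter
    relative to the mean), and eta, a weighted reduced chi-square against
    the hypothesis of a constant source."""
    flux = np.asarray(flux, dtype=float)
    w = 1.0 / np.asarray(flux_err, dtype=float) ** 2
    wmean = np.average(flux, weights=w)
    V = flux.std(ddof=1) / flux.mean()
    eta = np.sum(w * (flux - wmean) ** 2) / (len(flux) - 1)
    return V, eta

V, eta = variability_stats([1.0, 1.1, 0.9, 2.5], [0.1, 0.1, 0.1, 0.1])
# Large V and eta together mark a candidate variable for follow-up.
```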

    Predicting the influence of strain on crack length measurements performed using the potential drop method

    The potential drop (PD) crack growth measurement technique is sensitive to strain accumulation, which is often erroneously interpreted as crack extension. When testing ductile materials these errors can be significant, but in many cases the optimum method of minimising or suppressing them remains unknown, because it is extremely difficult to measure them experimentally in isolation from other sources of error, such as non-ideal crack morphology. In this work a novel method of assessing the influence of strain on PD, using a sequentially coupled structural-electrical finite element (FE) model, has been developed. By comparing the FE predictions with experimental data, it has been demonstrated that the proposed FE technique is extremely effective at predicting trends in PD due to strain. It has been used to identify optimum PD configurations for compact tension, C(T), and single edge notched tension, SEN(T), fracture mechanics specimens, and it has been demonstrated that the PD configuration often recommended for C(T) specimens can be subject to large errors due to strain accumulation. In addition, the FE technique has been employed to assess the significance of strain after the initiation of stable tearing in a monotonically loaded C(T) specimen. The proposed FE technique provides a powerful tool for optimising the measurement of crack initiation and growth in applications where large strains are present, e.g. J-R curve and creep crack growth testing.
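    For context, PD readings are typically converted to crack length with a closed-form calibration such as Johnson's formula. The sketch below uses that conversion to show how a small strain-induced rise in PD, with no real crack growth, reads as spurious crack extension; the geometry and numbers are invented for illustration, and the paper's specimens may use different calibrations.

```python
import numpy as np

def johnson_crack_length(U_ratio, a0, W, y):
    """Johnson's calibration for a through-thickness crack: converts a
    measured PD ratio U/U0 into crack length. a0 is the reference crack
    length, W the specimen width, y the half-spacing of the PD probes
    across the crack plane (consistent length units)."""
    k = np.cosh(np.pi * y / (2 * W))
    return (2 * W / np.pi) * np.arccos(
        k / np.cosh(U_ratio * np.arccosh(k / np.cos(np.pi * a0 / (2 * W))))
    )

# A 2% rise in PD caused purely by strain accumulation, not cracking:
a_apparent = johnson_crack_length(1.02, a0=10.0, W=50.0, y=2.5)
print(a_apparent - 10.0)   # ~0.25 mm of spurious "crack growth"
```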

    Study of non-interactive computer methods for microcircuit layout


    Toward digital twins for sawmill production planning and control : benefits, opportunities and challenges

    Sawmills are key elements of the forest product industry supply chain, and they play important economic, social, and environmental roles. Sawmill production planning and control are, however, challenging owing to several factors, including, but not limited to, the heterogeneity of the raw material. The emerging concept of digital twins, introduced in the context of Industry 4.0, has generated high interest and has been studied in a variety of domains, including production planning and control. In this paper, we investigate the benefits digital twins would bring to the sawmill industry via a literature review on the wider subject of sawmill production planning and control. Opportunities facilitating their implementation, as well as ongoing challenges from both academic and industrial perspectives, are also studied.

    Genetic improvement of GPU software

    We survey genetic improvement (GI) of general-purpose computing on graphics cards. We summarise several experiments which demonstrate four themes. Experiments with the gzip program show that genetic programming can automatically port sequential C code to parallel code. Experiments with the StereoCamera program show that GI can upgrade legacy parallel code for new hardware and software. Experiments with NiftyReg and BarraCUDA show that GI can make substantial improvements to current parallel CUDA applications. Finally, experiments with the pknotsRG program show that, with semi-automated approaches, enormous speed-ups can sometimes be achieved by growing and grafting new code with genetic programming in combination with human input.
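    The common machinery behind such experiments is a search over program patches. The skeleton below is a minimal GI loop under stated assumptions: mutate(patch) and fitness(patch) are placeholders to be supplied by tooling for the target program (compile the patched code, run the test suite, time the result).

```python
import random

def genetic_improvement(seed, mutate, fitness, generations=50, pop_size=20):
    """Bare-bones GI loop: evolve patches (edit lists) to a program,
    keeping variants that still pass the tests and run faster.
    fitness(patch) -> float should return float('-inf') for patches
    that fail to compile or break the test suite."""
    population = [seed] + [mutate(seed) for _ in range(pop_size - 1)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return max(population, key=fitness)
```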