25 research outputs found

    Statistical Service Guarantees for Traffic Scheduling in High-Speed Data Networks

    School of Electrical and Computer Engineering

    Driving the Network-on-Chip Revolution to Remove the Interconnect Bottleneck in Nanoscale Multi-Processor Systems-on-Chip

    The sustained demand for faster, more powerful chips has been met by chip manufacturing processes that allow increasing numbers of computation units to be integrated onto a single die. The resulting designs, especially in the embedded domain, are commonly called Systems-on-Chip (SoCs) or Multi-Processor Systems-on-Chip (MPSoCs). MPSoC design brings to the foreground a large number of challenges, one of the most prominent being the design of the chip interconnect. With the number of on-chip blocks presently in the tens, and quickly approaching the hundreds, the question of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they provide a structured answer to present and future communication requirements, and the point-to-point connection and packet-switching paradigms they involve also help minimise wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline, and several main areas of interest require deep investigation for NoCs to become viable solutions:
    • The design of the NoC architecture needs to strike the best trade-off among performance, features, and the tight area and power constraints of the on-chip domain.
    • Simulation and verification infrastructure must be put in place to explore, validate and optimise NoC performance.
    • NoCs offer a huge design space, thanks to their extreme customisability in terms of topology and architectural parameters; design tools are needed to prune this space and pick the best solutions.
    • Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
    This dissertation performs a design space exploration of network-on-chip architectures in order to point out the trade-offs associated with the design of each individual network building block and with the design of the network topology overall. The design space exploration is preceded by a comparative analysis of state-of-the-art interconnect fabrics, both against one another and against early network-on-chip prototypes. The ultimate objective is to point out the key advantages that NoC realisations provide with respect to state-of-the-art communication infrastructures, and the challenges that lie ahead in making this new interconnect technology a reality. Among the latter, technology-related challenges are emerging that call for dedicated design techniques at all levels of the design hierarchy, in particular leakage power dissipation and the containment of process variations and their effects. These objectives were achieved by means of a NoC simulation environment for cycle-accurate modelling and simulation, and by means of a back-end facility for studying NoC physical implementation effects. All the results provided by this work have been validated on actual silicon layouts.
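    The abstract above notes that design tools are needed to prune the huge NoC design space of topologies and architectural parameters. The sketch below is a minimal, hypothetical illustration of such a pruning pass, not the dissertation's actual tool: it enumerates (topology, buffer depth, link width) candidates and filters them against an assumed area budget, with all cost figures invented for illustration.

```python
# Hypothetical NoC design-space pruning sketch: enumerate (topology, buffer depth,
# link width) candidates and keep only those under an assumed area budget.
# The cost figures below are illustrative placeholders, not measured silicon data.
from itertools import product

TOPOLOGIES = {"mesh": 1.0, "torus": 1.2, "ring": 0.6}   # relative router cost (assumed)
BUFFER_DEPTHS = [2, 4, 8]                               # flits per virtual channel
LINK_WIDTHS = [32, 64, 128]                             # bits

AREA_BUDGET = 10.0  # arbitrary area units (assumption)

def estimate_area(topology: str, buf_depth: int, link_width: int) -> float:
    """Very rough area proxy: router cost scaled by buffering and link width."""
    return TOPOLOGIES[topology] * (1 + 0.3 * buf_depth) * (link_width / 32)

candidates = [
    (topo, depth, width, estimate_area(topo, depth, width))
    for topo, depth, width in product(TOPOLOGIES, BUFFER_DEPTHS, LINK_WIDTHS)
]
feasible = sorted((c for c in candidates if c[3] <= AREA_BUDGET), key=lambda c: c[3])

for topo, depth, width, area in feasible:
    print(f"{topo:5s} depth={depth} width={width:3d} -> est. area {area:.2f}")
```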

    Models, Algorithms, and Architectures for Scalable Packet Classification

    The growth and diversification of the Internet imposes increasing demands on the performance and functionality of network infrastructure. Routers, the devices responsible for the switching and directing of traffic in the Internet, are being called upon not only to handle increased volumes of traffic at higher speeds, but also to impose tighter security policies and provide support for a richer set of network services. This dissertation addresses the searching tasks performed by Internet routers in order to forward packets and apply network services to packets belonging to defined traffic flows. As these searching tasks must be performed for each packet traversing the router, the speed and scalability of the solutions to the route lookup and packet classification problems largely determine the realizable performance of the router, and hence of the Internet as a whole. Despite the energetic attention of the academic and corporate research communities, there remains a need for search engines that scale to support faster communication links, larger route tables and filter sets, and increasingly complex filters. The major contributions of this work include the design and analysis of a scalable hardware implementation of a Longest Prefix Matching (LPM) search engine for route lookup, a survey and taxonomy of packet classification techniques, a thorough analysis of packet classification filter sets, the design and analysis of a suite of performance evaluation tools for packet classification algorithms and devices, and a new packet classification algorithm that scales to support high-speed links and large filter sets classifying on additional packet fields.
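    The dissertation above centres on Longest Prefix Matching for route lookup. As a point of reference, the following is a minimal software sketch of LPM using a binary trie; it illustrates the general technique only, not the hardware search engine described in the work, and the example prefixes and next hops are invented.

```python
# Minimal Longest Prefix Matching (LPM) sketch using a binary trie.
# Illustrative software reference only, not the dissertation's hardware design.
import ipaddress

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]  # one child per bit value
        self.next_hop = None          # set if a prefix ends at this node

class PrefixTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix: str, next_hop: str) -> None:
        net = ipaddress.ip_network(prefix)
        bits = f"{int(net.network_address):032b}"[: net.prefixlen]
        node = self.root
        for b in bits:
            i = int(b)
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.next_hop = next_hop

    def lookup(self, addr: str):
        """Walk the trie bit by bit, remembering the last (longest) matching prefix."""
        bits = f"{int(ipaddress.ip_address(addr)):032b}"
        node, best = self.root, None
        for b in bits:
            if node.next_hop is not None:
                best = node.next_hop
            node = node.children[int(b)]
            if node is None:
                break
        else:
            if node.next_hop is not None:
                best = node.next_hop
        return best

# Example routing table (invented prefixes and next hops).
fib = PrefixTrie()
fib.insert("10.0.0.0/8", "eth0")
fib.insert("10.1.0.0/16", "eth1")
print(fib.lookup("10.1.2.3"))    # -> eth1 (longest match wins)
print(fib.lookup("10.200.0.1"))  # -> eth0
```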

    On a wildlife tracking and telemetry system: a wireless network approach

    Motivated by the diversity of animals, a hybrid wildlife tracking system, EcoLocate, is proposed, with lightweight VHF-like tags and high-performance GPS-enabled tags bound by a common wireless network design. Tags transfer information amongst one another in a multi-hop store-and-forward fashion, and can also monitor the presence of one another, enabling social behaviour studies to be conducted. Information can be gathered from any sensor variable of interest (such as temperature, water level or activity) and forwarded through the network, leading to more effective game reserve monitoring. Six classes of tracking tag are presented, varying in weight and functionality but derived from a common code base, which facilitates modular tag design and deployment. The link between the tags means that a tag can dynamically choose its class based on its remaining energy, prolonging its lifetime in the network at the cost of reduced functionality. Lightweight, low-functionality tags (which can be placed on small animals) use the capabilities of heavier, high-functionality devices (placed on larger animals) to transfer their information. EcoLocate is a modular approach to animal tracking and sensing, and it is shown how the same common technology can be used for diverse studies, from simple VHF-like activity research to full social and behavioural research using wireless networks to relay data to the end user. The network is not restricted to tracking animals: environmental variables, people and vehicles can all be monitored, allowing for rich wildlife tracking studies.
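    Below is a hedged sketch of the idea that a tag demotes itself to a lighter-duty class as its battery depletes, as described above. The class names, energy thresholds and capability flags are illustrative assumptions, not EcoLocate's actual parameters.

```python
# Hypothetical sketch of energy-aware tag class selection.
# Thresholds, class names and capabilities are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class TagClass:
    name: str
    min_energy: float       # fraction of battery required to hold this class
    gps_enabled: bool
    relay_for_others: bool

# Ordered from most to least capable (assumed hierarchy).
CLASSES = [
    TagClass("gps_relay",   0.60, gps_enabled=True,  relay_for_others=True),
    TagClass("gps_basic",   0.35, gps_enabled=True,  relay_for_others=False),
    TagClass("store_fwd",   0.15, gps_enabled=False, relay_for_others=True),
    TagClass("beacon_only", 0.00, gps_enabled=False, relay_for_others=False),
]

def select_class(remaining_energy: float) -> TagClass:
    """Pick the most capable class the remaining energy still supports."""
    for cls in CLASSES:
        if remaining_energy >= cls.min_energy:
            return cls
    return CLASSES[-1]

print(select_class(0.8).name)   # gps_relay
print(select_class(0.2).name)   # store_fwd
```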

    Congestion detection within multi-service TCP/IP networks using wavelets.

    Using passive observation within the multi-service TCP/IP networking domain, we have developed a methodology that associates the frequency composition of composite traffic signals with the packet transmission mechanisms of TCP. At the core of our design is the Discrete Wavelet Transform (DWT), used to temporally localise the frequency variations of a signal. Our design exploits transmission mechanisms (including Fast Retransmit/Fast Recovery, Congestion Avoidance, Slow Start, and Retransmission Timer Expiry with Exponential Back-off) that are activated in response to changes within this type of network environment. Manipulation of the DWT output, combined with novel heuristics, permits shifts in the frequency spectrum of composite traffic signals to be directly associated with these transmission mechanisms. Our methodology can be adapted to accommodate composite traffic signals that contain a substantial proportion of data originating from non-rate-adaptive sources often associated with Long Range Dependence and Self-Similarity (e.g. Pareto sources). We demonstrate the methodology in two ways. Firstly, it is used to design a congestion-indicator tool that can operate with network control mechanisms that dissipate congestion. Secondly, using a queue management algorithm (Random Early Detection) as a candidate protocol, we show how our methodology can be adapted to produce a performance-monitoring tool. Our approach has both low operational and low implementation intrusiveness with respect to existing network infrastructure. The methodology requires a single parameter (the arrival rate of traffic at a network node), which can be extracted from almost all network forwarding devices; this simplifies implementation. Our study was performed within the context of fault management, with design requirements and constraints arising from an in-depth study of the Fault Management Systems (FMS) used by British Telecom on regional UK networks up to February 2000.
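    The methodology above localises frequency shifts in a traffic arrival-rate signal with the Discrete Wavelet Transform. The sketch below shows that kind of decomposition using the PyWavelets library on a synthetic arrival-rate series; the wavelet choice, decomposition level, synthetic signal and energy summary are assumptions for illustration, not the thesis's actual heuristics.

```python
# Minimal DWT sketch: decompose a synthetic packet-arrival-rate signal and
# summarise the energy in each detail band. Wavelet, level and the synthetic
# signal are illustrative assumptions, not the thesis's configuration.
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Synthetic per-interval arrival counts: smooth load plus a burst mid-trace.
t = np.arange(1024)
rate = 100 + 10 * np.sin(2 * np.pi * t / 128) + rng.poisson(5, size=t.size)
rate[500:540] += 80   # injected burst standing in for a congestion episode

# Multi-level discrete wavelet decomposition of the arrival-rate signal.
coeffs = pywt.wavedec(rate, wavelet="db4", level=4)
approx, details = coeffs[0], coeffs[1:]

# Energy per detail band; a sudden rise in the finer bands is the kind of
# frequency-domain shift a congestion indicator could key on.
for level, d in enumerate(reversed(details), start=1):
    print(f"detail level {level}: energy = {np.sum(d ** 2):.1f}")
```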

    International benchmarking of Australian telecommunications services

    The study compares the performance of the Australian telecommunications services industry with that of other countries. Related papers submitted to this study by NECG Ltd. and Telecom New Zealand have been released with the report.
    Keywords: international benchmarking; telecommunications; Telstra; carriers; service providers; social policy; retail price regulation; Universal Service Obligation; competition; regulation; access; number portability; accounting separation; anti-competitive behaviour; Public Switched Telephone Network; ISDN; mobile; residential price; business price; phone; SMEs; quality of service; performance indicators; productivity

    Introduction to Queueing Theory and Stochastic Teletraffic Models

    The aim of this textbook is to provide students with basic knowledge of stochastic models that may apply to telecommunications research areas such as traffic modelling, resource provisioning and traffic management. These study areas are often collectively called teletraffic. The book assumes prior knowledge of a programming language and of the mathematics, probability and stochastic processes normally taught in an electrical engineering course. For students who have some, but not sufficiently strong, background in probability and stochastic processes, the first few chapters provide background on the relevant concepts in these areas.
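    As a flavour of the kind of teletraffic model such a textbook covers, the sketch below computes two classic quantities: the mean occupancy and delay of an M/M/1 queue, and the Erlang B blocking probability for a loss system. The numeric parameters are arbitrary examples, and the formulas are the standard textbook ones rather than anything specific to this book.

```python
# Two classic teletraffic calculations: M/M/1 mean occupancy/delay and the
# Erlang B blocking probability. Parameter values are arbitrary examples.

def mm1_stats(arrival_rate: float, service_rate: float):
    """Mean number in system L = rho/(1-rho) and mean delay W = L/lambda (Little's law)."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("M/M/1 queue is unstable when rho >= 1")
    L = rho / (1 - rho)
    return L, L / arrival_rate

def erlang_b(offered_load: float, servers: int) -> float:
    """Blocking probability for an M/M/k/k loss system, via the standard stable recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

L, W = mm1_stats(arrival_rate=8.0, service_rate=10.0)   # utilisation rho = 0.8
print(f"M/M/1: mean in system = {L:.2f}, mean delay = {W:.3f}")
print(f"Erlang B: blocking = {erlang_b(offered_load=10.0, servers=15):.4f}")
```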

    Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1

    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed out of the strong belief that America's problems of global economic competitiveness and of job creation and preservation can partly be solved by the use of intelligent robotics, which is also required for human space exploration missions. Individual sessions addressed the nuclear industry, agile manufacturing, security and building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.

    Third International Symposium on Space Mission Operations and Ground Data Systems, part 1

    Under the theme of 'Opportunities in Ground Data Systems for High Efficiency Operations of Space Missions,' the SpaceOps '94 symposium included presentations of more than 150 technical papers spanning five topic areas: Mission Management, Operations, Data Management, System Development, and Systems Engineering. The papers focus on improvements in the efficiency, effectiveness, productivity, and quality of data acquisition, ground systems, and mission operations. New technology, techniques, methods, and human systems are discussed. Accomplishments are also reported in the application of information systems to improve data retrieval, reporting, and archiving; the management of human factors; the use of telescience and teleoperations; and the design and implementation of logistics support for mission operations.