    High-speed, economical design implementation of transit network router

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (p. 88-90). By Kazuhiro Hara.

    Network flow optimization for distributed clouds

    Internet applications, which rely on large-scale networked environments such as data centers for their back-end support, are often geo-distributed and typically have stringent performance constraints. The interconnecting networks, within and across data centers, are critical in determining these applications' performance. Data centers can be viewed as composed of three layers: physical infrastructure consisting of servers, switches, and links; control platforms that manage the underlying resources; and applications that run on the infrastructure. This dissertation shows that network flow optimization can improve the performance of distributed applications in the cloud by designing high-throughput schemes spanning all three layers. At the physical infrastructure layer, we devise a framework for measuring and understanding the throughput of network topologies. We develop a heuristic for estimating the worst-case performance of any topology and propose a systematic methodology for comparing the performance of networks built with different equipment. At the control layer, we put forward a source-routed data center fabric which can achieve near-optimal throughput by leveraging a large number of available paths while using limited memory in switches. At the application layer, we show that current Application Network Interfaces (ANIs), abstractions that translate an application's performance goals into actionable network objectives, fail to capture the requirements of many emerging applications. We put forward a novel ANI that captures application intent more effectively and quantify the performance gains achievable with it. We also tackle resource optimization in the inter-data-center context of cellular providers. In this emerging environment, a large amount of resources is geographically fragmented across thousands of micro data centers, each with a limited share of resources, necessitating cross-application optimization to satisfy diverse performance requirements and improve network and server utilization. Our solution, Patronus, employs hierarchical optimization for handling multiple performance requirements and temporally partitioned scheduling for scalability.
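    The dissertation's throughput framework is only summarised above. Purely as an illustration of the kind of computation involved, the sketch below estimates a crude worst-case throughput figure for a topology as the minimum max-flow over all node pairs; the example topology, unit capacities, and use of networkx are this example's assumptions, not the dissertation's actual heuristic.

```python
# Hedged sketch: a crude worst-case throughput bound for a topology, taken
# as the minimum pairwise max-flow. Topology, capacities, and networkx usage
# are illustrative assumptions, not the dissertation's method.
import itertools
import networkx as nx

def worst_case_pair_throughput(G: nx.Graph) -> float:
    """Smallest max-flow value over all node pairs; a low value flags
    pairs whose connecting cuts bottleneck the network."""
    worst = float("inf")
    for s, t in itertools.combinations(G.nodes, 2):
        value, _ = nx.maximum_flow(G, s, t, capacity="capacity")
        worst = min(worst, value)
    return worst

# Example: an 8-node ring of unit-capacity links, plus two shortcut links.
G = nx.cycle_graph(8)
nx.set_edge_attributes(G, 1.0, "capacity")
G.add_edge(0, 4, capacity=1.0)
G.add_edge(2, 6, capacity=1.0)
print(worst_case_pair_throughput(G))
```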

    Small-world interconnection networks for large parallel computer systems

    The use of small-world graphs as interconnection networks of multicomputers is proposed and analysed in this work. Small-world interconnection networks are constructed by adding (or modifying) edges to an underlying local graph. Graphs with a rich local structure but a large diameter are shown to be the most suitable candidates for the underlying graph. Generation models based on random and deterministic wiring processes are proposed and analysed. For the random case, basic properties such as degree, diameter, average length and bisection width are analysed, and the results show a fast transition from a large to a small diameter as the number of new edges increases. Random traffic analysis on these networks shows that although the average latency experiences a similar reduction, networks with a small number of shortcuts tend to saturate, as most of the traffic flows through a small number of links. An analysis of the congestion of the networks corroborates this result and provides a way of estimating the minimum number of shortcuts required to avoid saturation. To overcome these problems, deterministic wiring is proposed and analysed. A linear feedback shift register is used to introduce shortcuts in the LFSR graphs. A simple routing algorithm has been constructed for the LFSR graphs and extended with a greedy local optimisation technique; a small search depth gives good results and is less costly to implement than a full shortest-path algorithm. The Hilbert graph, on the other hand, provides additional characteristics, such as support for incremental expansion, efficient layout in two-dimensional space (using two layers), and a small fixed degree of four. Small-world hypergraphs have also been studied. In particular, incomplete hypermeshes have been introduced and analysed, and it has been shown that they outperform complete traditional implementations under a constant-pinout argument. Since complete hypermeshes have been shown to outperform the mesh, the torus, low-dimensional m-ary d-cubes (with and without bypass channels), and multi-stage interconnection networks (when realistic decision times are accounted for, and with a constant pinout), it follows that incomplete hypermeshes outperform them as well.
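    The thesis's exact LFSR-graph construction is only summarised above. As an illustration of the general idea (shortcuts over a local ring graph, with endpoints generated by a linear feedback shift register), the following sketch measures the resulting drop in diameter; the LFSR width, tap mask, shortcut count and use of networkx are assumptions made for this example.

```python
# Hedged sketch of LFSR-generated shortcuts on a ring (not the thesis's
# exact construction). Width, taps and shortcut count are assumptions.
import networkx as nx

def lfsr_states(seed: int, taps: int, width: int, count: int):
    """Galois LFSR over `width` bits; yields `count` successive states."""
    state, mask = seed, (1 << width) - 1
    for _ in range(count):
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
        state &= mask
        yield state

n = 256
G = nx.cycle_graph(n)
print("diameter before shortcuts:", nx.diameter(G))   # ring: n/2 = 128

# Wire shortcuts between consecutive LFSR states (0xB8 is a maximal-length
# 8-bit Galois tap mask), skipping degenerate self-loops.
states = list(lfsr_states(seed=1, taps=0xB8, width=8, count=32))
for a, b in zip(states, states[1:]):
    if a != b:
        G.add_edge(a, b)
print("diameter after shortcuts:", nx.diameter(G))
```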

    Modern Applications in Optics and Photonics: From Sensing and Analytics to Communication

    Optics and photonics are among the key technologies of the 21st century and offer potential for novel applications in areas such as sensing and spectroscopy, analytics, monitoring, biomedical imaging/diagnostics, and optical communication technology. The high degree of control over light fields, together with the capabilities of modern processing and integration technology, enables new optical measurement systems with enhanced functionality and sensitivity, attractive for a range of applications that were previously inaccessible. This Special Issue aims to provide an overview of some of the most advanced application areas in optics and photonics and to indicate their broad potential for the future.

    Dynamically reconfigurable long-reach PONs for high capacity access

    Fibre-to-the-Premises (FTTP) is currently seen as the ultimate high-speed transmission technology for delivering ubiquitous bandwidth to customers. However, as the deployment of network infrastructure requires a substantial investment, the main obstacle to fibre deployment is financial viability. With this in mind, a logical strategy to offset network costs is to optimise the infrastructure so as to capture more customers over larger areas with increased sharing of network resources. This approach prompted the design of the long-reach passive optical network (LR-PON), in which the physical reach and split of a conventional PON are significantly increased through intermediate optical amplification. In particular, the LR-PON architecture effectively integrates the metro and access networks, enabling the majority of local exchange sites to be bypassed and resulting in a substantial reduction in field equipment requirements and power consumption. Furthermore, the extension in physical reach and split can be coupled with increased information capacity through time- and wavelength-division multiplexing (TWDM), which exploits the large bandwidth offered by single-mode fibre. In this project, reconfigurable TWDM LR-PON architectures which dynamically exploit the wavelength domain are proposed, assembled and characterised in order to establish an economically viable 'open access' environment capable of concurrently supporting multiple operators offering converged services (residential, business and mobile) to diverse customer requirements and locations. The main investigations in this work address the key physical-layer challenges within such wavelength-agile networks. In particular, a range of experimental analyses has been carried out to realise the critical component technologies, which include low-cost, 10G-capable, wavelength-tuneable transmitters for mass-market residential deployment and gain-stabilised optical amplifier nodes to support the targeted physical reach (≥ 100 km) and split (≥ 512). Finally, the feasibility of the proposed dynamically reconfigurable LR-PON configurations as a flexible and cost-effective solution for future access networks is verified through full-scale network demonstrations on an experimental laboratory test-bed.
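    To see why intermediate amplification is needed at this reach and split, a back-of-envelope loss budget helps; the figures below (fibre attenuation, excess losses, receiver budget) are illustrative assumptions, not values from the thesis.

```python
# Hedged loss-budget arithmetic for a 100 km, 1:512 PON (assumed figures).
import math

fibre_loss_db = 0.25 * 100               # ~0.25 dB/km at 1550 nm over 100 km
split_loss_db = 10 * math.log10(512)     # ideal 1:512 power split ≈ 27.1 dB
excess_db     = 3.0                      # connectors, splices, splitter excess

total_db = fibre_loss_db + split_loss_db + excess_db
print(f"end-to-end loss ≈ {total_db:.1f} dB")
# ≈ 55 dB, roughly double the ~29 dB budget of typical unamplified PON
# optics, hence the gain-stabilised amplifier nodes described above.
```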

    Creation of value with open source software in the telecommunications field

    Doctoral thesis, Electrical and Computer Engineering (Engenharia Electrotécnica e de Computadores). Faculdade de Engenharia, Universidade do Porto. 200

    Digital mixing consoles: parallel architectures and taskforce scheduling strategies

    This thesis is concerned specifically with the implementation of large-scale professional digital mixing consoles (DMCs). The design of such multi-DSP audio products is extremely challenging: one cannot simply lash together n DSPs and obtain n-times the performance of a single device. Multiprocessor (M-P) models developed here show that topology and inter-processor communication (IPC) mechanisms have critical design implications. Alternative processor technologies are investigated with respect to the requirements of DMC architectures. An extensive analysis of M-P topologies is undertaken using the metrics provided by the TPG tool. Novel methods supporting DSP message-passing connectivity lead to the development of a hybrid audio M-P (HYMIPS) employing these techniques. A DMC model demonstrates the impact of task allocation on ASP M-P architectures. Five application-specific heuristics and four static-labelling schemes are developed for scheduling console taskforces on M-Ps. An integrated research framework and DCS engine enable scheduling strategies to be analysed with regard to the DMC problem domain. Three scheduling algorithms (CPM, DYN and AST) and three IPC mechanisms (FWE, NSL and NML) are investigated, and dynamic-labelling strategies and mix-bus granularity issues are studied in detail. To summarise, this thesis elucidates the topologies, construction techniques and scheduling algorithms appropriate to professional DMC systems.
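    As a point of reference for the scheduling problem described above, the sketch below implements a textbook critical-path list scheduler for a task DAG on identical processors. It is a baseline illustration only, not the thesis's CPM, DYN or AST algorithms, and the toy taskforce and costs are assumptions.

```python
# Hedged baseline: critical-path list scheduling of a task DAG on m
# identical processors. Not the thesis's algorithms; toy data assumed.
import heapq

def bottom_levels(cost, succ):
    """Longest-path cost from each task to a sink (its 'bottom level')."""
    bl = {}
    def visit(t):
        if t not in bl:
            bl[t] = cost[t] + max((visit(s) for s in succ.get(t, ())), default=0)
        return bl[t]
    for t in cost:
        visit(t)
    return bl

def list_schedule(cost, succ, m):
    preds = {t: 0 for t in cost}
    for ss in succ.values():
        for s in ss:
            preds[s] += 1
    bl = bottom_levels(cost, succ)
    ready = [(-bl[t], t) for t in cost if preds[t] == 0]
    heapq.heapify(ready)
    free = [0.0] * m                     # next idle time per processor
    finish = {}
    while ready:
        _, t = heapq.heappop(ready)      # highest bottom level first
        p = min(range(m), key=free.__getitem__)
        deps = max((finish[q] for q in cost if t in succ.get(q, ())), default=0.0)
        finish[t] = max(free[p], deps) + cost[t]
        free[p] = finish[t]
        for s in succ.get(t, ()):
            preds[s] -= 1
            if preds[s] == 0:
                heapq.heappush(ready, (-bl[s], s))
    return max(finish.values())

# Toy per-channel taskforce (costs in µs) with data-dependency edges.
cost = {"eq": 3, "gain": 1, "pan": 1, "mix": 2, "meter": 1}
succ = {"eq": ["mix"], "gain": ["mix"], "pan": ["mix"], "mix": ["meter"]}
print("makespan on 2 DSPs:", list_schedule(cost, succ, 2))
```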

    Proceedings of the 2018 Canadian Society for Mechanical Engineering (CSME) International Congress

    Published proceedings of the 2018 Canadian Society for Mechanical Engineering (CSME) International Congress, hosted by York University, 27-30 May 2018.

    The Next Generation Space Telescope

    In Space Science in the Twenty-First Century, the Space Science Board of the National Research Council identified high-resolution interferometry and high-throughput instruments as the imperative new initiatives for NASA in astronomy for the two decades spanning 1995 to 2015. In the optical range, the study recommended an 8- to 16-meter space telescope, destined to be the successor of the Hubble Space Telescope (HST) and to complement the ground-based 8- to 10-meter-class telescopes presently under construction. It might seem too early to start planning for a successor to HST; in fact, we are late. The lead time for such major missions is typically 25 years, and HST has been in the making even longer, with its inception dating back to the early 1960s. The maturity of space technology and a more substantial technological base may lead to a shorter time scale for the development of the Next Generation Space Telescope (NGST); optimistically, one could therefore anticipate that NGST could be flown as early as 2010. On the other hand, the planned lifetime of HST is 15 years, so even under the best circumstances there will be a five-year gap between the end of HST and the start of NGST. The purpose of this first workshop dedicated to NGST was to survey its scientific potential and technical challenges. The three-day meeting brought together 130 astronomers and engineers from government, industry and universities. Participants explored the technologies needed for building and operating the observatory, reviewed the current status and future prospects for astronomical instrumentation, and discussed the launch and space support capabilities likely to be available in the next decade. To focus discussion, the invited speakers were asked to base their presentations on two nominal concepts: a 10-meter telescope in high Earth orbit, and a 16-meter telescope on the Moon. The workshop closed with a panel discussion focused mainly on the scientific case, siting, and the programmatic approach needed to bring NGST into being. The essential points of this panel discussion have been incorporated into a series of recommendations that represent the conclusions of the workshop. Speakers were asked to provide manuscripts of their presentations; those received are reproduced here with only minor editorial changes, and the few missing papers have been replaced by the presentation viewgraphs. The discussion that follows each speaker's paper was derived from the question-and-answer sheets or, if unavailable, from tapes of the meeting; in the latter case, the editors have made every effort to represent the discussion faithfully.

    A new hardware and software combination for motorsport vehicles based on direct processing of graphical functions

    Doctorate in Electronic Engineering. The main motivation for the work presented here began with earlier experiments with a programming concept at the time named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way. It was also hoped that this would minimise the full range of software and hardware needed to produce a final, fully functional system. This thesis first makes a comprehensive survey of the state of the art in the specific area of software, and corresponding hardware, of automotive tools and automotive ECUs. Problems arising from such software are identified, and it becomes clear that practically all of them stem, directly or indirectly, from the continued extensive use of extremely long and complex tool chains. Similarly, on the hardware side, it is argued that the problems stem from the extreme complexity and interdependency inside processor architectures. The conclusions are presented through an extensive list of pitfalls, which are thoroughly enumerated, identified and characterised. Solutions are also proposed for the various current issues, together with implementations of those solutions. This final work forms part of a proof-of-concept system called "ECU2010", whose architecture breaks radically with state-of-the-art models and rests on a set of concepts that came to be called "Macro" and "Cellular ECU". The central element of the system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, with arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result of the proposed work is a single, fully integrated tool enabling the development and management of the entire system through one simple visual interface. Part of the presented result relies on a hardware platform fully adapted to the software, offering high flexibility and scalability and using exactly the same technology for the ECU, the data logger and the peripherals alike. Current systems follow a mostly evolutionary path, allowing only online calibration of parameters, never the online alteration of the automotive functionality algorithms themselves. By contrast, the system developed and described in this thesis follows a "clean-slate" approach, whereby everything could be rethought globally. In the end, out of all the system's characteristics, "LIVE-Prototyping" is the most relevant feature: it allows automotive algorithms (e.g. injection, ignition, lambda control) to be adjusted 100% online, with the engine constantly running, without ever having to stop or reboot to make such changes. This eliminates the "turnaround delay" typically present in current automotive systems, thereby enhancing their efficiency and usability.
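    The abstract describes a "Macro" as a graphical block for one automotive operation, wired into a larger system and alterable live. The miniature sketch below captures that dataflow idea in code form; the class shape, wiring scheme and live-swap mechanism are this example's assumptions, not the thesis's actual ECU2010 implementation.

```python
# Hedged miniature of a "Macro"-style dataflow block (assumed design, not
# the ECU2010 system): blocks pull from their inputs and apply one
# operation; swapping an operation "live" leaves the running graph intact.
from typing import Callable, List

class Macro:
    def __init__(self, op: Callable[[List[float]], float],
                 inputs: List["Macro"] = ()):
        self.op = op
        self.inputs = list(inputs)

    def evaluate(self) -> float:
        return self.op([m.evaluate() for m in self.inputs])

def constant(x: float) -> Macro:
    return Macro(lambda _: x)

def low_pass(alpha: float) -> Callable[[List[float]], float]:
    """First-order IIR filter with internal state: y += alpha * (x - y)."""
    state = {"y": 0.0}
    def op(xs: List[float]) -> float:
        state["y"] += alpha * (xs[0] - state["y"])
        return state["y"]
    return op

# Wire: throttle reading -> low-pass filter -> scaled injection duration.
throttle = constant(0.8)                          # stand-in for a sensor read
filt     = Macro(low_pass(alpha=0.5), [throttle])
inject   = Macro(lambda xs: 2.4 * xs[0], [filt])  # assumed ms-per-unit gain

for _ in range(3):
    print(f"injection ≈ {inject.evaluate():.2f} ms")

# "LIVE-Prototyping" in miniature: alter the algorithm while "running".
inject.op = lambda xs: 2.4 * xs[0] + 0.1
print(f"after live swap ≈ {inject.evaluate():.2f} ms")
```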