
    From 3D Models to 3D Prints: an Overview of the Processing Pipeline

    Due to the wide diffusion of 3D printing technologies, geometric algorithms for Additive Manufacturing are being invented at an impressive speed. Each single step, in particular along the Process Planning pipeline, can now count on dozens of methods that prepare the 3D model for fabrication, while analysing and optimizing geometry and machine instructions for various objectives. This report provides a classification of this huge state of the art, and elicits the relation between each single algorithm and a list of desirable objectives during Process Planning. The objectives themselves are listed and discussed, along with possible needs for tradeoffs. Additive Manufacturing technologies are broadly categorized to explicitly relate classes of devices and supported features. Finally, this report offers an analysis of the state of the art while discussing open and challenging problems from both an academic and an industrial perspective. Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044
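
    To make one Process Planning step concrete, the following is a minimal Python sketch of slicing a triangle mesh into planar contour segments, the operation that turns a 3D model into layer outlines. It is an illustration under simplifying assumptions (no vertex lies exactly on a slicing plane; segments are left unordered), not an algorithm taken from the report.

```python
# Minimal sketch of one Process Planning step: slicing a triangle mesh
# into planar contour segments. Names are illustrative, not from the report.

def slice_triangle(tri, z):
    """Intersect one triangle (three (x, y, z) vertices) with the plane
    at height z; return a segment as a pair of 2D points, or None.
    Assumes no vertex lies exactly on the plane."""
    pts = []
    for a, b in ((0, 1), (1, 2), (2, 0)):
        za, zb = tri[a][2], tri[b][2]
        if (za - z) * (zb - z) < 0:  # edge strictly crosses the slicing plane
            t = (z - za) / (zb - za)
            x = tri[a][0] + t * (tri[b][0] - tri[a][0])
            y = tri[a][1] + t * (tri[b][1] - tri[a][1])
            pts.append((x, y))
    return tuple(pts) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height, z_max):
    """Collect raw, unordered contour segments layer by layer."""
    layers = []
    z = layer_height / 2  # slice through the middle of each layer
    while z < z_max:
        segs = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers.append((z, segs))
        z += layer_height
    return layers
```

    A real slicer would additionally chain these segments into closed polygons and handle degenerate intersections, which this sketch omits.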

    FROM WIDE- TO SHORT-RANGE COMMUNICATIONS: USING HUMAN INTERACTIONS TO DESIGN NEW MOBILE SYSTEMS AND SERVICES

    The widespread diffusion of mobile devices has radically changed the way people interact with each other and with the objects of their daily lives. In particular, modern mobile devices are equipped with multiple radio interfaces, allowing users to interact at different spatial granularities according to the radio technology they use. The research community is progressively moving to heterogeneous network solutions which include many different wireless technologies, seamlessly integrated to address a wide variety of use cases and requirements. In the 5th Generation (5G) of mobile networks we can find multiple network types, such as device-to-device (D2D), vehicular networks, and machine-to-machine (M2M), integrated into existing mobile-broadband technologies such as LTE and its future evolutions. In this complex and rich scenario, many issues and challenges are still open from technological, architectural, and mobile service and application points of view. In this work we provide network solutions, mobile services, and applications consistent with the 5G mobile network vision, using user interactions as a common starting point. We focus on three different spatial granularities, long, medium/short, and micro, mediated by cellular network, Wi-Fi, and NFC radio technologies, respectively. We deal with various kinds of issues and challenges according to the distinct spatial granularity we consider. We start with a user-centric approach based on the analysis of the characteristics and the peculiarities of each kind of interaction. Following this path, we provide contributions to support the design of new network architectures and the development of novel mobile services and applications.

    Routing on the Channel Dependency Graph: A New Approach to Deadlock-Free, Destination-Based, High-Performance Routing for Lossless Interconnection Networks

    In the pursuit of ever-increasing compute power, and with Moore's law slowly coming to an end, high-performance computing has started to scale out to larger systems. Alongside the increasing system size, the interconnection network is growing to accommodate and connect tens of thousands of compute nodes. These networks have a large influence on the total cost, application performance, energy consumption, and overall system efficiency of the supercomputer. Unfortunately, state-of-the-art routing algorithms, which define the packet paths through the network, do not utilize this important resource efficiently. Topology-aware routing algorithms are becoming increasingly inapplicable due to irregular topologies, which are either irregular by design or, more often, the result of hardware failures. Exchanging faulty network components potentially requires whole-system downtime, further increasing the cost of the failure. This management approach is becoming more and more impractical due to the scale of today's networks and the accompanying steady decrease of the mean time between failures. Alternative methods of operating and maintaining these high-performance interconnects, in terms of both hardware and software management, are necessary to mitigate the negative effects experienced by scientific applications executed on the supercomputer. However, existing topology-agnostic routing algorithms either suffer from poor load balancing or are not bounded in the number of virtual channels needed to resolve deadlocks in the routing tables. The fail-in-place strategy, a method well established in storage systems of repairing only critical component failures, is a feasible solution for current and future HPC interconnects, as well as for other large-scale installations such as data center networks. However, an appropriate combination of topology and routing algorithm is required to minimize the throughput degradation for the entire system. This thesis contributes a network simulation toolchain to facilitate the process of finding a suitable combination, either during system design or while it is in operation. On top of this foundation, a key contribution is a novel scheduling-aware routing, which reduces fault-induced throughput degradation while improving overall network utilization. The scheduling-aware routing performs frequent property-preserving routing updates to optimize the path balancing for simultaneously running batch jobs. The increased deployment of lossless interconnection networks, in conjunction with fail-in-place modes of operation and topology-agnostic, scheduling-aware routing algorithms, necessitates new solutions to the routing-deadlock problem. Therefore, this thesis further advances the state of the art by introducing a novel concept of routing on the channel dependency graph, which allows the design of a universally applicable destination-based routing capable of optimizing the path balancing without exceeding a given number of virtual channels, a common hardware limitation. This disruptive innovation enables implicit deadlock avoidance during path calculation, instead of solving both problems separately as all previous solutions do.
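
    The deadlock condition that routing on the channel dependency graph builds on is the classic Dally/Seitz result: a routing function is deadlock-free if its channel dependency graph (CDG) is acyclic. The Python sketch below, using illustrative data structures rather than the thesis's implementation, constructs a CDG from a set of channel paths and tests it for cycles.

```python
# Sketch of the Dally/Seitz deadlock-freedom check: build the channel
# dependency graph (CDG) from routed paths, then test it for cycles.
from collections import defaultdict

def channel_dependencies(routes):
    """routes: iterable of channel sequences, one per (src, dst) path.
    A dependency exists between every pair of consecutive channels."""
    cdg = defaultdict(set)
    for path in routes:
        for c_in, c_out in zip(path, path[1:]):
            cdg[c_in].add(c_out)
    return cdg

def has_cycle(cdg):
    """Iterative DFS with colors: 0=unvisited, 1=on stack, 2=done."""
    color = defaultdict(int)
    for start in list(cdg):
        if color[start]:
            continue
        stack = [(start, iter(cdg[start]))]
        color[start] = 1
        while stack:
            node, successors = stack[-1]
            for nxt in successors:
                if color[nxt] == 1:
                    return True  # back edge: cyclic CDG, deadlock possible
                if color[nxt] == 0:
                    color[nxt] = 1
                    stack.append((nxt, iter(cdg[nxt])))
                    break
            else:
                color[node] = 2
                stack.pop()
    return False

# Example: two paths whose channel usage forms a cycle a -> b -> a
print(has_cycle(channel_dependencies([("a", "b"), ("b", "a")])))  # True
```

    The thesis's contribution goes further than this check: it searches for paths directly on the CDG so that the graph never becomes cyclic in the first place, rather than verifying acyclicity after the fact.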

    Independent Parallel Capillary Array Separations for Rapid Second Dimension Sampling in On-Line Two-Dimensional Capillary Electrophoresis of Complex Biological Samples

    Biological samples remain challenging in proteomic separations due to their complexity and large concentration dynamic range. Improvements to separation power are needed to interrogate proteomes more deeply and facilitate the advancement of biomarker discovery for personalized medicine. Current online multidimensional separations require compromise: long analysis times if the second dimension (2nd-D) must be regenerated between injections, or reduced separation efficiency if the 2nd-D is operated rapidly. Using an array of capillaries as the 2nd-D, operated in parallel, allows fast sampling of the first dimension (1st-D). This relaxes the constraints on the 2nd-D separation, allowing it to operate at optimal separation conditions that would otherwise be sacrificed for speed. This configuration allows the total separation time to approximately equal the 1st-D separation time. We have developed a novel interface, based upon automated precision positioning of capillaries, that enables continuous sampling of a 1st-D separation by a 2nd-D capillary array for rapid, high-peak-capacity two-dimensional (2D) separations. Within a laminar flow regime, a capillary electrophoresis (CE) 1st-D separation was coupled to an array of eight independent CE 2nd-D separations. The instrument terminus provides laser-induced fluorescence detection via a sheath flow cuvette. Effluent transfer efficiency from the 1st-D to the 2nd-D, as well as detection, was optimized using visible and fluorescent dye tracers. To that end, this dissertation will discuss the characterization of interface and detector parameters, including: inter-capillary transverse alignment accuracy, injection distance, injection time, hydrodynamic flow rate, density considerations, inter- and intra-capillary differences, signal crosstalk, and laser intensity. Separation performance will further be demonstrated using model protein and serum digestates. Each dimension of the 2D instrument will be operated as a one-dimensional (1D) instrument to compare against an optimized commercial 1D CE instrument. These results will be used to evaluate the quality of the separations operated in on-line 2D capillary electrophoresis-to-capillary array electrophoresis (CE×CAE) mode. A novel application of the CE×CAE design will be discussed in the spirit of resolving the long-standing challenge of migration time reproducibility in CE separations.
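
    As a back-of-the-envelope illustration of why fast 2nd-D sampling matters: in an ideally orthogonal 2D separation, the total peak capacity is approximately the product of the per-dimension peak capacities, degraded by undersampling of the 1st-D. The numbers and the correction factor below are hypothetical, not results from the dissertation.

```python
# Rough 2D peak-capacity arithmetic (illustrative numbers only).

def peak_capacity_2d(n1, n2, sampling_factor=1.0):
    """n1, n2: per-dimension peak capacities; sampling_factor <= 1
    penalizes undersampling of 1st-D peaks by a slow 2nd-D."""
    return n1 * n2 * sampling_factor

print(peak_capacity_2d(50, 100))        # ideal product: 5000
print(peak_capacity_2d(50, 100, 0.7))   # undersampled:  3500
```

    Running the 2nd-D as a parallel capillary array pushes the sampling factor toward 1 without forcing each 2nd-D separation to run faster than its optimal conditions.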

    Performance Analysis of PDE-Based Parallel Algorithms on Different Computer Architectures

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Informatics, 2009. In the last two decades, the use of parallel algorithms on different architectures has increased the need for architecture- and application-independent performance analysis tools. Tools that support different hardware and communication methods give users a common ground independent of the underlying equipment. Partial differential equations (PDEs) are used in many areas of computational science and engineering (such as the propagation of heat and waves). These equations can be solved numerically using iterative methods. The problem size and the error tolerance affect the number of iterations, and hence the time, needed to reach a solution. Because PDE computations take a long time on single-processor computers, and because a single processor's memory becomes insufficient at large problem sizes, PDEs are solved in parallel using the processors and memory of multiple computers. In this thesis, elliptic partial differential equations are solved with parallel algorithms based on the Gauss-Seidel and Successive Over-Relaxation (SOR) methods. Performance analysis and optimization consist of roughly three steps: measurement, analysis of the gathered information, and identifying and eliminating bottlenecks. During measurement, performance data produced by the running program are collected; these data are then made intelligible with visualization tools and interpreted. Bottlenecks identified during interpretation are examined and techniques for removing them are researched. After the necessary improvements, the program is analyzed again. Different tools can be used at each of these stages, but this thesis uses TAU, which gathers them under one roof. TAU (Tuning and Analysis Utilities) supports different hardware and operating systems and can analyze different parallelization methods. TAU is open source, is compatible with other open-source tools, and integrates with them at many levels. In this thesis, the same application is analyzed on two different platforms to examine the differences the platform introduces. Since there is no general rule for optimizing an algorithm, each algorithm must be examined on each platform and the necessary changes made accordingly. In this context, the information obtained from analyzing the PDE algorithm on both systems is interpreted.
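
    A minimal serial sketch of the SOR kernel described above, solving the 2D Laplace equation on a grid, is given below; the thesis parallelizes and profiles this kind of loop nest, and setting omega = 1 recovers plain Gauss-Seidel. Parameter values and names are illustrative.

```python
# Minimal serial SOR sweep for the 2D Laplace equation on a rectangular
# grid with fixed (Dirichlet) boundary values. omega=1 is Gauss-Seidel.
import numpy as np

def sor(u, omega=1.8, tol=1e-6, max_iter=10_000):
    """u: 2D array; boundary rows/columns hold fixed values, the
    interior is updated in place. Returns the iteration count."""
    for it in range(max_iter):
        max_delta = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                # Gauss-Seidel estimate from the four neighbours,
                # using already-updated values within this sweep
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                delta = omega * (gs - u[i, j])
                u[i, j] += delta
                max_delta = max(max_delta, abs(delta))
        if max_delta < tol:
            return it + 1
    return max_iter

grid = np.zeros((50, 50))
grid[0, :] = 100.0  # hot top boundary
print(sor(grid), "iterations")
```

    In parallel versions of this kernel, a red-black ordering of the interior updates is what typically allows the sweep to be split across processors.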

    Advancing Time-Dependent Earthquake Risk Modelling

    Catastrophe (CAT) risk models are commonly used in the (re)insurance industry and by public organizations to estimate potential losses due to natural hazards like earthquakes. Conventional earthquake risk modelling involves several significant modelling assumptions, which mainly neglect: (a) the interaction between adjacent faults; (b) the long-term elastic-rebound behaviour of faults; (c) the short-term hazard increase associated with aftershocks; and (d) the damage accumulation in building assets that results from the occurrence of multiple earthquakes in a short time window. Several recent earthquake events/sequences (e.g., the 2010/2012 Canterbury earthquakes, New Zealand; the 2019 Ridgecrest earthquakes, USA; the 2023 Turkey-Syria earthquakes) have exposed how simplistic these assumptions are and underlined the need for earthquake risk models to account for the short- and long-term time-dependent characteristics of earthquake risk. This thesis introduces an end-to-end framework for time-dependent earthquake risk modelling that incorporates (a) advancements in long-term time-dependent fault and aftershock modelling in the hazard component of the risk modelling framework; and (b) vulnerability models that account for the damage accumulation due to multiple ground motions occurring in a short period of time. The long-term time-dependent fault model incorporates the elastic-rebound-motivated methodologies of the latest Uniform California Earthquake Rupture Forecast (UCERF3) and explicitly accounts for fault-interaction triggering between major known faults. The Epidemic-Type Aftershock Sequence (ETAS) model is used to simulate aftershocks, representing the short-term hazard increase observed after large mainshocks. Damage-dependent fragility and vulnerability models are then used to account for damage accumulation. Sensitivity analyses of direct economic losses to these time dependencies are also conducted, providing valuable guidance on integrating time dependencies in earthquake risk modelling.
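
    For the short-term component, the ETAS model expresses the earthquake rate as a background rate plus modified-Omori-law contributions triggered by past events. The Python sketch below shows this standard conditional intensity with illustrative parameter values; it is not the thesis's calibration.

```python
# ETAS conditional intensity: background rate plus aftershock
# productivity decaying per the modified Omori law. Parameter values
# are illustrative placeholders, not calibrated results.
import math

def etas_rate(t, catalog, mu=0.2, K=0.05, alpha=1.5, c=0.01, p=1.1, m0=4.0):
    """catalog: list of (t_i, m_i) past events; returns the expected
    event rate at time t (e.g., events per day above magnitude m0)."""
    rate = mu  # background seismicity
    for t_i, m_i in catalog:
        if t_i < t:
            # larger and more recent events contribute more triggering
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# Example: rate one day after a hypothetical M6.5 mainshock at t=0
print(etas_rate(1.0, [(0.0, 6.5)]))
```

    Simulating synthetic catalogs from this intensity is what lets a risk model pair clustered ground motions with damage-dependent fragility curves.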

    Selected Papers from IEEE ICASI 2019

    The 5th IEEE International Conference on Applied System Innovation 2019 (IEEE ICASI 2019, https://2019.icasi-conf.net/), held in Fukuoka, Japan, on 11–15 April 2019, provided a unified communication platform for a wide range of topics. This Special Issue, entitled “Selected Papers from IEEE ICASI 2019”, collects nine excellent papers on applied-science topics presented during the conference. Mechanical engineering and design innovation are academic and practical engineering fields that involve systematic technological materialization through scientific principles and engineering designs. Technological innovation in mechanical engineering includes information technology (IT)-based intelligent mechanical systems, mechanics and design innovations, and applied materials in nanoscience and nanotechnology. These new technologies, which implant intelligence in machine systems, represent an interdisciplinary area that combines conventional mechanical technology with new IT. The main goal of this Special Issue is to provide new scientific knowledge relevant to IT-based intelligent mechanical systems, mechanics and design innovations, and applied materials in nanoscience and nanotechnology.

    Time domain based image generation for synthetic aperture radar on field programmable gate arrays

    Aerial images are important in many scenarios, including surface cartography, surveillance, disaster control, and height-map generation. Synthetic Aperture Radar (SAR) is one way to generate these images, even through clouds and in the absence of daylight. For wide and easy use of this technology, SAR systems should be small, mounted on Unmanned Aerial Vehicles (UAVs), and able to process images in real time. Since UAVs are small and lightweight, more robust (but also more complex) time-domain algorithms are required for good image quality under heavy turbulence. Typically, the SAR data set size does not allow for transmission to the ground for processing, while the UAV size does not allow for large systems with high power consumption to process the data on board. A small and energy-efficient signal processing system is therefore required. To fill the gap between existing systems, which are capable of either high-speed processing or low power consumption, the focus of this thesis is the analysis, design, and implementation of such a system. A survey shows that most architectures have either too high power budgets or too little processing capability to meet the real-time requirements of time-domain processing. Therefore, a Field Programmable Gate Array (FPGA) based system is designed, as it allows for high performance and low power consumption. The Global Backprojection (GBP) is implemented, as it is the standard time-domain algorithm and delivers the highest image quality on arbitrary trajectories, at a complexity of O(N³). To satisfy real-time requirements under all circumstances, the accelerated Fast Factorized Backprojection (FFBP) algorithm, with a complexity of O(N² log N), is implemented as well, allowing a trade-off between image quality and processing time. Additionally, the algorithm and design are enhanced to correct the assumptions that fail for Frequency Modulated Continuous Wave (FMCW) Radio Detection And Ranging (Radar) data at high velocities. Such sensors offer high-resolution data at considerably low transmit power, which is especially interesting for UAVs. A full analysis of all algorithms is carried out in order to design a highly utilized architecture for maximum throughput. The process covers the analysis of mathematical steps and approximations for hardware speedup, the analysis of code dependencies for instruction parallelism, and the analysis of streaming capabilities, including memory access and caching strategies, as well as parallelization considerations and pipeline analysis. Each architecture is described in full detail together with its surrounding control structure. As proof of concept, the architectures are mapped onto a Virtex 6 FPGA, and results on resource utilization, runtime, and image quality are presented and discussed. A special framework makes it easy to scale and port the design to other FPGAs, enabling maximum resource utilization and speedup. The result is streaming architectures capable of massive parallelization with a minimum of system stalls. It is shown that real-time time-domain processing on FPGAs with strict power budgets is possible with the GBP (mid-sized images) and the FFBP (any image size, with a trade-off in quality), enabling the UAV scenario.
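
    The core of the Global Backprojection loop nest can be sketched in a few lines: for every pulse, each pixel accumulates the range-compressed echo sampled at its round-trip delay, with a carrier phase correction. The Python below is an illustrative reference model of this O(N³) computation, not the FPGA architecture itself; variable names and the sampling convention are assumptions.

```python
# Reference model of time-domain Global Backprojection (GBP) with
# nearest-neighbour range interpolation. Illustrative only; the FPGA
# design pipelines and parallelizes an equivalent computation.
import numpy as np

def backproject(data, positions, pixels, fc, fs, r0, c=3e8):
    """data:      (n_pulses, n_samples) range-compressed echoes
    positions: (n_pulses, 3) antenna phase-center positions
    pixels:    (n_pixels, 3) scene coordinates
    fc, fs:    carrier and range-sampling frequency; r0: range of sample 0."""
    image = np.zeros(len(pixels), dtype=complex)
    for pulse, pos in zip(data, positions):
        r = np.linalg.norm(pixels - pos, axis=1)        # pixel-to-antenna range
        idx = np.round((2 * (r - r0) / c) * fs).astype(int)
        valid = (idx >= 0) & (idx < pulse.size)         # keep in-swath pixels
        phase = np.exp(4j * np.pi * fc * r[valid] / c)  # carrier phase correction
        image[valid] += pulse[idx[valid]] * phase
    return image
```

    The per-pixel range and phase computations are independent across pixels, which is exactly the structure a streaming FPGA architecture can exploit; FFBP reduces the work by recursively merging coarse sub-aperture images instead of summing every pulse for every pixel.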