87 research outputs found
Next Generation High Throughput Satellite System
This paper presents an overview of the state of the art in High Throughput Satellite (HTS) systems for Fixed Satellite Services (FSS) and High Density-FSS. Promising techniques and innovative strategies that can enhance system performance are reviewed and analyzed to show what to expect from next-generation ultra-high-capacity satellite systems. Potential air interface evolutions, efficient frequency plans, feeder link dimensioning strategies and interference cancellation techniques are presented to show how the Terabit/s satellite myth may soon turn into reality
Low power processor architecture and multicore approach for embedded systems
Doctoral thesis (Engineering). The full text appears in: 1. IEICE Transactions Vol. E98-C(7), pp. 544-549, 2015, IEICE. Co-authors: S. Otani, H. Kondo. 2. Reused with permission and license
RHINO: reconfigurable hardware interface for computation and radio
Field-programmable gate arrays, or FPGAs, provide an attractive computing platform for software-defined radio applications. Their reconfigurable nature allows many digital signal processing (DSP) algorithms to be highly parallelised within the FPGA fabric, while their customisable I/O interfaces allow simple interfacing to analogue-to-digital converters (ADCs) and digital-to-analogue converters (DACs). However, FPGA boards that deliver sufficient performance to be useful in real-world applications are generally expensive. RHINO is an FPGA-based hardware processing platform that primarily supports software-defined radio applications. The final cost estimate for a complete RHINO system is under $1700, cheaper than comparable FPGA boards that deliver far lower performance
Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems
The increasing complexity of neuronal network models has intensified efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the model equations into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large fraction of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is invoked for spike exchange after each interval on distributed memory systems. Increasing the number of processors improves concurrency and performance, but it adversely affects MPI_Allgather, increasing the communication time between processors. This necessitates an improved communication methodology to reduce spike exchange time on distributed memory systems. This work improves on the MPI_Allgather method using Remote Memory Access (RMA), replacing two-sided communication with one-sided communication; a recursive doubling mechanism achieves efficient communication between the processors in a precise number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulating large neuronal network models
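The recursive-doubling pattern named in the abstract can be illustrated outside MPI. The following toy Python simulation (not NEURON's actual implementation, and with no real RMA) shows how P ranks each starting with one spike buffer obtain all buffers in log2(P) pairwise exchange rounds:

```python
def recursive_doubling_allgather(items):
    """Simulate a recursive-doubling allgather over P 'ranks' (P a power of two).

    Each rank starts with one item; in round k, rank r exchanges everything it
    currently holds with partner r XOR 2**k. After log2(P) rounds every rank
    holds all P items. Returns (per-rank gathered lists, number of rounds).
    """
    p = len(items)
    assert p > 0 and p & (p - 1) == 0, "P must be a power of two"
    # bufs[r] maps originating rank -> item, as currently known to rank r
    bufs = [{r: items[r]} for r in range(p)]
    step, rounds = 1, 0
    while step < p:
        new_bufs = [dict(b) for b in bufs]
        for r in range(p):
            partner = r ^ step
            new_bufs[r].update(bufs[partner])  # exchange current blocks
        bufs = new_bufs
        step <<= 1
        rounds += 1
    return [[b[i] for i in range(p)] for b in bufs], rounds
```

With 8 ranks the exchange completes in 3 rounds instead of the 7 neighbor-by-neighbor steps a naive ring would need, which is the source of the concurrency gain the abstract describes.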
Replication for send-deterministic MPI HPC applications
Replication has recently gained attention in the context of fault tolerance for large-scale MPI HPC applications. Existing implementations try to cover all MPI codes and to be independent of the underlying library. In this paper, we evaluate the advantages of adopting a different approach. First, we take advantage of a communication property common to many MPI HPC applications, namely send-determinism. Second, we choose to implement replication inside the MPI library. The main advantage of our approach is simplicity. While being only a small patch to the Open MPI library, our solution, called SDR-MPI, supports most of the main features of the MPI standard, including all collectives and group operations. SDR-MPI also achieves good performance: experiments run with HPC benchmarks and applications show that its overhead remains below 5%
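As a rough illustration of library-level replication (a hypothetical sketch; the rank layout and mirroring rule here are assumptions, not SDR-MPI's actual protocol), each logical rank can be backed by several replicas, and every send is mirrored to all replicas of the destination. Send-determinism means each replica of the sender emits the same send sequence regardless of receive ordering, so the replicas never need to coordinate:

```python
def replica_destinations(logical_rank, degree=2):
    """Map a logical MPI rank to its replica ranks under a flat layout:
    replica i of logical rank r gets physical rank r * degree + i.
    (Illustrative layout only.)"""
    return [logical_rank * degree + i for i in range(degree)]

def replicated_send(src_logical, dst_logical, payload, network, degree=2):
    """Mirror one logical send: every replica of the sender sends the payload
    to every replica of the destination, so any surviving replica of the
    destination still receives the full message stream."""
    for s in replica_destinations(src_logical, degree):
        for d in replica_destinations(dst_logical, degree):
            network.append((s, d, payload))
```

One logical send thus becomes degree-squared physical sends, which is why the paper's measured overhead staying below 5% is a notable result.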
Storing and manipulating environmental big data with JASMIN
JASMIN is a super-data-cluster designed to provide a high-performance, high-volume data analysis environment for the UK environmental science community. Thus far JASMIN has been used primarily by the atmospheric science and earth observation communities, both to support their direct scientific workflow and to curate data products in the STFC Centre for Environmental Data Archival (CEDA). The initial JASMIN configuration and first experiences are reported here, and useful improvements in scientific workflow are presented. It is clear from the explosive growth in stored data and use that there was pent-up demand for a suitable big-data analysis environment. This demand is not yet satisfied, in part because JASMIN does not yet have enough compute, the storage is fully allocated, and not all software needs are met. Plans to address these constraints are introduced
Empowering Cloud Data Centers with Network Programmability
Cloud data centers are a critical infrastructure for modern Internet services such as web search, social networking and e-commerce. However, the gradual slow-down of Moore's law has put a burden on the growth of data centers' performance and energy efficiency. In addition, the rise of millisecond-scale and microsecond-scale tasks brings higher throughput and latency requirements for cloud applications. Today's server-based solutions are hard-pressed to meet the performance requirements in many scenarios such as resource management, scheduling, and high-speed traffic monitoring and testing.
In this dissertation, we study these problems from a network perspective. We investigate a new architecture that leverages the programmability of new-generation network switches to improve the performance and reliability of clouds. As programmable switches provide only limited memory and functionality, we exploit compact data structures and deeply co-design software and hardware to make the best use of the available resources. More specifically, this dissertation presents four systems:
(i) NetLock: A new centralized lock management architecture that co-designs programmable switches and servers to simultaneously achieve high performance and rich policy support. It provides orders-of-magnitude higher throughput than existing systems with microsecond-level latency, and supports many commonly-used policies such as performance isolation.
(ii) HCSFQ: A scalable and practical solution to implement hierarchical fair queueing on commodity hardware at line rate. Instead of relying on a hierarchy of queues with complex queue management, HCSFQ does not keep per-flow states and uses only one queue to achieve hierarchical fair queueing.
(iii) AIFO: A new approach for programmable packet scheduling that only uses a single FIFO queue. AIFO utilizes an admission control mechanism to approximate PIFO which is theoretically ideal but hard to implement with commodity devices.
(iv) Lumina: A tool that enables fine-grained analysis of hardware network stacks. By exploiting network programmability to emulate various network scenarios, Lumina helps users understand the micro-behaviors of hardware network stacks
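The single-FIFO admission-control idea behind AIFO can be illustrated with a toy filter (the window size, capacity, and exact admission rule below are illustrative assumptions, not AIFO's published algorithm): a packet's rank is compared against a sliding window of recently seen ranks, and the admissible quantile shrinks as the FIFO fills, so low-rank packets are still admitted under pressure while high-rank packets are dropped:

```python
from collections import deque

def make_aifo_admit(window_size=16, queue_capacity=32):
    """Return an admission function in the spirit of AIFO.

    Keeps a sliding window of recent packet ranks. A packet is admitted only
    if its quantile within the window (fraction of recent packets with a
    strictly smaller rank) fits in the remaining queue headroom.
    """
    window = deque(maxlen=window_size)

    def admit(rank, queue_len):
        window.append(rank)
        # The packet's quantile among recently observed ranks.
        quantile = sum(1 for r in window if r < rank) / len(window)
        # Headroom shrinks toward 0 as the single FIFO fills up.
        headroom = 1.0 - queue_len / queue_capacity
        return quantile <= headroom

    return admit
```

When the queue is nearly empty almost everything is admitted; when it is nearly full, only the lowest-ranked (highest-priority) packets pass, approximating the behavior of an ideal PIFO without per-packet reordering.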
Introduction to IoT
The Internet of Things has rapidly transformed the 21st century, enhancing decision-making processes and introducing innovative consumer services such as pay-as-you-use models. The integration of smart devices and automation technologies has revolutionized every aspect of our lives, from health services to the manufacturing industry, and from the agriculture sector to mining. Alongside the positive aspects, it is also essential to recognize the significant safety, security, and trust concerns in this technological landscape. This chapter serves as a comprehensive guide for newcomers interested in the IoT domain, providing a foundation for making future contributions. Specifically, it discusses the overview, historical evolution, key characteristics, advantages, architectures, taxonomy of technologies, and existing applications in major IoT domains. In addressing prevalent issues and challenges in designing and deploying IoT applications, the chapter examines security threats across architectural layers, ethical considerations, user privacy concerns, and trust-related issues. This discussion equips researchers with a solid understanding of diverse IoT aspects, providing a comprehensive understanding of IoT technology along with insights into the extensive potential and impact of this transformative field.
Comment: 48 pages, 7 figures, 8 tables; chapter 1, revised version of "IoT and ML for Information Management: A Smart Healthcare Perspective", under the Springer Studies in Computational Intelligence series
- âŠ