The 2023 terahertz science and technology roadmap
Terahertz (THz) radiation encompasses a wide spectral range within the electromagnetic spectrum that extends from microwaves to the far infrared (100 GHz–∼30 THz). Within its frequency boundaries exist a broad variety of scientific disciplines that have presented, and continue to present, technical challenges to researchers. During the past 50 years, for instance, the demands of the scientific community have substantially evolved, with a need for advanced instrumentation to support radio astronomy, Earth observation, weather forecasting, security imaging, telecommunications, non-destructive device testing and much more. Furthermore, applications have required technology to emerge from the laboratory environment to production-scale supply and in-the-field deployments ranging from harsh ground-based locations to deep space. In addressing these requirements, the research and development community has advanced related technology and bridged the transition between electronics and photonics that high frequency operation demands. The multidisciplinary nature of THz work was our stimulus for creating the 2017 THz Science and Technology Roadmap (Dhillon et al 2017 J. Phys. D: Appl. Phys. 50 043001). As one might envisage, though, there remains much to explore both scientifically and technically and the field has continued to develop and expand rapidly. It is timely, therefore, to revise our previous roadmap and in this 2023 version we both provide an update on key developments in established technical areas that have important scientific and public benefit, and highlight new and emerging areas that show particular promise. The developments that we describe thus span from fundamental scientific research, such as THz astronomy and the emergent area of THz quantum optics, to highly applied and commercially and societally impactful subjects that include 6G THz communications, medical imaging, and climate monitoring and prediction.
Our Roadmap vision draws upon the expertise and perspective of multiple international specialists who together provide an overview of past developments and the likely challenges facing the field of THz science and technology in future decades. The document is written in a form that is accessible to policy makers who wish to gain an overview of the current state of the THz art, and for the non-specialist and curious who wish to understand available technology and challenges. As such, our experts deliver a 'snapshot' introduction to the current status of the field and provide suggestions for exciting future technical development directions. Ultimately, we intend the Roadmap to portray the advantages and benefits of the THz domain and to stimulate further exploration of the field in support of scientific research and commercial realisation.
Optimisation for Optical Data Centre Switching and Networking with Artificial Intelligence
Cloud and cluster computing platforms have become standard across almost every domain of business, and their scale is quickly approaching that of entire warehouses of servers. However, the tier-based, opto-electronically packet-switched network infrastructure that is standard across these systems gives rise to several scalability bottlenecks, including resource fragmentation and high energy requirements. Experimental results show that optical circuit switched networks pose a promising alternative that could avoid these bottlenecks.
However, optimality challenges are encountered at realistic commercial scales. Where exhaustive optimisation techniques are not applicable to problems of Cloud scale, and expert-designed heuristics are performance-limited and typically biased in their design, artificial intelligence can discover more scalable and better-performing optimisation strategies.
This thesis demonstrates these benefits through experimental and theoretical work spanning the component-, system- and commercial-level optimisation problems which stand in the way of practical Cloud-scale computer network systems. Firstly, optical components are optimised for fast gating and are demonstrated in a proof-of-concept switching architecture for optical data centres with better wavelength and component scalability than previous demonstrations. Secondly, network-aware resource allocation schemes for optically composable data centres are learnt end-to-end with deep reinforcement learning and graph neural networks, requiring fewer networking resources to achieve the same resource efficiency as conventional methods. Finally, a deep reinforcement learning based method for optimising PID-control parameters is presented which generates tailored parameters for unseen devices. This method is demonstrated on a market-leading optical switching product based on piezoelectric actuation, where switching speed is improved with no compromise to optical loss and the manufacturing yield of actuators is improved. This method was licensed to and integrated within the manufacturing pipeline of this company. As such, crucial public and private infrastructure utilising these products will benefit from this work.
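As a point of reference for the PID tuning mentioned above, a discrete PID control loop of the kind whose gains (Kp, Ki, Kd) a learnt tuner would output can be sketched as follows; the controller, toy plant model and gain values here are generic illustrations, not the thesis's actual method or parameters.

```python
def pid_step(setpoint, measurement, state, kp, ki, kd, dt):
    """One update of a discrete PID controller; `state` holds (integral, prev_error)."""
    integral, prev_error = state
    error = setpoint - measurement
    integral += error * dt                      # accumulate the I term
    derivative = (error - prev_error) / dt      # finite-difference D term
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Example: drive a toy first-order actuator model toward position 1.0.
pos, state = 0.0, (0.0, 0.0)
for _ in range(200):
    u, state = pid_step(1.0, pos, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01)
    pos += (u - pos) * 0.01  # hypothetical first-order plant response
```

A learning-based tuner replaces the hand-picked gains with per-device values; the control law itself stays this simple.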
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall, that radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
A Novel Approach for Integrated Shortest Path Finding Algorithm (ISPSA) Using Mesh Topologies and Networks-on-Chip (NOC)
A novel data dispatching and communication technique based on circulating networks of any network IP is suggested for multi-data transmission in multiprocessor systems using Networks-on-Chip (NoC). Wireless communication network management suffers from several drawbacks: heavy data losses, traffic congestion while sending data during packet scheduling, and low performance in varied networks under heavy workloads. To overcome these drawbacks, an Integrated Shortest Path Search Algorithm (ISPSA) using mesh topologies is proposed. The message is sent to an IP (Internet Protocol) address in the network until the specified bus accepts it, and communication between two nodes is possible at any one moment. On-chip wireless communication operating at specific frequencies is the most capable option for overcoming the multi-hop delay and excessive power consumption of metal interconnects in NoC devices. Each node can be indicated by a pair of coordinates (level, position), where the level is the tree's vertical level and the position is its horizontal arrangement in left-to-right order. Each gateway node is linked to two nodes in the following level, with all resource nodes located at the bottommost vertical level; the constraint of this topology is its narrow bisection width. Overall performance of the mesh topology was evaluated using the Xilinx 14.5 tool: the proposed method reduces data losses with better accuracy, and delay is reduced by 21%.
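The abstract does not spell out ISPSA's internals, but shortest-path routing over a mesh topology of the kind it targets can be illustrated with a plain breadth-first search on the node grid. This is a generic sketch of mesh shortest-path search, not the proposed algorithm itself.

```python
from collections import deque

def mesh_shortest_path(width, height, src, dst):
    """BFS over a width x height mesh; nodes are (x, y) coordinate tuples.
    Returns the minimal-hop path from src to dst as a list of nodes."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        x, y = queue.popleft()
        if (x, y) == dst:
            # Reconstruct the route by walking parent links back to src.
            path = [(x, y)]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in parent:
                parent[(nx, ny)] = (x, y)
                queue.append((nx, ny))
    return None

# On a 4x4 mesh, the minimal route from (0, 0) to (3, 2) takes 5 hops.
path = mesh_shortest_path(4, 4, (0, 0), (3, 2))
```

BFS guarantees a minimum-hop route on an unweighted mesh, which is why the hop count equals the Manhattan distance between the two nodes.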
Towards trustworthy computing on untrustworthy hardware
Historically, hardware was thought to be inherently secure and trusted due to its
obscurity and the isolated nature of its design and manufacturing. In the last two
decades, however, hardware trust and security have emerged as pressing issues.
Modern day hardware is surrounded by threats manifested mainly in undesired
modifications by untrusted parties in its supply chain, unauthorized and pirated
selling, injected faults, and system and microarchitectural level attacks. These threats,
if realized, are expected to push hardware to abnormal and unexpected behaviour
causing real-life damage and significantly undermining our trust in the electronic and
computing systems we use in our daily lives and in safety critical applications. A
large number of detective and preventive countermeasures have been proposed in
the literature. However, our knowledge of the potential consequences of real-life
threats to hardware trust is lacking, given the limited number of real-life
reports and the plethora of ways in which hardware trust could be undermined. With
this in mind, run-time monitoring of hardware combined with active mitigation of
attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed
as the last line of defence. This last line of defence allows us to face the issue of live
hardware mistrust rather than turning a blind eye to it or being helpless once it occurs.
This thesis proposes three different frameworks towards trustworthy computing
on untrustworthy hardware. The presented frameworks are adaptable to different
applications, independent of the design of the monitored elements, based on
autonomous security elements, and are computationally lightweight. The first
framework is concerned with explicit violations and breaches of trust at run-time,
with an untrustworthy on-chip communication interconnect presented as a potential
offender. The framework is based on the guiding principles of component guarding,
data tagging, and event verification. The second framework targets hardware elements
with inherently variable and unpredictable operational latency and proposes a
machine-learning based characterization of these latencies to infer undesired latency
extensions or denial of service attacks. The framework is implemented on a DDR3
DRAM after showing its vulnerability to obscured latency extension attacks. The
third framework studies the possibility of the deployment of untrustworthy hardware
elements in the analog front end, and the consequent integrity issues that might arise
at the analog-digital boundary of system on chips. The framework uses machine
learning methods and the unique temporal and arithmetic features of signals at this
boundary to monitor their integrity and assess their trust level.
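The second framework's idea, characterising an element's normal latency and flagging extensions at run time, can be sketched with a simple statistical stand-in for the thesis's machine-learning models. All function names, sample values and thresholds below are illustrative assumptions.

```python
import statistics

def fit_latency_profile(samples):
    """Characterise normal operational latency from trusted training
    measurements, using the median and median absolute deviation (MAD)."""
    med = statistics.median(samples)
    mad = statistics.median(abs(s - med) for s in samples) or 1e-9
    return med, mad

def is_suspicious(latency, profile, k=6.0):
    """Flag a run-time latency far outside the learnt profile as a possible
    latency-extension or denial-of-service attack."""
    med, mad = profile
    return abs(latency - med) / mad > k

# Hypothetical DRAM access latencies (ns) measured under trusted conditions.
profile = fit_latency_profile([13.8, 14.1, 14.0, 13.9, 14.2, 14.0, 13.7])
is_suspicious(14.1, profile)   # within the learnt profile: not flagged
is_suspicious(42.0, profile)   # stalled access: flagged
```

A learnt model can capture workload-dependent latency variation that a fixed threshold like this cannot, which is the framework's motivation for using machine learning.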
Optical Networks and Interconnects
The rapid evolution of communication technologies such as 5G and beyond relies
on optical networks to support challenging and ambitious requirements that
include both capacity and reliability. This chapter begins by giving an
overview of the evolution of optical access networks, focusing on Passive
Optical Networks (PONs). The development of the different PON standards and
requirements aiming at longer reach, higher client count and delivered
bandwidth are presented. PON virtualization is also introduced as a
flexibility enabler. Triggered by the increase in bandwidth supported by access
and aggregation network segments, core networks have also evolved, as presented
in the second part of the chapter. Scaling the physical infrastructure requires
high investment and hence, operators are considering alternatives to optimize
the use of the existing capacity. This chapter introduces different planning
problems such as Routing and Spectrum Assignment problems, placement problems
for regenerators and wavelength converters, and how to offer resilience to
different failures. An overview of control and management is also provided.
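As an illustration of the Routing and Spectrum Assignment problem mentioned above, a minimal first-fit spectrum assignment over a fixed path might look as follows. This is a sketch under simplified assumptions: a single demand, boolean slot occupancy per link, and no modulation-format choice.

```python
def first_fit_assign(link_spectra, path_links, demand_slots):
    """First-fit spectrum assignment: find the lowest-indexed block of
    contiguous slots free on every link of the path (spectrum continuity
    and contiguity constraints), then reserve it. Returns the start slot,
    or None if the demand is blocked."""
    n_slots = len(next(iter(link_spectra.values())))
    for start in range(n_slots - demand_slots + 1):
        block = range(start, start + demand_slots)
        if all(not link_spectra[link][s] for link in path_links for s in block):
            for link in path_links:
                for s in block:
                    link_spectra[link][s] = True  # reserve the slots
            return start
    return None

# Toy example: 8-slot links; link "a-b" already carries traffic in slots 0-1.
spectra = {"a-b": [True, True] + [False] * 6, "b-c": [False] * 8}
start = first_fit_assign(spectra, ["a-b", "b-c"], demand_slots=3)
```

Real RSA solvers jointly choose the route and the slots and must cope with fragmentation, which is what makes the problem NP-hard in general.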
Moreover, motivated by the increasing importance of data storage and data
processing, this chapter also addresses different aspects of optical data
center interconnects. Data centers have become critical infrastructure to
operate any service. They are also forced to take advantage of optical
technology in order to keep up with the growing capacity demand while
containing power consumption. This chapter gives an overview of different optical data center
network architectures as well as some expected directions to improve the
resource utilization and increase the network capacity.
A Phase Change Memory and DRAM Based Framework For Energy-Efficient and High-Speed In-Memory Stochastic Computing
Convolutional Neural Networks (CNNs) have proven to be highly effective in various fields related to Artificial Intelligence (AI) and Machine Learning (ML). However, the significant computational and memory requirements of CNNs make their processing highly compute- and memory-intensive. In particular, the multiply-accumulate (MAC) operation, which is a fundamental building block of CNNs, requires an enormous number of arithmetic operations. As the input dataset size increases, the traditional processor-centric von Neumann computing architecture becomes ill-suited for CNN-based applications. This results in exponentially higher latency and energy costs, making the processing of CNNs highly challenging.
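For concreteness, a single MAC-based dot product, the primitive repeated millions of times per CNN inference, looks like this. It is a generic sketch, not tied to any particular accelerator design.

```python
def mac(weights, activations):
    """Multiply-accumulate: the core arithmetic of a CNN layer. One
    convolution output is a single MAC over a kernel-sized window."""
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += w * a   # one multiply and one accumulate per element
    return acc

mac([1.0, -2.0, 0.5], [3.0, 1.0, 4.0])  # 1*3 - 2*1 + 0.5*4 = 3.0
```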
To overcome these challenges, researchers have explored the Processing-In Memory (PIM) technique, which involves placing the processing unit inside or near the memory unit. This approach reduces data migration length and utilizes the internal memory bandwidth at the memory chip level. However, developing a reliable PIM-based system with minimal hardware modifications and design complexity remains a significant challenge.
The proposed solution in the report suggests utilizing different memory technologies, such as Dynamic RAM (DRAM) and phase change memory (PCM), with stochastic arithmetic and minimal add-on logic. Stochastic computing is a technique that uses random bitstreams to perform arithmetic operations instead of traditional binary representation. This technique reduces the hardware requirements of CNN arithmetic operations, making it possible to implement them with minimal add-on logic.
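A minimal illustration of unipolar stochastic computing: encode each operand as a random bitstream whose density of ones equals its value, multiply with a single AND gate per bit, and recover the result by counting ones. The operand values and stream length below are arbitrary examples.

```python
import random

def to_stream(p, length, rng):
    """Encode a probability p in [0, 1] as a random bitstream: each bit is
    1 with probability p, so the mean of the stream approximates p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(a, b, length=4096, seed=0):
    """Unipolar stochastic multiplication: the bitwise AND of two
    independent streams has a density of ones approximating a * b."""
    rng = random.Random(seed)
    stream_a = to_stream(a, length, rng)
    stream_b = to_stream(b, length, rng)
    product_stream = [x & y for x, y in zip(stream_a, stream_b)]
    return sum(product_stream) / length  # stochastic-to-binary: count the ones

est = sc_multiply(0.5, 0.8)  # approximates 0.4 to within stream-length noise
```

The final division is the stochastic-to-binary (StoB) conversion: the accuracy improves with stream length at the cost of more cycles, which is the central trade-off of the approach.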
The report details the workflow for performing arithmetical operations used by CNNs, including MAC, activation, and floating-point functions. The proposed solution includes designs for a scalable Stochastic Number Generator (SNG), a DRAM CNN accelerator, a non-volatile memory (NVM) class PCRAM-based CNN accelerator, and DRAM-based stochastic-to-binary conversion (StoB) for in-situ deep learning. These designs utilize stochastic computing to reduce the hardware requirements of CNN arithmetic operations and enable energy- and time-efficient processing of CNNs.
The report also identifies future research directions for the proposed designs, including in-situ PCRAM-based SNG, ODIN (A Bit-Parallel Stochastic Arithmetic Based Accelerator for In-Situ Neural Network Processing in Phase Change RAM), ATRIA (Bit-Parallel Stochastic Arithmetic Based Accelerator for In-DRAM CNN Processing), and AGNI (In-Situ, Iso-Latency Stochastic-to-Binary Number Conversion for In-DRAM Deep Learning), and presents initial findings for these ideas.
In summary, the proposed solution in the report offers a comprehensive approach to address the challenges of processing CNNs, and the proposed designs have the potential to improve the energy and time efficiency of CNNs significantly. Using stochastic computing and different memory technologies enables the development of reliable PIM-based systems with minimal hardware modifications and design complexity, providing a promising path for the future of CNN-based applications.