64 research outputs found
Photonic Neural Networks and Optics-informed Deep Learning Fundamentals
The recent explosive compute growth, mainly fueled by the boost of AI and
DNNs, is currently instigating the demand for a novel computing paradigm that
can overcome the insurmountable barriers imposed by conventional electronic
computing architectures. PNNs implemented on silicon integration platforms
stand out as a promising candidate for NN hardware, offering the potential for
energy-efficient and ultra-fast computations by exploiting the unique
primitives of photonics, i.e., energy efficiency, THz bandwidth and low
latency. Thus far, several demonstrations have revealed the huge potential
of PNNs in performing both linear and non-linear NN operations at unparalleled
speed and energy consumption metrics. Transforming this potential into a
tangible reality for DL applications requires, however, a deep understanding of
the basic PNN principles, requirements and challenges across all constituent
architectural, technological and training aspects. In this tutorial, we,
initially, review the principles of DNNs along with their fundamental building
blocks, analyzing also the key mathematical operations needed for their
computation in a photonic hardware. Then, we investigate, through an intuitive
mathematical analysis, the interdependence of bit precision and energy
efficiency in analog photonic circuitry, discussing the opportunities and
challenges of PNNs. Subsequently, a performance overview of PNN architectures,
weight technologies and activation functions is presented, summarizing their
impact on speed, scalability and power consumption. Finally, we provide a
holistic overview of the optics-informed NN training framework that
incorporates the physical properties of photonic building blocks into the
training process in order to improve NN classification accuracy and
effectively elevate neuromorphic photonic hardware into high-performance DL
computational settings.
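The interdependence of bit precision and energy efficiency mentioned above can be illustrated with a back-of-the-envelope sketch. This is not the tutorial's own analysis, just a toy model assuming shot-noise-limited detection and the standard effective-number-of-bits (ENOB) formula from ADC design:

```python
import math

def effective_bits(snr_db: float) -> float:
    """Standard ADC relation: ENOB = (SNR_dB - 1.76) / 6.02."""
    return (snr_db - 1.76) / 6.02

def shot_noise_snr_db(n_photons: float) -> float:
    """Shot-noise-limited SNR of a photodetected signal grows with photon count."""
    return 10 * math.log10(n_photons)

# Each additional bit of precision costs roughly 4x more photons (energy) per operation.
for n in (1e2, 1e4, 1e6):
    print(f"{n:9.0e} photons/op -> {effective_bits(shot_noise_snr_db(n)):4.1f} bits")
```

Under these assumptions precision grows only logarithmically with optical energy per operation, which is why analog photonic accelerators trade bit precision for energy efficiency.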
MLGWSC-1: The first Machine Learning Gravitational-Wave Search Mock Data Challenge
We present the results of the first Machine Learning Gravitational-Wave Search Mock Data Challenge (MLGWSC-1). For this challenge, participating groups had to identify gravitational-wave signals from binary black hole mergers of increasing complexity and duration, embedded in progressively more realistic noise. The final of the 4 provided datasets contained real noise from the O3a observing run and signals up to a duration of 20 seconds, with the inclusion of precession effects and higher-order modes. We present the average sensitive distance and runtime for the 6 entered algorithms, derived from 1 month of test data unknown to the participants prior to submission. Of these, 4 are machine learning algorithms. We find that the best machine learning based algorithms are able to achieve up to 95% of the sensitive distance of matched-filtering based production analyses for simulated Gaussian noise at a false-alarm rate (FAR) of one per month. In contrast, for real noise, the leading machine learning search achieved 70%. For higher FARs, the differences in sensitive distance shrink to the point where select machine learning submissions outperform traditional search algorithms on some datasets. Our results show that current machine learning search algorithms may already be sensitive enough in limited parameter regions to be useful for some production settings. To improve the state of the art, machine learning algorithms need to reduce the false-alarm rates at which they are capable of detecting signals and extend their validity to regions of parameter space where modeled searches are computationally expensive to run. Based on our findings, we compile a list of research areas that we believe are the most important to elevate machine learning searches into an invaluable tool in gravitational-wave signal detection.
Comment: 25 pages, 6 figures, 4 tables; additional material available at
https://github.com/gwastro/ml-mock-data-challenge-
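The relationship between a false-alarm-rate threshold and detection efficiency used in the abstract can be sketched with toy numbers (all statistic values below are hypothetical and purely illustrative, not taken from the challenge):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ranking-statistic values from one month of noise-only (background) data,
# assuming one candidate trigger per minute.
background = rng.normal(0.0, 1.0, size=30 * 24 * 60)

# A FAR of one per month corresponds to the largest background statistic:
# only one noise trigger in the month exceeds this threshold.
far_per_month = 1
threshold = np.sort(background)[-far_per_month]

# Hypothetical statistics for simulated signal injections; the fraction above
# the threshold is the detection efficiency that feeds a sensitive-distance estimate.
injections = rng.normal(4.0, 1.5, size=1000)
efficiency = float(np.mean(injections > threshold))
print(f"threshold={threshold:.2f}, efficiency={efficiency:.2f}")
```

Lowering the allowed FAR raises the threshold and cuts efficiency, which is why the abstract reports sensitivity at a fixed FAR and why machine learning searches must push their usable FAR down to compete at strict thresholds.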
OpenDR: An Open Toolkit for Enabling High Performance, Low Footprint Deep Learning for Robotics
Existing Deep Learning (DL) frameworks typically do not provide ready-to-use solutions for robotics, where very specific learning, reasoning, and embodiment problems exist. Their relatively steep learning curve and the different methodologies employed by DL compared to traditional approaches, along with the high complexity of DL models, which often leads to the need for specialized hardware accelerators, further increase the effort and cost of employing DL models in robotics. Moreover, most existing DL methods follow a static inference paradigm, inherited from traditional computer vision pipelines, ignoring active perception, which can be employed to actively interact with the environment in order to increase perception accuracy. In this paper, we present the Open Deep Learning Toolkit for Robotics (OpenDR). OpenDR aims to develop an open, non-proprietary, efficient, and modular toolkit that can be easily used by robotics companies and research institutions to efficiently develop and deploy AI and cognition technologies in robotics applications, providing a solid step towards addressing the aforementioned challenges. We also detail the design choices, along with an abstract interface that was created to overcome these challenges. This interface can describe various robotic tasks, spanning beyond traditional DL cognition and inference as known by existing frameworks, incorporating openness, homogeneity and robotics-oriented perception, e.g. through active perception, as its core design principles.
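A task-agnostic abstract interface of the kind described could look like the following minimal sketch. The names and methods here are hypothetical illustrations of the design idea, not OpenDR's actual API:

```python
from abc import ABC, abstractmethod
from typing import Any

class Learner(ABC):
    """Hypothetical task-agnostic interface: every robotic learning task
    exposes the same train/infer surface, regardless of the underlying model."""

    @abstractmethod
    def fit(self, dataset: Any) -> None:
        """Train on a dataset."""

    @abstractmethod
    def infer(self, data: Any) -> Any:
        """Run inference on a single input."""

    def optimize(self) -> None:
        """Optional hook for low-footprint deployment (e.g. quantization)."""

class ThresholdDetector(Learner):
    """Toy concrete learner: learns the training mean, flags inputs above it."""

    def __init__(self) -> None:
        self.threshold = 0.0

    def fit(self, dataset):
        self.threshold = sum(dataset) / len(dataset)

    def infer(self, data):
        return data > self.threshold

detector = ThresholdDetector()
detector.fit([1.0, 2.0, 3.0])
print(detector.infer(2.5))  # 2.5 exceeds the learned mean of 2.0
```

The point of such an interface is that downstream robotics code depends only on `fit`/`infer`, so models can be swapped or optimized for embedded hardware without touching the application.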
Digital image watermarking: its formal model, fundamental properties and possible attacks
While formal definitions and security proofs are well established in some fields like cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present their development is usually informal, ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state of the art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance is exemplified for different image applications. We also define a set of possible attacks using our model, showing different winning scenarios depending on the adversary's capabilities. It is envisaged that, with proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes.
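A generic keyed model of the kind described decomposes into component functions for watermark generation, embedding, and detection. The sketch below is one minimal instance under assumed simplifications (LSB embedding and a match-rate detector are purely illustrative; the paper's model is independent of any particular embedding technique):

```python
import hashlib
import numpy as np

def generate(key: bytes, n: int) -> np.ndarray:
    """Key-dependent watermark: pseudorandom bits derived from the secret key."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return np.random.default_rng(seed).integers(0, 2, size=n, dtype=np.uint8)

def embed(image: np.ndarray, key: bytes) -> np.ndarray:
    """Embedding function: write the watermark into the least significant bits."""
    w = generate(key, image.size).reshape(image.shape)
    return (image & 0xFE) | w

def detect(image: np.ndarray, key: bytes) -> bool:
    """Detection function: the fraction of matching LSBs must clearly
    exceed the 0.5 expected by chance for an unmarked image."""
    w = generate(key, image.size).reshape(image.shape)
    return float(np.mean((image & 1) == w)) > 0.9

img = np.random.default_rng(1).integers(0, 256, size=(8, 8), dtype=np.uint8)
marked = embed(img, b"secret")
print(detect(marked, b"secret"), detect(img, b"secret"))
```

Separating generation, embedding, and detection this way is what lets properties (robustness, false-positive rate) and adversary games be stated per component rather than per scheme.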
3D Image Watermarking Robust To Geometric Distortions
A novel blind method for 3D image watermarking robust against geometric distortions is proposed. A ternary watermark is embedded in a grayscale or a color 3D volume. Construction of watermarks having an appropriate structure enables fast and robust watermark detection even after several geometric distortions of the watermarked volume. Simulation results indicate the ability of the proposed method to deal with the aforementioned attacks. The proposed method is also robust against lossy compression up to a certain compression ratio. The experiments conducted indicate the superiority of the proposed method.
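Additive embedding of a ternary watermark and blind correlation detection, as described above, can be sketched as follows (the volume, watermark strength, and detection margin are all illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy grayscale volume and a ternary (-1, 0, +1) watermark of the same shape.
volume = rng.integers(0, 256, size=(32, 32, 32)).astype(float)
watermark = rng.integers(-1, 2, size=volume.shape).astype(float)

strength = 4.0
marked = volume + strength * watermark  # additive embedding

def detect(vol: np.ndarray, w: np.ndarray) -> float:
    """Blind correlation detector: needs no access to the original volume.
    Subtracting the volume mean suppresses the host signal's DC term."""
    return float(np.mean((vol - vol.mean()) * w))

# Response is about strength * E[w^2] = strength * 2/3 for the correct
# watermark, and close to 0 for a wrong (e.g. wrong-key) watermark.
wrong = rng.integers(-1, 2, size=volume.shape).astype(float)
print(f"right: {detect(marked, watermark):.2f}  wrong: {detect(marked, wrong):.2f}")
```

Because the detector only correlates the (possibly distorted) volume with a regenerated watermark, robustness to geometric distortions hinges on the watermark's structure surviving, or being resynchronized after, the distortion.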