
    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, that radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.

    Entity Linking in Low-Annotation Data Settings

    Recent advances in natural language processing have focused on applying and adapting large pretrained language models to specific tasks. These models, such as BERT (Devlin et al., 2019) and BART (Lewis et al., 2020a), are pretrained on massive amounts of unlabeled text across a variety of domains. The impact of these pretrained models is visible in the task of entity linking, where a mention of an entity in unstructured text is matched to the relevant entry in a knowledge base. State-of-the-art linkers, such as Wu et al. (2020) and De Cao et al. (2021), leverage pretrained models as a foundation for their systems. However, these models are also trained on large amounts of annotated data, which is crucial to their performance. Often these large datasets consist of domains that are easily annotated, such as Wikipedia or newswire text. Tailoring NLP tools to such a narrow range of textual domains, however, severely restricts their use in the real world. Many other domains, such as medicine or law, do not have large amounts of entity linking annotations available. Entity linking, which bridges the gap between massive amounts of unstructured text and structured repositories of knowledge, is equally crucial in these domains. Yet tools trained on newswire or Wikipedia annotations are unlikely to be well suited to identifying medical conditions mentioned in clinical notes. As most annotation efforts focus on English, similar challenges arise in building systems for non-English text. These domains often have relatively little annotated data, so it is often necessary to draw on other types of domain-specific data, such as unannotated text or highly curated structured knowledge bases. In these settings, it is crucial to translate lessons from tools tailored to high-annotation domains into algorithms suited to low-annotation domains. This requires both leveraging broader types of data and understanding the unique challenges present in each domain.
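To make the linking task concrete, here is a minimal, self-contained sketch of the bi-encoder idea behind linkers such as Wu et al. (2020): embed the mention (with its context) and each knowledge-base entry into a shared space, then link to the nearest entry. The toy bag-of-words embedding and the `kb` dictionary below are illustrative stand-ins, not the systems described above, which use pretrained transformer encoders.

```python
import math
from collections import Counter

def embed(text):
    """Toy encoder: lowercase bag-of-words counts (a stand-in for BERT-style vectors)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link(mention, kb):
    """Return the KB entry whose description is most similar to the mention context."""
    return max(kb, key=lambda entry: cosine(embed(mention), embed(kb[entry])))

# Hypothetical two-entry knowledge base for illustration.
kb = {
    "Paris_(city)": "capital city of France on the Seine",
    "Paris_(mythology)": "Trojan prince who abducted Helen",
}
print(link("the city of Paris in France", kb))  # → Paris_(city)
```

In a real low-annotation setting the encoder would be the expensive part; the nearest-neighbour linking step itself stays this simple.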

    1-D broadside-radiating leaky-wave antenna based on a numerically synthesized impedance surface

    A newly developed deterministic numerical technique for the automated design of metasurface antennas is applied here for the first time to the design of a 1-D printed Leaky-Wave Antenna (LWA) for broadside radiation. The surface impedance synthesis process does not require any a priori knowledge of the impedance pattern, and starts from a mask constraint on the desired far field and practical bounds on the unit-cell impedance values. The designed reactance surface for broadside radiation exhibits a non-conventional patterning; this highlights the merit of using an automated design process for a problem well known to be challenging for analytical methods. The antenna is physically implemented as an array of metal strips with varying gap widths, and simulation results show very good agreement with the predicted performance.

    Energy Data Analytics for Smart Meter Data

    The principal advantage of smart electricity meters is their ability to transfer digitized electricity consumption data to remote processing systems. The data collected by these devices make the realization of many novel use cases possible, providing benefits to electricity providers and customers alike. This book includes 14 research articles that explore and exploit the information content of smart meter data, and provides insights into the realization of new digital solutions and services that support the transition towards a sustainable energy system. This volume has been edited by Andreas Reinhardt, head of the Energy Informatics research group at Technische Universität Clausthal, Germany, and Lucas Pereira, research fellow at Técnico Lisboa, Portugal.

    Sensors Fault Diagnosis Trends and Applications

    Fault diagnosis has always been a concern for industry. In general, diagnosis in complex systems requires the acquisition of information from sensors and the processing and extraction of the features required for the classification or identification of faults. Fault diagnosis of sensors is therefore clearly important, as faulty information from a sensor may lead to misleading conclusions about the whole system. As engineering systems grow in size and complexity, it becomes more and more important to diagnose faulty behavior before it can lead to total failure. In light of these issues, this book is dedicated to trends and applications in modern sensor fault diagnosis.

    La quarta rivoluzione industriale tra opportunità e disuguaglianze

    In recent years a new revolution has been emerging, connected to the growth of advanced technologies (such as big data, algorithms, digital platforms, sensors) and to the intensity of their impact at the economic, social and cultural level. Within the debate on this so-called fourth industrial revolution, the text focuses on the analysis of the changes, both generated and perceived, with an emphasis on development opportunities and critical issues, as well as social and territorial inequalities. Considering contributions provided by some humanistic and social disciplines, the book introduces interdisciplinary research perspectives and explores new methods of analysis that combine the evolutionary dimension (typical of political history and economic history) with the spatial (prevalent in geography) and the narrative dimension (evident in cultural history and geography and in particular in disciplines related to image, representation, cinema)

    Exploring Novel Datasets and Methods for the Study of False Information

    False information has increasingly become a subject of much discussion. Recently, disinformation has been linked to causing massive social harm, leading to the decline of democracy, and hindering global efforts in an international health crisis. In computing, and specifically Natural Language Processing (NLP), much effort has been put into tackling this problem. This has led to an increase in research on automated fact-checking and the language of disinformation. However, current research suffers from looking at a limited variety of sources. Much focus has, understandably, been given to platforms such as Twitter, Facebook and WhatsApp, as well as to traditional news articles online. Few works in NLP have looked at the specific communities where false information ferments. There has also been something of a topical constraint, with most examples of “Fake News” relating to current political issues. This thesis contributes to this rapidly growing research area by looking wider for new sources of data, and by developing methods to analyse them. Specifically, it introduces two new datasets to the field and performs analyses on both. The first of these, a corpus of April Fools hoaxes, is analysed with a feature-driven approach to examine the generalisability of different features in the classification of false information. This is the first corpus of April Fools news articles, and it is publicly available for researchers. The second dataset, a corpus of online Flat Earth communities, is also the first of its kind. In addition to performing the first NLP analysis of the language of Flat Earth fora, an exploration is performed to look for the existence of sub-groups within these communities, along with an analysis of language change. To support this analysis, language change methods are surveyed, and a new method for comparing the language change of groups over time is developed.
The methods used, drawn together from both NLP and Corpus Linguistics, provide new insight into the language of false information and the way communities discuss it.
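As a deliberately simplified illustration of comparing language change between groups, one could track how a word's relative frequency shifts between two time slices of a group's corpus. The toy corpora and the `freq_shift` helper below are hypothetical stand-ins, not the method developed in the thesis.

```python
from collections import Counter

def rel_freq(tokens):
    """Relative frequency of each token in a corpus slice (0 for unseen words)."""
    counts = Counter(tokens)
    total = len(tokens)
    return Counter({word: counts[word] / total for word in counts})

def freq_shift(early, late, word):
    """Change in a word's relative frequency from the early to the late slice."""
    return rel_freq(late)[word] - rel_freq(early)[word]

# Hypothetical token streams from one community at two points in time.
group_a_early = "earth is round round science".split()
group_a_late = "earth is flat flat flat dome".split()

print(round(freq_shift(group_a_early, group_a_late, "flat"), 2))  # → 0.5
```

Computing such shifts for every word in two groups' corpora, and then contrasting them, gives one simple baseline for asking whether the groups' language is changing in the same direction.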

    Ego Things: Networks of Self-Aware Intelligent Objects

    PhD thesis. There is an increasing demand for developing intelligence and awareness in artificial agents to improve autonomy, robustness, and scalability, and this has been investigated in various research fields such as machine learning, robotics, and software engineering. Moreover, it is crucial to model such an agent's interaction with the surrounding environment and with other agents in order to represent collaborative tasks. In this thesis, we propose several approaches to developing multi-modal self-awareness in agents and multi-modal collective awareness (CA) for multiple networked intelligent agents, focusing on the ability to detect abnormal situations. The first part of the thesis proposes a novel approach to building self-awareness in dynamic agents to detect abnormalities based on multi-sensory data and feature selection. By considering several sensory data features, multiple inference models are learned, and the most distinct features for predicting future instances and detecting possible abnormalities are obtained. The proposed method can select the optimal set of features to be shared in networking operations, so that state prediction, decision-making, and abnormality detection are favored. The second part proposes different approaches for developing collective awareness in an agent network. Each agent of the network is considered an Internet of Things (IoT) node equipped with machine learning capabilities. Collective awareness aims to provide the network with updated causal knowledge of the state of execution of the actions of each node performing a joint task, with particular attention to anomalies that can arise. Data-driven dynamic Bayesian models, learned from multi-sensory data recorded during the normal realization of a joint task (the agent network's experience), are used for distributed state estimation of agents and detection of abnormalities.
Moreover, the effects of networking protocols and communications on the estimation of states and abnormalities are analyzed. Finally, abnormality estimation is performed at the model's different abstraction levels, and the models' interpretability is explained. In this work, interpretability is the capability to use anomaly data to modify the model so that it makes accurate inferences in the future.
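The abnormality-detection idea described above can be illustrated with a deliberately simplified stand-in: learn the statistics of state transitions from "normal" multi-sensory runs, then flag new transitions whose deviation is improbably large. The Gaussian one-step model and all numbers below are assumptions for illustration, not the thesis's dynamic Bayesian models.

```python
import math

def fit_transition_model(sequences):
    """Estimate mean and std of one-step increments from normal training runs."""
    deltas = [b - a for seq in sequences for a, b in zip(seq, seq[1:])]
    mu = sum(deltas) / len(deltas)
    var = sum((d - mu) ** 2 for d in deltas) / len(deltas)
    return mu, math.sqrt(var) or 1e-9  # avoid zero std on degenerate data

def abnormal(seq, mu, sigma, threshold=3.0):
    """Indices where the observed increment deviates by more than threshold sigmas."""
    return [i for i, (a, b) in enumerate(zip(seq, seq[1:]))
            if abs((b - a) - mu) / sigma > threshold]

# Hypothetical scalar sensor traces recorded during normal operation.
normal_runs = [[0, 1, 2, 3, 4], [0, 1, 2, 3], [5, 6, 7, 8]]
mu, sigma = fit_transition_model(normal_runs)
print(abnormal([0, 1, 2, 10, 11], mu, sigma))  # → [2]: the 2 -> 10 jump is flagged
```

A networked version would share only the most informative features between nodes, as the abstract describes, rather than the raw traces used here.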

    Data-Driven Approach based on Deep Learning and Probabilistic Models for PHY-Layer Security in AI-enabled Cognitive Radio IoT

    PhD thesis. Cognitive Radio Internet of Things (CR-IoT) has revolutionized almost every field of life and reshaped the technological world. Several tiny devices are seamlessly connected in a CR-IoT network to perform various tasks in many applications. Nevertheless, CR-IoT suffers from malicious attacks that pulverize communication and perturb network performance. Therefore, it has recently been envisaged to introduce higher-level Artificial Intelligence (AI) by incorporating Self-Awareness (SA) capabilities into CR-IoT objects, enabling CR-IoT networks to autonomously establish secure transmission against vicious attacks. In this context, sub-band information from the Orthogonal Frequency Division Multiplexing (OFDM) modulated transmission in the spectrum is extracted at the radio receiver terminal, and a generalized state vector (GS) is formed containing low-dimension in-phase and quadrature components. Accordingly, a probabilistic method based on learning a switching Dynamic Bayesian Network (DBN) from OFDM transmission with no abnormalities is proposed to statistically model signal behaviors inside the CR-IoT spectrum. A Bayesian filter, the Markov Jump Particle Filter (MJPF), is implemented to perform state estimation and capture malicious attacks. Subsequently, a GS containing a higher number of subcarriers is investigated. In this connection, a Variational Autoencoder (VAE) is used as a deep learning technique to extract features from high-dimension radio signals into a low-dimension latent space z, and the DBN is learned from a GS containing latent-space data. Afterwards, to perform state estimation and capture abnormalities in the spectrum, an Adapted Markov Jump Particle Filter (A-MJPF) is deployed. The proposed method can capture anomalies that arise either from jammer attacks on the transmission or from cognitive devices in a network encountering transmission sources not observed previously.
The performance is assessed using the receiver
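A plain bootstrap particle filter can illustrate the filtering idea at the core of the abstract (the thesis's MJPF adds Markov jump dynamics, which this sketch omits): particles track the signal state, and a collapse of the total particle weight signals an observation that the model learned from normal transmissions cannot explain, i.e. a possible attack. All numbers and names here are illustrative assumptions.

```python
import math
import random

random.seed(0)  # deterministic run for the example

def step(particles, obs, proc_std=0.5, meas_std=0.5):
    """Propagate particles, weight them against the observation, and resample.

    Returns (new_particles, anomaly_flag); the flag is set when no particle
    explains the observation, i.e. the total weight collapses."""
    moved = [p + random.gauss(0, proc_std) for p in particles]
    weights = [math.exp(-((obs - p) ** 2) / (2 * meas_std ** 2)) for p in moved]
    total = sum(weights)
    if total < 1e-6:          # model cannot explain this observation
        return moved, True
    resampled = random.choices(moved, weights=weights, k=len(moved))
    return resampled, False

particles = [0.0] * 200
for obs in [0.1, 0.2, 0.1, 9.0]:   # last observation is a jammer-like jump
    particles, anomaly = step(particles, obs)
    if anomaly:
        print("abnormality at observation", obs)
```

In the thesis's setting the state would be the (latent-space) generalized state vector rather than a scalar, and the jump component would switch between learned signal behaviors.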