510 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement reflects the firing patterns of the relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is thus a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control is apparent. Although myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry.
    This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for the automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture, with an accompanying training framework, from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce the training burden.
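    Paper I's multi-label CNN decoder can be pictured with a minimal sketch like the one below, which assumes windowed HD-sEMG input of shape (channels, time) and a fixed set of movement labels; the layer sizes, label count, and use of PyTorch are illustrative assumptions, not the architecture reported in the dissertation.

        # Minimal sketch of a CNN for multi-label movement decoding from windowed
        # HD-sEMG. Shapes, layer sizes, and label count are illustrative assumptions.
        import torch
        import torch.nn as nn

        class EmgCnn(nn.Module):
            def __init__(self, n_channels: int = 64, n_labels: int = 12):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 128, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(128, 128, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),   # collapse the time axis
                )
                self.head = nn.Linear(128, n_labels)  # one logit per movement label

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x: (batch, channels, time) window of EMG samples
                z = self.features(x).squeeze(-1)
                return self.head(z)  # train with BCEWithLogitsLoss for multi-label output

        model = EmgCnn()
        window = torch.randn(8, 64, 200)   # batch of 8 windows, 64 electrodes, 200 samples
        logits = model(window)             # (8, 12); sigmoid > 0.5 marks active movements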

    New perspectives on A.I. in sentencing. Human decision-making between risk assessment tools and protection of human rights.

    The aim of this thesis is to investigate a field that until a few years ago was foreign to and distant from the penal system. The purpose of this undertaking is to account for the role that technology could play in the Italian criminal law system. More specifically, this thesis attempts to scrutinize a very intricate phase of adjudication: after deciding on the type of an individual's liability, a judge must decide on the severity of the penalty. This type of decision implies a prognostic assessment that looks to the future. It is precisely in this field of prognostic assessment that, as has already happened in the United States, such instruments and processes are being introduced not only in the pre-trial phase but also in the decision-making phase. In this contribution, we attempt to describe the current state of the field, selecting, as a matter of method, the most relevant or most used tools. Using comparative and qualitative methods, the uses of some of these instruments in supranational legal systems are analyzed. Focusing on the Italian system, an attempt was made to investigate the nature of an individual's 'social dangerousness' (pericolosità sociale) and capacity to commit offences, types of assessment that are fundamental in the Italian system because they form part of various decisions, including the choice of the best sanctioning treatment. Attention was turned to this latter field because it is believed that the judge does not always have the time, the means, and the ability to assess all the elements concerning a subject and identify the best 'individualizing' treatment, as required to fully realize the function of Article 27, paragraph 3 of the Constitution.

    BDS GNSS for Earth Observation

    For millennia, human communities have wondered about the possibility of observing phenomena in their surroundings, and in particular those affecting the Earth on which they live. More generally, this can be conceptually defined as Earth observation (EO): the collection of information about the biological, chemical, and physical systems of planet Earth. It can be undertaken through sensors in direct contact with the ground, through airborne platforms (such as weather balloons and stations), or through remote-sensing technologies. However, the definition of EO has only become significant in the last 50 years, since it has become possible to place artificial satellites in Earth orbit. Referring strictly to civil applications, satellites of this type were initially designed to provide satellite images; later, their purpose expanded to include the study of land characteristics, growing vegetation, crops, and environmental pollution. The data collected are used for several purposes, including the identification of natural resources and the production of accurate cartography. Satellite observations can cover the land, the atmosphere, and the oceans. Remote-sensing satellites may be equipped with passive instrumentation, such as infrared sensors or cameras imaging in the visible spectrum, or with active instrumentation, such as radar. Generally, such satellites are non-geostationary: they move along orbits inclined with respect to the Earth's equatorial plane, often polar orbits, at low or medium altitude (Low Earth Orbit, LEO, or Medium Earth Orbit, MEO), thus covering the entire Earth's surface in a certain scan time (properly called 'temporal resolution'), i.e., in a certain number of orbits around the Earth. The first remote-sensing satellites belonged to the American NASA/USGS Landsat Program; subsequently, the European ENVISAT (ENVironmental SATellite), ERS (European Remote-Sensing satellite), and RapidEye satellites, the French SPOT (Satellite Pour l'Observation de la Terre), and the Canadian RADARSAT satellites were launched. The IKONOS, QuickBird, and GeoEye-1 satellites were dedicated to cartography. The WorldView-1 and WorldView-2 satellites and the COSMO-SkyMed system are more recent. The latest generation are low-payload platforms known as Small Satellites, e.g., the Chinese BuFeng-1 and Fengyun-3 series. Global Navigation Satellite Systems (GNSSs) have also captured the attention of researchers worldwide for a multitude of Earth monitoring and exploration applications. Over the past 40 years, GNSSs have become an essential part of many human activities. As is widely noted, there are currently four fully operational GNSSs; two of these were developed for military purposes (the American NAVSTAR GPS and the Russian GLONASS), whilst the other two were developed for civil purposes: the Chinese BeiDou Navigation Satellite System (BDS) and the European Galileo. In addition, many other regional navigation satellite systems, such as the South Korean Positioning System (KPS), the Japanese Quasi-Zenith Satellite System (QZSS), and the Indian Regional Navigation Satellite System (IRNSS/NavIC), will become available in the next few years, with enormous potential for scientific applications and geomatics professionals. In addition to their traditional role of providing global positioning, navigation, and timing (PNT) information, GNSS navigation signals are now being used in new and innovative ways.
    Across the globe, new fields of scientific study are opening up to examine how these signals can provide information about the characteristics of the atmosphere and even about the surfaces from which they are reflected before being collected by a receiver. EO researchers monitor global environmental systems using in situ and remote monitoring tools. Their findings provide tools to support decision makers in various areas of interest, from security to the natural environment. GNSS signals are considered an important new source of information because they are a free, real-time, and globally available resource for the EO community.
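    To make the LEO/MEO distinction and the notion of temporal resolution concrete, a small back-of-the-envelope computation of orbital period from altitude (Kepler's third law) is sketched below; the altitudes are typical values chosen for illustration and are not taken from the text.

        # Illustrative only: orbital period from altitude via Kepler's third law,
        # showing why a LEO remote-sensing satellite completes far more orbits per
        # day than a MEO navigation satellite. Altitudes are typical assumed values.
        import math

        MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
        R_EARTH = 6_371_000.0       # mean Earth radius, m

        def orbital_period_minutes(altitude_m: float) -> float:
            a = R_EARTH + altitude_m                        # semi-major axis of a circular orbit
            return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

        for name, alt_km in [("LEO (~700 km)", 700), ("MEO (~20,200 km)", 20_200)]:
            t = orbital_period_minutes(alt_km * 1_000)
            print(f"{name}: period ~{t:.0f} min, ~{24 * 60 / t:.1f} orbits per day")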

    Security and Privacy for Modern Wireless Communication Systems

    This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues in modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretic foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical-layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency-constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical-layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node–edge–cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; and security and privacy design for underwater communication networks.

    Toward Dynamic Social-Aware Networking Beyond Fifth Generation

    The rise of the intelligent information world presents significant challenges for the telecommunication industry in meeting the service-level requirements of future applications and incorporating societal and behavioral awareness into Internet of Things (IoT) objects. Social Digital Twins (SDTs), or Digital Twins augmented with social capabilities, have the potential to revolutionize digital transformation and meet the connectivity, computing, and storage needs of IoT devices in dynamic Fifth-Generation (5G) and Beyond Fifth-Generation (B5G) networks. This research focuses on enabling dynamic social-aware B5G networking. The main contributions of this work include (i) the design of a reference architecture for the orchestration of SDTs at the network edge to accelerate the service discovery procedure across the Social Internet of Things (SIoT); (ii) a methodology to evaluate the highly dynamic system performance, jointly considering communication and computing resources; and (iii) a set of practical conclusions and outcomes helpful in designing future digital twin-enabled B5G networks. Specifically, we propose an orchestration scheme for SDTs and an SIoT-Edge framework aligned with the Multi-access Edge Computing (MEC) architecture ratified by the European Telecommunications Standards Institute (ETSI). We formulate the optimal placement of SDTs as a Quadratic Assignment Problem (QAP) and propose a graph-based approximation scheme that considers the different types of IoT devices, their social features, their mobility patterns, and the limited computing resources of edge servers. We also study appropriate intervals for re-optimizing the SDT deployment at the network edge. The results demonstrate that accounting for social features in SDT placement offers considerable improvements in the SIoT browsing procedure.
    Moreover, recent advancements in wireless communications, edge computing, and intelligent device technologies are expected to promote the growth of the SIoT with pervasive sensing and computing capabilities, ensuring seamless connections among SIoT objects. We then offer a performance evaluation methodology for eXtended Reality (XR) services in edge-assisted wireless networks and propose fluid approximations to characterize the XR content evolution. The approach captures the time and space dynamics of the content distribution process during its transient phase, including time-varying loads, which are affected by arrival, transition, and departure processes. We examine the effects of XR user mobility on both communication and computing patterns. The results demonstrate that the communication and computing planes are the key barriers to meeting the requirement for real-time transmissions. Furthermore, due to the trend toward immersive, interactive, and contextualized experiences, new use cases affect user mobility patterns and, therefore, system performance.
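    As an illustration of the QAP view of SDT placement mentioned above, the toy sketch below minimizes the sum of pairwise social interaction intensity multiplied by the latency between hosting edge servers, by brute force; the matrices, the objective weighting, and the exhaustive search are illustrative assumptions and do not reproduce the thesis's exact formulation, capacity constraints, or graph-based approximation.

        # Toy QAP-style objective for placing Social Digital Twins on edge servers:
        # minimize sum over twin pairs of (interaction intensity) x (inter-server latency).
        # All numbers and the brute-force search are illustrative assumptions.
        from itertools import permutations

        # interaction[i][j]: assumed message-exchange intensity between SDT i and SDT j
        interaction = [[0, 5, 1],
                       [5, 0, 2],
                       [1, 2, 0]]
        # latency[a][b]: assumed network latency (ms) between edge servers a and b
        latency = [[0, 4, 9],
                   [4, 0, 3],
                   [9, 3, 0]]

        def placement_cost(assign):
            # assign[i] = index of the edge server hosting SDT i
            n = len(assign)
            return sum(interaction[i][j] * latency[assign[i]][assign[j]]
                       for i in range(n) for j in range(n))

        best = min(permutations(range(3)), key=placement_cost)
        print("best placement:", best, "cost:", placement_cost(best))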

    Demand Response in Smart Grids

    The Special Issue “Demand Response in Smart Grids” includes 11 papers on a variety of topics. The success of this Special Issue demonstrates the relevance of demand response programs and events in the operation of power and energy systems, both at the distribution level and at the wider power system level. This reprint addresses the design, implementation, and operation of demand response programs, with a focus on methods and techniques to achieve optimized operation, as well as on the electricity consumer.

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project is a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. In the evaluation, the algorithm reached 97% accuracy and a 99% weighted average for precision, recall, and F1 score. In addition, we tested the model on some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
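    A BiLSTM sequence tagger of the kind described can be sketched as follows; the vocabulary size, label inventory, input encoding, and dimensions are illustrative assumptions rather than the authors' implementation.

        # Minimal sketch of a BiLSTM tagger: it maps a sequence of token ids (e.g.,
        # syllables of a verse) to per-token metrical labels. All sizes are assumed.
        import torch
        import torch.nn as nn

        class ScansionTagger(nn.Module):
            def __init__(self, vocab_size=2000, n_labels=5, emb=64, hidden=128):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb)
                self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
                self.out = nn.Linear(2 * hidden, n_labels)   # one label per token

            def forward(self, tokens: torch.Tensor) -> torch.Tensor:
                # tokens: (batch, seq_len) integer ids
                h, _ = self.bilstm(self.embed(tokens))
                return self.out(h)                            # (batch, seq_len, n_labels)

        tagger = ScansionTagger()
        verse = torch.randint(0, 2000, (1, 10))               # one verse of 10 tokens
        labels = tagger(verse).argmax(-1)                     # predicted metrical pattern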

    The text classification pipeline: Starting shallow, going deeper

    An increasingly relevant and crucial subfield of Natural Language Processing (NLP), tackled in this PhD thesis from a computer science and engineering perspective, is Text Classification (TC). In this field too, the exceptional success of deep learning has sparked a boom over the past ten years. Text retrieval and categorization, information extraction, and summarization all rely heavily on TC. The literature has presented numerous datasets, models, and evaluation criteria. Even though languages such as Arabic, Chinese, and Hindi are employed in several works, from a computer science perspective the most used and most frequently referenced language in the TC literature is English; this is also the language mainly referenced in the rest of this PhD thesis. Even though numerous machine learning techniques have shown outstanding results, a classifier's effectiveness depends on its capability to comprehend intricate relations and non-linear correlations in texts. To achieve this level of understanding, it is necessary to pay attention not only to the architecture of a model but also to the other stages of the TC pipeline. In an NLP framework, a range of text representation techniques and model designs have emerged, including large language models, which are capable of turning massive amounts of text into useful vector representations that effectively capture semantically significant information. The fact that this field has been investigated by numerous communities, including data mining, linguistics, and information retrieval, is an aspect of crucial interest. These communities frequently have some overlap, but are mostly separate and do their research on their own. Bringing researchers from these groups together to improve the multidisciplinary comprehension of the field is one of the objectives of this dissertation. Additionally, this dissertation examines text mining from both a traditional and a modern perspective.
    This thesis covers the whole TC pipeline in detail; its main contribution, however, is to investigate how every element of the TC pipeline affects the final performance of a TC model. The pipeline is discussed end to end, including both traditional and the most recent deep learning-based models, and consists of the State-Of-The-Art (SOTA) datasets used in the literature as benchmarks, text preprocessing, text representation, machine learning models for TC, evaluation metrics, and current SOTA results. In each chapter of this dissertation, I go over each of these steps, covering both the technical advancements and my most significant and recent findings from experiments and novel models. The advantages and disadvantages of the various options are also listed, along with a thorough comparison of the different approaches. Each chapter ends with my contributions, experimental evaluations, and discussions of the results I obtained during my three-year PhD course. The experiments and analyses related to each chapter (i.e., to each element of the TC pipeline) are the main contributions I provide, extending the basic knowledge of a regular survey on TC.
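    The "shallow" end of the pipeline discussed above (preprocessing, representation, a classical model, and evaluation) can be illustrated with a compact scikit-learn sketch; the toy texts, labels, and parameters are placeholders, not a benchmark or configuration from the thesis.

        # Compact sketch of a shallow TC pipeline: preprocessing + TF-IDF representation
        # + linear classifier + evaluation. Toy texts and labels are placeholders.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.metrics import classification_report

        texts = ["cheap flights and hotel deals", "goal scored in the last minute",
                 "new GPU benchmarks released", "transfer window rumours"]
        labels = ["travel", "sport", "tech", "sport"]

        pipeline = make_pipeline(
            TfidfVectorizer(lowercase=True, stop_words="english"),  # preprocessing + representation
            LogisticRegression(max_iter=1000),                      # shallow classifier
        )
        pipeline.fit(texts, labels)
        preds = pipeline.predict(texts)
        print(classification_report(labels, preds, zero_division=0))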