927 research outputs found

    Control in technological systems and physical intelligence: an emerging theory

    The transduction and processing of physical information is becoming important in a range of research fields, from the design of materials and virtual environments to the dynamics of cellular microenvironments. Previous approaches such as morphological computation/soft robotics, neuromechanics, and embodiment have provided valuable insight. This work treats haptics, proprioception, and physical sensing as parts of the same subject. This presentation introduces three design criteria for applying physical intelligence to engineering applications. These criteria share several properties, which inspire two types of end-effector model: stochastic (based on a spring) and deterministic (based on a piezomechanical array). The generalized behavior and output dynamics of these models are characterized by three findings summarized from previous work. In conclusion, future directions for modeling neural control using a neuromorphic approach will be discussed.
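    As a rough illustration of what a stochastic, spring-based end-effector model might look like, the sketch below simulates a damped spring-mass system driven by random forcing (an Euler-Maruyama style update). The function name, parameter values, and noise model are assumptions chosen for illustration, not details taken from the presentation.

```python
# Illustrative sketch only: a stochastic spring-based end-effector, modeled here
# as a damped spring-mass system with a random driving force.
import numpy as np

def simulate_spring_end_effector(steps=1000, dt=1e-3, k=50.0, c=2.0, m=0.1,
                                 noise_std=0.5, seed=0):
    """Return the displacement trace of a noisy spring-mass-damper (assumed model)."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0                      # displacement and velocity
    trace = np.empty(steps)
    for t in range(steps):
        # Deterministic spring and damping forces plus a stochastic drive,
        # scaled so the noise contribution follows sqrt(dt) (Euler-Maruyama).
        force = -k * x - c * v + noise_std * rng.normal() / np.sqrt(dt)
        v += (force / m) * dt
        x += v * dt
        trace[t] = x
    return trace

displacement = simulate_spring_end_effector()
print(displacement[:5])
```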

    Relative Positional Encoding for Speech Recognition and Direct Translation

    Transformer models are powerful sequence-to-sequence architectures that can directly map speech inputs to transcriptions or translations. However, the mechanism for modeling positions in this model was tailored for text and is therefore less suitable for acoustic inputs. In this work, we adapt the relative position encoding scheme to the Speech Transformer, where the key addition is the relative distance between input states in the self-attention network. As a result, the network can better adapt to the variable distributions present in speech data. Our experiments show that the resulting model achieves the best recognition result on the Switchboard benchmark in the non-augmentation condition, and the best published result on the MuST-C speech translation benchmark. We also show that this model is able to better utilize synthetic data than the Transformer, and adapts better to variable sentence segmentation quality for speech translation. Comment: Submitted to Interspeech 202
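    A minimal sketch (not the paper's implementation) of relative position encoding in self-attention: besides the usual query-key scores, each pair of input states contributes a score through a learned embedding of their clipped relative distance, in the spirit of Shaw et al. (2018). The names rel_self_attention, rel_emb, and max_rel_dist are assumptions made for this illustration.

```python
# Self-attention with relative position embeddings (illustrative sketch).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def rel_self_attention(x, Wq, Wk, Wv, rel_emb, max_rel_dist):
    """x: (T, d) input states; rel_emb: (2*max_rel_dist+1, d) learned embeddings."""
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv               # (T, d) each

    # Content-content scores: q_i . k_j
    content = q @ k.T                               # (T, T)

    # Content-position scores: q_i . r_{clip(j - i)}, using the clipped
    # relative distance between positions i and j as an embedding index.
    idx = np.arange(T)
    rel = np.clip(idx[None, :] - idx[:, None], -max_rel_dist, max_rel_dist) + max_rel_dist
    position = np.einsum('id,ijd->ij', q, rel_emb[rel])   # (T, T)

    attn = softmax((content + position) / np.sqrt(d))
    return attn @ v

# Toy usage with random parameters
rng = np.random.default_rng(0)
T, d, max_rel_dist = 5, 8, 3
x = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
rel_emb = rng.normal(size=(2 * max_rel_dist + 1, d))
out = rel_self_attention(x, Wq, Wk, Wv, rel_emb, max_rel_dist)
print(out.shape)  # (5, 8)
```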