
    High speed hardware development for FDMA/TDM system

    The development of a transmultiplexer and a quadrature phase shift keying (QPSK) demodulator is discussed. The system is designed to meet the real-time signal processing requirements of future satellite systems while consuming very little power. The architectures of the transmultiplexer and the demodulator are designed so that all the modules are pipelined, namely the commutator, the filter bank fast Fourier transform (FFT), and the internal modules of the QPSK demodulator. The architecture is designed for the case of 800 channels, each with a bandwidth of 45 kHz and a bit rate of 64 kb/s. In this case each module has 22.22 microseconds to complete a computation.
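
    As a quick check of the stated timing budget, the 22.22 microseconds can be read as one output sample period per channel at the 45 kHz channel spacing (a sketch under that assumption; the variable names below are illustrative):

        channel_spacing_hz = 45e3            # per-channel bandwidth from the abstract
        budget_s = 1.0 / channel_spacing_hz  # one new output sample per channel per period
        print(budget_s * 1e6)                # -> 22.22... microseconds per module computation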

    A reconfigurable multicarrier demodulator architecture

    An architecture based on parallel and pipelined design approaches has been developed for the Frequency Division Multiple Access/Time Division Multiplexed (FDMA/TDM) conversion system. The architecture has two main modules, namely the transmultiplexer and the demodulator. The transmultiplexer has two pipelined modules: the shared multiplexed polyphase filter and the Fast Fourier Transform (FFT). The demodulator consists of carrier, clock, and data recovery modules, which are interactive. Progress on the design of the MultiCarrier Demodulator (MCD) using commercially available chips and Application Specific Integrated Circuits (ASICs), together with simulation studies using Viewlogic software, will be presented at the conference.
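
    A minimal sketch of the shared polyphase filter feeding an FFT, the transmultiplexer structure named above (NumPy; the function name is illustrative, and the commutation order and FFT direction depend on conventions the abstract does not specify):

        import numpy as np

        def polyphase_channelizer(x, h, M):
            # x: input samples, h: prototype low-pass FIR (length a multiple of M), M: channels
            hp = h.reshape(-1, M)                            # polyphase components, one column per branch
            taps = hp.shape[0]
            frames = x[:(len(x) // M) * M].reshape(-1, M)    # commutate input into M branches
            out = []
            for k in range(taps - 1, frames.shape[0]):
                window = frames[k - taps + 1:k + 1][::-1]    # most recent frame first
                branch = (window * hp).sum(axis=0)           # per-branch FIR output
                out.append(np.fft.fft(branch))               # FFT separates the M channels
            return np.array(out)                             # shape: (output samples, M)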

    Simultaneous Inference of User Representations and Trust

    Inferring trust relations between social media users is critical for a number of applications wherein users seek credible information. The fact that available trust relations are scarce and skewed makes trust prediction a challenging task. To the best of our knowledge, this is the first work to explore representation learning for trust prediction. We propose an approach that uses only a small amount of binary user-user trust relations to simultaneously learn user embeddings and a model to predict trust between user pairs. We empirically demonstrate that for trust prediction, our approach outperforms classifier-based approaches which use state-of-the-art representation learning methods like DeepWalk and LINE as features. We also conduct experiments which use embeddings pre-trained with DeepWalk and LINE each as an input to our model, resulting in further performance improvement. Experiments with a dataset of approximately 356K user pairs show that the proposed method can obtain a high F-score of 92.65%. Comment: To appear in the proceedings of ASONAM'17. Please cite that version.
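
    A minimal sketch of the joint set-up described above, learning user embeddings and a pairwise trust predictor from binary trust labels (PyTorch; the embedding size, MLP scorer, and placeholder data are illustrative assumptions, not the paper's exact architecture):

        import torch
        import torch.nn as nn

        class TrustModel(nn.Module):
            def __init__(self, n_users, dim=64):
                super().__init__()
                self.emb = nn.Embedding(n_users, dim)   # user representations, learned jointly
                self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

            def forward(self, u, v):                    # logit that user u trusts user v
                return self.score(torch.cat([self.emb(u), self.emb(v)], dim=-1)).squeeze(-1)

        model = TrustModel(n_users=1000)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()
        u, v = torch.randint(0, 1000, (256,)), torch.randint(0, 1000, (256,))
        y = torch.randint(0, 2, (256,)).float()         # placeholder binary trust labels
        for _ in range(100):
            opt.zero_grad()
            loss_fn(model(u, v), y).backward()
            opt.step()

    Embeddings pre-trained with DeepWalk or LINE could be used to initialize emb.weight instead of starting from random vectors, matching the pre-training experiments mentioned above.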

    Spin waves interference from rising and falling edges of electrical pulses

    The authors have investigated the effect of the electrical pulse width of input excitations on the generated spin waves in a NiFe strip using pulse inductive time domain measurements. The authors have shown that the spin waves resulting from the rising and falling edges of the input excitation pulses interfere either constructively or destructively, and have provided conditions for obtaining spin wave packets with maximum intensity at different bias conditions.
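
    One simple way to see the interference condition, assuming each edge of a rectangular pulse of width $\tau$ launches an identical spin wave packet of opposite sign (a sketch in notation of my own, not the authors' model): the excited amplitude at spin wave frequency $f$ scales as

        $A(f) \propto \left|1 - e^{-i 2\pi f \tau}\right| = 2\left|\sin(\pi f \tau)\right|$,

    so the two packets add constructively when $f\tau = n + 1/2$ and cancel when $f\tau = n$ for integer $n$.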

    Superconductor Insulator Transition in Long MoGe Nanowires

    Properties of one-dimensional superconducting wires depend on physical processes with different characteristic lengths. To identify the process dominant in the critical regime we have studied transport properties of very narrow (9-20 nm) MoGe wires fabricated by advanced electron-beam lithography over a wide range of lengths, 1-25 microns. We observed that the wires undergo a superconductor-insulator transition that is controlled by the cross-sectional area of a wire and possibly also by the thickness-to-width ratio. The mean-field critical temperature decreases exponentially with the inverse of the wire cross section. We observed that a qualitatively similar superconductor-insulator transition can be induced by an external magnetic field. Some of our long superconducting MoGe nanowires can be identified as localized superconductors, namely wires in which the one-electron localization length is much shorter than the length of the wire.
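
    The reported scaling of the mean-field critical temperature with cross section can be written compactly (notation mine, simply restating the stated dependence):

        $T_c(\sigma) \approx T_{c0}\, \exp(-A/\sigma)$,

    where $\sigma$ is the wire cross-sectional area, $T_{c0}$ a constant prefactor (the large-$\sigma$ limit), and $A$ a fitting constant.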

    Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain

    With their recent development, large language models (LLMs) have been found to exhibit a certain level of Theory of Mind (ToM), a complex cognitive capacity that is related to our conscious mind and that allows us to infer another's beliefs and perspective. While human ToM capabilities are believed to derive from the neural activity of a broadly interconnected brain network, including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise processes underlying LLMs' capacity for ToM, and their similarities to those of humans, remain largely unknown. In this study, we drew inspiration from the dmPFC neurons subserving human ToM and employed a similar methodology to examine whether LLMs exhibit comparable characteristics. Surprisingly, our analysis revealed a striking resemblance between the two, as hidden embeddings (artificial neurons) within LLMs began to exhibit significant responsiveness to either true- or false-belief trials, suggesting an ability to represent another's perspective. These artificial embedding responses were closely correlated with the LLMs' performance during the ToM tasks, a property that depended on the size of the models. Further, another's beliefs could be accurately decoded using the entire embeddings, indicating the presence of the embeddings' ToM capability at the population level. Together, our findings revealed an emergent property of LLMs' embeddings that modified their activities in response to ToM features, offering initial evidence of a parallel between the artificial model and neurons in the human brain.
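
    A minimal sketch of the population-level decoding step described above, predicting true- versus false-belief trials from whole embedding vectors (scikit-learn; the arrays are random placeholders standing in for hidden states extracted from an LLM):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 768))      # placeholder: one hidden-state vector per trial
        y = rng.integers(0, 2, size=200)     # placeholder: 1 = false-belief, 0 = true-belief trial

        decoder = LogisticRegression(max_iter=1000)
        print(cross_val_score(decoder, X, y, cv=5).mean())   # population-level decodability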

    Controlled Tactile Exploration and Haptic Object Recognition

    In this paper we propose a novel method for in-hand object recognition. The method is composed of a grasp stabilization controller and two exploratory behaviours that capture the shape and the softness of an object. Grasp stabilization plays an important role in recognizing objects. First, it prevents the object from slipping and facilitates its exploration. Second, reaching a stable and repeatable position adds robustness to the learning algorithm and increases invariance with respect to the way in which the robot grasps the object. The stable poses are estimated using a Gaussian mixture model (GMM). We present experimental results showing that, using our method, the classifier can successfully distinguish 30 objects. We also compare our method with a benchmark experiment in which the grasp stabilization is disabled, and show, with statistical significance, that our method outperforms the benchmark method.
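
    A minimal sketch of estimating stable grasp poses with a Gaussian mixture model, as mentioned above (scikit-learn; the file name, number of components, and use of the component mean as a target pose are illustrative assumptions):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        poses = np.load("stable_poses.npy")   # hypothetical log of joint angles at stable grasps
        gmm = GaussianMixture(n_components=3, covariance_type="full").fit(poses)

        new_pose = poses[0]                   # pose reached after an initial grasp
        component = gmm.predict(new_pose.reshape(1, -1))[0]
        target_pose = gmm.means_[component]   # drive the hand toward the learned stable pose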