4 research outputs found

    Enabling Fine Sample Rate Settings in DSOs with Time-Interleaved ADCs

    The time-base used by digital storage oscilloscopes allows only a limited selection of sample rates, typically constrained to a few integer submultiples of the maximum sample rate. This limitation has the advantage of simplifying data transfer from the analog-to-digital converter to the acquisition memory, and of assuring stability performance, expressed in terms of absolute jitter, that is independent of the chosen sample rate. On the other hand, it prevents optimal use of the oscilloscope's memory resources and necessitates post-processing in several applications. A time-base is proposed that allows the sample rate to be selected with very fine frequency resolution, in particular as a rational submultiple of the maximum rate. The proposal addresses oscilloscopes with time-interleaved converters, which require a dedicated and multifaceted approach compared with architectures where a single monolithic converter is in charge of signal digitization. The proposed time-base allows sample rates up to 200 GHz and beyond to be selected with fine frequency resolution, while still assuring jitter performance independent of the sample rate selection.
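The core idea of selecting the sample rate as a rational submultiple p/q of the maximum rate can be sketched as follows. This is an illustrative approximation only (the constant `F_MAX` and the function `rational_submultiple` are hypothetical, not from the paper), showing how a requested rate that no integer submultiple can provide is mapped to a nearby rational fraction of the maximum rate:

```python
from fractions import Fraction

F_MAX = 200e9  # assumed maximum sample rate of the DSO (200e9 Sa/s)

def rational_submultiple(target_rate, max_den=4096):
    """Approximate target_rate as a rational submultiple p/q of F_MAX.

    Returns the fraction p/q and the sample rate actually achieved,
    F_MAX * p / q. The denominator bound max_den stands in for the
    hardware limits of a real time-base.
    """
    frac = Fraction(target_rate / F_MAX).limit_denominator(max_den)
    achieved = F_MAX * frac.numerator / frac.denominator
    return frac, achieved

# 3.6e9 Sa/s is not an integer submultiple of 200e9 Sa/s,
# but 9/500 of the maximum rate reproduces it exactly.
ratio, achieved = rational_submultiple(3.6e9)
print(ratio, achieved)
```

An integer-submultiple time-base could only offer 200e9/55 ≈ 3.64e9 or 200e9/56 ≈ 3.57e9 here; the rational selection hits the requested rate exactly.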

    An Adaptive Sampling Framework for Life Cycle Degradation Monitoring

    Data redundancy and data loss are relevant issues in condition monitoring. Sampling strategies for segment intervals can address these issues at the source, but they do not receive the attention they deserve. Currently, the sampling methods in relevant research lack sufficient adaptability to the monitored condition. In this paper, an adaptive sampling framework for segment intervals is proposed, based on a summary of the shortcomings of existing methods and improvements upon them. The framework is applied to monitor mechanical degradation, and experiments are conducted on simulated data and real datasets. Subsequently, the distributions of the samples collected by different sampling strategies are presented visually through a color map, and five metrics are designed to assess the sampling results. The visual and numerical results show the superiority of the proposed method over existing methods, and the results are closely related to data status and degradation indicators: the smaller the data fluctuation and the more stable the degradation trend, the better the result. Furthermore, the results for the objective physical indicators are clearly better than those for the feature indicators. By addressing these problems, the proposed framework opens up a new direction of predictive sampling, which significantly improves degradation monitoring.

    Classifier-Based Data Transmission Reduction in Wearable Sensor Network for Human Activity Monitoring

    The recent development of wireless wearable sensor networks offers a spectrum of new applications in healthcare, medicine, activity monitoring, sport, safety, human-machine interfacing, and beyond. Successful use of this technology depends on the lifetime of the battery-powered sensor nodes. This paper presents a new method for extending the lifetime of wearable sensor networks by avoiding unnecessary data transmissions. The introduced method is based on embedded classifiers that allow sensor nodes to decide whether current sensor readings have to be transmitted to the cluster head. To train the classifiers, a procedure was developed that takes into account the impact of data selection on the accuracy of the recognition system. The approach was implemented in a prototype wearable sensor network for human activity monitoring. Real-world experiments were conducted to evaluate the new method in terms of network lifetime, energy consumption, and accuracy of human activity recognition. The experimental evaluation confirmed that the proposed method significantly prolongs the network lifetime while preserving high accuracy of activity recognition. The experiments also revealed advantages of the method over state-of-the-art algorithms for data transmission reduction.
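The transmission-gating principle can be sketched with a toy example. The threshold classifier and the `SensorNode` class below are illustrative assumptions, not the paper's actual embedded classifier: the node runs a local classifier on each reading and uses the radio only when the predicted activity class changes, suppressing redundant packets:

```python
def classify(magnitude):
    """Toy threshold classifier: label an accelerometer-magnitude
    reading as 'idle' or 'active'. A real node would embed a trained
    classifier instead of this single threshold."""
    return "active" if magnitude > 1.2 else "idle"

class SensorNode:
    """Sensor node that transmits only classifier-detected class changes."""

    def __init__(self):
        self.last_sent = None   # last activity label sent to the cluster head
        self.transmissions = 0  # radio packets actually sent

    def process(self, magnitude):
        label = classify(magnitude)
        if label != self.last_sent:   # reading is informative: transmit
            self.last_sent = label
            self.transmissions += 1
            return label
        return None                   # reading suppressed, energy saved

node = SensorNode()
readings = [1.0, 1.01, 1.5, 1.6, 1.55, 1.0, 0.98]
sent = [node.process(m) for m in readings]
print(sent, node.transmissions)
```

In this run only 3 of 7 readings trigger a transmission; the cluster head can still reconstruct the activity sequence, since unsent readings share the last transmitted label.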

    A Data-Driven Adaptive Sampling Method Based on Edge Computing

    The rise of edge computing has promoted the development of the industrial internet of things (IIoT). Supported by edge computing technology, data acquisition can meet more complex and demanding application requirements in industrial settings. Most traditional sampling methods use a constant sampling frequency and ignore changes in the sampled signal during data acquisition. To address the sampling distortion, edge data redundancy, and energy consumption caused by a constant sensor sampling frequency in the IIoT, a data-driven adaptive sampling method based on edge computing is proposed in this paper. The method fits a line to the latest data collected by the sensors at the edge node and adjusts the next sampling frequency according to the median jitter sum of the linear fit and an adaptive sampling strategy. An edge data acquisition platform is established to verify the validity of the method. According to the experimental results, the proposed method is more effective than other adaptive sampling methods. Compared with a constant sampling frequency, the proposed method reduces edge data redundancy and energy consumption by more than 13.92% and 12.86%, respectively.
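One plausible reading of the fit-and-adjust loop can be sketched as follows. The function, thresholds, and the exact jitter formula are assumptions for illustration (the paper's precise "linear median jitter sum" definition and adaptation rule may differ): a line is least-squares fitted to the most recent window of samples, the spread of the residuals around their median measures how volatile the signal is, and the sampling interval is halved or doubled accordingly:

```python
import statistics

def next_sampling_interval(window, cur=1.0, t_min=0.1, t_max=2.0,
                           low=0.05, high=0.5):
    """Return the next sampling interval (seconds) for a sensor, given
    the latest window of readings. Thresholds low/high are illustrative."""
    n = len(window)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(window) / n
    # least-squares slope of the linear fit to the window
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, window)) / denom
    # residuals of each sample from the fitted line
    residuals = [y - (y_mean + slope * (x - x_mean)) for x, y in zip(xs, window)]
    med = statistics.median(residuals)
    jitter = sum(abs(r - med) for r in residuals)  # spread around the fit
    if jitter > high:                 # signal changing fast: sample more often
        return max(t_min, cur / 2)
    if jitter < low:                  # signal stable: sample less often
        return min(t_max, cur * 2)
    return cur                        # moderate jitter: keep current rate

# A perfectly linear window has zero jitter, so the interval doubles.
print(next_sampling_interval([0.0, 0.5, 1.0, 1.5, 2.0]))  # 2.0
```

A noisy window such as `[0.0, 1.0, 0.0, 1.0, 0.0]` instead halves the interval to 0.5 s, which is the distortion-versus-redundancy trade-off the abstract describes.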