8 research outputs found

    Deep-Gap: A deep learning framework for forecasting crowdsourcing supply-demand gap based on imaging time series and residual learning

    Mobile crowdsourcing has become easier thanks to the widespread availability of smartphones capable of seamlessly collecting and pushing the desired data to cloud services. However, the success of mobile crowdsourcing relies on balancing supply and demand by first accurately forecasting the supply-demand gap in space and time, and then providing efficient incentives that encourage participants to move so that the desired balance is maintained. In this paper, we propose Deep-Gap, a deep learning approach based on residual learning that predicts the gap between mobile crowdsourced service supply and demand at a given time and place. The prediction can drive the incentive model towards geographically balanced service coverage, avoiding the situation where some areas are over-supplied while others are under-supplied. This makes it possible to anticipate the supply-demand gap and redirect crowdsourced service providers towards target areas. Deep-Gap relies on historical supply-demand time series as well as available external data such as weather conditions and day type (e.g., weekday, weekend, holiday). First, we roll and encode the supply-demand time series as images using the Gramian Angular Summation Field (GASF), the Gramian Angular Difference Field (GADF) and the Recurrence Plot (REC). These images are then used to train deep Convolutional Neural Networks (CNN) that extract low- and high-level features and forecast the crowdsourced service gap. We conduct a comprehensive comparative study with two supply-demand gap forecasting scenarios: with and without external data. Compared to state-of-the-art approaches, Deep-Gap achieves the lowest forecasting errors in both scenarios.
    Comment: Accepted at CloudCom 2019 Conference
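    As a concrete illustration of the time-series-to-image encoding step described above, the sketch below builds GASF, GADF and recurrence-plot matrices from a single supply-demand window with NumPy. The window length, the min-max normalisation and the unthresholded recurrence plot are assumptions for illustration; this is not the paper's implementation (the same encodings are also available in libraries such as pyts).

```python
import numpy as np

def encode_window(x, eps=1e-8):
    """Encode one supply-demand window as GASF, GADF and recurrence-plot images."""
    x = np.asarray(x, dtype=float)
    # Rescale to [-1, 1] so that arccos is well defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min() + eps) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))        # polar-angle representation

    # Gramian Angular Summation / Difference Fields.
    gasf = np.cos(phi[:, None] + phi[None, :])
    gadf = np.sin(phi[:, None] - phi[None, :])

    # Unthresholded recurrence plot: pairwise distances between time steps.
    rec = np.abs(x[:, None] - x[None, :])
    return gasf, gadf, rec

# Hypothetical 24-step supply-demand gap window.
window = np.random.rand(24)
gasf, gadf, rec = encode_window(window)
images = np.stack([gasf, gadf, rec], axis=-1)     # 24 x 24 x 3 tensor, e.g. as CNN input
```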

    Effective high compression of ECG signals at low level distortion

    An effective method for the compression of ECG signals, which falls within the transform lossy compression category, is proposed. The transformation is realized by a fast wavelet transform. The effectiveness of the approach, relative to the simplicity and speed of its implementation, stems from the efficient storage of the algorithm's outputs, which is realized in the compressed Hierarchical Data Format. The compression performance is tested on the MIT-BIH Arrhythmia database, producing results that largely improve upon recently reported benchmarks on the same database. For a distortion corresponding to a mean percentage root-mean-square difference (PRD) of 0.53, the achieved average compression ratio is 23.17, with a quality score of 43.93. For a mean PRD of up to 1.71, the compression ratio increases to 62.5. The compression of a 30 min record takes 0.14 s on average. The insignificant delay of the compression process, together with the high compression ratio achieved at low distortion and the negligible time for signal recovery, uphold the suitability of the technique for supporting distant clinical health care.
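    For reference, the figures quoted above combine three standard metrics: PRD, compression ratio and the quality score (compression ratio divided by PRD). The sketch below is a minimal NumPy illustration of those metrics, not the paper's evaluation code; the test signal, the placeholder compression ratio and the plain PRD definition (no baseline removal) are assumptions.

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def quality_score(compression_ratio, prd_value):
    """Quality score as commonly reported: compression ratio divided by PRD."""
    return compression_ratio / prd_value

# Hypothetical record and a slightly distorted reconstruction.
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 5000))
x_rec = x + np.random.normal(scale=0.005, size=x.size)

cr = 23.17  # placeholder compression ratio = original size / compressed size
print(f"PRD = {prd(x, x_rec):.2f} %, QS = {quality_score(cr, prd(x, x_rec)):.2f}")
```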

    On the trade-off between compression efficiency and distortion of a new compression algorithm for multichannel EEG signals based on singular value decomposition

    In this article we investigate the trade-off between the compression ratio and distortion of a recently published compression technique specifically devised for multichannel electroencephalograph (EEG) signals. In our previous paper, we proved that, when singular value decomposition (SVD) is already performed for denoising or for removing unwanted artifacts, the same SVD can be exploited for compression, achieving a compression ratio in the order of 10 and a percentage root-mean-square distortion in the order of 0.01 %. In this article, we demonstrate how, with a negligible increase in the computational cost of the algorithm, the compression ratio can be further improved by about 10 % while maintaining the same distortion level or, alternatively, by about 50 % while still keeping the distortion below 0.1 %.
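    A minimal sketch of the underlying idea, rank-k truncation of the SVD of a channels-by-samples EEG matrix, is shown below. The rank, the matrix shape and the naive size accounting are assumptions; the article's actual codec also involves quantisation and encoding steps that are not reproduced here.

```python
import numpy as np

def svd_compress(X, k):
    """Keep the k largest singular components of a channels-by-samples EEG matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]
    X_rec = U_k @ np.diag(s_k) @ Vt_k
    # Naive size accounting: store U_k, s_k and Vt_k instead of X.
    stored = U_k.size + s_k.size + Vt_k.size
    cr = X.size / stored
    prd = 100.0 * np.sqrt(np.sum((X - X_rec) ** 2) / np.sum(X ** 2))
    return X_rec, cr, prd

# Hypothetical 32-channel, 10-second EEG segment sampled at 256 Hz.
X = np.random.randn(32, 2560)
X_rec, cr, prd = svd_compress(X, k=8)
print(f"CR = {cr:.2f}, PRD = {prd:.4f} %")
```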

    Optimal Resource Allocation Using Deep Learning-Based Adaptive Compression for mHealth Applications

    In the last few years, the number of patients with chronic diseases that require constant monitoring has increased rapidly, which motivates researchers to develop scalable remote health applications. Nevertheless, transmitting large volumes of real-time data over a dynamic network limited by bandwidth, end-to-end delay and transmission energy is an obstacle to efficient data delivery. The problem can be resolved by applying data reduction techniques to the vital signs at the transmitter side and reconstructing the data at the receiver side (i.e., the m-Health center). However, this introduces a new problem: receiving the vital signs at the server side with an acceptable distortion rate (i.e., deformation of the vital signs caused by inefficient data reduction). In this thesis, we integrate efficient data reduction with wireless networking to deliver adaptive compression with an acceptable distortion, while reacting to wireless network dynamics such as channel fading and user mobility. A Deep Learning (DL) approach is used to implement an adaptive compression technique that compresses and reconstructs vital signs in general, and the electroencephalogram (EEG) signal in particular, with minimum distortion. Then, a resource allocation framework is introduced to minimize the transmission energy along with the distortion of the reconstructed signal.
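    As an illustration only, the snippet below sketches an autoencoder-style compressor/reconstructor for fixed-length EEG windows in PyTorch, trained to minimise reconstruction distortion. The layer sizes, code length, window length and training loop are assumptions and do not reproduce the thesis architecture; the resource-allocation stage is not modelled.

```python
import torch
import torch.nn as nn

class EEGAutoencoder(nn.Module):
    """Toy autoencoder: the small latent code acts as the compressed representation."""
    def __init__(self, window=256, code=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 128), nn.ReLU(), nn.Linear(128, code))
        self.decoder = nn.Sequential(nn.Linear(code, 128), nn.ReLU(), nn.Linear(128, window))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = EEGAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 256)              # hypothetical batch of EEG windows
for _ in range(100):                  # minimise reconstruction distortion (MSE)
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
```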

    A two-dimensional approach for lossless EEG compression

    In this paper, we study various lossless compression techniques for electroencephalograph (EEG) signals. We discuss a computationally simple pre-processing technique in which the EEG signal is arranged in the form of a matrix (2-D) before compression. We discuss a two-stage coder to compress the EEG matrix, with a lossy coding layer (SPIHT) and a residual coding layer (arithmetic coding). This coder is optimally tuned to exploit the source memory and the i.i.d. nature of the residual. We also investigate and compare EEG compression with other schemes, such as the JPEG2000 image compression standard, predictive-coding-based Shorten, and simple entropy coding. The compression algorithms are tested on the University of Bonn database and the PhysioBank Motor/Mental Imagery database. The 2-D compression schemes yielded higher lossless compression than the standard vector-based compression, predictive coding and entropy coding schemes. The pre-processing technique resulted in a 6 % improvement, and the two-stage coder yielded a further 3 % improvement in compression performance.
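    To make the pre-processing step concrete, the sketch below arranges a 1-D EEG record into a 2-D matrix and compares a generic lossless coder (zlib as a stand-in) on the raw matrix against the same coder applied after a simple, invertible inter-row prediction. The matrix width, the synthetic int16 signal and the use of zlib are assumptions; the paper's two-stage SPIHT + arithmetic coder is not reproduced.

```python
import zlib
import numpy as np

def to_matrix(signal, width=256):
    """Arrange a 1-D EEG signal into a 2-D matrix, zero-padding the last row."""
    pad = (-len(signal)) % width
    return np.pad(signal, (0, pad)).reshape(-1, width)

def compressed_size(arr):
    """Bytes needed to store the array with a generic lossless coder (zlib)."""
    return len(zlib.compress(arr.astype(np.int16).tobytes(), level=9))

# Hypothetical int16 EEG record (sinusoid plus noise).
eeg = (100 * np.sin(np.linspace(0, 300, 40960)) + np.random.randn(40960)).astype(np.int16)
M = to_matrix(eeg)

# Inter-row residual: exploits correlation between adjacent rows; invertible via cumsum.
residual = np.diff(M, axis=0, prepend=np.zeros_like(M[:1]))

print(f"1-D baseline bytes: {compressed_size(eeg)}")
print(f"2-D raw bytes:      {compressed_size(M)}")
print(f"2-D residual bytes: {compressed_size(residual)}")
```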