19 research outputs found

    Accelerating Wireless Federated Learning via Nesterov's Momentum and Distributed Principal Component Analysis

    Full text link
    A wireless federated learning system is investigated in which a server and workers exchange uncoded information via orthogonal wireless channels. Since the workers frequently upload local gradients to the server over bandwidth-limited channels, the uplink transmission from the workers to the server becomes a communication bottleneck. Therefore, a one-shot distributed principal component analysis (PCA) is leveraged to reduce the dimension of the uploaded gradients and relieve the communication bottleneck. A PCA-based wireless federated learning (PCA-WFL) algorithm and its accelerated version (i.e., PCA-AWFL) are proposed based on the low-dimensional gradients and Nesterov's momentum. For non-convex loss functions, a finite-time analysis is performed to quantify the impacts of system hyper-parameters on the convergence of the PCA-WFL and PCA-AWFL algorithms. The PCA-AWFL algorithm is theoretically certified to converge faster than the PCA-WFL algorithm. Moreover, the convergence rates of the PCA-WFL and PCA-AWFL algorithms quantitatively reveal a linear speedup with respect to the number of workers over the vanilla gradient descent algorithm. Numerical results demonstrate the improved convergence rates of the proposed PCA-WFL and PCA-AWFL algorithms over the benchmarks.
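
    As a rough illustration of the mechanism described in the abstract, the sketch below (written for this listing, not taken from the paper) runs a toy PCA-compressed federated update with Nesterov momentum on synthetic quadratic losses. The one-shot basis construction, the losses, and all hyper-parameters are illustrative assumptions rather than the paper's PCA-AWFL specification.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_workers, T = 100, 8, 8, 200    # model dim, PCA dim, number of workers, rounds
lr, beta = 0.1, 0.9                    # step size and Nesterov momentum (illustrative)

# Synthetic local losses f_i(w) = 0.5 * ||A_i w - b_i||^2 held by each worker.
A = [rng.normal(size=(50, d)) / np.sqrt(50) for _ in range(n_workers)]
b = [rng.normal(size=50) for _ in range(n_workers)]

def local_grad(i, w):
    return A[i].T @ (A[i] @ w - b[i])

# One-shot "distributed PCA": build a shared low-dimensional basis from initial gradients.
w = np.zeros(d)
G0 = np.stack([local_grad(i, w) for i in range(n_workers)])
_, _, Vt = np.linalg.svd(G0, full_matrices=False)
U = Vt[:k].T                           # d x k projection basis shared with all workers

v = np.zeros(d)                        # momentum buffer
for t in range(T):
    w_look = w + beta * v              # Nesterov look-ahead point
    # Each worker uploads a k-dimensional compressed gradient;
    # the server reconstructs the gradients through U and averages them.
    compressed = [U.T @ local_grad(i, w_look) for i in range(n_workers)]
    g_hat = U @ np.mean(compressed, axis=0)
    v = beta * v - lr * g_hat
    w = w + v

avg_loss = 0.5 * np.mean([np.sum((A[i] @ w - b[i]) ** 2) for i in range(n_workers)])
print(f"final average local loss: {avg_loss:.4f}")
```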

    Prostaglandin E1 Alleviates Cognitive Dysfunction in Chronic Cerebral Hypoperfusion Rats by Improving Hemodynamics

    Get PDF
    Compensatory vascular mechanisms can restore cerebral blood flow (CBF) but fail to protect against chronic cerebral hypoperfusion (CCH)-mediated neuronal damage and cognitive impairment. Prostaglandin E1 (PGE1) is a vasodilator known to protect against ischemic injury in animal models, but its protective role in CCH remains unclear. To determine the effect of PGE1 on cerebral hemodynamics and cognitive function in CCH, bilateral common carotid artery occlusion (BCCAO) was used to mimic CCH in rats, which were subsequently injected intravenously with PGE1 daily for 2 weeks. Magnetic resonance imaging, immunofluorescence staining, and the Morris water maze (MWM) were used to evaluate CBF, angiogenesis, and cognitive function, respectively. We found that PGE1 treatment significantly restored CBF by enhancing vertebral artery dilation. In addition, PGE1 treatment increased the numbers of microvascular endothelial cells and neuronal cells in the hippocampus and decreased the numbers of astrocytes and apoptotic cells. In the MWM test, we further showed that the escape latency of CCH rats was significantly reduced after PGE1 treatment. Our results suggest that PGE1 ameliorates cognitive dysfunction in CCH rats by enhancing CBF recovery, sustaining angiogenesis, and reducing astrocyte activation and neuronal loss.

    A cloud eco-system: reactive demand control and dynamic pricing methodology

    No full text
    Cloud resources are limited in capacity, yet over-provisioning them results in low server utilization that is costly to cloud providers. The underlying reasons for the low utilization are manifold, such as uneven application fit, where an application cannot fully utilize the resources allocated to it, or uncertainty in demand forecasts introduced by the dramatic variation in cloud resource demand between peak and off-peak periods. While much research has been devoted to optimizing resource allocation techniques in the effort to achieve higher server utilization, how to control resource demand so that the correct level of resource provisioning can be determined has become the next research question. In this thesis, we introduce a dynamic pricing methodology intended to induce a desired demand pattern and enhance the revenue of a cloud provider. The proposed pricing methodology encourages cloud tenants whose requested Virtual Machines (VMs) can be allocated easily to use more cloud service by offering them lower prices, and discourages cloud tenants whose requested VMs are difficult to allocate from using cloud service by charging them higher prices. We study our pricing methodology with a combinatorial optimization algorithm, the Knapsack Algorithm, and show through evaluations that the overall revenue is enhanced. Then, to achieve fairness among users, we further perform a case study of our pricing methodology with a multi-resource allocation fairness algorithm, the Heterogeneous Dominant Resource Fairness (DRFH) algorithm. Trace-driven simulation results show that the proposed pricing methodology with DRFH can increase the overall revenue by up to 11.60%. Furthermore, we propose a novel cloud federation system that is cognizant of the dynamic prices and serves as a decision-making assistant for our pricing methodology. The cloud federation system automatically selects and migrates user tasks to a cloud system that charges a more affordable rate. We discuss the architectural framework and platform design, provide a mathematical formulation, and investigate a total service cost minimization approach with privacy constraints. Simulation results demonstrate that the proposed system can lower the cost of cloud services by exploiting the dynamic prices of multiple clouds.
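
    The sketch below is a toy illustration, not the thesis's model, of the pricing idea summarized above: a textbook 0/1 knapsack selects the VM requests that maximize revenue under a single CPU capacity, and a simple rule then discounts tenants whose requests were easy to place and surcharges the rest. The VMRequest fields, the discount and surcharge factors, and the single-resource view are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class VMRequest:
    tenant: str
    cpu: int            # resource demand (illustrative single dimension)
    base_price: float

def knapsack_select(requests, capacity):
    """Classic 0/1 knapsack DP: maximize revenue under the CPU capacity."""
    n = len(requests)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i, r in enumerate(requests, start=1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if r.cpu <= c:
                dp[i][c] = max(dp[i][c], dp[i - 1][c - r.cpu] + r.base_price)
    # Backtrack to recover the accepted set.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(requests[i - 1])
            c -= requests[i - 1].cpu
    return chosen

def dynamic_prices(requests, chosen, discount=0.9, surcharge=1.2):
    """Easily allocated requests get cheaper, hard-to-fit ones get pricier (illustrative rule)."""
    accepted = {id(r) for r in chosen}
    return {r.tenant: r.base_price * (discount if id(r) in accepted else surcharge)
            for r in requests}

requests = [VMRequest("alice", 4, 10.0), VMRequest("bob", 8, 15.0),
            VMRequest("carol", 3, 7.0), VMRequest("dave", 10, 12.0)]
chosen = knapsack_select(requests, capacity=16)
print("accepted:", [r.tenant for r in chosen])
print("next-period prices:", dynamic_prices(requests, chosen))
```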

    Cognitive Gaming

    No full text

    Image de-quantization via spatially varying sparsity prior

    No full text
    We address the problem of image de-quantization, also known as bit-depth expansion when the reconstructed 2D signal is re-quantized at higher bit precision. In this paper, a novel image de-quantization method based on convex optimization theory is proposed, which exploits the spatially varying characteristics of the image surface. We test our method on image bit-depth expansion problems, and the experimental results show that the proposed method achieves superior PSNR and SSIM performance. © 2012 IEEE
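
    A minimal sketch of the de-quantization idea on a synthetic 4-bit image, substituting a plain quadratic smoothness term for the paper's spatially varying sparsity prior: projected gradient descent smooths the image while keeping every pixel inside its original quantization bin. All sizes and step parameters are illustrative.

```python
import numpy as np

def dequantize(q_img, low_bits=4, high_bits=8, n_iters=300, step=0.2):
    """Bit-depth expansion sketch: minimize a quadratic smoothness objective while
    staying inside each pixel's original quantization bin (projected gradient descent)."""
    scale = 2 ** (high_bits - low_bits)
    lo = q_img.astype(float) * scale          # lower edge of each pixel's bin
    hi = lo + scale - 1                       # upper edge
    x = (lo + hi) / 2.0                       # start at bin centers
    for _ in range(n_iters):
        # Gradient of the sum of squared neighbor differences.
        g = np.zeros_like(x)
        g[:, 1:] += x[:, 1:] - x[:, :-1]
        g[:, :-1] += x[:, :-1] - x[:, 1:]
        g[1:, :] += x[1:, :] - x[:-1, :]
        g[:-1, :] += x[:-1, :] - x[1:, :]
        x = np.clip(x - step * g, lo, hi)     # project back onto the bin constraints
    return x

rng = np.random.default_rng(1)
smooth = np.cumsum(rng.normal(size=(32, 32)), axis=1)           # synthetic smooth image
img8 = np.clip((smooth - smooth.min()) / np.ptp(smooth) * 255, 0, 255)
img4 = (img8 // 16).astype(np.uint8)                            # quantize to 4 bits
rec = dequantize(img4)
print("MSE, naive rescale:", np.mean((img4 * 16.0 + 8 - img8) ** 2).round(2))
print("MSE, smoothed     :", np.mean((rec - img8) ** 2).round(2))
```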

    Image Colorization using Sparse Representation

    No full text
    Image colorization is the task of coloring a grayscale image given limited color cues. In this work, we present a novel method for image colorization using sparse representation. Our method first trains an over-complete dictionary in the YUV color space. Then, taking a grayscale image and a small subset of color pixels as inputs, our method colorizes overlapping image patches by seeking sparse representations of the patches that are consistent with both the grayscale image and the color pixels. After that, we aggregate the colorized patches with weights to get an intermediate result. This process iterates until the image is properly colorized. Experimental results show that our method leads to high-quality colorizations from a small number of given color pixels. To demonstrate one application of the proposed method, we apply it to transfer the color of one image onto another to obtain a visually pleasing result.
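
    The sketch below illustrates the sparse-representation step on a single patch, with a random joint YUV dictionary standing in for the learned over-complete dictionary and the known-color-pixel constraint omitted: the grayscale patch is sparse-coded against the luminance rows of the dictionary, and chrominance is read off the same atoms. The OMP routine, sizes, and data are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of D to approximate y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(2)
patch, n_atoms, sparsity = 8, 256, 5
# Joint YUV patch dictionary; in the paper it is learned, here it is random for illustration.
D = rng.normal(size=(3 * patch * patch, n_atoms))
D /= np.linalg.norm(D, axis=0)
D_y, D_uv = D[:patch * patch], D[patch * patch:]     # luminance rows / chrominance rows

# A grayscale patch to colorize (synthetic).
gray_patch = rng.normal(size=patch * patch)

# Sparse-code the patch against the luminance part of the dictionary...
code = omp(D_y, gray_patch, sparsity)
# ...then read the chrominance off the same atoms, so color is carried by the code.
uv_patch = (D_uv @ code).reshape(2, patch, patch)
print("recovered UV patch shape:", uv_patch.shape)
```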

    Inter-channel demosaicking traces for digital image forensics

    No full text
    Digital image forensics seeks to detect statistical traces left by image acquisition or post-processing in order to establish an image's source and authenticity. Digital cameras acquire an image with a single sensor overlaid with a color filter array (CFA), capturing at each spatial location one sample from the three necessary color channels. The missing pixels must be interpolated in a process known as demosaicking. This process is highly nonlinear and can vary greatly between camera brands and models. Most practical algorithms, however, introduce correlations between the color channels, and these correlations often differ between algorithms. In this paper, we show how these correlations can be used to construct a characteristic map that is useful in matching an image to its source. Results show that our method employing inter-channel traces can distinguish between sophisticated demosaicking algorithms. It can complement existing classifiers based on inter-pixel correlations by providing a new feature dimension. © 2010 IEEE
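
    As a rough sketch of the kind of inter-channel trace the paper exploits, and not its actual characteristic map, the code below computes correlations between simple interpolation residuals of the three channels on synthetic data: channels coupled by a shared low-frequency component show much stronger residual correlation than independent ones. The predictor and the synthetic data are illustrative choices.

```python
import numpy as np

def highpass_residual(channel):
    """Residual after predicting each pixel from its 4 neighbors (simple bilinear predictor)."""
    pred = np.zeros_like(channel)
    pred[1:-1, 1:-1] = (channel[:-2, 1:-1] + channel[2:, 1:-1] +
                        channel[1:-1, :-2] + channel[1:-1, 2:]) / 4.0
    return (channel - pred)[1:-1, 1:-1]

def inter_channel_features(rgb):
    """Correlation coefficients between the channels' interpolation residuals.
    Demosaicking couples the channels, so these correlations act as a fingerprint."""
    r = [highpass_residual(rgb[..., c]) for c in range(3)]
    return np.array([np.corrcoef(r[i].ravel(), r[j].ravel())[0, 1]
                     for i, j in [(0, 1), (0, 2), (1, 2)]])

rng = np.random.default_rng(3)
raw = rng.normal(size=(64, 64, 3))
# Mimic a demosaicking-like coupling by sharing low-frequency content across channels.
shared = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), axis=0), axis=1)
coupled = raw * 0.1 + shared[..., None]
print("independent channels:", inter_channel_features(raw).round(3))
print("coupled channels    :", inter_channel_features(coupled).round(3))
```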

    How Anti-aliasing Filter Affects Image Contrast: An Analysis from Majorization Theory Perspective

    No full text
    When we design an anti-aliasing low-pass filter, the ideal design is typically an IIR filter, which must be truncated to an FIR filter in practice. One may think that the more taps there are, the better the image quality is. However, we find that there exists an optimal number of taps that gives the best visual quality. Filters with a larger or smaller number of taps degrade the image quality because the image contrast is reduced. In this paper we analyze this phenomenon using majorization theory and find that the image contrast can be formulated as a Schur-convex function of the filter coefficients. We also propose an effective method to choose the best filter so that the image contrast is maximized, giving the best visual quality. © 2011 IEEE
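
    A small numerical sketch, not from the paper: it truncates an ideal low-pass response to several tap counts, filters a synthetic scanline containing a bright bar, and prints a Michelson contrast measure so the contrast-versus-tap-count trade-off can be inspected. The cutoff, the test signal, and the contrast metric are illustrative assumptions.

```python
import numpy as np

def truncated_sinc(cutoff, taps):
    """Truncate the ideal low-pass (sinc) impulse response to a given number of taps."""
    n = np.arange(taps) - (taps - 1) / 2.0
    h = np.sinc(2 * cutoff * n) * 2 * cutoff
    return h / h.sum()                       # normalize to unit DC gain

def michelson_contrast(x):
    return (x.max() - x.min()) / (x.max() + x.min() + 1e-12)

# Synthetic 1-D "scanline": a bright bar on a mid-grey background (values in [0, 1]).
signal = np.full(256, 0.4)
signal[100:140] = 0.9

for taps in (3, 7, 15, 31, 63):
    filtered = np.convolve(signal, truncated_sinc(cutoff=0.25, taps=taps), mode="same")
    print(f"{taps:3d} taps -> Michelson contrast {michelson_contrast(filtered):.4f}")
```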

    Data Hiding in Dot Diffused Halftone Images

    No full text
    In this paper, a new adaptive subspace learning model based on incremental nonparametric discriminant analysis (INDA) is proposed for visual tracking. Traditional subspace trackers focus on updating eigenvectors to handle appearance variation of the target object, ignoring the non-target background region during tracking. The INDA features take both into consideration, thereby improving tracking in ever-changing environments. Meanwhile, INDA relaxes the Gaussian assumption in Fisher discriminant analysis (FDA), so it can handle more general class distributions. The scatter matrices are also reformulated to update the subspace incrementally based on previous results. In conjunction with an efficient feature extraction method, the system is capable of real-time operation. Numerous experiments show the superiority of our tracker over current state-of-the-art methods on several publicly available datasets.
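
    Because the abstract above centers on incremental discriminant updates, the sketch below shows a standard incremental Fisher-discriminant version of that idea on synthetic target and background features, with the scatter matrices folded forward when new samples arrive. It is a plain incremental FDA illustration under Gaussian-style scatters, not the paper's nonparametric (INDA) formulation, and all sizes are illustrative.

```python
import numpy as np

def fisher_direction(Sw, Sb, reg=1e-3):
    """Most discriminative direction: leading eigenvector of (Sw + reg*I)^-1 Sb."""
    d = Sw.shape[0]
    M = np.linalg.solve(Sw + reg * np.eye(d), Sb)
    vals, vecs = np.linalg.eig(M)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)

def scatter(X, mean):
    Xc = X - mean
    return Xc.T @ Xc

rng = np.random.default_rng(4)
d, n = 16, 200
target = rng.normal(loc=1.0, size=(n, d))       # features from the tracked object
backgr = rng.normal(loc=0.0, size=(n, d))       # features from the surrounding background

mu_t, mu_b = target.mean(axis=0), backgr.mean(axis=0)
Sw = scatter(target, mu_t) + scatter(backgr, mu_b)      # within-class scatter
Sb = np.outer(mu_t - mu_b, mu_t - mu_b)                 # between-class scatter
w = fisher_direction(Sw, Sb)

# Incremental step: fold a new frame's target samples into the means and scatters
# in place, instead of recomputing everything from the whole history.
new_t = rng.normal(loc=1.0, size=(20, d))
mu_t_new = (n * mu_t + new_t.sum(axis=0)) / (n + len(new_t))
Sw += scatter(new_t, mu_t_new) + n * np.outer(mu_t_new - mu_t, mu_t_new - mu_t)
Sb = np.outer(mu_t_new - mu_b, mu_t_new - mu_b)
w_updated = fisher_direction(Sw, Sb)
print("angle between old and updated directions (deg):",
      np.degrees(np.arccos(np.clip(abs(w @ w_updated), -1, 1))).round(2))
```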

    Arbitrary Factor Image Interpolation using Geodesic Distance Weighted 2D Autoregressive Modeling

    No full text
    Least squares regression has been widely used in image interpolation. Some existing regression-based interpolation methods use ordinary least squares (OLS) to formulate their cost functions. These methods usually have difficulties at object boundaries because OLS is sensitive to outliers. Weighted least squares (WLS) is then adopted to solve the outlier problem, and several weighting schemes have been proposed in the literature. In this paper we propose to use geodesic distance weighting, because geodesic distance simultaneously measures both spatial distance and color difference. Another contribution of this paper is an optimization scheme that can handle arbitrary-factor interpolation. The idea is to separate the problem into two parts: an adaptive pixel correlation model and a convolution-based image degradation model. A geodesic distance weighted 2D autoregressive model is used to model the pixel correlation, which preserves local geometry, while the convolution-based image degradation model provides the flexibility to handle arbitrary interpolation factors. The entire problem is formulated as a WLS problem constrained by a linear equality. © 2013 IEEE
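
    A compact sketch of the weighted-least-squares fit described above, assuming a 4-neighbor 2D autoregressive model, a small local window, and a bilateral-style weight standing in for the true path-based geodesic distance; the WLS solve is done by row scaling and numpy's lstsq. All parameters and the synthetic image are illustrative assumptions.

```python
import numpy as np

def geodesic_weight(center, neighbor, pos_c, pos_n, sigma_c=10.0, sigma_s=2.0):
    """Weight that decays with both color difference and spatial distance
    (a stand-in for the path-based geodesic distance used in the paper)."""
    dc = (float(center) - float(neighbor)) ** 2
    ds = (pos_c[0] - pos_n[0]) ** 2 + (pos_c[1] - pos_n[1]) ** 2
    return np.exp(-dc / (2 * sigma_c ** 2) - ds / (2 * sigma_s ** 2))

def fit_ar_coeffs(img, i, j, win=3):
    """Weighted least squares fit of a 4-neighbor 2D AR model around pixel (i, j):
    x[p, q] ~= a1*x[p-1, q] + a2*x[p+1, q] + a3*x[p, q-1] + a4*x[p, q+1]."""
    rows, rhs, weights = [], [], []
    for p in range(i - win, i + win + 1):
        for q in range(j - win, j + win + 1):
            rows.append([img[p - 1, q], img[p + 1, q], img[p, q - 1], img[p, q + 1]])
            rhs.append(img[p, q])
            weights.append(geodesic_weight(img[i, j], img[p, q], (i, j), (p, q)))
    A, y, w = np.array(rows), np.array(rhs), np.sqrt(np.array(weights))
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)   # WLS via row scaling
    return coeffs

rng = np.random.default_rng(5)
img = np.cumsum(rng.normal(size=(32, 32)), axis=0)    # synthetic low-resolution image
print("AR coefficients at (16, 16):", fit_ar_coeffs(img, 16, 16).round(3))
```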