2,172 research outputs found

    Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    Full text link
    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood, and numerical inverse extra-regularization scheme, are derived and classified. The Bayesian methodology presented in this paper aims to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy, and inverse regularization techniques. The inverse techniques considered here are asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, and Landweber-Fridman methods, and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel conjugate gradients. The structures of the best-performing algorithms to date are presented, based on an operator scheme that permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with 1-, 2-, and 3-dimensional problems including structured white and Poissonian noise, data windowing, and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods will ultimately enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark-matter density field, the peculiar velocity field, and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied to correct the redshift distortions of the observed galaxies and to perform time-reversal reconstructions of the initial density field. Comment: 40 pages, 11 figures
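As a minimal sketch of the operator-based approach described in this abstract (not the ARGO implementation itself), the Wiener filter for a trivial response operator and white noise can be solved by conjugate gradients, with the signal covariance applied diagonally in Fourier space via FFTs. The function name, the identity response, and the stopping parameters below are our own illustrative choices.

```python
import numpy as np

def wiener_cg(d, P, sigma_n, n_iter=100, tol=1e-10):
    """Solve (S^-1 + N^-1) s = N^-1 d by conjugate gradients.

    S is diagonal in Fourier space (power spectrum P), N = sigma_n^2 I is
    diagonal in pixel space, so each operator application costs one FFT pair.
    Assumes a unit response operator; P must be strictly positive.
    """
    Ninv = 1.0 / sigma_n**2

    def A(x):
        # Apply S^-1 in Fourier space, N^-1 in pixel space.
        Sinv_x = np.fft.ifft(np.fft.fft(x) / P).real
        return Sinv_x + Ninv * x

    b = Ninv * d
    s = np.zeros_like(d, dtype=float)
    r = b - A(s)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        s += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return s
```

With a flat unit power spectrum the filter reduces to simple shrinkage by the signal-to-noise ratio, which makes the solver easy to sanity-check.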

    Graph Spectral Image Processing

    Full text link
    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or an image patch) as a signal on a graph and apply GSP tools to process and analyze the signal in the graph spectral domain. In this article, we review recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
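A minimal illustration of the idea, assuming a 4-connected pixel graph with Gaussian intensity-similarity weights (one common construction, not necessarily the article's): build the graph Laplacian, transform the patch into the graph spectral domain, and low-pass filter it there.

```python
import numpy as np

def graph_lowpass(patch, sigma=0.1, keep=8):
    """Smooth an image patch in the graph spectral domain.

    Pixels are nodes of a 4-connected grid; edge weights reflect intensity
    similarity, so the filter smooths flat regions while respecting edges.
    `sigma` and `keep` are illustrative parameters.
    """
    h, w = patch.shape
    n = h * w
    x = patch.ravel().astype(float)
    W = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            u = i * w + j
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbours
                ii, jj = i + di, j + dj
                if ii < h and jj < w:
                    v = ii * w + jj
                    wt = np.exp(-(x[u] - x[v]) ** 2 / (2 * sigma**2))
                    W[u, v] = W[v, u] = wt
    L = np.diag(W.sum(1)) - W        # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)       # eigenvectors = graph Fourier basis
    xhat = U.T @ x                   # forward graph Fourier transform
    xhat[keep:] = 0.0                # low-pass: drop high graph frequencies
    return (U @ xhat).reshape(h, w)
```

A constant patch lies in the Laplacian's nullspace, so it passes through unchanged, which is a quick correctness check for the construction.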

    A New Method for the Economic Laws of Extinction Using the Fox-Wright-type Function

    Get PDF
    In this note, we deal with the possibility of optimal economic extinction. We employ the Fox-Wright-type function to characterize the probability of transference from optimal selection to the economic laws of extinction. For the extinction, we utilize the fractional Poisson process.
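The probabilities of the fractional Poisson process can be written as a Fox-Wright-type series. A sketch of Laskin's series form follows; the truncation length is our illustrative choice, and this is not necessarily the parametrization used in the note. Setting mu = 1 recovers the ordinary Poisson distribution.

```python
import math

def frac_poisson_pmf(n, lam, t, mu, terms=80):
    """P(N_mu(t) = n) for the fractional Poisson process via the series

        P(n, t) = (z^n / n!) * sum_k [(k+n)!/k!] * (-z)^k / Gamma(mu*(k+n)+1)

    with z = lam * t^mu. The inner sum is a Fox-Wright-type series.
    """
    z = lam * t**mu
    s = 0.0
    for k in range(terms):
        s += (math.factorial(k + n) / math.factorial(k)) * \
             ((-z) ** k / math.gamma(mu * (k + n) + 1))
    return (z**n / math.factorial(n)) * s
```

For mu = 1 each term collapses to the exponential series, so the result matches the classical Poisson probability mass function term by term.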

    Modelling of Hybrid Meta heuristic Based Parameter Optimizers with Deep Convolutional Neural Network for Mammogram Cancer Detection

    Get PDF
    Breast cancer (BC) is the most common type of cancer among females. Mortality from BC can be decreased by identifying and diagnosing it at an earlier phase. Different imaging modalities are used to detect BC, such as mammography. Even with a proven record as a BC screening tool, mammography is time-consuming and has constraints, namely lower sensitivity in women with dense breast tissue. A Computer-Aided Diagnosis or Detection (CAD) system assists a proficient radiologist in identifying BC at an earlier stage. Recently, advances in deep learning (DL) methods have been applied to mammography to help radiologists increase accuracy and efficiency. Therefore, this study presents a metaheuristic-based hyperparameter optimization with deep learning-based breast cancer detection on mammogram images (MHODL-BCDMI) technique. The presented MHODL-BCDMI technique mainly focuses on the recognition and classification of breast cancer on digital mammograms. To achieve this, the MHODL-BCDMI technique employs pre-processing in two stages: Wiener filter (WF) based noise elimination and contrast enhancement. Besides, the MHODL-BCDMI technique exploits the densely connected network (DenseNet201) model for feature extraction purposes. For BC classification and detection, a hybrid convolutional neural network with a gated recurrent unit (HCNN-GRU) model is used. Furthermore, three hyperparameter optimizers are employed, namely cat swarm optimization (CSO), the harmony search algorithm (HSA), and the hybrid grey wolf whale optimization algorithm (HGWWOA). Finally, the U2Net segmentation approach is used for the classification of benign and malignant types of cancer. The experimental analysis of the MHODL-BCDMI method is tested on a digital mammogram image dataset and the outcomes are assessed in terms of diverse metrics. The simulation results highlight the enhanced cancer detection performance of the MHODL-BCDMI technique over other recent algorithms.
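The two pre-processing stages can be sketched in a few lines. This is a generic adaptive Wiener-type (Lee) denoiser followed by a min-max contrast stretch; the window size and noise variance are chosen for illustration and are not the paper's exact settings.

```python
import numpy as np

def box_mean(a, k):
    """Mean over a (2k+1) x (2k+1) window with edge padding."""
    p = np.pad(a, k, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            out += p[k + di:k + di + a.shape[0], k + dj:k + dj + a.shape[1]]
    return out / (2 * k + 1) ** 2

def preprocess(img, noise_var=0.01, k=1):
    """Stage 1: adaptive Wiener-type denoising. Stage 2: contrast stretch."""
    x = img.astype(float)
    m = box_mean(x, k)
    v = box_mean(x * x, k) - m * m
    # Wiener gain: attenuate toward the local mean where variance ~ noise.
    gain = np.maximum(v - noise_var, 0.0) / np.maximum(v, 1e-12)
    den = m + gain * (x - m)
    lo, hi = den.min(), den.max()
    return (den - lo) / max(hi - lo, 1e-12)   # min-max contrast stretch
```

The output is normalized to [0, 1], which is also the usual input range expected by a downstream CNN feature extractor.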

    Monitoring and analysis system for performance troubleshooting in data centers

    Get PDF
    It was not long ago. On Christmas Eve 2012, a war of troubleshooting began in Amazon data centers. It started at 12:24 PM with the mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which was not realized at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. In about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. For example, Netflix, which was using hundreds of Amazon ELB instances, experienced an extensive streaming outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention. As the Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time-consuming. To address the troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations which data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed, and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope. By running these anomaly detection algorithms in VScope, data center operators are notified when performance anomalies happen.
We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to a performance issue. VScope's capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application-to-system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a 'thin layer' that occupies no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone, and in one of the use cases explored in the dissertation, it operates with over 400% less perturbation than brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces. The experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
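The abstract does not spell out the anomaly detection algorithms themselves. A generic per-metric detector of the kind such a monitoring layer might run is a rolling z-score test over each metric stream; the function name, window size, and threshold below are illustrative.

```python
import numpy as np

def rolling_zscore_anomalies(series, window=30, threshold=3.0):
    """Flag points whose deviation from the trailing-window mean exceeds
    `threshold` standard deviations.

    A monitoring function of this shape can run independently per node and
    per metric, which keeps per-node overhead low on large overlays.
    """
    x = np.asarray(series, dtype=float)
    flags = np.zeros(x.size, dtype=bool)
    for i in range(window, x.size):
        w = x[i - window:i]          # trailing window, excludes point i
        mu, sd = w.mean(), w.std()
        if sd > 0 and abs(x[i] - mu) > threshold * sd:
            flags[i] = True
    return flags
```

In practice the flagged points would be forwarded to an analysis tier for correlation across components, rather than acted on individually.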