16 research outputs found

    Big data compression processing and verification based on Hive for smart substation


    Big Data solutions on a small scale: Evaluating accessible high-performance computing for social research

    Though full of promise, Big Data research success is often contingent on access to the newest, most advanced, and often expensive hardware systems and the expertise needed to build and implement such systems. As a result, access to the growing number of Big Data-capable technology solutions has often been the preserve of business analytics. Pay-as-you-store/process services like Amazon Web Services have opened up possibilities for smaller-scale Big Data projects. There is high demand for this type of research in the digital humanities and digital sociology, for example. However, scholars are increasingly finding themselves at a disadvantage as available data sets of interest continue to grow in size and complexity. Without substantial funding or the ability to form interdisciplinary partnerships, only a select few find themselves in a position to engage successfully with Big Data. This article identifies several notable and popular Big Data technologies typically implemented on large and extremely powerful cloud-based systems, and investigates the feasibility and utility of developing Big Data analytics systems on low-cost commodity hardware in basic, easily maintainable configurations for use in academic social research. Through our investigation and an experimental case study (in the growing field of social Twitter analytics), we found that solutions like Cloudera’s Hadoop are not only feasible but can also enable robust, deep, and fruitful research outcomes in a variety of use-case scenarios across the disciplines.
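    The commodity-cluster approach evaluated here typically runs batch jobs over collected tweets. As a minimal illustration of that kind of workload (our own sketch, not the authors' code), the Python script below counts hashtag frequencies in tweet records using the mapper/reducer pattern of a Hadoop Streaming job; the one-JSON-object-per-line layout and the "text" field are hypothetical assumptions.

```python
#!/usr/bin/env python3
# Hadoop Streaming-style hashtag count over tweets: a minimal sketch.
# Assumes (hypothetically) one tweet per line as JSON with a "text" field.
import json
import re
import sys
from collections import Counter

HASHTAG = re.compile(r"#\w+")

def mapper(lines):
    """Emit (hashtag, 1) pairs, one per hashtag occurrence."""
    for line in lines:
        try:
            text = json.loads(line).get("text", "")
        except json.JSONDecodeError:
            continue  # skip malformed records
        for tag in HASHTAG.findall(text.lower()):
            yield tag, 1

def reducer(pairs):
    """Sum counts per hashtag (Hadoop would shuffle/group keys between phases)."""
    totals = Counter()
    for tag, count in pairs:
        totals[tag] += count
    return totals

if __name__ == "__main__":
    # Local stand-in for the map and reduce phases of a streaming job.
    for tag, total in reducer(mapper(sys.stdin)).most_common(20):
        print(f"{tag}\t{total}")
```

    On a cluster, the same mapper and reducer would be shipped to Hadoop Streaming unchanged; run locally, the script reads tweets from standard input and prints the twenty most frequent hashtags.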

    Large-scale unit commitment under uncertainty: an updated literature survey

    The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially in view of the fact that, due to operational requirements, it has to be solved in an unreasonably small time for its size. Recently, growing renewable energy shares have strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex, and uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focussing on those based on mathematical programming techniques that are most relevant for the uncertain versions. We then present and categorize the approaches to the latter, while providing entry points to the relevant literature on optimization under uncertainty. This is an updated version of the paper "Large-scale Unit Commitment under uncertainty: a literature survey" that appeared in 4OR 13(2), 115–171 (2015); this version has over 170 more citations, most of which appeared in the last three years, showing how fast the literature on uncertain Unit Commitment evolves and how strong the interest in this subject remains.
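    To make the deterministic version of the problem concrete, the sketch below states a toy Unit Commitment instance as a mixed-integer program in Python with PuLP. The unit data, the linear cost structure, and the omission of ramping and minimum up/down-time constraints are simplifying assumptions of ours, not taken from the survey.

```python
# Toy deterministic Unit Commitment as a MILP, sketched with PuLP.
# Assumptions (ours, not the survey's): 3 units, 4 periods, linear fuel
# cost plus a fixed commitment cost; no ramping or min up/down times.
import pulp

units = ["g1", "g2", "g3"]
periods = range(4)
pmin = {"g1": 50, "g2": 20, "g3": 10}      # MW, minimum stable output
pmax = {"g1": 200, "g2": 100, "g3": 50}    # MW, capacity
cvar = {"g1": 20, "g2": 30, "g3": 50}      # $/MWh, variable cost
cfix = {"g1": 500, "g2": 200, "g3": 100}   # $ per committed period
demand = [150, 240, 300, 180]              # MW per period

prob = pulp.LpProblem("unit_commitment", pulp.LpMinimize)
u = pulp.LpVariable.dicts("u", (units, periods), cat="Binary")  # on/off
p = pulp.LpVariable.dicts("p", (units, periods), lowBound=0)    # output

# Objective: variable generation cost plus fixed commitment cost.
prob += pulp.lpSum(cvar[g] * p[g][t] + cfix[g] * u[g][t]
                   for g in units for t in periods)

for t in periods:
    # System-wide demand balance in each period.
    prob += pulp.lpSum(p[g][t] for g in units) == demand[t]
    for g in units:
        # Output bounds are active only when the unit is committed.
        prob += p[g][t] >= pmin[g] * u[g][t]
        prob += p[g][t] <= pmax[g] * u[g][t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in periods:
    sched = {g: pulp.value(p[g][t]) for g in units
             if pulp.value(u[g][t]) > 0.5}
    print(f"t={t}: {sched}")
```

    The uncertain variants surveyed in the paper replace the fixed demand vector with scenarios, uncertainty sets, or probabilistic constraints, which is what turns this small MILP into a large-scale stochastic, robust, or chance-constrained program.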

    Multi-stage cascaded deconvolution for depth map and surface normal prediction from single image

    Understanding the 3D perspective of a scene is imperative for improving the precision of intelligent autonomous systems. The difficulty is compounded when only one image of the scene is available. In this regard, we propose a fully convolutional deep framework for predicting the depth map and surface normal from a single RGB image in a common architecture. The DenseNet CNN architecture is employed to learn the complex mapping between an input RGB image and its corresponding 3D primitives. We introduce a novel approach of multi-stage cascaded deconvolution, where the output feature maps of one dense block are reused by concatenating them with the feature maps of the corresponding deconvolution block. These combined feature maps are propagated along the deep network in a pre-activated manner to construct the final output. The network is trained separately for estimating depth and surface normal while keeping the architecture the same. The suggested architecture, compared to its counterparts, uses fewer training samples and model parameters. Exhaustive experiments on a benchmark dataset not only reveal the efficacy of the proposed multi-stage scheme over one-way sequential deconvolution but also show that it outperforms state-of-the-art methods.
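    The cascaded deconvolution idea, as described, concatenates each dense block's feature maps with the output of the matching deconvolution block and applies pre-activation before fusing them. The PyTorch sketch below is our simplified reading of one such stage; the channel sizes, layer counts, and module names are illustrative assumptions, not the paper's exact architecture.

```python
# Simplified sketch of one cascaded deconvolution stage in PyTorch:
# encoder (dense block) features are concatenated with the decoder's
# upsampled features and processed in a pre-activated (BN-ReLU-conv)
# manner. Channel sizes are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class CascadedDeconvBlock(nn.Module):
    def __init__(self, dec_channels, enc_channels, out_channels):
        super().__init__()
        # Upsample decoder features by a factor of 2.
        self.deconv = nn.ConvTranspose2d(dec_channels, out_channels,
                                         kernel_size=4, stride=2, padding=1)
        # Pre-activation (BN then ReLU) applied to the concatenated maps
        # before the fusing convolution.
        fused = out_channels + enc_channels
        self.pre_act = nn.Sequential(nn.BatchNorm2d(fused),
                                     nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(fused, out_channels, kernel_size=3, padding=1)

    def forward(self, dec_feat, enc_feat):
        up = self.deconv(dec_feat)              # upsampled decoder maps
        cat = torch.cat([up, enc_feat], dim=1)  # reuse dense-block features
        return self.fuse(self.pre_act(cat))     # pre-activated fusion

# Shape check with dummy tensors (batch 1, illustrative channel counts).
block = CascadedDeconvBlock(dec_channels=256, enc_channels=128,
                            out_channels=128)
dec = torch.randn(1, 256, 30, 40)   # coarse decoder features
enc = torch.randn(1, 128, 60, 80)   # matching dense-block features
print(block(dec, enc).shape)        # torch.Size([1, 128, 60, 80])
```

    Stacking several such blocks, each consuming the features of the corresponding dense block at increasing resolution, yields the multi-stage cascade; a final convolution would map the last stage's features to a one-channel depth map or a three-channel normal map.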