
    Reliability Evaluation of Direct Current Distribution System for Intelligent Buildings Based on Big Data Analysis

    In intelligent buildings, power is distributed in direct current (DC) mode, which is more energy-efficient than the traditional alternating current (AC) mode. However, the DC distribution system for intelligent buildings faces many problems, such as the stochasticity and intermittency of distributed generation, as well as the uncertain reliability of key supply and distribution devices. To solve these problems, this paper evaluates and predicts the reliability of the DC distribution system for intelligent buildings through big data analysis. First, the authors identified the sources of big data on the DC distribution system for reliability analysis and constructed a scientific evaluation index system. Then, association rules were mined from the original data on the evaluation indices with MapReduce, and a reliability evaluation model was established based on a Bayesian network. Finally, the proposed model was validated through experiments. The research provides a reference for reliability evaluation of DC distribution systems in various fields.
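    As a rough illustration of the modeling step, the sketch below builds a toy Bayesian network for device reliability, assuming the pgmpy library; the node names, network structure, and probabilities are hypothetical stand-ins, not values from the paper.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical structure: converter health and distributed-generation (DG)
# stability both influence overall system reliability.
model = BayesianNetwork([("Converter", "Reliable"), ("DG", "Reliable")])

cpd_conv = TabularCPD("Converter", 2, [[0.95], [0.05]])  # P(ok), P(fault)
cpd_dg = TabularCPD("DG", 2, [[0.80], [0.20]])           # P(stable), P(intermittent)
cpd_rel = TabularCPD(
    "Reliable", 2,
    [[0.99, 0.90, 0.85, 0.60],   # P(reliable | Converter, DG)
     [0.01, 0.10, 0.15, 0.40]],  # P(unreliable | Converter, DG)
    evidence=["Converter", "DG"], evidence_card=[2, 2])
model.add_cpds(cpd_conv, cpd_dg, cpd_rel)

# Posterior reliability given an observed intermittent DG source.
print(VariableElimination(model).query(["Reliable"], evidence={"DG": 1}))
```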

    An Analysis of the Potential Applications of Big Data Analytics (BDA) in Supply Chain Management: Emerging Market Perspective

    Big Data is defined as the techniques, technologies, systems, practices, methodologies, and applications that analyze critical business data to help an enterprise better understand its business and market and make timely business decisions. Big Data can be utilized to gain critical, fundamental insights that make supply chain decisions more effective and efficient. In recent years, therefore, researchers and practitioners have tried to measure the capability of Big Data to optimize Supply Chain Management (SCM) efficiency. This research attempts to provide a clear understanding of Big Data applications in Supply Chain Management in emerging markets, especially in Bangladesh, focusing primarily on four key areas: reducing inventory cost, attaining cost leadership, improving customer service, and enhancing speed of delivery. To investigate the potential applications of Big Data in supply chain management, a qualitative study was conducted: ten in-depth interviews and a case study were carried out to collect relevant information from the supply chain experts of the selected firms, and thematic analysis and hermeneutic iterative methods were used for analysis. The results indicate that the supply chains of both physical products and services can benefit from Big Data analytics. The study also revealed that Big Data can be applied in SCM for operational and development purposes, including value discovery, value creation, and value capture. This study should help decision makers and practitioners of Supply Chain Management in diverse fields adopt Big Data to improve their organizations' performance and sustainability. Keywords: Big Data analytics, Supply Chain Management, applications, emerging markets

    Big Data and Its Applications in Smart Real Estate and the Disaster Management Life Cycle: A Systematic Analysis

    Big data refers to the enormous amounts of data generated daily in different fields due to the increased use of technology and internet sources. Despite various advancements and the hope of better understanding, big data management and analysis remain a challenge, calling for more rigorous and detailed research and for the identification of methods and ways in which big data can be tackled and put to good use. The existing research falls short in discussing and evaluating the pertinent tools and technologies for analyzing big data efficiently, which calls for a comprehensive and holistic analysis of the published articles to summarize the concept of big data and survey field-specific applications. To address this gap and keep a recent focus, research articles published in the last decade in top-tier, high-impact journals were retrieved using Google Scholar, Scopus, and Web of Science and narrowed down to a set of 139 relevant research articles. Different analyses were conducted on the retrieved papers, including bibliometric analysis, keyword analysis, big data search trends, and the authors' names, countries, and affiliated institutes contributing the most to the field of big data. The comparative analyses show that, conceptually, big data lies at the intersection of the storage, statistics, technology, and research fields and emerged as an amalgam of these four fields, with interlinked aspects such as data hosting and computing, data management, data refining, data patterns, and machine learning. The results further show that the major characteristics of big data can be summarized using the seven Vs: variety, volume, variability, value, visualization, veracity, and velocity. Furthermore, the existing methods for big data analysis, their shortcomings, and possible directions for upgrading data analysis tools to be fast and efficient were also explored. The major challenges in handling big data include efficient storage, retrieval, analysis, and visualization of large heterogeneous data, which can be tackled through authentication such as Kerberos and encrypted files, logging of attacks, secure communication through Secure Sockets Layer (SSL) and Transport Layer Security (TLS), data imputation, building learning models, dividing computations into sub-tasks, checkpointing recursive tasks, and using Solid State Drives (SSD) and Phase Change Material (PCM) for storage. In terms of frameworks for big data management, two main frameworks exist, Hadoop and Apache Spark, which must be used together to capture the holistic essence of the data and make the analyses meaningful and swift. Further field-specific applications of big data in two promising and integrated fields, i.e., smart real estate and disaster management, were investigated, and a framework for field-specific applications, as well as a merger of the two areas through big data, was highlighted. The proposed frameworks show that big data can tackle the ever-present issue of customer regret caused by poor or missing information in smart real estate, increasing customer satisfaction through an intermediate organization that processes and checks the data provided to customers by sellers and real estate managers.
Similarly, for disaster risk management, data from social media, drones, multimedia, and search engines can be used to tackle natural disasters such as floods, bushfires, and earthquakes, as well as to plan emergency responses. In addition, a merger framework for smart real estate and disaster risk management shows that big data generated from smart real estate, in the form of occupant data, facilities management, and building integration and maintenance, can be shared with disaster risk management and emergency response teams to help prevent, prepare for, respond to, or recover from disasters.
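    To make the Spark side of the framework discussion concrete, here is a minimal PySpark sketch of a distributed aggregation over smart real estate records; the file name listings.csv and its columns (suburb, price, flood_risk) are hypothetical, not data from the paper.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("smart-real-estate").getOrCreate()

# Hypothetical input: one row per property listing.
listings = spark.read.csv("listings.csv", header=True, inferSchema=True)

# Summarize price and disaster-risk signals per suburb in one distributed pass.
summary = (listings
           .groupBy("suburb")
           .agg(F.avg("price").alias("avg_price"),
                F.max("flood_risk").alias("worst_flood_risk")))
summary.show()
spark.stop()
```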

    Real-time satellite data processing platform architecture

    Remote sensing satellites produce massive amounts of data about the Earth every day. This Earth observation data can be used to solve real-world problems in many different fields. The Finnish space data company Terramonitor has been using satellite data to produce new information for its customers. The process for producing valuable information includes finding raw data, analysing it, and visualizing it according to the client's needs. This process contains a significant amount of manual work done at local workstations. Because satellite data can quickly become very big, it is not efficient to use unscalable processes that involve long waiting times. This thesis addresses the problem by introducing an architecture for a cloud-based real-time processing platform that allows satellite image analysis to be performed in a cloud environment. The architectural model is built using microservice patterns to ensure that the solution scales to match changing demand.
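    As one possible reading of the microservice pattern described above, the sketch below shows a single stateless job-submission service built with FastAPI; the endpoint, field names, and operation names are illustrative assumptions, not part of the thesis.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class JobRequest(BaseModel):
    scene_id: str    # hypothetical satellite scene identifier
    operation: str   # e.g. "ndvi" or "cloud_mask" (illustrative names)

@app.post("/jobs")
def submit_job(job: JobRequest) -> dict:
    # A real platform would enqueue the job on a message broker so that
    # processing workers can scale independently with demand; this stub
    # only acknowledges receipt.
    return {"scene_id": job.scene_id, "operation": job.operation,
            "status": "queued"}
```

    In such a design, each pipeline stage (ingestion, analysis, visualization) would sit behind an interface like this one, which is what lets the platform scale horizontally instead of relying on manual work at local workstations.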

    An IT Professional Talents Training Model in Colleges Based on Animal Cell Structure

    Against the current background of big data and cloud computing, there is huge demand for professionals in related fields such as information technology (IT). To meet this demand, this paper puts forward an IT professional talent training model based on animal cell structure, comparing the structure of animal cells and their efficient operating principle with the IT professional training model system. Following the efficient working principle of 'Nucleus-Cytoplasm-Environment', the model is built as a 'Class (The Core)-College (Internal Environment)-Enterprise (External Environment)' training model for IT-majored students. The motivation is to cultivate students' abilities in four aspects: structure, application, analysis, and innovation, with theory teaching as the core, college practice training as the pulling force, and enterprise project resources as the pushing force. The reliability and validity of this model have been demonstrated by simulation results at Wuhan University of Science and Technology.

    Parallel and Streaming Wavelet Neural Networks for Classification and Regression under Apache Spark

    Wavelet neural networks (WNN) have been applied in many fields to solve regression as well as classification problems. After the advent of big data, as data is generated at a brisk pace, it is imperative to analyze it as soon as it is generated, because the nature of the data may change dramatically in short time intervals. Big data is all-pervasive and poses computational challenges for data scientists. Therefore, in this paper, we built an efficient Scalable, Parallelized Wavelet Neural Network (SPWNN) that employs the parallel stochastic gradient descent (SGD) algorithm. SPWNN is designed and developed under both static and streaming environments in the horizontal parallelization framework, and is implemented using Morlet and Gaussian functions as activation functions. The study is conducted on big datasets such as gas sensor data, which has more than 4 million samples, and medical research data, which has more than 10,000 features and is high-dimensional in nature. The experimental analysis indicates that in the static environment, SPWNN with the Morlet activation function outperformed SPWNN with Gaussian on the classification datasets, while the opposite was observed for regression. In the streaming environment, by contrast, Gaussian outperformed Morlet on the classification datasets and Morlet outperformed Gaussian on the regression datasets. Overall, the proposed SPWNN architecture achieved a speedup of 1.32-1.40.
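    For intuition about the two activation choices, the NumPy sketch below defines Morlet and Gaussian activations and runs plain SGD on the output weights of a one-hidden-layer wavelet network; the shapes, learning rate, and random data are hypothetical, and the paper's parallel version would run such update steps concurrently over Spark data partitions rather than in a single loop.

```python
import numpy as np

def morlet(x):
    # Morlet wavelet: a cosine modulated by a Gaussian envelope.
    return np.cos(5.0 * x) * np.exp(-0.5 * x ** 2)

def gaussian(x):
    # Plain Gaussian bump, the other activation compared in the paper.
    return np.exp(-0.5 * x ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))        # toy mini-batch: 256 samples, 8 features
y = rng.normal(size=(256, 1))
W = 0.1 * rng.normal(size=(8, 16))   # input -> wavelet hidden layer
V = 0.1 * rng.normal(size=(16, 1))   # hidden layer -> linear output

for _ in range(100):
    H = morlet(X @ W)                 # swap in `gaussian` to compare
    err = H @ V - y                   # residual on the mini-batch
    V -= 0.01 * (H.T @ err) / len(X)  # SGD step on the output weights only
```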

    Topological Data Analysis of High-dimensional Correlation Structures with Applications in Epigenetics

    This thesis comprises a comprehensive study of the correlation of high-dimensional datasets from a topological perspective. Motivated by the lack of efficient algorithms for big data analysis and by the importance of finding a structure of correlations in genomics, we have developed two analytical tools, inspired by the topological data analysis approach, that describe and predict the behavior of the correlation structure. These models allowed us to study epigenetic interactions from local and global perspectives, taking into account different levels of complexity. We applied graph-theoretic and algebraic topology principles to quantify structural patterns in local correlation networks and, based on them, proposed a network model able to predict the locally high correlations of DNA methylation data. This model provided an efficient tool to measure the evolution of the correlation with the aging process. Furthermore, we developed a powerful computational algorithm to analyze the correlation structure globally that was able to detect differentiated methylation patterns across sample groups. This methodology is intended to serve as a diagnostic tool, as it provides selected epigenetic biomarkers associated with a specific phenotype of interest. Overall, this work establishes a novel perspective for the analysis and modulation of hidden correlation structures, specifically those of great dimension and complexity, contributing to the understanding of epigenetic processes, and is designed to be useful for non-biological fields as well.
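    As a small illustration of the local-network step, the sketch below thresholds a correlation matrix into a graph and computes simple Betti-style counts with NumPy and networkx; the random data, threshold, and summaries are hypothetical simplifications, not the thesis's actual methodology.

```python
import numpy as np
import networkx as nx

# Hypothetical methylation matrix: rows = samples, columns = CpG sites.
rng = np.random.default_rng(1)
data = rng.normal(size=(50, 30))

corr = np.corrcoef(data.T)    # site-by-site correlation matrix
threshold = 0.3               # illustrative cut-off

# Correlation network: connect sites whose |correlation| exceeds the cut-off.
G = nx.Graph()
G.add_nodes_from(range(corr.shape[0]))
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[0]):
        if abs(corr[i, j]) > threshold:
            G.add_edge(i, j)

# Structural summaries of the network's 1-skeleton:
b0 = nx.number_connected_components(G)               # connected components
b1 = G.number_of_edges() - G.number_of_nodes() + b0  # independent cycles
print(b0, b1)
```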