
    Some Contribution of Statistical Techniques in Big Data: A Review

    Big Data is a popular topic in research. Everyone is talking about big data, and it is believed that science, business, industry, government, society, etc. will undergo a thorough change under its impact. Big data refers to very large data sets with complex, hidden patterns and both structured and unstructured data, which are difficult to collect, store, and analyse for processing or results. Proper advanced techniques are therefore needed to gain knowledge from big data. Big data research faces major challenges in storage, processing, search, sharing, transfer, analysis, and visualisation. This paper discusses the fundamentals of big data, its issues and management, and the techniques applied to it, and presents a review of various advanced statistical techniques for handling the key applications of big data with large data sets. These advanced techniques handle structured as well as unstructured big data in different areas.

    Curriculum Change 2009-2010


    A Novel Site-Agnostic Multimodal Deep Learning Model to Identify Pro-Eating Disorder Content on Social Media

    Over the last decade, there has been a vast increase in eating disorder diagnoses and eating disorder-attributed deaths, reaching their zenith during the Covid-19 pandemic. This immense growth derived in part from the stressors of the pandemic but also from increased exposure to social media, which is rife with content that promotes eating disorders. Such content can induce eating disorders in viewers. This study aimed to create a multimodal deep learning model capable of determining whether a given social media post promotes eating disorders based on a combination of visual and textual data. A labeled dataset of Tweets was collected from Twitter, upon which twelve deep learning models were trained and tested. Based on model performance, the most effective model was the multimodal fusion of the RoBERTa natural language processing model and the MaxViT image classification model, attaining an accuracy of 95.9% and an F1 score of 0.959. The RoBERTa-MaxViT fusion model, deployed to classify an unlabeled dataset of posts from the social media sites Tumblr and Reddit, generated classifications similar to those of previous research studies that did not employ artificial intelligence, showing that artificial intelligence can develop insights congruent with those of researchers. Additionally, the model was used to conduct a time-series analysis of previously unseen Tweets from eight Twitter hashtags, revealing that the relative abundance of pro-eating disorder content has decreased drastically. However, since approximately 2018, pro-eating disorder content has either stopped declining or begun to rise again.
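    The fusion architecture described in the abstract can be approximated with off-the-shelf components. The sketch below is a minimal illustration, not the study's code: the specific checkpoints ("roberta-base", "maxvit_tiny_tf_224"), the pooling choices, and the two-layer classification head are all assumptions.

```python
# Minimal late-fusion sketch: RoBERTa text features + MaxViT image features,
# concatenated and fed to a small classification head (illustrative only;
# checkpoint names and head dimensions are assumptions, not the paper's code).
import torch
import torch.nn as nn
import timm
from transformers import RobertaModel

class FusionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.text_encoder = RobertaModel.from_pretrained("roberta-base")
        # num_classes=0 makes timm return pooled features instead of logits
        self.image_encoder = timm.create_model("maxvit_tiny_tf_224",
                                               pretrained=True, num_classes=0)
        fused_dim = self.text_encoder.config.hidden_size + self.image_encoder.num_features
        self.head = nn.Sequential(
            nn.Linear(fused_dim, 256), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(256, 2),  # pro-eating-disorder content vs. not
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        text = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).pooler_output
        image = self.image_encoder(pixel_values)           # (batch, num_features)
        return self.head(torch.cat([text, image], dim=-1))  # fused logits
```

    Late fusion by concatenating pooled features is only one of several fusion strategies; the twelve candidate models the study compared may have combined the modalities differently.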

    Predictive Modeling for Fair and Efficient Transaction Inclusion in Proof-of-Work Blockchain Systems

    This dissertation investigates the strategic integration of Proof-of-Work (PoW)-based blockchains and ML models to improve transaction inclusion, and consequently to shape transaction fees, for clients using cryptocurrencies such as Bitcoin. The research begins with an in-depth exploration of the Bitcoin fee market, focusing on the interdependence between users and miners and the emergence of a fee market in PoW-based blockchains. Our observations are used to formalize a transaction inclusion pattern. To support our research, we developed the Blockchain Analytics System (BAS) to acquire, store, and pre-process a local dataset of the Bitcoin blockchain. BAS employs various methods for data acquisition, including web scraping, web browser APIs, and direct access to the blockchain using Bitcoin Core software. We utilize time-series data analysis as a tool for predicting future trends, and transactions are sampled on a monthly basis with a fixed interval, incorporating a notion of relative time represented by block-creation epochs. We create a comprehensive model for transaction inclusion in a PoW-based blockchain system, with a focus on revenue and fairness. Revenue serves as an incentive for miners to participate in the network and validate transactions, while fairness ensures equal opportunity for all users to have their transactions included upon paying an adequate fee. The ML architecture used for prediction consists of three critical stages: the ingestion engine, the pre-processing stage, and the ML model. The ingestion engine processes and transforms raw data obtained from the blockchain, while the pre-processing phase transforms the data further into a form suitable for analysis, including feature extraction and additional processing to generate a complete dataset. Our ML model demonstrates its effectiveness in predicting transaction inclusion, with an accuracy of more than 90%. Such a model enables users to save at least 10% on transaction fees while maintaining a likelihood of inclusion above 80%. Furthermore, adopting such a model based on fairness and revenue shows that miners' average loss is never higher than 1.3%. Our research proves the efficacy of a formal transaction inclusion model and ML prototype in predicting transaction inclusion. The insights gained from our study shed light on the underlying mechanisms governing miners' decisions, improve the overall user experience, and enhance the trust and reliability of cryptocurrencies. Consequently, this enables Bitcoin users to better select suitable fees and predict transaction inclusion with notable precision, contributing to the continued growth and adoption of cryptocurrencies.
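    The three-stage architecture (ingestion, pre-processing, ML model) can be sketched end to end as below. This is a toy illustration under loud assumptions, not the dissertation's implementation: the input file, the feature set (fee rate, virtual size, mempool depth, difficulty-epoch index), the inclusion label, and the gradient-boosting classifier are all hypothetical stand-ins.

```python
# Illustrative sketch of the described pipeline: ingestion -> pre-processing ->
# ML model. All column names and the classifier choice are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def preprocess(raw: pd.DataFrame) -> pd.DataFrame:
    """Pre-processing stage: derive per-transaction features from raw chain data."""
    df = raw.copy()
    df["fee_rate"] = df["fee_sats"] / df["vsize"]   # sats per virtual byte
    df["epoch"] = df["block_height"] // 2016        # relative time as difficulty epochs (assumed)
    return df[["fee_rate", "vsize", "mempool_depth", "epoch", "included_next_block"]]

# Ingestion stage is assumed to have already produced this file,
# e.g. via Bitcoin Core RPC or web scraping (hypothetical dataset).
raw = pd.read_csv("bitcoin_txs.csv")
data = preprocess(raw)

X = data.drop(columns="included_next_block")
y = data["included_next_block"]                     # 1 if mined in the next block
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("inclusion-prediction accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

    A fee-selection front end would invert such a model: scan candidate fee rates and pick the lowest one whose predicted inclusion probability stays above the user's target (e.g. 80%).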

    The Proceedings of 14th Australian Information Security Management Conference, 5-6 December 2016, Edith Cowan University, Perth, Australia

    The annual Security Congress, run by the Security Research Institute at Edith Cowan University, includes the Australian Information Security and Management Conference. Now in its fourteenth year, the conference remains popular for its diverse content and mixture of technical research and discussion papers. The area of information security and management continues to be varied, as is reflected by the wide variety of subject matter covered by the papers this year. The conference has drawn interest and papers from within Australia and internationally. All submitted papers were subject to a double-blind peer review process. Fifteen papers were submitted from Australia and overseas, of which ten were accepted for final presentation and publication. We wish to thank the reviewers for kindly volunteering their time and expertise in support of this event. We would also like to thank the conference committee, who have organised yet another successful congress. Events such as this are impossible without the tireless efforts of such people in reviewing and editing the conference papers, and assisting with the planning, organisation and execution of the conference. To our sponsors also a vote of thanks for both the financial and moral support provided to the conference. Finally, thank you to the administrative and technical staff, and students of the ECU Security Research Institute for their contributions to the running of the conference.

    An evaluation of equity premium prediction using multiple kernel learning with financial features

    This paper introduces and extensively explores a forecasting procedure based on multivariate dynamic kernels to re-examine, under a non-linear framework, the experimental tests reported by Welch and Goyal (Review of Financial Studies 21(4), 1455-1508, 2008) showing that several variables proposed in the finance literature are of no use as exogenous information to predict the equity premium under linear regressions. For this non-linear approach to equity premium forecasting, kernel functions for time series are used with multiple kernel learning (MKL) in order to represent the relative importance of each of the variables. We find that, in general, the predictive capabilities of the MKL models do not improve consistently with the use of some or all of the variables, nor does predictability with single kernels, as determined by different resampling procedures that we implement and compare. This fact tends to corroborate the instability already observed by Welch and Goyal for the predictive power of exogenous variables, now in a non-linear modelling framework.
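    The core MKL mechanic, one kernel per candidate variable combined with non-negative weights, can be illustrated compactly. The sketch below is a toy stand-in under stated assumptions: it substitutes plain RBF kernels on synthetic data for the paper's multivariate dynamic time-series kernels, and replaces a proper MKL solver with a naive search over a few candidate weightings.

```python
# Toy multiple-kernel illustration: one RBF Gram matrix per predictor variable,
# combined with weights into a single precomputed kernel for SVR (not the
# paper's dynamic time-series kernels or its actual MKL optimizer).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # 3 candidate predictor variables (synthetic)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# One Gram matrix per variable
Ks = [rbf_kernel(X[:, [j]], X[:, [j]]) for j in range(X.shape[1])]

def combined(weights):
    """Weighted sum of per-variable kernels; weights encode variable importance."""
    return sum(w * K for w, K in zip(weights, Ks))

best = None
for w in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1/3, 1/3, 1/3)]:  # toy candidate weightings
    score = cross_val_score(SVR(kernel="precomputed"), combined(w), y, cv=3).mean()
    if best is None or score > best[0]:
        best = (score, w)

print("best kernel weights:", best[1], "CV R^2:", round(best[0], 3))
```

    In a real MKL procedure the weights are learned jointly with the predictor (for example by SimpleMKL-style alternating optimization) rather than grid-searched, but the learned weights play the same role: a near-zero weight signals a variable that contributes no predictive information.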

    Blockchain as a driver for transformations in the public sector

    Blockchain architecture, originally designed for Bitcoin, has revolutionized finance through decentralized transactions and secure data management. It has been utilized to maintain private citizen records, allowing data owners to grant access via the blockchain for direct communication. Despite its potential, this technology remains relatively unexplored by both citizens and the public sector. Through a thorough literature review, this article aims to shed light on this field. The research focus encompasses two key elements: (1) analyzing blockchain dimensions and (2) exploring its transformative impact on the public sector. The methodology involves an extensive meta-analysis of existing research on blockchain’s analytical aspects and its role in reshaping public administration. Additionally, a questionnaire is administered to Information Technologies (IT) experts in public services, comparing their perceptions with established scientific studies. The research’s core findings address various analysis dimensions, including regulatory risks, data management challenges, privacy concerns, and technological limitations. On the transformation front, organizations adopting blockchain technology anticipate enhanced networked services, fortified data security, operational efficiency, informed decision-making, and novel public services. The potential of blockchain to drive innovative services and safeguard data is widely acknowledged, yet organizations using blockchain are cautiously optimistic about its practical implications compared to those without.

    Data Analytics and Techniques: A Review

    Big data of different types, such as text and images, is rapidly generated from the internet and other applications. Dealing with this data using traditional methods is not practical, since it comes in various sizes and types and with different processing-speed requirements. Data analytics has therefore become an essential tool for big data applications, because it analyzes and extracts only the meaningful information. This paper presents several innovative methods that use data analytics techniques to improve the analysis process and data management. Furthermore, it discusses how the revolution in data analytics driven by artificial intelligence algorithms might improve many applications. In addition, critical challenges and research issues are identified from the limitations of the published papers, to help researchers distinguish between the various analytics techniques and develop highly consistent, logical, and information-rich analyses based on valuable features. The findings of this paper may also be used to identify the best methods in each sector covered by these publications, assist future researchers in conducting more systematic and comprehensive analyses, and identify areas for developing a unique or hybrid technique for data analysis.