
    Short Term Load Forecasting for Smart Grids Using Apache Spark and a Modified Transformer Model

    Smart grid is an advanced electrical grid that enables more efficient distribution of electricity. It counters many of the problems presented by renewable energy sources, such as variability in production, through techniques like load forecasting and dynamic pricing. The smart grid generates massive amounts of data through smart meters; this data is used to forecast future load so that distribution can be adjusted, and processing it requires big data analysis. Most existing schemes use Apache Hadoop for big data processing together with load-forecasting techniques based on statistical theory, machine learning, or deep learning. This paper proposes using Apache Spark for big data analysis and a modified version of the transformer model for forecasting household load profiles. The proposed scheme was tested against several baseline and state-of-the-art machine learning models and evaluated in terms of RMSE, MAE, MedAE, and R2 scores. The results show that the proposed model performs better in terms of RMSE and R2, which are the preferred metrics when evaluating a regression model on data with a large number of outliers.
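    The four evaluation metrics named in the abstract (RMSE, MAE, MedAE, and R2) can be sketched in plain Python; the function names and toy load values below are illustrative, not taken from the paper:

    ```python
    import math
    from statistics import mean, median

    def rmse(y_true, y_pred):
        # Root mean squared error: penalizes large (outlier) errors quadratically
        return math.sqrt(mean((t - p) ** 2 for t, p in zip(y_true, y_pred)))

    def mae(y_true, y_pred):
        # Mean absolute error: average magnitude of the errors
        return mean(abs(t - p) for t, p in zip(y_true, y_pred))

    def medae(y_true, y_pred):
        # Median absolute error: robust to a small number of outliers
        return median(abs(t - p) for t, p in zip(y_true, y_pred))

    def r2(y_true, y_pred):
        # Coefficient of determination: 1 - (residual SS / total SS)
        y_bar = mean(y_true)
        ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
        ss_tot = sum((t - y_bar) ** 2 for t in y_true)
        return 1.0 - ss_res / ss_tot

    # Toy hourly household load (kW) against model predictions
    actual = [3.0, 5.0, 7.0]
    predicted = [2.0, 5.0, 8.0]
    print(round(r2(actual, predicted), 2))  # 0.75
    ```

    RMSE and R2 both square the residuals, which is why the abstract singles them out as the preferred metrics when outliers dominate the error budget.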

    The Impact of Artificial Intelligence on Trustworthiness and Authenticity in Green Influencer Marketing

    In recent times, influencer marketing has become a crucial component of digital marketing strategy. In the social media age, influencers have become key voices across many fields, and influencer marketing initiatives are increasingly successful. Influencers have started integrating AI into their content creation, analysis, and audience engagement processes, which can benefit them greatly but also presents new challenges for authenticity and genuineness. This research identifies the positive impact of artificial intelligence on influencer marketing, while also examining the challenges faced in green influencer marketing and how AI affects its perceived trustworthiness and authenticity across major social media platforms, including Instagram and YouTube. Through analysis of sustainability-focused posts from influencers and survey responses from their followers, this paper examines the difference in impact between AI-enhanced content and authentic content. It also identifies how followers' preferences regarding AI shape their responses and how this affects trust, particularly in AI-generated content. The findings contribute to emerging theories of digital authenticity in social media marketing.

    Efficient Encoders for Streaming Sequence Tagging

    A naive application of state-of-the-art bidirectional encoders to streaming sequence tagging would require encoding every token from scratch each time a new token arrives in an incremental streaming input (such as transcribed speech). This lack of reuse of previous computation leads to a higher number of floating point operations (FLOPs) and more unnecessary label flips. Increased FLOPs translate into higher wall-clock time, and increased label flipping into poorer streaming performance. In this work, we present the Hybrid Encoder with Adaptive Restart (HEAR), which addresses these issues by maintaining the performance of bidirectional encoders on offline (complete) inputs while improving performance on streaming (incomplete) inputs. HEAR combines a hybrid unidirectional-bidirectional encoder architecture for sequence tagging with an Adaptive Restart Module (ARM) that selectively guides restarts of the bidirectional portion of the encoder. Across four sequence tagging tasks, HEAR offers FLOP savings of up to 71.1% in streaming settings and outperforms bidirectional encoders on streaming predictions by up to +10% streaming exact match. Comment: EACL 202
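    The FLOP argument can be illustrated with a toy cost-accounting sketch. The restart points and unit costs below are illustrative assumptions, not the paper's actual ARM policy or encoder costs:

    ```python
    def naive_flops(n_tokens, cost_per_token=1):
        # Naive streaming with a bidirectional encoder: at step i, all i
        # tokens of the prefix are re-encoded from scratch.
        return sum(i * cost_per_token for i in range(1, n_tokens + 1))

    def hear_like_flops(n_tokens, restart_steps, cost_per_token=1):
        # Hybrid sketch: the unidirectional encoder processes each new token
        # once; the bidirectional portion re-runs over the full prefix only
        # at the (adaptively chosen) restart steps.
        uni = n_tokens * cost_per_token
        bi = sum(i * cost_per_token for i in restart_steps)
        return uni + bi

    n = 20
    restarts = [5, 10, 15, 20]  # illustrative adaptive-restart points
    savings = 1 - hear_like_flops(n, restarts) / naive_flops(n)
    print(f"FLOP savings: {savings:.0%}")  # FLOP savings: 67%
    ```

    The fewer restarts the ARM triggers, the closer the cost gets to a purely unidirectional encoder, which is where the reported streaming FLOP savings come from.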

    Object Based Augmented Reality Case Study- Literature Survey on Application based approach towards Augmented Reality

    This paper is about Augmented Reality (AR) using object-based visualization and its implementation on smartphone devices. AR employs computer vision, image processing, and computer graphics techniques to merge digital content into the real world, enabling real-time interaction between the user, real objects, and virtual objects. AR can, for example, be used to embed 2D graphics into a video as if the virtual elements were part of the real environment. In this work, we design AR-based software that addresses the problem of easy access to documents at a check post. One of the challenges of AR is aligning virtual data with the environment; a marker-based approach solves this using visual markers, e.g. 2D barcodes, detectable with computer vision methods.

    A Comparison of Multiple Machine Learning Algorithms to Predict Whole-Body Vibration Exposure of Dumper Operators in Iron Ore Mines in India

    Background: This study deals with factors that influence the whole-body vibration (WBV) exposure of dumper operators in surface mines. It also highlights an approach to improving multivariate linear analysis outcomes when collinearity exists between certain factor pairs. Material and Methods: A total of 130 vibration readings were taken from two adjacent surface iron ore mines. The frequency-weighted RMS acceleration was used for the WBV exposure assessment of the dumper operators. The factors considered were operator age, weight, seat backrest height, awkward posture, machine age, load tonnage, dumper speed, and haul road condition. Four machine learning models were explored through an empirical training-testing approach. Results: The bootstrap linear regression model was found to be the best model in terms of performance and predictability when compared with multiple linear regression, LASSO regression, and a decision tree. The results revealed that multiple factors influence WBV exposure. The significant factors were: weight of operators (regression coefficient β=-0.005, p<0.001), awkward posture (β=0.033, p<0.001), load tonnage (β=-0.026, p<0.05), dumper speed (β=0.008, p<0.001), and poor haul road condition (β=0.015, p<0.001). Conclusion: The bootstrap linear regression model produced efficient results for this dataset, which was characterized by collinearity. WBV exposure is multifactorial. Regular monitoring of WBV exposure and corrective actions through appropriate prevention programs, including ergonomic design of the seat, would improve the health and safety of operators.
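    Bootstrap linear regression, as named in the abstract, amounts to repeatedly resampling the data with replacement, refitting, and aggregating the coefficients. A single-predictor OLS toy on synthetic data (not the mine dataset) sketches the idea:

    ```python
    import random
    from statistics import mean

    def ols_slope_intercept(x, y):
        # Ordinary least squares for a single predictor
        x_bar, y_bar = mean(x), mean(y)
        sxx = sum((xi - x_bar) ** 2 for xi in x)
        sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
        slope = sxy / sxx
        return slope, y_bar - slope * x_bar

    def bootstrap_regression(x, y, n_boot=200, seed=42):
        # Fit OLS on n_boot resamples (drawn with replacement) and average
        # the coefficients; resampling stabilizes the estimates, which is
        # one reason the approach helps when predictors are collinear.
        rng = random.Random(seed)
        n = len(x)
        slopes, intercepts = [], []
        for _ in range(n_boot):
            idx = [rng.randrange(n) for _ in range(n)]
            s, b = ols_slope_intercept([x[i] for i in idx],
                                       [y[i] for i in idx])
            slopes.append(s)
            intercepts.append(b)
        return mean(slopes), mean(intercepts)

    # Synthetic exposure-like data: y = 0.5*x + noise
    rng = random.Random(0)
    x = [float(i) for i in range(30)]
    y = [0.5 * xi + rng.gauss(0, 0.3) for xi in x]
    slope, intercept = bootstrap_regression(x, y)
    print(round(slope, 1))
    ```

    The bootstrap distribution of the coefficients also yields confidence intervals, which is how significance of individual factors can be judged.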

    Formulation and evaluation of immediate release tablet of zopiclone using wet granulation method

    Zopiclone, a cyclopyrrolone, is a non-benzodiazepine derivative used as a hypnotic agent in the treatment of short-term insomnia. The main objective of the present investigation was to formulate pharmaceutically stable and bioequivalent immediate release (IR) tablets of zopiclone using the wet granulation method. The prepared formulations were evaluated using various physical parameters, a dissolution study, and drug release profiles. The basic approach used in the development of the zopiclone IR tablets was the use of superdisintegrants such as corn starch (maize) and sodium starch glycolate, which provide instant disintegration after administration. An in-vitro dissolution study was carried out for 1 hour using 0.1N HCl in a dissolution apparatus to evaluate drug release. On the basis of the dissolution profile, formulation F3 gave the best result, with 100% release in just 20 minutes; it was also found that as the polymer ratio increased, the drug release rate increased. Keywords: Hypnotic agent, immediate release, wet granulation method, non-benzodiazepine derivative, superdisintegrants, zopiclone

    Suture versus vessel sealer in vaginal hysterectomy: an observational study

    Background: The vaginal route is considered the method of choice for removal of the uterus and, in the absence of gross pelvic disease, can be used in most patients. Recent studies have shown that less than one-third of hysterectomies are performed vaginally, possibly due to the technical difficulties of operating in a narrow surgical field. This study was taken up to find easier alternatives for securing pedicles by using an electrosurgical bipolar vessel sealer in vaginal hysterectomy. Methods: A prospective observational study was conducted in the Department of Obstetrics and Gynaecology, BRD Medical College, Gorakhpur over a period of one year (July 2015 to June 2016). A total of 62 patients posted for vaginal hysterectomy for benign conditions were enrolled after informed consent. Results were recorded under the headings of procedure time (min), blood loss (ml), major intra-operative complications, post-operative complications, post-operative pain (on VAS), and duration of hospital stay. Results: Mean procedure time was 55.66 min in the suture group versus 27.75 min in the sealer group. Mean blood loss was 83.78 ml in the sealer group versus 156.62 ml in the suture group. Mean pain score on the Visual Analogue Scale on POD 1 was 8.44±1.1522 for the suture group and 6±1.325 for the sealer group. Mean pain score on POD 2 was 3.48±1.325 in the sealer group and 5.31±1.754 in the suture group. Blood loss >200 ml was observed in 29.03% of suture cases and in none of the sealer group (P-value .0006). Labial burn occurred in 2 of 32 patients in the sealer group. Conclusions: From the above study, we conclude that the bipolar vessel sealer showed a significant reduction in intra-operative blood loss, procedure time, immediate post-operative pain (POD 1, 2, and 3), mean length of hospital stay, and major intra-operative blood loss >200 ml, which occurred in a significant number of suture-group cases.

    AutoMix: Automatically Mixing Language Models

    Large language models (LLMs) are now available in various sizes and configurations from cloud API providers. While this diversity offers a broad spectrum of choices, effectively leveraging the options to optimize computational cost and performance remains challenging. In this work, we present AutoMix, an approach that strategically routes queries to larger LMs based on the approximate correctness of outputs from a smaller LM. Central to AutoMix is a few-shot self-verification mechanism, which estimates the reliability of the smaller LM's outputs without requiring training. Because verifications can be noisy, AutoMix employs a meta-verifier to refine the accuracy of these assessments. Our experiments using LLAMA2-13/70B on five context-grounded reasoning datasets demonstrate that AutoMix surpasses established baselines, improving the incremental benefit per cost by up to 89%. Our code and data are available at https://github.com/automix-llm/automix. Comment: The first two authors contributed equally. Work started and partly done during Aman's internship at Google. This version adds results on mixing 3 models, and will be presented at the workshop on robustness of zero/few-shot learning in foundation models, Neurips 202
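    The routing idea can be sketched as follows. The threshold, the stub model calls, and the vote-averaging "meta-verifier" here are illustrative stand-ins, not AutoMix's actual implementation (the real system uses few-shot LLM self-verification and a learned meta-verifier):

    ```python
    def route_query(query, small_lm, large_lm, verify, n_votes=3, threshold=0.5):
        # 1. Answer the query with the smaller model first.
        answer = small_lm(query)
        # 2. Self-verification: score the answer several times and average
        #    the noisy scores (a stand-in for the meta-verifier step).
        votes = [verify(query, answer) for _ in range(n_votes)]
        confidence = sum(votes) / len(votes)
        # 3. Escalate to the larger model only when confidence is low.
        if confidence >= threshold:
            return answer, "small"
        return large_lm(query), "large"

    # Stub models and verifiers for demonstration
    small = lambda q: "small-answer"
    large = lambda q: "large-answer"
    always_confident = lambda q, a: 1.0
    never_confident = lambda q, a: 0.0

    print(route_query("q", small, large, always_confident))  # ('small-answer', 'small')
    print(route_query("q", small, large, never_confident))   # ('large-answer', 'large')
    ```

    Cost savings come from step 3: the large model is invoked only for the fraction of queries the verifier flags as unreliable.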