52 research outputs found

    Studying Three Phase Supply in School

    Full text link
    The power distribution networks of nearly all major countries have adopted 3-phase distribution as a standard. With the increasing power requirements of modern instrumentation, even a small physics laboratory requires a 3-phase supply. While physics students are introduced to the topic in passing, no experimental work is done with a 3-phase supply because of the risk of accidents when working at such high power. We believe a conceptual understanding of the 3-phase supply, together with hands-on experience using a simple circuit that can be assembled even in a high school laboratory, would be useful for physics students.
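    The conceptual point the abstract aims at can be stated compactly: a balanced 3-phase supply consists of three voltages of equal amplitude, displaced from one another by 120° (2π/3 rad):

```latex
\begin{aligned}
v_a(t) &= V_m \sin(\omega t)\\
v_b(t) &= V_m \sin\!\left(\omega t - \tfrac{2\pi}{3}\right)\\
v_c(t) &= V_m \sin\!\left(\omega t - \tfrac{4\pi}{3}\right)
\end{aligned}
```

    so that $v_a(t) + v_b(t) + v_c(t) = 0$ at every instant, and the line-to-line voltage is $\sqrt{3}$ times the phase voltage (e.g. roughly 400 V line-to-line for a 230 V phase supply).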

    Content Modelling for unbiased Information Analysis

    Get PDF
    Content is the form through which information is conveyed to meet the requirements of a user. The volume of content is huge and expected to grow exponentially, so separating useful data from useless data is a very tedious task. The interface between content and user is the search engine; therefore, content is designed with the search engine's perspective in mind. Content designed by an organization utilizes users' data to promote its products and services. This is mostly done through inorganic means of inflating the quality measures of a piece of content, which may misrepresent the information. No reliable mechanism is available to analyse and disseminate the data. The gap between the actual results displayed to the user and the results the user expects can be minimized by introducing a quality check on the parameters used to assess the quality of content. This may help ensure the quality of content, so that popularity is not allowed to take precedence over quality. Social networking sites will help with user modelling so that the qualitative dissemination of content can be validated.


    A Hybrid Model for Photographic Supra-Projection

    Get PDF
    Photographic supra-projection (CS) is a forensic process in which video shots or photographs of a missing person are compared against a skull that has been found. By projecting both images on top of each other (or, even better, matching a scanned 3-D skull model against the face photo or video shot), the forensic anthropologist can try to ascertain whether they belong to the same person. The overall process is affected by inherent uncertainty, mostly because two objects of different nature (a face and a skull) are involved. In this paper, we extend existing evolutionary-algorithm-based techniques to automatically superimpose the 3-D skull model and the 2-D face photo, with the aim of overcoming the limitations associated with the different sources of uncertainty present in the problem. Three different approaches to handling the imprecision are proposed: the Viola-Jones face detection framework, canonical correlation analysis, and the inverse compositional active appearance model. DOI: 10.17762/ijritcc2321-8169.15076

    Formulation and Evaluation of Herbo-Mineral Facial Scrub

    Get PDF
    The main objective of the present study was to prepare a herbo-mineral facial scrub. Facial skin comes into direct contact with dirt, pollution, and dust particles, and carries a large number of dead cells. To remove the dead cells and keep the skin healthy, clean, and nourished, facial preparations are required. The prepared scrub contains various natural ingredients that are safe to use, have fewer side effects, and possess antiseptic, anti-infective, antioxidant, anti-aging, and humectant properties. The scrub was prepared by a simple mixing method using ingredients such as poppy seeds, neem extract, tulsi extract, aloe vera gel, and almond oil mixed in Carbopol 934; the remaining ingredients, such as glycerin, triethanolamine, preservatives, and a perfuming agent, were also added to the preparation with homogeneous mixing. The formulated scrub was evaluated for various parameters such as physical appearance, color, texture, odor, pH, viscosity, irritability, washability, homogeneity, extrudability, and spreadability, and satisfactory results were found for all the parameters tested. Thus the prepared formulation can be used effectively, as it shows good scrubbing properties and can be used to keep the skin healthy, clean, and glowing. Keywords: facial scrub, antiseptic, anti-aging, herbal, poppy seeds

    Credibility Evaluation of User-generated Content using Novel Multinomial Classification Technique

    Get PDF
    Awareness of the features of the internet, easy access to data on mobile devices, and affordable data plans have caused a lot of traffic on the internet. Digitization came with many opportunities and challenges as well. Important advantages of digitization are paperless transactions and transparency in payment, while data privacy, fake news, and cyber-attacks are evolving challenges. The extensive use of social media networks and e-commerce websites has produced a lot of user-generated information, misinformation, and disinformation on the internet. The quality of information depends upon the various stages it passes through: generation of information, medium of propagation, and consumption of information. Because the content is user-generated, the information needs a quality assessment before consumption. The loss of information also needs to be examined by applying a machine learning approach, as the volume of content is extremely large. This research work focuses on novel multinomial classification (based on the multinoulli distribution) techniques to determine the quality of the information in given content. A single algorithm with some processing is not sufficient to evaluate information content; various approaches are necessary to evaluate its quality. We propose a novel approach to calculate the bias, for which a machine learning model is fitted appropriately to classify the content correctly. As an empirical study, the classification techniques are applied to the Rotten Tomatoes movie review data set. The accuracy of the system is evaluated using the ROC curve, the confusion matrix, and MAP.
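    The kind of multinomial (multinoulli-based) text classification the abstract refers to can be illustrated with a minimal multinomial Naive Bayes sketch. The toy reviews and labels below are invented for illustration; they are not the Rotten Tomatoes data used in the paper, and the paper's actual model is not reproduced here.

```python
import math
from collections import Counter, defaultdict

def train_multinomial_nb(docs, labels, alpha=1.0):
    """Fit log-priors and Laplace-smoothed per-class token log-likelihoods."""
    vocab = {w for d in docs for w in d.split()}
    counts = defaultdict(Counter)        # class -> token counts
    class_docs = Counter(labels)         # class -> number of documents
    for d, y in zip(docs, labels):
        counts[y].update(d.split())
    model = {}
    for y in class_docs:
        total = sum(counts[y].values())
        denom = total + alpha * len(vocab)
        log_prior = math.log(class_docs[y] / len(docs))
        log_like = {w: math.log((counts[y][w] + alpha) / denom) for w in vocab}
        unseen = math.log(alpha / denom)  # fallback for out-of-vocabulary words
        model[y] = (log_prior, log_like, unseen)
    return model

def predict(model, doc):
    """Return the class with the highest posterior log-probability."""
    scores = {y: lp + sum(ll.get(w, unk) for w in doc.split())
              for y, (lp, ll, unk) in model.items()}
    return max(scores, key=scores.get)

# Invented toy corpus, two classes
docs = ["great acting superb plot", "wonderful great film",
        "terrible boring plot", "awful terrible acting"]
labels = ["pos", "pos", "neg", "neg"]
model = train_multinomial_nb(docs, labels)
print(predict(model, "great film"))    # pos
print(predict(model, "boring awful"))  # neg
```

    In a real pipeline, the predicted labels would then feed the ROC and confusion-matrix evaluation the abstract mentions.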

    Intrusion Detection System using the Hybrid Model of Classification Algorithm and Rule-Based Algorithm

    Get PDF
    An intrusion detection system (IDS) is necessary to secure a system against various intrusions. Analysing the communication to categorize data as useful or malicious is crucial, and the intrusion detection employed for cyber security should not add extra time to perform this categorization. Nowadays, machine learning techniques are used to identify malicious data or an intrusion with the help of classification algorithms. The data set used for experimentation is KDD Cup 99. The performance of individual classification algorithms can be improved with the help of hybrid classification models. This model combines classification algorithms with rule-based algorithms; the blend of classification using machine and human intelligence adds an extra layer of security. The algorithm is validated using precision, recall, F-measure, and Mean Average Precision. The accuracy of the algorithm is 92.35 percent. The accuracy of the model is satisfactory even after the results are obtained by combining rules written by humans with conventional machine learning classification algorithms. Still, there is scope for improving the model and classifying attacks more precisely.
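    The hybrid shape described, hand-written rules layered over a statistical classifier, can be sketched as follows. The rule conditions, feature names, and thresholds here are invented for illustration; they are not the rules or the trained model from the paper, and the classifier is only a stub standing in for a real one trained on KDD Cup 99.

```python
# Hypothetical hybrid intrusion detector: the rule layer is consulted first,
# and a stubbed statistical classifier decides whatever the rules defer on.

SUSPICIOUS_SERVICES = {"finger", "imap4"}  # assumed blocklist, illustrative only

def rule_layer(record):
    """Return 'attack', or None to defer to the ML layer."""
    if record["service"] in SUSPICIOUS_SERVICES and record["failed_logins"] > 3:
        return "attack"
    if record["src_bytes"] == 0 and record["dst_bytes"] == 0:
        return "attack"  # null connections are treated as probes in this sketch
    return None

def ml_layer(record):
    """Stand-in for a trained classifier: a linear score on two features."""
    score = 0.002 * record["src_bytes"] - 0.5 * record["logged_in"]
    return "attack" if score > 1.0 else "normal"

def classify(record):
    """Human-written rules take precedence; otherwise fall back to the model."""
    verdict = rule_layer(record)
    return verdict if verdict is not None else ml_layer(record)

conn = {"service": "http", "src_bytes": 300, "dst_bytes": 1200,
        "failed_logins": 0, "logged_in": 1}
print(classify(conn))  # normal: no rule fires and the score stays low
```

    Keeping the rule layer first is what gives human intelligence veto power over the model, at the cost that a bad rule can never be outvoted by the classifier.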

    Automated Video and Audio based Stress Detection using Deep Learning Techniques

    Get PDF
    In today's world, stress has become an undoubtedly severe problem that affects people's health. Stress can modify a person's behavior, thoughts, and feelings in addition to having an impact on mental health. Unchecked stress can contribute to chronic illnesses including high blood pressure, diabetes, and obesity. Early stress detection promotes a healthy lifestyle in society. This work demonstrates a deep learning-based method for identifying stress from facial expressions and speech signals. An image dataset formed by collecting images from the web is used to construct and train a Convolutional Neural Network (CNN), which divides the images into two categories: stressed and normal. A Recurrent Neural Network (RNN) is used to categorize speech signals into stressed and normal categories based on features extracted as Mel Frequency Cepstral Coefficients (MFCCs); it is expected to perform better on sequential data since it retains past results when determining the final output.
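    The front end of the speech branch can be illustrated with the first stages of the MFCC pipeline, framing, windowing, and the per-frame power spectrum; the mel filterbank and DCT stages that complete MFCC extraction are omitted here, and the frame sizes are conventional defaults assumed for illustration, not taken from the paper.

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Slice a 1-D signal into overlapping frames (ragged tail dropped)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return signal[idx]

def power_spectrum(frames, n_fft=512):
    """Hamming-window each frame and return its one-sided power spectrum."""
    windowed = frames * np.hamming(frames.shape[1])
    spectrum = np.fft.rfft(windowed, n=n_fft)
    return (np.abs(spectrum) ** 2) / n_fft

# 1 second of a synthetic 440 Hz tone at a 16 kHz sampling rate
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
frames = frame_signal(tone)            # (98, 400): 25 ms frames, 10 ms hop
spec = power_spectrum(frames)          # (98, 257): one-sided spectrum bins
print(frames.shape, spec.shape)
```

    Each frame's spectrum would next be pooled through a mel filterbank, log-compressed, and decorrelated with a DCT to give the MFCC vectors the RNN consumes.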

    A Parameter Based Comparative Study of Deep Learning Algorithms for Stock Price Prediction

    Get PDF
    Stock exchanges are places where buyers and sellers meet to trade shares in public companies, and they encourage investment. By raising cash, companies can grow, expand, and generate jobs in the economy. These investments play a crucial role in promoting trade, economic expansion, and prosperity. In this work we compare three well-known deep learning algorithms: LSTM, GRU, and CNN. Our goal is to provide a thorough study of each algorithm and identify the best strategy when taking into account elements such as accuracy, memory utilization, and interpretability. To do this, we recommend the use of hybrid models, which combine the advantages of the various methods, while also evaluating the performance of each approach separately. The aim of this research is to identify the model with the highest accuracy and the best outcomes with respect to stock price prediction.
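    All three architectures being compared (LSTM, GRU, CNN) consume the same supervised windows of past prices, so the shared preprocessing is worth making concrete. The prices and window length below are invented for illustration and are not from the paper's data.

```python
def min_max_scale(values):
    """Scale a series to [0, 1]; deep nets train more stably on normalized input."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def make_windows(prices, lookback=3):
    """Turn a price series into (window, next_price) pairs, the supervised
    format consumed by LSTM/GRU/CNN regressors alike."""
    X, y = [], []
    for i in range(len(prices) - lookback):
        X.append(prices[i:i + lookback])   # the last `lookback` prices
        y.append(prices[i + lookback])     # the price to predict
    return X, y

closes = [101.0, 102.5, 101.8, 103.2, 104.0, 103.5, 105.1]  # invented prices
scaled = min_max_scale(closes)
X, y = make_windows(scaled, lookback=3)
print(len(X), len(X[0]))  # 4 windows of length 3
```

    The comparison then reduces to feeding identical (X, y) pairs to each architecture and scoring them on held-out windows, which keeps the accuracy and memory measurements attributable to the model rather than the preprocessing.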

    Survey on Classification of Online Reviews Based on Social Networking

    Get PDF
    Why do individuals like to vote for or against content in some online communities and not in others? Social foraging theory, drawn mainly from research on information-sharing behavior in insects and other animals, provides a new approach. Drawing on ideas from social foraging theory, this survey suggests that four components drive individuals' intention to vote on online content (positively or negatively): 1) altruistic motives; 2) identification with the community; 3) data quality; and 4) learning self-efficacy. The model was tested in a study of online news communities. It found that positive voting intention was predicted by altruistic motives, identification with the community, and learning self-efficacy. Data quality is critical for positive voting; however, it works indirectly by cultivating more community identification. Negative voting intention was predicted by altruistic motives and data quality. Earlier research has applied foraging theory to people acting alone, e.g., when an individual uses Google to search for data on the web. This survey extends the use of foraging theory to group settings where people cast votes to influence others in their chosen community. The findings advance our knowledge about content voting and provide suggestions for practitioners of voting systems.