    Energy Efficiency Prediction using Artificial Neural Network

    Buildings' energy consumption is growing steadily and accounts for around 40% of total energy use. Predicting the heating and cooling loads of a building in the initial design phase, in order to identify the optimal solution among different designs, is very important, as is prediction in the operating phase after the building has been finished, for energy efficiency. In this study, an artificial neural network model was designed and developed for predicting the heating and cooling loads of a building based on a dataset for building energy performance. The input variables are relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution of a building; the output variables are the heating and cooling loads of the building. The dataset used for training is published in the literature and covers 768 residential building variations. The model was trained and validated, the most important factors affecting heating load and cooling load were identified, and the validation accuracy was 99.60%.
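The abstract does not specify the network architecture; as a rough sketch only, the following minimal numpy feedforward network mirrors the paper's setup of eight inputs and two regression outputs (heating and cooling load). The hidden-layer size, learning rate, and synthetic data are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight input features (relative compactness, surface area, wall area, roof
# area, overall height, orientation, glazing area, glazing area distribution)
# and two regression targets (heating load, cooling load). The random data
# and the single 16-unit hidden layer are illustrative stand-ins.
X = rng.random((32, 8))
y = rng.random((32, 2))

W1 = rng.standard_normal((8, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 2)) * 0.1
b2 = np.zeros(2)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)   # ReLU hidden layer
    return h, h @ W2 + b2              # linear output for regression

_, pred0 = forward(X)
initial_mse = float(((pred0 - y) ** 2).mean())

lr = 0.05
for _ in range(500):                   # plain batch gradient descent on MSE
    h, pred = forward(X)
    err = (pred - y) / len(X)          # gradient of MSE w.r.t. predictions
    dh = (err @ W2.T) * (h > 0)        # backprop through ReLU
    W2 -= lr * h.T @ err
    b2 -= lr * err.sum(axis=0)
    W1 -= lr * X.T @ dh
    b1 -= lr * dh.sum(axis=0)

_, pred = forward(X)
mse = float(((pred - y) ** 2).mean())  # should be below initial_mse
```

A real run would replace the random arrays with the 768-building dataset and hold out a validation split to reproduce the reported accuracy.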

    Handwritten Signature Verification using Deep Learning

    Every person has his/her own unique signature that is used mainly for personal identification and for the verification of important documents or legal transactions. There are two kinds of signature verification: static and dynamic. Static (off-line) verification is the process of verifying an electronic or document signature after it has been made, while dynamic (on-line) verification takes place as a person creates his/her signature on a digital tablet or a similar device. Manual offline signature verification is slow and inefficient for a large number of documents; to overcome these drawbacks, there has been a growth in online biometric personal verification such as fingerprints, eye scans, etc. In this paper, we created a CNN model in Python for offline signature verification; after training and validation, the testing accuracy was 99.70%.
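The CNN architecture is not described in the abstract. As a hedged illustration of the two building blocks any such model stacks, here is a minimal numpy implementation of a 2-D convolution and max-pooling applied to a toy "signature" image; the image size, kernel, and shapes are illustrative assumptions only.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(x, size=2):
    """Non-overlapping max-pooling; any ragged edge is cropped."""
    H, W = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:H, :W].reshape(H // size, size, W // size, size)
    return x.max(axis=(1, 3))

# Toy 28x28 grayscale "signature" image (random stand-in for a scanned pen stroke).
rng = np.random.default_rng(1)
img = rng.random((28, 28))

edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])    # simple vertical-edge detector

feat = np.maximum(0.0, conv2d(img, edge_kernel))  # convolution + ReLU -> 26x26
pooled = max_pool(feat)                           # 2x2 max-pool -> 13x13
```

A full verification model would stack several such conv/pool layers and end with a dense classifier over genuine-vs-forged labels.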

    Enhanced Context-Aware Role-Based Access Control Framework for Pervasive Environment

    Utilization of contextual information is considered very useful for improving the access-decision-making process over system resources, making it more effective in providing authorized services to a large number of end users. The selected model makes decisions based on context information sensed and collected from the user's environment. We then enhanced context utilization and framework performance, building on a theoretical idea previously published [14], by studying the process of making decisions based on the validity of context information. We focused on enhancing the distribution and management of context information across users by using a proxy, which works as an observer that enforces policy for short-term context information. If a change breaks the access control policy rules, the proxy on the user's device automatically sends a revocation/grant request based on the change made to the context information related to the user in his local environment. After a change is made to context information listed within the available policy rules, the proxy re-evaluates it on the user's device, utilizing the resources available on the device, then grants or revokes permissions, and finally updates the web service so that it stays up to date. Such enhancement greatly increases system responsiveness and improves authorization for end users.
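The proxy's grant/revoke cycle described above can be sketched in a few lines. This is purely hypothetical: the `Rule`, `Proxy`, and `on_context_change` names, and the context dictionary, are invented for illustration and are not the framework's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    permission: str
    predicate: callable        # returns True when the context satisfies the rule

@dataclass
class Proxy:
    rules: list
    granted: set = field(default_factory=set)

    def on_context_change(self, context):
        """Re-evaluate every rule locally on the device; grant or revoke accordingly."""
        changes = []
        for rule in self.rules:
            ok = rule.predicate(context)
            if ok and rule.permission not in self.granted:
                self.granted.add(rule.permission)
                changes.append(("grant", rule.permission))
            elif not ok and rule.permission in self.granted:
                self.granted.discard(rule.permission)
                changes.append(("revoke", rule.permission))
        return changes         # forwarded to the web service to keep it up to date

# Hypothetical short-term context rule: access only while on-site and on shift.
proxy = Proxy(rules=[
    Rule("read_record", lambda c: c["location"] == "ward" and c["shift"]),
])

proxy.on_context_change({"location": "ward", "shift": True})           # grants
events = proxy.on_context_change({"location": "cafe", "shift": True})  # revokes
```

The point of the design is that re-evaluation happens on the user's device, so only the resulting grant/revoke events travel to the web service.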

    Diagnosis of Pneumonia Using Deep Learning

    Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines or software that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning, and problem solving. Deep learning is a collection of algorithms used in machine learning; it is part of a broad family of machine learning methods based on learning representations of data. Deep learning can be used to produce pneumonia detection and classification models from X-ray imaging for rapid and easy detection and identification of pneumonia. In this thesis, we review ways and mechanisms of using deep learning techniques to produce a model for pneumonia detection. The goal is to find a good and effective way to detect pneumonia from X-rays, helping chest physicians make decisions easily, accurately, and quickly. The model will be designed and implemented, including both the image dataset and pneumonia detection through the use of deep learning algorithms based on neural networks. Testing and evaluation will be applied to a range of chest X-ray images, and the results will be presented in detail and discussed. This thesis uses deep learning to detect pneumonia and classify it.

    Sarcasm Detection in Headline News using Machine and Deep Learning Algorithms

    Abstract: Sarcasm is commonly used in news, and detecting sarcasm in headline news is challenging for humans and thus for computers. The media regularly seem to employ sarcasm in their news headlines to get people's attention. However, people find it tough to detect the sarcasm in headline news, hence receiving a mistaken idea about that specific news item and additionally spreading it to their friends, colleagues, etc. Consequently, an intelligent system that is able to distinguish automatically between sarcastic and non-sarcastic headlines is very important. The aim of this study is to build a model that detects sarcasm in headline news using machine and deep learning, and to attempt to understand how a computer learns the patterns of sarcasm. The dataset used in this study was collected from the Kaggle repository. We examined 21 machine learning algorithms and one deep learning algorithm for detecting sarcasm in headline news. The evaluation metrics used in this study are accuracy, F1-measure, recall, precision, and the time needed for training and evaluation. The deep learning model achieved accuracy of 95.27%, recall of 96.62%, precision of 94.15%, an F1-score of 95.37%, and a training time of 400 seconds, with a loss of around 0.3398. However, the machine learning algorithm that achieved the highest F1-score is the Passive Aggressive Classifier. It was the top classifier for sarcasm detection among the machine learning algorithms, with accuracy of 95.50%, recall of 96.09%, precision of 94.30%, an F1-score of 95.19%, and a training time of 0.31 seconds.
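The reported accuracy, precision, recall, and F1 values follow the standard binary-classification definitions. A small sketch computing them from raw confusion counts (the toy counts for a sarcasm/non-sarcasm split are illustrative, not the study's numbers):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted-sarcastic, how many were
    recall = tp / (tp + fn)             # of truly sarcastic, how many we caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Toy counts: 90 true positives, 10 false positives, 5 false negatives, 95 true negatives.
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=5, tn=95)
```

As in the study's results, recall exceeding precision means the model misses few sarcastic headlines but occasionally flags a literal one.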

    Prediction of Heart Disease Using a Collection of Machine and Deep Learning Algorithms

    Abstract: Heart diseases are increasing daily at a rapid rate, so it is both alarming and vital to predict heart disease early. The diagnosis of heart disease is a challenging task, i.e. it must be done accurately and proficiently. The aim of this study is to determine which patients are most likely to have heart disease, based on a number of medical features. We built a heart disease prediction model to identify whether a person is likely to be diagnosed with heart disease using that person's medical features. We used many different machine learning algorithms, such as Gaussian Mixture, Nearest Centroid, MultinomialNB, Logistic RegressionCV, Linear SVC, Linear Discriminant Analysis, SGD Classifier, Extra Tree Classifier, Calibrated ClassifierCV, Quadratic Discriminant Analysis, GaussianNB, Random Forest Classifier, ComplementNB, MLP Classifier, BernoulliNB, Bagging Classifier, LGBM Classifier, Ada Boost Classifier, K Neighbors Classifier, Logistic Regression, Gradient Boosting Classifier, and Decision Tree Classifier, as well as deep learning, to predict and classify patients with heart disease. Each model was evaluated to determine how it could improve the accuracy of heart disease prediction. The proposed model performed very satisfactorily and was able to predict evidence of heart disease in a particular person using deep learning and the Random Forest Classifier, which showed good accuracy in comparison to the other classifiers used. The proposed heart disease prediction model will enhance medical care and reduce costs. This study provides significant knowledge that can help predict whether a person has heart disease. The dataset was collected from the Kaggle repository, and the model is implemented in Python.
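The study's core methodology is a single train-predict-score loop run over 20+ classifiers. A dependency-free sketch of that comparison harness, using two stand-in classifiers (a majority-class baseline and 1-nearest-neighbour) in place of the scikit-learn models; the tiny feature rows and labels are invented for illustration:

```python
import math
from collections import Counter

# Toy rows of (age, cholesterol, max_heart_rate); label 1 = heart disease.
# Both the data and the two models are stand-ins for the paper's real
# features and its 20+ scikit-learn classifiers.
X_train = [(63, 233, 150), (41, 204, 172), (67, 286, 108), (56, 236, 178)]
y_train = [1, 0, 1, 1]
X_test = [(65, 280, 110), (45, 210, 170)]
y_test = [1, 0]

def majority_baseline(X_tr, y_tr, X_te):
    """Predict the most common training label for every test row."""
    most_common = Counter(y_tr).most_common(1)[0][0]
    return [most_common] * len(X_te)

def one_nearest_neighbour(X_tr, y_tr, X_te):
    """Predict the label of the closest training row (Euclidean distance)."""
    return [y_tr[min(range(len(X_tr)), key=lambda i: math.dist(X_tr[i], x))]
            for x in X_te]

# The same loop the study applies to every classifier: fit, predict, score.
results = {}
for name, model in [("majority", majority_baseline),
                    ("1-NN", one_nearest_neighbour)]:
    preds = model(X_train, y_train, X_test)
    results[name] = sum(p == t for p, t in zip(preds, y_test)) / len(y_test)
```

With real data, each scikit-learn estimator would slot into the same loop via `fit`/`predict`, and the accuracies in `results` would be compared to pick the best model.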

    Quantitative single cell monitoring of protein synthesis at subcellular resolution using fluorescently labeled tRNA

    We have developed a novel technique of using fluorescent tRNA for translation monitoring (FtTM). FtTM enables the identification and monitoring of active protein synthesis sites within live cells at submicron resolution through quantitative microscopy of transfected bulk uncharged tRNA, fluorescently labeled in the D-loop (fl-tRNA). The localization of fl-tRNA to active translation sites was confirmed through its co-localization with cellular factors and its dynamic alterations upon inhibition of protein synthesis. Moreover, fluorescence resonance energy transfer (FRET) signals, generated when fl-tRNAs separately labeled as a FRET pair occupy adjacent sites on the ribosome, quantitatively reflect levels of protein synthesis in defined cellular regions. In addition, FRET signals enable detection of intra-populational variability in protein synthesis activity. We demonstrate that FtTM allows quantitative comparison of protein synthesis between different cell types, monitoring of the effects of antibiotics and stress agents, and characterization of changes in the spatial compartmentalization of protein synthesis upon viral infection.

    FRET-Based Identification of mRNAs Undergoing Translation

    We present proof-of-concept in vitro results demonstrating the feasibility of using single molecule fluorescence resonance energy transfer (smFRET) measurements to distinguish, in real time, between individual ribosomes programmed with several different, short mRNAs. For these measurements we use either the FRET signal generated between two tRNAs labeled with different fluorophores bound simultaneously to adjacent sites on the ribosome (tRNA-tRNA FRET) or the FRET signal generated between a labeled tRNA bound to the ribosome and a fluorescent derivative of ribosomal protein L1 (L1-tRNA FRET). With either technique, criteria were developed to identify the mRNAs, taking into account the relative activity of the mRNAs. These criteria enabled identification of the mRNA being translated by a given ribosome to within 95% confidence intervals, based on the number of identified FRET traces. To extend the approach to natural mRNAs or more complex mixtures, the stoichiometry of labeling should be enhanced and photobleaching reduced. The potential for porting these methods into living cells is discussed.