
    Learning, Arts, and the Brain: The Dana Consortium Report on Arts and Cognition

    Reports findings from multiple neuroscientific studies on the impact of arts training on the enhancement of other cognitive capacities, such as reading acquisition, sequence learning, geometrical reasoning, and memory

    Automated Measurement of Heavy Equipment Greenhouse Gas Emission: The case of Road/Bridge Construction and Maintenance

    Road/bridge construction and maintenance projects are major contributors to greenhouse gas (GHG) emissions such as carbon dioxide (CO2), mainly due to extensive use of heavy-duty diesel construction equipment and large-scale earthworks and earthmoving operations. Heavy equipment is a costly resource, and its underutilization can result in significant budget overruns. A practical way to cut emissions is to reduce the time equipment spends idling or performing non-value-added activities. Recent research into monitoring equipment with sensors and Internet-of-Things (IoT) frameworks has leveraged machine learning algorithms to predict the behavior of tracked entities. In this project, end-to-end deep learning models were developed that learn to accurately classify the activities of construction equipment from vibration patterns picked up by accelerometers attached to the equipment. Data was collected from two types of real-world construction equipment used extensively in road/bridge construction and maintenance projects: excavators and vibratory rollers. Three different deep learning models were developed and their validation accuracies compared: a baseline convolutional neural network (CNN); a hybrid convolutional and recurrent long short-term memory neural network (LSTM); and a temporal convolutional network (TCN). The TCN model performed best, achieving over 83% validation accuracy in recognizing activities; the LSTM model came second, and the CNN model performed worst. Using deep learning methodologies can significantly increase emission estimation accuracy for heavy equipment and help decision-makers reliably evaluate the environmental impact of heavy civil and infrastructure projects. Reducing the carbon footprint and fuel use of heavy equipment in road/bridge projects has direct and indirect impacts on health and the economy. Public infrastructure projects can leverage the proposed system to reduce the environmental cost of infrastructure projects.
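As a rough illustration of the approach, the sketch below classifies a single accelerometer window with a stack of dilated causal convolutions (the core building block of a TCN) followed by a linear softmax head. All shapes, weights, and the pooled-statistics head are illustrative assumptions, not the architecture developed in the project:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    # Causal dilated 1D convolution: output[t] depends only on
    # x[t], x[t - dilation], x[t - 2*dilation], ...
    T, k = len(x), len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(T)])

def classify_window(signal, conv_layers, clf_w, clf_b):
    # Stack of dilated convolutions with ReLU, then crude pooled
    # statistics fed into a linear softmax classifier.
    h = signal
    for w, d in conv_layers:
        h = np.maximum(causal_dilated_conv(h, w, d), 0.0)
    feat = np.array([h.mean(), h.std(), h.max()])  # global pooling
    logits = clf_w @ feat + clf_b
    p = np.exp(logits - logits.max())
    return p / p.sum()                             # class probabilities

# Demo: a synthetic 1-axis accelerometer window, 3 hypothetical activity classes
rng = np.random.default_rng(0)
window = rng.standard_normal(64)
layers = [(np.array([0.5, 0.3, 0.2]), 1), (np.array([0.6, 0.4]), 2)]
clf_w = rng.standard_normal((3, 3))
clf_b = np.zeros(3)
probs = classify_window(window, layers, clf_w, clf_b)
```

In a real TCN the dilation doubles at each layer so the receptive field grows exponentially with depth, which is what lets it capture long vibration patterns efficiently.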

    Nuclear mass predictions based on Bayesian neural network approach with pairing and shell effects

    The Bayesian neural network (BNN) approach is employed to improve the nuclear mass predictions of various models. It is found that the noise error in the likelihood function plays an important role in the predictive performance of the BNN approach. By placing a distribution on the noise error, an appropriate value can be found automatically during the sampling process, which optimizes the nuclear mass predictions. Furthermore, two quantities related to nuclear pairing and shell effects are added to the input layer, in addition to the proton and mass numbers. As a result, the theoretical accuracies are significantly improved, not only for nuclear masses but also for single-nucleon separation energies. Due to the inclusion of the shell effect, the BNN approach predicts a shell-correction structure in the unknown region similar to that in the known region, e.g., it predicts the underestimation of nuclear masses around the magic numbers seen in the relativistic mean-field model. This demonstrates that better predictive performance can be achieved when more physical features are included in the BNN approach. (15 pages, 4 figures, and 3 tables)
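A minimal sketch of how such physics-motivated inputs might be constructed. The pairing and shell quantities below are common textbook parametrizations assumed for illustration; the paper's exact definitions may differ:

```python
MAGIC = [2, 8, 20, 28, 50, 82, 126]  # proton/neutron magic numbers

def pairing_feature(Z, N):
    # +1 for even-even, 0 for odd-A, -1 for odd-odd nuclei
    # (a common pairing parametrization, assumed here)
    return ((-1) ** Z + (-1) ** N) / 2

def shell_feature(Z, N):
    # Combined distance of Z and N to their nearest magic numbers:
    # small values flag nuclei near closed shells
    dz = min(abs(Z - m) for m in MAGIC)
    dn = min(abs(N - m) for m in MAGIC)
    return dz + dn

def bnn_input(Z, A):
    # Input vector: proton number, mass number, plus the two
    # physics-motivated features fed alongside them
    N = A - Z
    return [Z, A, pairing_feature(Z, N), shell_feature(Z, N)]
```

For the doubly magic nucleus 132Sn (Z=50, N=82), for example, the shell feature is zero and the pairing feature is +1, so the network can distinguish it from nearby mid-shell, odd-odd nuclei.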

    The competitiveness of nations and implications for human development

    This is the post-print version of the final paper published in Socio-Economic Planning Sciences; the published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document, and changes may have been made to this work since it was submitted for publication. Copyright © 2010 Elsevier B.V.

    Human development should be the ultimate objective of human activity, its aim being healthier, longer, and fuller lives. Thus, if the competitiveness of a nation is properly managed, enhanced human welfare should be the key expected consequence. The research described here explores the relationship between the competitiveness of a nation and its implications for human development. For this purpose, 45 countries were first evaluated using data envelopment analysis, with global competitiveness indicators taken as input variables and human development index indicators as output variables. Subsequently, an artificial neural network analysis was conducted to identify the factors having the greatest impact on the efficiency scores.
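For illustration, the input-oriented CCR multiplier model that underlies this kind of DEA evaluation can be solved as a small linear program. The toy data below is assumed, not the study's 45-country dataset:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j):
    """Input-oriented CCR efficiency of unit j (multiplier form).
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Maximize u.y_j subject to v.x_j = 1 and u.y_k - v.x_k <= 0 for all k."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[j], np.zeros(m)])           # minimize -u.y_j
    A_eq = np.concatenate([np.zeros(s), X[j]])[None]   # v.x_j = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                          # u.y_k - v.x_k <= 0
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  method="highs")                      # weights u, v >= 0 by default
    return -res.fun

# Toy example: two units, one input, one output; unit 1 consumes twice
# the input for the same output, so it should score half as efficient.
X = np.array([[1.0], [2.0]])
Y = np.array([[1.0], [1.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(2)]
```

In the study's setting, each row of X would hold a country's competitiveness indicators and each row of Y its human development index indicators; the resulting efficiency scores are what the neural network stage then explains.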

    A critical assessment of imbalanced class distribution problem: the case of predicting freshmen student attrition

    Predicting student attrition is an intriguing yet challenging problem for any academic institution. Class-imbalanced data is common in the field of student retention, mainly because many students enroll while comparatively few drop out. Classification techniques applied to imbalanced datasets can yield deceivingly high prediction accuracy, where the overall predictive accuracy is driven by the majority class at the expense of very poor performance on the crucial minority class. In this study, we compared different data-balancing techniques to improve the predictive accuracy on the minority class while maintaining satisfactory overall classification performance. Specifically, we tested three balancing techniques (oversampling, under-sampling, and synthetic minority over-sampling, or SMOTE) along with four popular classification methods (logistic regression, decision trees, neural networks, and support vector machines). We used a large and feature-rich institutional student dataset (covering the years 2005 to 2011) to assess the efficacy of both the balancing techniques and the prediction methods. The results indicated that the support vector machine combined with the SMOTE data-balancing technique achieved the best classification performance, with a 90.24% overall accuracy on the 10-fold holdout sample. All three data-balancing techniques improved the prediction accuracy for the minority class. Applying sensitivity analyses to the developed models, we also identified the most important variables for accurate prediction of student attrition. Application of these models has the potential to accurately identify at-risk students and help reduce student dropout rates.
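SMOTE's core idea, generating synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbors, can be sketched as follows (a simplified illustration, not the implementation used in the study):

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples.
    Each new point lies on the segment between a random minority
    sample and one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    out = []
    for _ in range(n_new):
        i = rng.integers(n)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()              # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Demo: oversample a tiny 2-D minority class (dropouts, say)
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synth = smote(X_min, n_new=10, k=2, rng=0)
```

Because each synthetic point is a convex combination of two real minority samples, SMOTE enlarges the minority region the classifier sees without simply duplicating records, which is why it tends to outperform plain oversampling.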

    ResumeNet: A Learning-based Framework for Automatic Resume Quality Assessment

    Recruiting appropriate people for certain positions is critical for any company or organization. Manually screening large amounts of resumes to select appropriate candidates can be exhausting and time-consuming, yet there is no public tool that can be used directly for automatic resume quality assessment (RQA). This motivates us to develop a method for automatic RQA. Since there is also no public dataset for model training and evaluation, we build a dataset for RQA by collecting around 10K resumes provided by a private resume management company. By investigating the dataset, we identify factors and features that could be useful for discriminating good resumes from bad ones, e.g., the consistency between different parts of a resume. A neural-network model incorporating several text-processing techniques is then designed to predict the quality of each resume. To deal with the label-deficiency issue in the dataset, we propose several variants of the model that either utilize pair/triplet-based losses or introduce semi-supervised learning techniques to make use of the abundant unlabeled data. Both the presented baseline model and its variants are general and easy to implement. Various popular criteria, including the receiver operating characteristic (ROC) curve, F-measure, and ranking-based average precision (AP), are adopted for model evaluation, and we compare the different variants with our baseline model. Since there is no public algorithm for RQA, we further compare our results with those obtained from a website that scores resumes. Experimental results in terms of the different criteria demonstrate the effectiveness of the proposed method. We foresee that our approach would transform the way of future human resources management.
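The pair-based loss mentioned above can be illustrated with a simple margin hinge on the scores of a known-better and a known-worse resume. This is a generic formulation assumed for illustration; the paper's exact loss, and its triplet variant, may differ:

```python
import numpy as np

def pairwise_hinge_loss(score_good, score_bad, margin=1.0):
    """Pair-based ranking loss: the model should score the better
    resume at least `margin` higher than the worse one. The loss is
    zero once that gap is achieved, so only misordered or too-close
    pairs produce a gradient signal."""
    return np.maximum(0.0, margin - (score_good - score_bad))

# Well-separated pair: good resume scores 2.0, bad scores 0.5 -> gap 1.5
loss_ok = pairwise_hinge_loss(2.0, 0.5)    # gap exceeds margin, zero loss
# Too-close pair: gap of only 0.2 against a margin of 1.0
loss_bad = pairwise_hinge_loss(0.5, 0.3)
```

Such pairwise supervision only needs relative judgments ("resume A is better than resume B"), which are cheaper to obtain than absolute quality labels and thus help with the label deficiency the abstract describes.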