
    A Phase-based Approach to Rightward Movement in Comparatives

    In this article, I aim to provide a phase-based explanation of extraposition phenomena in attributive comparatives. Conforming to a semantic requirement, the than-expression is an obligatory complement of the functional Degree head. However, an adequate explanation of extraposition is still needed, since it appears syntactically unmotivated if it involves movement to the right. Furthermore, this rightward movement is not even obligatory in head-final constructions. My solution exploits the fact that comparative complements are phase-sized constituents, and the cyclic Spell-Out of these elements determines their order with respect to other elements in the construction. This order may be changed by feature-driven movements in the derivation, which accounts for the lack of extraposition in head-final constructions.

    Money Velocity in an Endogenous Growth Business Cycle with Credit Shocks

    The explanation of velocity has been based on substitution and income effects since Keynes’s (1923) interest rate explanation and Friedman’s (1956) application of the permanent income hypothesis to money demand. Modern real business cycle theory relies on goods productivity shocks to mimic the procyclical velocity seen in the data, as in Friedman’s explanation, while finding money shocks unimportant and not integrating financial-innovation explanations. This paper sets the model within endogenous growth and adds credit shocks. It models velocity more closely, with significant roles for money shocks and credit shocks alongside the goods productivity shocks. Endogenous growth is key to the construction of the money and credit shocks, since they have similar effects on velocity, through substitution effects from changes in the nominal interest rate and in the cost of financial intermediation, but opposite effects on growth, through permanent income effects that are absent with exogenous growth. Keywords: velocity, business cycle, credit shocks, endogenous growth.
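
    For reference, "velocity" in this literature is the standard quantity-theory ratio of nominal output to money; the display below is only that textbook definition, not the paper's endogenous-growth model.

```latex
% Standard equation-of-exchange definition of velocity (not the paper's model):
% money times velocity equals nominal output.
M_t V_t = P_t Y_t
\quad\Longrightarrow\quad
V_t = \frac{P_t Y_t}{M_t}
% "Procyclical velocity" means V_t tends to rise when output Y_t is above trend.
```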

    Feature construction using explanations of individual predictions

    Feature construction can contribute to the comprehensibility and performance of machine learning models. Unfortunately, it usually requires an exhaustive search of the attribute space or time-consuming human involvement to generate meaningful features. We propose a novel heuristic approach for reducing the search space based on aggregation of instance-based explanations of predictive models. The proposed Explainable Feature Construction (EFC) methodology identifies groups of co-occurring attributes exposed by popular explanation methods, such as IME and SHAP. We empirically show that reducing the search to these groups significantly reduces the time of feature construction using logical, relational, Cartesian, numerical, and threshold num-of-N and X-of-N constructive operators. An analysis of 10 transparent synthetic datasets shows that EFC effectively identifies informative groups of attributes and constructs relevant features. Using 30 real-world classification datasets, we show significant improvements in classification accuracy for several classifiers and demonstrate the feasibility of the proposed feature construction even for large datasets. Finally, EFC generated interpretable features on a real-world problem from the financial industry, which were confirmed by a domain expert. Comment: 54 pages, 10 figures, 22 tables.
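
    As a rough illustration of the idea (not the authors' EFC implementation), the sketch below uses SHAP values from a tree ensemble to find a pair of attributes whose contributions co-occur across instances and then builds one simple product feature from that pair. The dataset, the correlation-based notion of co-occurrence, and the product operator are all assumptions made for the example.

```python
# Illustrative sketch only: NOT the paper's EFC algorithm. It approximates the idea of
# grouping attributes via explanations (here SHAP contributions) and constructing a
# candidate feature from the most strongly co-occurring pair.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Per-instance explanations; for a binary gradient-boosted model this is typically
# an (n_samples, n_features) array of contributions.
sv = shap.TreeExplainer(model).shap_values(X)
contrib = np.abs(np.asarray(sv))

# "Co-occurring" attributes: pairs whose contribution magnitudes correlate across instances.
co = np.nan_to_num(np.corrcoef(contrib.T))
np.fill_diagonal(co, -np.inf)
i, j = np.unravel_index(np.argmax(co), co.shape)
print(f"most co-occurring attribute pair: {i}, {j}")

# Construct one candidate feature from the selected group (a simple product term)
# and check whether it helps a downstream classifier.
X_new = np.column_stack([X, X[:, i] * X[:, j]])
base = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=5).mean()
aug = cross_val_score(GradientBoostingClassifier(random_state=0), X_new, y, cv=5).mean()
print(f"baseline accuracy: {base:.3f}, with constructed feature: {aug:.3f}")
```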

    Adding Qualitative Context Factors to Analogy Estimating of Construction Projects

    Existing estimating models have certain shortcomings in the management of historical data. There is a need to define more objective and consistent criteria for selecting the historical construction data used for estimating. In this perspective, a methodology based on historical information is proposed that incorporates qualitative context factors, such as project complexity, environmental conditions, and characteristics of workmanship, into the structure and use of this information for cost estimating. A list of the qualitative project context factors most influential for construction projects’ cost and productivity is presented. Additionally, a context model that includes these variables is described, together with an explanation of how they are incorporated into the cost estimate. It is concluded that incorporating qualitative context factors in cost estimating improves the use of historical information, and that the most critical aspects to achieve this are the creation of a reliable site-work feedback system and a correct structure for the historical information.
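
    A minimal numerical sketch of how qualitative context factors could scale an analogy-based estimate is shown below; the ratings, sensitivities, and unit cost are entirely hypothetical, since the abstract does not publish such values.

```python
# Illustrative sketch only: all numbers below are hypothetical. It shows one simple way
# qualitative context factors (rated 1-5) could adjust a historical analogy-based estimate.

# Historical reference project: known cost per unit of work (e.g. $/m2) from the feedback system.
historical_unit_cost = 120.0

# Hypothetical context ratings for the historical and the new project (1 = low, 5 = high).
historical_context = {"complexity": 2, "environment": 3, "workmanship": 4}
new_context        = {"complexity": 4, "environment": 3, "workmanship": 2}

# Hypothetical sensitivity of cost to each factor (fractional change per rating point).
sensitivity = {"complexity": 0.06, "environment": 0.04, "workmanship": -0.05}

adjustment = 1.0
for factor, weight in sensitivity.items():
    delta = new_context[factor] - historical_context[factor]
    adjustment *= 1.0 + weight * delta   # scale the analogy by each context difference

estimated_unit_cost = historical_unit_cost * adjustment
print(f"adjusted estimate: {estimated_unit_cost:.2f} per unit")
```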

    Explainable Deep Learning for Construction Site Safety

    The construction industry is going through a huge shift toward automation, with safety being one of the major challenges. We always want to take measures through which more accidents resulting in serious injuries and deaths could be avoided. Indeed, construction sites are bound by several safety rules; one of the most important is wearing the required personal protective equipment (PPE) for the worker's working environment. The presence of monitoring cameras at construction sites provides an opportunity to enforce these safety rules by applying computer vision techniques and algorithms. This study shows the capability of deep learning models to classify workers as safe or unsafe and provides a logical explanation to strengthen the prediction result. Here we exemplify the classification of workers using five convolutional neural network models with various layer structures. We collect a dataset of construction site scenes and annotate each image scene as safe or unsafe according to the workers' working environment. The state-of-the-art neural networks successfully perform the binary classification with up to 90% accuracy. Furthermore, feature visualizations such as Guided Backpropagation, Grad-CAM, and different variants of LRP show which pixels in the original image contribute to the prediction and to what extent.
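
    As an illustration of the kind of feature visualization mentioned above, the sketch below computes Grad-CAM for a generic pretrained ResNet-18 with a two-class (safe/unsafe) head; the image path, the class index, and the choice of backbone are assumptions, and the model is not the study's trained classifier.

```python
# Minimal Grad-CAM sketch, NOT the study's pipeline or dataset.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone reused with a 2-way safe/unsafe head (head untrained here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

# Keep the last convolutional feature map and its gradient for Grad-CAM.
feats = {}
def keep_features(module, inputs, output):
    output.retain_grad()          # non-leaf tensor: ask autograd to store its gradient
    feats["act"] = output
model.layer4.register_forward_hook(keep_features)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("site_scene.jpg").convert("RGB")   # hypothetical construction-site image
x = preprocess(img).unsqueeze(0)

logits = model(x)
unsafe_class = 1                                    # assumed index of the "unsafe" label
model.zero_grad()
logits[0, unsafe_class].backward()

# Grad-CAM: weight each channel of the feature map by its average gradient, sum, ReLU.
act = feats["act"]                                  # (1, C, H, W)
weights = act.grad.mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = torch.relu((weights * act).sum(dim=1)).detach()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print("Grad-CAM heatmap:", cam.shape)               # upsample and overlay on the image to inspect
```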

    MBT: A Memory-Based Part of Speech Tagger-Generator

    We introduce a memory-based approach to part of speech tagging. Memory-based learning is a form of supervised learning based on similarity-based reasoning. The part of speech tag of a word in a particular context is extrapolated from the most similar cases held in memory. Supervised learning approaches are useful when a tagged corpus is available as an example of the desired output of the tagger. Based on such a corpus, the tagger-generator automatically builds a tagger which is able to tag new text the same way, considerably reducing the development time for constructing a tagger. Memory-based tagging shares this advantage with other statistical or machine learning approaches. Additional advantages specific to a memory-based approach include (i) the relatively small tagged corpus size sufficient for training, (ii) incremental learning, (iii) explanation capabilities, (iv) flexible integration of information in case representations, (v) its non-parametric nature, (vi) reasonably good results on unknown words without morphological analysis, and (vii) fast learning and tagging. In this paper we show that a large-scale application of the memory-based approach is feasible: we obtain a tagging accuracy that is on a par with that of known statistical approaches, with attractive space and time complexity properties when using IGTree, a tree-based formalism for indexing and searching huge case bases. The use of IGTree has the additional advantage that the optimal context size for disambiguation is computed dynamically. Comment: 14 pages, 2 PostScript figures.
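
    A toy sketch of the memory-based idea (overlap similarity over stored cases, not the MBT/IGTree implementation) is given below; the miniature tagset, the three-feature case representation, and the training sentences are invented for illustration.

```python
# Toy memory-based tagger: each token is a case (previous tag, focus word, next word),
# and new tokens are tagged by the most similar stored cases (overlap metric).
# This is NOT the MBT/IGTree implementation; data and tagset are made up.
from collections import Counter

tagged = [
    [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")],
    [("a", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
    [("the", "DET"), ("old", "ADJ"), ("dog", "NOUN"), ("sleeps", "VERB")],
]

# Build the case base from the tagged corpus.
memory = []
for sent in tagged:
    words = [w for w, _ in sent]
    gold = [t for _, t in sent]
    for i, word in enumerate(words):
        prev_tag = gold[i - 1] if i > 0 else "<s>"
        nxt = words[i + 1] if i + 1 < len(words) else "</s>"
        memory.append(((prev_tag, word, nxt), gold[i]))

def tag_sentence(words):
    """Tag left to right; each decision is extrapolated from the most similar stored cases."""
    tags = []
    for i, word in enumerate(words):
        prev_tag = tags[i - 1] if i > 0 else "<s>"
        nxt = words[i + 1] if i + 1 < len(words) else "</s>"
        case = (prev_tag, word, nxt)
        # Overlap metric: number of matching feature values against each stored case.
        best = max(sum(a == b for a, b in zip(case, mem_case)) for mem_case, _ in memory)
        nearest = [t for mem_case, t in memory
                   if sum(a == b for a, b in zip(case, mem_case)) == best]
        tags.append(Counter(nearest).most_common(1)[0][0])
    return tags

print(tag_sentence(["the", "cat", "barks"]))   # -> ['DET', 'NOUN', 'VERB']
```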