
    Modelling and Searching of Combinatorial Spaces Based on Markov Logic Networks

    Markov Logic Networks (MLNs) combine Markov networks (MNs) and first-order logic by attaching weights to first-order formulas and using these as templates for features of MNs. State-of-the-art methods learn the structure of MLNs by maximizing the likelihood of a relational database. This leads to suboptimal results for prediction tasks because of the mismatch between the objective function (likelihood) and the task of classification (maximizing conditional likelihood (CL)). In this paper we propose two algorithms for learning the structure of MLNs. The first maximizes the CL of the query predicates instead of the joint likelihood of all predicates, while the other maximizes the area under the Precision-Recall curve (AUC). Both algorithms set the parameters by maximum likelihood and choose structures by maximizing CL or AUC. For each of these algorithms we develop two different search strategies. The first is based on Iterated Local Search (ILS) and the second on the Greedy Randomized Adaptive Search Procedure (GRASP). We compare the performance of these randomized search approaches on real-world datasets and show that on larger datasets the ILS-based approaches perform better, in terms of both conditional log-likelihood (CLL) and AUC, while on small datasets the ILS and GRASP approaches are competitive and GRASP can also lead to better results for AUC.
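
    As a hedged illustration of the search machinery described above (not the authors' actual implementation), the sketch below shows an Iterated Local Search over candidate structures encoded as bit vectors marking which clauses are included; the scoring function is a toy stand-in for the CL or AUC evaluation of a candidate MLN structure.

        import random

        def iterated_local_search(score, n_bits=20, restarts=30, seed=0):
            # Generic ILS skeleton: run local search to a local optimum,
            # then perturb and re-optimize, keeping the best structure seen.
            # `score` stands in for the CL/AUC evaluation of a candidate
            # structure (here a bit vector marking which clauses are kept).
            rng = random.Random(seed)

            def local_search(s):
                improved = True
                while improved:
                    improved = False
                    for i in range(len(s)):          # try flipping one clause
                        t = s.copy()
                        t[i] ^= 1
                        if score(t) > score(s):
                            s, improved = t, True
                return s

            best = local_search([rng.randint(0, 1) for _ in range(n_bits)])
            for _ in range(restarts):
                s = best.copy()
                for i in rng.sample(range(n_bits), 3):   # perturbation step
                    s[i] ^= 1
                s = local_search(s)
                if score(s) > score(best):
                    best = s
            return best

        # Toy stand-in score, NOT a real conditional likelihood.
        toy_score = lambda s: sum(b * w for b, w in zip(s, range(-10, 10)))
        print(iterated_local_search(toy_score))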

    Application of expert systems in project management decision aiding

    The feasibility of developing an expert-system-based project management decision aid to enhance the performance of NASA project managers was assessed. The research effort included extensive literature reviews in the areas of project management, project management decision aiding, expert systems technology, and human-computer interface engineering. Literature reviews were augmented by focused interviews with NASA managers. Time estimation for project scheduling was identified as the target activity for decision augmentation, and a design was developed for an Integrated NASA System for Intelligent Time Estimation (INSITE). The proposed INSITE design was judged feasible with a low level of risk. A partial proof-of-concept experiment was performed and was successful. Specific conclusions drawn from the research and analyses are included. The INSITE concept is potentially applicable in any management sphere, commercial or government, where time estimation is required for project scheduling. As project scheduling is a nearly universal management activity, the range of possibilities is considerable. The INSITE concept also holds potential for enhancing other management tasks, especially in areas such as cost estimation, where estimation-by-analogy is already a proven method.
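
    The abstract does not detail INSITE's internals; purely as an illustration of the estimation-by-analogy idea it mentions, the sketch below estimates a duration by averaging the k most similar past projects. The feature set and history records are invented for the example.

        import math

        # Invented example records: (team size, complexity 1-5, reuse
        # fraction) -> observed duration in weeks. Not INSITE data.
        history = [
            ((4, 3, 0.2), 12.0),
            ((8, 4, 0.1), 30.0),
            ((3, 2, 0.5), 6.0),
        ]

        def estimate_by_analogy(features, k=2):
            # Average the durations of the k most similar past projects.
            nearest = sorted(history,
                             key=lambda rec: math.dist(rec[0], features))[:k]
            return sum(weeks for _, weeks in nearest) / k

        print(estimate_by_analogy((5, 3, 0.3)), "weeks (rough estimate)")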

    Social Learning Systems: The Design of Evolutionary, Highly Scalable, Socially Curated Knowledge Systems

    In recent times, great strides have been made towards the advancement of automated reasoning and knowledge management applications, along with their associated methodologies. The introduction of the World Wide Web piqued academicians' interest in harnessing the power of linked, online documents for the purpose of developing machine learning corpora, providing dynamical knowledge bases for question answering systems, fueling automated entity extraction applications, and performing graph analytic evaluations, such as uncovering the inherent structural semantics of linked pages. Even more recently, substantial attention in the wider computer science and information systems disciplines has been focused on the evolving study of social computing phenomena, primarily those associated with the use, development, and analysis of online social networks (OSNs). This work followed an independent effort to develop an evolutionary knowledge management system, and outlines a model for integrating the wisdom of the crowd into the process of collecting, analyzing, and curating data for dynamical knowledge systems. Throughout, we examine how relational data modeling, automated reasoning, crowdsourcing, and social curation techniques have been exploited to extend the utility of web-based, transactional knowledge management systems, creating a new breed of knowledge-based system in the process: the Social Learning System (SLS). The key questions this work has explored by way of elucidating the SLS model include considerations for 1) how it is possible to unify Web and OSN mining techniques to conform to a versatile, structured, and computationally efficient ontological framework, and 2) how large-scale knowledge projects may incorporate tiered collaborative editing systems in an effort to elicit knowledge contributions and curation activities from a diverse, participatory audience.
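
    As a loose, hypothetical sketch of the tiered collaborative editing idea (the tier names and approval policy below are invented, not taken from the SLS design), edits from lower-trust tiers can be queued for curation while trusted curators write directly:

        from collections import deque

        # Invented trust tiers; real systems would define their own.
        TIERS = {"visitor": 0, "contributor": 1, "curator": 2}

        class TieredKnowledgeBase:
            def __init__(self):
                self.facts, self.pending = {}, deque()

            def submit(self, user_tier, key, value):
                if TIERS[user_tier] >= TIERS["curator"]:
                    self.facts[key] = value            # trusted: applied directly
                else:
                    self.pending.append((key, value))  # queued for curation

            def curate(self, approve):
                # A curator reviews queued edits with an approval predicate.
                while self.pending:
                    key, value = self.pending.popleft()
                    if approve(key, value):
                        self.facts[key] = value

        kb = TieredKnowledgeBase()
        kb.submit("contributor", "Mars", "fourth planet from the Sun")
        kb.curate(lambda k, v: True)   # this curator approves everything
        print(kb.facts)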

    The 1992 Goddard Conference on Space Applications of Artificial Intelligence

    The purpose of this conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers fall into the following areas: planning and scheduling, control, fault monitoring/diagnosis and recovery, information management, tools, neural networks, and miscellaneous applications.

    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development and to fielded NASA applications, particularly those applications that are enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and with AI applications groups at all NASA centers.

    Study on Parametric Optimization of Fused Deposition Modelling (FDM) Process

    Rapid prototyping (RP) is a generic term for a number of technologies that enable the fabrication of physical objects directly from CAD data sources. In contrast to classical methods of manufacturing such as milling and forging, which are based on subtractive and formative principles respectively, these processes are based on an additive principle for part fabrication. The biggest advantage of RP processes is that an entire 3-D (three-dimensional) consolidated assembly can be fabricated in a single setup without any tooling or human intervention; further, the part fabrication methodology is independent of the complexity of the part geometry. Due to these advantages, RP has attracted considerable attention from manufacturing industries seeking to meet customer demands for continuous and rapid changes in manufacturing in the shortest possible time and to gain an edge over competitors. Among the commercially available RP processes, fused deposition modelling (FDM) uses a heated thermoplastic filament that is extruded from the tip of a nozzle in a prescribed manner in a temperature-controlled environment, building the part through layer-by-layer deposition. Simplicity of operation, together with the ability to fabricate parts with locally controlled properties, has resulted in its widespread application not only for prototyping but also for making functional parts. However, the FDM process has its own demerits related to accuracy, surface finish, strength, etc. Hence, it is necessary to understand the shortcomings of the process and identify the controllable factors for improving part quality.

    In this direction, the present study focuses on improving the part-build methodology by properly controlling the process parameters. The thesis deals with various part quality measures, such as improvement in dimensional accuracy, minimization of surface roughness, and improvement in mechanical properties measured in terms of tensile, compressive, flexural and impact strength and sliding wear. The understanding generated in this work not only explains the complex build mechanism but also presents in detail the influence of processing parameters such as layer thickness, orientation, raster angle, raster width and air gap on the studied responses, with the help of statistically validated models, microphotographs and non-traditional optimization methods. For improving the dimensional accuracy of the part, Taguchi's experimental design is adopted, and it is found that the measured dimension is oversized along the thickness direction and undersized along the length, width and diameter of the hole. It is observed that different factors and interactions control the part dimensions along different directions. Shrinkage of the semi-molten material extruding from the deposition nozzle is the major cause of the reduction in part dimensions; the oversized dimension is attributed to uneven layer-surface generation and slicing constraints. For recommending an optimal factor setting to improve the overall dimensions of the part, the grey Taguchi method is used. Prediction models based on artificial neural networks and fuzzy inference principles are also proposed and compared with the Taguchi predictive model; the fuzzy inference system shows better prediction capability than the artificial neural network model. In order to minimize surface roughness, a process improvement strategy through effective control of process parameters based on a central composite design (CCD) is employed. Empirical models relating the responses to the process parameters are developed, and their validity is established using analysis of variance (ANOVA) and residual analysis. Experimental results indicate that the significant process parameters and their interactions differ for minimizing roughness on different surfaces. The surface roughness responses along three surfaces are therefore combined into a single response, a multi-response performance index (MPI), using principal component analysis, and the bacterial foraging optimization algorithm (BFOA), a recent evolutionary approach, is adopted to find the process parameter setting that maximizes MPI.

    The effect of process parameters on mechanical properties, viz. tensile, flexural, impact and compressive strength of parts fabricated using FDM, is assessed using CCD, and the influence of each parameter is analyzed. The major reason for weak strength is attributed to distortion within or between the layers. In practice, fabricated parts must withstand more than one type of loading simultaneously; to address this, all the studied strengths are combined into a single response known as composite desirability, and the optimum parameter setting that maximizes it is determined using quantum-behaved particle swarm optimization (QPSO). Resistance to wear is an important consideration for enhancing the service life of functional parts; hence, the present work also includes an extensive study of the effect of process parameters on the sliding wear of test specimens. This study not only provides insight into the complex dependency of wear on process parameters but also develops a statistically validated predictive equation that process planners can use for accurate wear prediction in practice. Finally, a comparative evaluation of the two swarm-based optimization methods, QPSO and BFOA, is presented; it is shown that BFOA, because of its biologically motivated structure, has better exploration and exploitation ability but requires more time to converge than QPSO. The methodology adopted in this study is quite general and can be used for other related or allied processes, especially multi-input, multi-output systems. The proposed study can be used by industries such as aerospace, automobile and medical for identifying process capability, further improving the FDM process, or developing new processes based on similar principles.
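
    As a hedged sketch of the composite-desirability step described above (the response values and limits below are invented; the thesis's actual desirability transforms may differ), each strength is mapped to a 0-1 larger-the-better desirability and the responses are combined by a geometric mean:

        # Invented sample values: (measured, worst acceptable, best target).
        responses = {
            "tensile MPa":     (18.0, 10.0, 25.0),
            "flexural MPa":    (30.0, 20.0, 40.0),
            "impact kJ/m^2":   (0.4,  0.2,  0.6),
            "compressive MPa": (22.0, 15.0, 30.0),
        }

        def desirability(y, lo, hi):
            # Larger-the-better desirability, clipped to [0, 1].
            return min(1.0, max(0.0, (y - lo) / (hi - lo)))

        d = [desirability(*r) for r in responses.values()]
        composite = 1.0
        for di in d:
            composite *= di
        composite **= 1.0 / len(d)          # geometric mean of the d's
        print(f"composite desirability = {composite:.3f}")

    An optimizer such as QPSO would then search the process parameter space for the setting whose predicted responses maximize this single composite value.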

    Deep Learning for Multiclass Classification, Predictive Modeling and Segmentation of Disease Prone Regions in Alzheimer’s Disease

    One of the challenges facing accurate diagnosis and prognosis of Alzheimer's Disease (AD) is identifying the subtle changes that define the early onset of the disease. This dissertation investigates three of the main challenges confronted when such subtle changes are to be identified in the most meaningful way: (1) the missing data challenge, (2) longitudinal modeling of disease progression, and (3) the segmentation and volumetric calculation of disease-prone brain areas in medical images. The scarcity of sufficient data, compounded by missing data in many longitudinal samples, exacerbates the problem as we seek statistical meaningfulness in multiclass classification and regression analysis. Although there are many participants in the AD Neuroimaging Initiative (ADNI) study, many observations have missing features, which often leads to the exclusion of potentially valuable data points from ongoing experiments. Motivated by the necessity of examining all participants, even those with missing tests or imaging modalities, multiple techniques for handling missing data in this domain have been explored. Specific attention was drawn to the Gradient Boosting (GB) algorithm, which has an inherent capability of addressing missing values. Prior to applying state-of-the-art classifiers such as the Support Vector Machine (SVM) and Random Forest (RF), the impact of imputing data in common datasets with numerical techniques was also investigated and compared with the GB algorithm. Furthermore, to discriminate AD subjects from healthy control individuals and those with Mild Cognitive Impairment (MCI), longitudinal multimodal heterogeneous data was modeled using recurrent neural networks (RNNs). In the segmentation and volumetric calculation challenge, this dissertation focuses on one of the most relevant disease-prone areas in many neurological and neurodegenerative diseases: the hippocampus. Changes in hippocampus shape and volume are considered significant biomarkers for AD diagnosis and prognosis. Thus, a two-stage model based on integrating a Vision Transformer and a Convolutional Neural Network (CNN) is developed to automatically locate, segment, and estimate hippocampus volume from 3D brain MRI. The proposed architecture was trained and tested on a dataset containing 195 brain MRIs from the 2019 Medical Segmentation Decathlon Challenge against the manually segmented regions provided therein, and was deployed on 326 MRIs from our own data collected through Mount Sinai Medical Center as part of the 1Florida Alzheimer's Disease Research Center (ADRC).
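
    As an illustration of the missing-data comparison described above (using scikit-learn stand-ins on synthetic data, not the dissertation's ADNI pipeline), a histogram-based gradient-boosting classifier that routes NaNs natively can be compared against a random forest fed mean-imputed data:

        import numpy as np
        from sklearn.ensemble import (HistGradientBoostingClassifier,
                                      RandomForestClassifier)
        from sklearn.impute import SimpleImputer
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for ADNI-style features; NOT the study data.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 10))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        X[rng.random(X.shape) < 0.2] = np.nan    # knock out ~20% of entries

        # Gradient boosting handles the missing values natively ...
        gb = HistGradientBoostingClassifier(random_state=0)
        print("GB, native NaN handling:",
              cross_val_score(gb, X, y, cv=5).mean())

        # ... while the random forest needs the data imputed first.
        X_imp = SimpleImputer(strategy="mean").fit_transform(X)
        rf = RandomForestClassifier(random_state=0)
        print("RF after mean imputation:",
              cross_val_score(rf, X_imp, y, cv=5).mean())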