
    Geoscience after IT: Part J. Human requirements that shape the evolving geoscience information system

    The geoscience record is constrained by the limitations of human thought and of the technology for handling information. IT can lead us away from the tyranny of older technology, but to find the right path, we need to understand our own limitations. Language, images, data and mathematical models are tools for expressing and recording our ideas. Backed by intuition, they enable us to think in various modes, to build knowledge from information, and to create models as artificial views of a real world. Markup languages may accommodate more flexible and better-connected records, and the object-oriented approach may help to match IT more closely to our thought processes.

    Application of expert systems in project management decision aiding

    The feasibility of developing an expert systems-based project management decision aid to enhance the performance of NASA project managers was assessed. The research effort included extensive literature reviews in the areas of project management, project management decision aiding, expert systems technology, and human-computer interface engineering. Literature reviews were augmented by focused interviews with NASA managers. Time estimation for project scheduling was identified as the target activity for decision augmentation, and a design was developed for an Integrated NASA System for Intelligent Time Estimation (INSITE). The proposed INSITE design was judged feasible with a low level of risk. A partial proof-of-concept experiment was performed and was successful. Specific conclusions drawn from the research and analyses are included. The INSITE concept is potentially applicable in any management sphere, commercial or government, where time estimation is required for project scheduling. As project scheduling is a nearly universal management activity, the range of possibilities is considerable. The INSITE concept also holds potential for enhancing other management tasks, especially in areas such as cost estimation, where estimation-by-analogy is already a proven method.

    Artificial intelligence and distance learning philosophy in support of PfP mandate

    Computers have long been utilised in the legal environment. Their main use, however, has merely been to automate office tasks. More exciting is the prospect of using artificial intelligence (AI) technology to create computers that can emulate the substantive legal work performed by lawyers and that can autonomously reason with the law to determine legal solutions, for example in structuring and supporting the Partnership for Peace (PfP) mandate. Such attempts have not yet been successful. Modelling the law and emulating the processes of legal reasoning have proved to be more complex and subtle than originally envisaged. The adoption by AI researchers specialising in law of new AI techniques, such as case-based reasoning, neural networks, fuzzy logic, deontic logics and non-monotonic logics, may bring an automation of legal reasoning closer. Unfortunately, these approaches also suffer several drawbacks that will need to be overcome if this is to be achieved. Even if these new techniques do not achieve an automation of legal reasoning, they will still be valuable in better automating office tasks and in providing insights into the nature of law. An idea to apply the technology of intelligent multi-agent systems to computer-aided learning (CAL) in law is currently being developed as a research project by the author of this article (see e.g. [Antoliš, 2002]). Similar projects are usually based on the most modern technologies of multimedia and hypermedia, as described in this article. The theoretical foundations of the design and architecture of an intelligent system for decision support in law and a distance-learning environment are, however, at an early stage of development.

    Investigating effort prediction of web-based applications using CBR on the ISBSG dataset

    As web-based applications become more popular and more sophisticated, so does the requirement for early, accurate estimates of the effort required to build such systems. Case-based reasoning (CBR) has been shown to be a reasonably effective estimation strategy, although it has not been widely explored in the context of web applications. This paper reports on a study carried out on a subset of the ISBSG dataset to examine the optimal number of analogies that should be used in making a prediction. The results show that it is not possible to select such a value with confidence, and that, in common with other findings in different domains, the effectiveness of CBR is hampered by other factors, including the characteristics of the underlying dataset (such as the spread of data and presence of outliers) and the calculation employed to evaluate the distance function (in particular, the treatment of numeric and categorical data).
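The treatment of numeric and categorical data in the distance function, mentioned above, is a recurring design decision in CBR estimators. A minimal sketch of one common convention (not the specific calculation used in this study): range-normalise numeric features, and score categorical features 0 on an exact match and 1 otherwise. The function name and feature encoding are illustrative assumptions.

```python
import math

def mixed_distance(case_a, case_b, ranges):
    """Distance between two project cases with mixed feature types.

    `ranges` maps each feature name to a (lo, hi) tuple for numeric
    features, or to None for categorical ones. Numeric differences are
    range-normalised to [0, 1]; categorical features contribute 0 on an
    exact match and 1 otherwise. Per-feature distances are combined in
    Euclidean fashion and averaged over the feature count.
    """
    total = 0.0
    for feat, lo_hi in ranges.items():
        if lo_hi is None:  # categorical feature
            d = 0.0 if case_a[feat] == case_b[feat] else 1.0
        else:              # numeric feature, range-normalised
            lo, hi = lo_hi
            d = abs(case_a[feat] - case_b[feat]) / (hi - lo) if hi > lo else 0.0
        total += d * d
    return math.sqrt(total / len(ranges))
```

With this convention, two maximally different projects sit at distance 1, and identical projects at distance 0, regardless of how many features are collected.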

    An Empirical investigation into software effort estimation by analogy

    Most practitioners recognise the important part accurate estimates of development effort play in the successful management of major software projects. However, it is widely recognised that current estimation techniques are often very inaccurate, while studies (Heemstra 1992; Lederer and Prasad 1993) have shown that effort estimation research is not being effectively transferred from the research domain into practical application. Traditionally, research has been almost exclusively focused on the advancement of algorithmic models (e.g. COCOMO (Boehm 1981) and SLIM (Putnam 1978)), where effort is commonly expressed as a function of system size. However, in recent years there has been a discernible movement away from algorithmic models with non-algorithmic systems (often encompassing machine learning facets) being actively researched. This is potentially a very exciting and important time in this field, with new approaches regularly being proposed. One such technique, estimation by analogy, is the focus of this thesis. The principle behind estimation by analogy is that past experience can often provide insights and solutions to present problems. Software projects are characterised in terms of collectable features (such as the number of screens or the size of the functional requirements) and stored in a historical case base as they are completed. Once a case base of sufficient size has been cultivated, new projects can be estimated by finding similar historical projects and re-using the recorded effort. To make estimation by analogy feasible it became necessary to construct a software tool, dubbed ANGEL, which allowed the collection of historical project data and the generation of estimates for new software projects. A substantial empirical validation of the approach was made encompassing approximately 250 real historical software projects across eight industrial data sets, using stepwise regression as a benchmark. 
Significance tests on the results supported the hypothesis (at the 1% significance level) that estimation by analogy is a more accurate prediction system than stepwise regression. A study was also made of the sensitivity of the analogy approach. By growing project data sets in a pseudo time-series fashion it was possible to answer pertinent questions about the approach, such as: what are the effects of outlying projects, and what is the minimum data set size? The main conclusions of this work are that estimation by analogy is a viable estimation technique that would seem to offer some advantages over algorithmic approaches, including improved accuracy, easier use of categorical features and an ability to operate even where no statistical relationships can be found.
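The core estimation-by-analogy loop described above (find the most similar completed projects, then re-use their recorded effort) can be sketched in a few lines. This is a generic illustration of the technique, not the actual ANGEL implementation; the dictionary-based case representation and the mean-of-analogies rule are assumptions for the sketch.

```python
import math

def estimate_by_analogy(new_project, case_base, ranges, k=3):
    """Estimate effort for a new project as the mean recorded effort
    of its k most similar completed projects (the 'analogies').

    `ranges` maps each numeric feature to its (lo, hi) range so that
    features with large magnitudes do not dominate the distance.
    """
    def dist(a, b):
        total = 0.0
        for feat, (lo, hi) in ranges.items():
            span = hi - lo
            d = abs(a[feat] - b[feat]) / span if span else 0.0
            total += d * d
        return math.sqrt(total)

    # Rank the historical case base by similarity to the new project.
    ranked = sorted(case_base, key=lambda case: dist(new_project, case))
    analogies = ranked[:k]
    return sum(case["effort"] for case in analogies) / len(analogies)
```

Because the estimate is just an average over retrieved cases, the method needs no fitted statistical model, which is what lets it operate where regression finds no usable relationship.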

    Intelligent control based on fuzzy logic and neural net theory

    In the conception and design of intelligent systems, one promising direction involves the use of fuzzy logic and neural network theory to enhance such systems' capability to learn from experience and adapt to changes in an environment of uncertainty and imprecision. Here, an intelligent control scheme is explored by integrating these multidisciplinary techniques. A self-learning system is proposed as an intelligent controller for dynamical processes, employing a control policy which evolves and improves automatically. One key component of the intelligent system is a fuzzy logic-based system which emulates human decision making behavior. It is shown that the system can solve a fairly difficult control learning problem. Simulation results demonstrate that improved learning performance can be achieved in relation to previously described systems employing bang-bang control. The proposed system is relatively insensitive to variations in the parameters of the system environment.
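Unlike the bang-bang control mentioned above, which switches abruptly between extremes, a fuzzy controller blends graded rule activations into a smooth action. A minimal single-input sketch of the idea, with made-up membership functions and rules (not the controller from this paper):

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    """One-input fuzzy controller for an error signal in [-1, 1].

    Three rules: error negative -> push +1, error near zero -> no
    action, error positive -> push -1. The crisp output is the
    activation-weighted average of the rule consequents (a simplified
    centroid defuzzification).
    """
    neg = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0, 0.0, 1.0)
    pos = tri(error, 0.0, 1.0, 2.0)
    numerator = neg * 1.0 + zero * 0.0 + pos * (-1.0)
    denominator = neg + zero + pos
    return numerator / denominator if denominator else 0.0
```

Because neighbouring membership functions overlap, the output varies continuously with the error instead of jumping between full positive and full negative action.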

    Training for understanding: a model for mediating abstract statistical concepts

    The use of training models is suggested as an instructional tool to facilitate understanding of abstract concepts, as measured by transfer to problem-solving settings. A three-stage model was developed to mediate the transfer of attributes and structures from known concrete concepts to unknown abstract concepts. Student groups received computer-based training followed by post-tests for factual retention, near transfer of model procedures (non-novel problems), and far transfer of model procedures (novel problems). Two test groups of university students received training using the developed training model together with instruction in abstract statistical concepts. Two control groups received either model or statistical-concept training, but not both. Results indicate that the model training improved factual retention but did not increase near and far transfer in the statistical problem-solving situation.