
    Knowledge-based systems and geological survey

    This personal and pragmatic review of the philosophy underpinning methods of geological surveying suggests that important influences of information technology have yet to make their impact. Early approaches took existing systems as metaphors, retaining the separation of maps, map explanations and information archives, organised around map sheets of fixed boundaries, scale and content. But system design should look ahead: a computer-based knowledge system for the same purpose can be built around hierarchies of spatial objects and their relationships, with maps as one means of visualisation, and information types linked as hypermedia and integrated in mark-up languages. The system framework and ontology, derived from the general geoscience model, could support consistent representation of the underlying concepts and maintain reference information on object classes and their behaviour. Models of processes and historical configurations could clarify the reasoning at any level of object detail and introduce new concepts such as complex systems. The up-to-date interpretation might centre on spatial models, constructed with explicit geological reasoning and evaluation of uncertainties. Assuming (at a future time) full computer support, the field survey results could be collected in real time as a multimedia stream, hyperlinked to and interacting with the other parts of the system as appropriate. Throughout, the knowledge is seen as human knowledge, with interactive computer support for recording and storing the information and processing it by such means as interpolating, correlating, browsing, selecting, retrieving, manipulating, calculating, analysing, generalising, filtering, visualising and delivering the results. Responsibilities may have to be reconsidered for various aspects of the system, such as: field surveying; spatial models and interpretation; geological processes, past configurations and reasoning; standard setting, system framework and ontology maintenance; training; storage, preservation, and dissemination of digital records.
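    The central design idea above is a hierarchy of spatial objects with typed relationships and hypermedia links, with maps as only one visualisation of the same data. The following minimal Python sketch illustrates that shape of data structure; every class name, field and example value is an illustrative assumption, not part of any actual survey system described in the review.

    # Sketch only: geological entities as a hierarchy of spatial objects whose
    # relationships and hypermedia links are stored once; a map would be just
    # one view rendered over this structure.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class SpatialObject:
        name: str                      # e.g. a formation or a mapped fault
        ontology_class: str            # object class defined in the system ontology
        parent: Optional["SpatialObject"] = None        # part-of / member-of link
        children: List["SpatialObject"] = field(default_factory=list)
        hyperlinks: Dict[str, str] = field(default_factory=dict)  # label -> record URI

        def add_child(self, child: "SpatialObject") -> None:
            child.parent = self
            self.children.append(child)

        def ancestors(self) -> List["SpatialObject"]:
            """Walk up the hierarchy, e.g. bed -> member -> formation -> group."""
            chain, node = [], self.parent
            while node is not None:
                chain.append(node)
                node = node.parent
            return chain

    # Usage: a unit nested under a group, with a hyperlink to a field record.
    group = SpatialObject("Example Group", "lithostratigraphic_unit")
    unit = SpatialObject("Example Formation", "lithostratigraphic_unit",
                         hyperlinks={"field notes": "https://example.org/notes/1"})
    group.add_child(unit)
    print([a.name for a in unit.ancestors()])   # ['Example Group']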

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author’s and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided, for example for more intelligent retrieval, put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints for the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough. Complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
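    The paper names ontology mapping and the elimination of conflicts of reference as key tasks. As a hedged illustration only, the sketch below shows the simplest possible flavour of such mapping: aligning two class vocabularies by normalised label match. The AKT tools are far richer than this; the toy ontologies and helper names are invented for the example.

    # Minimal label-matching alignment between two class vocabularies.
    from typing import Dict, List, Tuple

    def normalise(label: str) -> str:
        """Crude normalisation: lower-case and strip separators."""
        return "".join(ch for ch in label.lower() if ch.isalnum())

    def align_by_label(onto_a: List[str], onto_b: List[str]) -> List[Tuple[str, str]]:
        """Return candidate equivalences between two lists of class labels."""
        index: Dict[str, str] = {normalise(b): b for b in onto_b}
        return [(a, index[normalise(a)]) for a in onto_a if normalise(a) in index]

    # Toy example: two project vocabularies describing people.
    vocab_a = ["Person", "Research-Student", "SeniorLecturer"]
    vocab_b = ["person", "research student", "Professor"]
    print(align_by_label(vocab_a, vocab_b))
    # [('Person', 'person'), ('Research-Student', 'research student')]

    Real mapping also has to resolve conflicts of reference (two labels, one entity) and structural mismatches, which simple label matching cannot do; this is exactly the heterogeneity the brokering meta-services are meant to absorb.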

    Pragmatic cost estimation for web applications

    Cost estimation for web applications is an interesting and difficult challenge for researchers and industrial practitioners. It is a particularly valuable area of ongoing commercial research. Attaining accurate cost estimation for web applications is essential for providing competitive bids and remaining successful in the market. The development of prediction techniques, which began over thirty years ago, has contributed to several different strategies. Unfortunately, there is no collective evidence to give substantial advice or guidance for industrial practitioners. To address this problem, this thesis investigates the characteristics of the dataset by combining literature review and industrial survey findings. The results of the systematic literature review, the industrial survey and an initial investigation led to an understanding that dataset characteristics may influence cost estimation prediction techniques. From this, an investigation was carried out on dataset characteristics. However, in attempting to structure the dataset characteristics, it proved to be neither practical nor easy to derive a defined structure of characteristics to use as a basis for prediction model selection. The thesis therefore develops a pragmatic cost estimation strategy based on collected advice and generally sound practice in cost estimation. The strategy is composed of the following five steps: test whether the predictions are better than the means of the dataset; test the predictions using accuracy measures such as MMRE, Pred and MAE, knowing their strengths and weaknesses; investigate the prediction models formed to see whether they are sensible and reasonable models; perform significance testing on the predictions; and compute the effect size to establish preference relations among prediction models. The results from this pragmatic cost estimation strategy not only give advice on several techniques to choose from, but also give reliable results. Practitioners can be more confident about the estimates obtained by following this pragmatic cost estimation strategy. It can be concluded that practitioners should focus on the best strategy to apply in cost estimation rather than on the best techniques. This pragmatic cost estimation strategy could therefore help researchers and practitioners to obtain reliable results. The improvement and replication of this strategy over time will produce much more useful and trusted results.
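    The second step of the strategy relies on the named accuracy measures. As a hedged sketch of those calculations only (the effort figures below are invented, not from the thesis), MMRE, Pred(25) and MAE can be computed as follows.

    # Accuracy measures used in step two of the strategy.
    from typing import List

    def mmre(actual: List[float], predicted: List[float]) -> float:
        """Mean Magnitude of Relative Error: mean(|actual - pred| / actual)."""
        return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

    def pred(actual: List[float], predicted: List[float], level: float = 0.25) -> float:
        """Pred(l): proportion of estimates whose relative error is within l."""
        hits = sum(abs(a - p) / a <= level for a, p in zip(actual, predicted))
        return hits / len(actual)

    def mae(actual: List[float], predicted: List[float]) -> float:
        """Mean Absolute Error, less affected by the asymmetry MMRE is criticised for."""
        return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

    actual = [120.0, 80.0, 200.0, 45.0]        # person-hours (invented sample)
    predicted = [100.0, 90.0, 260.0, 44.0]
    print(mmre(actual, predicted), pred(actual, predicted), mae(actual, predicted))

    Knowing the strengths and weaknesses of each measure, as the strategy stresses, means not relying on any single number; the later steps (significance testing and effect size) guard against reading too much into small differences.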

    EffortEst - An Enhanced Software Effort Estimation by Analogy Method

    Over the past few years, large-scale software project development has become a point of growing interest to many organizations, and predicting the size, cost and effort of software projects has become a very significant task for project managers. Inaccurate predictions often result in software projects exceeding their budget and falling behind schedule. Software project managers have therefore been introduced to numerous software tools and methods in recent years to automate their tasks. The paper presents some existing analogy-based software estimation tools used by project managers and critically analyses them to identify their shortcomings. Finally, an enhanced software effort estimation method is proposed. A system prototype named EffortEst has been implemented and evaluated based on the enhanced method. EffortEst provides a near-best estimate of software project effort with limited user intervention.
    Keywords: Software Effort Estimation, Analogy, Case-Based Reasoning, Prototype. (Article history: received 16 September 2016, accepted 9 December 2016.)
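    For readers unfamiliar with estimation by analogy (case-based reasoning), the sketch below shows the basic idea: retrieve the k most similar past projects and average their known effort. The feature names, the tiny case base and the unweighted averaging are illustrative assumptions, not the EffortEst implementation.

    # Minimal analogy-based effort estimation: k nearest past projects by
    # Euclidean distance over numeric features, estimate = mean of their effort.
    from math import sqrt
    from typing import Dict, List, Tuple

    Case = Tuple[Dict[str, float], float]   # (feature vector, known effort)

    def distance(a: Dict[str, float], b: Dict[str, float]) -> float:
        """Euclidean distance over the features the two projects share."""
        return sqrt(sum((a[f] - b[f]) ** 2 for f in a if f in b))

    def estimate_by_analogy(target: Dict[str, float], cases: List[Case], k: int = 2) -> float:
        """Mean effort of the k nearest past projects (unweighted analogy)."""
        nearest = sorted(cases, key=lambda c: distance(target, c[0]))[:k]
        return sum(effort for _, effort in nearest) / k

    case_base: List[Case] = [
        ({"size_kloc": 10, "team": 3}, 400.0),
        ({"size_kloc": 25, "team": 5}, 1100.0),
        ({"size_kloc": 8,  "team": 2}, 350.0),
    ]
    new_project = {"size_kloc": 12, "team": 3}
    print(estimate_by_analogy(new_project, case_base))   # average of the two closest

    A production tool would normalise features, weight them, and let the user inspect the retrieved analogues; reducing that manual tuning is the kind of "limited user intervention" the abstract claims for EffortEst.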

    Double Whammy - How ICT Projects are Fooled by Randomness and Screwed by Political Intent

    The cost-benefit analysis formulates the holy trinity of project management objectives: cost, schedule, and benefits. As our previous research has shown, ICT projects deviate from their initial cost estimate by more than 10% in 8 out of 10 cases. Academic research has argued that Optimism Bias and Black Swan Blindness cause forecasts to fall short of actual costs. Firstly, optimism bias has been linked to effects of deception and delusion, which are caused by taking the inside view and ignoring distributional information when making decisions. Secondly, we have argued before that Black Swan Blindness makes decision-makers ignore outlying events even when decisions and judgements are based on the outside view. Using a sample of 1,471 ICT projects with a total value of USD 241 billion, we answer the question: can we show the different effects of Normal Performance, Delusion, and Deception? We calculated the cumulative distribution function (CDF) of (actual - forecast) / forecast. Our results show that the CDF changes at two tipping points: the first transforms an exponential function into a Gaussian bell curve; the second transforms the bell curve into a power law distribution with the power of 2. We argue that these results show that project performance up to the first tipping point is politically motivated, while project performance above the second tipping point indicates that project managers and decision-makers are fooled by random outliers because they are blind to thick tails. We then show that Black Swan ICT projects are a significant source of uncertainty to an organisation, of which management needs to be aware.
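    The quantity at the heart of the analysis is the relative deviation (actual - forecast) / forecast and its empirical CDF. The sketch below only illustrates that calculation; the handful of budget figures is invented, whereas the paper's analysis rests on 1,471 real projects, and fitting the exponential, Gaussian and power-law regimes is beyond this snippet.

    # Relative cost deviation per project and its empirical CDF.
    from typing import List, Tuple

    def relative_deviation(actual: float, forecast: float) -> float:
        return (actual - forecast) / forecast

    def empirical_cdf(values: List[float]) -> List[Tuple[float, float]]:
        """Pairs (x, P(X <= x)) over the sorted sample."""
        ordered = sorted(values)
        n = len(ordered)
        return [(x, (i + 1) / n) for i, x in enumerate(ordered)]

    forecasts = [1.0, 2.5, 0.8, 4.0, 1.2]          # invented budgets (USD m)
    actuals   = [1.1, 2.4, 1.6, 9.0, 1.3]          # invented outturn costs
    deviations = [relative_deviation(a, f) for a, f in zip(actuals, forecasts)]
    for x, p in empirical_cdf(deviations):
        print(f"overrun {x:+.0%} -> cumulative share {p:.0%}")

    On a large sample, the paper's tipping points would appear as changes in the shape of this curve: exponential in the body, Gaussian beyond the first break, and a heavy power-law tail beyond the second.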

    Mastering the requirements analysis for communication-intensive websites

    Web application development still needs effective methods to accommodate some distinctive aspects of the requirements analysis process: capturing high-level communication goals, considering several user profiles and stakeholders, defining hypermedia-specific requirements (concerning navigation, content, information structure and presentation), and reusing requirements for an effective usability evaluation. Techniques should be usable by both stakeholders and the design team, require little training effort, and show a relative advantage to project managers. Over the last few years, requirements methodologies applied to web-based applications have considered mainly the transactional and operational aspects typical of traditional information systems; the communicational aspects of web sites have been neglected by systematic requirements methods. This thesis, starting from key achievements in Requirements Engineering (hereafter RE), introduces a model (AWARE) for defining and analysing requirements for web applications conceived mainly as strategic communication means for an institution or organization. The model extends traditional goal- and scenario-based approaches for refining high-level goals into website requirements by introducing the analysis of ill-defined user goals and stakeholder communication goals, together with a hypermedia requirement taxonomy that facilitates web conceptual design and paves the way for a systematic usability evaluation. AWARE comprises a conceptual toolkit and a notation for effective requirements documentation. AWARE concepts and notation represent a useful communication and analysis tool that may support the elicitation, negotiation, analysis and validation of requirements from the relevant stakeholders (users included). The empirical validation of the model is carried out in two ways. Firstly, the model has been employed in web projects in the field; these case studies and the lessons learnt are presented and discussed to assess the advantages and limits of the proposal. Secondly, a sample of web analysts and designers has been asked to study and apply the model: the feedback gathered is positive and encouraging for further improvement.

    The development of web applications needs effective tools to manage some essential aspects of the requirements analysis process: the identification of strategic communication goals, the presence of a variety of user profiles and stakeholders, the definition of hypermedia requirements (concerning navigation, interaction, content and presentation), and the reuse of requirements for effective planning of the usability evaluation. Techniques are needed that are usable by both stakeholders and designers, that take little time to be learnt and used effectively, and that show significant advantages to the managers of complex projects. The thesis defines AWARE (Analysis of Web Application Requirements), a requirements analysis methodology specifically aimed at websites (and interactive applications) with strong communicative components. The methodology extends existing goal-oriented and scenario-based requirements analysis techniques by introducing a requirements taxonomy specific to websites (which provides structured input to the design activity), tools for the identification and analysis of ill-defined (generic or poorly specified) goals and of communication goals, and methodological support for usability evaluation based on the application's requirements. The AWARE methodology has been evaluated in the field through projects with professionals (web designers and IT managers) and through training activities in companies specialised in web communication.
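    To make the refinement idea concrete, the sketch below shows high-level communication goals being refined into categorised, hypermedia-specific requirements that can later drive a usability evaluation. The category names and example goals are assumptions for illustration, not AWARE's published taxonomy or notation.

    # Goals refined into sub-goals and taxonomy-tagged requirements.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Requirement:
        text: str
        category: str          # e.g. "navigation", "content", "presentation"

    @dataclass
    class Goal:
        stakeholder: str
        statement: str
        refinements: List["Goal"] = field(default_factory=list)
        requirements: List[Requirement] = field(default_factory=list)

    # A communication goal refined into a user goal and a concrete requirement.
    brand = Goal("marketing dept.", "Convey the institution as research-driven")
    find_labs = Goal("prospective student", "Easily locate research group pages")
    find_labs.requirements.append(
        Requirement("Research groups reachable within two clicks from the home page",
                    "navigation"))
    brand.refinements.append(find_labs)
    print(len(brand.refinements), brand.refinements[0].requirements[0].category)

    Tagging each requirement with a taxonomy category is what allows the same artefacts to be reused as checkpoints in a later usability evaluation.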

    Moving towards personalising translation technology

    Technology has had an important impact on the work of translators and represents a shift in the boundaries of translation work over time. Improvements in machine translation have brought about further boundary shifts in some translation work and are likely to continue having an impact. Yet translators sometimes feel frustrated with the tools they use. This chapter looks to the field of personalisation in information technology and proposes that personalising translation technology may be a way of improving translator-computer interaction. Personalisation of translation technology is considered from the perspectives of context, user modelling, trust, motivation and well-being.

    State of the art analysis; working packages in project phase II

    In this report, we introduce our goals and present our requirements analysis for the second phase of the Corporate Semantic Web project. Corporate ontology engineering will facilitate agile ontology engineering in order to lessen the costs of ontology development and, especially, maintenance. Corporate semantic collaboration focuses on the human-centered aspects of knowledge management in corporate contexts. Corporate semantic search sits at the highest application level of the three research areas and there represents applications that work on and with appropriately represented and delivered background knowledge.
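    As a hedged illustration of what "search working with background knowledge" can mean in practice, the sketch below expands a query with terms that a background ontology relates to it before matching documents. The toy ontology, corpus and function names are invented for this example and are not part of the Corporate Semantic Web deliverables.

    # Query expansion over a toy term ontology, then plain keyword matching.
    from typing import Dict, List, Set

    # Background knowledge: term -> related terms (synonyms, narrower concepts).
    ontology: Dict[str, Set[str]] = {
        "ontology engineering": {"ontology maintenance", "ontology development"},
        "collaboration": {"knowledge sharing", "teamwork"},
    }

    def expand(query: str) -> Set[str]:
        """Add ontology-related terms to the literal query term."""
        return {query} | ontology.get(query, set())

    def semantic_search(query: str, corpus: List[str]) -> List[str]:
        terms = expand(query)
        return [doc for doc in corpus if any(t in doc.lower() for t in terms)]

    docs = [
        "Guidelines for agile ontology development in small teams",
        "A survey of corporate search interfaces",
    ]
    print(semantic_search("ontology engineering", docs))   # matches the first document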