202 research outputs found

    A Reference Architecture for Mobile Knowledge Management

    Although mobile knowledge management (mKM) is perceived as an emerging R&D field, its concepts and approaches are not yet well settled, in contrast to the general field of Knowledge Management (KM). In this work, we try to establish a definition for mKM. Taking into account the building blocks of KM in enterprises and the abstract use cases of mKM systems, we introduce a reference architecture for mKM systems as a basis for verifying and comparing concepts and system architectures. Finally, we address the potential of mKM to serve as a prototype model for mobile, situation-aware information processing in the field of Ambient Intelligence Environments.

    The use of data-mining for the automatic formation of tactics

    This paper discusses the use of data-mining for the automatic formation of tactics. It was presented at the Workshop on Computer-Supported Mathematical Theory Development held at IJCAR in 2004. The aim of this project is to evaluate the applicability of data-mining techniques to the automatic formation of tactics from large corpora of proofs. We data-mine information from large proof corpora to find commonly occurring patterns. These patterns are then evolved into tactics using genetic programming techniques.
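    As a rough illustration of the pipeline described above, the sketch below mines frequent adjacent proof-step patterns from a toy corpus and evolves them with a small genetic-programming loop. Everything here is hypothetical (the corpus, the representation of a tactic as a list of step names, and the fitness function); it only illustrates the mine-then-evolve idea, not the authors' actual system.

    # Illustrative sketch only: mine frequent proof-step patterns from a toy
    # corpus, then evolve them into candidate tactics. The corpus, fitness
    # function and tactic representation are hypothetical placeholders.
    import random
    from collections import Counter

    # A "proof" is modelled here simply as a list of primitive tactic names.
    corpus = [
        ["intro", "rewrite", "simp", "assumption"],
        ["intro", "rewrite", "simp", "auto"],
        ["cases", "simp", "assumption"],
        ["intro", "cases", "simp", "auto"],
    ]

    def frequent_bigrams(proofs, min_count=2):
        """Count adjacent pairs of proof steps and keep the common ones."""
        counts = Counter()
        for proof in proofs:
            counts.update(zip(proof, proof[1:]))
        return [list(pair) for pair, count in counts.items() if count >= min_count]

    def fitness(tactic, proofs):
        """Toy fitness: how many proofs contain the tactic as a contiguous run."""
        n = len(tactic)
        return sum(
            any(proof[i:i + n] == tactic for i in range(len(proof)))
            for proof in proofs
        )

    def evolve(seeds, proofs, generations=20, population=20):
        """Very small GP-style loop: crossover and mutate step sequences."""
        pool = [list(seed) for seed in seeds]
        for _ in range(generations):
            children = []
            for _ in range(population):
                a, b = random.sample(pool, 2)
                cut_a = random.randrange(1, len(a) + 1)
                cut_b = random.randrange(len(b) + 1)
                child = a[:cut_a] + b[cut_b:]          # crossover
                if random.random() < 0.3 and child:    # mutation: drop a step
                    child.pop(random.randrange(len(child)))
                if child:
                    children.append(child)
            pool = sorted(pool + children,
                          key=lambda t: fitness(t, proofs), reverse=True)[:population]
        return pool[:5]

    if __name__ == "__main__":
        seeds = frequent_bigrams(corpus)
        for tactic in evolve(seeds, corpus):
            print(tactic, fitness(tactic, corpus))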

    HOL(y)Hammer: Online ATP Service for HOL Light

    HOL(y)Hammer is an online AI/ATP service for formal (computer-understandable) mathematics encoded in the HOL Light system. The service allows its users to upload and automatically process an arbitrary formal development (project) based on HOL Light, and to attack arbitrary conjectures that use the concepts defined in some of the uploaded projects. For that, the service uses several automated reasoning systems combined with several premise selection methods trained on all the project proofs. The projects that are readily available on the server for such query answering include the recent versions of the Flyspeck, Multivariate Analysis and Complex Analysis libraries. The service runs on a 48-CPU server, currently employing, for each task, 7 AI/ATP combinations and 4 decision procedures in parallel, all of which contribute to its overall performance. The system is also available for local installation by interested users, who can customize it for their own proof development. An Emacs interface allowing parallel asynchronous queries to the service is also provided. The overall structure of the service is outlined, problems that arise and their solutions are discussed, and an initial account of using the system is given.
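    The parallel use of several AI/ATP combinations per task can be pictured with the small scheduling sketch below. The prover back-ends are placeholder functions and the strategy names are invented; this is not HOL(y)Hammer's real interface, only an illustration of dispatching one conjecture to several strategies and taking the first success.

    # Illustrative sketch: run several prover strategies in parallel and
    # report the first proof found. All back-ends here are fake placeholders.
    import time
    import random
    from concurrent.futures import ThreadPoolExecutor, as_completed
    from concurrent.futures import TimeoutError as FuturesTimeout

    def make_prover(name, success_rate):
        """Return a fake prover that 'searches' briefly and sometimes succeeds."""
        def prove(conjecture):
            time.sleep(random.uniform(0.1, 0.5))      # simulated proof search
            if random.random() < success_rate:
                return f"{name}: proof found for {conjecture!r}"
            return None
        return prove

    STRATEGIES = [
        make_prover("atp-A, 128 premises", 0.4),
        make_prover("atp-B, 512 premises", 0.5),
        make_prover("decision-procedure-C", 0.3),
    ]

    def hammer(conjecture, timeout=2.0):
        """Dispatch the conjecture to all strategies; report the first success."""
        with ThreadPoolExecutor(max_workers=len(STRATEGIES)) as pool:
            futures = [pool.submit(strategy, conjecture) for strategy in STRATEGIES]
            try:
                for fut in as_completed(futures, timeout=timeout):
                    result = fut.result()
                    if result is not None:
                        return result
            except FuturesTimeout:
                pass
        return "no proof found within the time limit"

    if __name__ == "__main__":
        print(hammer("!x. x + 0 = x"))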

    Quantifying terrain factor using GIS applications for real estate property valuation

    This thesis studies the use of GIS applications to derive adjustment figures for the terrain factor in property valuation tasks. It aims to suggest a quantitative alternative for evaluating the terrain factor, as opposed to traditional methods and current industry practice, where terrain is judged qualitatively from visual observation at the site and is subject to individual opinion. In this study, the terrain factor is considered by analysing the slope and surface roughness elements of terrain. To achieve this, slope and surface roughness values are generated from openly available digital elevation models (DEMs) within the Esri ArcGIS software environment. For the purposes of this study, the Shuttle Radar Topography Mission (SRTM) DEM developed by the National Geospatial-Intelligence Agency (NGA) and the United States National Aeronautics and Space Administration (NASA), as well as the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM jointly developed by the Ministry of Economy, Trade and Industry, Japan (METI) and NASA, were used to derive terrain values. The resulting adjustments were tested on several hypothetical valuation cases, consisting of small and large properties, to examine the effect of DEM resolution on the results. To test the accuracy of the proposed adjustments and the applicability of the study methods, feedback from industry experts was collected via an online survey and analysed. The analysis finds that industry experts consider the terrain adjustments proposed by this method reasonable for use in industry practice, although some apprehension was also noted, as property valuers tend to exercise caution when using automated valuation methods. The proposed method is simple to apply and does not require advanced knowledge of GIS functions. Therefore, given the positive feedback from the valuation community, it could pave the way towards the future incorporation of geostatistical methods and components into value analysis.
    The comparison method of valuation rests on the basic principle that properties close to one another in location and most similar to each other in features will logically be similar in value. Using this method, the value of a subject property at a specific time and for a specific purpose is determined by gathering comparable sale evidence at the stated date of transaction, and the transacted amount of each comparable is adjusted to account for factors of dissimilarity between the comparable and the subject. While some factors (e.g. size) are numerical in nature and thus may be analysed quantitatively, in current practice many other factors are analysed qualitatively, based on observation and personal opinion. Although this qualitative way of analysing property factors is widely accepted within the valuation community, the approach leaves wide room for interpretation, as it is difficult to put a scale on personal views and opinions. Studies in spatial statistics have contributed towards the development of GIS applications that are able to handle spatial data quantitatively. In that light, this project attempts to use GIS applications to analyse a selected adjustment factor for incorporation into valuation practice. The project proposes adjustment values for the surface terrain factor by generating slope and surface roughness values from free Digital Elevation Models (DEMs). These values are then linked to the corresponding property unit to obtain the average terrain value per property. The terrain value of the comparable property unit is then compared with the terrain value of the subject property, and the difference is analysed to suggest a reasonable adjustment value for the comparable. The adjustment outputs derived from the study methods were tested by gathering feedback from property experts via an online survey based on several hypothetical valuation cases. Survey responses indicate that most respondents find the derived adjustments reasonable for application in industry, although some inconsistencies were noted in the results, likely due to the small sample size used in the project as well as the coarse resolution of the DEMs used. The proposed methods are simple to use and do not require advanced knowledge of GIS to operate, and they may be applied readily, especially where high-quality elevation data are available.
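    The two terrain measures used in the study, slope and surface roughness, and the comparison of a comparable against the subject can be illustrated with the NumPy sketch below (synthetic DEM grids; NumPy 1.20 or later for sliding_window_view). The equal-weight adjustment rule at the end is a hypothetical simplification added purely for illustration; the thesis derives its figures within ArcGIS from SRTM and ASTER DEMs.

    # Illustrative sketch (NumPy, synthetic data) of slope and surface
    # roughness from a DEM grid, and of turning the difference between a
    # comparable and the subject into an adjustment figure. The adjustment
    # rule is a hypothetical simplification, not the thesis method.
    import numpy as np

    def slope_percent(dem, cell_size=30.0):
        """Mean slope (%) from a DEM grid via finite-difference gradients."""
        dz_dy, dz_dx = np.gradient(dem, cell_size)
        return float(np.mean(np.hypot(dz_dx, dz_dy)) * 100.0)

    def surface_roughness(dem):
        """Mean roughness: local standard deviation of elevation in a 3x3 window."""
        padded = np.pad(dem, 1, mode="edge")
        windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
        return float(np.mean(windows.std(axis=(-2, -1))))

    def terrain_adjustment(subject_dem, comparable_dem):
        """Positive result => comparable sits on harder terrain than the subject,
        so its transacted price is adjusted upward (hypothetical equal-weight rule)."""
        s = slope_percent(comparable_dem) - slope_percent(subject_dem)
        r = surface_roughness(comparable_dem) - surface_roughness(subject_dem)
        return 0.5 * s + 0.5 * r   # equal weights, purely for illustration

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        flat_site  = 100.0 + rng.normal(0.0, 0.2, (20, 20))   # subject: gentle site
        rough_site = 100.0 + rng.normal(0.0, 3.0, (20, 20))   # comparable: broken terrain
        print(f"suggested adjustment: {terrain_adjustment(flat_site, rough_site):+.1f}")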

    A Logic-Independent IDE

    The author's MMT system provides a framework for defining and implementing logical systems. By combining MMT with the jEdit text editor, we obtain a logic-independent IDE. The IDE functionality includes advanced features such as context-sensitive auto-completion, search, and change management. (Comment: In Proceedings UITP 2014, arXiv:1410.785)

    Premise Selection for Mathematics by Corpus Analysis and Kernel Methods

    Smart premise selection is essential when using automated reasoning as a tool for large-theory formal proof development. A good method for premise selection in complex mathematical libraries is the application of machine learning to large corpora of proofs. This work develops learning-based premise selection in two ways. First, a newly available minimal dependency analysis of existing high-level formal mathematical proofs is used to build a large knowledge base of proof dependencies, providing precise data for ATP-based re-verification and for training premise selection algorithms. Second, a new machine learning algorithm for premise selection based on kernel methods is proposed and implemented. To evaluate the impact of both techniques, a benchmark consisting of 2078 large-theory mathematical problems is constructed, extending the older MPTP Challenge benchmark. The combined effect of the techniques results in a 50% improvement on the benchmark over the Vampire/SInE state-of-the-art system for automated reasoning in large theories. (Comment: 26 pages)
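    The kernel-based ranking idea can be pictured with the short sketch below: each formula is represented by the symbols it mentions, a Gaussian kernel scores similarity to the conjecture, and candidate premises are ranked by that score. The feature map, kernel, and toy library are generic placeholders, not the algorithm proposed in the paper.

    # Illustrative sketch of kernel-based premise ranking over symbol features.
    # The feature map and kernel are generic placeholders, not the paper's method.
    import numpy as np

    def featurise(formulas):
        """Map each formula (a set of symbol names) to a binary feature vector."""
        vocab = sorted({sym for syms in formulas.values() for sym in syms})
        index = {sym: i for i, sym in enumerate(vocab)}
        vectors = {}
        for name, syms in formulas.items():
            v = np.zeros(len(vocab))
            for sym in syms:
                v[index[sym]] = 1.0
            vectors[name] = v
        return vectors

    def gaussian_kernel(x, y, sigma=1.0):
        return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

    def rank_premises(conjecture_syms, library, top_k=3):
        """Return the library facts whose symbols are most similar to the conjecture's."""
        vectors = featurise({**library, "__goal__": conjecture_syms})
        goal = vectors.pop("__goal__")
        scored = [(name, gaussian_kernel(goal, v)) for name, v in vectors.items()]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

    if __name__ == "__main__":
        library = {
            "add_comm":  {"plus", "nat", "eq"},
            "mul_comm":  {"times", "nat", "eq"},
            "add_assoc": {"plus", "nat", "eq"},
            "le_refl":   {"le", "nat"},
        }
        for name, score in rank_premises({"plus", "nat", "eq"}, library):
            print(f"{name:10s} {score:.3f}")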

    A partial translation path from MathLang to Isabelle

    This dissertation describes certain developments in computer techniques for managing mathematical knowledge. Computers currently assist mathematicians in presenting and archiving mathematics, as well as in performing calculation and verification tasks. MathLang is a framework for computerising mathematical documents which features new approaches to these issues. In this dissertation, several extensions to MathLang are described: a system and notation for annotating text; improved methods for annotating complex mathematical expressions; and a method for creating rules to translate document annotations. A typical MathLang workflow for document annotation and computerisation is demonstrated, showing how writing style can complicate the annotation process and how such complications may be resolved. This workflow is compared with the standard process for producing formal computer theories in a computer proof assistant (Isabelle is the system we choose). The rules for translation are further discussed as a way of producing text in the syntax of Isabelle (without a deep knowledge of the system), with possible use cases of providing a text which can be used either as an aid to learning Isabelle or as a skeleton framework serving as a starting point for a formal document.
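    The idea of rules that translate document annotations into Isabelle text can be pictured with the toy sketch below, which maps a couple of hypothetical annotation kinds to an Isabelle theory skeleton with placeholder proofs. Both the annotation structure and the rule table are invented for illustration and are not MathLang's actual rule language.

    # Toy illustration of rule-based translation from document annotations to
    # an Isabelle theory skeleton. The annotations and rules are hypothetical;
    # the output is only a starting-point skeleton with 'sorry' placeholders.
    ANNOTATED_DOC = [
        {"kind": "definition", "name": "double", "body": "double n = n + n"},
        {"kind": "theorem",    "name": "double_even", "body": "even (double n)"},
    ]

    RULES = {
        "definition": 'definition {name} where\n  "{body}"',
        "theorem":    'theorem {name}: "{body}"\n  sorry',
    }

    def translate(doc, theory_name="Scratch"):
        lines = [f"theory {theory_name}", "  imports Main", "begin", ""]
        for annotation in doc:
            rule = RULES[annotation["kind"]]
            lines.append(rule.format(name=annotation["name"], body=annotation["body"]))
            lines.append("")
        lines.append("end")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(translate(ANNOTATED_DOC))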