
    Monogenic fields with odd class number Part I: odd degree

    Full text link
    We bound the average number of 2-torsion elements in the class group of monogenised fields of odd degree (and compute it precisely conditional on a tail estimate) using an orbit parametrisation of Wood. Curiously, we find that the average number of non-trivial 2-torsion elements in the class group of monogenised fields of any given odd degree is twice the value predicted for the full family of fields of that degree by the Cohen-Lenstra-Martinet-Malle heuristic! We also find that monogenicity has an increasing effect on the average number of non-trivial 2-torsion elements in the narrow class group. In addition, we obtain unconditional statements for monogenised rings of odd degree. For an order $\mathcal{O}$, denote by $\mathcal{I}_2(\mathcal{O})$ the group of 2-torsion ideals of $\mathcal{O}$. We show that the average value of the difference $\left|\mathrm{Cl}_2(\mathcal{O})\right| - 2^{1-r_1-r_2}\left|\mathcal{I}_2(\mathcal{O})\right|$ over all monogenised orders $\mathcal{O}$ of fixed signature $(r_1, r_2)$ is $1 + 2^{1-r_1-r_2}$. For 3-torsion in quadratic orders, 2-torsion in cubic orders, and 2-torsion in orders arising from odd degree binary forms, work of Bhargava-Varma and Ho-Shankar-Varma shows that the corresponding difference averaged over the full family of orders is equal to 1. This shows that monogenicity has an increasing effect not only on the class group of fields, but also on the class group of orders. Our method gives a dual proof of a result of Bhargava-Hanke-Shankar in the cubic case, reveals an interesting structure underpinning the deviation of these averages from those expected for the full families, and extends to the case of monogenised rings and fields of even degree at least 4. Comment: 48 pages, 2 figures
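To make the stated average concrete, here is one instance worked out from the formula in the abstract (the signature is chosen for illustration, not taken from the paper): for totally real monogenised cubic orders, $(r_1, r_2) = (3, 0)$, the claimed average is

```latex
\mathrm{Avg}\Big(\left|\mathrm{Cl}_2(\mathcal{O})\right|
  - 2^{1-r_1-r_2}\left|\mathcal{I}_2(\mathcal{O})\right|\Big)
  = 1 + 2^{1-3-0} = 1 + \tfrac{1}{4} = \tfrac{5}{4},
```

whereas the corresponding average over the full family of orders is 1, exhibiting the increasing effect of monogenicity described in the abstract.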

    Survey on Additive Manufacturing, Cloud 3D Printing and Services

    Full text link
    Cloud Manufacturing (CM) is the concept of using manufacturing resources in a service oriented way over the Internet. Recent developments in Additive Manufacturing (AM) are making it possible to utilise resources ad-hoc as a replacement for traditional manufacturing resources in case of spontaneous problems in the established manufacturing processes. In order to be of use in these scenarios, the AM resources must follow a strict principle of transparency and service composition in adherence to the Cloud Computing (CC) paradigm. With this review we provide an overview of CM, AM and relevant domains, as well as present the historical development of scientific research in these fields, starting from 2002. Part of this work is also a meta-review on the domain to further detail its development and structure.

    APIbuster Testing Framework

    Get PDF
    In recent years, not only has the Service-Oriented Architecture (SOA) become a popular paradigm for the development of distributed systems, but there has been significant progress in terms of their testing. Nonetheless, the multiple testing platforms available fail to fulfil the specific requirements of the Moodbuster platform from Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência (INESC TEC) – provide a systematic process to update the test knowledge, and configure and test several Representational State Transfer (REST) Application Programming Interface (API) instances. Moreover, the solution should be implemented as another REST API. The goal is to design, implement and test a platform dedicated to the testing of REST API instances. This new testing platform should allow the addition of new instances to test, the configuration and execution of sets of dedicated tests, as well as collect and store the results. Furthermore, it should support the updating of the testing knowledge with new test categories and properties on a needs basis. This dissertation describes the design, development and testing of APIbuster, a platform dedicated to the testing of REST API instances, such as Moodbuster. The approach relies on the creation and conversion of the test knowledge ontology into the persistent data model, followed by the deployment of the platform (REST API and user dashboard) through a data modelling pipeline. The APIbuster prototype was thoroughly and successfully tested considering the functional, performance, load and usability dimensions. To validate the implementation, functional and performance tests were performed regarding each API call. To ascertain the scalability of the platform, the load tests focused on the most demanding functionality. Finally, a standard usability questionnaire was distributed among users to establish the usability score of the platform.
    The results show that the data modelling pipeline supports the creation and subsequent updating of the testing platform with new test attributes and classes. The pipeline not only converts the testing knowledge ontology into the corresponding persistent data model, but also generates a fully operational testing platform instance.
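The dissertation does not include code, but the core idea of driving REST API tests from a declarative test-knowledge model can be sketched as follows. All names (`ApiTestCase`, `ApiInstance`, `run_suite`) and the checked property (status code only) are simplifying assumptions, not the actual APIbuster schema; the HTTP layer is abstracted behind a callable so the sketch runs without a live server.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical, simplified model of the "test knowledge": each case targets
# one endpoint of a registered API instance and checks one expected property.
@dataclass
class ApiTestCase:
    name: str
    method: str
    path: str
    expected_status: int

@dataclass
class ApiInstance:
    base_url: str
    cases: List[ApiTestCase] = field(default_factory=list)

def run_suite(instance: ApiInstance,
              client: Callable[[str, str], int]) -> Dict[str, bool]:
    """Execute every configured case against one API instance.

    `client` abstracts the HTTP layer as (method, url) -> status code,
    so the sketch stays self-contained.
    """
    results = {}
    for case in instance.cases:
        status = client(case.method, instance.base_url + case.path)
        results[case.name] = (status == case.expected_status)
    return results

# A stub client standing in for real HTTP calls.
def stub_client(method: str, url: str) -> int:
    return 200 if url.endswith("/courses") else 404

instance = ApiInstance("https://example.invalid/api", [
    ApiTestCase("list-courses", "GET", "/courses", 200),
    ApiTestCase("missing-route", "GET", "/nope", 404),
])
print(run_suite(instance, stub_client))
```

Adding a new test category in this scheme amounts to extending the data model (here, the dataclasses) rather than rewriting the runner, which mirrors the ontology-to-data-model pipeline the abstract describes.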

    Fine-Grained Linguistic Soft Constraints on Statistical Natural Language Processing Models

    Get PDF
    This dissertation focuses on effective combination of data-driven natural language processing (NLP) approaches with linguistic knowledge sources that are based on manual text annotation or word grouping according to semantic commonalities. I gainfully apply fine-grained linguistic soft constraints -- of syntactic or semantic nature -- to statistical NLP models, evaluated in end-to-end state-of-the-art statistical machine translation (SMT) systems. The introduction of semantic soft constraints involves intrinsic evaluation on word-pair similarity ranking tasks, extension from words to phrases, application in a novel distributional paraphrase generation technique, and an introduction of a generalized framework of which these soft semantic and syntactic constraints can be viewed as instances, and in which they can potentially be combined. Fine granularity is key to the successful combination of these soft constraints in many cases. I show how to softly constrain SMT models by adding fine-grained weighted features, each preferring translation of only a specific syntactic constituent. Previous attempts using coarse-grained features yielded negative results. I also show how to softly constrain corpus-based semantic models of words (“distributional profiles”) to effectively create word-sense-aware models, by using semantic word grouping information found in a manually compiled thesaurus. Previous attempts, using hard constraints and resulting in aggregated, coarse-grained models, yielded lower gains. A novel paraphrase generation technique incorporating these soft semantic constraints is then also evaluated in an SMT system. This paraphrasing technique is based on the Distributional Hypothesis. The main advantage of this novel technique over current “pivoting” techniques for paraphrasing is its independence from parallel texts, which are a limited resource.
    The evaluation is done by augmenting translation models with paraphrase-based translation rules, where fine-grained scoring of paraphrase-based rules yields significantly higher gains. The model augmentation includes a novel semantic reinforcement component: in many cases there are alternative paths for generating a paraphrase-based translation rule. Each of these paths reinforces a dedicated score for the “goodness” of the new translation rule. This augmented score is then used as a soft constraint, in a weighted log-linear feature, letting the translation model learn how much to “trust” the paraphrase-based translation rules. The work reported here is the first to use distributional semantic similarity measures to improve performance of an end-to-end phrase-based SMT system. The unified framework for statistical NLP models with soft linguistic constraints enables, in principle, the combination of both semantic and syntactic constraints -- and potentially other constraints, too -- in a single SMT model.
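The Distributional Hypothesis underlying the paraphrasing technique — words occurring in similar contexts tend to have similar meanings — can be illustrated with a minimal sketch. The toy corpus, window size, and raw-count profiles are illustrative assumptions; the dissertation's actual distributional profiles and similarity measures are more elaborate.

```python
import math
from collections import Counter, defaultdict

def distributional_profiles(sentences, window=2):
    """Build a co-occurrence profile (context-word counts) for each word."""
    profiles = defaultdict(Counter)
    for tokens in sentences:
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    profiles[w][tokens[j]] += 1
    return profiles

def cosine(p, q):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(p[k] * q.get(k, 0) for k in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

corpus = [
    "the firm bought the company".split(),
    "the firm acquired the company".split(),
    "the child bought a toy".split(),
]
profiles = distributional_profiles(corpus)
# "bought" and "acquired" share contexts, so they score higher
# than an unrelated pair -- the basis for proposing them as paraphrases.
print(cosine(profiles["bought"], profiles["acquired"]),
      cosine(profiles["bought"], profiles["toy"]))
```

In a paraphrase generator along these lines, high-similarity pairs become candidate substitutions, which is what makes the approach independent of parallel texts.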

    CIRA annual report 2007-2008

    Get PDF

    Quality, Risk and the Taleb Quadrants

    Get PDF
    The definition and the management of quality have evolved and assumed a variety of approaches, responding to an increased variety of needs. In industry, quality and its control have responded to the need of maintaining an industrial process operating as "expected", reducing the process sensitivity to uncontrolled disturbances (robustness), etc. By the same token, in services, quality has been defined as "satisfied customers obtaining the services they expect". Quality management, like risk management, has a general negative connotation, arising from the consequential effects of "non-quality". Quality, just as risk, is measured as a consequence resulting from factors and events defined in terms of the statistical characteristics that underlie these events. Quality and risk may thus converge, both conceptually and technically, expanding the concerns that both domains are confronted with and challenged by. In this paper, we analyze such a prospective convergence between quality and risk, and their management. In particular, we emphasize aspects of integrated quality, risk, performance and cost in industry and services. Throughout such applications, we demonstrate alternative approaches to quality management, and their merging with risk management, in order to improve both the quality and risk management processes. In the analysis we apply the four quadrants proposed by Nassim Taleb for mapping consequential risks and their probability structure. Three case studies are provided: one on risk finance, a second on risk management of telecommunication systems, and a third on quality and reliability of web based services.
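The quadrant mapping the abstract applies can be sketched as a small classifier. The two axes follow Taleb's published scheme (simple vs. complex payoffs, thin- vs. fat-tailed distributions); the function name and string labels are illustrative assumptions, not notation from the paper.

```python
def taleb_quadrant(payoff: str, tails: str) -> int:
    """Map a decision problem to one of Taleb's four quadrants.

    payoff: "simple" (binary-like outcomes) or "complex" (open-ended exposure)
    tails:  "thin" (Mediocristan) or "fat" (Extremistan)

    Quadrant IV -- complex payoffs under fat-tailed uncertainty -- is where
    standard statistical risk estimates are least reliable.
    """
    if payoff not in ("simple", "complex") or tails not in ("thin", "fat"):
        raise ValueError("unknown payoff/tail class")
    if tails == "thin":
        return 1 if payoff == "simple" else 2
    return 3 if payoff == "simple" else 4

# E.g. a pass/fail quality inspection under well-behaved variation vs.
# an open-ended financial exposure with heavy-tailed losses:
print(taleb_quadrant("simple", "thin"), taleb_quadrant("complex", "fat"))
```

Placing a quality or risk problem in a quadrant, as the three case studies do, then indicates how much trust to put in conventional statistical control methods.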

    Biomedical Term Extraction: NLP Techniques in Computational Medicine

    Get PDF
    Artificial Intelligence (AI), and its branch Natural Language Processing (NLP) in particular, are main contributors to recent advances in classifying documentation and extracting information from assorted fields, Medicine being one that has gathered a lot of attention due to the amount of information generated in public professional journals and other means of communication within the medical profession. The typical information extraction task from technical texts is performed via an automatic term recognition extractor. Automatic Term Recognition (ATR) from technical texts is applied for the identification of key concepts for information retrieval and, secondarily, for machine translation. Term recognition depends on the subject domain and the lexical patterns of a given language, in our case, Spanish, Arabic and Japanese. In this article, we present the methods and techniques for creating a biomedical corpus of validated terms, with several tools for optimal exploitation of the information contained in the corpus. This paper also shows how these techniques and tools have been used in a prototype.
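The candidate-generation step of an ATR pipeline can be sketched with a stop-word-filtered n-gram counter. This is a deliberately crude stand-in: the stop list, the example sentences, and the frequency-only ranking are assumptions for illustration, whereas real ATR systems add part-of-speech patterns and termhood statistics such as C-value.

```python
import re
from collections import Counter

# Minimal illustrative stop list (real systems use language-specific lists).
STOP = {"the", "of", "and", "in", "a", "is", "for", "to", "with", "was"}

def candidate_terms(text: str, n: int = 2) -> Counter:
    """Crude term-candidate extraction: count stop-word-free n-grams.

    Only the candidate-generation step is shown; validation against a
    curated corpus of terms would follow in a full pipeline.
    """
    tokens = re.findall(r"[a-záéíóúñ]+", text.lower())
    grams = Counter()
    for i in range(len(tokens) - n + 1):
        gram = tokens[i:i + n]
        if not any(t in STOP for t in gram):
            grams[" ".join(gram)] += 1
    return grams

text = ("Myocardial infarction was confirmed. Acute myocardial infarction "
        "requires reperfusion. Myocardial infarction risk increases with age.")
print(candidate_terms(text).most_common(1))  # [('myocardial infarction', 3)]
```

Recurring multi-word units such as "myocardial infarction" surface at the top, which is the signal a downstream termhood measure would then refine.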

    Understanding task inter-dependence and co-ordination efforts in multi-sourcing: the suppliers' perspective

    Get PDF
    The last decade has witnessed a significant growth in the outsourcing of information technologies and business processes. A particular trend within the outsourcing industry is the shift from the client firm contracting a single supplier to utilizing multiple suppliers, which is also known as multi-sourcing. Multi-sourcing may offer numerous advantages to client firms; however, it might present some challenges to suppliers. In particular, multi-sourcing could create coordination challenges, as there are inter-dependencies between the tasks outsourced to numerous suppliers. While the current outsourcing literature acknowledges the existence of inter-dependencies, little is known about the efforts required for coordinating the work between suppliers and how these coordination efforts are made to manage task inter-dependence. Three case studies at Pactera (case one) and TCS (cases two and three) serve as the empirical base to investigate the inter-dependence between outsourced tasks and suppliers' coordination efforts. This research offers theoretical contributions to both coordination studies and the outsourcing body of knowledge.

    North Pacific Marine Science Organization (PICES): Annual Report, Seventeenth Meeting, Dalian, China, October 24 - November 2, 2008

    Get PDF
    Report of Opening Session. Report of Governing Council. Report of the Finance and Administration Committee. Reports of Science Board and Committees. Report of the Climate Change and Carrying Capacity Scientific Program. Reports of Expert Groups. Session Summaries. Participants. PICES Members. PICES Acronyms