509 research outputs found

    Ontology Driven Web Extraction from Semi-structured and Unstructured Data for B2B Market Analysis

    The Market Blended Insight project has the objective of improving UK business-to-business marketing performance using semantic web technologies. In this project, we are implementing an ontology-driven web extraction and translation framework to supplement our backend triple store of UK companies, people and geographical information. The framework deals with both semi-structured data and unstructured text on the web, annotating and then translating the extracted data according to the backend schema.
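
    The annotate-and-translate step described above can be pictured as mapping extracted records onto a backend schema as RDF triples. Below is a minimal sketch using Python's rdflib; the namespace, class and property names are placeholders we invented, not the project's actual schema.

        # Hypothetical illustration: turning one extracted company record into
        # triples for a backend triple store. The MBI namespace and property
        # names are assumptions made for this sketch.
        from rdflib import RDF, Graph, Literal, Namespace

        MBI = Namespace("http://example.org/mbi/")  # placeholder namespace

        def annotate_company(graph, company_id, name, postcode):
            """Translate an extracted record into triples matching the schema."""
            company = MBI[company_id]
            graph.add((company, RDF.type, MBI.Company))
            graph.add((company, MBI.name, Literal(name)))
            graph.add((company, MBI.postcode, Literal(postcode)))

        g = Graph()
        annotate_company(g, "acme-ltd", "ACME Ltd", "S1 4DP")
        print(g.serialize(format="turtle"))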

    Process assessment for the extended enterprise during early product development using novel computational techniques

    Manufacturing practices have evolved over the last quarter of a century in the light of changes to manufacturing technology and demand. To sustain this growth, companies are increasingly focused on better design and quicker time to market, to stay one step ahead of the competition. Expanding technology capabilities have included microcomputers and telecommunications; in particular, the Internet has allowed businesses to trade with an extended customer base, resulting in greater demand and perpetuating the cycle. In parallel, businesses are looking increasingly far and wide for suitable suppliers. This work identifies a need in the market for an Internet-based supplier selection function for use during early product development. It differs significantly from other process selection methods in its use of the Internet to link companies, with advantages for product development in the scope of the opportunities, the diversity of possible manufacturing operations and the rapid assessment of processes. The system can be broken down into two main functions: Process Selection (PS) and Factory Selection (FS). The PS method presented enables many processes to be modelled, in multiple organisations, for a single product. The Internet is used to gain access to supplier facilities by adopting the same principles as online banking or shopping for data input and access. The results of these assessments are retained by the system for later analysis. The FS method uses this data to model and compare supplier attributes, allowing users to manipulate the data to fit their requirements. Testing of the system has proved encouraging for many operations, including Injection Moulding and CNC Machining. It can be concluded that identifying manufacturing operations outside companies' normal scope will create further opportunities for supplier integration.
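
    As a concrete illustration of the two functions described above, the sketch below ranks candidate factories for a required process; the attribute names and scoring rule are our assumptions, not the thesis's actual PS/FS models.

        # Toy Factory Selection step: filter suppliers that offer the required
        # process within an acceptable lead time, then rank them on a stored
        # capability score. All fields are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class Factory:
            name: str
            processes: set           # e.g. {"injection_moulding", "cnc_machining"}
            lead_time_days: int
            capability_score: float  # 0..1, from earlier process assessments

        def select_factories(required_process, factories, max_lead_time):
            candidates = [f for f in factories
                          if required_process in f.processes
                          and f.lead_time_days <= max_lead_time]
            return sorted(candidates, key=lambda f: f.capability_score, reverse=True)

        suppliers = [
            Factory("Alpha Mouldings", {"injection_moulding"}, 10, 0.8),
            Factory("Beta Precision", {"cnc_machining", "injection_moulding"}, 21, 0.9),
        ]
        for f in select_factories("injection_moulding", suppliers, max_lead_time=14):
            print(f.name)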

    A collaborative citizen science platform for real-time volunteer computing and games

    Volunteer computing (VC) or distributed computing projects are common in the citizen cyberscience (CCS) community and present extensive opportunities for scientists to make use of computing power donated by volunteers to undertake large-scale scientific computing tasks. Volunteer computing is generally a non-interactive process for those contributing computing resources to a project, whereas volunteer thinking (VT), or distributed thinking, allows volunteers to participate interactively in citizen cyberscience projects to solve human computation tasks. In this paper we describe the integration of three tools: the Virtual Atom Smasher (VAS) game developed by CERN; LiveQ, a job distribution middleware; and CitizenGrid, an online platform for hosting and providing computation to CCS projects. This integration demonstrates how volunteer computing and volunteer thinking can be combined to help address the scientific and educational goals of games like VAS. The paper introduces the three tools and provides details of the integration process, along with further potential usage scenarios for the resulting platform.
    Comment: 12 pages, 13 figures
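
    The volunteer-computing half of such an integration typically reduces to a client loop that fetches work units from the middleware and posts results back. The sketch below shows that pattern in Python; the endpoint and payload shapes are invented for illustration and are not LiveQ's or CitizenGrid's actual API.

        # Hypothetical volunteer client: poll a job-distribution middleware for
        # work, compute, and return the result. Endpoints are placeholders.
        import json
        import time
        import urllib.request

        MIDDLEWARE = "http://example.org/api"  # placeholder endpoint

        def fetch_job():
            with urllib.request.urlopen(f"{MIDDLEWARE}/job") as resp:
                return json.load(resp)  # assume None when no work is queued

        def volunteer_loop(compute):
            while True:
                job = fetch_job()
                if job is None:
                    time.sleep(30)  # back off while the queue is empty
                    continue
                result = compute(job["parameters"])
                body = json.dumps({"job_id": job["id"], "result": result}).encode()
                req = urllib.request.Request(
                    f"{MIDDLEWARE}/result", data=body,
                    headers={"Content-Type": "application/json"})
                urllib.request.urlopen(req)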

    Unlocking the potential of public sector information with Semantic Web technology

    Governments often hold very rich data, and while much of this information is published and available for re-use by others, it is often trapped by poor data structures, locked up in legacy data formats or fragmented across databases. One of the great benefits that Semantic Web (SW) technology offers is facilitating the large-scale integration and sharing of distributed data sources. At the heart of information policy in the UK, the Office of Public Sector Information (OPSI) is the part of the UK government charged with enabling the greater re-use of public sector information. This paper describes the actions, findings and lessons learnt from a pilot study involving several parts of government and the public sector. The aim was to show government how it can adopt SW technology for the dissemination, sharing and use of its data.
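
    The integration benefit described here is easiest to see in a query: once separately published datasets share URIs, a single SPARQL query can span both. The sketch below uses Python's rdflib with placeholder files and vocabulary; the pilot study used its own schemas.

        # Illustrative only: merge two public datasets and query across them.
        from rdflib import Graph

        g = Graph()
        g.parse("schools.ttl")    # placeholder files standing in for two
        g.parse("postcodes.ttl")  # separately published government datasets

        query = """
        PREFIX ex: <http://example.org/gov/>
        SELECT ?school ?region WHERE {
            ?school ex:locatedIn ?postcode .
            ?postcode ex:region ?region .
        }
        """
        for school, region in g.query(query):
            print(school, region)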

    Stochastic Workflow Scheduling with QoS Guarantees in Grid Computing Environments

    Grid computing infrastructures embody a cost-effective computing paradigm that virtualises heterogeneous system resources to meet the dynamic needs of critical business and scientific applications. These applications range from batch processes and long-running tasks to more real-time and even transactional applications. Grid schedulers aim to make efficient use of Grid resources in a cost-effective way while satisfying the Quality-of-Service requirements of the applications. Scheduling in such a large-scale, dynamic and distributed environment is a complex undertaking. In this paper, we propose an approach to Grid scheduling which abstracts over the details of individual applications and aims to provide a globally optimal schedule, while having the ability to dynamically adjust to varying workloads.
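
    To make the scheduling problem concrete, here is a deliberately simple greedy heuristic: tasks are taken in earliest-deadline order and placed on the next-free resource, flagging QoS violations. It is a sketch of the problem setting only, not the paper's stochastic scheduling method.

        # Greedy earliest-deadline-first placement onto heterogeneous resources.
        # tasks: (deadline, work) pairs; resources: processing speeds.
        import heapq

        def schedule(tasks, resources):
            free_at = [(0.0, i) for i in range(len(resources))]  # (free time, idx)
            heapq.heapify(free_at)
            assignment = {}
            for idx, (deadline, work) in sorted(enumerate(tasks),
                                                key=lambda t: t[1][0]):
                start, r = heapq.heappop(free_at)  # next resource to become free
                finish = start + work / resources[r]
                if finish > deadline:              # QoS requirement missed
                    print(f"task {idx} misses its deadline ({finish:.1f} > {deadline})")
                assignment[idx] = r
                heapq.heappush(free_at, (finish, r))
            return assignment

        print(schedule([(10, 40), (5, 10), (20, 80)], resources=[4.0, 2.0]))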

    A Semantic Approach to Automatic Program Improvement

    The programs that are easiest to write and understand are often not the most efficient. This thesis gives methods of converting programs of the former type into those of the latter type; this involves converting definitions of algorithms given as recursion equations using high-level primitives into lower-level flow chart programs. The main activities involved are recursion removal (cf. Strong), loop elimination, and the overwriting of shared structures. We have concentrated on the semantics, rather than the syntax, of the programs we are transforming, and we have used techniques developed in work on proving the correctness of programs. The transformations are done in a hierarchical manner and can be regarded as compiling a program defined in a structured manner (Dijkstra) to produce an efficient low-level program that simulates it. We describe the implementation of a system that allows the user to specify algorithms in a simple set language and converts them to flow chart programs in either a bitstring or a list processing language. Both of these lower-level languages allow the sharing of structures. The principles are applicable to other domains, and we describe how our system can be applied more generally.
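
    Recursion removal, the first of the transformations listed, can be shown on a one-line example: a definition written as a recursion equation is converted into an equivalent loop with the same semantics. The example below is ours, in Python, not taken from the thesis.

        def fact_rec(n):
            # The high-level definition: easy to write and to reason about.
            return 1 if n == 0 else n * fact_rec(n - 1)

        def fact_iter(n):
            # The transformed low-level program: same semantics, no recursion,
            # so no call stack is needed.
            result = 1
            while n > 0:
                result, n = result * n, n - 1
            return result

        assert fact_rec(6) == fact_iter(6) == 720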

    Simplifying the Development, Use and Sustainability of HPC Software

    Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of specialist domain knowledge and the software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous, and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for HPC computations, scientists require more support from computer scientists and resource providers to develop efficient code and make optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. The use of such frameworks has implications for the sustainability of scientific software. In this paper we set out our developing understanding of these challenges based on work carried out in the libhpc project.
    Comment: 4-page position paper, submitted to the WSSSPE13 workshop
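
    A "richer means of job specification" might, for example, pair an application's resource requirements with placement constraints that a framework can match against heterogeneous platforms. The sketch below is speculative: the field names are invented and libhpc's real format may differ.

        # Hypothetical job specification plus a trivial platform matcher.
        job_spec = {
            "application": "fluid-solver",
            "inputs": ["mesh.vtk", "params.yaml"],
            "requirements": {"cores": 64, "memory_gb": 128},
            "placement": {"allow_cloud_iaas": True, "max_cost_per_hour": 5.0},
        }

        def pick_platform(spec, platforms):
            """Cheapest platform satisfying the spec's requirements."""
            ok = [p for p in platforms
                  if p["cores"] >= spec["requirements"]["cores"]
                  and p["memory_gb"] >= spec["requirements"]["memory_gb"]
                  and p["cost_per_hour"] <= spec["placement"]["max_cost_per_hour"]]
            return min(ok, key=lambda p: p["cost_per_hour"], default=None)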

    Longitudinal deprivation trajectories and risk of cardiovascular disease in New Zealand

    We used longitudinal information on area deprivation status to explore the relationship between residential-deprivation mobility and cardiovascular disease (CVD). We analysed data from 2,418,397 individuals who were enrolled in any Primary Health Organisation within New Zealand (NZ) during at least 1 of 34 calendar quarters between 1st January 2006 and 30th June 2014, were aged between 30 and 84 years (inclusive) at the start of the study period, had no prior history of CVD, and had recorded address information. Our findings, which include a novel trajectory analysis, suggest that movers are healthier than stayers. The deprivation characteristics of the move have a larger impact on the relative risk of CVD for younger movers than for older movers; for older movers, any kind of move is associated with a decreased risk of CVD.
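
    The headline comparison, movers versus stayers, rests on relative risk. As a worked example of the calculation only, using invented counts rather than the study's data:

        def relative_risk(events_a, n_a, events_b, n_b):
            """Risk in group A divided by risk in group B."""
            return (events_a / n_a) / (events_b / n_b)

        # Hypothetical counts purely to show the arithmetic.
        rr = relative_risk(events_a=1200, n_a=100_000,   # movers
                           events_b=1500, n_b=100_000)   # stayers
        print(f"RR = {rr:.2f}")  # 0.80: movers at lower risk in this toy example
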
    • …