
    Report of the Stanford Linked Data Workshop

    The Stanford University Libraries and Academic Information Resources (SULAIR), with the Council on Library and Information Resources (CLIR), conducted a week-long workshop on the prospects for a large-scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of and navigation among the rapidly, chaotically expanding array of academic information resources. As preparation for the workshop, CLIR sponsored a survey by Jerry Persons, Chief Information Architect emeritus of SULAIR, that was originally published for workshop participants as background to the workshop and is now publicly available. The original intention of the workshop was to devise a plan for such a prototype. However, such was the diversity of knowledge, experience, and views of the potential of Linked Data approaches that the workshop participants turned to two more fundamental goals: building common understanding and enthusiasm on the one hand, and identifying opportunities and challenges to be confronted in the preparation of the intended prototype and its operation on the other. In pursuit of those objectives, the workshop participants produced: 1. a value statement addressing the question of why a Linked Data approach is worth prototyping; 2. a manifesto for Linked Libraries (and Museums and Archives and …); 3. an outline of the phases in a life cycle of Linked Data approaches; 4. a prioritized list of known issues in generating, harvesting & using Linked Data; 5. a workflow with notes for converting library bibliographic records and other academic metadata to URIs; 6. examples of potential “killer apps” using Linked Data; and 7. a list of next steps and potential projects. This report includes a summary of the workshop agenda, a chart showing the use of Linked Data in cultural heritage venues, and short biographies and statements from each of the participants.
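
    As a toy illustration of item 5 (converting bibliographic metadata to URIs), not the workshop's actual workflow, the following Python sketch uses the rdflib library to mint a URI for a record and attach Dublin Core statements to it; the base URI and field values are hypothetical.

        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DC

        g = Graph()
        g.bind("dc", DC)

        # Hypothetical URI minted for one bibliographic record.
        record = URIRef("http://example.org/bib/12345")
        g.add((record, DC.title,
               Literal("Report of the Stanford Linked Data Workshop")))
        g.add((record, DC.publisher,
               Literal("Council on Library and Information Resources")))

        # Serialize as Turtle so other institutions can link to the same URI.
        print(g.serialize(format="turtle"))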

    AdSplit: Separating smartphone advertising from applications

    A wide variety of smartphone applications today rely on third-party advertising services, which provide libraries that are linked into the hosting application. This situation is undesirable for both the application author and the advertiser. Advertising libraries require additional permissions, resulting in additional permission requests to users. Likewise, a malicious application could simulate the behavior of the advertising library, forging the user's interaction and effectively stealing money from the advertiser. This paper describes AdSplit, where we extended Android to allow an application and its advertising to run as separate processes, under separate user-ids, eliminating the need for applications to request permissions on behalf of their advertising libraries. We also leverage mechanisms from Quire to allow the remote server to validate the authenticity of client-side behavior. In this paper, we quantify the degree of permission bloat caused by advertising, with a study of thousands of downloaded apps. AdSplit automatically recompiles apps to extract their ad services, and we measure minimal runtime overhead. We also observe that most ad libraries just embed an HTML widget within them, and we describe how AdSplit can be designed with this in mind to avoid any need for ads to have native code.
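
    The permission-bloat measurement described above can be approximated with a simple manifest scan. The sketch below is not the authors' tooling: it counts the permissions declared in a decoded AndroidManifest.xml (e.g. one extracted with apktool) and checks how many fall into a hypothetical, purely illustrative set of permissions needed only by an embedded ad library.

        import xml.etree.ElementTree as ET

        ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

        # Hypothetical example set: permissions assumed to exist only on
        # behalf of the ad library (illustrative, not from the paper).
        AD_ONLY = {"android.permission.ACCESS_FINE_LOCATION",
                   "android.permission.READ_PHONE_STATE"}

        def permission_bloat(manifest_path):
            """Return (declared permission count, count serving only ads)."""
            root = ET.parse(manifest_path).getroot()
            declared = {e.get(ANDROID_NS + "name")
                        for e in root.iter("uses-permission")}
            return len(declared), len(declared & AD_ONLY)

        total, bloat = permission_bloat("AndroidManifest.xml")
        print(f"{bloat} of {total} permissions exist only for advertising")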

    Web service control of component-based agile manufacturing systems

    Current global business competition has resulted in significant challenges for manufacturing and production sectors focused on shorter product lifecycles, more diverse and customized products, as well as cost pressures from competitors and customers. To remain competitive, manufacturers, particularly in the automotive industry, require the next generation of manufacturing paradigms supporting flexible and reconfigurable production systems that allow quick system changeovers for various types of products. In addition, closer integration of shop floor and business systems is required, as indicated by the research efforts investigating "Agile and Collaborative Manufacturing Systems" in supporting the production unit throughout the manufacturing lifecycle. The integration of a business enterprise with its shop-floor and lifecycle supply partners is currently only achieved through complex proprietary solutions due to differences in technology, particularly between automation and business systems. The situation is further complicated by the diverse types of automation control devices employed. Recently, the emerging technology of Service-Oriented Architectures (SOAs) and Web Services (WS) has been demonstrated and proven successful in linking business applications. The adoption of this Web Services approach at the automation level, which would enable a seamless integration of the business enterprise and the shop-floor system, is an active research topic within the automotive domain. If successful, reconfigurable automation systems formed by a network of collaborative, autonomous and open control platforms in a distributed, loosely coupled manufacturing environment can be realized through a unifying platform of WS interfaces for device communication. The adoption of SOA and Web Services on embedded automation devices can be achieved by employing Device Profile for Web Services (DPWS) protocols, which encapsulate device control functionality as provided services (e.g. device I/O operation, device state notification, device discovery) and business application interfaces into the physical control components of machining automation. This novel approach supports the possibility of integrating pervasive enterprise applications through unifying Web Services interfaces and neutral Simple Object Access Protocol (SOAP) message communication between control systems and business applications over standard Ethernet Local Area Networks (LANs). In addition, the reconfigurability of the automation system is enhanced via the utilisation of Web Services throughout the automated control, build, installation, test, maintenance and reuse phases of the system lifecycle, supported by the device self-discovery provided by the DPWS protocol...cont'd
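
    As a rough illustration of the SOAP-over-HTTP message exchange described above (a sketch, not the project's actual DPWS stack), the following Python snippet posts a SOAP 1.2 envelope to a device endpoint using only the standard library; the endpoint address and the GetDeviceState operation are hypothetical.

        import urllib.request

        # Hypothetical DPWS-style device endpoint on the shop-floor LAN.
        ENDPOINT = "http://192.168.0.10:8080/DeviceService"

        # Minimal SOAP 1.2 envelope; operation name is illustrative only.
        envelope = """<?xml version="1.0" encoding="utf-8"?>
        <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
          <s:Body>
            <GetDeviceState xmlns="http://example.org/automation"/>
          </s:Body>
        </s:Envelope>"""

        req = urllib.request.Request(
            ENDPOINT,
            data=envelope.encode("utf-8"),
            headers={"Content-Type": "application/soap+xml; charset=utf-8"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.read().decode("utf-8"))  # SOAP response with device state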

    Transmitting sensor data from a physical twin to a digital twin

    A digital twin is a digital counterpart of a physical thing such as a machine. The term digital twin was first introduced in 2010. Thereafter, it has received an extensive amount of interest because of the numerous benefits it is expected to offer throughout the product life cycle. Currently, the concept is being developed by the world's largest companies, such as Siemens. The purpose of this thesis is to examine which application layer protocols and communication technologies are the most suitable for sensor data transmission from a physical twin to a digital twin. In addition, a platform enabling this data transmission is developed. As the concept of a digital twin is relatively new, a comprehensive literature review of the definition of a digital twin in scientific literature is presented. It has been found that the vision of a digital twin has evolved from the concepts of ‘intelligent products’ presented at the beginning of the 2000s. The most widely adopted definition states that a digital twin accurately mirrors the current state of its corresponding twin. However, the definition of a digital twin is not yet standardized and varies in different fields. Based on the literature review, the communication needs of a digital twin are derived. Thereafter, the suitability of HTTP, MQTT, CoAP, XMPP, AMQP, DDS, and OPC UA for sensor data transmission is examined through a literature review. In addition, a review of 4G, 5G, NB-IoT, LoRa, Sigfox, Bluetooth, Wi-Fi, Z-Wave, ZigBee, and WirelessHART is presented. A platform for the management of the sensors is developed. The platform narrows the gap between the concept and realization of a digital twin by enabling sensor data transmission. The platform allows easy addition of sensors to a physical twin and provides an interface for their remote configuration over the Internet. It supports multiple sensor types and application protocols and offers both a web user interface and a REST API.
    A digital twin is a digital counterpart of a physical product that contains information about its current state. The concept of the digital twin was first introduced in 2010. Since then, the digital twin has received a great deal of attention, and the world's largest companies, such as Siemens, have started to develop it. The purpose of this work is to study which application layer protocols and wireless networks are best suited for transmitting the data collected by sensors from a physical twin to a digital twin. In addition, the work presents a platform that enables this data transfer. A broad literature review of the digital twin is presented, which lays the foundation for the later parts of the work. The concept of the digital twin is based on the ideas of ‘intelligent products’ introduced in the early 2000s. According to the most common definition in use, a digital twin reflects the current state of its physical counterpart. However, the definition varies between fields and is not yet established in the scientific literature. The communication needs of the digital twin are derived with the help of the literature review. The suitability of the following application layer protocols for transmitting sensor data is then assessed by means of a literature review: HTTP, MQTT, CoAP, XMPP, AMQP, DDS and OPC UA. The suitability of the following wireless networks for the data transfer is also studied: 4G, 5G, NB-IoT, LoRaWAN, Sigfox, Bluetooth, Wi-Fi, Z-Wave, ZigBee and WirelessHART. As part of the work, a software platform was also developed that enables the sensors to be managed remotely over the Internet. The platform is a small step towards a practical implementation of the digital twin, as it enables data to be collected from the physical counterpart. With it, adding sensors to the physical twin is easy, and it supports both multiple sensor types and application layer protocols. The platform provides a REST API and includes a web user interface.
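
    MQTT is one of the protocols the thesis evaluates for moving sensor readings from the physical twin to the digital twin. Below is a minimal publisher sketch in Python using the paho-mqtt client; the broker host, topic layout and sensor stub are hypothetical, not taken from the thesis platform.

        import json
        import time

        import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt

        BROKER = "broker.example.org"          # hypothetical broker host
        TOPIC = "twins/machine42/temperature"  # hypothetical topic layout

        def read_temperature():
            # Stub standing in for a real sensor driver.
            return 21.5

        # paho-mqtt 1.x constructor; 2.x also takes a CallbackAPIVersion.
        client = mqtt.Client()
        client.connect(BROKER, 1883)
        client.loop_start()  # background thread handles network traffic

        while True:
            reading = {"sensor": "temp-1",
                       "value": read_temperature(),
                       "ts": time.time()}
            client.publish(TOPIC, json.dumps(reading), qos=1)  # at-least-once
            time.sleep(1.0)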

    A model-based approach to multi-domain monitoring data aggregation

    The essential propellant for any closed-loop management mechanism is data related to the managed entity. While this is true in general, it becomes even more so when dealing with advanced closed-loop systems like the ones supported by Artificial Intelligence (AI), as they require a trustworthy, up-to-date and steady flow of state data to be applicable. Modern network infrastructures provide a vast amount of disparate data sources, especially in the multi-domain scenarios considered by the ETSI Industry Specification Group (ISG) Zero Touch Network and Service Management (ZSM) framework, and proper mechanisms for data aggregation, pre-processing and normalization are required to make AI-enabled closed-loop management possible. So far, solutions proposed for these data aggregation tasks have been specific to concrete data sources and consumers, following ad-hoc approaches that are unsuitable for addressing the vast heterogeneity of data sources and potential data consumers. This paper presents a model-based approach to a data aggregator framework, relying on standardized data models and telemetry protocols, and integrated with an open-source network orchestration stack to support their incorporation within network service lifecycles. The research leading to these results received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements no. 871808 (INSPIRE-5Gplus) and no. 856709 (5GROWTH). The paper reflects only the authors' views. The Commission is not responsible for any use that may be made of the information it contains.
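
    A minimal sketch of the per-source adapter idea behind such a data aggregator is shown below; the common model fields and the native record formats are invented for illustration and are not taken from the ZSM specifications or the paper's framework.

        from typing import Any, Callable, Dict

        # Hypothetical common model: every metric is reduced to these fields.
        def to_common(source: str, name: str, value: float, unit: str) -> Dict[str, Any]:
            return {"source": source, "metric": name, "value": value, "unit": unit}

        # Per-source adapters normalize each native record format.
        ADAPTERS: Dict[str, Callable[[dict], Dict[str, Any]]] = {
            "netconf": lambda r: to_common("netconf", r["path"],
                                           float(r["val"]), r.get("unit", "")),
            "snmp":    lambda r: to_common("snmp", r["oid"],
                                           float(r["value"]), ""),
        }

        def aggregate(records):
            """Fan heterogeneous records into one normalized stream."""
            for source, record in records:
                yield ADAPTERS[source](record)

        sample = [("netconf", {"path": "/interfaces/eth0/in-octets", "val": "1024"}),
                  ("snmp", {"oid": "1.3.6.1.2.1.2.2.1.10.1", "value": 2048})]
        for row in aggregate(sample):
            print(row)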

    Actor-based Concurrency in Newspeak 4

    The actor model of computation was invented by Carl Hewitt in the 1970s. It has recently seen a resurgence of mainstream use as a potential solution to the latency and concurrency problems that are quickly rising as the dominant challenges facing the software industry. In this project I explored the history of the actor model and a practical implementation of actor-based concurrency tightly integrated with non-blocking futures in the E programming language developed by Mark Miller. I implemented an actor-based concurrency framework for Newspeak that closely follows the E implementation and includes E-style futures and deep integration into the programming language via new syntax for asynchronous message passing.
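
    For readers unfamiliar with the pattern, the sketch below shows actor-style mailboxes combined with non-blocking futures, in Python's asyncio rather than Newspeak or E; the class and method names are invented for illustration.

        import asyncio

        class Actor:
            # One task drains the mailbox, so actor state is never touched
            # concurrently; send() returns a future for the eventual reply,
            # mirroring E-style non-blocking futures.
            def __init__(self):
                self._mailbox = asyncio.Queue()
                self._task = asyncio.create_task(self._run())

            async def _run(self):
                while True:
                    message, reply = await self._mailbox.get()
                    reply.set_result(await self.receive(message))

            def send(self, message):
                reply = asyncio.get_running_loop().create_future()
                self._mailbox.put_nowait((message, reply))
                return reply  # await only when the answer is actually needed

            async def receive(self, message):
                raise NotImplementedError

        class Counter(Actor):
            def __init__(self):
                super().__init__()
                self.count = 0

            async def receive(self, message):
                self.count += message
                return self.count

        async def main():
            counter = Counter()
            print(await counter.send(1))  # -> 1
            print(await counter.send(2))  # -> 3

        asyncio.run(main())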

    Developing front-end Web 2.0 technologies to access services, content and things in the future Internet

    The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on since global user-service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, content and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.

    INDUSTRIAL DEVICE INTEGRATION AND VIRTUALIZATION FOR SMART FACTORIES

    Given constant industry growth and modernization, several technologies have been introduced on the shop floor, in particular regarding industrial devices. Each device brand and model usually requires different interfaces and communication protocols, a technological diversity which renders the automatic interconnection with production management software extremely challenging. However, by combining key technologies such as machine monitoring, digital twins and virtual commissioning, along with a complete communication protocol like OPC UA, it is possible to contribute towards industrial device integration in a Smart Factory environment. To achieve this goal, several methodologies and a set of tools were defined. This set of tools, as well as facilitating the integration tasks, should also be part of a virtual engineering environment, sharing the same virtual model, the digital twin, through the complete lifecycle of the industrial device, namely the project, simulation, implementation, execution/monitoring/supervision and, eventually, decommissioning phases. A key result of this work is the development of a set of virtual engineering tools and methodologies based on OPC UA communication, with the digital twin implemented using RobotStudio, in order to accomplish complete lifecycle support of an industrial device, from the project and simulation phases to monitoring and supervision, suitable for integration in Industry 4.0 factories. To evaluate the operation of the developed set of tools, experiments were performed for a test scenario with different devices. Another relevant result is related to the integration of a specific industrial device: CNC machining equipment. Given the variety of monitoring systems and communication protocols, an approach is followed where various solutions available on the market are combined into a single system. These kinds of all-in-one solutions would give production managers access to the information necessary for continuous monitoring and improvement of the entire production process.
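
    As an illustration of the kind of OPC UA read used to keep a digital twin synchronized with a device (a sketch, not the tools developed in this work), the snippet below uses the FreeOpcUa python-opcua client; the server URL and node identifier are hypothetical.

        from opcua import Client  # third-party: pip install opcua

        # Hypothetical server address and node id; a real digital twin
        # would expose its own address space.
        URL = "opc.tcp://192.168.0.20:4840/freeopcua/server/"
        NODE_ID = "ns=2;s=Robot1.JointAngle1"

        client = Client(URL)
        try:
            client.connect()
            node = client.get_node(NODE_ID)
            print("current value:", node.get_value())  # one monitored variable
        finally:
            client.disconnect()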

    Planning for the Lifecycle Management and Long-Term Preservation of Research Data: A Federated Approach

    Outcomes of the grant are archived here. The “data deluge” is a recent but increasingly well-understood phenomenon of scientific and social inquiry. Large-scale research instruments extend our observational power by many orders of magnitude but at the same time generate massive amounts of data. Researchers work feverishly to document and preserve changing or disappearing habitats, cultures, languages, and artifacts, resulting in volumes of media in various formats. New software tools mine a growing universe of historical and modern texts and connect the dots in our semantic environment. Libraries, archives, and museums undertake digitization programs, creating broad access to unique cultural heritage resources for research. Global-scale research collaborations with hundreds or thousands of participants drive the creation of massive amounts of data, most of which cannot be recreated if lost. The University of Kansas (KU) Libraries, in collaboration with two partners, the Greater Western Library Alliance (GWLA) and the Great Plains Network (GPN), received an IMLS National Leadership Grant designed to leverage collective strengths and create a proposal for a scalable and federated approach to the lifecycle management of research data based on the needs of GPN and GWLA member institutions. Institute for Museum and Library Services LG-51-12-0695-1.

    The LIFE2 final project report

    Executive summary: The first phase of LIFE (Lifecycle Information For E-Literature) made a major contribution to understanding the long-term costs of digital preservation; an essential step in helping institutions plan for the future. The LIFE work models the digital lifecycle and calculates the costs of preserving digital information for future years. Organisations can apply this process in order to understand costs and plan effectively for the preservation of their digital collections. The second phase of the LIFE Project, LIFE2, has refined the LIFE Model, adding three new exemplar Case Studies to further build upon LIFE1. LIFE2 is an 18-month JISC-funded project between UCL (University College London) and The British Library (BL), supported by the LIBER Access and Preservation Divisions. LIFE2 began in March 2007 and was completed in August 2008. The LIFE approach has been validated by a full independent economic review and has successfully produced an updated lifecycle costing model (LIFE Model v2) and digital preservation costing model (GPM v1.1). The LIFE Model has been tested with three further Case Studies including institutional repositories (SHERPA-LEAP), digital preservation services (SHERPA DP) and a comparison of analogue and digital collections (British Library Newspapers). These Case Studies were useful for scenario building and have fed back into both the LIFE Model and the LIFE Methodology. The experiences of implementing the Case Studies indicated that enhancements made to the LIFE Methodology, Model and associated tools have simplified the costing process. Mapping a specific lifecycle to the LIFE Model is not always a straightforward process; the revised and more detailed Model has reduced ambiguity. The costing templates, which were refined throughout the process of developing the Case Studies, ensure clear articulation of both working and cost figures, and facilitate comparative analysis between different lifecycles. The LIFE work has been successfully disseminated throughout the digital preservation and HE communities. Early adopters of the work include the Royal Danish Library, State Archives and the State and University Library, Denmark, as well as the LIFE2 Project partners. Furthermore, interest in the LIFE work has not been limited to these sectors, with interest in LIFE expressed by local government, records offices, and private industry. LIFE has also provided input into the LC-JISC Blue Ribbon Task Force on the Economic Sustainability of Digital Preservation. Moving forward, our ability to cost the digital preservation lifecycle will require further investment in costing tools and models. Developments in estimative models will be needed to support planning activities, both at a collection management level and at a later preservation planning level once a collection has been acquired. In order to support these developments, a greater volume of raw cost data will be required to inform and test new cost models. This volume of data cannot be supported via the Case Study approach, and the LIFE team would suggest that a software tool would provide the volume of costing data necessary to provide a truly accurate predictive model.
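
    The report's point about estimative costing models can be made concrete with a toy calculation: summing per-stage annual costs over a planning horizon. The stage names below loosely follow the LIFE Model; the figures and the inflation treatment are invented for illustration and are not the project's published model.

        # Toy costing sketch inspired by the LIFE approach: a one-off
        # acquisition cost plus recurring per-stage annual costs.
        ANNUAL_COSTS = {            # GBP per year for a hypothetical collection
            "acquisition": 1200.0,  # one-off, counted in year 0 only
            "ingest": 800.0,
            "metadata": 600.0,
            "bit_storage": 400.0,
            "preservation": 900.0,
            "access": 300.0,
        }

        def lifecycle_cost(years, inflation=0.03):
            """Total cost of keeping the collection alive for `years` years."""
            total = ANNUAL_COSTS["acquisition"]  # year-0 one-off
            recurring = sum(v for k, v in ANNUAL_COSTS.items()
                            if k != "acquisition")
            for y in range(years):
                total += recurring * (1 + inflation) ** y
            return total

        print(f"10-year lifecycle cost: £{lifecycle_cost(10):,.2f}")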