
    Metadata as a service (MetaaS) model for cloud computing

    Cloud computing has become one of the most attractive fields in industry and research. Metadata as a Service (MetaaS) is an emerging technique that can help cloud users and cloud service providers (CSPs) according to their needs. The growing number of data services in cloud computing, and the need to search and acquire data from them quickly, has led researchers to consider a new technique. The MetaaS model serves as a backbone for providing and searching data storage in cloud computing. It consists of three main layers: the Metadata component, cloud users, and CSPs. The Metadata component in turn consists of six main components: Metadata Entity (ME), Metadata File Information (MFI), Metadata Catalog Service (MCS), Metadata Management Engine (MME), Metadata Capturing (MC), and Metadata Analysis (MA). In this paper, an approach for searching, storing, accessing, retrieving, and capturing data from Cloud Data Storage (CDS) based on the MetaaS model is presented. Taking the production of a CDS service as an example, the paper gives a formal analysis of the running system and compares it with related work. The results show that the model is a good reference for the construction of cloud computing applications and services according to cloud service functionalities and MetaaS components.
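
    The paper presents the model at an architectural level only; as a purely illustrative sketch, the Python below shows how two of the named components, the Metadata Entity (ME) and the Metadata Catalog Service (MCS), might interact. All class names, fields, and sample values are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of two MetaaS components; nothing here is from the paper.
from dataclasses import dataclass, field


@dataclass
class MetadataEntity:
    """ME: describes a single data object held in Cloud Data Storage."""
    entity_id: str
    owner: str
    keywords: list[str] = field(default_factory=list)


class MetadataCatalogService:
    """MCS: indexes Metadata Entities so cloud users can search the CDS."""

    def __init__(self) -> None:
        self._index: dict[str, MetadataEntity] = {}

    def register(self, entity: MetadataEntity) -> None:
        # Metadata Capturing (MC) would call this when new data lands in storage.
        self._index[entity.entity_id] = entity

    def search(self, keyword: str) -> list[MetadataEntity]:
        # The Metadata Management Engine (MME) would route user queries here.
        return [e for e in self._index.values() if keyword in e.keywords]


catalog = MetadataCatalogService()
catalog.register(MetadataEntity("obj-1", "user-a", ["sales", "2021"]))
print([e.entity_id for e in catalog.search("sales")])  # ['obj-1']
```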

    Mobilizing Community to Transition to the Next Generation of Metadata

    This article highlights community and stakeholder mobilization initiatives in the library and heritage sectors that help in the transition to the next generation of metadata. We draw from the Next Generation of Metadata round table discussions organized by OCLC Research in March 2021. In these discussions, we saw next generation metadata mobilization taking place along two trajectories: (1) transforming and publishing institutionally sourced metadata and (2) improving metadata already in the supply chain. The article provides context and scope from the round table conversations and highlights national initiatives taking place in both mobilization areas. It then discusses the challenge of managing at multiple scales, as efforts at local, national, and global scales gear up to connect with each other.

    The Transatlantic Archaeology Gateway: Bridging the Digital Ocean


    The role of the Wikidata librarian in a renewed Bibliographical Universe: "next generation metadata", next generation librarians

    Starting from a brief analysis of the OCLC Research report "Next generation metadata", this contribution reflects on the metadata librarian and the new centrality of this role in LIS. Among the various forms this professional figure can take, one of the most interesting is the Wikidata librarian, for which a first definition is outlined. The article reviews international experiences of using Wikidata as a tool for new working methods, already consolidated in many library institutions, and as an environment particularly suited to experimentation thanks to its being an open, free, collaborative system that is easy to understand and use. While information professionals and a large part of the academic and scientific community have understood the potential of Wikidata, some weaknesses of this instrument have also been highlighted; these must be corrected and recalibrated in the name of universally accessible and reusable knowledge, also taking into account the requirements of the Semantic Web Manifesto by the Cataloguing and Indexing Study Group of the Italian Library Association.
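
    The contribution is descriptive rather than technical, but a small example may make the Wikidata librarian's toolkit concrete. The sketch below queries Wikidata's public SPARQL endpoint (https://query.wikidata.org/sparql) for works by Douglas Adams (Q42, the community's canonical example item); the query and the script itself are illustrative, not drawn from the article.

```python
# Illustrative Wikidata query: list a few works whose author (P50) is Q42.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?work ?workLabel WHERE {
  ?work wdt:P50 wd:Q42 .                                  # P50 = author
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "metadata-example/0.1"},  # WDQS asks for a UA string
)
for row in response.json()["results"]["bindings"]:
    print(row["work"]["value"], "-", row["workLabel"]["value"])
```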

    Use of ontologies for metadata records analysis in big data

    Big Data refers to sets of information (structured, unstructured, or semi-structured) so large that traditional approaches (based on business intelligence tools and database management systems) cannot be applied to them. Big Data is characterized by a phenomenal acceleration of data accumulation and by its growing complexity. In different contexts, Big Data often means both data of large volume and the set of tools and methods for processing it. Big Data sets are accompanied by metadata containing a large amount of information about the data, including significant descriptive text whose understanding by machines leads to better results in Big Data processing. Methods of artificial intelligence and intelligent Web technologies improve the efficiency of all stages of Big Data processing. Most often this integration concerns machine learning, which provides knowledge acquisition from Big Data, and ontological analysis, which formalizes domain knowledge for Big Data analysis. In this paper, the authors present a method for analyzing Big Data metadata that selects, among heterogeneous sources and data repositories, those blocks of information that are pertinent to the customer's task. Much attention is paid to matching the textual part of the metadata (metadata annotations) with the text describing the task. For this purpose we suggest using methods and instruments of natural language analysis together with a Big Data ontology that contains knowledge about the specifics of this domain.
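
    The paper does not publish its matching algorithm; the sketch below only illustrates the general idea of scoring metadata annotations against a task description, using a toy synonym map in place of the authors' Big Data ontology and plain Jaccard similarity in place of their natural language instruments. All names and data are invented.

```python
# Toy relevance scoring of metadata annotations against a customer task.
import re


def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))


def expand(words: set[str], ontology: dict[str, set[str]]) -> set[str]:
    # Ontological analysis step: enrich terms with domain synonyms/hypernyms.
    expanded = set(words)
    for w in words:
        expanded |= ontology.get(w, set())
    return expanded


def relevance(annotation: str, task: str, ontology: dict[str, set[str]]) -> float:
    a = expand(tokens(annotation), ontology)
    t = expand(tokens(task), ontology)
    return len(a & t) / len(a | t) if a | t else 0.0  # Jaccard similarity


ontology = {"sales": {"revenue", "turnover"}, "client": {"customer"}}
task = "forecast revenue per customer"
blocks = {
    "block-1": "monthly sales figures by client region",
    "block-2": "sensor readings from production line",
}
for name, annotation in blocks.items():
    print(name, round(relevance(annotation, task, ontology), 2))
# block-1 scores above block-2, so it would be selected for this task.
```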

    Žmogiškųjų išteklių informacinio valdymo problemos ir sprendimo ypatumai (Problems of human resources information management and peculiarities of their solution)

    This article explores one of the traditional functional areas of enterprise management, human resources management, and its multi-component information environment. Traditional, usually manufacturing-oriented, enterprises are controlled according to activity functions, with many operating divisions specialized in carrying out certain tasks (i.e. every department or unit is focused on specific information technology applications which are not integrated). Rapid changes in the modern business environment, however, push enterprises to switch from classical functional management approaches (ineffective databases that are of marginal use and duplicative of one another, and operational systems that cannot adequately provide important information for enterprise control) towards more adaptive, contemporary information processing models: knowledge-based enterprises and process management (computer-aided knowledge bases, automatic information exchange, a structured and metadata-oriented way of working). Are databases, then, really becoming increasingly unmanageable and ineffective? Slow information processing not only costs money but also endangers competitiveness and makes users unhappy. It should be noted that every functional area and group of users in an enterprise has its own purpose, subjects, and management structure, and therefore different information needs and requirements. Organizational information systems therefore need to be constantly maintained and adapted to their surroundings. This article presents and critically analyzes theoretical and practical aspects of human resources (employee) and information management: it first introduces 1) the major problems of information management (e.g., data integration and interoperability of systems, and why business users often lack direct access to important business data); 2) the formation and generation of business processes, business information flows, and the information structure (information system) and its development; and finally examines 3) possible changes in the information infrastructure of the human resource development sector, presenting a general framework for an enterprise's human resource information system based on a metadata management model and the usage associated with it (e.g., discovery, extraction, acquisition, distribution). Human resources management is being renewed in enterprises and is becoming one of the fundamental functions of activity management. Unfortunately, most business and industrial enterprises in the country often lack the capacity to effectively manage (identify, collect, store) their real information resources, and lack the ability to perform systems analysis, modelling, and re-building or re-engineering of legacy applications and activity processes. The article presents several relatively simple, practical, but effective techniques (specific adaptations of technologies) that allow an increase in the effectiveness of information systems by continually improving, reviewing, and controlling the existing data in the databases. [...]
    Purpose – to examine, theoretically and practically, and to evaluate in the context of information technology the management of, and changes in, human resources activity processes and information; to discuss solutions and aspects of managing and organizing information activity and the possibilities of applying them effectively to improve the information infrastructure for executing and managing human resources activity processes; and to substantiate the usefulness and applicability of an activity management information system extended with metadata management. Methodology – 1) analysis of the scientific literature, to discuss and evaluate the problems of human resources and information management, topical issues, and progressive changes in information activity management; 2) an empirical study of an industrial enterprise's personnel management information system, documents, and information sources, together with an analysis of data flows, to investigate and reveal the existing interrelations and composition of the information units of this functional area and the problems of data storage and management, and to substantiate actions for improving the quality of information activity; 3) graphical representation and modelling of activity processes and data, to illustrate real situations in managing human resources activity processes (HRAP), information interactions, and system states; to reveal solutions, approaches, and operating principles of centralized and flexible information management and its improvement; and to identify possibilities of applying metadata to improve the functionality of the information system (IS). Results – the study seeks to reveal why the information systems supporting human resources (HR) management and data processing are used and maintained insufficiently rationally, and what new information requirements are raised by today's business environment and its rapid change, which drives the search for new, effective forms of management and of access to data and information objects in the HR management area. The work provides a detailed analysis of problems in managing personnel information resources; solutions and models for improving the management of HR information activity are presented, defined, and illustrated with figures, with their purpose argued and their content set out. Finally, a metadata management scheme for improving IS functionality is presented. Research limitations – only a small part of the information support problems of managing human resources (a single activity subsystem) is presented, and only certain measures and approaches for ensuring effectiveness, aimed at improving the quality of information system operation and continuity of activity, are provided. Practical significance – the theoretical and empirical research contributes to a stronger understanding of the information support of HR management. The study showed that this functional area was long assigned only a supporting role in enterprises, so many problems have accumulated. Practically relevant changes taking place in the management of HR information and related processes are presented, and, in pursuit of a coherent enterprise information environment and better management of information content, various solution methods are employed and implementation issues valuable to practitioners are revealed. Originality/value – the smooth and successful practical implementation, development, or renewal of information technology (IT) infrastructure requires not only thorough knowledge of IT products but also the ability to evaluate, understand, and formalize (model, algorithmize) the constantly changing digital environment of information and knowledge, to identify business problems, and to plan change in the information subsystem so as to satisfy the distinctive needs of system users, i.e. to find effective ways [...]
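
    The article describes the metadata management model conceptually; as a minimal, hypothetical sketch of the registry idea behind it, the Python below shows how HR data sources described by metadata records could support the discovery and extraction uses mentioned above. All names, fields, and values are invented for the example.

```python
# Hypothetical metadata registry for HR data sources; not from the article.
from dataclasses import dataclass


@dataclass
class SourceMetadata:
    name: str          # e.g. a table or document store, "personnel_db.employees"
    subject: str       # functional area the source covers
    fields: list[str]  # attributes available for extraction


class HrMetadataRegistry:
    def __init__(self) -> None:
        self._sources: list[SourceMetadata] = []

    def register(self, meta: SourceMetadata) -> None:
        # Acquisition: a new source is described once, centrally.
        self._sources.append(meta)

    def discover(self, subject: str) -> list[SourceMetadata]:
        # Discovery: find every registered source covering a functional area.
        return [s for s in self._sources if s.subject == subject]


registry = HrMetadataRegistry()
registry.register(SourceMetadata("personnel_db.employees", "staffing",
                                 ["employee_id", "hire_date", "unit"]))
for source in registry.discover("staffing"):
    print(source.name, source.fields)
```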

    A data transformation model for relational and non-relational data

    The information systems that support small, medium, and large organisations need data transformation solutions that draw on multiple data sources to fulfil the requirements of new applications and of decision-making, and to stay competitive. Relational data is the foundation of the majority of existing application programs, whereas non-relational data is the foundation of the majority of newly produced applications. The relational model is the most elegant one; nonetheless, this kind of database has a drawback when it comes to managing very large volumes of data. Because they can handle massive volumes of data, non-relational databases have evolved into substitutes for relational databases. The key issue is that the rules for data transformation processes across the various data types are becoming less well defined, leading to a steady decline in data quality. Therefore, an empirical model is required in this domain to handle relational and non-relational data while satisfying data quality requirements. This study develops a data transformation model for different data sources that satisfies data quality requirements, covering in particular the transformation processes between the relational and non-relational models, named Data Transformation with Two ETL Phases and Central-Library (DTTEPC). The stages and methods in the developed model transform metadata information and stored data from relational to non-relational systems, and vice versa. The model is developed and validated through expert review, and a prototype based on the final version is employed in two case studies: education and healthcare. The results of the usability test demonstrate that the model is capable of transforming metadata and stored data across systems, thus enhancing the information systems of various organisations through data transformation solutions. The DTTEPC model improved the integrity and completeness of the data transformation processes, and it supports decision-makers by making information from various sources and systems available on demand in real time.
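
    The DTTEPC model itself (two ETL phases plus a central library) is not reproducible from the abstract; the sketch below only illustrates the core reshaping step in one direction, embedding child rows of a relational schema into nested documents for a non-relational store. Table names, columns, and data are invented, loosely echoing the healthcare case study.

```python
# Illustrative relational-to-document reshaping; not the DTTEPC implementation.
import json

# Relational side: two tables linked by a foreign key.
patients = [{"patient_id": 1, "name": "A. Smith"}]
visits = [
    {"visit_id": 10, "patient_id": 1, "date": "2023-05-01"},
    {"visit_id": 11, "patient_id": 1, "date": "2023-06-12"},
]


def to_documents(parents, children, key):
    """Embed child rows into their parent to form one document per parent."""
    docs = []
    for parent in parents:
        doc = dict(parent)
        doc["visits"] = [c for c in children if c[key] == parent[key]]
        docs.append(doc)
    return docs


# Non-relational side: one nested document per patient, ready for a
# document store; the reverse direction would flatten these back into rows.
print(json.dumps(to_documents(patients, visits, "patient_id"), indent=2))
```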