66 research outputs found
60 Years of Databases (final part)
The article provides an overview of database research and development from their appearance in the 1960s to the present time. The following stages are distinguished: emergence and formation, rapid development, the era of relational databases, extended relational databases, post-relational databases, and big data. At the formation stage, the IDS, IMS, Total, and Adabas systems are described. At the rapid development stage, the ANSI/X3/SPARC database architecture, the CODASYL proposals, and the concepts and languages of conceptual modeling are highlighted.
At the relational-era stage, the results of E. Codd's scientific work, the theory of dependencies and normal forms, query languages, experimental research and development, optimization and standardization, and transaction management are covered. The extended relational databases stage is devoted to temporal, spatial, deductive, active, object, distributed, and statistical databases, array databases, database machines, and data warehouses. The next stage addresses post-relational databases, namely NoSQL (key-value, column, document, and graph), NewSQL, and ontological databases. The sixth stage covers the causes of emergence, characteristic properties, classification, operating principles, methods, and technologies of big data. Finally, the last section gives a brief overview of database research and development in the Soviet Union.
A Survey on Mapping Semi-Structured Data and Graph Data to Relational Data
The data produced by various services should be stored and managed in an appropriate format so that valuable knowledge can be gained from it conveniently. This has led to the emergence of various data models, including relational, semi-structured, and graph models. Considering that mature relational databases built on the relational data model still predominate in today's market, there is growing interest in storing and processing semi-structured data and graph data in relational databases, so that the mature and powerful capabilities of relational databases can be applied to these varied data. In this survey, we review existing methods for mapping semi-structured data and graph data into relational tables, analyze their major features, and give a detailed classification of those methods. We also summarize the merits and demerits of each method, introduce open research challenges, and present future research directions. With this comprehensive investigation of existing methods and open problems, we hope this survey can motivate new mapping approaches by drawing lessons from each model's mapping strategies, as well as a new research topic - mapping multi-model data into relational tables. Peer reviewed.
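One common family of mapping strategies surveyed here stores a graph as plain relational tables, so that graph traversals become joins. A minimal sketch of that idea, assuming one table for vertices and one for edges (the schema and names are illustrative, not taken from any specific method in the paper):

```python
import sqlite3

# Illustrative mapping of a small labeled graph into two relational tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, label TEXT, name TEXT);
    CREATE TABLE edges (src INTEGER REFERENCES nodes(id),
                        dst INTEGER REFERENCES nodes(id),
                        type TEXT);
""")
conn.executemany("INSERT INTO nodes VALUES (?, ?, ?)",
                 [(1, "Person", "Ada"), (2, "Person", "Bob")])
conn.execute("INSERT INTO edges VALUES (1, 2, 'KNOWS')")

# A one-hop graph traversal ("whom does Ada know?") becomes a relational join.
row = conn.execute("""
    SELECT n2.name FROM edges e
    JOIN nodes n1 ON n1.id = e.src
    JOIN nodes n2 ON n2.id = e.dst
    WHERE n1.name = 'Ada' AND e.type = 'KNOWS'
""").fetchone()
print(row[0])  # Bob
```

Multi-hop traversals compose further joins (or recursive CTEs), which is precisely where the trade-offs between mapping strategies discussed in the survey become visible.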
Analysis of the Usability of Automatically Enriched Cultural Heritage Data
This chapter presents the potential of interoperability and standardised data
publication for cultural heritage resources, with a focus on community-driven
approaches and web standards for usability. The Linked Open Usable Data (LOUD)
design principles, which rely on JSON-LD as lingua franca, serve as the
foundation.
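The reliance of LOUD on JSON-LD as lingua franca can be illustrated with a minimal document. The context URL and fields below follow the general shape of Linked Art records but are illustrative assumptions, not taken from the chapter:

```python
import json

# Hypothetical minimal JSON-LD record for a cultural heritage object;
# the "@context" URL and field names are assumptions in the Linked Art style.
doc = json.loads("""
{
  "@context": "https://linked.art/ns/v1/linked-art.json",
  "id": "https://example.org/object/1",
  "type": "HumanMadeObject",
  "_label": "Example painting"
}
""")
print(doc["type"])  # HumanMadeObject
```

Because the document is plain JSON with a shared context, it stays usable by ordinary web developers while remaining interpretable as linked data, which is the core of the LOUD design principles.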
We begin by exploring the significant advances made by the International
Image Interoperability Framework (IIIF) in promoting interoperability for
image-based resources. The principles and practices of IIIF have paved the way
for Linked Art, which expands the use of linked data by demonstrating how it
can easily facilitate the integration and sharing of semantic cultural heritage
data across portals and institutions.
To provide a practical demonstration of the concepts discussed, the chapter
highlights the implementation of LUX, the Yale Collections Discovery platform.
LUX serves as a compelling case study for the use of linked data at scale,
demonstrating the real-world application of automated enrichment in the
cultural heritage domain.
Rooted in empirical study, the analysis presented in this chapter delves into
the broader context of community practices and semantic interoperability. By
examining the collaborative efforts and integration of diverse cultural
heritage resources, the research sheds light on the potential benefits and
challenges associated with LOUD.
Comment: This is the preprint version of a chapter submitted to be included in
the book "Decoding Cultural Heritage: a critical dissection and taxonomy of
human creativity through digital tools", to be published by Springer Nature.
The chapter is currently undergoing peer review for potential inclusion in
the book.
Security Implications of Adopting a New Data Storage and Access Model in Big Data and Cloud Computing
This article examines the security implications of using cloud computing and Big Data. It employs a mixed methodology of qualitative and quantitative research and takes a critical realist epistemological approach. The objective is to identify the components of a theory for predicting and explaining the security implications associated with adopting the services provided by cloud computing and Big Data [1, 4]. The integration of various information sources and the widespread use of computing across diverse fields have resulted in a significant increase in data volume, scale, quantity, and diversity. Consequently, data management, storage, retrieval, and access have undergone significant changes. The latest developments in IT have brought forth novel technologies such as Cloud Computing and Big Data. Big Data comprises technologies that rely on NoSQL (Not only SQL) databases, which enable data volumes, numbers, and types to grow on a large scale. The new NoSQL systems are seen as solutions for meeting the scalability requirements of large IT firms. Multiple NoSQL models are available, both open-source and pay-as-you-go.
Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries
Graph processing has become an important part of multiple areas of computer
science, such as machine learning, computational sciences, medical
applications, social network analysis, and many others. Numerous graphs such as
web or social networks may contain up to trillions of edges. Often, these
graphs are also dynamic (their structure changes over time) and have
domain-specific rich data associated with vertices and edges. Graph database
systems such as Neo4j enable storing, processing, and analyzing such large,
evolving, and rich datasets. Due to the sheer size of such datasets, combined
with the irregular nature of graph processing, these systems face unique design
challenges. To facilitate the understanding of this emerging domain, we present
the first survey and taxonomy of graph database systems. We focus on
identifying and analyzing fundamental categories of these systems (e.g., triple
stores, tuple stores, native graph database systems, or object-oriented
systems), the associated graph models (e.g., RDF or Labeled Property Graph),
data organization techniques (e.g., storing graph data in indexing structures
or dividing data into records), and different aspects of data distribution and
query execution (e.g., support for sharding and ACID). We present and compare
51 graph database systems, including Neo4j, OrientDB, and Virtuoso. We
outline graph database queries and relationships with associated domains (NoSQL
stores, graph streaming, and dynamic graph algorithms). Finally, we describe
research and engineering challenges to outline the future of graph databases.
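The Labeled Property Graph model mentioned above attaches a label and a set of key-value properties to both vertices and edges. A minimal in-memory sketch of that data model (names and structure are illustrative, not tied to any surveyed system):

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    label: str                                    # e.g. "Person"
    props: dict = field(default_factory=dict)     # arbitrary key-value properties

@dataclass
class Edge:
    src: int                                      # source vertex id
    dst: int                                      # destination vertex id
    label: str                                    # e.g. "KNOWS"
    props: dict = field(default_factory=dict)     # edges carry properties too

vertices = {1: Vertex("Person", {"name": "Ada"}),
            2: Vertex("Person", {"name": "Bob"})}
edges = [Edge(1, 2, "KNOWS", {"since": 2019})]

# Neighbors of vertex 1 reachable over edges labeled "KNOWS":
neighbors = [vertices[e.dst].props["name"]
             for e in edges if e.src == 1 and e.label == "KNOWS"]
print(neighbors)  # ['Bob']
```

Contrasting this with RDF triple stores, where edge properties require reification or named graphs, hints at why the survey treats the choice of graph model as a fundamental design dimension.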
Digital Humanities and Libraries and Archives in Religious Studies
How are digital humanists drawing on libraries and archives to advance research in the field of religious studies and theology? How can librarians and archivists make their collections accessible in return? This volume showcases the perspectives of faculty, librarians, archivists, and allied cultural heritage professionals who are drawing on primary and secondary sources in innovative ways to create digital humanities projects in the field
Emerging Technologies
This monograph investigates a multitude of emerging technologies including 3D printing, 5G, blockchain, and many more to assess their potential for use to further humanity’s shared goal of sustainable development. Through case studies detailing how these technologies are already being used at companies worldwide, author Sinan Küfeoğlu explores how emerging technologies can be used to enhance progress toward each of the seventeen United Nations Sustainable Development Goals and to guarantee economic growth even in the face of challenges such as climate change. To assemble this book, the author explored the business models of 650 companies in order to demonstrate how innovations can be converted into value to support sustainable development. To ensure practical application, only technologies currently on the market and in use at actual companies were investigated. This volume will be of great use to academics, policymakers, innovators at the forefront of green business, and anyone else who is interested in novel and innovative business models and how they could help to achieve the Sustainable Development Goals. This is an open access book.
A software architecture for electro-mobility services: a milestone for sustainable remote vehicle capabilities
To face the tough competition and the changing markets and technologies in the automotive
industry, automakers have to be highly innovative. In previous decades, innovations were
electronics- and IT-driven, which exponentially increased the complexity of the vehicle's
internal network. Furthermore, the growing expectations and preferences of customers oblige
these manufacturers to adapt their business models and also to offer mobility-based services.
On the other hand, there is also increasing pressure from regulators to significantly reduce
the environmental footprint of transportation and mobility, down to zero in the foreseeable
future.
This dissertation investigates an architecture for communication and data exchange
within a complex and heterogeneous ecosystem. This communication takes place between
various third-party entities on one side, and between these entities and the infrastructure
on the other. The proposed solution considerably reduces the complexity of vehicle
communication and of the interactions among the parties involved in the ODX life cycle.
In such a heterogeneous environment, particular attention is paid to the protection of
confidential and private data. Confidential data here refers to the OEM's know-how
enclosed in vehicle projects, while the data delivered by a car during a vehicle
communication session might contain private data from customers. Our solution ensures
that every entity of this ecosystem has access only to the data it is entitled to. We
designed our solution to be technology-agnostic, so that it can be implemented on any
platform and benefit from the environment best suited to each task. We also proposed a
data model for vehicle projects, which improves query time during a vehicle diagnostic
session. Scalability and backwards compatibility were also taken into account during the
design phase of our solution.
We proposed the necessary algorithms and workflow to perform efficient vehicle
diagnostics with considerably lower latency and substantially better time and space
complexity than current solutions. To prove the practicality of our design, we presented
a prototype implementation and then analyzed the results of a series of tests performed
on several vehicle models and projects. We also evaluated the prototype against software
engineering quality attributes.