Responsible Knowledge Management in Energy Data Ecosystems
This paper analyzes the challenges and requirements of establishing energy data ecosystems (EDEs) as data-driven infrastructures that overcome the limitations of currently fragmented energy applications. It proposes a new data- and knowledge-driven approach for management and processing. This approach aims to extend the analytics services portfolio of various energy stakeholders and to achieve two-way flows of electricity and information for optimized generation, distribution, and consumption of electricity. The approach is based on semantic technologies to create knowledge-based systems that aid machines in integrating and processing resources contextually and intelligently. Thus, a paradigm shift in the energy data value chain is proposed towards transparency and the responsible management of the data and knowledge exchanged by the various stakeholders of an energy data space. The approach can contribute to innovative energy management and the adoption of new business models in future energy data spaces. © 2022 by the authors. Licensee MDPI, Basel, Switzerland
Adaptive Merging on Phase Change Memory
Indexing is a well-known database technique used to facilitate data access and speed up query processing. Nevertheless, the construction and modification of indexes are very expensive. In traditional approaches, all records in a database table are covered equally by the index. This is inefficient, since some records may be queried very often and others never. To avoid this problem, adaptive merging has been introduced. The key idea is to create the index adaptively and incrementally as a side-product of query processing. As a result, the database table is indexed partially, depending on the query workload. This paper addresses the problem of adaptive merging for phase change memory (PCM). The most important features of this memory type are limited write endurance and high write latency. As a consequence, adaptive merging must be rethought from scratch. We solve this problem in two steps. First, we apply several PCM optimization techniques to the traditional adaptive merging approach and demonstrate that the resulting method (eAM) outperforms the traditional approach by 60%. Then, we introduce a framework for adaptive merging (PAM) and a new PCM-optimized index, which further improve system performance by 20% for databases where search queries interleave with data modifications.
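The core idea of indexing as a side-product of query processing can be illustrated with a small sketch in the style of adaptive indexing (this is a generic cracking-style toy, not the paper's eAM or PAM algorithms; all names are illustrative):

```python
class AdaptiveIndex:
    """Toy partial index built incrementally from query side-effects.

    Each range query pulls the records it touches out of the unsorted
    area and stores them as a sorted run, so frequently queried key
    ranges become indexed while never-queried records stay unsorted.
    """

    def __init__(self, records):
        self.unsorted = list(records)   # records not yet covered by the index
        self.runs = []                  # sorted runs produced by earlier queries

    def range_query(self, lo, hi):
        # Extract matching records from the unsorted area...
        hits = [r for r in self.unsorted if lo <= r <= hi]
        self.unsorted = [r for r in self.unsorted if not (lo <= r <= hi)]
        # ...and index them as a side-product of answering the query.
        if hits:
            self.runs.append(sorted(hits))
        # Also answer from runs built by earlier queries (skip the run
        # we just created, since its records are already in `hits`).
        earlier = self.runs[:-1] if hits else self.runs
        for run in earlier:
            hits.extend(r for r in run if lo <= r <= hi)
        return sorted(hits)
```

Each query leaves the table slightly more indexed, so the cost of index construction is amortized over the query workload rather than paid up front.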
FairNN - Conjoint Learning of Fair Representations for Fair Decisions
In this paper, we propose FairNN, a neural network that performs joint feature representation and classification for fairness-aware learning. Our approach optimizes a multi-objective loss function that (a) learns a fair representation by suppressing protected attributes, (b) maintains the information content by minimizing a reconstruction loss, and (c) solves a classification task in a fair manner by minimizing the classification error while respecting an equalized-odds-based fairness regularizer. Our experiments on a variety of datasets demonstrate that such a joint approach is superior to the separate treatment of unfairness in representation learning or supervised learning. Additionally, our regularizers can be adaptively weighted to balance the different components of the loss function, thus allowing for a very general framework for conjoint fair representation learning and decision making.
Comment: Code will be available
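The multi-objective structure of such a loss can be sketched as a weighted sum of the three terms the abstract lists (a generic illustration with hypothetical weights and a simple true-positive-rate gap as the fairness proxy, not FairNN's actual loss or regularizer):

```python
def conjoint_loss(recon_err, class_err, fairness_gap,
                  alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted multi-objective loss combining reconstruction error,
    classification error, and a fairness penalty. The weights can be
    tuned (or adapted during training) to trade the terms off."""
    return alpha * recon_err + beta * class_err + gamma * fairness_gap


def equalized_odds_gap(preds, labels, group):
    """Absolute gap in true-positive rates between two protected
    groups -- one simple proxy for an equalized-odds penalty."""
    def tpr(g):
        pos = [p for p, y, s in zip(preds, labels, group) if y == 1 and s == g]
        return sum(pos) / len(pos) if pos else 0.0
    return abs(tpr(0) - tpr(1))
```

In a joint setup, all three terms are back-propagated through the same network, which is what distinguishes conjoint learning from training the representation and the classifier separately.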
Federated Query Processing over Heterogeneous Data Sources in a Semantic Data Lake
Data provides the basis for emerging scientific and interdisciplinary data-centric applications with the potential of improving the quality of life for citizens. Big Data plays an important role in promoting both manufacturing and scientific development through industrial digitization and emerging interdisciplinary research. Open data initiatives have encouraged the publication of Big Data by exploiting the decentralized nature of the Web, allowing for the availability of heterogeneous data generated and maintained by autonomous data providers. Consequently, the growing volume of data consumed by different applications raises the need for effective data integration approaches able to process large volumes of data represented in different formats, schemas, and models, which may also include sensitive data, e.g., financial transactions, medical procedures, or personal data. Data Lakes are composed of heterogeneous data sources kept in their original format, which reduces the overhead of materialized data integration. Query processing over Data Lakes requires the semantic description of the data collected from heterogeneous data sources. A Data Lake with such semantic annotations is referred to as a Semantic Data Lake. Transforming Big Data into actionable knowledge demands novel and scalable techniques enabling not only Big Data ingestion and curation into the Semantic Data Lake, but also efficient large-scale semantic data integration, exploration, and discovery. Federated query processing techniques utilize source descriptions to find relevant data sources and to devise efficient execution plans that minimize the total execution time and maximize the completeness of answers. Existing federated query processing engines employ a coarse-grained description model in which the semantics encoded in the data sources are ignored.
Such descriptions may lead to the erroneous selection of data sources for a query and to the unnecessary retrieval of data, thus affecting the performance of the query processing engine. In this thesis, we address the problem of federated query processing against heterogeneous data sources in a Semantic Data Lake. First, we tackle the challenge of knowledge representation and propose a novel source description model, RDF Molecule Templates (RDF-MTs), that describes the knowledge available in a Semantic Data Lake; RDF-MTs describe data sources in terms of an abstract description of entities belonging to the same semantic concept. Then, we propose a technique for data source selection and query decomposition, the MULDER approach, and query planning and optimization techniques, Ontario, that exploit the characteristics of heterogeneous data sources described using RDF-MTs and provide uniform access to them. We then address the challenge of enforcing the privacy and access control requirements imposed by data providers. We introduce a privacy-aware federated query technique, BOUNCER, able to enforce privacy and access control regulations during query processing over data sources in a Semantic Data Lake. In particular, BOUNCER exploits RDF-MT-based source descriptions to express privacy and access control policies, as well as to enforce them automatically during source selection, query decomposition, and planning. Furthermore, BOUNCER implements query decomposition and optimization techniques able to identify query plans over data sources that not only contain the relevant entities to answer a query, but are also regulated by policies that allow access to these entities. Finally, we tackle the problem of interest-based update propagation and the co-evolution of data sources.
We present a novel approach for interest-based RDF update propagation that consistently maintains full or partial replications of large datasets and deals with their co-evolution.
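The role of source descriptions in source selection and query decomposition can be sketched in miniature: each source advertises which properties it can answer, and the engine assigns each queried property to matching sources, producing one subquery per source (a toy illustration in the spirit of property-based descriptions such as RDF-MTs; the source names and properties are invented):

```python
# Each "source description" lists the properties a source can answer.
SOURCE_DESCRIPTIONS = {
    "sourceA": {"ex:name", "ex:birthDate"},
    "sourceB": {"ex:worksFor", "ex:salary"},
    "sourceC": {"ex:name", "ex:worksFor"},
}


def select_and_decompose(query_properties):
    """Assign each queried property to the sources that describe it,
    yielding one subquery (a set of properties) per selected source.
    Properties no source describes are reported instead of silently
    producing incomplete answers."""
    plan = {}
    for prop in query_properties:
        matches = [s for s, props in SOURCE_DESCRIPTIONS.items() if prop in props]
        if not matches:
            raise ValueError(f"no source describes {prop}")
        for s in matches:
            plan.setdefault(s, set()).add(prop)
    return plan
```

A finer-grained description model narrows `matches` to the sources that really hold relevant entities, avoiding the erroneous selections and unnecessary retrieval the abstract describes.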
Preventing Discriminatory Decision-making in Evolving Data Streams
Bias in machine learning has rightly received significant attention over the
last decade. However, most fair machine learning (fair-ML) work to address bias
in decision-making systems has focused solely on the offline setting. Despite
the wide prevalence of online systems in the real world, work on identifying
and correcting bias in the online setting is severely lacking. The unique
challenges of the online environment make addressing bias more difficult than
in the offline setting. First, Streaming Machine Learning (SML) algorithms must
deal with the constantly evolving real-time data stream. Second, they need to
adapt to changing data distributions (concept drift) to make accurate
predictions on new incoming data. Adding fairness constraints to this already
complicated task is not straightforward. In this work, we focus on the
challenges of achieving fairness in biased data streams while accounting for
the presence of concept drift, processing one sample at a time. We present Fair Sampling over Stream, a novel fair rebalancing approach that can be integrated with SML classification algorithms. Furthermore, we devise the first unified performance-fairness metric, Fairness Bonded Utility (FBU), to efficiently evaluate and compare the trade-off between the performance and fairness of different bias mitigation methods. FBU simplifies the comparison of the fairness-performance trade-offs of multiple techniques through one unified and intuitive evaluation, allowing model designers to easily choose a technique. Overall, extensive evaluations show that our measures surpass those of other fair online techniques previously reported in the literature.
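The general idea of rebalancing a biased stream one sample at a time can be sketched as follows (a deliberately simple illustration of stream rebalancing, not the paper's Fair Sampling over Stream algorithm; the duplication rule is a placeholder):

```python
from collections import Counter


class StreamRebalancer:
    """Toy one-sample-at-a-time rebalancer: it duplicates samples
    from under-represented (group, label) combinations so that a
    downstream streaming classifier sees a more balanced stream."""

    def __init__(self):
        self.counts = Counter()

    def process(self, group, label):
        key = (group, label)
        self.counts[key] += 1
        # Emit an extra copy while this combination lags behind the
        # most frequent one; otherwise pass the sample through once.
        copies = 2 if self.counts[key] < max(self.counts.values()) else 1
        return [key] * copies
```

Because the counts are updated incrementally, the balancing decision adapts as the stream's distribution drifts, without ever needing to buffer the full history.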
A Survey on Legal Question Answering Systems
Many legal professionals think that the explosion of information about local,
regional, national, and international legislation makes their practice more
costly, time-consuming, and even error-prone. The two main reasons for this are
that most legislation is usually unstructured, and the tremendous amount and
pace with which laws are released causes information overload in their daily
tasks. In the legal domain, the research community agrees that a system able to generate automatic responses to legal questions could have a substantial practical impact on daily activities. The degree of usefulness is such that even a semi-automatic solution could significantly help reduce the workload to be faced. This is mainly because a Question Answering system could automatically process a massive amount of legal resources to answer a question or doubt in seconds, saving many professionals in the legal sector effort, money, and time. In this work, we quantitatively and qualitatively survey the solutions that currently exist to meet this challenge.
Comment: 57 pages, 1 figure, 10 tables
Understanding NUMA Effects on Memory Allocation and Reclamation
Memory management in multicore systems is a well-studied area. Many approaches to memory management have been developed and tuned with specific hardware architectures in mind, capitalizing on hardware characteristics to improve performance. In this thesis, the focus is on memory allocation and reclamation in multicore systems.
I first identify and diagnose a performance anomaly in epoch-based memory reclamation (EBR), one of the most popular approaches to reclaiming memory in multicore systems. EBR experiences significant performance degradation when running on multiple processor sockets. This degradation is related to the fact that EBR is vulnerable to thread delays: even minor delays can trigger a chain reaction that induces longer delays and more substantial performance problems. Moreover, I discover a negative interaction between EBR and popular memory allocators, caused by the fact that EBR frees batches of objects while these allocators attempt to cache batches of objects for reallocation. The batches freed by EBR frequently overflow the allocator buffers, defeating their purpose and causing substantial performance overhead.
To solve these issues, an improvement to EBR, called amortized batch free, is introduced to limit the amplification of delays and the performance degradation when freeing. Amortized batch free gradually reclaims objects and can drastically reduce the average time spent freeing an object. This technique is applied to a state-of-the-art EBR algorithm, and significant performance improvements are shown experimentally.
This amortized batch freeing technique appears broadly applicable to other memory reclamation algorithms. As a first step in demonstrating this, I also apply it to a simple token-based variant of EBR. Token EBR is conceptually simpler and easier to implement than the state-of-the-art EBR algorithm, but has been shown in other work to perform poorly. When the amortized batch free technique is used, Token EBR performs similarly to (and sometimes slightly better than) the state-of-the-art EBR algorithm.
Finally, I present a new design for an architecture-aware memory allocator for multi-socket systems, using a state-of-the-art allocator called Supermalloc as a starting point. Several key bottlenecks in the original Supermalloc design are improved or eliminated in the new design. In particular, the new design dramatically improves performance when the address space is actively growing, reduces contention on shared resources, and optimizes memory accesses to reduce communication across processor sockets. Taking into account the lessons learned in the study of EBR, the new design also attempts to minimize the overhead of freeing objects. Experiments on a prototype of this new allocator show some performance improvement compared to the original Supermalloc allocator.
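The amortized batch free idea described above can be sketched abstractly: instead of releasing a whole retired batch at once, retired objects go into a queue and only a bounded number are released per subsequent operation (a language-neutral illustration of the scheme, not the thesis's actual implementation; `per_op` is an illustrative parameter):

```python
from collections import deque


class AmortizedBatchFree:
    """Toy illustration of amortized batch freeing: retiring a batch
    only enqueues its objects, and each later operation releases at
    most `per_op` of them, bounding the per-operation freeing cost
    and avoiding bursts that overflow allocator caches."""

    def __init__(self, per_op=2):
        self.pending = deque()   # retired but not yet freed
        self.per_op = per_op     # freeing budget per operation
        self.freed = []          # stands in for actual deallocation

    def retire_batch(self, objs):
        self.pending.extend(objs)

    def on_operation(self):
        # Free a bounded number of pending objects per operation.
        for _ in range(min(self.per_op, len(self.pending))):
            self.freed.append(self.pending.popleft())
```

Spreading the frees across operations keeps the average cost per freed object low and prevents a single thread from stalling on a large batch, which is the delay-amplification problem identified in the EBR study.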
Automated Deduction – CADE 28
This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.