Towards Exascale Scientific Metadata Management
Advances in technology and computing hardware are enabling scientists from
all areas of science to produce massive amounts of data using large-scale
simulations or observational facilities. In this era of data deluge, effective
coordination between the data production and the analysis phases hinges on the
availability of metadata that describe the scientific datasets. Existing
workflow engines have been capturing a limited form of metadata to provide
provenance information about the identity and lineage of the data. However,
much of the data produced by simulations, experiments, and analyses still needs
to be annotated manually, in an ad hoc manner, by domain scientists. Systematic
and transparent acquisition of rich metadata becomes a crucial prerequisite to
sustain and accelerate the pace of scientific innovation. Yet a ubiquitous,
domain-agnostic metadata management infrastructure that can meet the demands of
extreme-scale science is conspicuously absent.
To address this gap in scientific data management research and practice, we
present our vision for an integrated approach that (1) automatically captures
and manipulates information-rich metadata while the data is being produced or
analyzed and (2) stores metadata within each dataset to permeate
metadata-oblivious processes and to query metadata through established and
standardized data access interfaces. We motivate the need for the proposed
integrated approach using applications from plasma physics, climate modeling
and neuroscience, and then discuss research challenges and possible solutions.
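The second pillar of the proposed approach, storing metadata within each dataset so it survives metadata-oblivious processes and remains queryable through standard data access interfaces, can be illustrated with a minimal sketch. This is not the paper's implementation; the file layout, function names, and provenance keys below are illustrative assumptions using a self-describing JSON container.

```python
# Hypothetical sketch of "metadata stored within each dataset": the file
# carries its own provenance block, so any consumer reading the standard
# container also sees the metadata. All names here are illustrative.

import json

def write_dataset(path, values, provenance):
    """Store data and its provenance metadata together in one self-describing file."""
    with open(path, "w") as f:
        json.dump({"data": values, "metadata": provenance}, f)

def query_metadata(path, key):
    """Query metadata through the same access interface used for the data itself."""
    with open(path) as f:
        return json.load(f)["metadata"].get(key)

# A simulation run writes its output together with lineage information:
write_dataset("run42.json",
              values=[1.2, 3.4],
              provenance={"producer": "plasma-sim", "lineage": ["input.h5"]})
print(query_metadata("run42.json", "producer"))  # -> plasma-sim
```

In practice, scientific formats such as HDF5 or NetCDF already support embedded attributes, which is the kind of established, standardized interface the abstract alludes to; the JSON container above just keeps the sketch self-contained.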
A pattern-based approach to a cell tracking ontology
Time-lapse microscopy has thoroughly transformed our understanding of biological motion and developmental dynamics, from single cells to entire organisms. The increasing amount of cell tracking data demands tools that make extracted data searchable and interoperable across experiment and data types. To address this problem, the current paper reports on progress in building the Cell Tracking Ontology (CTO): an ontology framework for describing, querying, and integrating data from complementary experimental techniques in the domain of cell tracking experiments. CTO is based on a basic knowledge structure, the cellular genealogy, which serves as a backbone model for integrating specific biological ontologies into tracking data. As a first step, we integrate the Phenotype and Trait Ontology (PATO), one of the most relevant ontologies for annotating cell tracking experiments. The CTO requires both the integration of data at various levels of generality and the proper structuring of the collected information. Therefore, to give the ontology a sound foundation, we have built on the rich body of work on top-level ontologies and established three generic ontology design patterns addressing three modeling challenges in properly representing cellular genealogies: representing entities that exist in time, that undergo changes over time, and that are organized into more complex structures such as situations.
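The cellular genealogy that serves as the CTO's backbone can be pictured as a lineage tree: each node is a cell with a lifetime, optional division into daughters, and ontology annotations. The sketch below is an illustrative data structure, not the CTO schema; the class name, fields, and the example PATO identifier are assumptions for demonstration only.

```python
# Minimal sketch of a cellular genealogy as a lineage tree, covering the
# three modeling challenges named in the abstract: existence in time
# (t_start/t_end), change over time (annotations), and organization into
# more complex structures (division into daughters). Names are illustrative.

class CellNode:
    def __init__(self, cell_id, t_start, t_end, annotations=None):
        self.cell_id = cell_id
        self.t_start, self.t_end = t_start, t_end   # existence in time
        self.annotations = annotations or {}        # e.g. a PATO phenotype term
        self.daughters = []                         # filled in by a division event

    def divide(self, d1, d2):
        """A division event links a mother cell to its two daughter cells."""
        self.daughters = [d1, d2]

    def lineage(self):
        """This cell and all of its descendants, depth-first."""
        out = [self]
        for d in self.daughters:
            out.extend(d.lineage())
        return out

mother = CellNode("c1", 0, 10, {"PATO": "PATO:0000000"})  # placeholder term id
mother.divide(CellNode("c2", 10, 25), CellNode("c3", 10, 30))
print([c.cell_id for c in mother.lineage()])  # -> ['c1', 'c2', 'c3']
```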
Scalable aggregation predictive analytics: a query-driven machine learning approach
We introduce a predictive modeling solution that provides high-quality predictive analytics over aggregation queries in Big Data environments. Our predictive methodology is generally applicable in environments in which large-scale data owners may or may not restrict access to their data, allowing only aggregation operators like COUNT to be executed over them. In this context, our methodology is based on historical queries and their answers to accurately predict the answers of ad-hoc queries. We focus on the widely used set-cardinality (COUNT) aggregation query, as COUNT is a fundamental operator both for internal data-system optimizations and for aggregation-oriented data exploration and predictive analytics. We contribute a novel, query-driven Machine Learning (ML) model whose goals are to: (i) learn the query-answer space from past issued queries, (ii) associate the query space with local linear regression and associative function estimators, (iii) define query similarity, and (iv) predict the cardinality of the answer set of unseen incoming queries, referred to as the Set Cardinality Prediction (SCP) problem. Our ML model incorporates incremental ML algorithms to ensure high-quality prediction results. The significance of our contribution lies in that it (i) is the only query-driven solution applicable over general Big Data environments, including those with restricted-access data, (ii) offers incremental learning adjusted for arriving ad-hoc queries, which is well suited for query-driven data exploration, and (iii) offers performance (in terms of scalability, SCP accuracy, processing time, and memory requirements) superior to data-centric approaches. We provide a comprehensive performance evaluation of our model, assessing its sensitivity, scalability, and efficiency for quality predictive analytics. In addition, we report on the development and incorporation of our ML model in Spark, showing its superior performance compared to Spark's COUNT method.
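The core idea of query-driven SCP, answering an unseen COUNT query from previously issued (query, answer) pairs alone, without touching the underlying data, can be sketched minimally. The similarity measure and the plain k-nearest-neighbor averaging below are deliberate simplifications of the paper's local linear regression and associative function estimators; all names are illustrative.

```python
# Hedged sketch of Set Cardinality Prediction: predict the COUNT answer of
# a new range query from the answers of the most similar past queries.
# The real model uses query-space clustering with local linear regression;
# here a simple k-NN average stands in for those local estimators.

import math

def similarity(q1, q2):
    """Negative Euclidean distance between range-query endpoints (lo, hi)."""
    return -math.dist(q1, q2)

def predict_count(history, query, k=3):
    """Predict COUNT for `query` from the k most similar past (query, count) pairs."""
    nearest = sorted(history, key=lambda qc: similarity(qc[0], query),
                     reverse=True)[:k]
    return sum(count for _, count in nearest) / len(nearest)

# Past range queries (lo, hi) and their observed COUNT answers:
history = [((0, 10), 100), ((5, 15), 120), ((20, 30), 40), ((0, 50), 400)]
print(predict_count(history, (4, 12)))  # averages the three most similar answers
```

Incremental learning, item (ii) in the abstract, would correspond here to simply appending each newly answered query to `history`; the paper's model instead updates its local estimators online.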
Experimental evaluation of big data querying tools
In recent years, the term Big Data has become a widely debated topic across many
business areas. One of the main challenges associated with this concept is how to
handle the enormous volume and variety of data efficiently. Given the notorious
complexity and volume of data associated with the Big Data concept, efficient
query mechanisms are needed for data analysis purposes. Motivated by the rapid
development of Big Data tools and frameworks, there is much discussion about
query tools and, more specifically, about which ones are most appropriate for
specific analytical needs. This dissertation describes and compares the main
features and architectures of the following well-known Big Data analytics tools:
Drill, HAWQ, Hive, Impala, Presto, and Spark. To test the performance of these
Big Data analytics tools, we also describe the process of preparing, configuring,
and administering a Hadoop cluster on which to install and use them, providing an
environment in which to evaluate their performance and identify the scenarios
best suited to their use. For this evaluation we used the TPC-H and
TPC-DS benchmarks; the results show that in-memory processing tools such as
HAWQ, Impala, and Presto achieve better results and performance on small and
medium-sized datasets. However, the tools with slower execution times,
especially Hive, appear to catch up with the better-performing tools as the
benchmark datasets grow.
ProcessGPT: Transforming Business Process Management with Generative Artificial Intelligence
Generative Pre-trained Transformer (GPT) is a state-of-the-art machine
learning model capable of generating human-like text through natural language
processing (NLP). GPT is trained on massive amounts of text data and uses deep
learning techniques to learn patterns and relationships within the data,
enabling it to generate coherent and contextually appropriate text. This
position paper proposes using GPT technology to generate new process models
when/if needed. We introduce ProcessGPT as a new technology that has the
potential to enhance decision-making in data-centric and knowledge-intensive
processes. ProcessGPT can be designed by training a generative pre-trained
transformer model on a large dataset of business process data. This model can
then be fine-tuned on specific process domains and trained to generate process
flows and make decisions based on context and user input. The model can be
integrated with NLP and machine learning techniques to provide insights and
recommendations for process improvement. Furthermore, the model can automate
repetitive tasks and improve process efficiency while enabling knowledge
workers to communicate analysis findings, present supporting evidence, and make
decisions. ProcessGPT can revolutionize business process management (BPM) by
offering a powerful tool for process augmentation, automation and improvement.
Finally, we demonstrate how ProcessGPT can be a powerful tool for augmenting
data engineers in maintaining data ecosystem processes within large bank
organizations. Our scenario highlights the potential of this approach to
improve efficiency, reduce costs, and enhance the quality of business
operations through the automation of data-centric and knowledge-intensive
processes. These results underscore the promise of ProcessGPT as a
transformative technology for organizations looking to improve their process
workflows.
Comment: Accepted in: 2023 IEEE International Conference on Web Services
(ICWS); Corresponding author: Prof. Amin Beheshti ([email protected])