Towards a linked information architecture for integrated law enforcement
Paper presented at the Workshop on Linked Democracy: Artificial Intelligence for Democratic Innovation, co-located with the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017), held on 19 August 2017 in Melbourne, Australia. Law enforcement agencies are facing an ever-increasing flood of data to be acquired, stored, assessed and used. Automation and advanced data analysis capabilities are required to supersede traditional manual work processes and legacy information silos by automatically acquiring information from a range of sources, analyzing it in the context of on-going investigations, and linking it to other pieces of knowledge pertaining to the investigation. This paper outlines a modular architecture for the management of linked data in the law enforcement domain and discusses legal and policy issues related to workflows and information sharing in this context.
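The kind of cross-source linking the abstract describes can be sketched with plain (subject, predicate, object) triples; the source names, identifiers and predicates below are illustrative assumptions, not part of the paper's architecture:

```python
# Minimal sketch: link facts from two hypothetical sources as
# (subject, predicate, object) triples -- all names are illustrative.

def link(*sources):
    """Union the triples from each source into one linked graph."""
    graph = set()
    for triples in sources:
        graph.update(triples)
    return graph

# Hypothetical records acquired from two separate systems.
case_db = {("person:42", "name", "J. Doe"),
           ("person:42", "seenAt", "location:7")}
vehicle_db = {("vehicle:9", "registeredTo", "person:42"),
              ("location:7", "label", "Main St")}

graph = link(case_db, vehicle_db)

# Query: everything asserted about person:42 across both sources.
facts = sorted((p, o) for s, p, o in graph if s == "person:42")
print(facts)  # [('name', 'J. Doe'), ('seenAt', 'location:7')]
```

A real deployment would use a proper RDF store with shared ontologies rather than bare tuples, but the linking idea is the same: once entities share identifiers, facts from independent silos compose into one queryable graph.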
An architecture for establishing legal semantic workflows in the context of integrated law enforcement
A previous version of this paper was presented at the Third Workshop on Legal Knowledge and the Semantic Web (LK&SW-2016), EKAW-2016, November 19th, Bologna, Italy. Traditionally, the integration of data from multiple sources is done on an ad-hoc basis for each task. This leads to "silos" that prevent sharing data across different agencies or tasks, and cannot cope with the modern environment, where workflows, tasks, and priorities frequently change. Operating within the Data to Decision Cooperative Research Centre (D2D CRC), the authors are currently involved in the Integrated Law Enforcement Project, which has the goal of developing a federated data platform that will enable the execution of integrated analytics on data accessed from different external and internal sources, thereby providing effective support to an investigator or analyst working to evaluate evidence and manage lines of inquiry in an investigation. Technical solutions should also operate ethically, in compliance with the law, and subject to good governance principles.
MULTI-X, a State-of-the-Art Cloud-Based Ecosystem for Biomedical Research
With the exponential growth of clinical data, and the fast development of AI technologies, researchers are facing unprecedented challenges in managing data storage, scalable processing, and analysis capabilities for heterogeneous multisourced datasets. Beyond the complexity of executing data-intensive workflows over large-scale distributed data, the reproducibility of computed results is of paramount importance to validate scientific discoveries. In this paper, we present MULTI-X, a cross-domain research-oriented platform, designed for collaborative and reproducible science. This cloud-based framework simplifies the logistical challenges of implementing data analytics and AI solutions by providing pre-configured environments with ad-hoc scalable computing resources and secure distributed storage, to efficiently build, test, share and reproduce scientific pipelines. An exemplary use-case in the area of cardiac image analysis is presented, together with the practical application of the platform for the analysis of ~20,000 subjects of the UK-Biobank database.
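As a rough illustration of the reproducibility concern, and not of MULTI-X's actual API, a pipeline run can be fingerprinted from its code version, parameters and input identifiers, so that two runs claiming the same result can be compared; all names here are hypothetical:

```python
import hashlib
import json

def run_fingerprint(code_version, params, input_ids):
    """Deterministic ID for a pipeline run: the same code + parameters +
    inputs always yield the same fingerprint, so results can be traced
    back to exactly what produced them."""
    payload = json.dumps(
        {"code": code_version, "params": params, "inputs": sorted(input_ids)},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]

# Two identical runs share a fingerprint; changing the code version breaks it.
a = run_fingerprint("v1.2.0", {"threshold": 0.5}, ["subj-001", "subj-002"])
b = run_fingerprint("v1.2.0", {"threshold": 0.5}, ["subj-002", "subj-001"])
c = run_fingerprint("v1.2.1", {"threshold": 0.5}, ["subj-001", "subj-002"])
print(a == b, a == c)  # True False
```

Platforms of this kind typically go further, pinning the full execution environment (container images, library versions) rather than just a version string, but content-addressed run identifiers are a common building block.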
Introducing distributed dynamic data-intensive (D3) science: Understanding applications and infrastructure
A common feature across many science and engineering applications is the
amount and diversity of data and computation that must be integrated to yield
insights. Data sets are growing larger and becoming distributed; and their
location, availability and properties are often time-dependent. Collectively,
these characteristics give rise to dynamic distributed data-intensive
applications. While "static" data applications have received significant
attention, the characteristics, requirements, and software systems for the
analysis of large volumes of dynamic, distributed data, and data-intensive
applications have received relatively less attention. This paper surveys
several representative dynamic distributed data-intensive application
scenarios, provides a common conceptual framework to understand them, and
examines the infrastructure used in support of applications.
After the success of DevOps introduce DataOps in enterprise culture
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. Many organizations have implemented DevOps processes successfully, allowing areas such as development, operations, security and quality to work together. This cooperation, and the processes associated with these areas, are producing excellent results. Organizations are developing many applications that support operations and are producing large amounts of data. This data has significant value because it can be used in analysis, reporting and, more recently, data science projects to support decisions.
It is time to make decisions supported by data; this requires transforming organizations into data-driven organizations, with processes for handling this data across all teams.
This dissertation follows a design science research approach to apply multiple analytical
methods and perspectives to create an artifact. The type of evidence within this methodology
is a systematic literature review, with the goal of attaining insights into the current state-of-the-art research on DataOps implementation. Additionally, proven best practices from the industry
are examined in depth to further strengthen the credibility. Thereby, the systematic literature
review shall be used to pinpoint, analyze, and comprehend the obtainable empirical studies
and research questions. This methodology supports the main goal of this dissertation, to
develop and propose evidence-based practice guidelines for the DataOps implementation
that can be followed by organizations.
Technologies and Applications for Big Data Value
This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications for the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to technology frameworks which are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is then arranged in two parts. The first part “Technologies and Methods” contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part “Processes and Applications” details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the European data community's nucleus to bring together businesses with leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; second, practitioners and industry experts engaged in data-driven systems, software design and deployment projects who are interested in employing these advanced methods to address real-world problems.