Information actors beyond modernity and coloniality in times of climate change: A comparative design ethnography on the making of monitors for sustainable futures in Curaçao and Amsterdam, between 2019-2022
In this dissertation, Mr. Goilo develops a theoretical framework for an Anthropology of Information. The study compares information in the context of modernity in Amsterdam and of coloniality in Curaçao through the making of monitors, and develops five ways of understanding how information can act towards sustainable futures. It also discusses how the two contexts, modernity and coloniality, have been in informational symbiosis for centuries, a symbiosis that is producing negative informational side effects in the age of the Anthropocene. By exploring this modernity-coloniality symbiosis of information, the author shows how scholars, policymakers, and data analysts can act on the historical and structural roots of contemporary global inequities in the production and distribution of information. Ultimately, five theses propose conditions for the collective production of knowledge towards a more sustainable planet.
Timing, origin, and potential global connections of mid-Ediacaran phenomena in South Australia and eastern California
Mid-Ediacaran incised valleys in the Johnnie Formation of eastern California (the Johnnie valleys) and the Wonoka Formation of South Australia (the Wonoka canyons) are of interest for their unusually large scale and broad time concordance with the largest negative carbon-isotope anomaly in Earth history (the Shuram excursion) and the emergence of multicellular life (the Ediacara fauna). The Johnnie valleys and Wonoka canyons have been widely accepted as originating in a submarine setting at a continental margin. My new data suggest an alternative scenario: that both features were cut subaerially concomitant with sea-level lowering in excess of 200 m, and were subsequently drowned and filled by marine sediments.
Critical evidence includes 1) the presence in the basal fill of both valley systems of polymictic conglomerate/breccia with a quartz sand matrix that is locally associated with stratified quartz sandstone, suggesting both local and far-traveled fill components; 2) multiple upward-fining, polymictic conglomerate-based cycles in the basal Wonoka canyon fill; 3) beds and blocks of giant ooid packstone-grainstone indicative of shallow marine sedimentation during the early stages of Johnnie valley filling; 4) the observed transition in the direction of paleoflow in the Wonoka from stratified boulder conglomerate to sandstone and siltstone event beds; and 5) regional restoration of the northern Flinders Ranges indicating that several deep canyons in the Wonoka are > 20 km inboard of the paleoshelf edge. Modern submarine canyons rarely incise that far into continental shelves.
My new carbon isotopic data demonstrate negative carbon-13 (δ13C) values in the basal Johnnie valley fill, indicating that like the Wonoka canyons, the Johnnie valleys are bracketed by the Shuram excursion. Additionally, in South Australia, regional allochthonous salt breakout is observed at the same stratigraphic level as the canyon-cutting unconformity, with no evidence for triggering by regional crustal shortening or deep marine non-deposition. Clasts from diapiric breccia and the basal Wonoka canyon fill share sedimentologic, petrographic, and geochemical characteristics indicating the presence of diapiric contributions to the canyon fill, and that allochthonous salt and the canyons interacted dynamically at the Earth’s surface during the Ediacaran.
Each of these observations is more consistent with the expectations of a subaerial rather than submarine setting. I hypothesize that the Johnnie valleys and Wonoka canyons were cut by a combination of fluvial incision and subaerial mass wasting, before being drowned. Sea-level lowering is thought to have been triggered by the ~580 Ma Gaskiers glaciation. My interpretation is based on high-resolution physical stratigraphic mapping supported by sub-meter scale 3-D drone imagery, geochemical analysis (δ13C, δ18O, δ26Mg, Mg/Ca), structural restoration, as well as sedimentologic and petrographic analysis. The overall interpretation has several implications for connections between mid-Ediacaran phenomena globally. Given that the Johnnie valleys and Wonoka canyons are stratigraphically bracketed by negative δ13C values putatively correlated with the Shuram excursion, my data suggest that the Shuram excursion may encompass rather than postdate the Gaskiers glaciation in eastern California and South Australia, and that the onset of the excursion may be diachronous at a global scale.
My interpretation presents the first outcrop evidence for subaerial erosion and non-deposition as a mechanism capable of triggering appreciable salt breakout. The suggested occurrence of regional isolation and rapid environmental change closely precedes the emergence of the Ediacara fauna, and presents new context for the organisms and the sediments in which they are recorded.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Design of new algorithms for gene network reconstruction applied to in silico modeling of biomedical data
Doctoral Program in Biotechnology, Engineering and Chemical Technology. Research line: Engineering, Data Science and Bioinformatics. Program code: DBI. Line code: 111.
The root causes of disease are still poorly understood. The success of current therapies is limited because persistent diseases are frequently treated based on their symptoms rather than the underlying cause of the disease. Therefore, biomedical research is experiencing a technology-driven shift to data-driven, holistic approaches to better characterize the molecular mechanisms causing disease. Using omics data as an input, emerging disciplines like network biology attempt to model the relationships between biomolecules. To this end, gene co-expression networks arise as a promising tool for deciphering the relationships between genes in large transcriptomic datasets. However, because of their low specificity and high false positive rate, they demonstrate a limited capacity to retrieve the disrupted mechanisms that lead to disease onset, progression, and maintenance. Within the context of statistical modeling, we dove deeper into the reconstruction of gene co-expression networks with the specific goal of discovering disease-specific features directly from expression data. Using ensemble techniques, which combine the results of various metrics, we were able to more precisely capture biologically significant relationships between genes. We were also able to find de novo potential disease-specific features with the help of prior biological knowledge and the development of new network inference techniques.
Through our different approaches, we analyzed large gene sets across multiple samples and used gene expression as a surrogate marker for the inherent biological processes, reconstructing robust gene co-expression networks that are simple to explore. By mining disease-specific gene co-expression networks, we arrive at a useful framework for identifying new omics-phenotype associations from conditional expression datasets. In this sense, understanding diseases from the perspective of biological network perturbations will improve personalized medicine, impacting rational biomarker discovery, patient stratification and drug design, and ultimately leading to more targeted therapies.
Universidad Pablo de Olavide de Sevilla. Departamento de Deporte e Informática.
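The ensemble idea described above — combining the results of several association metrics into one consensus network — can be sketched as follows. This is a minimal illustration, not the dissertation's actual method: the choice of Pearson plus Spearman and the 0.8 threshold are assumptions made for the example.

```python
import numpy as np
from scipy.stats import spearmanr

def ensemble_coexpression_network(expr, threshold=0.8):
    """Combine two association metrics into one consensus adjacency matrix.

    expr: (n_genes, n_samples) expression matrix.
    Returns a boolean gene-gene adjacency matrix with no self-edges.
    """
    pearson = np.corrcoef(expr)                # metric 1: linear association
    spearman, _ = spearmanr(expr, axis=1)      # metric 2: rank (monotonic) association
    # simple ensemble: average the absolute scores of both metrics
    consensus = (np.abs(pearson) + np.abs(spearman)) / 2
    adjacency = consensus >= threshold
    np.fill_diagonal(adjacency, False)         # drop self-edges
    return adjacency

# toy example: 4 genes, 6 samples; genes 0-2 co-vary, gene 3 is independent noise
rng = np.random.default_rng(0)
base = rng.normal(size=6)
expr = np.vstack([base,
                  base * 2 + 0.01 * rng.normal(size=6),
                  -base,
                  rng.normal(size=6)])
adj = ensemble_coexpression_network(expr)
```

Averaging the metrics is the simplest ensemble scheme; rank-aggregation or voting across more metrics follows the same pattern.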
Identifying Relevant Features of CSE-CIC-IDS2018 Dataset for the Development of an Intrusion Detection System
Intrusion detection systems (IDSs) are essential elements of IT systems. Their key component is a classification module that continuously evaluates some features of the network traffic and identifies possible threats. Its efficiency is greatly affected by the right selection of the features to be monitored. Therefore, the identification of a minimal set of features that are necessary to safely distinguish malicious traffic from benign traffic is indispensable in the course of the development of an IDS. This paper presents the preprocessing and feature selection workflow as well as its results in the case of the CSE-CIC-IDS2018 on AWS dataset, focusing on five attack types. To identify the relevant features, six feature selection methods were applied, and the final ranking of the features was elaborated based on their average score. Next, several subsets of the features were formed based on different ranking threshold values, and each subset was tried with five classification algorithms to determine the optimal feature set for each attack type. During the evaluation, four widely used metrics were taken into consideration.
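The workflow described above — score features with several selectors, average the normalized scores into one ranking, then evaluate threshold-based subsets with a classifier — can be sketched as follows. The two selectors, single classifier, and synthetic data below are illustrative stand-ins, not the paper's actual six methods, five algorithms, or dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for labeled flow records (the real study used
# CSE-CIC-IDS2018 on AWS).
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)

# Step 1: score features with several selection methods, min-max normalize
# each method's scores, and average them into a single ranking.
scores = []
for score_fn in (lambda X, y: f_classif(X, y)[0],
                 lambda X, y: mutual_info_classif(X, y, random_state=0)):
    s = score_fn(X, y)
    scores.append((s - s.min()) / (s.max() - s.min()))
avg_score = np.mean(scores, axis=0)

# Step 2: form feature subsets at different ranking thresholds and evaluate
# each subset with a classifier (cross-validated accuracy here; the paper
# considered four metrics).
for threshold in (0.2, 0.4, 0.6):
    keep = avg_score >= threshold
    if not keep.any():   # skip empty subsets
        continue
    acc = cross_val_score(RandomForestClassifier(random_state=0),
                          X[:, keep], y, cv=3).mean()
    print(f"threshold={threshold}: {keep.sum()} features, accuracy={acc:.3f}")
```

Sweeping the threshold exposes the trade-off the paper studies: a smaller monitored feature set versus classification quality per attack type.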
AI: Limits and Prospects of Artificial Intelligence
The emergence of artificial intelligence has triggered enthusiasm and promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attendant economic hopes and fears, utopias and dystopias, that are associated with the current and future development of artificial intelligence.
Reconstruction and Synthesis of Human-Scene Interaction
In this thesis, we argue that the 3D scene is vital for understanding, reconstructing, and synthesizing human motion. We present several approaches that take the scene into consideration in reconstructing and synthesizing Human-Scene Interaction (HSI). We first observe that state-of-the-art pose estimation methods ignore the 3D scene and hence reconstruct poses that are inconsistent with the scene. We address this by proposing a pose estimation method that takes the 3D scene explicitly into account. We call our method PROX, for Proximal Relationships with Object eXclusion. We leverage the data generated using PROX and build a method to automatically place 3D scans of people with clothing in scenes. The core novelty of our method is encoding the proximal relationships between the human and the scene in a novel HSI model, called POSA, for Pose with prOximitieS and contActs. POSA is limited to static HSI, however. We propose a real-time method for synthesizing dynamic HSI, which we call SAMP, for Scene-Aware Motion Prediction. SAMP enables virtual humans to navigate cluttered indoor scenes and naturally interact with objects. Data-driven kinematic models, like SAMP, can produce high-quality motion when applied in environments similar to those shown in the dataset. However, when applied to new scenarios, kinematic models can struggle to generate realistic behaviors that respect scene constraints. In contrast, we present InterPhys, which uses adversarial imitation learning and reinforcement learning to train physically-simulated characters that perform scene interaction tasks in a physically plausible and life-like manner.
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open-access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances in DSmT, dealing mainly with modified Proportional Conflict Redistribution rules (PCR) of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
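The PCR5 rule mentioned above can be illustrated on the smallest non-trivial case: two sources over a two-element frame {A, B}. This is a minimal sketch following the standard PCR5 definition (conjunctive consensus, then proportional redistribution of each partial conflict to the elements that generated it); the input masses are made-up numbers, and the book's contributions extend well beyond this basic rule.

```python
def pcr5_two_element(m1, m2):
    """PCR5 combination of two basic belief assignments on the frame {A, B}.

    m1, m2: dicts with masses on 'A', 'B', and 'AB' (= A union B), each
    summing to 1. The conjunctive rule is applied first; then each partial
    conflict m1(A)*m2(B) (and m2(A)*m1(B)) is redistributed back to A and B
    proportionally to the masses that produced it.
    """
    # conjunctive rule: mass of each set is the sum over intersecting pairs
    m = {
        'A': m1['A'] * (m2['A'] + m2['AB']) + m1['AB'] * m2['A'],
        'B': m1['B'] * (m2['B'] + m2['AB']) + m1['AB'] * m2['B'],
        'AB': m1['AB'] * m2['AB'],
    }
    # proportional conflict redistribution (PCR5)
    for a, b in (('A', 'B'), ('B', 'A')):
        conflict = m1[a] * m2[b]          # partial conflict from this pair
        if conflict > 0:
            m[a] += m1[a] ** 2 * m2[b] / (m1[a] + m2[b])
            m[b] += m2[b] ** 2 * m1[a] / (m1[a] + m2[b])
    return m

# illustrative inputs: two sources that partially disagree
m1 = {'A': 0.6, 'B': 0.3, 'AB': 0.1}
m2 = {'A': 0.2, 'B': 0.5, 'AB': 0.3}
fused = pcr5_two_element(m1, m2)
```

Unlike Dempster's rule, no conflict mass is discarded or pushed onto the whole frame: each partial conflict returns to its own generating elements, which is what preserves the neutrality properties discussed in the text.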
Because more applications of DSmT have emerged in the years since the fourth volume appeared in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
Computational Approaches to Drug Profiling and Drug-Protein Interactions
Despite substantial increases in R&D spending within the pharmaceutical industry, de novo drug design has become a time-consuming endeavour. High attrition rates led to a long period of stagnation in drug approvals. Due to the extreme costs associated with introducing a drug to the market, locating and understanding the reasons for clinical failure is key to future productivity. As part of this PhD, three main contributions were made in this respect. First, the web platform LigNFam enables users to interactively explore similarity relationships between ‘drug-like’ molecules and the proteins they bind. Secondly, two deep-learning-based binding site comparison tools were developed, competing with the state of the art over benchmark datasets. The models have the ability to predict off-target interactions and potential candidates for target-based drug repurposing. Finally, the open-source ScaffoldGraph software was presented for the analysis of hierarchical scaffold relationships and has already been used in multiple projects, including integration into a virtual screening pipeline to increase the tractability of ultra-large screening experiments. Together, and with existing tools, the contributions made will aid in the understanding of drug-protein relationships, particularly in the fields of off-target prediction and drug repurposing, helping to design better drugs faster.