Data journeys in the sciences
This groundbreaking, open access volume analyses and compares data practices across several fields through the study of specific cases of data journeys. It brings together leading scholars in the philosophy, history and social studies of science to achieve two goals: tracking the travel of data across different spaces, times and domains of research practice; and documenting how such journeys affect the use of data as evidence and the knowledge being produced. The volume captures the opportunities, challenges and concerns involved in making data move from the sites in which they are originally produced to sites where they can be integrated with other data, analysed and re-used for a variety of purposes. The in-depth study of data journeys provides the necessary ground to examine disciplinary, geographical and historical differences and similarities in data management, processing and interpretation, thus identifying the key conditions of possibility for the widespread data sharing associated with Big and Open Data. The chapters are ordered in sections that broadly correspond to different stages of the journeys of data, from their generation to the legitimisation of their use for specific purposes. Additionally, the preface to the volume provides a variety of alternative “roadmaps” aimed at serving the different interests and entry points of readers, and the introduction provides a substantive overview of what data journeys can teach about the methods and epistemology of research. Funders: European Commission, Australian Research Council, Alan Turing Institute.
Chapter 34 - Biocompatibility of nanocellulose: Emerging biomedical applications
Nanocellulose has already proved to be a highly relevant material for biomedical applications, owing to its outstanding mechanical properties and, more importantly, its biocompatibility. Nevertheless, despite intensive prior research, a notable number of emerging applications are still being developed. Interestingly, this drive is not based solely on the features of nanocellulose but also depends heavily on sustainability. The three core nanocelluloses are cellulose nanocrystals (CNCs), cellulose nanofibrils (CNFs), and bacterial nanocellulose (BNC). All of these types display highly interesting biomedical properties per se, after modification, and when used in composite formulations. Novel applications of nanocellulose include well-known areas such as wound dressings, implants, indwelling medical devices, scaffolds, and novel printed scaffolds. Their cytotoxicity and biocompatibility, assessed with recent methodologies, are thoroughly analyzed to reinforce their near-future applicability. In pristine form, none of the core nanocelluloses displays cytotoxicity. However, CNF has the highest potential to fail long-term biocompatibility, since it tends to trigger inflammation. On the other hand, never-dried BNC displays remarkable biocompatibility. Nonetheless, all nanocelluloses clearly stand out as flag bearers of superior future biomaterials, elite candidates in the urgent move away from our petrochemical dependence.
Computational Methods in Multi-Messenger Astrophysics using Gravitational Waves and High Energy Neutrinos
This dissertation describes advances in computational methods for multi-messenger astrophysics (MMA) using gravitational waves (GW) and neutrinos during Advanced LIGO (aLIGO)’s first through third observing runs (O1-O3) and, looking forward, describes novel computational techniques suited to the challenges of both the burgeoning MMA field and high-performance computing as a whole.
The first two chapters provide an overview of MMA as it pertains to gravitational wave/high energy neutrino (GWHEN) searches, including a summary of expected astrophysical sources as well as GW, neutrino, and gamma-ray detectors used in their detection. These are followed in the third chapter by an in-depth discussion of LIGO’s timing system, particularly the diagnostic subsystem, describing both its role in MMA searches and the author’s contributions to the system itself.
The fourth chapter provides a detailed description of the Low-Latency Algorithm for Multi-messenger Astrophysics (LLAMA), the GWHEN pipeline developed by the author and used in O2 and O3. Relevant past multi-messenger searches are described first, followed by the O2 and O3 analysis methods, the pipeline’s performance, scientific results, and finally an in-depth account of the library’s structure and functionality. In particular, the author’s high-performance multi-order coordinates (MOC) HEALPix image analysis library, HPMOC, is described. HPMOC speeds up HEALPix image manipulations by several orders of magnitude compared with naive single-resolution approaches while presenting a simple high-level interface, and should prove useful for diverse future MMA searches. The performance improvements it provides for LLAMA are also covered.
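As an illustration of the indexing that multi-order HEALPix images rest on, the sketch below encodes and decodes the standard NUNIQ scheme (from the IVOA MOC standard), in which pixels of different resolutions coexist in a single flat integer array. This is a minimal sketch of the scheme itself, assuming nested pixel ordering; it is not HPMOC's API.

```python
# Minimal sketch of NUNIQ multi-order HEALPix indexing (not the HPMOC API).
# A pixel at HEALPix order `order` with nested index `ipix` is stored as the
# single integer uniq = 4 * 4**order + ipix, so one flat array can hold
# pixels of mixed resolution -- the property multi-order images exploit.
import numpy as np

def order_ipix_to_uniq(order, ipix):
    """Pack (order, nested ipix) into a single NUNIQ integer."""
    order = np.asarray(order, dtype=np.int64)
    return 4 * 4**order + np.asarray(ipix, dtype=np.int64)

def uniq_to_order_ipix(uniq):
    """Decode NUNIQ integers back into (order, nested ipix)."""
    uniq = np.asarray(uniq, dtype=np.int64)
    order = np.log2(uniq // 4).astype(np.int64) // 2
    return order, uniq - 4 * 4**order

# A coarse order-0 pixel and two finer pixels live in one array:
uniq = order_ipix_to_uniq([0, 1, 4], [0, 1, 1])
print(uniq)                      # [   4   17 1025]
print(uniq_to_order_ipix(uniq))  # (array([0, 1, 4]), array([0, 1, 1]))
```

Because one coarse pixel stands in for 4**(delta order) fine pixels, operations over largely uniform sky regions touch far fewer elements than in a single-resolution map, which is where the order-of-magnitude savings come from.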
The final chapter of this dissertation builds on the approaches taken in developing HPMOC, presenting several novel methods for efficiently storing and analyzing large data sets, with applications to MMA and other data-intensive fields. A family of depth-first multi-resolution orderings of HEALPix images (DEPTH9, DEPTH19, and DEPTH40) is defined, along with algorithms and use cases where it can improve on current approaches, including high-speed streaming calculations suitable for serverless compute or FPGAs.
For performance-constrained analyses of HEALPix data (e.g. image analysis in multi-messenger search pipelines) on SIMD processors, breadth-first data structures can enable short-circuiting calculations in a data-parallel way on compressed data; a simple compression method is described, with application to further improving LLAMA's performance.
A new storage scheme and associated algorithms for efficiently compressing and contracting tensors of varying sparsity is presented; these demuxed tensors (D-Tensors) have asymptotic time and space complexity equivalent to optimal representations of both dense and sparse matrices, and could serve as a universal drop-in replacement that reduces code complexity and developer effort while improving the performance of existing non-optimized numerical code. Finally, the big bucket hash table (B-Table), a novel type of hash table that makes guarantees on data layout (rather than load factor), is described, along with the optimizations it allows (such as hardware acceleration, online rebuilds, and hard real-time applications) that are not possible with existing hash table approaches. These innovations are presented in the hope that some will prove useful for improving future MMA searches and other data-intensive applications.
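To make the layout-guarantee idea concrete, here is a purely hypothetical sketch of a bucketed table whose memory layout is fixed up front: every lookup is a bounded linear scan of a fixed-size bucket rather than a pointer chase, which is the general property that makes hardware acceleration and hard real-time bounds plausible. Nothing below reflects the actual B-Table design; it only illustrates the layout-over-load-factor trade-off.

```python
# Hypothetical fixed-layout bucketed hash table (NOT the B-Table design).
# The slot array never moves or resizes, so worst-case lookup cost is
# bounded by bucket_size -- a layout guarantee, at the price of failing
# (rather than degrading) when a bucket fills up.
class BucketTable:
    def __init__(self, n_buckets=64, bucket_size=32):
        self.n_buckets, self.bucket_size = n_buckets, bucket_size
        # Fixed-size slot arrays: the layout is decided once, up front.
        self.slots = [[None] * bucket_size for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.slots[hash(key) % self.n_buckets]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, slot in enumerate(bucket):
            if slot is None or slot[0] == key:
                bucket[i] = (key, value)  # bounded, predictable scan
                return
        # A real design would rebuild online instead of failing here.
        raise RuntimeError("bucket full")

    def get(self, key, default=None):
        for slot in self._bucket(key):
            if slot is not None and slot[0] == key:
                return slot[1]
        return default

t = BucketTable()
t.put("event", 42)
print(t.get("event"))  # 42, found within at most bucket_size probes
```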
Sensors and Systems for Indoor Positioning
This volume reprints the articles that appeared in the Sensors (MDPI) Special Issue “Sensors and Systems for Indoor Positioning”. The published original contributions focus on systems and technologies that enable indoor positioning applications.
Multimedia Forensics
This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and they make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities for media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.
Collected Papers (on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics), Volume XI
This eleventh volume of Collected Papers includes 90 papers comprising 988 pages on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics, written between 2001 and 2022 by the author alone or in collaboration with the following 84 co-authors (alphabetically ordered) from 19 countries: Abhijit Saha, Abu Sufian, Jack Allen, Shahbaz Ali, Ali Safaa Sadiq, Aliya Fahmi, Atiqa Fakhar, Atiqa Firdous, Sukanto Bhattacharya, Robert N. Boyd, Victor Chang, Victor Christianto, V. Christy, Dao The Son, Debjit Dutta, Azeddine Elhassouny, Fazal Ghani, Fazli Amin, Anirudha Ghosha, Nasruddin Hassan, Hoang Viet Long, Jhulaneswar Baidya, Jin Kim, Jun Ye, Darjan Karabašević, Vasilios N. Katsikis, Ieva Meidutė-Kavaliauskienė, F. Kaymarm, Nour Eldeen M. Khalifa, Madad Khan, Qaisar Khan, M. Khoshnevisan, Kifayat Ullah, Volodymyr Krasnoholovets, Mukesh Kumar, Le Hoang Son, Luong Thi Hong Lan, Tahir Mahmood, Mahmoud Ismail, Mohamed Abdel-Basset, Siti Nurul Fitriah Mohamad, Mohamed Loey, Mai Mohamed, K. Mohana, Kalyan Mondal, Muhammad Gulfam, Muhammad Khalid Mahmood, Muhammad Jamil, Muhammad Yaqub Khan, Muhammad Riaz, Nguyen Dinh Hoa, Cu Nguyen Giap, Nguyen Tho Thong, Peide Liu, Pham Huy Thong, Gabrijela Popović, Surapati Pramanik, Dmitri Rabounski, Roslan Hasni, Rumi Roy, Tapan Kumar Roy, Said Broumi, Saleem Abdullah, Muzafer Saračević, Ganeshsree Selvachandran, Shariful Alam, Shyamal Dalapati, Housila P. Singh, R. Singh, Rajesh Singh, Predrag S. Stanimirović, Kasan Susilo, Dragiša Stanujkić, Alexandra Şandru, Ovidiu Ilie Şandru, Zenonas Turskis, Yunita Umniyati, Alptekin Ulutaș, Maikel Yelandi Leyva Vázquez, Binyamin Yusoff, Edmundas Kazimieras Zavadskas, Zhao Loon Wang.
Applications and hardware considerations for quantum computing
Quantum computers are expected one day to solve a set of problems that are practically impossible for classical supercomputers, even allowing for their projected continued improvement. As the field of quantum computing has evolved, the somewhat disparate research areas of algorithms and hardware have become better integrated. Fully optimizing quantum algorithms requires a solid understanding of the quantum hardware, and weighing experimental hardware priorities requires an understanding of the general algorithm requirements. This thesis initially provides an overview of quantum hardware and applications and discusses their interplay in both the near-term and fault-tolerant regimes. A greater focus is placed on trapped-ion architectures, in particular the shuttling-based approach of the Ion Quantum Technology group at the University of Sussex.
A routing algorithm is provided that can efficiently enable all-to-all connectivity for the shuttling-based trapped-ion design without positional swaps. A simulation tool was created and used to develop and characterize routing algorithms. The cost of enabling connectivity in Noisy Intermediate-Scale Quantum (NISQ) devices is an important factor in determining computational power. The core ideas of this routing algorithm are currently being integrated into a software compiler stack that will control real quantum hardware. An error model for the shuttling-based design is presented which makes use of the connectivity time costs obtained from the simulation tool. The error model is used to estimate the computational power (quantum volume) of the design as a function of experimental parameters, and it can help set experimental priorities by identifying the most impactful parameters across particular regimes. A comparison between the shuttling-based trapped-ion design and a superconducting grid that uses logical swaps to enable connectivity is performed using metrics such as quantum volume, and it is found that the trapped-ion design has a substantially lower cost associated with connectivity. Large-scale trapped-ion devices are considered, and the total time required to enable all-to-all connectivity is estimated both for the modular shuttling approach and for the approach that uses small-scale modules connected via photonic interconnects.
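As a toy illustration of how a simulation can attach a time cost to connectivity, the hedged sketch below models a trap's junction layout as a grid graph and prices an interaction between two distant zones by its shortest shuttling path. The grid shape and per-segment traversal time are illustrative assumptions; this is not the thesis's routing algorithm or error model.

```python
# Hypothetical connectivity-cost sketch for a shuttling-based ion trap:
# zones sit at the nodes of a grid of junctions, and the time cost of a
# two-qubit gate between distant zones is the shortest shuttling path
# times an assumed per-segment traversal time.
import networkx as nx

def shuttle_cost(width, height, src, dst, t_segment=1e-5):
    """Estimated shuttling time between zones src and dst on a
    width x height junction grid (t_segment is an assumed constant)."""
    trap = nx.grid_2d_graph(width, height)
    hops = nx.shortest_path_length(trap, src, dst)
    return hops * t_segment

# Corner-to-corner move on a 10x10 grid: 18 segments -> 1.8e-4 s.
print(shuttle_cost(10, 10, (0, 0), (9, 9)))
```

Feeding such per-gate time costs into an error model (idle qubits decohere while others shuttle) is what lets connectivity overhead be translated into an effective quantum volume.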
A review of fault-tolerant methods for quantum chemistry is presented. Resource estimates are provided, down to the required wall-clock time and number of physical qubits, for ground state energy calculations of molecules across different basis set sizes. The basis set size at which a quantum computer can meaningfully outperform a classical supercomputer is estimated; determining the point at which a quantum advantage may be realised can help the field progress by setting realistic expectations and providing a device size to aim for. The impact of hardware considerations such as the code cycle time is investigated by including a wider range of possible surface code error correction configurations. Two distinct methods are investigated that allow the rate of computation to be sped up incrementally, by introducing additional qubits, until the time-optimal limit is reached. The number of physical qubits required to reach a desirable run time is estimated as a function of the hardware's code cycle time, for problems such as ground state estimation of the FeMoco molecule and breaking the encryption of the Bitcoin network. It is found that, for the quantum advantage problems investigated in this work, hardware with considerably slower code cycle times than the 1 µs usually assumed for superconducting qubits will still be able to reach desirable run times, provided enough physical qubits are available.
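To show the shape of such an estimate, here is a back-of-envelope sketch under loudly stated assumptions: logical operations are consumed strictly sequentially, and each takes one code-distance's worth of code cycles on a distance-d surface code. All numbers below are illustrative placeholders, not results from the thesis.

```python
# Hypothetical back-of-envelope wall-clock estimate for a fault-tolerant
# run. Assumes strictly sequential logical operations, each costing
# code_distance code cycles; real estimates also account for magic-state
# distillation, routing, and the parallelism the thesis trades qubits for.
def wall_clock_seconds(logical_op_count, code_distance, cycle_time_s):
    return logical_op_count * code_distance * cycle_time_s

# Placeholder numbers: 1e10 logical operations, distance-27 code,
# 1 microsecond code cycle time.
days = wall_clock_seconds(1e10, 27, 1e-6) / 86400
print(f"{days:.1f} days")  # ~3.1 days; scales linearly with cycle time
```

The linear scaling in cycle time is the point of the closing claim above: a 10x slower code cycle can, in this simple model, be bought back with enough extra qubits to parallelise the computation by the same factor.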
DUNE Offline Computing Conceptual Design Report
This document describes the conceptual design for the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE). The goals of the experiment include 1) studying neutrino oscillations using a beam of neutrinos sent from Fermilab in Illinois to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, 2) studying astrophysical neutrino sources and rare processes, and 3) understanding the physics of neutrino interactions in matter. We describe the development of the computing infrastructure needed to achieve the physics goals of the experiment by storing, cataloging, reconstructing, simulating, and analyzing 30 PB of data per year from DUNE and its prototypes. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions and advanced algorithms as HEP computing evolves. We describe the physics objectives, organization, use cases, and proposed technical solutions.
Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease
Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that attempt to respond to some of these challenges.
The first piece of work aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated on a population of 10 patients with CHD, and areas of improvement were identified.
The second piece of work is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing.
The third piece of work aims to reduce contrast dose in contrast-enhanced MR angiography (MRA). This would reduce risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD.
The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD.