317 research outputs found
The effects of banking market structure on corporate cash holdings and the value of cash
We investigate the impact of the local banking market structure on the level of corporate cash holdings and the value of cash. We find that, in more concentrated banking markets, firms increase their cash holdings by issuing more equity. The marginal value of $1 of cash increases by 10 cents with a one-standard-deviation increase in bank concentration. The positive relationship between bank concentration and the value of cash is robust to a rich set of tests, such as restricting the sample to firms with access to bond markets or firms using syndicated loans, and is more pronounced for financially constrained firms. We also explore the mechanism; our results suggest that in more concentrated banking markets, firms demand more cash to shield against default risk.
Security and Privacy for Modern Wireless Communication Systems
This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues in modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretical foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node-edge-cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; security and privacy design for underwater communications networks.
Towards Scalable OLTP Over Fast Networks
Online Transaction Processing (OLTP) underpins real-time data processing in many mission-critical applications, from banking to e-commerce.
These applications typically issue short-duration, latency-sensitive transactions that demand immediate processing.
High-volume applications, such as Alibaba's e-commerce platform, achieve peak transaction rates as high as 70 million transactions per second, exceeding the capacity of a single machine.
Instead, distributed OLTP database management systems (DBMS) are deployed across multiple powerful machines.
Historically, such distributed OLTP DBMSs have been primarily designed to avoid network communication, a paradigm largely unchanged since the 1980s.
However, fast networks challenge the conventional belief that network communication is the main bottleneck.
In particular, emerging network technologies, like Remote Direct Memory Access (RDMA), radically alter how data can be accessed over a network.
RDMA's primitives allow direct access to the memory of a remote machine within an order of magnitude of local memory access.
This development invalidates the notion that network communication is the primary bottleneck.
Given that traditional distributed database systems have been designed with the premise that the network is slow, they cannot efficiently exploit these fast network primitives, which requires us to reconsider how we design distributed OLTP systems.
This thesis focuses on the challenges RDMA presents and its implications on the design of distributed OLTP systems.
First, we examine distributed architectures to understand data access patterns and scalability in modern OLTP systems.
Drawing on these insights, we advocate a distributed storage engine optimized for high-speed networks.
The storage engine serves as the foundation of a database, ensuring efficient data access through three central components: indexes, synchronization primitives, and buffer management (caching).
With the introduction of RDMA, the landscape of data access has undergone a significant transformation.
This requires a comprehensive redesign of the storage engine components to exploit the potential of RDMA and similar high-speed network technologies.
Thus, as the second contribution, we design RDMA-optimized tree-based indexes — especially applicable for disaggregated databases to access remote data efficiently.
We then turn our attention to the unique challenges of RDMA.
One-sided RDMA, one of the network primitives introduced by RDMA, presents a performance advantage in enabling remote memory access while bypassing the remote CPU and the operating system.
This allows the remote CPU to process transactions uninterrupted, with no requirement to be on hand for network communication. However, because traditional CPU-driven primitives are bypassed, specialized one-sided RDMA synchronization primitives are required.
We found that existing RDMA one-sided synchronization schemes are unscalable or, even worse, fail to synchronize correctly, leading to hard-to-detect data corruption.
As our third contribution, we address this issue by offering guidelines to build scalable and correct one-sided RDMA synchronization primitives.
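The hazard that one-sided synchronization must avoid can be illustrated with a small simulation. The sketch below is hypothetical Python, not the thesis's implementation; real systems would issue NIC-executed atomics such as RDMA compare-and-swap (ibverbs IBV_WR_ATOMIC_CMP_AND_SWP). It models a remote lock word that is only ever modified through an atomic CAS, the way a one-sided RDMA spinlock would operate without involving the remote CPU.

```python
import threading

class RemoteMemory:
    """Toy stand-in for a remote node's memory region (illustrative only)."""
    def __init__(self):
        self.words = {}
        self._nic = threading.Lock()  # models the NIC's atomic execution unit

    def compare_and_swap(self, addr, expected, desired):
        # Models an RDMA atomic CAS: executed by the NIC, not the remote CPU,
        # and always returns the value observed before the swap attempt.
        with self._nic:
            current = self.words.get(addr, 0)
            if current == expected:
                self.words[addr] = desired
            return current

    def write(self, addr, value):
        # Models a plain one-sided RDMA write.
        with self._nic:
            self.words[addr] = value

LOCK_ADDR = 0x10

def acquire(mem):
    # Spin until the CAS observes the unlocked state (0) and installs 1.
    while mem.compare_and_swap(LOCK_ADDR, 0, 1) != 0:
        pass

def release(mem):
    mem.write(LOCK_ADDR, 0)
```

Because every lock transition goes through the CAS, two clients can never both observe the unlocked state, so data guarded by the remote lock word stays consistent even though the remote CPU never participates.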
Finally, recognizing that maintaining all data in memory becomes economically unattractive, we propose a distributed buffer manager design that efficiently utilizes cost-effective NVMe flash storage.
By leveraging low-latency RDMA messages, our buffer manager provides a transparent memory abstraction, accessing the aggregated DRAM and NVMe storage across nodes.
Central to our approach is a distributed caching protocol that dynamically caches data.
With this approach, our system can outperform RDMA-enabled in-memory distributed databases while managing larger-than-memory datasets efficiently.
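The tiering idea behind such a buffer manager can be sketched in a few lines. The following is a minimal single-process illustration (names and structure are hypothetical, not the thesis design): a fixed-size DRAM page cache with LRU eviction that writes victims back to a simulated NVMe tier.

```python
from collections import OrderedDict

class TieredBufferManager:
    """Minimal sketch: DRAM page cache with LRU eviction over an NVMe tier."""
    def __init__(self, dram_pages):
        self.dram = OrderedDict()  # page_id -> bytes, ordered by recency
        self.flash = {}            # backing NVMe tier (simulated)
        self.capacity = dram_pages
        self.hits = self.misses = 0

    def read(self, page_id):
        if page_id in self.dram:
            self.hits += 1
            self.dram.move_to_end(page_id)  # refresh LRU position
            return self.dram[page_id]
        self.misses += 1
        page = self.flash[page_id]          # fetch from flash on a miss
        self._install(page_id, page)
        return page

    def write(self, page_id, data):
        self._install(page_id, data)

    def _install(self, page_id, data):
        self.dram[page_id] = data
        self.dram.move_to_end(page_id)
        if len(self.dram) > self.capacity:
            victim, page = self.dram.popitem(last=False)  # evict LRU page
            self.flash[victim] = page                     # write back to NVMe
```

A real distributed design additionally has to decide which node caches a page and how to keep remote copies coherent, which is where the low-latency RDMA messages mentioned above come in.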
Two-Stage Vehicle Routing Problems with Profits and Buffers: Analysis and Metaheuristic Optimization Algorithms
This thesis considers the Two-Stage Vehicle Routing Problem (VRP) with Profits and Buffers, which generalizes various optimization problems that are relevant for practical applications, such as the Two-Machine Flow Shop with Buffers and the Orienteering Problem. Two optimization problems are considered for the Two-Stage VRP with Profits and Buffers, namely the minimization of total time while respecting a profit constraint and the maximization of total profit under a budget constraint. The former generalizes the makespan minimization problem for the Two-Machine Flow Shop with Buffers, whereas the latter is comparable to the problem of maximizing score in the Orienteering Problem.
For the three problems, a theoretical analysis is performed regarding computational complexity, existence of optimal permutation schedules (where all vehicles traverse the same nodes in the same order) and potential gaps in attainable solution quality between permutation schedules and non-permutation schedules. The obtained theoretical results are visualized in a table that gives an overview of various subproblems belonging to the Two-Stage VRP with Profits and Buffers, their theoretical properties and how they are connected.
For the Two-Machine Flow Shop with Buffers and the Orienteering Problem, two metaheuristics 2BF-ILS and VNSOP are presented that obtain favorable results in computational experiments when compared to other state-of-the-art algorithms. For the Two-Stage VRP with Profits and Buffers, an algorithmic framework for Iterative Search Algorithms with Variable Neighborhoods (ISAVaN) is proposed that generalizes aspects from 2BF-ILS as well as VNSOP. Various algorithms derived from that framework are evaluated in an experimental study. The evaluation methodology used for all computational experiments in this thesis takes the performance during the run time into account and demonstrates that algorithms for structurally different problems, which are encompassed by the Two-Stage VRP with Profits and Buffers, can be evaluated with similar methods.
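To make the Orienteering Problem concrete: given node profits and a travel budget, a tour from a start to an end node must maximize the collected profit. The snippet below is a simple cheapest-insertion greedy written purely for illustration; it shares only the problem definition with VNSOP and is not the thesis's algorithm.

```python
import math

def tour_length(route, coords):
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(route, route[1:]))

def greedy_orienteering(coords, profits, start, end, budget):
    """Repeatedly insert the unvisited node with the best profit-per-added-
    distance ratio, as long as the tour still fits the travel budget."""
    route = [start, end]
    unvisited = set(range(len(coords))) - {start, end}
    while unvisited:
        best = None  # (ratio, node, insertion position)
        for node in unvisited:
            for i in range(1, len(route)):
                extra = (math.dist(coords[route[i - 1]], coords[node])
                         + math.dist(coords[node], coords[route[i]])
                         - math.dist(coords[route[i - 1]], coords[route[i]]))
                if tour_length(route, coords) + extra <= budget:
                    ratio = profits[node] / (extra + 1e-9)
                    if best is None or ratio > best[0]:
                        best = (ratio, node, i)
        if best is None:  # no remaining node fits the budget
            break
        _, node, i = best
        route.insert(i, node)
        unvisited.discard(node)
    return route, sum(profits[n] for n in route)
```

Metaheuristics such as VNSOP improve on constructions like this by systematically perturbing and re-optimizing the tour within several neighborhood structures.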
The results show that the most suitable choice for the components in these algorithms is dependent on the properties of the problem and the considered evaluation criteria. However, a number of similarities to algorithms that perform well for the Two-Machine Flow Shop with Buffers and the Orienteering Problem can be identified. The framework unifies these characteristics, providing a spectrum of algorithms that can be adapted to the specifics of the considered Vehicle Routing Problem.
1 Introduction
2 Background
2.1 Problem Motivation
2.2 Formal Definition of the Two-Stage VRP with Profits and Buffers
2.3 Review of Literature on Related Vehicle Routing Problems
2.3.1 Two-Stage Vehicle Routing Problems
2.3.2 Vehicle Routing Problems with Profits
2.3.3 Vehicle Routing Problems with Capacity- or Resource-based Restrictions
2.4 Preliminary Remarks on Subsequent Chapters
3 The Two-Machine Flow Shop Problem with Buffers
3.1 Review of Literature on Flow Shop Problems with Buffers
3.1.1 Algorithms and Metaheuristics for Flow Shops with Buffers
3.1.2 Two-Machine Flow Shop Problems with Buffers
3.1.3 Blocking Flow Shops
3.1.4 Non-Permutation Schedules
3.1.5 Other Extensions and Variations of Flow Shop Problems
3.2 Theoretical Properties
3.2.1 Computational Complexity
3.2.2 The Existence of Optimal Permutation Schedules
3.2.3 The Gap Between Permutation Schedules and Non-Permutation Schedules
3.3 A Modification of the NEH Heuristic
3.4 An Iterated Local Search for the Two-Machine Flow Shop Problem with Buffers
3.5 Computational Evaluation
3.5.1 Algorithms for Comparison
3.5.2 Generation of Problem Instances
3.5.3 Parameter Values
3.5.4 Comparison of 2BF-ILS with other Metaheuristics
3.5.5 Comparison of 2BF-OPT with NEH
3.6 Summary
4 The Orienteering Problem
4.1 Review of Literature on Orienteering Problems
4.2 Theoretical Properties
4.3 A Variable Neighborhood Search for the Orienteering Problem
4.4 Computational Evaluation
4.4.1 Measurement of Algorithm Performance
4.4.2 Choice of Algorithms for Comparison
4.4.3 Problem Instances
4.4.4 Parameter Values
4.4.5 Experimental Setup
4.4.6 Comparison of VNSOP with other Metaheuristics
4.5 Summary
5 The Two-Stage Vehicle Routing Problem with Profits and Buffers
5.1 Theoretical Properties of the Two-Stage VRP with Profits and Buffers
5.1.1 Computational Complexity of the General Problem
5.1.2 Existence of Permutation Schedules in the Set of Optimal Solutions
5.1.3 The Gap Between Permutation Schedules and Non-Permutation Schedules
5.1.4 Remarks on Restricted Cases
5.1.5 Overview of Theoretical Results
5.2 A Metaheuristic Framework for the Two-Stage VRP with Profits and Buffers
5.3 Experimental Results
5.3.1 Problem Instances
5.3.2 Experimental Results for O_{max R, Cmax≤B}
5.3.3 Experimental Results for O_{min Cmax, R≥Q}
5.4 Summary
Bibliography
List of Figures
List of Tables
List of Algorithms
LIPIcs, Volume 274, ESA 2023, Complete Volume
Fundamentals
Volume 1 establishes the foundations of this new field. It goes through all the steps from data collection, summarization, and clustering to different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and how to enhance scalability on diverse computing architectures, ranging from embedded systems to large computing clusters.
Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico
Conference proceedings info:
ICICT 2023: 2023 The 6th International Conference on Information and Computer Technologies
Raleigh, NC, United States, March 24-26, 2023
Pages 529-542
We provide a model for the systematic implementation of telemedicine within a large evaluation center for COVID-19 in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable, data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created, in cooperation with different institutions, including: the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time that informs supply chains and resource allocation in anticipation of a surge in COVID-19 cases.
A High-Performance Design, Implementation, Deployment, and Evaluation of The Slim Fly Network
Novel low-diameter network topologies such as Slim Fly (SF) offer significant cost and power advantages over the established Fat Tree, Clos, or Dragonfly topologies. To spearhead the adoption of low-diameter networks, we design, implement, deploy, and evaluate the first real-world SF installation. We focus on deployment, management, and operational aspects of our test cluster with 200 servers and carefully analyze performance. We demonstrate techniques for simple cabling and cabling validation, as well as a novel high-performance routing architecture for InfiniBand-based low-diameter topologies. Our real-world benchmarks show SF's strong performance for many modern workloads such as deep neural network training, graph analytics, or linear algebra kernels. SF outperforms non-blocking Fat Trees in scalability while offering comparable or better performance and lower cost for large network sizes. Our work can facilitate deploying SF, while the associated (open-source) routing architecture is fully portable and applicable to accelerate any low-diameter interconnect.
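The topology itself comes from the published McKay-Miller-Siran (MMS) construction over a finite field, so its headline property can be checked directly. The snippet below is an independent illustration (not the paper's deployment code): it builds the q = 5 instance, a 50-router, radix-7 network, and verifies by BFS that its diameter is 2.

```python
from collections import deque
from itertools import product

def slim_fly(q=5, xi=2):
    """MMS/Slim Fly graph for prime q (here q = 5, primitive root xi = 2).
    Routers form two groups of q*q; generator sets X and X' are the even
    and odd powers of xi mod q, giving network radix (3q - 1) / 2 = 7."""
    X  = {pow(xi, e, q) for e in range(0, q - 1, 2)}   # even powers: {1, 4}
    Xp = {pow(xi, e, q) for e in range(1, q - 1, 2)}   # odd powers:  {2, 3}
    adj = {(g, a, b): set() for g, a, b in product((0, 1), range(q), range(q))}
    for x, y, yp in product(range(q), repeat=3):
        if (y - yp) % q in X:                  # intra-group links, group 0
            adj[(0, x, y)].add((0, x, yp))
        if (y - yp) % q in Xp:                 # intra-group links, group 1
            adj[(1, x, y)].add((1, x, yp))
    for x, y, m in product(range(q), repeat=3):
        c = (y - m * x) % q                    # inter-group links: y = m*x + c
        adj[(0, x, y)].add((1, m, c))
        adj[(1, m, c)].add((0, x, y))
    return adj

def diameter(adj):
    worst = 0
    for src in adj:                            # BFS from every router
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist.values()))
    return worst
```

Every router reaches every other in at most two hops, which is the structural source of SF's cost advantage: fewer cables and switches for the same bisection performance than a three-level Fat Tree.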
Chapter 34 - Biocompatibility of nanocellulose: Emerging biomedical applications
Nanocellulose has already proved to be a highly relevant material for biomedical applications, owing to its outstanding mechanical properties and, more importantly, its biocompatibility. Nevertheless, despite intensive previous research, a notable number of emerging applications are still being developed. Interestingly, this drive is not based solely on the features of nanocellulose, but also depends heavily on sustainability. The three core nanocelluloses encompass cellulose nanocrystals (CNCs), cellulose nanofibrils (CNFs), and bacterial nanocellulose (BNC). All these types of nanocellulose display highly interesting biomedical properties per se, after modification, and when used in composite formulations. Novel applications that use nanocellulose include well-known areas, namely wound dressings, implants, indwelling medical devices, scaffolds, and novel printed scaffolds. Their cytotoxicity and biocompatibility, assessed using recent methodologies, are thoroughly analyzed to reinforce their near-future applicability. In pristine form, none of the core nanocelluloses display cytotoxicity. However, CNF has the highest potential to fail long-term biocompatibility, since it tends to trigger inflammation. On the other hand, never-dried BNC displays remarkable biocompatibility. Despite this, all nanocelluloses clearly represent flag bearers of future superior biomaterials, elite materials in the urgent replacement of our petrochemical dependence.
Recommended from our members
Computational Methods in Multi-Messenger Astrophysics using Gravitational Waves and High Energy Neutrinos
This dissertation seeks to describe advancements made in computational methods for multi-messenger astrophysics (MMA) using gravitational waves (GW) and neutrinos during Advanced LIGO (aLIGO)'s first through third observing runs (O1-O3) and, looking forward, to describe novel computational techniques suited to the challenges of both the burgeoning MMA field and high-performance computing as a whole.
The first two chapters provide an overview of MMA as it pertains to gravitational wave/high energy neutrino (GWHEN) searches, including a summary of expected astrophysical sources as well as GW, neutrino, and gamma-ray detectors used in their detection. These are followed in the third chapter by an in-depth discussion of LIGO’s timing system, particularly the diagnostic subsystem, describing both its role in MMA searches and the author’s contributions to the system itself.
The fourth chapter provides a detailed description of the Low-Latency Algorithm for Multi-messenger Astrophysics (LLAMA), the GWHEN pipeline developed by the author and used in O2 and O3. Relevant past multi-messenger searches are described first, followed by the O2 and O3 analysis methods, the pipeline’s performance, scientific results, and finally, an in-depth account of the library’s structure and functionality. In particular, the author’s high-performance multi-order coordinates (MOC) HEALPix image analysis library, HPMOC, is described. HPMOC increases performance of HEALPix image manipulations by several orders of magnitude vs. naive single-resolution approaches while presenting a simple high-level interface and should prove useful for diverse future MMA searches. The performance improvements it provides for LLAMA are also covered.
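While the DEPTH encodings described below are novel, the underlying multi-order idea follows the standard NUNIQ convention used by MOC maps, in which an (order, pixel index) pair is packed into a single integer. A minimal sketch of that convention (not HPMOC's actual code):

```python
def nuniq_encode(order, ipix):
    """Pack a HEALPix (order, pixel index) pair into one NUNIQ integer.
    The offset 4 * 4**order keeps pixels of different orders from colliding:
    valid values for order o occupy [4**(o+1), 4**(o+2))."""
    return ipix + 4 * 4 ** order

def nuniq_decode(uniq):
    """Recover (order, ipix) from a NUNIQ integer."""
    order = (uniq.bit_length() - 3) // 2  # inverts the 4 * 4**order offset
    return order, uniq - 4 * 4 ** order
```

Sorting NUNIQ values groups pixels by resolution order, which is what lets MOC-aware libraries operate directly on mixed-resolution skymaps instead of flattening everything to the finest resolution.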
The final chapter of this dissertation builds on the approaches taken in developing HPMOC, presenting several novel methods for efficiently storing and analyzing large data sets, with applications to MMA and other data-intensive fields. A family of depth-first multi-resolution ordering of HEALPix images — DEPTH9, DEPTH19, and DEPTH40 — is defined, along with algorithms and use cases where it can improve on current approaches, including high-speed streaming calculations suitable for serverless compute or FPGAs.
For performance-constrained analyses on HEALPix data (e.g. image analysis in multi-messenger search pipelines) using SIMD processors, breadth-first data structures can provide short-circuiting calculations in a data-parallel way on compressed data; a simple compression method is described with application to further improving LLAMA performance.
A new storage scheme and associated algorithms for efficiently compressing and contracting tensors of varying sparsity are presented; these demuxed tensors (D-Tensors) have asymptotic time and space complexity equivalent to optimal representations of both dense and sparse matrices, and could be used as a universal drop-in replacement to reduce code complexity and developer effort while improving the performance of existing non-optimized numerical code. Finally, the big bucket hash table (B-Table), a novel type of hash table making guarantees on data layout (vs. load factor), is described, along with the optimizations it allows (such as hardware acceleration, online rebuilds, and hard real-time applications) that are not possible with existing hash table approaches. These innovations are presented in the hope that some will prove useful for improving future MMA searches and other data-intensive applications.