Configuration Management of Distributed Systems over Unreliable and Hostile Networks
The economic incentive of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially its command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems.
This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which gives full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of a minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration.
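As a loose illustration of that last point, the sketch below shows what an idempotent base resource written in an imperative general-purpose language could look like; the function name, parameters, and structure are assumptions made for this example and are not the thesis prototype's actual API.

```python
import os

# Hypothetical idempotent "file" base resource: running it any number of times
# converges to the same state and reports whether anything actually changed.
def file_resource(path, content, mode=0o644):
    changed = False

    current = None
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as f:
            current = f.read()

    if current != content:
        with open(path, "w", encoding="utf-8") as f:
            f.write(content)
        changed = True

    if os.stat(path).st_mode & 0o777 != mode:
        os.chmod(path, mode)
        changed = True

    return changed

# Applying the same declaration twice: the second run is a no-op.
file_resource("/tmp/motd", "managed by configuration management\n")
file_resource("/tmp/motd", "managed by configuration management\n")
```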
Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation through laboratory testing, two case studies, and expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools under high-load and low-memory conditions, and against packet loss and corruption. Owing to the asynchronous nature of the Hidden Master architecture, only the research prototype was adaptable to a network without a stable topology.
The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of an organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in practice, both for defining new configurations in unexpected situations using the base resources and for abstracting them using standard language features, and that such a system seems easy to learn.
Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers of improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
DenMerD: a feature enhanced approach to radar beam blockage correction with edge-cloud computing
In the field of meteorology, the global radar network is indispensable for detecting weather phenomena and offering early warning services. Nevertheless, radar data frequently exhibit anomalies, including gaps and clutter, arising from atmospheric refraction, equipment malfunctions, and other factors, resulting in diminished data quality. Traditional radar blockage correction methods, such as interpolating from approximate radial information and supplementing missing data, often fail to effectively exploit potential patterns in massive radar data, because the large volume of data precludes a thorough analysis and understanding of the inherent complex patterns and dependencies through simple interpolation or supplementation techniques. Fortunately, edge computing possesses certain data processing capabilities and cloud centers boast substantial computational power, which together can collaboratively offer timely computation and storage for the correction of radar beam blockage. To this end, an edge-cloud collaboration-driven deep learning model named DenMerD is proposed in this paper, which includes a dense connection module and a merge distribution (MD) unit. Compared to existing models such as RC-FCN, DenseNet, and VGG, this model greatly improves key performance metrics, with a 30.7% improvement in Critical Success Index (CSI), a 30.1% improvement in Probability of Detection (POD), and a 3.1% improvement in False Alarm Rate (FAR). It also performs well on the Structural Similarity Index Measure (SSIM) metric compared to its counterparts. These findings underscore the efficacy of the design in improving feature propagation and beam blockage correction accuracy, and also highlight the potential and value of mobile edge computing in processing large-scale meteorological data.
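For reference, the three verification scores quoted above are standard contingency-table metrics; a minimal sketch of how they are conventionally computed (the counts are illustrative, not values from the paper):

```python
def verification_scores(hits, misses, false_alarms):
    """Standard radar/precipitation verification scores from a contingency table."""
    csi = hits / (hits + misses + false_alarms)  # Critical Success Index
    pod = hits / (hits + misses)                 # Probability of Detection
    far = false_alarms / (hits + false_alarms)   # False Alarm Rate (lower is better)
    return csi, pod, far

# Example with made-up counts:
csi, pod, far = verification_scores(hits=80, misses=10, false_alarms=15)
print(round(csi, 3), round(pod, 3), round(far, 3))  # 0.762 0.889 0.158
```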
GeoYCSB: A Benchmark Framework for the Performance and Scalability Evaluation of Geospatial NoSQL Databases
The proliferation of geospatial applications has tremendously increased the variety, velocity, and volume of spatial data that data stores have to manage. Traditional relational databases reveal limitations in handling such big geospatial data, mainly due to their rigid schema requirements and limited scalability. Numerous NoSQL databases have emerged and actively serve as alternative data stores for big spatial data. This study presents a framework, called GeoYCSB, developed for benchmarking NoSQL databases with geospatial workloads. To develop GeoYCSB, we extend YCSB, a de facto benchmark framework for NoSQL systems, by integrating into its design architecture the new components necessary to support geospatial workloads. GeoYCSB supports both microbenchmarks and macrobenchmarks and facilitates the use of real datasets in both. It is extensible to evaluate any NoSQL database that supports spatial queries, using geospatial workloads performed on datasets of any geometric complexity. We use GeoYCSB to benchmark two leading document stores, MongoDB and Couchbase, and present the experimental results and analysis. Finally, we demonstrate the extensibility of GeoYCSB by including a new dataset consisting of complex geometries and using it to benchmark a system with a wide variety of geospatial queries: Apache Accumulo, a wide-column store, with the GeoMesa framework applied on top.
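To give a concrete sense of the kind of operation a geospatial workload exercises, the sketch below issues a generic radius query against MongoDB via pymongo; the connection string, collection, and field names are assumptions for illustration and are not part of GeoYCSB's interface.

```python
from pymongo import MongoClient, GEOSPHERE

# Assumed local MongoDB instance and an illustrative "places" collection.
client = MongoClient("mongodb://localhost:27017")
coll = client["geo_benchmark"]["places"]

# A 2dsphere index is required for spherical geometry queries.
coll.create_index([("location", GEOSPHERE)])

# Find documents within 1 km of a point (GeoJSON order: [longitude, latitude]).
nearby = coll.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [-96.8, 32.78]},
            "$maxDistance": 1000,
        }
    }
})
print(len(list(nearby)))  # number of matching documents
```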
Towards A Practical High-Assurance Systems Programming Language
Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task: it requires considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of the systems and the nuances in them.
Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.
To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers into the verification process.
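The property-based testing idea mentioned above can be illustrated independently of Cogent: check, over many generated inputs, that a low-level implementation agrees with its purely functional specification. The sketch below uses Python's hypothesis library purely as a stand-in; it is not Cogent's testing framework.

```python
from hypothesis import given, strategies as st

def spec_sum(xs):
    # Abstract, purely functional specification.
    return sum(xs)

def impl_sum(xs):
    # Stand-in for a low-level implementation under test.
    total = 0
    for x in xs:
        total += x
    return total

@given(st.lists(st.integers()))
def test_impl_refines_spec(xs):
    # Refinement-style property: the implementation agrees with the spec on all inputs.
    assert impl_sum(xs) == spec_sum(xs)

test_impl_refines_spec()  # hypothesis runs the property over many generated lists
```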
Interactive visualizations of unstructured oceanographic data
The newly founded company Oceanbox is creating a novel oceanographic forecasting system to provide oceanography as a service. These services use mathematical models that generate large hydrodynamic data sets as unstructured triangular grids with high-resolution model areas. Oceanbox makes the model results accessible in a web application. New visualizations are needed to accommodate land-masking and large data volumes.
In this thesis, we propose using a k-d tree to spatially partition unstructured triangular grids to provide the look-up times needed for interactive visualizations. A k-d tree, called FsKDTree, is implemented in F#. This thesis also describes the implementation of dynamic tiling map layers to visualize current barbs, scalar fields, and particle streams. The current barb layer queries data from the data server with the help of the k-d tree and displays it in the browser. Scalar fields and particle streams are implemented using WebGL, which enables the rendering of triangular grids. Stream particle visualization effects are implemented as velocity advection computed on the GPU with textures.
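FsKDTree itself is written in F#; the following sketch (in Python, for illustration only) shows the basic mechanics of how a 2-D k-d tree partitions points and answers nearest-neighbour look-ups, the kind of query an interactive layer relies on.

```python
import math

class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    # Split alternately on x and y at the median point.
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))

def nearest(node, target, best=None):
    if node is None:
        return best
    if best is None or math.dist(node.point, target) < math.dist(best, target):
        best = node.point
    diff = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, target, best)
    if abs(diff) < math.dist(best, target):  # the other half-plane may still be closer
        best = nearest(far, target, best)
    return best

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))  # -> (8, 1)
```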
The new visualizations are used in Oceanbox's production systems, and spatial indexing has been integrated into Oceanbox's archive retrieval system. FsKDTree improves tree creation times by up to 4x and search times by up to 13x compared to the equivalent .NET C# implementation. Finally, current barb, scalar field, and particle stream visualizations run at 60 FPS even for the largest model areas provided by the service.
LKP: Simulation of a Load Balancing Design Using the NTH Method
Load balancing is a technique for distributing traffic load evenly across two or more connection paths, so that traffic runs optimally, throughput is maximized, response time is reduced, and overload on any single connection path is avoided. Load balancing is used when a server's number of users has exceeded its maximum capacity. Load balancing also distributes workload evenly across two or more computers, network links, CPUs, hard drives, or other resources in order to achieve optimal resource utilization.
MEDIA DATA NUSANTARA is a company focused on Internet Service Provider work, including installations such as WiFi. ISP companies such as MEDIA DATA NUSANTARA typically use BGP to control and manage traffic from different sources within a multi-homed network (connected to more than one ISP/Internet Service Provider). BGP is highly scalable and has a very wide reach in serving network users. Because of the large number of network users, load balancing is needed for each user, and Media Data Nusantara usually uses the NTH method to configure load balancing for users.
In this internship project (Kerja Praktik), two ISPs are used and configured for load balancing. The mechanism is that the MikroTik router marks packets destined for the internet, then chooses which ISP path they will traverse and balances the load across the two ISPs. A failover technique is also applied to this network: if one gateway connection is down, the other gateway automatically carries all network traffic so that the internet connection is not completely lost. This is done so that internet users in every division of MEDIA DATA NUSANTARA can work optimally. Therefore, to address the problems above, the author conducted a study entitled "Simulasi Desain Load Balancing Dengan Menggunakan Metode Nth" ("Simulation of a Load Balancing Design Using the Nth Method").
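As a rough illustration of the NTH idea (every Nth new connection is marked and routed out a particular gateway, with failover when a link goes down), here is a small simulation in Python; the gateway names and health flags are assumptions made for this sketch and do not reproduce actual MikroTik mangle rules.

```python
import itertools

gateways = ["ISP-1", "ISP-2"]
healthy = {"ISP-1": True, "ISP-2": True}
counter = itertools.count()

def pick_gateway():
    # NTH-style marking with two links: odd connections -> ISP-1, even -> ISP-2.
    choice = gateways[next(counter) % len(gateways)]
    if not healthy[choice]:
        # Failover: send the connection over the remaining healthy link.
        choice = next(g for g in gateways if healthy[g])
    return choice

for conn in range(4):
    print(f"connection {conn} -> {pick_gateway()}")  # alternates ISP-1 / ISP-2

healthy["ISP-2"] = False          # simulate the second gateway going down
print(pick_gateway())             # all new connections now go via ISP-1
```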
The regulation of digital platforms: the case of pagoPA
How can EU regulation affect innovation? The digital revolution: how big data have changed the world and the legal landscape. The regulation of digital platforms in Europe. The digital revolution: how distributed ledger technologies are changing the world and the legal landscape. The regulation of digital payments: the case of pagoPA.
Artificial Intelligence Based Deep Bayesian Neural Network (DBNN) Toward Personalized Treatment of Leukemia with Stem Cells
The dynamic development of computer and software technology in recent years has been accompanied by the expansion and widespread implementation of artificial intelligence (AI) based methods in many aspects of human life. A prominent field where rapid progress has been observed is high-throughput methods in biology, which generate large amounts of data that need to be processed and analyzed. Therefore, AI methods are increasingly applied in the biomedical field, among others for RNA-protein binding site prediction, DNA sequence function prediction, protein-protein interaction prediction, and biomedical image classification. Stem cells are widely used in biomedical research, e.g., in leukemia and other disease studies. Our proposed Deep Bayesian Neural Network (DBNN) approach for the personalized treatment of leukemia has shown significant test accuracy. The DBNN used in this study was able to classify images with an accuracy exceeding 98.73%. This study shows that the DBNN can classify cell cultures based only on unstained light microscope images, which allows their further use. Building such a Bayesian-based model could therefore be of great help during commercial cell culturing, and possibly a first step in the process of creating an automated or semi-automated neural-network-based model for classifying good- and bad-quality cultures once images of them become available.
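The abstract does not spell out the network's internals, so the sketch below only illustrates one common way to approximate a Bayesian neural network for image classification, Monte Carlo dropout in PyTorch; the architecture, input size, and class count are assumptions and do not describe the authors' DBNN.

```python
import torch
import torch.nn as nn

class MCDropoutCNN(nn.Module):
    """Small convolutional classifier with dropout kept active at prediction time."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),
            nn.Linear(32 * 56 * 56, num_classes),  # assumes 224x224 RGB input
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def mc_predict(model, images, samples=20):
    """Average softmax over repeated stochastic forward passes (predictive mean/std)."""
    model.train()  # keep dropout active to sample from the approximate posterior
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(images), dim=1)
                             for _ in range(samples)])
    return probs.mean(0), probs.std(0)

model = MCDropoutCNN()
mean, std = mc_predict(model, torch.randn(4, 3, 224, 224))  # a dummy image batch
```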
Advancements and Challenges in Energy-efficient 6G Mobile Communication Network
The arrival of 6G mobile communication networks is anticipated to revolutionize the technological landscape, bringing about profound innovations. This research paper explores the various technological advancements that will pave the way for the advent of 6G networks, with a particular focus on addressing energy consumption. It is widely recognized that energy efficiency plays a crucial role in the evolution of 6G networks. To enhance network performance, user experience, and resource management, the integration of Artificial Intelligence (AI) is expected to be a pivotal technology. AI-based solutions can effectively optimize energy usage and contribute to the overall efficiency of 6G networks. Furthermore, the incorporation of wireless communication systems, telecommunication, and the Internet of Things (IoT) will be integral to the infrastructure of 6G networks. The need for significant enhancements in 6G networks is also examined in this study. Ensuring the safety and protection of 6G networks from cyber threats becomes increasingly important due to the growing reliance on networked communication and the sensitive nature of transmitted information. Cutting-edge security methods such as homomorphic encryption and blockchain technology may be essential in this regard. Moreover, this research paper explores the impact of 6G networks on various domains and discusses the challenges that must be overcome to unlock the technology’s full potential. To ensure responsible adoption and usage of 6G networks, the development of new business models and regulatory frameworks may be necessary to support their implementation while addressing energy consumption concerns.