A New WRR Algorithm for an Efficient Load Balancing System in IoT Networks under SDN
The Internet of Things (IoT) connects various smart objects and manages a vast network using diverse technologies, which presents numerous challenges. Software-defined networking (SDN) addresses the challenges of traditional networks and provides centralized configuration of network entities to preserve network integrity. Furthermore, the uneven distribution of IoT network load depletes IoT device resources. To address this issue, traffic must be distributed evenly, which requires efficient load balancing and, in turn, an efficient architecture for IoT networks. The main goal of this paper is to propose a novel architecture that leverages the potential of SDN, a clustering technique, and a new weighted round-robin (N-WRR) protocol. This architecture aims to achieve load balancing, a crucial aspect of IoT network development, as it ensures the network's efficiency; it also prevents network congestion and ensures efficient data flow by redistributing traffic from overloaded paths to less burdened ones. The simulation results demonstrate that our N-WRR algorithm achieves highly efficient load balancing compared with the simple weighted round-robin (WRR) and with no load balancing at all. Furthermore, our proposed approach enhances throughput, data transfer, and bandwidth availability, resulting in an increase in processed requests.
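For context, the baseline weighted round-robin that N-WRR improves upon can be sketched as follows. The abstract does not give N-WRR's internals, so this is only a minimal illustration of classic smooth WRR scheduling; the server names and weights are made up.

```python
# Classic smooth weighted round-robin (WRR) server selection.
# This is the baseline scheme the paper's N-WRR builds on, not the
# proposed algorithm itself; names and weights are illustrative.

class WeightedRoundRobin:
    """Smooth WRR: on each pick, raise every server's current weight by
    its static weight, choose the largest, then subtract the total."""

    def __init__(self, weights):
        self.weights = dict(weights)            # server -> static weight
        self.current = {s: 0 for s in weights}  # running selection weights
        self.total = sum(weights.values())

    def next_server(self):
        for s, w in self.weights.items():
            self.current[s] += w
        best = max(self.current, key=self.current.get)
        self.current[best] -= self.total
        return best

servers = WeightedRoundRobin({"s1": 5, "s2": 1, "s3": 1})
picks = [servers.next_server() for _ in range(7)]
# Over any 7 consecutive picks, s1 is chosen 5 times and s2, s3 once each,
# with the picks spread out rather than bunched together.
```

The "smooth" variant interleaves heavy and light servers instead of sending long bursts to the heaviest one, which is the property a load balancer needs to avoid overloading a single path.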
Analysis of Internal Control and Fraud Prevention Efforts in Public Sector Accounting
This study aims to analyze the effect of internal control on accounting fraud in the public sector. The research was conducted as a literature review of 10 previous articles, both national and international. The keywords used in the literature search included 'internal control', 'fraud', and 'public sector accounting fraud'. The review of the 10 articles shows that internal control is an effective means of preventing accounting fraud in public sector organizations.
Impact of software design patterns on web application security
Design patterns are recurring solutions to common design problems; they have gained recognition as fundamental tools for efficiently structuring and organizing code. In this context, the question arises of how these patterns can influence the security of web applications, which are often exposed to a wide range of threats and vulnerabilities.
Investigating the influence of software design patterns is crucial since they provide approaches for data validation, authentication, and responsibility segregation. This can help identify and prevent common and specific vulnerabilities, thereby reducing the likelihood of an attack on the application. Web applications handle sensitive information as they provide various services to users, making their security crucial for end users. The research aims to determine how software design patterns contribute to mitigating vulnerabilities in web applications. Controlling and mitigating vulnerabilities is a daily task for developers and incurs costs in software maintainability. An essential aspect of the research is highlighting that, when developing applications based on design patterns, future security incidents can be addressed thanks to well-defined structures and guidelines that guide pattern-based development.
 
Two fast and accurate routines for solving the elliptic Kepler equation for all values of the eccentricity and mean anomaly
Context.
The repetitive solution of Kepler’s equation (KE) is the slowest step for several highly demanding computational tasks in astrophysics. Moreover, a recent work demonstrated that the current solvers face an accuracy limit that becomes particularly stringent for high eccentricity orbits.
Aims.
Here we describe two routines, ENRKE and ENP5KE, for solving KE with both high speed and optimal accuracy, circumventing the abovementioned limit by avoiding the use of derivatives for the critical values of the eccentricity e and mean anomaly M, namely e > 0.99 and M close to the periapsis within 0.0045 rad.
Methods.
The ENRKE routine enhances the Newton-Raphson algorithm with a conditional switch to the bisection algorithm in the critical region, an efficient stopping condition, a rational first guess, and one fourth-order iteration. The ENP5KE routine uses a class of infinite series solutions of KE to build an optimized piecewise quintic polynomial, also enhanced with a conditional switch for close bracketing and bisection in the critical region. High-performance Cython routines are provided that implement these methods, with the option of utilizing parallel execution.
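The Newton-Raphson-with-bisection idea behind ENRKE can be illustrated with a minimal sketch. This is not the published routine: the rational first guess, tailored stopping condition, and fourth-order final iteration are replaced by a crude initial guess and a plain tolerance check, and only M in [0, π] is handled.

```python
import math

def solve_kepler(M, e, tol=1e-14):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric
    anomaly E, with M in [0, pi] and 0 <= e < 1.

    Minimal sketch of Newton-Raphson with a bisection safeguard: keep
    a bracket [lo, hi] around the root and fall back to bisection
    whenever the Newton step would leave it, as happens in the
    critical high-e, low-M region where the derivative is tiny."""
    lo, hi = 0.0, math.pi              # f(lo) <= 0 <= f(hi); f is increasing
    E = min(M + 0.85 * e, math.pi)     # crude first guess (illustrative)
    for _ in range(100):
        f = E - e * math.sin(E) - M
        if abs(f) < tol:
            break
        if f > 0:                      # tighten the bracket around the root
            hi = E
        else:
            lo = E
        E_next = E - f / (1.0 - e * math.cos(E))  # Newton step
        if not (lo < E_next < hi):                # step escaped: bisect
            E_next = 0.5 * (lo + hi)
        E = E_next
    return E

# Near-critical case: e > 0.99 with M close to the periapsis.
E = solve_kepler(0.001, 0.999)
```

Because f(E) = E - e sin E - M is strictly increasing for e < 1, the bracket never loses the root, so the safeguard turns an occasionally divergent Newton iteration into a guaranteed-convergent one at negligible extra cost.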
Results.
These routines outperform other solvers for KE in both accuracy and speed. They solve KE for every e ∈ [0, 1 − ϵ], where ϵ is the machine epsilon, and for every M, at the best accuracy that can be obtained in a given M interval. In particular, since the ENP5KE routine does not involve any transcendental function evaluation in its generation phase, besides a minimum amount in the critical region, it outperforms any other KE solver, including the ENRKE, when the solution E(M) is required for a large number N of values of M.
Conclusions.
The ENRKE routine can be recommended as a general-purpose solver for KE, and the ENP5KE can be the best choice in the large-N regime.
Axencia Galega de Innovación; Agencia Estatal de Investigación | Ref. FIS2017-83762-
Generate fuzzy string-matching to build self attention on Indonesian medical-chatbot
A chatbot is a form of interactive conversation that requires quick and precise answers. The process of identifying answers to users' questions involves string matching and handling incorrect spelling. Therefore, a system that can independently predict and correct letters is highly necessary. The approach used to address this issue is to enhance the fuzzy string-matching method by incorporating several features for self-attention. The fuzzy string-matching combinations employed are Jaro-Winkler distance + Damerau-Levenshtein distance and Damerau-Levenshtein + Rabin-Karp. This combination is used because of its ability not only to match strings but also to correct word typing errors. This research contributes by developing a self-attention mechanism through a modified fuzzy string-matching model with enhanced word feature structures. The goal is to use this self-attention mechanism in constructing the Indonesian medical bidirectional encoder representations from transformers (IM-BERT). This serves as a foundation for additional features to provide accurate answers in the Indonesian medical question-and-answer system, achieving an exact match of 85.7% and an F1-score of 87.6%.
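One building block of these combinations, the Damerau-Levenshtein distance (in its common optimal-string-alignment form), can be sketched as follows. The Jaro-Winkler and Rabin-Karp components and the paper's self-attention features are omitted, and the small vocabulary is illustrative only.

```python
def damerau_levenshtein(a, b):
    """Optimal string alignment distance: Levenshtein edit distance
    (insert, delete, substitute) extended with transposition of two
    adjacent characters, which is what lets it correct swapped-letter
    typos in a single edit."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[-1][-1]

# Typo correction by nearest dictionary word (illustrative vocabulary):
vocab = ["demam", "batuk", "pusing"]   # Indonesian: fever, cough, dizzy
query = "dmeam"                        # transposed typo of "demam"
best = min(vocab, key=lambda w: damerau_levenshtein(query, w))
# "dmeam" is one adjacent transposition away from "demam",
# so the query is corrected to "demam".
```

A plain Levenshtein distance would charge two edits for the swapped letters; counting the transposition as one edit is what makes this family of metrics well suited to spelling correction in a chatbot pipeline.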
Polynomial time and dependent types
We combine dependent types with linear type systems that soundly and completely capture polynomial time computation. We explore two systems for capturing polynomial time: one system that disallows construction of iterable data, and one, based on the LFPL system of Martin Hofmann, that controls construction via a payment method. Both of these are extended to full dependent types via Quantitative Type Theory, allowing for arbitrary computation in types alongside guaranteed polynomial time computation in terms. We prove the soundness of the systems using a realisability technique due to Dal Lago and Hofmann. Our long-term goal is to combine the extensional reasoning of type theory with intensional reasoning about the resources intrinsically consumed by programs. This paper is a step along this path, which we hope will lead both to practical systems for reasoning about programs’ resource usage, and to theoretical use as a form of synthetic computational complexity theory
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Towards an induction principle for nested data types
A well-known problem in the theory of dependent types is how to handle so-called nested data types. These data types are difficult to program and to reason about in total dependently typed languages such as Agda and Coq. In particular, it is not easy to derive a canonical induction principle for such types. Working towards a solution to this problem, we introduce dependently typed folds for nested data types. Using the nested data type Bush as a guiding example, we show how to derive its dependently typed fold and induction principle. We also discuss the relationship between dependently typed folds and the more traditional higher-order folds.
Comment: 11 pages
Guided rewriting and constraint satisfaction for parallel GPU code generation
Graphics Processing Units (GPUs) are notoriously hard to optimise for manually due to their scheduling and memory hierarchies. What is needed are good automatic code generators and optimisers for such parallel hardware. Functional approaches such as Accelerate, Futhark and LIFT leverage a high-level algorithmic Intermediate Representation (IR) to expose parallelism and abstract the implementation details away from the user. However, producing efficient code for a given accelerator remains challenging. Existing code generators depend on user input to choose a subset of hard-coded optimisations, or on automated exploration of the implementation search space. The former suffers from a lack of extensibility, while the latter is too costly due to the size of the search space. A hybrid approach is needed, where a space of valid implementations is built automatically and explored with the aid of human expertise.
This thesis presents a solution combining user-guided rewriting and automatically generated constraints to produce high-performance code. The first contribution is an automatic tuning technique to find a balance between performance and memory consumption. Leveraging its functional patterns, the LIFT compiler is empowered to infer tuning constraints and limit the search to valid tuning combinations only.
Next, the thesis reframes parallelisation as a constraint satisfaction problem. Parallelisation constraints are extracted automatically from the input expression, and a solver is used to identify valid rewriting. The constraints truncate the search space to valid parallel mappings only by capturing the scheduling restrictions of the GPU in the context of a given program. A synchronisation barrier insertion technique is proposed to prevent data races and improve the efficiency of the generated parallel mappings.
The final contribution of this thesis is the guided rewriting method, where the user encodes a design space of structural transformations using high-level IR nodes called rewrite points. These strongly typed pragmas express macro rewrites and expose design choices as explorable parameters. The thesis proposes a small set of reusable rewrite points to achieve tiling, cache locality, data reuse and memory optimisation.
A comparison with the vendor-provided handwritten kernels of the ARM Compute Library and the TVM code generator demonstrates the effectiveness of this thesis' contributions. With convolution as a use case, LIFT-generated direct and GEMM-based convolution implementations are shown to perform on par with the state-of-the-art solutions on a mobile GPU. Overall, this thesis demonstrates that a functional IR lends itself well to user-guided and automatic rewriting for high-performance code generation.