
    A Comprehensive Bibliometric Analysis on Social Network Anonymization: Current Approaches and Future Directions

    In recent decades, social network anonymization has become a crucial research field due to its pivotal role in preserving users' privacy. However, the high diversity of approaches introduced in relevant studies makes it difficult to gain a profound understanding of the field. In response, the current study presents an exhaustive and well-structured bibliometric analysis of the social network anonymization field. Related studies from the period 2007-2022 were collected from the Scopus database and pre-processed. VOSviewer was then used to visualize the network of authors' keywords, and extensive statistical and network analyses were performed to identify the most prominent keywords and trending topics. Additionally, co-word analysis through SciMAT and an Alluvial diagram allowed us to explore the themes of social network anonymization and scrutinize their evolution over time. These analyses culminated in an innovative taxonomy of the existing approaches and an anticipation of potential trends in this domain. To the best of our knowledge, this is the first bibliometric analysis in the social network anonymization field, offering a deeper understanding of the current state and an insightful roadmap for future research in this domain. (Comment: 73 pages, 28 figures)
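    As a rough illustration of the co-word analysis this abstract describes, the sketch below counts keyword co-occurrences across per-paper author-keyword lists; the keywords shown are invented placeholders, and tools such as VOSviewer and SciMAT perform far richer clustering and visualization on top of counts like these.

```python
# Minimal co-word (keyword co-occurrence) analysis sketch.
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one per paper (not the study's data).
papers = [
    ["k-anonymity", "social network", "privacy"],
    ["differential privacy", "social network", "graph anonymization"],
    ["k-anonymity", "graph anonymization", "privacy"],
]

# How often each keyword appears overall.
keyword_freq = Counter(k for kws in papers for k in kws)

# How often each pair of keywords appears in the same paper.
cooccurrence = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooccurrence[(a, b)] += 1

# The most frequent keywords and the strongest co-word links approximate the
# "prominent keywords" and "trending topics" a co-word map visualizes.
print(keyword_freq.most_common(3))
print(cooccurrence.most_common(3))
```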

    Enhanced artificial bee colony-least squares support vector machines algorithm for time series prediction

    Over the past decades, the Least Squares Support Vector Machine (LSSVM) has been widely utilized in prediction tasks across various application domains. Nevertheless, the existing literature shows that the capability of LSSVM is highly dependent on the values of its hyper-parameters, namely the regularization parameter and the kernel parameter, which greatly affect the generalization of LSSVM in prediction tasks. This study proposes a hybrid algorithm, based on the Artificial Bee Colony (ABC) and LSSVM, that consists of three algorithms: ABC-LSSVM, lvABC-LSSVM and cmABC-LSSVM. The lvABC algorithm is introduced to overcome the local optima problem by enriching the searching behaviour using Levy mutation. The cmABC algorithm, which incorporates conventional mutation, addresses the over-fitting or under-fitting problem. The combination of the lvABC and cmABC algorithms, introduced as the Enhanced Artificial Bee Colony-Least Squares Support Vector Machine (eABC-LSSVM), is applied to the prediction of non-renewable natural resource commodity prices. Upon completion of data collection and pre-processing, the eABC-LSSVM algorithm was designed and developed. The predictability of eABC-LSSVM is measured using five statistical metrics: Mean Absolute Percentage Error (MAPE), prediction accuracy, symmetric MAPE (sMAPE), Root Mean Square Percentage Error (RMSPE) and Theil's U. Results show that eABC-LSSVM has a lower prediction error rate than eight hybridization models of LSSVM and Evolutionary Computation (EC) algorithms. In addition, the proposed algorithm is compared to single prediction techniques, namely Support Vector Machines (SVM) and the Back Propagation Neural Network (BPNN). In general, eABC-LSSVM produced more than 90% prediction accuracy, indicating that the proposed algorithm is capable of solving the optimization problem, specifically in the prediction task. The eABC-LSSVM is hoped to be useful to investors and commodity traders in planning their investments and projecting their profits.
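    For reference, the sketch below gives common formulations of the error metrics named in this abstract; Theil's U in particular has several variants in the literature, so the thesis may use a slightly different definition, and the sample values here are made up.

```python
import numpy as np

def mape(y, yhat):
    # Mean Absolute Percentage Error, in percent.
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def smape(y, yhat):
    # Symmetric MAPE: absolute error scaled by the mean magnitude
    # of the actual and predicted values.
    return 100.0 * np.mean(np.abs(y - yhat) / ((np.abs(y) + np.abs(yhat)) / 2))

def rmspe(y, yhat):
    # Root Mean Square Percentage Error, in percent.
    return 100.0 * np.sqrt(np.mean(((y - yhat) / y) ** 2))

def theils_u(y, yhat):
    # One common formulation of Theil's inequality coefficient:
    # RMSE normalized by the root mean squares of both series.
    rmse = np.sqrt(np.mean((y - yhat) ** 2))
    return rmse / (np.sqrt(np.mean(y ** 2)) + np.sqrt(np.mean(yhat ** 2)))

# Toy usage with made-up commodity prices:
y = np.array([100.0, 105.0, 110.0])
yhat = np.array([98.0, 107.0, 111.0])
print(mape(y, yhat), smape(y, yhat), rmspe(y, yhat), theils_u(y, yhat))
```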

    Static analysis of concurrent and distributed systems: concurrent objects and Ethereum Bytecode

    PhD thesis, Universidad Complutense de Madrid, Facultad de Informática, defended 23-01-2020. Nowadays, concurrency and distribution have become a fundamental part of the software development process. The Internet and the increasingly widespread use of multicore processors have influenced the type of applications being developed, and this has led to the creation of several concurrency models. In particular, a concurrency model that is gaining popularity is the actor model, the basis for concurrent objects. In this model, the objects (actors) are the concurrent units. Each object has its own processor and a local state, and communication between them is carried out using message passing. In response to receiving a message, an actor can update its local state, send messages or create new objects. Developing correct concurrent programs is known to be harder than writing sequential ones because of inherent aspects of concurrency such as data races and deadlocks. To ensure the correct behavior of concurrent programs, static analyses and verification techniques have been developed for the diverse existing concurrency models...
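    As a minimal illustration of the actor model described above (a sketch, not code from the thesis), the example below implements an actor whose local state is only ever touched by the thread draining its mailbox, so message handling is free of data races by construction.

```python
import threading
import queue

class Counter:
    """An actor: private local state plus a mailbox drained by its own thread."""

    def __init__(self):
        self.count = 0                   # local state, never shared directly
        self.mailbox = queue.Queue()     # incoming messages
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):
        # Message passing is the only way other code interacts with this actor.
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message == "inc":
                self.count += 1          # state updates happen on one thread only
            elif message == "stop":
                return

actor = Counter()
for _ in range(3):
    actor.send("inc")
actor.send("stop")
actor._thread.join()                     # wait until the mailbox is drained
print(actor.count)                       # 3, with no data race on `count`
```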

    GPGPU Reliability Analysis: From Applications to Large Scale Systems

    Over the past decade, GPUs have become an integral part of mainstream high-performance computing (HPC) facilities. Since applications running on HPC systems are usually long-running, any error or failure could result in a significant loss of scientific productivity and system resources. Even worse, since HPC systems face severe resilience challenges as they progress towards exascale computing, it is imperative to develop a better understanding of the reliability of GPUs. This dissertation fills this gap by providing an understanding of the effects of soft errors on the entire system and on specific applications. To understand system-level reliability, a large-scale study of GPU soft errors in the field is conducted. The occurrences of GPU soft errors are linked to several temporal and spatial features, such as specific workloads, node location, temperature, and power consumption. Further, machine learning models are proposed to predict error occurrences on GPU nodes so as to proactively and dynamically turn the costly error protection mechanisms on or off based on the prediction results. To understand the effects of soft errors at the application level, an effective fault-injection framework is designed to characterize the reliability and resilience of GPGPU applications. This framework reduces the tremendous number of fault-injection locations to a manageable size while still preserving remarkable accuracy, and it is validated with both single-bit and multi-bit fault models on various GPGPU benchmarks. Lastly, taking advantage of the proposed fault-injection framework, this dissertation develops a hierarchical approach to understanding the error resilience characteristics of GPGPU applications at the kernel, CTA, and warp levels. In addition, given that some application outputs corrupted by soft errors may be acceptable, we present a use case showing how to enable low-overhead yet reliable GPU computing for GPGPU applications.
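    As a rough illustration of the single-bit fault model mentioned above (the function names and fault sites are illustrative, not the dissertation's framework, which injects faults into GPU state), the sketch below flips one bit in the IEEE-754 representation of a value, the basic operation a fault injector performs.

```python
import random
import struct

def flip_bit_u32(value: int, bit: int) -> int:
    """Flip one bit of a 32-bit unsigned integer."""
    return (value ^ (1 << bit)) & 0xFFFFFFFF

def inject_float32(x: float, bit: int = None) -> float:
    """Inject a single-bit fault into the float32 representation of x."""
    if bit is None:
        bit = random.randrange(32)        # uniformly chosen fault site
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (faulty,) = struct.unpack("<f", struct.pack("<I", flip_bit_u32(bits, bit)))
    return faulty

# A flip in an exponent bit usually corrupts the value badly; a flip in a low
# mantissa bit may be masked -- the kind of outcome a resilience study
# classifies as silent data corruption (SDC) versus benign.
print(inject_float32(3.14, bit=30))   # exponent bit: large corruption
print(inject_float32(3.14, bit=0))    # low mantissa bit: tiny perturbation
```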

    An Approach to Guide Users Towards Less Revealing Internet Browsers

    When browsing the Internet, HTTP headers enable both clients and servers to send extra data in their requests or responses, such as the User-Agent string. This string contains information related to the sender's device, browser, and operating system. Previous research has shown that numerous privacy and security risks result from exposing sensitive information in the User-Agent string; for example, it enables device and browser fingerprinting as well as user tracking and identification. Our large-scale analysis of thousands of User-Agent strings shows that browsers differ tremendously in the amount of information they include in their User-Agent strings. As such, our work aims at guiding users towards less exposing browsers. To do so, we propose to assign each browser an exposure score based on the information it exposes and on vulnerability records. Our contribution in this work is twofold: first, we provide a full implementation that is ready to be deployed and used by users; second, we conduct a user study to identify the effectiveness and limitations of our proposed approach. Our implementation is based on more than 52 thousand unique browsers. Our performance and validation analysis shows that our solution is accurate and efficient. The source code and data set are publicly available, and the solution has been deployed.
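    As a toy illustration of the exposure-score idea (the token categories, regular expressions, and weights below are invented for illustration; the paper derives its scores from real User-Agent data and vulnerability records), a scorer might look like this:

```python
import re

# Hypothetical weights: more identifying detail => higher exposure.
WEIGHTS = {
    "os_version":      2.0,   # e.g. "Windows NT 10.0", "Android 13"
    "browser_version": 1.5,   # e.g. "Chrome/124.0.6367.62"
    "device_model":    3.0,   # e.g. "SM-G991B"
}

PATTERNS = {
    "os_version":      re.compile(r"(Windows NT [\d.]+|Android [\d.]+|Mac OS X [\d_]+)"),
    "browser_version": re.compile(r"(Chrome|Firefox|Safari|Edg)/[\d.]+"),
    "device_model":    re.compile(r"\bSM-[A-Z0-9]+\b"),
}

def exposure_score(user_agent: str) -> float:
    """Sum the weights of identifying fields found in the User-Agent string."""
    return sum(weight for name, weight in WEIGHTS.items()
               if PATTERNS[name].search(user_agent))

ua = ("Mozilla/5.0 (Linux; Android 13; SM-G991B) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/124.0.6367.62 Mobile Safari/537.36")
print(exposure_score(ua))  # 6.5: OS version + browser version + device model
```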

    Towards an information security framework for government to government transactions : a perspective from East Africa

    The need for a regional framework for information security in e-Government for the East African Community (EAC) has become more urgent with the signing in 2009 of the EAC Common Market Protocol. This protocol will entail more electronic interactions amongst government agencies in the EAC partner states, which are Burundi, Kenya, Rwanda, Tanzania, and Uganda. Government to Government (G2G) transactions are the backbone of e-Government transactions. If a government wants to provide comprehensive services that are easy to use by citizens, employees or businesses, it needs to be able to combine information or services provided by different government agencies or departments. Furthermore, governments must ensure that the services provided are secure, so that citizens trust that an electronic transaction is as good as or better than a manual one. Governments in the EAC must therefore address information security in ways that take into consideration their limited resources and skills for e-Government initiatives. The novel contribution of this study is an information security framework, dubbed the TOG framework, comprising technical, operational, governance, process and maturity models to address the information security requirements of G2G transactions in the EAC. The framework makes reference to standards that can be adopted by the EAC while taking into consideration contextual factors, namely resource, legislative and cultural constraints. The process model uses what is termed a 'Plug and Play' approach, which provides resource-poor countries with a means of addressing information security that can be implemented as and when resources allow, eventually leading to a comprehensive framework. Government agencies can thus start implementation based on the operational and technical guidelines while waiting for governance structures to be put in place, or can specifically address governance requirements where they already exist. Conversely, governments using the same framework can take into consideration existing technologies and operations while putting governance structures in place. As a proof of concept, the proposed framework is applied to a case study of a G2G transaction in Tanzania, and the framework is evaluated against critical success factors.