
    Security risk modeling in smart grid critical infrastructures in the era of big data and artificial intelligence

    Smart grids (SG) emerged as a response to the need to modernize the electricity grid. Current security tools are effective at identifying and preventing known attacks in the smart grid, but they fall short of the requirements of advanced cybersecurity. Adequate protection against cyber threats requires a whole set of processes and tools. A more flexible mechanism is therefore needed to examine data sets holistically and detect otherwise unknown threats. This is possible with modern big data analytics based on deep learning, machine learning, and artificial intelligence. Machine learning, which can rely on adaptive baseline behaviour models, is effective at detecting new, unknown attacks. Combining known and unknown data sets through predictive analytics and machine intelligence will decisively change the security landscape. This paper identifies the trends, problems, and challenges of cybersecurity for smart grid critical infrastructures in the era of big data and artificial intelligence. We present an overview of the SG with its architectures and functionalities and show how technology has shaped the modern electricity grid. A qualitative risk assessment method is presented. The most significant contributions to the reliability, safety, and efficiency of the electrical network are described. We identify threat levels and propose suitable security countermeasures. Finally, cybersecurity risk assessment methods for the smart grid's supervisory control and data acquisition (SCADA) systems are presented.
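    As an illustration of the adaptive-baseline idea mentioned in the abstract, the sketch below (not the paper's method; the SCADA feature set, contamination rate, and thresholds are assumptions) fits an unsupervised anomaly detector on historical telemetry and flags deviating samples in new data.

    # Minimal sketch (assumed setup, not the paper's method): unsupervised anomaly
    # detection on grid telemetry using a baseline learned from historical data.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Hypothetical SCADA features per sample: voltage (V), frequency (Hz), line load.
    baseline = rng.normal(loc=[230.0, 50.0, 0.6], scale=[2.0, 0.02, 0.1], size=(5000, 3))
    incoming = np.vstack([
        rng.normal(loc=[230.0, 50.0, 0.6], scale=[2.0, 0.02, 0.1], size=(95, 3)),   # normal
        rng.normal(loc=[180.0, 48.5, 0.95], scale=[5.0, 0.10, 0.05], size=(5, 3)),  # anomalous
    ])

    # Fit the baseline behaviour model; contamination is the assumed anomaly rate.
    model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    flags = model.predict(incoming)  # -1 = anomaly, 1 = normal
    print(f"flagged {np.sum(flags == -1)} of {len(incoming)} incoming samples")

    In a deployed setting the baseline would be refreshed periodically so the model adapts to seasonal and operational changes in grid behaviour.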

    SERVICE-BASED AUTOMATION OF SOFTWARE CONSTRUCTION ACTIVITIES

    The reuse of software units, such as classes, components, and services, requires professional knowledge. Today, a multiplicity of different software unit technologies, supporting tools, and related activities are used in reuse processes. Each of these relevant reuse elements may also include a high number of variations and may differ in the level and quality of the reuse knowledge required. In such an environment of increasing variation, and therefore an increasing need for knowledge, software engineers must obtain that knowledge to be able to perform software unit reuse activities. Many different reuse activities exist for a software unit; some typical knowledge-intensive activities are transformation, integration, and deployment. In addition to the amount of knowledge required for such activities, other difficulties exist. The global industrial environment makes it challenging to identify sources of, and access to, knowledge. Typically, such sources (e.g., repositories) are built to search and retrieve information about software units, not about the reuse activity knowledge required for a particular unit. Additionally, the knowledge has to be learned by inexperienced software engineers and therefore interpreted. This interpretation may lead to variations in the reuse result that differ from the result intended by the knowledge creator, which makes it difficult to exchange knowledge between software engineers or global teams. Moreover, the results of reuse activities have to be repeatable and sustainable. In such a scenario, knowledge about software reuse activities has to be exchangeable to inexperienced software engineers without the problems mentioned above. The literature shows a lack of techniques to store and subsequently distribute relevant reuse activity knowledge among software engineers. The central aim of this thesis is to enable inexperienced software engineers to use the knowledge required to perform reuse activities without experiencing the aforementioned problems. The reuse activities transformation, integration, and deployment were selected as the foundation for the research. Because they handle a software unit at the construction level, these activities are called Software Construction Activities (SCAcs) throughout the research. To achieve the aim, specialised software construction activity models have been created and combined with an abstract software unit model. As a result, different SCAc knowledge is described and combined with the different software unit artefacts needed by the SCAcs. Additionally, management (e.g., the execution of an SCAc) is provided in a service-oriented environment. Because of its focus on reuse activities, an approach that avoids changing the knowledge level of software engineers, and the abstract view of software units and activities, the object of investigation differs from other approaches that aim to solve the problem of insufficient reuse activity knowledge. The research devised novel abstraction models to describe SCAcs as knowledge models related to the relevant information of software units. The models and the targeted environment were created using standard technologies and could therefore be realised easily in a real-world environment. Software engineers were able to perform single SCAcs without having previously acquired the necessary knowledge, and the risk of failed reuse decreases because single activities can be performed.
    The analysis of the research results is based on a case study. An example reuse environment was created and tested in a case study to prove the operational capability of the approach. The main result of the research is a proven concept enabling inexperienced software engineers to reuse software units by reusing SCAcs. The research shows a significant reduction in reuse time and learning effort.
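    To make the combination of activity knowledge and software unit information more concrete, the following sketch shows one possible, purely illustrative shape for such models; the class names, fields, and the executable_by check are hypothetical and not taken from the thesis.

    # Illustrative sketch only: relating construction-activity knowledge (SCAc)
    # to an abstract software unit. All names here are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SoftwareUnit:
        """Abstract software unit (class, component, or service)."""
        name: str
        artefacts: List[str] = field(default_factory=list)  # e.g. binaries, descriptors

    @dataclass
    class ConstructionActivity:
        """Knowledge model for one SCAc (transformation, integration, or deployment)."""
        kind: str                      # "transformation" | "integration" | "deployment"
        steps: List[str]               # tool-agnostic, executable step descriptions
        required_artefacts: List[str]  # artefact types the activity consumes

    def executable_by(unit: SoftwareUnit, activity: ConstructionActivity) -> bool:
        """An activity can be executed (e.g. by a service) only if the unit provides
        every artefact its knowledge model requires."""
        return all(a in unit.artefacts for a in activity.required_artefacts)

    unit = SoftwareUnit("PaymentService", artefacts=["jar", "deployment-descriptor"])
    deploy = ConstructionActivity("deployment",
                                  steps=["provision container", "copy jar", "start service"],
                                  required_artefacts=["jar", "deployment-descriptor"])
    print(executable_by(unit, deploy))  # True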

    Are Social Networks Watermarking Us or Are We (Unawarely) Watermarking Ourself?

    In the last decade, Social Networks (SNs) have deeply changed many aspects of society, and one of the most widespread behaviours is the sharing of pictures. However, malicious users often exploit shared pictures to create fake profiles, leading to the growth of cybercrime. With this scenario in mind, authorship attribution and verification through image watermarking techniques are becoming more and more important. In this paper, we first investigate how thirteen of the most popular SNs treat uploaded pictures, in order to identify any image watermarking techniques applied by the SNs themselves. Second, we test the robustness of several image watermarking algorithms on these thirteen SNs. Finally, we verify whether a method based on the Photo-Response Non-Uniformity (PRNU) technique, which is usually used in digital forensics or image forgery detection, can be used successfully as a watermarking approach for authorship attribution and verification of pictures on SNs. The proposed method is sufficiently robust, despite the fact that pictures are often downgraded during upload to the SNs. Moreover, in comparison to conventional watermarking methods, the proposed method can successfully pass through different SNs, addressing related problems such as profile linking and fake profile detection. The results of our analysis on a real dataset of 8400 pictures show that the proposed method is more effective than other watermarking techniques and can help to address serious questions about privacy and security on SNs. Moreover, the proposed method paves the way for multi-factor online authentication mechanisms based on robust digital features.
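    The PRNU idea can be illustrated with the simplified sketch below; it is not the paper's exact pipeline (real PRNU extraction uses wavelet denoising and maximum-likelihood weighting, whereas here a Gaussian filter and plain averaging stand in), and the synthetic images are placeholders.

    # Simplified PRNU-style sketch: estimate a sensor-noise fingerprint from several
    # images of one (synthetic) camera and check a test image against it via
    # normalized correlation. A Gaussian filter stands in for the wavelet denoiser.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img):
        return img - gaussian_filter(img, sigma=1.5)

    def estimate_fingerprint(images):
        # Plain averaging of residuals; real pipelines weight by image content.
        return np.mean([noise_residual(i) for i in images], axis=0)

    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    rng = np.random.default_rng(1)
    sensor_pattern = rng.normal(0, 0.02, (128, 128))                        # synthetic PRNU
    shots = [rng.random((128, 128)) * (1 + sensor_pattern) for _ in range(10)]
    probe_same = rng.random((128, 128)) * (1 + sensor_pattern)
    probe_other = rng.random((128, 128))

    K = estimate_fingerprint(shots)
    print("same camera :", ncc(noise_residual(probe_same), K * probe_same))
    print("other camera:", ncc(noise_residual(probe_other), K * probe_other))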

    Accurate Evaluation of Feature Contributions for Sentinel Lymph Node Status Classification in Breast Cancer

    Current guidelines recommend sentinel lymph node biopsy to evaluate lymph node involvement for breast cancer patients with clinically negative lymph nodes on clinical or radiological examination. Machine learning (ML) models have significantly improved the prediction of lymph node status based on clinical features, thus avoiding expensive, time-consuming, and invasive procedures. However, the classification of sentinel lymph node status is a typical example of an unbalanced classification problem. In this work, we developed an ML framework to explore the effects of unbalanced populations on the performance and stability of feature ranking for sentinel lymph node status classification in breast cancer. Our results indicate state-of-the-art AUC (area under the receiver operating characteristic curve) values on a hold-out set (67%) while providing particularly stable features related to tumor size, histological subtype, and estrogen receptor expression, which should therefore be considered as potential biomarkers.
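    The general recipe described above can be sketched as follows on synthetic data (not the study's clinical dataset; the classifier, resampling scheme, and feature counts are assumptions): a class-weighted model evaluated by hold-out AUC, plus a simple stability measure for the feature ranking across bootstrap resamples.

    # Generic sketch on synthetic data: class-weighted classifier, hold-out AUC,
    # and feature-ranking stability across bootstrap resamples.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=10, weights=[0.85, 0.15],
                               random_state=0)  # unbalanced classes
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X_tr, y_tr)
    print("hold-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

    # Stability: how often each feature lands in the top 3 across 20 bootstrap resamples.
    rng = np.random.default_rng(0)
    top3_counts = np.zeros(X.shape[1])
    for _ in range(20):
        idx = rng.integers(0, len(X_tr), len(X_tr))
        boot = RandomForestClassifier(class_weight="balanced", random_state=0)
        importances = boot.fit(X_tr[idx], y_tr[idx]).feature_importances_
        top3_counts[np.argsort(importances)[-3:]] += 1
    print("top-3 frequency per feature:", top3_counts / 20)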

    A method for the usability evaluation of transactional websites based on the heuristic inspection process

    Usability is considered one of the most important factors in software product development. This quality attribute refers to the degree to which specific users of a given application can easily use the software to achieve their purpose. Given the importance of this aspect to the success of software applications, multiple evaluation methods have emerged as measurement instruments for determining whether the proposed interface design of a software system is understandable, easy to use, attractive, and pleasant for the user. Heuristic evaluation is one of the methods most widely used for this purpose in the field of Human-Computer Interaction (HCI) because of its low execution cost compared with other existing techniques. However, despite its widespread use in recent years, there is no formal procedure for carrying out this evaluation process. Jakob Nielsen, the author of this inspection technique, offers only general guidelines which, according to the research conducted, tend to be interpreted in different ways by specialists. For this reason, this research project was developed with the aim of establishing a systematic, structured, organized, and formal process for conducting heuristic evaluations of software products. Based on an exhaustive analysis of the studies in the literature that report the use of heuristic evaluation as part of the software development process, a new evaluation method was formulated consisting of five phases: (1) planning, (2) training, (3) evaluation, (4) discussion, and (5) reporting. Each of the proposed phases that make up the inspection protocol contains a set of well-defined activities to be carried out by the evaluation team as part of the inspection process. Likewise, specific roles were established for the members of the inspection team to ensure the quality of the results and the proper conduct of the heuristic evaluation. The new proposal was validated in two different academic settings (a public university in Colombia, and both a public and a private university in Peru), showing in all cases that more highly severe and critical usability problems can be identified when evaluators adopt a structured inspection process. Another favourable aspect shown by the results is that evaluators tend to make fewer association errors (between the heuristic that is violated and the usability problems identified) and that the proposal is perceived as easy to use and useful. The validation of the new proposal developed by the author of this study consolidates new knowledge that contributes to the body of scientific knowledge.
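    As a purely hypothetical illustration of the kind of record the evaluation and discussion phases could produce, the sketch below links each usability problem to the heuristic it violates and to a consensus severity rating on Nielsen's 0-4 scale; none of this code comes from the thesis.

    # Hypothetical sketch: recording findings so that each problem is tied to the
    # violated heuristic and to the evaluators' severity votes (Nielsen's 0-4 scale).
    from dataclasses import dataclass
    from statistics import mean
    from typing import List

    PHASES = ["planning", "training", "evaluation", "discussion", "reporting"]

    @dataclass
    class Finding:
        description: str
        violated_heuristic: str    # e.g. "Visibility of system status"
        severity_votes: List[int]  # one 0-4 rating per evaluator

        @property
        def severity(self) -> float:
            return mean(self.severity_votes)

    findings = [
        Finding("No feedback after submitting the payment form",
                "Visibility of system status", [3, 4, 3]),
        Finding("Error messages show internal codes only",
                "Help users recognize, diagnose, and recover from errors", [4, 4, 3]),
    ]
    critical = [f for f in findings if f.severity >= 3]
    print(f"{len(critical)} severe/critical problems out of {len(findings)}")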

    FIN-DM: a data mining process model for financial services

    Data mining is a set of rules, processes, and algorithms that allow companies to increase revenues, reduce costs, optimize products and customer relationships, and achieve other business goals by extracting actionable insights from the data they collect on a day-to-day basis. Data mining and analytics projects require a well-defined methodology and processes. Several standard process models for conducting data mining and analytics projects are available. Among them, the most notable and widely adopted standard model is CRISP-DM. It is industry-agnostic and is often adapted to meet sector-specific requirements. Industry-specific adaptations of CRISP-DM have been proposed across several domains, including healthcare, education, industrial and software engineering, and logistics. However, until now there has been no adaptation of CRISP-DM for the financial services industry, which has its own set of domain-specific requirements. This PhD thesis addresses this gap by designing, developing, and evaluating a sector-specific data mining process for financial services (FIN-DM). The thesis investigates how standard data mining processes are used across various industry sectors and in financial services. The examination identified a number of adaptation scenarios of traditional frameworks. It also suggested that these approaches do not pay sufficient attention to turning data mining models into software products integrated into organizations' IT architectures and business processes. In the financial services domain, the main adaptation scenarios discovered concerned technology-centric aspects (scalability), business-centric aspects (actionability), and human-centric aspects (mitigating discriminatory effects) of data mining. Next, a case study in an actual financial services organization revealed 18 perceived gaps in the CRISP-DM process. Using the data and results from these studies, the thesis outlines an adaptation of CRISP-DM for the financial sector, named the Financial Industry Process for Data Mining (FIN-DM). FIN-DM extends CRISP-DM to support privacy-compliant data mining, to tackle AI ethics risks, to fulfil risk management requirements, and to embed quality assurance as part of the data mining life cycle.
    https://www.ester.ee/record=b547227
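    The extensions named above can be pictured, purely as an illustration and not in FIN-DM's actual notation, as cross-cutting gates attached to the standard CRISP-DM phases; the mapping below is an assumption made for the example.

    # Illustrative sketch: standard CRISP-DM phases with FIN-DM-style concerns
    # modelled as cross-cutting checks. The mapping of checks to phases is assumed.
    CRISP_DM_PHASES = [
        "business understanding",
        "data understanding",
        "data preparation",
        "modeling",
        "evaluation",
        "deployment",
    ]

    CROSS_CUTTING_CHECKS = {
        "privacy-compliant data mining": {"data preparation", "modeling"},
        "AI ethics risk review":         {"modeling", "evaluation"},
        "risk management sign-off":      {"evaluation", "deployment"},
        "quality assurance":             set(CRISP_DM_PHASES),  # applies throughout
    }

    def checks_for(phase):
        return sorted(c for c, phases in CROSS_CUTTING_CHECKS.items() if phase in phases)

    for phase in CRISP_DM_PHASES:
        print(f"{phase:>22}: {', '.join(checks_for(phase)) or '-'}")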

    Supporting the grow-and-prune model for evolving software product lines

    Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through the systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has proven highly flexible for incrementally evolving an SPL by letting the products grow, and later pruning the product functionalities deemed useful by refactoring and merging them back into the reusable SPL core-asset base. This thesis aims at supporting the grow-and-prune model in both initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e., analyzing how products have changed the core assets. Customization analysis aims at identifying interesting product customizations to be ported to the core-asset base. However, existing tools do not fulfil engineers' needs for this practice. To address this issue, this thesis elaborates on SPL engineers' needs when conducting customization analysis and proposes a data-warehouse approach to help SPL engineers with the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with newer functionalities and bug fixes available in newer core-asset releases. Herein, synchronizing both parties through sync paths is required. However, state-of-the-art tools are not tailored to SPL sync paths, and this hinders synchronizing core assets and products. To address this issue, this thesis proposes to leverage existing version control systems (i.e., Git/GitHub) to provide sync operations as first-class constructs.
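    As a sketch of what a sync path could look like when driven with plain Git commands (the branch names, tag, and commit id below are placeholders, and this wrapper is not the tooling proposed in the thesis), the two directions are: pulling a newer core-asset release into a product, and cherry-picking a reviewed product customization back into the core-asset base.

    # Hypothetical sketch of sync operations built on plain Git commands.
    import subprocess

    def git(*args):
        subprocess.run(["git", *args], check=True)

    def upgrade_product(product_branch, core_release_tag):
        """Bring newer core-asset functionality and bug fixes into a product."""
        git("checkout", product_branch)
        git("merge", "--no-ff", core_release_tag)

    def prune_customization(core_branch, product_commit):
        """Port an interesting product customization back to the core-asset base."""
        git("checkout", core_branch)
        git("cherry-pick", product_commit)

    # Example usage (placeholder names):
    # upgrade_product("product/acme", "core/v2.3.0")
    # prune_customization("core/main", "abc1234")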

    Towards Secure Fog Computing: A Survey on Trust Management, Privacy, Authentication, Threats and Access Control

    Fog computing is an emerging computing paradigm that has come under consideration for the deployment of Internet of Things (IoT) applications among researchers and technology industries over the last few years. Fog is highly distributed and consists of a large number of autonomous end devices that contribute to the processing. However, the variety of devices contributed by different users is not audited. Hence, the security of Fog devices is a major concern that should be taken into consideration. To provide the necessary security for Fog devices, there is therefore a need to understand what the security concerns are with regard to Fog. All aspects of Fog security that have not been covered by other literature need to be identified and aggregated. On the other hand, privacy preservation for users' data in Fog devices and for application data processed in Fog devices is another concern. To provide the appropriate level of trust and privacy, there is a need to focus on authentication, threats, and access control mechanisms as well as privacy protection techniques in Fog computing. In this paper, a survey along with a taxonomy is proposed, presenting an overview of existing security concerns in the context of the Fog computing paradigm. Moreover, Blockchain-based solutions towards a secure Fog computing environment are presented, and various research challenges and directions for future research are discussed.
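    As a minimal illustration of one class of mechanism the survey covers (and not a scheme proposed in the paper), the sketch below shows an HMAC-based challenge-response between an IoT device and a Fog node that share a pre-provisioned key; the key handling is deliberately simplified.

    # Illustrative only: HMAC challenge-response between a device and a Fog node
    # sharing a pre-provisioned key. Key management is simplified for the sketch.
    import hashlib
    import hmac
    import os

    PRE_SHARED_KEY = os.urandom(32)   # provisioned to the device at enrolment time

    def fog_issue_challenge():
        return os.urandom(16)         # fresh nonce prevents replay

    def device_respond(challenge):
        return hmac.new(PRE_SHARED_KEY, challenge, hashlib.sha256).digest()

    def fog_verify(challenge, response):
        expected = hmac.new(PRE_SHARED_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = fog_issue_challenge()
    print(fog_verify(challenge, device_respond(challenge)))  # True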