
    DYNAMIC DEVICE COMPROMISE FOOTPRINT PRE-FILTERING

    Techniques are provided herein to enhance detection of compromised network devices by maintaining a list of network device indicators of compromise (i.e., footprints), determining which footprints are relevant to a device's specific deployment context, and placing checks for those footprints onto the device. This reduced set of footprints is recorded in a cryptoprocessor (e.g., a Trusted Platform Module (TPM)) to ensure that potentially relevant evidence cannot be silently discarded. As soon as a new footprint is characterized, devices may forward found instances of it to a security controller, allowing the controller to perform remediation well before fixes/patches are installed. Placing events in a TPM also allows attacks on bare-metal machines to be detected by virtual machines.
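    The tamper-evident property described above comes from the TPM's extend operation, which can be sketched in Python (a simulation of PCR hash chaining with hashlib, not real TPM calls; the footprint names are invented for illustration):

```python
import hashlib

PCR_SIZE = 32  # SHA-256 digest length, as in a TPM 2.0 SHA-256 PCR bank

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new_pcr = H(old_pcr || H(measurement)).

    Once a footprint hit is extended, it cannot be removed without
    invalidating every later extend -- the evidence is tamper-evident.
    """
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Hypothetical reduced footprint set for this device's deployment context.
relevant_footprints = [b"ioc:modified-bootloader", b"ioc:rogue-ssh-key"]

pcr = bytes(PCR_SIZE)  # PCRs start zeroed at boot
for hit in relevant_footprints:
    pcr = pcr_extend(pcr, hit)

# A verifier replaying the same event log reproduces the same PCR value;
# a silently dropped or reordered event would yield a mismatch.
replay = bytes(PCR_SIZE)
for hit in relevant_footprints:
    replay = pcr_extend(replay, hit)
assert replay == pcr
```

    The order-sensitivity of the hash chain is the point: discarding the first footprint hit changes every subsequent PCR value, so the security controller can detect the omission.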

    TRACEABILITY AND TROUBLESHOOTING IN WIRELESS CLUSTER DEPLOYMENTS USING PROVENANCE METADATA AND HYPER LEDGER

    Techniques are described herein for enhancing traceability and troubleshooting in complex enterprise wireless cluster deployments using provenance metadata and a hyperledger. State and event information is captured and used to reconstruct/recreate state machines and event diagrams (e.g., using Unified Modeling Language (UML)) which may be mapped directly to the code. The states and events of all Wireless Local Area Network (LAN) Controllers (WLCs) in the cluster are maintained as provenance metadata. Provenance metadata may improve troubleshooting of abnormalities/issues caused by an event or state change (positive provenance), and may help in debugging issues caused by missing events (negative provenance). The metadata is maintained as a transaction in the hyperledger of a private blockchain, which may help in troubleshooting incidents caused by attacks (e.g., repudiation attacks, etc.). The transaction records are signed by the source to provide authenticity of the information, which is especially required in the absence of a Trusted Platform Module (TPM).
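    The signed, chained transaction record can be sketched as follows (an illustrative Python model: an HMAC stands in for the signature, whereas a real blockchain ledger would use asymmetric keys, and the WLC states and event names are invented):

```python
import hashlib
import hmac
import json

WLC_KEY = b"per-wlc-signing-key"  # stand-in; a real ledger would use asymmetric signatures

def make_transaction(prev_hash: str, wlc_id: str, state: str, event: str) -> dict:
    """Wrap one WLC state/event transition as a signed, hash-chained ledger entry."""
    record = {"prev": prev_hash, "wlc": wlc_id, "state": state, "event": event}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(WLC_KEY, payload, hashlib.sha256).hexdigest()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Check the source signature, defeating repudiation of a recorded event."""
    body = {k: record[k] for k in ("prev", "wlc", "state", "event")}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(WLC_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)

# Chain two transitions: each entry commits to the previous one, so a deleted
# event (negative provenance) breaks the chain during replay.
genesis = "0" * 64
t1 = make_transaction(genesis, "wlc-1", "JOINING", "CAPWAP_JOIN_REQ")
t2 = make_transaction(t1["hash"], "wlc-1", "RUN", "CAPWAP_JOIN_RESP")
assert verify(t1) and verify(t2) and t2["prev"] == t1["hash"]
```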

    Research and Proof of Concept of Selected ISKE Highest Level Integrity Requirements

    Information security is becoming ever more important in today's society, where more and more processes and operations are digitised and data moves from paper into bits and bytes. Estonian state and public institutions collect and process information to provide high-level services and to fulfil constitutional tasks and international contracts. The Estonian public sector must apply the requirements of the IT Baseline Security System ISKE, the national information security standard, across three factors of processed data: availability, integrity, and confidentiality. This work examines the integrity domain in detail in order to meet the ISKE requirements and security objectives demanded of data with the highest integrity needs. By analysing the integrity domain of ISKE and providing a versatile proof of concept for implementing the required security controls, it is possible to increase the awareness of software developers and ISKE implementation partners and thereby achieve better information security.

    A Dynamically Configurable Log-based Distributed Security Event Detection Methodology using Simple Event Correlator

    Log event correlation is an effective means of detecting system faults and security breaches encountered in information technology environments. Centralized, database-driven log event correlation is common, but suffers from flaws such as high network bandwidth utilization, significant requirements for system resources, and difficulty in detecting certain suspicious behaviors. This research presents a distributed event correlation system which performs security event detection, and compares it with a centralized alternative. The comparison measures the value of distributed event correlation by considering network bandwidth utilization, detection capability, and database query efficiency, as well as through the implementation of remote configuration scripts and correlation of multiple log sources. These capabilities produce a configuration which allows a 99% reduction of network syslog traffic in the low-accountability case, and a significant decrease in database execution time through context-addition in the high-accountability case. In addition, the system detects every implemented malicious use case, with a low false positive rate.
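    The bandwidth reduction comes from correlating on the node and forwarding only alerts rather than raw syslog. A minimal Python sketch of such a node-local rule follows (the thresholds, log format, and source-IP parsing are illustrative, not the research's actual Simple Event Correlator ruleset):

```python
from collections import defaultdict, deque

class LocalCorrelator:
    """Node-local correlator in the spirit of SEC: raw syslog stays on the
    host and only correlated alerts cross the network, which is what drives
    the reported bandwidth reduction."""

    def __init__(self, threshold: int = 5, window: float = 60.0):
        self.threshold = threshold
        self.window = window
        self.failures = defaultdict(deque)  # source IP -> failure timestamps

    def feed(self, ts: float, line: str):
        """Return an alert string to forward, or None to keep the event local."""
        if "Failed password" not in line:
            return None
        src = line.rsplit("from ", 1)[-1].split()[0]
        q = self.failures[src]
        q.append(ts)
        while q and ts - q[0] > self.window:  # slide the time window
            q.popleft()
        if len(q) >= self.threshold:
            q.clear()
            return f"ALERT brute-force from {src}: {self.threshold} failures/{self.window:.0f}s"
        return None

c = LocalCorrelator(threshold=3, window=60.0)
logs = [(float(i), "sshd: Failed password for root from 10.0.0.9 port 2200")
        for i in range(5)]
alerts = [a for ts, line in logs if (a := c.feed(ts, line))]
# Five raw events collapse into a single forwarded alert.
```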

    AUDITING THE SECURITY OF INFORMATION SYSTEMS WITHIN AN ORGANIZATION

    The safety provided by a well-configured firewall is no excuse for neglecting standard security procedures; setting up and installing a firewall is the first line of defense, not a foolproof solution. Auditing is only one component of the system, the other being the protection of resources, and when we consider auditing as the process of recording certain events that take place on a computer or within a network, we must conclude that it is the only technique that allows us to identify the source of a possible issue within the network. Information security is used as a means to protect intellectual property rights, whilst the main objective in setting up an information security system is to earn the confidence of prospective business partners. In accordance with legal requisites and the principle of maximizing one's investment, information must be protected regardless of the many forms it could take or the means through which it is stored, transmitted, or distributed. Information security is not only a technical problem but mainly a managerial issue, as the security standard ISO/IEC 17799 meets the needs of any type of organization, be it public or private, through a series of practices related to the management of information security. This paper aims to present the process of taking entry data from a plethora of programs and storing it in a central location. Due to its flexibility, this process can be a useful auditing instrument, as long as we are familiar with the way it works and how events are recorded.
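    The central-storage process described above can be sketched as follows (a minimal Python example using an in-memory SQLite table; the schema, hosts, and events are invented for illustration and are not taken from the paper):

```python
import sqlite3

# Minimal sketch of central audit storage: events from many programs are
# normalized into one table so the source of an incident can be queried.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE audit_events (
    ts TEXT, host TEXT, program TEXT, event_id INTEGER, message TEXT)""")

events = [
    ("2024-05-01T10:00:00", "web01", "sshd",     4625, "failed logon for admin"),
    ("2024-05-01T10:00:05", "web01", "firewall", 5157, "blocked inbound 445/tcp"),
    ("2024-05-01T10:00:07", "db01",  "sshd",     4624, "successful logon for admin"),
]
db.executemany("INSERT INTO audit_events VALUES (?,?,?,?,?)", events)

# The auditing question from the paper: where did a possible issue originate?
rows = db.execute(
    "SELECT host, program, message FROM audit_events "
    "WHERE message LIKE '%failed%' ORDER BY ts"
).fetchall()
```

    Because all programs feed one schema, a single query traces an issue back to its source host and program, which is the auditing value the paper attributes to centralized recording.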

    Tietoverkkojen valvonnan yhdenmukaistaminen

    As modern society grows increasingly dependent on computer networks, especially as the Internet of Things gains popularity, the need to monitor computer networks and their associated devices increases. Additionally, the number of cyber attacks is growing, and certain malware, such as Mirai, targets network devices in particular. To monitor computer networks and devices effectively, effective solutions are required for collecting and storing the information. This thesis designs and implements a novel network monitoring system. The presented system is capable of utilizing state-of-the-art network monitoring protocols and harmonizing the collected information using a common data model. This design allows effective queries on, and further processing of, the collected information. The presented system is evaluated by comparing it against the requirements imposed on it, by assessing the amount of information harmonized using several protocols, and by assessing the suitability of the chosen data model. Additionally, the protocol overheads of the network monitoring protocols used are evaluated. The presented system was found to fulfil the imposed requirements. Approximately 21% of the information provided by the chosen network monitoring protocols could be harmonized into the chosen data model format. This is sufficient for effective querying and combining of the information, as well as for further processing. The result can be improved by extending the data model and improving the information processing. Additionally, the chosen data model was shown to be suitable for the use case presented in this thesis.
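    The harmonization step can be sketched as a field mapping into the common model (the protocol field names, mappings, and common schema below are illustrative assumptions, not the thesis's actual data model; unmapped fields are dropped, which is why only part of the protocol information ends up in the model):

```python
# Target schema of the hypothetical common data model.
COMMON_FIELDS = {"device", "interface", "octets_in", "octets_out"}

# Per-protocol mappings from native field names to the common model.
SNMP_MAP = {"sysName": "device", "ifDescr": "interface",
            "ifInOctets": "octets_in", "ifOutOctets": "octets_out"}
NETFLOW_MAP = {"exporter": "device", "input_snmp": "interface",
               "in_bytes": "octets_in"}

def harmonize(record: dict, mapping: dict) -> dict:
    """Rename mappable fields into the common model; drop the rest."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

snmp_rec = {"sysName": "sw1", "ifDescr": "Gi0/1",
            "ifInOctets": 1200, "ifOutOctets": 900, "ifType": 6}
flow_rec = {"exporter": "sw1", "input_snmp": "Gi0/1",
            "in_bytes": 640, "tcp_flags": 0x18}

h1 = harmonize(snmp_rec, SNMP_MAP)     # 4 of 5 native fields harmonized
h2 = harmonize(flow_rec, NETFLOW_MAP)  # 3 of 4 native fields harmonized
assert set(h1) <= COMMON_FIELDS and set(h2) <= COMMON_FIELDS
```

    Once both protocols speak the common schema, records from different sources can be joined on `device` and `interface`, which is the querying and combining capability the thesis evaluates.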

    Implementation of Secure Log Management Over Cloud

    Log records carry important information about the activities of systems, applications, and networks, with varied fields and syntaxes. Logs are generated automatically for the actions users perform on a system, in applications such as Google Chrome, or on networks. These logs are valuable, and organizations need them for future reference: to identify problems, to record all events, to measure performance, and to investigate malicious activity in systems, networks, or applications. Logs must therefore be protected from attackers, and organizations should maintain their integrity, confidentiality, and security. Maintaining logs in-house for long periods is costly, so we developed secure log management over the cloud to decrease cost as well as protect logs from attackers. To achieve this, the Blowfish algorithm is used to encrypt log records, SHA-1 is used to provide integrity during transmission, and for end-point security we use Shamir's secret sharing algorithm. DOI: 10.17762/ijritcc2321-8169.150511
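    The Shamir's-secret-sharing step can be sketched in pure Python (an illustrative implementation over a prime field; the prime, share parameters, and key value are chosen for demonstration, the RNG is seeded for reproducibility, and the Blowfish encryption and SHA-1 digest stages are omitted):

```python
import random

PRIME = 2**127 - 1  # Mersenne prime, large enough for a 128-bit key share

def split(secret: int, n: int, k: int, rng=random.Random(7)):
    """Split `secret` into n shares, any k of which recover it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 0xDEADBEEFCAFEBABE  # stand-in for the Blowfish log-encryption key
shares = split(key, n=5, k=3)
assert recover(shares[:3]) == key   # any 3 of the 5 shares suffice
assert recover(shares[1:4]) == key
```

    Distributing the five shares across independent cloud nodes means no single node (or any two colluding nodes) can decrypt the archived logs, which is the end-point protection role the paper assigns to Shamir's scheme.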