4 research outputs found

    Design and Development of Network Monitoring and Controlling Tool for Department of Computer Studies CSIBER

    In most organizations it is highly desirable to perform different tasks on different machines, based on their configuration and the permissions assigned to them. This can be achieved through a user-machine mapping that clearly states the list of tasks a particular user may perform on a particular machine. Such a discipline also enables traffic control, prevents internal DoS (Denial of Service) attacks against legitimate users, and supports fair resource sharing. The intent of this research is to allow end users to perform only the tasks permitted to them. In this paper we develop a network monitoring and control tool for monitoring tasks on a medium-sized local area network. To facilitate this, task permissions are assigned to individual machines and stored in an XML configuration file, which is parsed using the JDOM (Java Document Object Model) parser. The configuration file contains details such as the machine name and the list of tasks not permitted on that machine. The list of machines and the list of tasks denied on each machine are configurable by the end user. A background thread continuously monitors for execution of an illegal task on a machine, aborts it, and records the incident in a database. This also helps control network traffic, improving network performance by aborting illegal tasks. The tool was tested on the local area network of the Department of Computer Studies at SIBER by setting up specific monitors to check status and carry out specific operations. It requires only a small amount of system resources and is open source. At present, the tool generates a report comprising the illegal tasks attempted in a specified time period, which enables the network administrator to take corrective measures for the smooth operation of the network. DOI: 10.17762/ijritcc2321-8169.15037
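    For illustration only, the sketch below shows how such a per-machine denied-task configuration could be read with the JDOM 2 parser into a map that a background monitoring thread might consult. The element names (machine, name, deniedTask) and the file name are assumptions made for this example; the paper does not publish its actual schema.

        import java.io.File;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        import org.jdom2.Document;
        import org.jdom2.Element;
        import org.jdom2.input.SAXBuilder;

        // Sketch: load per-machine denied-task lists from an XML configuration
        // file using JDOM 2. Element and file names are illustrative assumptions.
        public class DeniedTaskConfig {

            public static Map<String, List<String>> load(File configFile) throws Exception {
                Document doc = new SAXBuilder().build(configFile);
                Element root = doc.getRootElement();

                Map<String, List<String>> deniedByMachine = new HashMap<>();
                for (Element machine : root.getChildren("machine")) {
                    String name = machine.getChildText("name");
                    List<String> denied = machine.getChildren("deniedTask").stream()
                            .map(Element::getTextTrim)
                            .toList();
                    deniedByMachine.put(name, denied);
                }
                return deniedByMachine;
            }

            public static void main(String[] args) throws Exception {
                // A background thread could poll running processes, abort any task
                // found in this map for the local machine, and log it to a database.
                Map<String, List<String>> config = load(new File("network-monitor-config.xml"));
                config.forEach((machine, tasks) ->
                        System.out.println(machine + " -> denied: " + tasks));
            }
        }

    In a deployment along the lines described in the abstract, the returned map would be refreshed whenever the administrator edits the configuration file, and the monitoring thread would compare it against the tasks currently executing on each machine.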

    Analysis of the Impact of Data Normalization on Cyber Event Correlation Query Performance

    A critical capability required in the operation of cyberspace is the ability to maintain situational awareness of the status of the infrastructure elements that constitute cyberspace. Event logs from cyber devices can yield significant information, and when properly utilized they can provide timely situational awareness about the state of the cyber infrastructure. In addition, proper Information Assurance requires validation and verification of the integrity of the results generated by a commercial log analysis tool. Event log analysis can be performed using relational databases. To enhance database query performance, prior literature recommends denormalizing the database; yet normalization can also increase query performance. In this work, database normalization improved the majority of the queries performed against very large data sets of router events, and queries ran faster on normalized tables whenever all the necessary data were contained in those tables. Database normalization also improves table organization and maintains better data consistency than an unnormalized design. There are, however, trade-offs to normalizing a database, such as additional preprocessing time and extra storage requirements. Overall, normalization improved query performance and should be considered an option when analyzing event logs with relational databases. Three primary research questions are addressed in this thesis: (1) What standards exist for the generation, transport, storage, and analysis of event log data for security analysis? (2) How does database normalization affect query performance when using very large data sets (over 30 million records) of router events? (3) What are the trade-offs between a normalized and a non-normalized database in terms of preprocessing time, query performance, storage requirements, and database consistency?
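    As a concrete illustration of the trade-off the thesis measures, the sketch below creates both a flat and a normalized layout for router events in an in-memory database and runs a simple correlation-style query against the normalized tables. The use of H2 over JDBC, and all table and column names, are assumptions for this example; the thesis does not publish its schema.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Sketch: flat vs. normalized storage of router events, using an
        // in-memory H2 database. Schema names are illustrative assumptions.
        public class NormalizationSketch {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:events");
                     Statement st = conn.createStatement()) {

                    // Flat (non-normalized) design: every row repeats the router name
                    // and message text, inflating storage and risking inconsistency.
                    st.execute("CREATE TABLE event_flat (" +
                               " event_time TIMESTAMP, router_name VARCHAR(64)," +
                               " severity INT, message VARCHAR(512))");

                    // Normalized design: routers and message templates are factored out,
                    // so the (very large) event table stores only small foreign keys.
                    st.execute("CREATE TABLE router (router_id INT PRIMARY KEY, router_name VARCHAR(64))");
                    st.execute("CREATE TABLE message (msg_id INT PRIMARY KEY, message VARCHAR(512))");
                    st.execute("CREATE TABLE router_event (" +
                               " event_time TIMESTAMP, router_id INT, severity INT, msg_id INT)");

                    // A correlation query on the normalized tables needs a join, but the
                    // joined lookup tables are small and easily indexed.
                    try (ResultSet rs = st.executeQuery(
                            "SELECT r.router_name, COUNT(*) AS n " +
                            "FROM router_event e JOIN router r ON e.router_id = r.router_id " +
                            "WHERE e.severity <= 3 GROUP BY r.router_name")) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1) + ": " + rs.getLong(2));
                        }
                    }
                }
            }
        }

    The trade-offs noted in the abstract show up directly in this layout: populating router, message, and router_event requires extra preprocessing and key management compared with bulk-loading event_flat, but queries that touch only the normalized tables work over much narrower rows.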

    Sistema de correlação de eventos e notificações - SCEN

    Master's dissertation in Electrical and Computer Engineering (Mestrado em Engenharia Electrotécnica e de Computadores). In today's networks, where management systems generate an ever-growing volume of increasingly sophisticated events, a major goal is to condense that large number of events into a small set of more significant ones for fault reporting. This practical need can be met through event correlation mechanisms, an area of intense research in the scientific and industrial communities. In this context, this project addresses event correlation in networks managed and monitored over an SNMP infrastructure. The aim of the work was to define and build a software infrastructure capable of autonomously and intelligently correlating the events received from computer systems. The idea arose from the difficulty of constantly configuring and adjusting the analysis parameters of existing systems. The result is a value-added platform intended to help those responsible for managing computer-system infrastructures resolve problems in their systems and networks more effectively, by presenting the subset of relevant events out of all the events received. To this end, several commercial and open-source tools were studied, along with their characteristics and models. A new model was then developed, adapted from a chosen tool, which contemplates the interaction of several network monitoring elements. These elements observe state changes of the defined services and correlate those changes with the events obtained from the systems. The proposed monitoring infrastructure thus assesses the relevance of events received from polling or notification systems and infers their importance, relieving the management-platform or system administrator of that work. To validate the model, a virtual laboratory was implemented in which the elements of the proposed model were created and simulations were run to obtain and validate results. In conclusion, the infrastructure that was defined and tested behaved as intended, relying only on the pre-existing definitions of the monitored services, on knowledge of the databases of the existing polling-based monitoring system (obtained by cross-referencing its database tables), and on the various types of network equipment and their status messages. Finally, the system's output and directions for future development are presented.
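    As a rough illustration of the condensation idea only, not of the SCEN design itself, the sketch below buffers raw events from one source (e.g. SNMP notifications or polling results) and collapses a burst within a time window into a single correlated event. All class and field names are assumptions made for this example.

        import java.time.Duration;
        import java.time.Instant;
        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.Optional;

        // Sketch: collapse bursts of similar raw events into one summarized event.
        public class EventCondenser {
            record RawEvent(String source, String type, Instant time) {}
            record CorrelatedEvent(String source, String type, int count, Instant first, Instant last) {}

            private final Duration window;
            private final Deque<RawEvent> buffer = new ArrayDeque<>();

            public EventCondenser(Duration window) { this.window = window; }

            /** Buffers a raw event; emits a condensed event when the stream or window changes. */
            public Optional<CorrelatedEvent> offer(RawEvent e) {
                if (!buffer.isEmpty()) {
                    RawEvent first = buffer.peekFirst();
                    boolean sameStream = first.source().equals(e.source()) && first.type().equals(e.type());
                    if (!sameStream || Duration.between(first.time(), e.time()).compareTo(window) > 0) {
                        CorrelatedEvent out = new CorrelatedEvent(first.source(), first.type(),
                                buffer.size(), first.time(), buffer.peekLast().time());
                        buffer.clear();
                        buffer.addLast(e);
                        return Optional.of(out);
                    }
                }
                buffer.addLast(e);
                return Optional.empty();
            }
        }

    In a platform of the kind the dissertation describes, a rule like this would run alongside the service-state information obtained by polling, so that a burst of low-level events is reported as one significant event attached to the affected service.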