
    Painless migration from passwords to two factor authentication

    In spite of the growing frequency and sophistication of attacks, two-factor authentication schemes have seen very limited adoption in the US, and passwords remain the single factor of authentication for most bank and brokerage accounts. Clearly the cost-benefit analysis is not as strongly in favor of two-factor authentication as we might imagine. Upgrading from passwords to a two-factor authentication system usually involves a large engineering effort, a discontinuity of user experience, and a hard key-management problem. In this paper we describe a system to convert a legacy password authentication server into a two-factor system. The existing password system is untouched but is cascaded with a new server that verifies possession of a smartphone device. No alteration, patching, or updates to the legacy system are necessary. There are then two alternative authentication paths: one using passwords alone, and a second using passwords and possession of the trusted device. The bank can leave the password authentication path available while users migrate to the two-factor scheme. Once migration is complete, the password-only path can be severed. We have implemented the system and carried out two-factor authentication against real accounts at several major banks.
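
    The cascade described above can be sketched in a few lines. The following is an illustrative Python sketch under assumed names (legacy_password_ok and device_confirms are hypothetical stand-ins), not the authors' implementation: the legacy password check is left untouched and is followed by a separate device-possession check, with the password-only path kept open until migration is complete.

        # Illustrative sketch with assumed names, not the paper's code.
        LEGACY_PASSWORDS = {"alice": "s3cret"}     # stands in for the untouched legacy server
        ENROLLED_DEVICES = {"alice"}               # users who have registered a smartphone

        def legacy_password_ok(user, password):
            # In the described system this check is performed by the existing,
            # unmodified password authentication server.
            return LEGACY_PASSWORDS.get(user) == password

        def device_confirms(user):
            # Stand-in for challenging the user's enrolled smartphone and awaiting approval.
            return user in ENROLLED_DEVICES

        def authenticate(user, password, allow_password_only=True):
            if not legacy_password_ok(user, password):
                return False
            if device_confirms(user):
                return True                        # password + device: the two-factor path
            return allow_password_only             # sever this path once migration is complete

        print(authenticate("alice", "s3cret"))     # True, via the two-factor path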

    Security monitoring tool system using threat intelligence vs threat hunting

    This project develops a security monitoring tool system using the Graylog SIEM (Security Information and Event Management) platform combined with threat intelligence, with threat hunting results as the expected outcome. It is built around a specific ruleset created for threat hunting purposes, with automated ingestion of logs from Windows endpoint hosts and network activity. Threat intelligence enrichment datasets are integrated into the Graylog platform. The main objective is to enable a security analyst or network analyst to spot suspicious attacker behavior and act on it in a timely manner. Most organizations ingest network and endpoint logs into SIEM tools and integrate commercial tools to detect or trigger anomalies and send notifications directly via email or a third-party channel such as Slack. Commercial tools, however, are highly expensive and not very cost effective; this development delivers the same approach on a very limited budget, potentially at zero cost for small and medium enterprises, and at a fixed price of $1500 for large enterprises, which is cheaper than other tools. Many existing developments use well-known open-source IDSs such as Suricata and open-source SIEMs such as the Elastic Stack (Elasticsearch, Kibana, and Logstash). In this development, Graylog is used with Elasticsearch and MongoDB as database servers to store, search, and analyze the huge volumes of ingested data. Graylog is a powerful logging tool with a simple, user-friendly interface, visualized with Grafana, that requires minimal configuration effort and very low maintenance. As a result, creating a ruleset for threat hunting and threat intelligence enrichment is easier to configure and more straightforward than with other competitors in the market. (Abstract by author)
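
    As a minimal sketch of the enrichment idea above (not the project's actual Graylog ruleset; the field names and indicator feed are assumptions), the following Python snippet tags a log event whose source IP appears in a threat intelligence indicator set and raises an alert for a simple hunting-style rule.

        # Minimal sketch: threat-intelligence enrichment of an ingested log event.
        THREAT_INTEL_IPS = {"203.0.113.7", "198.51.100.23"}   # assumed indicator feed

        def enrich(event):
            """Return a copy of the event with a 'threat_intel_match' flag added."""
            enriched = dict(event)
            enriched["threat_intel_match"] = enriched.get("src_ip") in THREAT_INTEL_IPS
            return enriched

        def should_alert(event):
            # Hunting-style rule: alert on indicator matches from Windows endpoint logs.
            return event["threat_intel_match"] and event.get("source") == "windows_endpoint"

        log = {"source": "windows_endpoint", "src_ip": "203.0.113.7", "msg": "outbound connection"}
        event = enrich(log)
        if should_alert(event):
            print("ALERT:", event)                 # notify e.g. via email or a Slack channel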

    Service-oriented models for audiovisual content storage

    What are the important topics to understand when involved with storage services that hold digital audiovisual content? This report looks at how content is created and moves into and out of storage; the storage service value networks and architectures found now and expected in the future; what sort of data transfer is expected to and from an audiovisual archive; what transfer protocols to use; and a summary of security and interface issues.

    ANALYSIS OF DATA & COMPUTER NETWORKS IN STUDENTS' RESIDENTIAL AREA IN UNIVERSITI TEKNOLOGI PETRONAS

    In Universiti Teknologi Petronas (UTP), most students depend on the Internet and the computer network to obtain academic information and share educational resources. Even though Internet connections and computer networks are provided, the service frequently suffers interruptions such as slow Internet access, virus and worm distribution, and network abuse by irresponsible students. Since the UTP organization keeps expanding, the need for better service in UTP increases. Several approaches were put into practice to address the problems. Research on data and computer networks was performed to understand the network technology deployed in UTP. Questionnaire forms were distributed among the students to obtain feedback and statistical data about UTP's network in the Students' Residential Area. The study concentrates only on the Students' Residential Area because it is where most users reside. From the survey, it was observed that 99% of the students access the network almost 24 hours a day. In 2005, the 2 Mbps allocated bandwidth was utilized at 100% almost continuously, but in 2006 the Internet access bottleneck was reduced significantly after the allocated bandwidth was increased to 8 Mbps. Server degradation due to irresponsible acts by users also adds burden to the main server. In general, if the proposal to the ITMS (Information Technology & Media Services) Department to improve its Quality of Service (QoS) and establish a UTP Computer Emergency Response Team (UCert) is accepted, most of the issues addressed in this report can be solved.

    Federating Heterogeneous Digital Libraries by Metadata Harvesting

    This dissertation studies the challenges and issues faced in federating heterogeneous digital libraries (DLs) by metadata harvesting. The objective of federation is to provide high-level services (e.g., transparent search across all DLs) on the collective metadata from different digital libraries. There are two main approaches to federating DLs: the distributed searching approach and the harvesting approach. As the distributed searching approach relies on executing queries against digital libraries in real time, it has problems with scalability. The difficulty of creating a distributed searching service for a large federation is the motivation behind the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). OAI-PMH supports both data providers (repositories, archives) and service providers. Service providers develop value-added services based on the information collected from data providers. Data providers are simply collections of harvestable metadata. This dissertation examines the application of the metadata harvesting approach in DL federations. It addresses the following problems: (1) whether metadata harvesting provides a realistic and scalable solution for DL federation; (2) the status of and problems with current data provider implementations, and how to solve these problems; (3) how to synchronize data providers and service providers; (4) how to build different types of federation services over harvested metadata; and (5) how to create a scalable and reliable infrastructure to support federation services. The work done in this dissertation is based on OAI-PMH, and the results have influenced the evolution of OAI-PMH; however, the results are not limited to the scope of OAI-PMH. Our approach is to design and build key services for metadata harvesting and to deploy them on the Web. Implementing publicly available services allows us to demonstrate that these approaches are practical. The problems posed above are evaluated by performing experiments with these services. To summarize the results of this thesis, we conclude that metadata harvesting is a realistic and scalable approach to federating heterogeneous DLs. We present two models for building federation services: a centralized model and a replicated model. Our experiments also demonstrate that the repository synchronization problem can be addressed by push, pull, and hybrid push/pull models; each model has its strengths and weaknesses and fits a specific scenario. Finally, we present a scalable and reliable infrastructure to support the applications of metadata harvesting.
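
    The harvesting mechanism the dissertation builds on can be illustrated with a short OAI-PMH client loop. This is a minimal Python sketch (the endpoint URL is a placeholder, not one of the services built in the thesis): a service provider issues ListRecords requests and follows resumptionToken values until the data provider has no more records to return.

        # Minimal OAI-PMH harvesting loop (illustrative; BASE_URL is a placeholder).
        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        OAI = "{http://www.openarchives.org/OAI/2.0/}"
        BASE_URL = "https://example.org/oai"       # placeholder data provider endpoint

        def harvest(base_url, metadata_prefix="oai_dc"):
            params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
            while True:
                url = base_url + "?" + urllib.parse.urlencode(params)
                root = ET.fromstring(urllib.request.urlopen(url).read())
                for record in root.iter(OAI + "record"):
                    yield record                   # one harvested metadata record
                token = root.find(".//" + OAI + "resumptionToken")
                if token is None or not (token.text or "").strip():
                    break                          # harvest complete
                params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

        # for record in harvest(BASE_URL):
        #     print(record.findtext(".//" + OAI + "identifier"))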

    Master of Science in Computer Science

    In today's IP networks, any host can send packets to any other host, irrespective of whether the recipient is interested in communicating with the sender. The downside of this openness is that every host is vulnerable to attack by any other host. We observe that this unrestricted network access (network ambient authority) from compromised systems is also a main cause of data exfiltration attacks within corporate networks. We address this issue using the network version of capability-based access control, bringing the idea of capabilities and capability-based access control to the domain of networking. CeNet provides policy-driven, fine-grained, network-level access control enforced in the core of the network (and not at the end hosts), thereby removing network ambient authority. Thus CeNet is able to limit the scope of the spread of an attack from a compromised host to other hosts in the network. We built a capability-enabled SDN network where the communication privileges of an endpoint are limited according to its function in the network. Network capabilities can be passed between hosts, allowing a delegation-oriented security policy to be realized. We believe that this base functionality can pave the way for the realization of sophisticated security policies within an enterprise network. Further, we built a policy manager that is able to realize Role-Based Access Control (RBAC) network access control policies using capability operations. We also present some results of a formal analysis of capability propagation models in the context of networks.
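
    The core idea of removing network ambient authority can be illustrated with a toy capability check in Python (assumed structures, not CeNet's code): the controller admits a flow only if the sender holds a capability naming the destination and service, and a capability can be delegated to another host.

        # Toy sketch of capability-based network access control (assumed structures).
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Capability:
            holder: str    # host allowed to use this capability
            dst: str       # destination host it grants access to
            port: int      # service (TCP port) it grants access to

        GRANTED = {Capability("web01", "db01", 5432)}   # initial policy: web tier -> database

        def allow_flow(src, dst, port):
            """Controller-side check: admit the flow only if a matching capability exists."""
            return Capability(src, dst, port) in GRANTED

        def delegate(cap, new_holder):
            """Pass a capability to another host, enabling a delegation-oriented policy."""
            if cap in GRANTED:
                GRANTED.add(Capability(new_holder, cap.dst, cap.port))

        print(allow_flow("web01", "db01", 5432))   # True: capability held
        print(allow_flow("pc42", "db01", 5432))    # False: no ambient authority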

    Information Sharing Solutions for NATO Headquarters

    NATO is an Alliance of 26 nations that operates on a consensus basis, not a majority basis. Thorough and timely information exchange between nations is fundamental to the business process. Current technology and practices at NATO HQ are inadequate to meet modern-day requirements, despite the availability of demonstrated and accredited cross-domain technology solutions. This lack of integration between networks is becoming more complicated with time, as nations continue to invest in IT and ignore the requirements for inter-networked gateways. This contributes to inefficiencies, fostering an atmosphere where shortcuts are taken in order to get the job done. The author recommends that NATO HQ improve its presence on the Internet, building on the desired tenets of availability and security.