
    Performance Evaluation of Machine Learning Techniques for Identifying Forged and Phony Uniform Resource Locators (URLs)

    Since the invention of Information and Communication Technology (ICT), there has been a great shift from the erstwhile traditional approach of handling information to the use of this innovation, which now cuts across almost all areas of human endeavour. ICT is widely used in the education and production sectors as well as in various financial institutions. Many people use it genuinely to carry out their day-to-day activities, while others use it to perform nefarious activities to the detriment of other cyber users. According to several reports discussed in the introductory part of this work, millions of people have become victims of fake Uniform Resource Locators (URLs) sent to their mailboxes by spammers, and financial institutions have recorded monumental losses through this illicit act over the years. It is worth mentioning that, despite the several approaches currently in place, none can confidently be confirmed to provide the best and most reliable solution. Researchers have demonstrated how machine learning algorithms can be employed to verify and confirm compromised and fake URLs in cyberspace; however, inconsistencies have been noticed in their findings, and the corresponding results are not dependable given the values obtained and the conclusions drawn from them. Against this backdrop, the authors carried out a comparative analysis of three learning algorithms (Naïve Bayes, Decision Tree and Logistic Regression) for the verification of compromised, suspicious and fake URLs, and determined which performs best on the evaluation metrics used (F-measure, precision and recall). Based on the confusion-matrix measurements, the results show that the Decision Tree (ID3) algorithm achieves the highest values for recall, precision and F-measure, and thus provides an efficient and credible means of maximizing the detection of compromised and malicious URLs. Finally, for future work, the authors suggest that two or more supervised learning algorithms could be hybridized into a single, more effective and efficient algorithm for fake-URL verification.

    Keywords: Learning-algorithms, Forged-URL, Phoney-URL, performance-comparison
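As a concrete illustration of the evaluation described above, the sketch below derives precision, recall and F-measure from binary confusion-matrix counts. The counts are invented placeholders for illustration only, not figures reported by the authors:

```python
# Hedged sketch: precision, recall and F-measure computed from
# confusion-matrix counts, as used to compare URL classifiers.
# All numeric counts below are hypothetical, not the paper's data.

def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Return (precision, recall, f_measure) for one binary classifier."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

if __name__ == "__main__":
    # Hypothetical counts per classifier on a fake-URL test set:
    # (true positives, false positives, false negatives).
    results = {
        "Naive Bayes":         (80, 25, 20),
        "Decision Tree (ID3)": (95, 5, 5),
        "Logistic Regression": (85, 15, 15),
    }
    for name, (tp, fp, fn) in results.items():
        p, r, f = prf(tp, fp, fn)
        print(f"{name}: precision={p:.3f} recall={r:.3f} f-measure={f:.3f}")
```

With counts like these, the classifier with the fewest false positives and false negatives (here the hypothetical Decision Tree row) scores highest on all three metrics, mirroring the comparison the abstract describes.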

    Segurança e privacidade em terminologia de rede

    Security and privacy are now at the forefront of modern concerns and drive a significant part of the debate on the digital society. One aspect with significant bearing on both topics is the naming of resources in the network, because it directly impacts how networks work, how security mechanisms are implemented, and what the privacy implications of metadata disclosure are. This issue is further exacerbated by interoperability mechanisms that make this information increasingly available regardless of the intended scope. This work focuses on the security and privacy implications of naming in the namespaces used by network protocols, and in particular on the implementation of solutions that provide additional security through naming policies or that increase privacy. To achieve this, different techniques are used either to embed security information in existing namespaces or to minimise privacy exposure. The former allows bootstrapping secure transport protocols on top of insecure discovery protocols, while the latter introduces privacy policies as part of name assignment and resolution. The main vehicle for the implementation of these solutions is general-purpose protocols and services; however, there is a strong parallel with ongoing research topics that leverage name resolution systems for interoperability, such as the Internet of Things (IoT) and Information Centric Networks (ICN), where these approaches are also applicable.

    Programa Doutoral em Informática
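One generic way to embed security information in a namespace, in the spirit of the approach described above, is to derive a self-certifying name that commits to a service's public key, so a peer discovered over an insecure protocol can later be verified against the name itself. This is a minimal sketch under assumed details, not the thesis's actual scheme; the key bytes and name format are invented:

```python
# Illustrative sketch (not the thesis's actual scheme): a "self-certifying"
# service name whose suffix embeds a hash of the service's public key.
# A client that learns the name over an insecure discovery protocol can
# verify the key presented during a later secure handshake against the name.
import hashlib

def self_certifying_name(service: str, pubkey_der: bytes, bits: int = 64) -> str:
    """Derive a name whose suffix commits to the service's public key."""
    commitment = hashlib.sha256(pubkey_der).hexdigest()[: bits // 4]
    return f"{service}-{commitment}"

def verify_peer(name: str, presented_pubkey_der: bytes) -> bool:
    """Check that the key presented by the peer matches the name's commitment."""
    _service, _, commitment = name.rpartition("-")
    return hashlib.sha256(presented_pubkey_der).hexdigest().startswith(commitment)

# Hypothetical key material for demonstration only.
key = b"-----fake DER-encoded key bytes-----"
name = self_certifying_name("printer", key)
print(name)
```

A mismatched key fails verification, so an attacker who can spoof the insecure discovery step still cannot pass the handshake check without the committed key.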

    The impact of brand elements on the purchase behaviour of University of KwaZulu-Natal students in relation to fast-moving consumer goods.

    Masters Degree. University of KwaZulu-Natal, Durban. Branding, as well as its counterpart, brand elements, plays an important role for a product or company. Branding is not a new concept and has been around for many years; however, its role has grown in importance owing to changing business environments. This study therefore focuses on the impact of brand elements on the purchase behaviour of UKZN students in relation to fast-moving consumer goods (FMCGs). The study has four main objectives. The first is to determine the impact that brand elements (brand names, URLs, logos, symbols, slogans, jingles, characters, packaging and spokespeople) have on the purchase decisions of UKZN students in relation to FMCGs. The second is to understand UKZN students' perceptions of the effectiveness of brand elements in building brand awareness, brand image and customer loyalty. The third is to uncover the evaluative criteria that UKZN students rely upon when making purchase decisions about FMCGs. Finally, the study formulates creative competitive-advantage strategies that marketers can adopt for the successful marketing of FMCGs to university students. A sample of 210 students from the University of KwaZulu-Natal's Westville campus was drawn using the convenience sampling technique. The data were collected through questionnaires and analysed using descriptive and inferential statistics. The results show a significant positive relationship between the impact of brand elements on purchase decisions and perceptions of the effectiveness of brand elements, as well as the evaluative criteria used when making purchase decisions. There is also a significant positive relationship between perceptions of the effectiveness of brand elements and the evaluative criteria used when making purchase decisions. Recommendations from the results provide insight into how marketers can adopt creative competitive-advantage strategies for the successful marketing of FMCGs to university students.

    Lessons Learned from Applying Human Computer Interaction (HCI) Techniques to the Redesign of a User Interface

    This research details findings on web page design principles, focusing on the Human Computer Interaction (HCI) aspect. The focus was derived from the Top Ten (10) Web Page Design Mistakes (2003) by Jakob Nielsen, a well-known guru of HCI and usability. In this technological era there are millions of web sites and pages, but how many of them are properly designed? Web page designers nowadays focus too much on the functionality of a system instead of on interface design, which is what actually projects an application's uniqueness and the key messages that create the desired emotional response from users. The objectives of this research include an investigation of the principles applied in HCI for web page interface design, the redesign of erroneous web pages, and the formulation of domain-specific rules to ensure the effectiveness, practicality and acceptance of these techniques. Usability lab testing, questionnaires and prototype screens were used for evaluation, based on usability criteria for web pages identified from many credible sources. This research was motivated by the fact that Internet users' preferences and ease of browsing play a vital role in deciding the acceptance of a web page: a powerful system will be left behind by users if it is not user friendly or designed according to the standards, principles and guidelines of HCI. The methodology concentrates on a problem-specific framework developed by the author, involving six (6) processes: identification of target users, user consultation, task analysis, usability and accessibility assurance, consideration of web design issues, and formulation of the user interface design specification. The final result of this study is a domain-specific HCI guideline for web page design, customized for profit-making organizations and individuals. In conclusion, HCI principles are inseparable from web design issues and will remain vital for as long as web pages exist and are widely used.

    Secure Connectivity With Persistent Identities

    In the current Internet, the Internet Protocol address is burdened with two roles: it serves as both the identifier and the locator for the host, so as the host moves, its identity changes with its locator. The research community expects the Future Internet to include an identifier-locator split in some form, as such a split is seen as the solution to multiple problems. However, it also introduces new problems of its own. In this dissertation we concentrate on: the feasibility of using an identifier-locator split with legacy applications, securing the resolution steps, using the persistent identity for access control, and improving mobility in environments using multiple address families, thereby improving the disruption tolerance of connectivity. The proposed methods achieve theoretical and practical improvements over the earlier state of the art. To raise overall awareness, our results have been published in interdisciplinary forums.
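A minimal sketch of the identifier/locator split discussed above (illustrative only, not the dissertation's implementation): peers address a persistent host identifier, while a mapping service translates it to the current locator, which may change address family as the host moves. The names and addresses are made up, using reserved documentation ranges:

```python
# Illustrative identifier/locator split: applications bind to a persistent
# host identifier; a mapping layer resolves it to the current locator(s).
from dataclasses import dataclass, field

@dataclass
class MappingService:
    """Toy resolver from persistent host identifiers to current locators."""
    table: dict[str, list[str]] = field(default_factory=dict)

    def register(self, host_id: str, locators: list[str]) -> None:
        # A mobile host re-registers whenever its attachment point changes.
        self.table[host_id] = list(locators)

    def resolve(self, host_id: str) -> list[str]:
        return self.table.get(host_id, [])

svc = MappingService()
svc.register("host-A", ["192.0.2.10"])        # IPv4 locator (documentation range)
print(svc.resolve("host-A"))

# The host moves to an IPv6 network: the locator changes, but peers keep
# addressing the same persistent identifier, so connectivity can survive
# a move across address families.
svc.register("host-A", ["2001:db8::10"])
print(svc.resolve("host-A"))
```

The persistent identifier is also a natural anchor for access control: policies keyed on "host-A" remain valid no matter which locator the host currently uses.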

    Geometric dimensioning and tolerancing a tool for concurrent engineering

    The concept of Concurrent Engineering recognizes an immediate need for a new design environment and technology, and so requires extensive interdisciplinary cooperation and the integration of diverse functions of a manufacturing organization, such as marketing, design, manufacturing and finance. One of the key factors in achieving successful integration among the departments is better communication, which becomes imperative where communication needs vary, especially across departments. Concurrent Engineering is a philosophy that provides certain benefits, and various tools and methods are available for implementing its concepts. One such tool is Geometric Dimensioning & Tolerancing (GD & T), which can be used for the unambiguous communication of exact part design and its proper execution. Unlike other tools, GD & T concepts emphasize the integration of various functions in a manufacturing organization. This thesis discusses the applicability of Geometric Dimensioning and Tolerancing as an integrating tool for related functional departments in the concurrent environment, and establishes the synchronization between the objectives of the two concepts. It also discusses the effect of using GD & T on vendor lead time and manufacturing lead time, and investigates the effects on product quality, cost economics and the learning curve. Lastly, the thesis concludes that the implementation of GD & T concepts automatically attains the objectives of concurrent engineering. The use of GD & T in industry may lead to widespread implementation of concurrent engineering concepts globally; it can therefore be considered a medium or tool for Concurrent Engineering.

    WHERE DO YOU LOOK? RELATING VISUAL ATTENTION TO LEARNING OUTCOMES AND URL PARSING

    Visual behavior provides a dynamic trail of where attention is directed. It is considered the behavioral interface between engagement and gaining information, and researchers have used it for several decades to study users' behavior. This thesis focuses on employing visual attention to understand users' behavior in two contexts: 3D learning and gauging URL safety. Such understanding is valuable for improving interactive tools and interface designs. In the first chapter, we present results from studying learners' visual behavior while engaging with tangible and virtual 3D representations of objects. This is a replication of a recent study, which we extended using eye tracking. By analyzing the visual behavior, we confirmed the original study's results and added quantitative explanations for the corresponding learning outcomes. Among other things, our results indicated that users allocate similar visual attention while analyzing virtual and tangible learning material. In the next chapter, we present the outcomes of a user study in which participants were instructed to classify a set of URLs while wearing an eye tracker. Much effort is spent on teaching users how to detect malicious URLs, but there has been significantly less focus on understanding exactly how and why users routinely fail to vet URLs properly. This user study aims to fill that void by shedding light on the underlying processes that users employ to gauge a URL's trustworthiness at the time of scanning. Our findings suggest that users have a cap on the amount of cognitive resources they are willing to expend on vetting a URL, and that they tend to believe that the presence of "www" in the domain name indicates that the URL is safe.
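The "www" finding can be illustrated with standard URL parsing: the leftmost labels of a hostname are controlled by whoever registers the rightmost ones, so "www" can appear anywhere in a deceptive URL. The URLs below are made-up examples using reserved documentation/test domains, not URLs from the study:

```python
# Why "www" in a URL says nothing about safety: the registrable domain is
# the rightmost part of the hostname, and an attacker controls everything
# to the left of it. Example domains here are reserved for documentation.
from urllib.parse import urlparse

def hostname(url: str) -> str:
    """Extract the hostname from a URL (empty string if absent)."""
    return urlparse(url).hostname or ""

safe = "https://www.example.com/login"
lure = "https://www.example.com.evil.test/login"  # host is under evil.test

for url in (safe, lure):
    host = hostname(url)
    # A naive "contains www" check accepts both URLs; checking which
    # domain the host actually ends in distinguishes them.
    print(url, "->", "www present:", "www" in host,
          "| host ends in example.com:", host.endswith("example.com"))
```

Both URLs contain "www", yet only the first is actually served from example.com, which matches the study's observation that users who treat "www" as a safety cue are easily misled.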

    On the scalability of LISP and advanced overlaid services

    In just four decades the Internet has gone from a lab experiment to a worldwide, business-critical infrastructure that caters to the communication needs of almost half of the Earth's population. With these figures on its side, arguing against the Internet's scalability would seem rather unwise. However, the Internet's organic growth is far from finished and, as billions of new devices are expected to join in the not-so-distant future, scalability, or the lack thereof, is commonly believed to be the Internet's biggest problem. While consensus on the exact form of the solution is yet to be found, the need for a semantic decoupling of a node's location and identity, often called a location/identity separation, is generally accepted as a promising way forward. Typically, this requires the introduction of new network elements that provide the binding of the two namespaces, and of caches that avoid hampering router packet-forwarding speeds. Due to this increased complexity, however, the solution's scalability is itself questioned. This dissertation evaluates the suitability of the Locator/ID Separation Protocol (LISP), one of the most successful proposals to follow the location/identity separation guideline, as a solution to the Internet's scalability problem. Because the deployment of any new architecture depends not only on solving the incumbent's technical problems but also on the added value it brings, our approach follows two lines. In the first part of the thesis we develop the analytical tools to evaluate LISP's control-plane scalability, while in the second we show that the required control/data plane separation provides important benefits that could drive LISP's adoption. As a first step to evaluating LISP's scalability, we propose a methodology for an analytical analysis of cache performance that relies on working-set theory to estimate traffic locality of reference. One of our main contributions is identifying the conditions network traffic must meet for the theory to be applicable; we then use this result to develop a model that predicts average cache miss rates. Furthermore, we study the model's suitability for long-term cache provisioning and assess the cache's vulnerability against malicious users through an extension that accounts for cache-polluting traffic. As a last step, we investigate the main sources of locality and their impact on the asymptotic scalability of the LISP cache. An important finding here is that the destination popularity distribution can accurately describe cache performance, independently of the much harder to model short-term correlations. Under a small set of assumptions, this result finally enables us to characterize asymptotic scalability with respect to the number of prefixes (Internet growth) and users (growth of the LISP site). We validate the models and discuss the accuracy of our assumptions using several one-day-long packet traces collected at the egress points of a campus network and an academic network. To show the added benefits that could drive LISP's adoption, in the second part of the thesis we investigate the possibilities of performing inter-domain multicast and improving intra-domain routing. Although the idea of using overlaid services to improve underlay performance is not new, this dissertation argues that LISP offers the right tools to implement such services reliably and easily, thanks to its reliance on network-layer instead of application-layer support. In particular, we present and extensively evaluate Lcast, a network-layer single-source multicast framework designed to merge the robustness and efficiency of IP multicast with the configurability and low deployment cost of application-layer overlays.
Additionally, we describe and evaluate LISP-MPS, an architecture capable of exploiting LISP to minimize intra-domain routing tables and to ensure, among other things, support for multi-protocol switching and virtual networks.
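As a toy companion to the cache analysis in the first part of the thesis (a simulation sketch under assumed Zipf-like synthetic traffic and plain LRU eviction, not the thesis's analytical model), one can measure how an LRU map-cache's miss rate falls with cache size when destination popularity is skewed:

```python
# Toy LRU map-cache simulation: destinations are drawn from a Zipf-like
# popularity distribution, and we measure the miss rate for several cache
# sizes. All parameters are invented for illustration.
import random
from collections import OrderedDict

def miss_rate(trace: list[int], cache_size: int) -> float:
    """Simulate an LRU cache over a destination trace; return miss fraction."""
    cache: OrderedDict[int, None] = OrderedDict()
    misses = 0
    for dest in trace:
        if dest in cache:
            cache.move_to_end(dest)          # refresh recency on a hit
        else:
            misses += 1
            cache[dest] = None               # install the missing mapping
            if len(cache) > cache_size:
                cache.popitem(last=False)    # evict least recently used
    return misses / len(trace)

random.seed(1)
prefixes = 10_000
# Zipf-like popularity: lower-numbered prefixes are requested far more often.
weights = [1 / (rank + 1) for rank in range(prefixes)]
trace = random.choices(range(prefixes), weights=weights, k=50_000)

for size in (100, 1_000, 5_000):
    print(f"cache={size:>5}  miss rate={miss_rate(trace, size):.3f}")
```

Because LRU satisfies the inclusion property, the miss rate is non-increasing in cache size on any fixed trace; the skewed popularity makes even a small cache absorb a large share of the requests, which is the kind of locality effect the thesis's working-set model quantifies analytically.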