
    Survey on the Family of the Recursive-Rule Extraction Algorithm

    In this paper, we first review the theoretical and historical background of rule extraction from neural network ensembles. Because the structures of previous neural network ensembles were quite complicated, research on efficient rule extraction algorithms for neural network ensembles has been sparse, even though a practical need exists for rule extraction from Big Data datasets. We describe the Recursive-Rule extraction (Re-RX) algorithm, which is an important step toward handling large datasets. We then survey the family of Recursive-Rule extraction algorithms, i.e. the Multiple-MLP Ensemble Re-RX algorithm, and present concrete applications in financial and medical domains that require extremely high accuracy for classification rules. Finally, we describe two promising ideas for considerably enhancing the accuracy of the Multiple-MLP Ensemble Re-RX algorithm, and discuss near-future developments that will enable it to deliver much more accurate, concise, and comprehensible rule extraction from mixed datasets.
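    The recursive idea behind Re-RX can be illustrated with a toy sketch: extract an IF-THEN rule for a subset of the data, then recurse on the subsets the rule does not yet discriminate. Note this is only illustrative — the real Re-RX algorithm trains and prunes a neural network at each recursion step, whereas a simple purity-based split stands in for the pruned network here, and all attribute names and data are hypothetical.

    ```python
    # Toy sketch of Re-RX-style recursive rule extraction (illustrative only;
    # Re-RX proper trains and prunes a neural network per recursion step --
    # a purity-based attribute split stands in for it here).
    from collections import Counter

    def extract_rules(rows, labels, attrs, conds=None):
        """Recursively emit (conditions, class) rules over discrete attributes,
        mirroring Re-RX's subdivide-and-extract structure."""
        conds = conds or []
        counts = Counter(labels)
        majority, _ = counts.most_common(1)[0]
        if len(counts) == 1 or not attrs:      # pure subset or no attributes left
            return [(conds, majority)]
        # choose the attribute whose split yields the purest subsets
        def impurity(a):
            groups = {}
            for r, y in zip(rows, labels):
                groups.setdefault(r[a], []).append(y)
            return sum(len(g) - Counter(g).most_common(1)[0][1]
                       for g in groups.values())
        best = min(attrs, key=impurity)
        rules = []
        for v in {r[best] for r in rows}:      # recurse on each attribute value
            sub = [(r, y) for r, y in zip(rows, labels) if r[best] == v]
            srows, slabels = zip(*sub)
            rules += extract_rules(list(srows), list(slabels),
                                   [a for a in attrs if a != best],
                                   conds + [f"{best} = {v}"])
        return rules

    # hypothetical credit-screening data, in the spirit of the financial domain
    rows = [{"employed": "yes", "owns_home": "yes"},
            {"employed": "yes", "owns_home": "no"},
            {"employed": "no",  "owns_home": "yes"},
            {"employed": "no",  "owns_home": "no"}]
    labels = ["approve", "approve", "approve", "reject"]
    for conds, cls in extract_rules(rows, labels, ["employed", "owns_home"]):
        print("IF " + " AND ".join(conds) + f" THEN {cls}")
    ```

    The output is a flat, human-readable rule set, which is the comprehensibility advantage the survey emphasises over opaque ensemble predictions.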

    Soft computing-based methods for semantic service retrieval

    University of Technology Sydney, Faculty of Engineering and Information Technology. Nowadays, a large number of business services are advertised to customers via online channels. To access the published services, customers typically search for them using search engines. Consequently, in order to meet customers' needs, many researchers have focused on improving the performance of the retrieval process. In the recent past, semantic technologies have played an important role in service retrieval and service querying. A service retrieval system consists of two main processes: service annotation and service querying. Annotating services semantically enables machines to understand the purpose of services, while semantic service querying helps machines expand user queries by considering the meanings of query terms and retrieve services relevant to those queries. Because both processes deal with the semantics of services and queries, they can further assist in intelligent and precise service retrieval, selection, and composition. In terms of semantic service annotation, a key issue is the manual nature of the process. Manual service annotation not only requires a large amount of time, but the annotations are updated infrequently and may therefore become out-of-date as service descriptions change. Although some researchers have studied semantic service annotation, they have focused only on Web services, not business service information. Moreover, their approaches are semi-automated, so service providers are still required to select appropriate service annotations. Similarly, the existing literature on semantic service querying has focused on processing Web pages or Web services, not business service information. In addition, because of the ubiquity, heterogeneity, and ambiguity of services, soft computing methods offer an interesting solution for handling complex tasks in service retrieval.
Unfortunately, based on the literature review, no soft-computing-based methods have been used for semantic service annotation or semantic service querying. In this research, intelligent soft-computing-driven methods are developed to improve the performance of a semantic retrieval system for business services. The research comprises three main parts: intelligent methods for semantically annotating services, for querying service concepts, and for retrieving services based on relevant concepts. Furthermore, a prototype of a service retrieval system is built to validate the developed methods. The research proposes three semantic-based methods, ECBR, Vector-based, and Classification-based, for accomplishing these parts. The experimental results show that the Classification-based method, which is built on soft-computing techniques, performs well in service annotation and outperforms both the ECBR and Vector-based methods in service querying and service retrieval.
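The abstract's "Vector-based" method is a vector-space retrieval scheme; a minimal sketch of that general idea is below, assuming TF-IDF weighting with cosine similarity (the thesis's exact weighting may differ, and the service descriptions are invented for illustration).

```python
# Minimal vector-space service retrieval sketch: TF-IDF weights plus
# cosine similarity between a query and short service descriptions.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF weight dicts for a list of tokenised documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# hypothetical business-service descriptions
services = ["mobile phone repair service",
            "home cleaning service",
            "phone screen replacement"]
vecs = tfidf_vectors([s.split() for s in services])

def retrieve(query):
    """Score the query against every service description and rank them."""
    qvec = dict(Counter(query.split()))             # raw TF for the query
    ranked = sorted(range(len(services)),
                    key=lambda i: -cosine(qvec, vecs[i]))
    return [services[i] for i in ranked]

print(retrieve("phone repair"))
```

A classification-based method, by contrast, would learn to map descriptions to service concepts rather than rank by raw term overlap, which is why it can outperform purely lexical vector matching.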

    When Eye-Tracking Meets Cognitive Modeling: Applications to Cyber Security Systems

    Human cognitive modeling techniques and related software tools have been widely used by researchers and practitioners to evaluate the effectiveness of user interface (UI) designs and related human performance. However, they are rarely used in the cyber security field, despite the fact that human factors have been recognized as a key element of cyber security systems. For a cyber security system involving a relatively complicated UI, it can be difficult to build a cognitive model that accurately captures the different cognitive tasks involved in all user interactions. Using a moderately complicated user authentication system as an example and CogTool as a typical cognitive modeling tool, this paper aims to provide insights into the use of eye-tracking data for building cognitive models of user tasks more effectively and accurately. We used visual scan paths extracted from an eye-tracking user study to facilitate the design of cognitive modeling tasks. This allowed us to reproduce some insecure human behavioral patterns observed in previous lab-based user studies on the same system, and, more importantly, we also found some unexpected new results about human behavior. The comparison between human cognitive models with and without eye-tracking data suggests that eye-tracking data can provide useful information to facilitate the process of human cognitive modeling as well as to achieve a better understanding of security-related human behaviors. In addition, our results demonstrated that cyber security research can benefit from a combination of eye-tracking and cognitive modeling to study human-behavior-related security problems.
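Visual scan paths like those the paper extracts are commonly compared as strings of fixated areas of interest (AOIs) using string-edit distance. The sketch below shows that standard measure; the AOI labels and sequences are hypothetical, not taken from the paper's study.

```python
# Comparing two visual scan paths encoded as AOI label strings with
# Levenshtein edit distance -- a standard scan-path similarity measure.
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def scanpath_similarity(p, q):
    """Normalised similarity in [0, 1]; 1.0 means identical scan paths."""
    return 1.0 - edit_distance(p, q) / max(len(p), len(q))

# hypothetical AOI encodings of two users on an authentication screen
expert = "ABCD"   # e.g. URL bar -> security indicator -> form -> submit
novice = "ACCD"   # skips the security indicator
print(scanpath_similarity(expert, novice))
```

Clustering users by such similarity scores is one way scan-path data can feed back into which cognitive-model variants (e.g. secure vs. insecure task sequences) are worth building.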

    Simulating human detection of phishing websites: An investigation into the applicability of ACT-R cognitive behaviour architecture model

    The prevalence and effectiveness of phishing attacks, despite the presence of a vast array of technical defences, are due largely to the fact that attackers are ruthlessly targeting what is often referred to as the weakest link in the system – the human. This paper reports the results of an investigation into how end users behave when faced with phishing websites and how this behaviour exposes them to attack. Specifically, the paper presents a proof-of-concept computer model for simulating human behaviour with respect to phishing website detection based on the ACT-R cognitive architecture, and draws conclusions as to the applicability of this architecture to human behaviour modelling within a phishing detection scenario. Following the development of a high-level conceptual model of the phishing website detection process, the study draws upon ACT-R to model and simulate the cognitive processes involved in judging the validity of a representative webpage, based primarily on the characteristics of the HTTPS padlock security indicator. The study concludes that despite the low-level nature of the architecture and its very basic user interface support, ACT-R possesses strong capabilities which map well onto the phishing use case, and that further work to more fully represent the range of human security knowledge and behaviours in an ACT-R model could lead to improved insights into how best to combine technical and human defences to reduce the risk to end users from phishing attacks.
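ACT-R models behaviour as production rules firing against the contents of perceptual and memory buffers. The toy loop below conveys only that rule-firing flavour in the paper's padlock-judgement scenario; it is a drastic simplification (real ACT-R has chunk retrieval, subsymbolic utilities, and timing, none of which is modelled), and the cue names are hypothetical.

```python
# Toy production-rule loop in the spirit of ACT-R's procedural module,
# applied to a phishing-judgement scenario. Illustrative only.
def judge_page(cues):
    """Fire simple IF-THEN productions against observed page cues and
    return a verdict. The first production whose condition matches fires."""
    productions = [
        (lambda c: not c.get("https_padlock"), "suspicious: no padlock"),
        (lambda c: c.get("url_mismatch"), "suspicious: URL mismatch"),
        (lambda c: c.get("https_padlock") and not c.get("url_mismatch"),
         "judged legitimate"),
    ]
    for condition, action in productions:
        if condition(cues):
            return action
    return "no rule fired"

print(judge_page({"https_padlock": True, "url_mismatch": False}))
print(judge_page({"https_padlock": False}))
```

Even this simplification shows the failure mode the paper targets: a user whose rule set keys only on the padlock will judge a phishing page with a valid certificate as legitimate.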

    Exploring attributes, sequences, and time in Recommender Systems: From classical to Point-of-Interest recommendation

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defence: 08-07-2021. Since the emergence of the Internet and the spread of digital communications throughout the world, the amount of data stored on the Web has been growing exponentially. In this new digital era, a large number of companies have emerged with the purpose of filtering the information available on the web and providing users with interesting items. The algorithms and models used to recommend these items are called Recommender Systems. These systems are applied to a large number of domains, from music, books, or movies to dating or Point-of-Interest (POI) recommendation, an increasingly popular domain where users receive recommendations of different places when they arrive in a city. In this thesis, we focus on exploiting contextual information, especially temporal and sequential data, and applying it in novel ways in both traditional and Point-of-Interest recommendation. We believe that this type of information can be used not only for creating new recommendation models but also for developing new metrics for analyzing the quality of these recommendations. In one of our first contributions we propose different metrics, some of them derived from previously existing frameworks, using this contextual information. Besides, we also propose an intuitive algorithm that is able to provide recommendations to a target user by exploiting the last common interactions with other similar users of the system. At the same time, we conduct a comprehensive review of the algorithms that have been proposed in the area of POI recommendation between 2011 and 2019, identifying the common characteristics and methodologies used. Once this classification of the algorithms proposed to date is completed, we design a mechanism to recommend complete routes (not only independent POIs) to users, making use of reranking techniques.
In addition, due to the great difficulty of making recommendations in the POI domain, we propose the use of data aggregation techniques that draw on information from different cities to generate POI recommendations in a given target city. In the experimental work we evaluate our approaches on different datasets belonging to both classical and POI recommendation. The results obtained in these experiments confirm the usefulness of our recommendation proposals, in terms of ranking accuracy and other dimensions like novelty, diversity, and coverage, and the appropriateness of our metrics for analyzing temporal information and biases in the recommendations produced.
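The thesis's idea of recommending from a target user's last common interactions with similar users can be sketched as a recency-aware neighbourhood method. Everything below is an assumption for illustration: similarity is taken as overlap of recent items, candidates come from neighbours' most recent unseen items, and the POI histories are invented.

```python
# Sketch of a recency-aware user-neighbourhood recommender: similar users
# are found via overlap of *recent* interactions, and their most recent
# unseen items are recommended with recency-weighted votes. Illustrative.
from collections import Counter

def recommend(histories, target, k=2, window=3, n=2):
    """histories maps user -> chronologically ordered list of visited POIs."""
    recent_t = set(histories[target][-window:])
    # rank neighbours by overlap between recent interaction windows
    sims = {u: len(recent_t & set(h[-window:]))
            for u, h in histories.items() if u != target}
    neighbours = sorted(sims, key=sims.get, reverse=True)[:k]
    seen = set(histories[target])
    scores = Counter()
    for u in neighbours:
        if sims[u] == 0:                      # ignore unrelated users
            continue
        for rank, item in enumerate(reversed(histories[u])):
            if item not in seen:
                scores[item] += sims[u] / (rank + 1)   # recency-weighted vote
    return [item for item, _ in scores.most_common(n)]

# hypothetical POI check-in histories
histories = {
    "alice": ["cafe", "museum", "park"],
    "bob":   ["museum", "park", "ramen_bar"],
    "carol": ["beach", "cinema", "mall"],
}
print(recommend(histories, "alice"))
```

Restricting both similarity and candidate scoring to the tail of each history is what distinguishes this from a plain user-kNN recommender, reflecting the thesis's emphasis on temporal and sequential context.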