18 research outputs found

    Optimization of Computer Ontologies for Ecourses in Information and Communication Technologies

    A methodology is proposed for modifying computer ontologies (CO) for electronic courses (EC) in the field of information and communication technologies (ICT), intended for universities, schools, extracurricular institutions, and the professional retraining of specialists. The methodology represents the formal ontograph of a CO as a graph and applies graph techniques, supported by applied software (SW), to find optimal paths on the graph. A genetic algorithm (GA) drives the search for the optimal CO. This divides the ontograph into branches and makes it possible to compute a trajectory through the EC educational material that is optimal in a defined sense, taking the syllabus into account. An example is given for an ICT course syllabus, for a specific topic covering the design and use of databases. It is concluded that full implementation of this methodology requires a tool that automates the development of ECs and/or electronic textbooks. An algorithm and a prototype of software tools integrating machine methods for working with COs and graphs are also proposed.
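The abstract gives no implementation details, and the GA-based search itself is not reproduced here. A minimal sketch of the underlying step it describes, finding an optimal path through an ontograph of course topics, assuming the ontograph is a weighted digraph whose edge costs (a hypothetical choice) encode topic difficulty or syllabus relevance:

```python
import heapq

def best_trajectory(graph, start, goal):
    """Classical shortest-path search (Dijkstra) over an ontograph.
    graph: {node: [(neighbour, cost), ...]}. The cost weighting is an
    assumption; the paper's GA would search among such trajectories."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + weight, nxt, path + [nxt]))
    return None

# Hypothetical fragment of a database-design topic, as in the example:
graph = {
    "ER model": [("tables", 1), ("SQL", 4)],
    "tables": [("SQL", 1)],
    "SQL": [("normalisation", 2)],
}
```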

    Critical review of the e-loyalty literature: a purchase-centred framework

    Over the last few years, the concept of online loyalty has been examined extensively in the literature, and it remains a topic of constant inquiry for both academics and marketing managers. The tremendous development of the Internet for both marketing and e-commerce settings, in conjunction with the growing desire of consumers to purchase online, has promoted two main outcomes: (a) increasing numbers of Business-to-Customer companies running businesses online and (b) the development of a variety of different e-loyalty research models. However, current research lacks a systematic review of the literature providing a general conceptual framework on e-loyalty, which would help managers to understand their customers better, to take advantage of industry-related factors, and to improve their service quality. The present study is an attempt to critically synthesize results from multiple empirical studies on e-loyalty. Our findings illustrate that 62 instruments for measuring e-loyalty are currently in use, influenced predominantly by Zeithaml et al. (J Marketing. 1996;60(2):31-46) and Oliver (1997; Satisfaction: a behavioral perspective on the consumer. New York: McGraw Hill). Additionally, we propose a new general conceptual framework, which divides the antecedents of e-loyalty, according to the act of purchase, into pre-purchase, during-purchase and after-purchase factors. To conclude, a number of managerial implications are suggested to help marketing managers increase their customers' e-loyalty by making crucial changes in each purchase stage.

    Solving Isolated Nodes Problem in ZigBee Pro for Wireless Sensor Networks

    A wireless sensor network based on the ZigBee protocol consists of many sensor devices. Because nodes are distributed randomly, some sensor nodes may become isolated, particularly when the network is created. This research suggests two methods to overcome the isolated-node problem. The first distributes the isolated nodes among the router nodes that carry the fewest sensor nodes, which also minimises the computational overhead on the routers. The second calculates the distance between each isolated node and the routers, and attaches each node to the nearest router; this helps to minimise energy consumption. The results show that our approach solves the isolated-node problem with both methods, and a comparison shows that the second method is better in terms of energy consumption. In addition, the network can be scaled to a larger size.
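The two attachment policies can be sketched directly from the description. This is a hedged illustration, not the paper's implementation: node and router names, and the flat 2-D coordinate model, are assumptions.

```python
import math

def attach_least_loaded(isolated, routers, load):
    """Method 1: attach each isolated node to the router currently
    carrying the fewest sensor nodes (balances computational load).
    Note: mutates `load` as nodes are assigned."""
    assignment = {}
    for node in isolated:
        router = min(routers, key=lambda r: load[r])
        assignment[node] = router
        load[router] += 1
    return assignment

def attach_nearest(isolated, routers, pos):
    """Method 2: attach each isolated node to the geometrically
    nearest router (minimises transmit distance, hence energy)."""
    def dist(a, b):
        return math.hypot(pos[a][0] - pos[b][0], pos[a][1] - pos[b][1])
    return {n: min(routers, key=lambda r: dist(n, r)) for n in isolated}
```

Method 1 needs only each router's child count, while Method 2 needs node positions; the paper's finding that Method 2 saves more energy matches the intuition that transmit power grows with distance.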

    A risk index model for security incident prioritisation

    With thousands of incidents identified by security appliances every day, distinguishing important incidents from trivial ones is complicated. This paper proposes an incident prioritisation model, the Risk Index Model (RIM), based on risk assessment and the Analytic Hierarchy Process (AHP). The model uses indicators such as criticality, maintainability, replaceability, and dependability as decision factors to calculate incidents' risk index. The RIM was validated using the MIT DARPA LLDOS 1.0 dataset, and the results were compared against the combined priorities of the Common Vulnerability Scoring System (CVSS) v2 and Snort Priority. The experimental results show that 100% of incidents could be rated with RIM, compared to only 17.23% with CVSS. In addition, this study addresses the limitation of Snort's group priorities (high, medium and low) by quantitatively ranking, sorting and listing incidents according to their risk index. The study also investigates the effect of applying weighted indicators in the calculation of the risk index, as well as the effect of calculating them dynamically. The experiments show significant changes in the resultant risk index, as well as in some of the top priority rankings.
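The core idea, a weighted sum of indicator scores yielding a full quantitative ranking rather than three priority groups, can be sketched as follows. The indicator names come from the abstract; the 0-10 scoring scale and the example weights are assumptions, and the paper's actual AHP-derived weights and formula may differ.

```python
def risk_index(indicators, weights):
    """Risk index as a weighted sum of indicator scores.
    `weights` are assumed normalised to sum to 1 (e.g. AHP-derived)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * score for name, score in indicators.items())

def prioritise(incidents, weights):
    """Rank incidents (name -> indicator scores) by risk index, highest
    first, giving a full ordering instead of three priority groups."""
    return sorted(incidents,
                  key=lambda name: risk_index(incidents[name], weights),
                  reverse=True)
```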

    A publish/subscribe approach for implementing GAG's distributed collaborative business processes with high data availability

    With the ever-increasing development of the Internet and the diversification of communication media, there is growing interest in distributed business process models that focus on the exchanged data (or artifacts) to control and pilot processes. Guarded Attribute Grammars (GAG) is one such model; it stands out from the others by emphasising the central place of user decisions during process execution: it is both data-driven and user-centric. In this paper we present an approach to implementing distributed collaborative business processes modelled with GAG in which communication is done by publish/subscribe with redirection of subscriptions (pub/sub-RS). Pub/sub-RS, which we propose, guarantees high data availability during process execution by ensuring that an actor, perceived as a subscriber, always receives the data it needs to perform a task as soon as that data is produced. Moreover, if the data is semi-structured and produced collaboratively and incrementally by several actors, its subscribers are notified as soon as one of its components (a prefix) is produced, and are at the same time transparently subscribed to the remaining components (the suffix).
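The abstract describes pub/sub-RS only informally. A toy broker illustrating the redirection idea, where a publication carries a prefix plus the topic under which the suffix will later appear, and subscribers are transparently re-subscribed to that suffix topic, might look like this. All class, topic and method names are assumptions.

```python
class PubSubRS:
    """Toy sketch of publish/subscribe with redirection of
    subscriptions (pub/sub-RS); not the paper's implementation."""
    def __init__(self):
        self.subs = {}    # topic -> list of subscriber callbacks
        self.store = {}   # topic -> messages already published

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)
        # Replay stored messages so late subscribers still get the data.
        for msg in self.store.get(topic, []):
            self._deliver(callback, msg)

    def publish(self, topic, prefix, suffix_topic=None):
        msg = (prefix, suffix_topic)
        self.store.setdefault(topic, []).append(msg)
        for callback in list(self.subs.get(topic, [])):
            self._deliver(callback, msg)

    def _deliver(self, callback, msg):
        prefix, suffix_topic = msg
        callback(prefix)                      # notified as soon as the prefix exists
        if suffix_topic is not None:          # transparent redirection of the
            self.subscribe(suffix_topic, callback)  # subscription to the suffix
```

A subscriber to a semi-structured datum thus receives each produced component without ever subscribing to the suffix topics explicitly.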

    Guarded Attribute Grammars and Publish/Subscribe for implementing distributed collaborative business processes with high data availability

    With the ever-increasing development of the Internet and the diversification of communication media, the business processes of companies are increasingly collaborative and distributed. This contrasts with the traditional solutions deployed for their management, which are usually centralised and based on the activity flow or on the exchanged documents. Moreover, the users, who are usually the main actors in collaborations, are often relegated to second place. Recently, a distributed, data-driven and user-centric approach called Guarded Attribute Grammars (GAG) has been proposed for modelling such processes; it thus answers most of these limitations. In this paper, we present an approach for implementing business processes modelled with GAG in which communication is done by publish/subscribe with redirection of subscriptions (pub/sub-RS). Pub/sub-RS, which we propose, guarantees high data availability during process execution by ensuring that an actor, perceived as a subscriber, always receives the data it needs to perform a task as soon as that data is produced. Moreover, if the data is semi-structured and produced collaboratively and incrementally by several actors, its subscribers are notified as soon as one of its components (a prefix) is produced, and are at the same time transparently subscribed to the remaining components (the suffix).

    Caracterización del sistema institucional de ciencia, tecnología e innovación de la Universidad Libre Seccional Pereira

    This study characterises the institutional Science, Technology and Innovation (STI) system of the Universidad Libre Seccional Pereira (ULSP) in order to improve the social effectiveness of knowledge transfer within the Departmental STI System of Risaralda. The study identifies the factors that hinder the system's development and establishes those that strengthen it. Its purpose is to contribute to the generation of relevant knowledge in the three core functions of the University (teaching, research and social outreach), with the aim of improving systemic performance at both the institutional and territorial levels.

    A study on the relationship between airport privatisation and airport efficiency: An application of using AHP/DEA methods

    In order to deal with the competitive environment surrounding the air transport industry, civil aviation authorities have undertaken several approaches to improve airport efficiency, such as investing in infrastructure and privatising airport ownership or governance. Among these, airport privatisation has been implemented for around 25 years in the U.K., closely followed by other European countries; decision makers elsewhere, such as in the Asia-Pacific region, are now interested in privatisation and in evaluating its impact. The primary aim of this research is therefore to examine the relationship between airport privatisation and efficiency through an Airport Efficiency Evaluation System (AEES). The study covers Europe and the Asia-Pacific region, reflecting different attitudes towards the role of government in airport management. With Data Envelopment Analysis (DEA), the most popular method for assessing airport efficiency, a unit can appear efficient simply because of its pattern of inputs and outputs rather than any inherent efficiency. Using DEA alone may also not produce useful results, because different decision makers (for example, airport managers and airline companies) may weight the relative importance of inputs and outputs differently. A secondary aim is therefore to develop and demonstrate the applicability of different analysis techniques within the AEES. Analytic Hierarchy Process (AHP) analysis is adopted to calculate the importance of each variable, and the results are integrated into both DEA and DEA Assurance Region (AR) models to reflect the different importance of the metrics. In the context of air transportation, integrated AHP/DEA and AHP/DEA-AR models are applied for the first time to evaluate airport efficiency, and a sensitivity analysis with different variable sets is carried out. In conclusion, an AEES is established, and the results show that the AHP/DEA-AR model in particular provides more accurate values of relative efficiency than the traditional DEA approach. Priorities also differ between stakeholder groups, and these differences can affect the efficiency scores of airports. However, the results of each analysis technique show no statistically significant relationship between airport ownership and efficiency.
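The AHP step, deriving importance weights for the variables from a pairwise-comparison matrix, can be sketched with the row geometric-mean method, a common stand-in for the principal-eigenvector computation; whether the thesis uses this exact method is an assumption.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector of a square reciprocal
    pairwise-comparison matrix via row geometric means, normalised
    to sum to 1. For consistent matrices this equals the
    principal-eigenvector weights."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgement: input "runway length" is 3x as important
# as "staff numbers" to some stakeholder group.
weights = ahp_weights([[1, 3], [1/3, 1]])
```

Weights like these could then bound the input/output multipliers of a DEA Assurance Region model, which is how the AHP results constrain the efficiency scores.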

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn growing attention not only from computer science but also from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., of failing to mark true edges), accuracy, and a consistent response to a single edge. Moreover, most work in feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In an image processing subfield whose needs constantly change, researchers serious about addressing these limitations must be willing to step away from the usual approaches. In this dissertation, we propose an adaptive and robust, yet simple, methodology for detecting digital image features using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a signal into its two-dimensional (2D) bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is therefore dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment made this multi-feature detector possible. A further application of two morphological operators, binarisation and thinning, adds to the quality of the operator.
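The sifting process referenced above accepts a component as an intrinsic mode function when its numbers of extrema and zero crossings are equal or differ by at most one. A minimal sketch of that acceptance test in one dimension (the 2-D BIMF condition and envelope estimation are not reproduced here):

```python
def counts(signal):
    """Count local extrema and zero crossings of a 1-D sequence."""
    extrema = sum(
        1 for i in range(1, len(signal) - 1)
        if (signal[i] - signal[i - 1]) * (signal[i + 1] - signal[i]) < 0
    )
    zero_crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if a * b < 0
    )
    return extrema, zero_crossings

def is_imf_candidate(signal):
    """IMF condition: extrema and zero-crossing counts equal or
    differing by at most one (1-D analogue of the BIMF criterion)."""
    extrema, zero_crossings = counts(signal)
    return abs(extrema - zero_crossings) <= 1
```

An oscillatory component passes this test, while a signal still carrying a trend plus ripples does not, which is what drives further sifting.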

    Nature-inspired algorithms for solving some hard numerical problems

    Optimisation is a branch of mathematics developed to find the optimal solution to a given problem from among all possible ones. Optimisation techniques are currently employed in engineering, computing, and industrial problems, making optimisation a very active research area with a large number of published methods for solving specific problems to optimality. This dissertation focuses on the adaptation of two nature-inspired algorithms that, based on optimisation techniques, compute approximations to zeros of polynomials and roots of non-linear equations and systems of non-linear equations. Although many iterative methods for finding all the roots of a given function already exist, they usually require: (a) repeated deflations, which can lead to very inaccurate results because of accumulating rounding errors; (b) good initial approximations to the roots for the algorithm to converge; or (c) the computation of first- or second-order derivatives, which, besides being computationally intensive, is not always possible. These drawbacks motivated the use of Particle Swarm Optimisation (PSO) and Artificial Neural Networks (ANNs) for root finding, since they are known, respectively, for their ability to explore high-dimensional spaces (not requiring good initial approximations) and for their capability to model complex problems. Moreover, neither method needs repeated deflations or derivative information. The algorithms are described throughout this document and tested on a suite of hard numerical problems in science and engineering.
    The results were compared with several results available in the literature and with the well-known Durand–Kerner method, showing that both algorithms are effective at solving the numerical problems considered.
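The PSO-for-root-finding idea can be illustrated by minimising |f(x)| over an interval: no derivatives, no deflation, and no good initial guess are required. This is a generic textbook PSO sketch, not the dissertation's algorithm; the hyperparameter values are common defaults, chosen here as assumptions.

```python
import random

def pso_root(f, lo, hi, n_particles=30, iters=200, seed=1):
    """Find an approximate root of f on [lo, hi] by minimising |f(x)|
    with a basic Particle Swarm Optimisation loop."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social factors
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                         # each particle's best position so far
    gbest = min(xs, key=lambda x: abs(f(x)))
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # keep inside the interval
            if abs(f(xs[i])) < abs(f(pbest[i])):
                pbest[i] = xs[i]
                if abs(f(xs[i])) < abs(f(gbest)):
                    gbest = xs[i]
    return gbest
```

For example, `pso_root(lambda x: x * x - 2, 0, 2)` approximates the square root of 2; finding all roots of a polynomial would require restarting or niching strategies beyond this sketch.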