
    Predicting fraud behaviour in online betting

    Project Work presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Information Analysis and Management.
    Fraud is not a new issue; it has been discussed since the beginning of commerce. With the advance of the Internet, fraud gained traction and became a billion-dollar business. There are many different types of online financial fraud: account takeover, identity theft, chargebacks, credit card fraud, etc. Online betting is one of the markets where fraud increases every day. In Portugal, the regulation of gambling and online betting was approved in April 2015. On the one hand, this legislation made it possible to explore this business in a controlled and regulated environment; on the other hand, it encouraged customers to develop new forms of fraud. Traditional data analysis used to detect fraud drew on domains such as economics, finance and law, but the complexity of these investigations soon made them obsolete. Because fraud is an adaptive crime, techniques from Data Mining and Machine Learning have been developed to identify and prevent it. The main goal of this project is to develop a predictive model, using a data mining approach and machine learning methods, able to identify and prevent online financial fraud in the Portuguese betting market, a new but already strong business.
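    For illustration, a minimal sketch of the kind of supervised fraud classifier such a project might train; the feature names, CSV file, and choice of scikit-learn's random forest are assumptions for the example, not the model actually developed in the thesis.

```python
# Hypothetical sketch: train a fraud classifier on labelled betting transactions.
# The feature names and the CSV path are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("betting_transactions.csv")  # assumed columns listed below
features = ["deposit_amount", "bet_frequency", "account_age_days",
            "chargeback_count", "distinct_payment_methods"]
X, y = df[features], df["is_fraud"]           # 1 = confirmed fraud

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                               random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```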

    Dataflow Programming and Acceleration of Computationally-Intensive Algorithms

    The volume of unstructured textual information continues to grow due to recent technological advancements. This has resulted in an exponential growth of information generated in various formats, including blogs, posts, social networking, and enterprise documents. Numerous Enterprise Architecture (EA) documents are also created daily, such as reports, contracts, agreements, frameworks, architecture requirements, designs, and operational guides. The processing and computation of this massive amount of unstructured information require substantial computing capabilities and the implementation of new techniques, and it is critical to manage it through a centralized knowledge management platform. Knowledge management is the process of managing information within an organization; it involves creating, collecting, organizing, and storing information in a way that makes it easily accessible and usable. The research involved the development of a textual knowledge management system, and two use cases were considered for extracting textual knowledge from documents. The first case study focused on the safety-critical documents of a railway enterprise. Safety is of paramount importance in the railway industry, and several EA documents, including manuals, operational procedures, and technical guidelines, contain critical information. Digitalization of these documents is essential for analysing the vast amount of textual knowledge they contain in order to improve the safety and security of railway operations. A case study was conducted between the University of Huddersfield and the Railway Safety Standard Board (RSSB) to analyse EA safety documents using Natural Language Processing (NLP). A graphical user interface was developed that includes various document processing features such as semantic search, document mapping, text summarization, and visualization of key trends. For the second case study, open-source data was utilized and textual knowledge was extracted. Several features were also developed, including kernel distribution, analysis of key trends, and sentiment analysis of words (such as unique, positive, and negative words) within the documents. Additionally, a heterogeneous framework was designed using CPUs, GPUs, and FPGAs to analyse the computational performance of document mapping.
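    For illustration, a minimal sketch of one of the named features, a semantic-search-style ranking of documents against a query using TF-IDF cosine similarity; the corpus and query are placeholders, and the thesis's actual NLP stack is not assumed here.

```python
# Illustrative sketch: rank documents against a query with TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Operational procedure for trackside signal maintenance.",
    "Technical guideline on rolling stock braking systems.",
    "Safety manual covering level crossing inspections.",
]
query = "signal maintenance safety"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)   # one row per document
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for doc, score in sorted(zip(documents, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {doc}")
```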

    Optimized task scheduling based on hybrid symbiotic organisms search algorithms for cloud computing environment

    In the Cloud Computing model, users are charged according to their usage of resources and the desired Quality of Service (QoS). Task scheduling algorithms are responsible for selecting an adequate set of resources to execute user applications in the form of tasks, and their scheduling decisions are based on the QoS requirements defined by the user. Task scheduling is an NP-Complete problem; because of this and the huge search space presented by large-scale problem instances, many existing solution algorithms incur high computational complexity and cannot effectively obtain globally optimal solutions. Recently, Symbiotic Organisms Search (SOS) has been applied to various optimization problems, and the results obtained were found to be competitive with state-of-the-art metaheuristic algorithms. However, as with other metaheuristic optimization algorithms, the efficiency of SOS deteriorates as the size of the search space increases. Moreover, SOS suffers from entrapment in local optima, and its static control parameters cannot maintain a balance between local and global search. In this study, Cooperative Coevolutionary Constrained Multi-objective Symbiotic Organisms Search (CC-CMSOS), Cooperative Coevolutionary Constrained Multi-objective Memetic Symbiotic Organisms Search (CC-CMMSOS), and Cooperative Coevolutionary Constrained Multi-objective Adaptive Benefit Factor Symbiotic Organisms Search (CC-CMABFSOS) algorithms are proposed to solve the constrained multi-objective large-scale task scheduling optimization problem in an IaaS cloud computing environment. To address scalability, CC-CMSOS applies the concept of Cooperative Coevolution to make SOS more efficient for solving large-scale task scheduling problems. CC-CMMSOS further improves the performance of SOS by hybridizing it with Simulated Annealing (SA) to avoid entrapment in local optima and achieve global convergence. Finally, CC-CMABFSOS adaptively tunes the SOS control parameters to balance the local and global search procedures for faster convergence. The performance of the proposed CC-CMSOS, CC-CMMSOS, and CC-CMABFSOS algorithms is evaluated on the CloudSim simulator, using both standard workload traces and synthesized workloads for larger problem instances of up to 5,000 tasks. Moreover, the three algorithms are compared with the multi-objective optimization algorithms EMS-C, ECMSMOO, and BOGA. The proposed algorithms obtained significantly improved optimal trade-offs between execution time (makespan) and financial cost while meeting deadline constraints, with no additional computational overhead. The performance improvements obtained in terms of hypervolume range from 8.72% to 37.95% across the workloads. Therefore, the proposed algorithms have the potential to improve the performance of QoS delivery.
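    For illustration, a minimal sketch of the mutualism phase of the basic SOS metaheuristic on a generic minimization problem; the sphere objective, bounds, and benefit factors are textbook defaults, not the constrained multi-objective scheduling formulation proposed in the thesis.

```python
# Illustrative sketch of the SOS mutualism phase for minimizing a simple objective.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))  # placeholder objective function

rng = np.random.default_rng(0)
dim, pop_size, lower, upper = 5, 20, -10.0, 10.0
population = rng.uniform(lower, upper, (pop_size, dim))
fitness = np.array([sphere(ind) for ind in population])

for _ in range(100):  # iterations
    best = population[np.argmin(fitness)]
    for i in range(pop_size):
        j = rng.choice([k for k in range(pop_size) if k != i])
        mutual = (population[i] + population[j]) / 2.0
        bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)  # benefit factors in {1, 2}
        new_i = np.clip(population[i] + rng.random(dim) * (best - mutual * bf1), lower, upper)
        new_j = np.clip(population[j] + rng.random(dim) * (best - mutual * bf2), lower, upper)
        for idx, candidate in ((i, new_i), (j, new_j)):
            f = sphere(candidate)
            if f < fitness[idx]:  # greedy replacement
                population[idx], fitness[idx] = candidate, f

print("best fitness found:", fitness.min())
```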

    High-tech automated bottling process for small to medium scale enterprises using PLC, SCADA and basic Industry 4.0 concepts

    The automation of industrial processes has been one of the greatest innovations in the industrial sector. It allows faster and more accurate operation of production processes while producing more output than old manual production techniques. In the beverage industry, this innovation was also well embraced, especially to improve bottling processes. However, it has been shown that continuous optimization of automation techniques, following the current trend of advanced automation, is the only way industrial companies will survive in a very competitive market. This is more challenging for small to medium-scale enterprises (SMEs), which are not always keen on adopting new technologies for fear of overspending their limited revenues. By holding back, SMEs expose themselves to limited growth and a vulnerable life cycle in this fast-growing automation world. The main contribution of this study was to develop practical and affordable applications that optimize the bottling process of an SME beverage plant by combining its existing production resources with basic principles of the current trend in automation, Industry 4.0 (I40). This research enabled the small beverage plant to achieve a higher production rate, better delivery times, and easy access to plant information through production forecasting using linear regression, predictive maintenance using a speed-vibration sensor, and decentralization of production monitoring via cloud applications. The plant's existing Siemens S7-1200 programmable logic controller (PLC) and ZENON supervisory control and data acquisition (SCADA) system were used to program the optimized process with very few additional resources. This study also opens doors for SMEs in general to use I40 in their production processes with available means and at limited cost.
    School of Computing. M.Tech (Engineering, Electrical)
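    For illustration, a minimal sketch of the production-forecasting idea mentioned above, fitting a linear regression to historical weekly output; the data values and the use of scikit-learn are assumptions for the example, not the plant's actual implementation.

```python
# Illustrative sketch: forecast next week's bottle output with a linear trend.
import numpy as np
from sklearn.linear_model import LinearRegression

weeks = np.arange(1, 13).reshape(-1, 1)  # 12 weeks of history
bottles = np.array([10200, 10450, 10300, 10800, 11050, 10900,
                    11200, 11350, 11500, 11400, 11700, 11850])  # weekly output (made-up)

model = LinearRegression().fit(weeks, bottles)
forecast = model.predict(np.array([[13]]))[0]
print(f"Forecast for week 13: {forecast:.0f} bottles")
```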

    Big data reference architecture for Industry 4.0: including economic and ethical implications

    The rapid progress in Industry 4.0 is achieved through innovations in several fields, e.g., manufacturing, big data, and artificial intelligence. The thesis motivates the need for a Big Data architecture to apply artificial intelligence in Industry 4.0 and presents a cognitive architecture for artificial intelligence, CAAI, as a possible solution that is especially suited to the challenges of small and medium-sized enterprises. The work examines the economic and ethical implications of these technologies and highlights the benefits as well as the challenges for countries, companies, and individual workers. The "Industry 4.0 Questionnaire for SMEs" was conducted to gain insights into small and medium-sized companies' requirements and needs. The new CAAI architecture presents a software design blueprint and provides a set of open-source building blocks to support companies during implementation. Different use cases demonstrate the applicability of the architecture, and the subsequent evaluation verifies its functionality.

    Power Quality in Electrified Transportation Systems

    "Power Quality in Electrified Transportation Systems" has covered interesting horizontal topics over diversified transportation technologies, ranging from railways to electric vehicles and ships. Although the attention is chiefly focused on typical railway issues such as harmonics, resonances and reactive power flow compensation, the integration of electric vehicles plays a significant role. The book is completed by some additional significant contributions, focusing on the interpretation of Power Quality phenomena propagation in railways using the fundamentals of electromagnetic theory and on electric ships in the light of the latest standardization efforts

    Clinical decision support: Knowledge representation and uncertainty management

    Doctoral Programme in Biomedical Engineering.
    Decision-making in clinical practice faces many challenges due to the inherent risks of being a health care professional. From medical error to undesired variations in clinical practice, the mitigation of these issues seems to be tightly connected to adherence to Clinical Practice Guidelines as evidence-based recommendations. The deployment of Clinical Practice Guidelines in computational systems for clinical decision support has the potential to positively impact health care. However, current approaches to Computer-Interpretable Guidelines evidence a set of issues that leave them wanting. These issues are related to the lack of expressiveness of their underlying models, the complexity of knowledge acquisition with their tools, the absence of support for the clinical decision-making process, and the style of communication of Clinical Decision Support Systems implementing Computer-Interpretable Guidelines. Such issues pose obstacles that prevent these systems from exhibiting properties like modularity, flexibility, adaptability, and interactivity. All these properties reflect the concept of living guidelines. The purpose of this doctoral thesis is, thus, to provide a framework that enables the expression of these properties. The modularity property is conferred by the ontological definition of Computer-Interpretable Guidelines and the assistance in guideline acquisition provided by an editing tool, allowing for the management of multiple knowledge patterns that can be reused. Flexibility is provided by the representation primitives defined in the ontology, meaning that the model is adjustable to guidelines from different categories and specialities. Adaptability, in turn, is conferred by mechanisms of Speculative Computation, which allow the Decision Support System not only to reason with incomplete information but also to adapt to changes of state, such as suddenly learning the missing information. The solution proposed for interactivity consists in embedding Computer-Interpretable Guideline advice directly into the daily routine of health care professionals and providing a set of reminders and notifications that help them keep track of their tasks and responsibilities. Together, these solutions make up the CompGuide framework for the expression of Clinical Decision Support Systems based on Computer-Interpretable Guidelines.
    The work of the PhD candidate Tiago José Martins Oliveira is supported by a grant from FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) with the reference SFRH/BD/85291/2012.
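    For illustration, a minimal sketch of the speculative-computation idea described above: a recommendation is computed from a default assumption for a missing clinical fact and revised once the real value arrives; the rule, default, and values are illustrative and not taken from CompGuide.

```python
# Illustrative sketch of speculative computation: answer with a default assumption,
# then revise the answer when the missing fact becomes known.
defaults = {"penicillin_allergy": False}  # assumed default for missing information
known_facts = {}                          # facts confirmed so far

def recommend(facts):
    allergy = facts.get("penicillin_allergy", defaults["penicillin_allergy"])
    return "prescribe amoxicillin" if not allergy else "prescribe alternative antibiotic"

# Speculative answer while the allergy status is still unknown.
print("tentative:", recommend(known_facts))

# The missing information arrives and contradicts the default: the answer is revised.
known_facts["penicillin_allergy"] = True
print("revised:  ", recommend(known_facts))
```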

    Demand planning practices in the Gauteng clothing industry

    The clothing industry is multifaceted and is characterised by garments with a short life cycle, unstable customer needs and varying fashion styles. This affects the accuracy of demand planning. In South Africa, the clothing industry has experienced a decline in the number of clothing manufacturers and in manufacturing output, as well as fluctuations in employment. This study investigates demand planning practices in the Gauteng clothing industry. A descriptive and exploratory study was conducted based on a semi-structured questionnaire. The structured data were descriptively analysed using SPSS and inferentially analysed using the Kruskal–Wallis test, while content analysis was used for the unstructured questions. The findings revealed that demand planning in the Gauteng clothing industry is conducted using the hierarchical and optimal demand planning approaches. The results also revealed that certain factors affect the way demand planning is conducted in the clothing industry in Gauteng; these factors include scheduling, fashion clothes, point-of-sale systems, imports, estimation, recession and lead time. Furthermore, the study revealed that the factors affecting demand planning differ across the three key clothing stakeholders (fabric suppliers, clothing manufacturers and fashion designers). The key demand planning practices employed in the Gauteng clothing industry are production planning, uncertainty prevention, forecasting and production machine capabilities; these practices are important attributes of the hierarchical and optimal demand planning approaches. The study recommends the hierarchical demand planning approach as more effective when planning for basic clothes (which involve a planning horizon of twelve months), while the optimal demand planning approach is effective when planning for fashion clothes (which involve a planning horizon of six months). The study also recommends that the Gauteng clothing industry consider the factors which affect demand planning when planning for customers' needs, as they affect the level of productivity in the organisation.
    Entrepreneurship, Supply Chain, Transport, Tourism and Logistics Management. M. Com. (Logistics)
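    For illustration, a minimal sketch of the kind of Kruskal–Wallis comparison described above, testing whether ratings of a demand-planning factor differ across the three stakeholder groups; the scores are invented for the example, and the study itself used SPSS rather than Python.

```python
# Illustrative sketch: Kruskal-Wallis test across three stakeholder groups.
from scipy.stats import kruskal

# Hypothetical Likert-scale ratings of how strongly "lead time" affects planning.
fabric_suppliers       = [4, 5, 3, 4, 5, 4]
clothing_manufacturers = [2, 3, 3, 2, 4, 3]
fashion_designers      = [5, 4, 5, 5, 4, 5]

statistic, p_value = kruskal(fabric_suppliers, clothing_manufacturers, fashion_designers)
print(f"H = {statistic:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Ratings differ significantly across the three stakeholder groups.")
```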