
    Integration of evolutionary algorithm in an agent-oriented approach for an adaptive e-learning

    This paper describes an agent-oriented approach that aims to create learning situations through problem solving. The proposed system is designed as a multi-agent system comprising interface, coordinator, information-source, and mobile agents. The objective of this approach is to have learners solve a problem that engages them in several learning activities, chosen according to their level of knowledge and preferences, in order to ensure adaptive learning and reduce the learner abandonment rate in an e-learning system. The search for learning activities is based on an evolutionary algorithm, namely a genetic algorithm, which offers learners an optimal solution adapted to their profiles while ensuring a resolution of the proposed learning problem. We adopted "immigration strategies" to improve the performance of the genetic algorithm. To show the effectiveness of the proposed approach, we carried out a comparative study with other artificial intelligence optimization methods. We also conducted a real experiment with primary school learners to test the effectiveness of the proposed approach and validate its operation. The experimental results showed a high rate of success and engagement among the learners who followed the proposed adaptive learning scenario.
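    The abstract gives no implementation details, so the following is only a minimal sketch of a genetic algorithm with a random-immigrants style immigration strategy; the activity encoding, fitness function, and parameter values are illustrative assumptions, not taken from the paper.

```python
import random

# Illustrative only: individuals are fixed-length sequences of activity IDs,
# and fitness is a placeholder scoring how well a sequence matches a profile.
ACTIVITIES = list(range(20))          # hypothetical pool of learning activities
GENOME_LEN, POP_SIZE, GENERATIONS = 6, 30, 50
IMMIGRANT_RATE = 0.2                  # fraction of population replaced each generation

def random_individual():
    return [random.choice(ACTIVITIES) for _ in range(GENOME_LEN)]

def fitness(ind, profile):
    # Placeholder: reward activities whose ID is close to the learner's level.
    return -sum(abs(a - profile["level"]) for a in ind)

def crossover(a, b):
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [random.choice(ACTIVITIES) if random.random() < rate else g for g in ind]

def evolve(profile):
    pop = [random_individual() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=lambda ind: fitness(ind, profile), reverse=True)
        survivors = pop[: POP_SIZE // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
        # Random-immigrants strategy: inject fresh individuals to preserve diversity.
        n_imm = int(IMMIGRANT_RATE * POP_SIZE)
        pop[-n_imm:] = [random_individual() for _ in range(n_imm)]
    return max(pop, key=lambda ind: fitness(ind, profile))

print(evolve({"level": 7}))
```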

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate its latest developments and applications in these disciplines. However, the literature lacks a comprehensive exploration of deep learning applications across all potential sectors. This paper therefore extensively investigates the potential applications of deep learning across all major fields of study, as well as the associated benefits and challenges. As evidenced in the literature, DL is accurate in prediction and analysis, which makes it a powerful computational tool, and it can learn representations and optimize itself, making it effective at processing data with little prior training; at the same time, it requires massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures such as LSTMs and GRUs can be utilized. For multimodal learning, the neural network needs neurons shared across all tasks as well as neurons specialized for particular tasks. Comment: 64 pages, 3 figures, 3 tables
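    As a concrete illustration of the gated architectures mentioned above, a single GRU cell can be written in a few lines of NumPy; the dimensions, random weights, and input sequence below are arbitrary assumptions for demonstration, not tied to any model discussed in the survey.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One GRU step: gates decide how much of the previous state to keep."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)   # candidate state
    return (1 - z) * h_prev + z * h_tilde                # new hidden state

# Toy dimensions: 4-dimensional input, 3-dimensional hidden state.
rng = np.random.default_rng(0)
in_dim, hid_dim = 4, 3
params = [rng.standard_normal(s) for s in
          [(hid_dim, in_dim), (hid_dim, hid_dim), hid_dim] * 3]
h = np.zeros(hid_dim)
for x in rng.standard_normal((5, in_dim)):               # a sequence of 5 inputs
    h = gru_cell(x, h, params)
print(h)
```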

    NEMISA Digital Skills Conference (Colloquium) 2023

    The colloquium and its associated events centred on the central role that data plays today as a desirable commodity that must become an important part of massifying digital skilling efforts. Governments amass ever more critical data that, if leveraged, could change the way public services are delivered and even change the social and economic fortunes of any country. Therefore, smart governments and organisations increasingly require data skills to gain insight and foresight, to secure themselves, and to improve decision making and efficiency. However, data skills are scarce, and even more challenging is the inconsistency of the associated training programs, with most curated for the Science, Technology, Engineering, and Mathematics (STEM) disciplines. Nonetheless, the interdisciplinary yet agnostic nature of data means that there is an opportunity to expand data skills into the non-STEM disciplines as well. College of Engineering, Science and Technology

    Exploring Text Mining and Analytics for Applications in Public Security: An in-depth dive into a systematic literature review

    Text mining and related analytics have emerged as a technological approach to support human activities by extracting useful knowledge from texts in several formats. From a managerial point of view, they can help organizations in planning and decision-making processes, providing information that was not previously evident in textual materials produced internally or even externally. In this context, within the public/governmental scope, public security agencies are great beneficiaries of the tools associated with text mining, in several respects, from applications in the criminal area to the collection of people's opinions and sentiments about the actions taken to promote their welfare. This article reports the details of a systematic literature review focused on identifying the main areas of text mining application in public security, the most recurrent technological tools, and future research directions. The searches covered four major article databases (Scopus, Web of Science, IEEE Xplore, and ACM Digital Library), selecting 194 materials published between 2014 and the first half of 2021, among journals, conferences, and book chapters. There were several findings concerning the objectives of the literature review, as presented in the results of this article.

    A review of natural language processing in contact centre automation

    Contact centres have long been highly valued by organizations. However, the COVID-19 pandemic highlighted their critical importance in ensuring business continuity, economic activity, and quality customer support. The pandemic led to an increase in customer inquiries related to payment extensions, cancellations, and stock availability, each with varying degrees of urgency. To address this challenge, organizations have taken the opportunity to re-evaluate the function of contact centres and explore innovative solutions. Next-generation platforms that incorporate machine learning techniques and natural language processing, such as self-service voice portals and chatbots, are being implemented to enhance customer service. These platforms offer robust features that equip customer agents with the necessary tools to provide exceptional customer support. Through an extensive review of the existing literature, this paper aims to uncover research gaps and explore the advantages of transitioning to a contact centre that uses natural language solutions as the norm. Additionally, we examine the major challenges faced by contact centre organizations and offer recommendations.
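    As an illustration of one building block such platforms typically rely on, the sketch below trains a tiny intent classifier over customer utterances; the utterances, intent labels, and scikit-learn pipeline are assumptions for demonstration only and do not come from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set of customer utterances and intents.
utterances = [
    "I need more time to pay my bill",
    "Can I get an extension on my payment",
    "Please cancel my subscription",
    "I want to cancel my order",
    "Is this item back in stock",
    "Do you have the blue model available",
]
intents = ["payment_extension", "payment_extension",
           "cancellation", "cancellation",
           "stock_inquiry", "stock_inquiry"]

# TF-IDF features plus a linear classifier: a common baseline for routing
# contact-centre requests before (or instead of) a human agent.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

print(model.predict(["could you cancel my contract please"]))
```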

    The emerging landscape of Social Media Data Collection: anticipating trends and addressing future challenges

    Social media has become a powerful tool for creating and sharing user-generated content across the internet. Its widespread use has generated an enormous amount of information, presenting a great opportunity for digital marketing. Through social media, companies can reach millions of potential consumers and capture valuable consumer data that can be used to optimize marketing strategies and actions. The potential benefits and challenges of using social media for digital marketing are also attracting growing interest in the academic community. While social media offers companies the opportunity to reach a large audience and collect valuable consumer data, the volume of information generated can lead to unfocused marketing and negative consequences such as social overload. To make the most of social media marketing, companies need to collect reliable data for specific purposes such as selling products, raising brand awareness, or fostering engagement, and to predict future consumer behaviour. The availability of quality data can help build brand loyalty, but consumers' willingness to share information depends on their level of trust in the company or brand requesting it. This thesis therefore aims to address this research gap through a bibliometric analysis of the field, a mixed-methods analysis of the profiles and motivations of users who provide their data on social media, and a comparison of supervised and unsupervised algorithms for clustering consumers. The research draws on a database of more than 5.5 million data collections over a ten-year period. Technological advances now enable sophisticated analysis and reliable predictions based on the captured data, which is especially useful for digital marketing. Several studies have explored digital marketing through social media, some focusing on a specific field while others take a multidisciplinary approach. However, owing to the rapidly evolving nature of the discipline, a bibliometric approach is required to capture and synthesize the most up-to-date information and add further value to studies in the field. The contributions of this thesis are therefore as follows. First, it provides a comprehensive review of the literature on methods for collecting consumers' personal data from social media for digital marketing and establishes the most relevant trends through an analysis of significant articles, keywords, authors, institutions, and countries. Second, it identifies which user profiles lie the most and why. Specifically, the research shows that some user profiles are more inclined to make mistakes, while others provide false information intentionally. The study also shows that the main motivations behind providing false information include fun and a lack of trust in data privacy and security measures. Finally, this thesis aims to fill the gap in the literature on which algorithm, supervised or unsupervised, can best group consumers who provide their data on social media in order to predict their future behaviour.
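    A minimal sketch of the kind of supervised-versus-unsupervised comparison described above might look as follows, using scikit-learn's KMeans against a decision tree on synthetic consumer features; the features, labels, and metrics are illustrative assumptions, not the thesis's data or methodology.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import train_test_split

# Synthetic consumer features (e.g. age, posts per week, purchases per month)
# and a known segment label, purely for illustration.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc, 1.0, size=(100, 3)) for loc in (0, 3, 6)])
y = np.repeat([0, 1, 2], 100)

# Unsupervised: KMeans groups consumers without using the labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("KMeans vs. true segments (ARI):", adjusted_rand_score(y, clusters))

# Supervised: a decision tree learns the segments from labelled examples.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("Decision tree accuracy:", tree.score(X_te, y_te))
```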

    Modern data analytics in the cloud era

    Cloud computing has been the groundbreaking technology of the last decade. The ease of use of the managed environment, in combination with a nearly unlimited amount of resources and a pay-per-use price model, enables fast and cost-efficient project realization for a broad range of users. Cloud computing also changes the way software is designed, deployed and used. This thesis focuses on database systems deployed in the cloud environment. We identify three major interaction points of the database engine with the environment that show changed requirements compared to traditional on-premise data warehouse solutions. First, software is deployed on elastic resources. Consequently, systems should support elasticity in order to match workload requirements and be cost-effective. We present an elastic scaling mechanism for distributed database engines, combined with a partition manager that provides load balancing while minimizing partition reassignments in the case of elastic scaling. Furthermore, we introduce a buffer pre-heating strategy that mitigates the cold start after scaling and delivers an immediate performance benefit from the newly provisioned resources. Second, cloud-based systems are accessible and available from nearly everywhere. Consequently, data is frequently ingested from numerous endpoints, which differs from bulk loads or ETL pipelines in a traditional data warehouse solution. Many users do not define database constraints in order to avoid transaction aborts due to conflicts or to speed up data ingestion. To mitigate this issue we introduce the concept of PatchIndexes, which allow the definition of approximate constraints. PatchIndexes maintain exceptions to constraints, make them usable in query optimization and execution, and offer efficient update support. The concept can be applied to arbitrary constraints, and we provide examples of approximate uniqueness and approximate sorting constraints. Moreover, we show how PatchIndexes can be exploited to define advanced constraints such as an approximate multi-key partitioning, which offers robust query performance over workloads with different partition key requirements. Third, data-centric workloads have changed over the last decade. Besides traditional SQL workloads for business intelligence, data science workloads are of significant importance nowadays. In these cases the database system might act only as a data provider, while the computational effort takes place in data science or machine learning (ML) environments. As this workflow has several drawbacks, we pursue the goal of pushing advanced analytics towards the database engine and introduce the Grizzly framework as a DataFrame-to-SQL transpiler. Building on this, we identify user-defined functions (UDFs) and machine learning inference as important tasks that would benefit from a deeper engine integration, and we investigate approaches to push these operations towards the database engine.
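    The PatchIndex implementation itself is not included here, but the core idea of an approximate constraint that records exceptions instead of rejecting them can be sketched as follows; the class name, data structures, and threshold check are illustrative assumptions only, not the thesis's actual design.

```python
class ApproxUniqueIndex:
    """Sketch of a PatchIndex-like structure for an approximately unique column:
    instead of rejecting duplicates, it records the violating row IDs as 'patches',
    so a query optimizer could treat the column as unique for all non-patch rows."""

    def __init__(self):
        self.seen = {}           # value -> first row id holding it
        self.exceptions = set()  # row ids violating uniqueness

    def insert(self, row_id, value):
        if value in self.seen:
            self.exceptions.add(row_id)   # keep the row, remember the violation
        else:
            self.seen[value] = row_id

    def is_clean(self, max_exception_ratio, total_rows):
        # A planner might rely on the constraint only while violations stay rare.
        return len(self.exceptions) / max(total_rows, 1) <= max_exception_ratio


idx = ApproxUniqueIndex()
for rid, val in enumerate(["a", "b", "c", "b", "d"]):
    idx.insert(rid, val)
print(idx.exceptions, idx.is_clean(0.25, 5))   # {3} True
```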

    Evaluating Copyright Protection in the Data-Driven Era: Centering on Motion Picture's Past and Future

    Since the 1910s, Hollywood has measured audience preferences with rough, industry-created methods. In the 1940s, scientific audience research led by George Gallup began conducting film audience surveys with traditional statistical and psychological methods. However, the quantity, quality, and speed were limited. Things changed dramatically in the internet age. The prevalence of digital data increases the immediacy, convenience, breadth, and depth of collecting audience and content data. Advanced data and AI technologies have also allowed machines to provide filmmakers with ideas or even produce human-like expressions. This brings new copyright challenges in the data-driven era. Massive amounts of text and data are the premise of text and data mining (TDM), as well as the admission ticket to machine learning technologies. Given the high and uncertain copyright-violation risks in the data-driven creation process, whoever controls the copyrighted film materials can monopolize the data and AI technologies used to create motion pictures in the data-driven era. Considering that copyright should not be the gatekeeper to new technological uses that do not impair the original uses of copyrighted works in existing markets, this study proposes creating a TDM and model-training limitation or exception to copyright and recommends the Singapore legislative model. Motion pictures, as public entertainment media, have inherently limited creative choices. Identifying the human original expression components of data-driven works is also challenging. This study proposes establishing a voluntarily negotiated license institution, backed up by a compulsory license, to enable other filmmakers to reuse film materials in new motion pictures. The film material's degree of human original authorship, certified by film artists' guilds, should be a crucial factor in deciding the compulsory license's royalty rate and terms, so as to encourage retaining human artists. This study argues that international and domestic policymakers should enjoy broad discretion to qualify data-driven works' copyright protection because data-driven work is a new category of work. It would be too late to wait until ubiquitous data-driven works block human creative freedom and floods of data-driven copyright litigation overwhelm the judicial systems.

    Integrating Big Data Analytics with U.S. SEC Financial Statement Datasets and the Critical Examination of the Altman Z’-Score Model

    The main aim of this thesis is to document the process of developing big data analytical applications and their integration with financial statement datasets. These datasets are publicly available on the U.S. SEC (Securities and Exchange Commission) website, which contains the annual and quarterly reports of approximately 8,000 companies. Through its Electronic Data Gathering, Analysis and Retrieval (EDGAR) system, the SEC receives several terabytes of data in the mandatory filings from its registrants. This vast amount of data can potentially provide a valuable resource for parties (such as investors, analysts, regulators and researchers) interested in assessing the financial performance and position of companies. Traditionally, the quarterly and annual reports were submitted as standard PDF, HTML and text files. The data from these files could be manually extracted and analysed, but this process (still used by some analysts and researchers) is costly and time-consuming. In 2009, the SEC mandated all listed companies to use a digital reporting format known as XBRL (eXtensible Business Reporting Language), with the intention of improving financial reporting in terms of transparency and efficiency. In order to take advantage of the structured data contained in the XBRL format, a variety of methods such as novel extraction algorithms and data mining techniques have been developed. However, several limitations and issues have emerged, including a lack of automated connectivity between the EDGAR web interface and the terms used in structured taxonomies, and the inability to access multiple files in a single query. Given the challenging and complex nature of these issues, this research project used the financial statement datasets available on the SEC website to extract relevant financial information from companies' annual reports. The novel aspect of this research is the provision of big data analytical applications, using cloud technologies, that can efficiently perform dataset integration and transformation into a format suitable for further analysis. As a result, the extracted financial data can be analysed to assess the performance of companies, which facilitates the critical examination of widely used credit assessment models such as the Altman Z'-Score.
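    Since the thesis critically examines the Altman Z'-Score, a small sketch of how the score could be computed from extracted financial statement items may help; the coefficients below are the commonly cited revised (private-firm) Z'-Score weights, and the field names and sample figures are hypothetical, not taken from the SEC datasets used in the thesis.

```python
def altman_z_prime(wc, re, ebit, book_equity, sales, total_assets, total_liabilities):
    """Revised Altman Z'-Score (private-firm variant) from financial statement items.
    Commonly cited form: Z' = 0.717*X1 + 0.847*X2 + 3.107*X3 + 0.420*X4 + 0.998*X5."""
    x1 = wc / total_assets                 # working capital / total assets
    x2 = re / total_assets                 # retained earnings / total assets
    x3 = ebit / total_assets               # EBIT / total assets
    x4 = book_equity / total_liabilities   # book value of equity / total liabilities
    x5 = sales / total_assets              # sales / total assets
    return 0.717 * x1 + 0.847 * x2 + 3.107 * x3 + 0.420 * x4 + 0.998 * x5

# Hypothetical figures (in millions) for a single company-year.
print(round(altman_z_prime(wc=120, re=300, ebit=80, book_equity=450,
                           sales=900, total_assets=1000, total_liabilities=550), 3))
```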