
    Challenges and Opportunities in Applying Semantics to Improve Access Control in the Field of Internet of Things

    The growing number of IoT devices continuously generates massive amounts of raw data. Parts of this data are private and highly sensitive, as they reflect the owner's behavior, obligations, habits, and preferences. In this paper, we argue that flexible and comprehensive access control policies are a must in the IoT domain. Semantic Web technologies can address many of the challenges that IoT access control faces today. We therefore analyze the current state of the art in this area and identify the challenges and opportunities for improved access control in a semantically enriched IoT environment. Applying semantics to IoT access control opens up many opportunities, such as semantic inference and reasoning, easier data sharing and trading, new approaches to authentication, security policies expressed in natural language, and enhanced interoperability through a common ontology.
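
    As a rough illustration of the direction described above (not taken from the paper), the sketch below encodes a device, its owner, and a caregiver relation in RDF with Python's rdflib and answers an access request with a SPARQL query; the vocabulary (ex:ownedBy, ex:caregiverOf) and the policy itself are hypothetical placeholders.

    ```python
    # Minimal sketch: semantic access control for IoT with rdflib (hypothetical vocabulary).
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/iot#")
    g = Graph()
    g.bind("ex", EX)

    # Facts: a sensor, its owner, and a caregiver relationship.
    g.add((EX.tempSensor1, EX.ownedBy, EX.alice))
    g.add((EX.bob, EX.caregiverOf, EX.alice))

    # Policy: anyone who is a caregiver of a device's owner may read that device's data.
    # A full reasoner could infer richer permissions; here a SPARQL ASK stands in for it.
    query = """
    ASK {
        ?device ex:ownedBy ?owner .
        ?requester ex:caregiverOf ?owner .
    }
    """
    result = g.query(
        query,
        initNs={"ex": EX},
        initBindings={"device": EX.tempSensor1, "requester": EX.bob},
    )
    print("access granted" if result.askAnswer else "access denied")
    ```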

    Digital Teachers-Classrooms Without Borders Created with Digital Technologies

    In today's world, the defining attribute of the 21st-century citizen is digital competence. The ability of citizens to manage their daily lives and find jobs is directly related to their digital skills. Artificial intelligence technologies have become an indispensable part of our daily lives, although we are often not aware of it. These technologies are present on every platform, through different devices and applications; smart home appliances, autonomous cars, and smartphone applications are all examples of artificial intelligence technologies. Citizens must be able to use these technologies in order to participate in daily life and employment. A citizen can be said to have digital competence if he or she has all of the skills specified in the European Digital Competence Framework. For European citizens to acquire this competence, digital transformation is essential, especially in the education sector, where formal and informal institutions and organizations provide lifelong learning from kindergarten to doctorate and beyond. Today, learning has become more individual and more specific. Artificial intelligence-based learning and teaching approaches create learning opportunities for individuals at their own pace and capacity. For students to acquire these skills, teachers must first create artificial intelligence-based learning environments. Our project focuses on developing teachers' digital skills and on building interactive, artificial intelligence-based learning environments geared toward individual learning. For artificial intelligence to be used in education, teachers and those working in other teaching professions must have advanced digital skills. We have therefore planned a systematic training process, progressing from simple to advanced, through which teachers gain skills in using artificial intelligence. In this context, teachers will improve their software and robotics design skills through training on the use of Web 3.0 tools, the application of the STEAM approach, robotics and coding, and virtual reality. After gaining these skills, they will carry out artificial intelligence applications, learning tools and languages such as Plickers, Python, machine learning, and Canvas. The project training will produce the following results: sustainable education material, a mobile application, an e-learning platform, and a teacher guide created with new methodologies. The e-learning platform will be based on artificial intelligence and will span different disciplines, reflecting the STEAM approach. Realizing all of this requires transnational mobility, because each of our partners has expertise in one of the digital areas mentioned above; to benefit from the expertise of our European partners, the project needs to be financed.

    Knowledge Graph Based Recommender for Automatic Playlist Continuation

    In this work, we present a state-of-the-art solution for automatic playlist continuation through a knowledge graph-based recommender system. By integrating representation learning with graph neural networks and fusing multiple data streams, the system effectively models user behavior, leading to accurate and personalized recommendations. We provide a systematic and thorough comparison of our results with existing solutions, demonstrating the potential of graph-based representations for improving recommender systems. Our experiments reveal substantial improvements over existing approaches, further validating the efficacy of the proposed method. Additionally, through comprehensive evaluation, we highlight the robustness of our solution in handling dynamic user interactions and streaming data scenarios, showcasing its practical viability and promising prospects for next-generation recommender systems.
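
    The abstract does not spell out the model, but the general idea of a graph-based playlist recommender can be sketched as follows: treat playlists and tracks as nodes of a bipartite interaction graph, propagate embeddings over its edges, and rank candidate tracks by similarity to the playlist embedding. The sketch below (plain PyTorch, LightGCN-style propagation) is a hypothetical illustration, not the authors' architecture; node counts, dimensions, and the edge layout are assumptions.

    ```python
    import torch

    # Hypothetical sizes: 1,000 playlists, 5,000 tracks, 64-dimensional embeddings.
    num_playlists, num_tracks, dim = 1_000, 5_000, 64
    playlist_emb = torch.nn.Embedding(num_playlists, dim)
    track_emb = torch.nn.Embedding(num_tracks, dim)

    # Bipartite interaction graph: an edge (p, t) means track t occurs in playlist p.
    edges = torch.tensor([[0, 0, 1], [10, 42, 42]])  # row 0: playlist ids, row 1: track ids

    def propagate(p_emb, t_emb, edges, layers=2):
        """LightGCN-style message passing: average neighbor embeddings at each layer."""
        p, t = p_emb.weight, t_emb.weight
        for _ in range(layers):
            p_new = torch.zeros_like(p).index_add_(0, edges[0], t[edges[1]])
            t_new = torch.zeros_like(t).index_add_(0, edges[1], p[edges[0]])
            p_deg = torch.bincount(edges[0], minlength=p.size(0)).clamp(min=1).unsqueeze(1)
            t_deg = torch.bincount(edges[1], minlength=t.size(0)).clamp(min=1).unsqueeze(1)
            p, t = p_new / p_deg, t_new / t_deg
        return p, t

    p_final, t_final = propagate(playlist_emb, track_emb, edges)

    # Continue playlist 0: score every track by dot product and take the top 10.
    scores = t_final @ p_final[0]
    top_tracks = torch.topk(scores, k=10).indices
    print(top_tracks)
    ```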

    EU E-Health and North Macedonia: From Current Practice to Implementation

    From 2020 to 2022, Goce Delcev University, as a mono-beneficiary grant recipient, will conduct the Jean Monnet Project titled EU E-Health and North Macedonia: From Current Practice to Implementation. The project is supported by the Erasmus+ Jean Monnet Action of the European Union and by Goce Delcev University in Shtip, Republic of North Macedonia. Background of the project: Traditional healthcare is changing in North Macedonia. Mobile health delivery, personalized medicine, and social media health applications are creating a new landscape of information and communication technologies aimed at improving healthcare, or 'eHealth'. This new landscape is taking shape against the backdrop of existing national laws and regulations in North Macedonia, as well as European Union ("EU") Directives and Decisions. It is imperative that future eHealth developers, sellers, and service providers, the stakeholders in the area of eHealth, are aware of the restraints and requirements that these laws, regulations, and decisions impose. Moreover, North Macedonia should share the Member States' commitment to strengthening the healthcare system and should aspire to align with the trend of transforming health services in order to meet the health challenges of the 21st century and to move towards the EU Digital Transformation of Health and Care in the Digital Single Market. By raising awareness of the relevant rights of each stakeholder under eHealth law, particularly the EU policies, strategies, Directives, and Decisions, the current EU legal framework, and the law-making mechanisms relevant to EU Digital Health Law, there is a greater likelihood that the transformation from traditional medical delivery to e-Health will succeed in North Macedonia, especially on its trajectory to becoming an EU member state. Likewise, it will help ensure that stakeholders are compliant with relevant laws and regulations concerning the delivery of health care, data protection and confidentiality, medical informatics, and ethics.

    PharmKE: Knowledge Extraction Platform for Pharmaceutical Texts Using Transfer Learning

    Even though named entity recognition (NER) has seen tremendous development in recent years, some domain-specific use cases still require tagging of unique entities, which is not well handled by pre-trained models. Solutions based on enhancing pre-trained models or creating new ones are effective, but creating reliable labeled training data for them remains challenging. In this paper, we introduce PharmKE, a text analysis platform tailored to the pharmaceutical industry that uses deep learning at several stages to perform an in-depth semantic analysis of relevant publications. The proposed methodology is used to produce reliably labeled datasets leveraging cutting-edge transfer learning, which are later used to train models for specific entity labeling tasks. By building models for the well-known text-processing libraries spaCy and AllenNLP, this technique is used to find Pharmaceutical Organizations and Drugs in texts from the pharmaceutical domain. The PharmKE platform also uses the NER findings to resolve co-references of entities and examine the semantic linkages in each phrase, creating a foundation for further text analysis tasks such as fact extraction and question answering. Additionally, the knowledge graph created by DBpedia Spotlight for a given pharmaceutical text is expanded using the identified entities. The proposed methodology achieves about a 96% F1-score on the NER tasks, up to 2% better than the fine-tuned BERT and BioBERT models developed on the same dataset. The ultimate benefit of the platform is that pharmaceutical domain specialists can more easily inspect the knowledge extracted from the input texts thanks to the platform's visualization of the model findings. Likewise, the proposed techniques can be integrated into mobile and pervasive systems to give patients more relevant and comprehensive information from scanned medication guides. Similarly, it can provide preliminary insights to patients and even medical personnel on whether a drug from a different vendor is compatible with the patient's prescription medication.
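
    To make the kind of entity tagging described above concrete, the following sketch labels a pharmaceutical sentence with custom PHARM_ORG and DRUG entities using spaCy's EntityRuler. It is a hypothetical illustration, not the trained PharmKE models; the labels and patterns are made-up examples rather than the paper's label set.

    ```python
    # Hypothetical illustration of pharmaceutical entity tagging with spaCy.
    # Rule-based EntityRuler stands in for the trained NER models described above.
    import spacy

    nlp = spacy.blank("en")
    ruler = nlp.add_pipe("entity_ruler")
    ruler.add_patterns([
        {"label": "PHARM_ORG", "pattern": "Pfizer"},                 # phrase pattern
        {"label": "DRUG", "pattern": [{"LOWER": "ibuprofen"}]},      # token pattern
    ])

    doc = nlp("Pfizer announced a new formulation of ibuprofen last year.")
    for ent in doc.ents:
        print(ent.text, ent.label_)
    # Pfizer PHARM_ORG
    # ibuprofen DRUG
    ```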

    CafeteriaFCD Corpus: Food Consumption Data Annotated with Regard to Different Food Semantic Resources

    Despite the numerous studies involving food and nutrition data in the last decade, this domain remains low-resourced. Annotated corpora are very useful tools for researchers and domain experts, as well as for data scientists performing analyses. In this paper, we present the annotation process of food consumption data (recipes) with semantic tags from different semantic resources: the Hansard taxonomy, the FoodOn ontology, the SNOMED CT terminology, and the FoodEx2 classification system. FoodBase is an annotated corpus of food entities (recipes) that includes a curated version of 1,000 instances, considered a gold standard. In this study, we use the curated version of FoodBase and two different annotation approaches: the NCBO Annotator (for the FoodOn and SNOMED CT annotations) and the semi-automatic StandFood method (for the FoodEx2 annotations). The end result is a new version of the gold-standard FoodBase corpus, called the CafeteriaFCD (Cafeteria Food Consumption Data) corpus. This corpus contains food consumption data (recipes) annotated with semantic tags from the four external semantic resources mentioned above. With these annotations, data interoperability is achieved between five semantic resources from different domains. This resource can be further utilized for developing and training information extraction pipelines using state-of-the-art NLP approaches for tracing knowledge about food safety applications.
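
    The NCBO Annotator mentioned above is exposed through the BioPortal REST API; a minimal sketch of sending a recipe snippet to it for FoodOn annotation is shown below. The endpoint and parameter names follow the public BioPortal documentation as best understood here, the API key is a placeholder, and this is not the exact pipeline used to build the corpus.

    ```python
    # Minimal sketch: annotate a recipe snippet with FoodOn terms via the
    # BioPortal (NCBO) Annotator REST API. Requires a free BioPortal API key;
    # "YOUR_BIOPORTAL_API_KEY" is a placeholder, not a real credential.
    import requests

    API_URL = "https://data.bioontology.org/annotator"
    params = {
        "text": "Mix the flour with milk and two eggs, then bake for 20 minutes.",
        "ontologies": "FOODON",          # restrict matches to the FoodOn ontology
        "longest_only": "true",          # keep only the longest matching span
        "apikey": "YOUR_BIOPORTAL_API_KEY",
    }

    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()

    for annotation in response.json():
        cls = annotation["annotatedClass"]["@id"]   # IRI of the matched FoodOn class
        for match in annotation["annotations"]:
            print(match["text"], "->", cls)
    ```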