
    The technology acceptance of a TV platform for the elderly living alone or in public nursing homes

    In Mexico, many seniors are alone for most of the day or live in public nursing homes. Older people require simple interaction with computer systems, which is why we propose building on a medium they already know well: the television (TV). The primary objective of this study is to improve the quality of life of seniors through an easier reminder system that uses the television set. A technological platform based on interactive television was designed, through which seniors and their caregivers can better track their daily activities. Finally, an evaluation of technology adoption was performed with 50 seniors living in two public nursing homes. The evaluation found that the elderly perceived the system as useful and easy to use, and that they had a positive attitude towards it and a good intention to use it. This provided initial evidence that the system supported them in achieving a better quality of life by reminding them to take their medications and increasing their rate of attendance at their medical appointments.
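
    The abstract gives no implementation details; as a purely illustrative sketch of the core of such a reminder system (a schedule checked against the current time), the code below assumes a hypothetical data model and a five-minute display window, neither of which comes from the paper.

        # Illustrative sketch only: a minimal daily-reminder check such as a TV-based
        # care platform might run; all names and the schedule format are hypothetical.
        from dataclasses import dataclass
        from datetime import datetime, time

        @dataclass
        class Reminder:
            senior_id: str
            message: str   # e.g. "Take blood pressure medication"
            at: time       # local time of day the reminder should appear on the TV

        def due_reminders(reminders, now=None, window_minutes=5):
            """Return the reminders that should currently be shown on screen."""
            now = now or datetime.now()
            due = []
            for r in reminders:
                scheduled = datetime.combine(now.date(), r.at)
                if 0 <= (now - scheduled).total_seconds() <= window_minutes * 60:
                    due.append(r)
            return due

        if __name__ == "__main__":
            schedule = [Reminder("senior-01", "Take blood pressure medication", time(9, 0)),
                        Reminder("senior-01", "Medical appointment at the clinic", time(11, 30))]
            for r in due_reminders(schedule):
                print(f"[TV overlay] {r.message}")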

    iTVCare: A home care system for the elderly through interactive television

    Around the world, many older adults live alone for most of the day. This work proposes using the television set, a medium well known by seniors, to improve the way seniors and their caregivers track daily activities such as medication intake reminders. Two evaluations were held: a heuristic usability assessment early in the design process and an evaluation of technology adoption. Both evaluations generated initial evidence that the system supports elders in achieving a better quality of life.

    Blockchain applications in education: a systematic literature review

    Blockchain is one of the latest technologies attracting increasing attention from different actors in diverse fields, including the educational sector. The objective of this study is to offer an overview of the current state of the art related to blockchain in education that may serve as a reference for future initiatives in this field. To this end, a systematic review of reference journals was carried out. Eleven databases were systematically searched, and eligible papers that focused on blockchain in education and made significant contributions, rather than only generic statements about the topic, were selected. As a result, 28 articles were analyzed. Lack of precision as well as selection and analysis bias were minimized by involving three researchers. The analysis of the selected papers provided invaluable insight and answered the research questions posed about the current state of the application of blockchain in education, about which of its characteristics can benefit this sector, and about the challenges that must be addressed. Blockchain may become a relevant technology in the educational field, and therefore many proofs of concept are being developed. However, some relevant technological, regulatory and academic issues must still be addressed to pave the way for the mainstream adoption of this technology.

    NFTs for the issuance and validation of academic information that complies with the GDPR

    The issuance and verification of academic certificates face significant challenges in the digital era. The proliferation of counterfeit credentials and the lack of a reliable, universally accepted system for issuing and validating them pose critical issues in the educational domain. Certificates, traditionally issued by centralized educational institutions using their proprietary systems, pose challenges for straightforward verification, generating uncertainty about the credibility of academic achievements. In addition to diplomas issued by academic entities, it is now necessary in virtually all professional fields to stay updated and obtain accreditation for certain skills or experiences, which is a determining factor in securing or enhancing employment. Yet, there is no platform available to consistently demonstrate these capabilities and experiences. This article introduces a novel model for issuing and verifying academic information using non-fungible tokens (NFTs) supported by blockchain technologies, focused on compliance with the General Data Protection Regulation (GDPR). It describes a model that grants control to the data subject, enabling the management of information access while adhering to key GDPR principles. Simultaneously, it remains compatible with existing systems within organizations, and is flexible in certifying various types of academic information. The implications of this model are discussed, emphasizing the importance of addressing privacy in blockchain-based applications.
    Agencia Estatal de Investigación | Ref. TED2021-130828B-I0
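
    As a purely illustrative aside on how GDPR-aware designs of this kind typically keep personal data off the immutable ledger, the sketch below anchors a credential to a token by storing only a salted hash on-chain, while the data subject keeps the credential and the salt and decides who may verify it; the field names and workflow are assumptions, not the paper's actual model.

        # Illustrative sketch only: personal data stays off-chain; the NFT metadata
        # would carry only a salted fingerprint. Names and workflow are hypothetical.
        import hashlib, json, secrets

        def credential_fingerprint(credential: dict, salt: str) -> str:
            """Salted hash of the off-chain credential; only this value would be
            minted into the NFT metadata, so no personal data reaches the ledger."""
            canonical = json.dumps(credential, sort_keys=True)
            return hashlib.sha256((salt + canonical).encode("utf-8")).hexdigest()

        def verify(credential: dict, salt: str, on_chain_hash: str) -> bool:
            """A verifier, given the credential and salt by the data subject,
            recomputes the fingerprint and compares it with the token's value."""
            return credential_fingerprint(credential, salt) == on_chain_hash

        if __name__ == "__main__":
            cred = {"holder": "Jane Doe", "degree": "MSc Telecommunications",
                    "issuer": "Example University", "year": 2024}
            salt = secrets.token_hex(16)                     # kept off-chain by the holder
            token_hash = credential_fingerprint(cred, salt)  # stored in the NFT metadata
            print(verify(cred, salt, token_hash))            # True only if the credential is intact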

    Wikipedia-based hybrid document representation for textual news classification

    Automatic classification of news articles is a relevant problem due to the large amount of news generated every day, so it is crucial that this news be classified so that users can access information of interest quickly and effectively. On the one hand, traditional classification systems represent documents as bag-of-words (BoW), which is oblivious to two problems of language: synonymy and polysemy. On the other hand, several authors propose the use of a bag-of-concepts (BoC) representation of documents, which tackles synonymy and polysemy. This paper shows the benefits of using a hybrid representation of documents for the classification of textual news, leveraging the advantages of both approaches: the traditional BoW representation and a BoC approach based on Wikipedia knowledge. To evaluate the proposal, we used three of the most relevant algorithms in the state of the art (SVM, Random Forest and Naïve Bayes) and two corpora: the Reuters-21578 corpus and a purpose-built corpus, Reuters-27000. The results obtained show that the performance of the classification algorithm depends on the dataset used, and also demonstrate that enriching the BoW representation with the concepts extracted from documents through the semantic annotator adds useful information to the classifier and improves its performance. The experiments conducted show performance increases of up to 4.12% when classifying the Reuters-21578 corpus with the SVM algorithm and up to 49.35% when classifying the Reuters-27000 corpus with the Random Forest algorithm.
    Atlantic Research Center for Information and Communication Technologies
    Xunta de Galicia | Ref. R2014/034 (RedPlir)
    Xunta de Galicia | Ref. R2014/029 (TELGalicia)
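
    For reference, the BoW baseline that the hybrid approach is compared against can be expressed as a short scikit-learn pipeline; the documents and labels below are toy placeholders, not the Reuters corpora used in the paper.

        # Illustrative sketch only: a plain BoW baseline with one of the evaluated
        # algorithms (SVM), using scikit-learn. Toy data, not the Reuters corpora.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import Pipeline
        from sklearn.svm import LinearSVC

        news = ["Oil prices rise after supply cuts",
                "Central bank holds interest rates steady",
                "New vaccine shows promise in trials",
                "Hospital trials new treatment protocol"]
        labels = ["economy", "economy", "health", "health"]

        bow_clf = Pipeline([
            ("bow", TfidfVectorizer(stop_words="english")),  # bag-of-words features
            ("svm", LinearSVC()),                            # linear SVM classifier
        ])
        bow_clf.fit(news, labels)
        print(bow_clf.predict(["Interest rates and oil markets react"]))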

    Wikipedia-based hybrid document representation for textual news classification

    The sheer number of news items published every day makes it worthwhile to automate their classification. The common approach consists in representing news items by the frequency of the words they contain and using supervised learning algorithms to train a classifier. This bag-of-words (BoW) approach is oblivious to three aspects of natural language: synonymy, polysemy, and multiword terms. More sophisticated representations based on concepts, or units of meaning, have been proposed, following the intuition that document representations that better capture the semantics of text will lead to higher performance in automatic classification tasks. The reality is that, when classifying news items, the BoW representation has proven to be really strong, with several studies reporting it to perform above different ‘flavours’ of bag of concepts (BoC). In this paper, we propose a hybrid classifier that enriches the traditional BoW representation with concepts extracted from text, leveraging Wikipedia as background knowledge for the semantic analysis of text (WikiBoC). We benchmarked the proposed classifier, comparing it with BoW and several BoC approaches: Latent Dirichlet Allocation (LDA), Explicit Semantic Analysis, and word embeddings (doc2vec). We used two corpora: the well-known Reuters-21578, composed of newswire items, and a new corpus created ex professo for this study, the Reuters-27000. Results show that (1) the performance of concept-based classifiers is very sensitive to the corpus used, being higher on the more “concept-friendly” Reuters-27000; (2) the proposed Hybrid-WikiBoC approach offers performance increases over BoW of up to 4.12% and 49.35% when classifying the Reuters-21578 and Reuters-27000 corpora, respectively; and (3) in terms of average performance, the proposed Hybrid-WikiBoC outperforms all the other classifiers, achieving a performance increase of 15.56% over the best state-of-the-art approach (LDA) for the largest training sequence. The results indicate that concepts extracted with the help of Wikipedia add useful information that improves classification performance for news items.
    Atlantic Research Center for Information and Communication Technologies
    Xunta de Galicia | Ref. R2014/034 (RedPlir)
    Xunta de Galicia | Ref. R2014/029 (TELGalicia)
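
    As a purely illustrative sketch of the hybrid idea (BoW features concatenated with concept features before training a classifier), the code below stubs the concept annotator with a toy lexicon; the actual Wikipedia-based semantic annotator, the corpora and any tuning are not reproduced here.

        # Illustrative sketch only: BoW features combined with concept features, in the
        # spirit of a hybrid BoW+BoC representation. The annotator is a toy stub.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import Pipeline, FeatureUnion
        from sklearn.preprocessing import FunctionTransformer
        from sklearn.ensemble import RandomForestClassifier

        def annotate_concepts(docs):
            """Stub: map each document to a string of concept identifiers.
            A real implementation would call a Wikipedia-based semantic annotator."""
            lexicon = {"oil": "Petroleum", "bank": "Bank_(finance)", "vaccine": "Vaccine"}
            return [" ".join(c for w, c in lexicon.items() if w in d.lower()) for d in docs]

        hybrid = Pipeline([
            ("features", FeatureUnion([
                ("bow", TfidfVectorizer(stop_words="english")),        # word features
                ("boc", Pipeline([                                     # concept features
                    ("concepts", FunctionTransformer(annotate_concepts, validate=False)),
                    ("vec", TfidfVectorizer()),
                ])),
            ])),
            ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ])

        docs = ["Oil prices rise after supply cuts", "Central bank raises rates",
                "New vaccine shows promise", "Vaccine rollout expands to clinics"]
        labels = ["economy", "economy", "health", "health"]
        hybrid.fit(docs, labels)
        print(hybrid.predict(["Bank lending tightens as oil falls"]))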

    Heuristic evaluation of an IoMT system for remote health monitoring in senior care

    This paper presents the usability assessment of the design of an Internet of Medical Things (IoMT) system for older adults. The heuristic evaluation was held early in the design process to assess potential problems with the system. It proved to be an efficient method for finding issues with the application design and led to significant usability improvements on the IoMT platform.

    Conversational Agents for depression screening: a systematic review

    Objective: This work explores the advances in conversational agents aimed at the detection of mental health disorders, and specifically at the screening of depression. The focus is on agents based on voice interaction, but other approaches are also covered, such as text-based interaction or embodied avatars. Methods: PRISMA was selected as the systematic methodology for the analysis of the existing literature, which was retrieved from Scopus, PubMed, IEEE Xplore, APA PsycINFO, Cochrane, and Web of Science. Relevant research addresses the detection of depression using conversational agents, and the selection criteria include their effectiveness, usability, personalization, and psychometric properties. Results: Of the 993 references initially retrieved, 36 were finally included in our work. The analysis of these studies allowed us to identify 30 conversational agents that claim to detect depression, either specifically or in combination with other disorders such as anxiety or stress disorders. As a general approach, screening was implemented in the conversational agents by taking standardized or psychometrically validated clinical tests as a reference, and these tests were also used as a gold standard for their validation. The implementation of questionnaires such as the Patient Health Questionnaire or the Beck Depression Inventory, which are used in 65% of the articles analyzed, stands out. Conclusions: Intelligent conversational agents allow screening to be administered to different types of profiles, such as patients (33% of relevant proposals) and caregivers (11%), although in many cases a target profile is not clearly defined (66% of solutions analyzed). This study found 30 standalone conversational agents, but some proposals that combine several approaches for richer data acquisition were also explored. The interaction implemented in most relevant conversational agents is text-based, although the evolution is clearly towards voice integration, which in turn enhances their psychometric characteristics, as voice interaction is perceived as more natural and less invasive.
    Agencia Estatal de Investigación | Ref. PID2020-115137RB-I00
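
    As an illustration of how such agents typically turn questionnaire answers into a screening outcome, the sketch below scores a PHQ-9-style instrument (nine items, each answered 0 to 3) and maps the total to the standard severity bands; the conversational prompts themselves are omitted, and the result is screening support, not a diagnosis.

        # Illustrative sketch only: scoring a PHQ-9-style questionnaire as a
        # conversational agent might do after collecting answers turn by turn.
        SEVERITY_BANDS = [(0, "minimal"), (5, "mild"), (10, "moderate"),
                          (15, "moderately severe"), (20, "severe")]

        def phq9_severity(answers):
            """Sum the nine 0-3 item scores and map the total to a severity label."""
            if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
                raise ValueError("PHQ-9 expects nine answers scored 0-3")
            total = sum(answers)
            label = next(lab for cutoff, lab in reversed(SEVERITY_BANDS) if total >= cutoff)
            return total, label

        if __name__ == "__main__":
            # e.g. answers gathered during the dialogue; output: (9, 'mild')
            print(phq9_severity([1, 2, 1, 0, 2, 1, 1, 0, 1]))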

    SAgric-IoT: an IoT-based platform and deep learning for greenhouse monitoring

    The integration of the Internet of Things (IoT) and convolutional neural networks (CNNs) is a growing topic of interest for researchers, as a technology that will contribute to transforming agriculture. IoT will enable farmers to decide and act based on data collected from sensor nodes regarding field conditions, rather than purely on experience, thus minimizing the wastage of supplies (seeds, water, pesticides, and fumigants). CNNs, in turn, complement monitoring systems with tasks such as the early detection of crop diseases or the prediction of the consumable resources and supplies (water, fertilizers) needed to increase productivity. This paper proposes SAgric-IoT, a technology platform for precision agriculture based on IoT and CNNs, which monitors environmental and physical variables and provides early disease detection while automatically controlling irrigation and fertilization in greenhouses. The results show that SAgric-IoT is a reliable IoT platform with a low packet-loss level that considerably reduces energy consumption and achieves a disease identification and classification accuracy of over 90%.
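
    As an illustration of the kind of CNN that could back the disease-detection component, a compact Keras classifier is sketched below; the architecture, input size and class count are assumptions for the sake of the example, not the paper's actual model.

        # Illustrative sketch only: a small image classifier for leaf-disease detection.
        # Requires TensorFlow; the architecture and class count are hypothetical.
        import tensorflow as tf
        from tensorflow.keras import layers

        NUM_CLASSES = 4  # e.g. healthy plus three disease classes (assumed)

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(128, 128, 3)),   # RGB greenhouse images
            layers.Rescaling(1.0 / 255),           # normalize pixel values
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()
        # Training would use labeled crop images, e.g. loaded with
        # tf.keras.utils.image_dataset_from_directory("leaf_images/", image_size=(128, 128))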

    Implementing scripted conversations by means of smart assistants

    Smart assistants are among the most popular technological devices at home. With a built-in voice-based user interface, they provide access to a broad portfolio of online services and information, and constitute the central element of state-of-the-art home automation systems. This work discusses the challenges addressed and the solutions adopted for the design and implementation of scripted conversations by means of off-the-shelf smart assistants. Scripted conversations play a fundamental role in many application fields, such as call center facilities, retail customer services, rapid prototyping, role-based training, or the management of neuropsychiatric disorders. To illustrate this proposal, an actual implementation of the phone version of the Montreal Cognitive Assessment test as an Amazon Alexa skill is described as a proof of concept.
    Funded for open access publication: Universidade de Vigo/CISUG
    Agencia Estatal de Investigación | Ref. PID2020-115137RB-I00
    Ministerio de Ciencia, Innovación y Universidades | Ref. FPU19/0198
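
    As a purely illustrative sketch of how a scripted, step-by-step dialogue can be driven from a skill back end, the code below uses the Alexa Skills Kit SDK for Python with a hypothetical "AnswerIntent" and generic script lines; it is not the MoCA implementation described in the paper.

        # Illustrative sketch only: a scripted conversation advanced one step per user
        # answer. The intent name and the script content are hypothetical.
        from ask_sdk_core.skill_builder import SkillBuilder
        from ask_sdk_core.dispatch_components import AbstractRequestHandler
        from ask_sdk_core.utils import is_request_type, is_intent_name

        SCRIPT = ["Welcome. Here is the first scripted question.",
                  "Here is the second scripted question.",
                  "Thank you, the session is complete."]

        class LaunchHandler(AbstractRequestHandler):
            def can_handle(self, handler_input):
                return is_request_type("LaunchRequest")(handler_input)
            def handle(self, handler_input):
                handler_input.attributes_manager.session_attributes["step"] = 0
                return handler_input.response_builder.speak(SCRIPT[0]).ask(SCRIPT[0]).response

        class AnswerHandler(AbstractRequestHandler):
            """Moves the script forward each time the user answers (hypothetical intent)."""
            def can_handle(self, handler_input):
                return is_intent_name("AnswerIntent")(handler_input)
            def handle(self, handler_input):
                attrs = handler_input.attributes_manager.session_attributes
                attrs["step"] = min(attrs.get("step", 0) + 1, len(SCRIPT) - 1)
                prompt = SCRIPT[attrs["step"]]
                builder = handler_input.response_builder.speak(prompt)
                if attrs["step"] == len(SCRIPT) - 1:
                    return builder.set_should_end_session(True).response
                return builder.ask(prompt).response

        sb = SkillBuilder()
        sb.add_request_handler(LaunchHandler())
        sb.add_request_handler(AnswerHandler())
        lambda_handler = sb.lambda_handler()  # deployed as an AWS Lambda entry point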