4,817 research outputs found

    Research trend of epigenetics and depression: adolescents' research needs to strengthen

    Get PDF
    Objective: Despite its high prevalence, the pathogenesis of depression remains unclear. Recent attention has turned to the interplay between depression and epigenetic modifications, yet quantitative bibliometric analyses are lacking. This study aims to visually analyze trends in depression epigenetics research using bibliometric tools, while comprehensively reviewing its epigenetic mechanisms. Methods: Using the Web of Science core dataset, we collected studies related to depression and epigenetics. Employing VOSviewer software, we visualized data on authors, countries, journals, and keywords, and a ranking table highlighted leaders in the field. Results: The analysis encompassed 3,469 depression epigenetics studies published from January 2002 to June 2023. Key findings include: (1) gradual publication growth, peaking in 2021; (2) the United States and its research institutions leading contributions; (3) a need for enhanced international and interdisciplinary collaboration; (4) keyword clustering revealing five main themes (early-life stress, microRNA, genetics, DNA methylation, and histone acetylation) that highlight research hotspots; (5) limited focus on adolescent depression epigenetics, warranting increased attention. Conclusion: Taken together, this study revealed trends and hotspots in depression epigenetics research, underscoring the importance of global collaboration, interdisciplinary fusion, and multi-omics data. It discussed in detail the potential of epigenetic mechanisms in depression diagnosis and treatment, advocating an increased focus on adolescent research in this field. These insights can help researchers shape their investigations into depression's epigenetic mechanisms and antidepressant interventions.
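    The keyword-clustering step described in the Methods is typically built on a keyword co-occurrence network. The following is a minimal sketch of that general idea, assuming author keywords have already been exported per record; the example records, the use of networkx, and the greedy-modularity clustering call are illustrative choices, not the study's actual VOSviewer workflow.

```python
from collections import Counter
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Illustrative input: author keywords per record, as exported from Web of Science.
records = [
    ["depression", "dna methylation", "early-life stress"],
    ["depression", "microrna", "dna methylation"],
    ["depression", "histone acetylation", "early-life stress"],
]

# Count how often each pair of keywords appears in the same record.
pair_counts = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

# Build a weighted co-occurrence graph and detect keyword clusters,
# roughly mirroring what a VOSviewer-style clustering view summarizes.
graph = nx.Graph()
for (a, b), weight in pair_counts.items():
    graph.add_edge(a, b, weight=weight)

for i, community in enumerate(greedy_modularity_communities(graph, weight="weight"), 1):
    print(f"cluster {i}: {sorted(community)}")
```

    Greedy modularity maximization is only one of several community-detection options; VOSviewer applies its own clustering algorithm, so the cluster boundaries reported in the study may differ from what this sketch would produce.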

    A systematic literature review on source code similarity measurement and clone detection: techniques, applications, and challenges

    Full text link
    Measuring and evaluating source code similarity is a fundamental software engineering activity that embraces a broad range of applications, including but not limited to code recommendation and the detection of duplicate code, plagiarism, malware, and code smells. This paper presents a systematic literature review and meta-analysis of code similarity measurement and evaluation techniques to shed light on the existing approaches and their characteristics across different applications. We initially found over 10,000 articles by querying four digital libraries and ended up with 136 primary studies in the field. The studies were classified according to their methodology, programming languages, datasets, tools, and applications. A deeper investigation reveals 80 software tools working with eight different techniques across five application domains. Nearly 49% of the tools work on Java programs and 37% support C and C++, while many other programming languages have no support at all. A noteworthy point was the existence of 12 datasets related to source code similarity measurement and duplicate code, of which only eight are publicly accessible. The main challenges in the field are the lack of reliable datasets, empirical evaluations, hybrid methods, and focus on multi-paradigm languages. Emerging applications of code similarity measurement concentrate on the development phase in addition to maintenance. Comment: 49 pages, 10 figures, 6 tables
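    To make the surveyed family of techniques concrete, the sketch below illustrates one of the simplest token-based similarity measures used in clone detection: Jaccard similarity over lexical token sets. It is a generic example, not a tool or method proposed in the review, and the regular-expression tokenizer is deliberately rough.

```python
import re


def tokens(source: str) -> set[str]:
    """Very rough lexer: identifiers, numbers, and operator characters as a token set."""
    return set(re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source))


def jaccard_similarity(a: str, b: str) -> float:
    """Token-set Jaccard similarity in [0, 1]; 1.0 means identical token sets."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0


snippet_1 = "def add(a, b):\n    return a + b"
snippet_2 = "def sum_two(x, y):\n    return x + y"

# A decision threshold such as 0.7 would be an illustrative choice, not a standard value.
print(jaccard_similarity(snippet_1, snippet_2))
```

    Real clone detectors typically normalize identifiers, work on token sequences or syntax trees rather than sets, and scale the comparison with indexing; this sketch only shows the core similarity computation.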

    Automatic Caption Generation for Aerial Images: A Survey

    Get PDF
    Aerial images have long attracted attention from the research community. Generating a caption that describes the content of an aerial image in a comprehensive way is a less-studied but important task, with applications in agriculture, defence, disaster management, and many other areas. Although various approaches have been developed for natural image caption generation, generating captions for aerial images remains challenging due to their special nature. The use of emerging techniques from the Artificial Intelligence (AI) and Natural Language Processing (NLP) domains has resulted in captions of acceptable quality for aerial images. However, much remains to be done to fully realize the potential of the aerial image caption generation task. This paper presents a detailed survey of the various approaches researchers have followed for aerial image caption generation. The datasets available for experimentation, the criteria used for performance evaluation, and future directions are also discussed.
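    As background for readers unfamiliar with caption generation, a common natural-image approach that has been carried over to aerial imagery is an encoder-decoder network: a CNN encodes the image and a recurrent decoder generates the caption word by word. The sketch below shows that generic architecture in PyTorch; the layer sizes, vocabulary size, and use of ResNet-18 are illustrative assumptions, not details of any surveyed system.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class CaptionModel(nn.Module):
    """Minimal encoder-decoder captioner: CNN image encoder + LSTM text decoder."""

    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        backbone = models.resnet18(weights=None)          # image encoder (torchvision >= 0.13 API)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.encoder = backbone
        self.embed = nn.Embedding(vocab_size, embed_dim)  # word embeddings
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)      # next-word scores

    def forward(self, images: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(images).unsqueeze(1)         # (B, 1, embed_dim)
        words = self.embed(captions)                      # (B, T, embed_dim)
        inputs = torch.cat([feats, words], dim=1)         # image feature acts as the first "token"
        hidden, _ = self.decoder(inputs)
        return self.out(hidden)                           # (B, T+1, vocab_size)


# Illustrative shapes: two 224x224 RGB aerial images and 5-token caption prefixes.
model = CaptionModel(vocab_size=1000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 6, 1000])
```

    More recent systems replace the LSTM with a transformer decoder and add attention over spatial image features, but the overall encode-then-decode structure is the same.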

    A Survey on Event-based News Narrative Extraction

    Full text link
    Narratives are fundamental to our understanding of the world, providing us with a natural structure for knowledge representation over time. Computational narrative extraction is a subfield of artificial intelligence that makes heavy use of information retrieval and natural language processing techniques. Despite its importance, relatively little scholarly work exists that synthesizes previous research and charts future research in the area. This article focuses on extracting news narratives from an event-centric perspective. Extracting narratives from news data has multiple applications in understanding the evolving information landscape. This survey presents an extensive study of research in the area of event-based news narrative extraction: we screened over 900 articles, from which 54 relevant articles were selected. These articles are synthesized and organized by representation model, extraction criteria, and evaluation approach. Based on the reviewed studies, we identify recent trends, open challenges, and potential research lines. Comment: 37 pages, 3 figures, to be published in the journal ACM CSU
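    As a toy illustration of the event-centric perspective, the sketch below groups news articles into candidate events by text similarity and orders the groups by date into a simple timeline. The corpus, the fixed number of clusters, and the use of TF-IDF with k-means are illustrative assumptions and do not correspond to any specific method covered by the survey.

```python
from datetime import date

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative corpus: (publication date, headline) pairs.
articles = [
    (date(2023, 2, 6), "Strong earthquake strikes southern Turkey"),
    (date(2023, 2, 7), "Rescue teams search rubble after Turkey earthquake"),
    (date(2023, 5, 14), "Voters head to the polls in national election"),
    (date(2023, 5, 15), "Election results point to a runoff"),
]

texts = [text for _, text in articles]
vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Group articles into candidate events (k is chosen by hand here for illustration).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Order events by their first publication date to obtain a simple timeline.
events = {}
for (day, text), label in zip(articles, labels):
    events.setdefault(label, []).append((day, text))
for label, items in sorted(events.items(), key=lambda kv: min(d for d, _ in kv[1])):
    print([text for _, text in sorted(items)])
```

    The representation models reviewed in the survey go well beyond flat clusters, for example linking events into graphs or storylines, but grouping and temporally ordering event mentions is the common starting point.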

    Egocentric vision-based passive dietary intake monitoring

    Get PDF
    Egocentric (first-person) perception captures and reveals how people perceive their surroundings. This unique perceptual view enables passive and objective monitoring of human-centric activities and behaviours. Egocentric visual data are captured with wearable cameras. Recent advances in wearable technologies have made wearable cameras lightweight and accurate, with long battery life, making long-term passive monitoring a promising solution for healthcare and human behaviour understanding. In addition, recent progress in deep learning has provided an opportunity to accelerate the development of passive methods for pervasive and accurate monitoring, as well as comprehensive modelling of human-centric behaviours. This thesis investigates and proposes innovative egocentric technologies for passive dietary intake monitoring and human behaviour analysis. Compared to conventional dietary assessment methods in nutritional epidemiology, such as 24-hour dietary recall (24HR) and food frequency questionnaires (FFQs), which rely heavily on subjects' memory to recall dietary intake and on trained dietitians to collect, interpret, and analyse the dietary data, passive dietary intake monitoring can ease this burden and provide a more accurate and objective assessment of dietary intake. Egocentric vision-based passive monitoring uses wearable cameras to continuously record human-centric activities with a close-up view. This passive form of monitoring does not require active participation from the subject, and it records rich spatiotemporal details for fine-grained analysis. Based on egocentric vision and passive dietary intake monitoring, this thesis proposes: 1) a novel network structure called PAR-Net that achieves accurate food recognition by mining discriminative food regions; PAR-Net has been evaluated on food intake images captured by wearable cameras as well as on non-egocentric food images to validate its effectiveness for food recognition; 2) a deep learning-based solution for recognising consumed food items and counting the number of bites taken by the subjects from egocentric videos in an end-to-end manner; 3) in light of privacy concerns around egocentric data, a privacy-preserving solution for passive dietary intake monitoring, which uses image captioning techniques to summarise the image content and subsequently combines image captioning with 3D container reconstruction to report the actual food volume consumed. Furthermore, a novel framework that integrates food recognition, hand tracking, and face recognition has been developed to tackle the challenge of assessing individual dietary intake in food-sharing scenarios with the use of a panoramic camera. Extensive experiments have been conducted. Tested with both laboratory data (captured in London) and field study data (captured in Africa), the proposed solutions have demonstrated the feasibility and accuracy of using egocentric camera technologies with deep learning methods for individual dietary assessment and human behaviour analysis.
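    As a small, self-contained illustration of the kind of post-processing involved in video-based bite counting (a generic sketch, not the end-to-end model developed in the thesis), per-frame bite probabilities produced by a video model can be thresholded and counted as rising edges:

```python
def count_bites(frame_probs, threshold=0.5):
    """Count rising edges in a per-frame bite-probability sequence.

    `frame_probs` would come from a video model scoring each frame;
    the threshold value here is illustrative.
    """
    bites = 0
    active = False
    for p in frame_probs:
        if p >= threshold and not active:   # a new bite event starts
            bites += 1
            active = True
        elif p < threshold:
            active = False
    return bites


# Toy probability trace: two separated bursts above the threshold -> 2 bites.
print(count_bites([0.1, 0.2, 0.8, 0.9, 0.3, 0.1, 0.7, 0.8, 0.2]))  # 2
```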

    A Critical Evaluation of Evaluations for Long-form Question Answering

    Full text link
    Long-form question answering (LFQA) enables answering a wide range of questions, but its flexibility poses enormous challenges for evaluation. We perform the first targeted study of the evaluation of long-form answers, covering both human and automatic evaluation practices. We hire domain experts in seven areas to provide preference judgments over pairs of answers, along with free-form justifications for their choices. We present a careful analysis of the experts' evaluation, which focuses on new aspects such as the comprehensiveness of the answer. Next, we examine automatic text generation metrics, finding that no existing metric is predictive of human preference judgments. However, some metrics correlate with fine-grained aspects of answers (e.g., coherence). We encourage future work to move away from a single "overall score" for an answer and to adopt a multi-faceted evaluation targeting aspects such as factuality and completeness. We publicly release all of our annotations and code to spur future work on LFQA evaluation. Comment: ACL 2023 Camera Ready, Code available at https://github.com/carriex/lfqa_eva
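    One concrete way to test whether an automatic metric is "predictive of human preference judgments" is pairwise accuracy: how often the metric scores the human-preferred answer of a pair higher. The sketch below shows that computation on invented numbers; it is not the paper's released evaluation code, and the tie-handling convention is an assumption.

```python
def pairwise_accuracy(metric_scores, human_prefs):
    """Fraction of answer pairs where the metric agrees with the human preference.

    metric_scores: list of (score_a, score_b) from an automatic metric.
    human_prefs:   list of 'a' or 'b', the expert's preferred answer per pair.
    Ties are counted as disagreements here (one possible convention).
    """
    agree = 0
    for (score_a, score_b), preferred in zip(metric_scores, human_prefs):
        metric_pick = "a" if score_a > score_b else "b" if score_b > score_a else None
        agree += metric_pick == preferred
    return agree / len(human_prefs)


# Toy example: the metric agrees with the expert on 2 of 3 pairs.
scores = [(0.71, 0.64), (0.32, 0.55), (0.48, 0.49)]
prefs = ["a", "a", "b"]
print(pairwise_accuracy(scores, prefs))  # ~0.667
```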

    Computational sarcasm detection and understanding in online communication

    Get PDF
    The presence of sarcasm in online communication has motivated an increasing number of computational investigations of sarcasm across the scientific community. In this thesis, we build upon these investigations. Pointing out their limitations, we make four contributions that span two research directions: sarcasm detection and sarcasm understanding. Sarcasm detection is the task of building computational models optimised for recognising sarcasm in a given text. These models are often built in a supervised learning paradigm, relying on datasets of texts labelled for sarcasm. We make two contributions in this direction. First, we question the effectiveness of previous methods used to label texts for sarcasm, arguing that the labels they produce might not coincide with the sarcastic intention of the authors of the texts being labelled. In response, we suggest a new method and use it to build iSarcasm, a novel dataset of sarcastic and non-sarcastic tweets. We show that previous models achieve considerably lower performance on iSarcasm than on previous datasets, while human annotators achieve considerably higher performance than the models, pointing to the need for more effective models. Therefore, as a second contribution, we organise a competition that invites the community to create such models. Sarcasm understanding is the task of explicating the phenomena subsumed under the umbrella of sarcasm through computational investigation. We make two contributions in this direction. First, we conduct an analysis of the socio-demographic ecology of sarcastic exchanges between human interlocutors. We find that the effectiveness of such exchanges is influenced by the socio-demographic similarity between the interlocutors, with factors such as English language nativeness, age, and gender being particularly influential. We suggest that future social analysis tools should account for these factors. Second, we challenge the motivation of a recent endeavour of the community, namely that of augmenting dialogue systems with the ability to generate sarcastic responses. Through a series of social experiments, we provide guidelines for dialogue systems concerning the appropriateness of generating sarcastic responses and the formulation of such responses. Through our work, we aim to encourage the community to consider computational investigations of sarcasm interdisciplinarily, at the intersection of natural language processing and computational social science.
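    For readers unfamiliar with the supervised setup described above, the sketch below shows a minimal lexical baseline that could be trained on tweets labelled for sarcasm. The example tweets are invented, and the TF-IDF plus logistic regression pipeline is a generic baseline, not the iSarcasm reference models or the shared-task systems.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data; a real setup would load author-labelled tweets such as iSarcasm.
tweets = [
    "oh great, another monday meeting at 7am, living the dream",
    "had a lovely walk in the park this morning",
    "wow, my train is delayed again, what a surprise",
    "really enjoyed the new documentary last night",
]
labels = [1, 0, 1, 0]  # 1 = sarcastic, 0 = non-sarcastic

# Word and word-pair n-grams give a simple lexical baseline for sarcasm cues.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)
print(model.predict(["sure, because that worked so well last time"]))
```

    The thesis's point is precisely that such surface-level models fare much worse on intended sarcasm (as in iSarcasm) than on distantly labelled data, which is what motivates the shared task.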

    Current issues of the management of socio-economic systems in terms of globalization challenges

    Get PDF
    The authors of this scientific monograph conclude that the management of socio-economic systems in the context of global challenges requires mechanisms to ensure security, optimise the use of resource potential, increase competitiveness, and provide state support to economic entities. The core research focuses on the assessment of economic entities under global challenges, analysis of the financial system, migration flows, logistics and product exports, and territorial development. The research results have been implemented in various decision-making models addressing global challenges, strategic planning, financial and food security, education management, information technology, and innovation. The results of the study can be used in developing directions, programmes, and strategies for the sustainable development of economic entities and regions, in increasing the competitiveness of products and services, and in decision-making at the level of ministries and agencies that regulate the management of socio-economic systems. The results can also be used by students and young scientists in the educational process and in conducting scientific research on the management of socio-economic systems under global challenges.

    Hybrid human-AI driven open personalized education

    Get PDF
    Attaining the skills that match labor market demand is getting increasingly complicated, as the prerequisite knowledge, skills, and abilities evolve dynamically through an uncontrollable and seemingly unpredictable process. Furthermore, people's interest in gaining knowledge pertaining to their personal lives (e.g., hobbies and life-hacks) has also increased dramatically in recent decades. In this situation, anticipating and addressing learning needs is a fundamental challenge for twenty-first-century education. The need for such technologies has escalated due to the COVID-19 pandemic, during which online education became a key player in all types of training programs. The burgeoning availability of data, not only on the demand side but also on the supply side (in the form of open/free educational resources), coupled with smart technologies, may provide fertile ground for addressing this challenge. Therefore, this thesis aims to contribute to the literature on the utilization of open and free online educational resources toward goal-driven personalized informal learning by developing a novel human-AI based system called eDoer. In this thesis, we discuss all the new knowledge that was created in order to complete the system development, which includes 1) prototype development and qualitative user validation, 2) decomposing the preliminary requirements into meaningful components, 3) implementation and validation of each component, and 4) a final requirement analysis followed by combining the implemented components in order to develop and validate the planned system (eDoer). All in all, our proposed system 1) derives the skill requirements for a wide range of occupations (as skills and jobs are typical goals in informal learning) through an analysis of online job vacancy announcements, 2) decomposes skills into learning topics, 3) collects a variety of open/free online educational resources that address those topics, 4) checks the quality of those resources and their topic relevance using our intelligent prediction models, 5) helps learners set their learning goals, 6) recommends personalized learning pathways and learning content based on individual learning goals, and 7) provides assessment services for learners to monitor their progress towards their desired learning objectives. Accordingly, we created a learning dashboard focusing on three Data Science-related jobs and conducted an initial validation of eDoer through a randomized experiment. Controlling for the effect of prior knowledge as assessed by the pretest, the randomized experiment provided tentative support for the hypothesis that learners who engaged with personalized eDoer recommendations attain higher scores on the posttest than those who did not. The hypothesis that learners who received content personalized in terms of format, length, level of detail, and content type would achieve higher scores than those receiving non-personalized content was not supported by a statistically significant result.
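    The final sentences describe an analysis that compares posttest scores between groups while controlling for pretest scores. A standard way to express this is an ANCOVA-style regression of posttest on group with pretest as a covariate; the sketch below shows that analysis pattern on invented data, with column names and scores that are illustrative only, not the experiment's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented scores for illustration only: 'treated' = saw personalized eDoer
# recommendations, 'control' = did not.
df = pd.DataFrame({
    "group":    ["treated"] * 5 + ["control"] * 5,
    "pretest":  [40, 55, 62, 48, 70, 42, 58, 60, 50, 68],
    "posttest": [65, 75, 80, 70, 88, 55, 66, 70, 60, 78],
})

# ANCOVA-style model: posttest explained by group, adjusting for pretest.
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(model.params)    # C(group)[T.treated] is the pretest-adjusted group effect
print(model.pvalues)
```

    In this framing, "tentative support" corresponds to an adjusted group coefficient in the hypothesized direction, while "not supported by a statistically significant result" corresponds to a coefficient whose p-value does not fall below the chosen significance level.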