
    The evolving SARS-CoV-2 epidemic in Africa: Insights from rapidly expanding genomic surveillance

    INTRODUCTION
    Investment in Africa over the past year with regard to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) sequencing has led to a massive increase in the number of sequences, which, to date, exceeds 100,000 sequences generated to track the pandemic on the continent. These sequences have profoundly affected how public health officials in Africa have navigated the COVID-19 pandemic.

    RATIONALE
    We demonstrate how the first 100,000 SARS-CoV-2 sequences from Africa have helped monitor the epidemic on the continent, how genomic surveillance expanded over the course of the pandemic, and how we adapted our sequencing methods to deal with an evolving virus. Finally, we also examine how viral lineages have spread across the continent in a phylogeographic framework to gain insights into the underlying temporal and spatial transmission dynamics for several variants of concern (VOCs).

    RESULTS
    Our results indicate that the number of countries in Africa that can sequence the virus within their own borders is growing and that this is coupled with a shorter turnaround time from the time of sampling to sequence submission. Ongoing evolution necessitated the continual updating of primer sets, and, as a result, eight primer sets were designed in tandem with viral evolution and used to ensure effective sequencing of the virus. The pandemic unfolded through multiple waves of infection that were each driven by distinct genetic lineages, with B.1-like ancestral strains associated with the first pandemic wave of infections in 2020. Successive waves on the continent were fueled by different VOCs, with Alpha and Beta cocirculating in distinct spatial patterns during the second wave and Delta and Omicron affecting the whole continent during the third and fourth waves, respectively. Phylogeographic reconstruction points toward distinct differences in viral importation and exportation patterns associated with the Alpha, Beta, Delta, and Omicron variants and subvariants, when considering both Africa versus the rest of the world and viral dissemination within the continent. Our epidemiological and phylogenetic inferences therefore underscore the heterogeneous nature of the pandemic on the continent and highlight key insights and challenges, for instance, recognizing the limitations of low testing proportions. We also highlight the early warning capacity that genomic surveillance in Africa has had for the rest of the world with the detection of new lineages and variants, the most recent being the characterization of various Omicron subvariants.

    CONCLUSION
    Sustained investment for diagnostics and genomic surveillance in Africa is needed as the virus continues to evolve. This is important not only to help combat SARS-CoV-2 on the continent but also because it can be used as a platform to help address the many emerging and reemerging infectious disease threats in Africa. In particular, capacity building for local sequencing within countries or within the continent should be prioritized because this is generally associated with shorter turnaround times, providing the most benefit to local public health authorities tasked with pandemic response and mitigation and allowing for the fastest reaction to localized outbreaks. These investments are crucial for pandemic preparedness and response and will serve the health of the continent well into the 21st century.

    What have you read? based Multi-Document Summarization

    Due to the tremendous amount of data available today, extracting essential information from such a large volume of data is quite difficult. This is particularly true for text documents, which require a significant amount of time from the user to read the material and extract useful information. The major problems are identifying the documents relevant to the user, extracting the most significant pieces of information, determining document relevancy, excluding extraneous information, reducing detail, and generating a compact, consistent report. To address these issues, we propose a novel technique that extracts important information from a large amount of text data and uses previously read documents to generate summaries of new documents. Our technique focuses on extracting topics (also known as topic signatures) from the previously read documents and then selecting the sentences that are most relevant to these topics to generate an update summary. In addition, the concept of overlapping value is used to identify meaningful words and word similarities. Our work also uses the Dice Coefficient, which measures the intersection of words between document sets and helps to eliminate redundancy. The generated summary is based on more diverse and highly representative sentences of average length. Empirically, we observed that our proposed technique outperformed baseline competitors on the real-world TAC2008 dataset.
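
    The topic-signature selection and Dice Coefficient mentioned in the abstract can be illustrated with a short sketch. The snippet below is not the paper's implementation; the function names, the frequency-based topic extraction, and the redundancy threshold are assumptions for illustration only.

        # Illustrative sketch (not the paper's exact algorithm): score sentences of a
        # new document against topic words mined from previously read documents, and
        # use the Dice coefficient to filter near-duplicate sentences.
        from collections import Counter

        def topic_signature(read_docs, top_k=20):
            """Most frequent words across previously read documents (toy topic signature)."""
            counts = Counter(w.lower() for doc in read_docs for w in doc.split())
            return {w for w, _ in counts.most_common(top_k)}

        def dice(a, b):
            """Dice coefficient between two word sets: 2|A∩B| / (|A| + |B|)."""
            a, b = set(a), set(b)
            return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

        def update_summary(new_doc_sentences, read_docs, max_sents=3, redundancy=0.5):
            topics = topic_signature(read_docs)
            # Rank sentences of the new document by overlap with the topic signature.
            ranked = sorted(new_doc_sentences,
                            key=lambda s: len(set(s.lower().split()) & topics),
                            reverse=True)
            summary = []
            for sent in ranked:
                words = sent.lower().split()
                # Reject sentences too similar (by Dice) to ones already selected.
                if all(dice(words, s.lower().split()) < redundancy for s in summary):
                    summary.append(sent)
                if len(summary) == max_sents:
                    break
            return summary

    Here, redundancy is controlled by rejecting any candidate whose Dice similarity to an already selected sentence exceeds a threshold, mirroring the redundancy-elimination role the abstract assigns to the Dice Coefficient.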

    Grapharizer: A Graph-Based Technique for Extractive Multi-Document Summarization

    In the age of big data, the amount of data on the Internet grows continuously, and it becomes frustrating for users to locate the data they need. Text summarization emerges as a solution to this problem: it presents users with the gist of the provided documents. However, summarizer systems face challenges such as poor grammaticality, missing important information, and redundancy, particularly in multi-document summarization (MDS). This study develops a graph-based extractive generic MDS technique, named Grapharizer (GRAPH-based summARIZER), focused on resolving these challenges. Grapharizer addresses the grammaticality problems of the summary using lemmatization during pre-processing. Furthermore, synonym mapping, multi-word expression mapping, and anaphora and cataphora resolution contribute positively to improving the grammaticality of the generated summary. Challenges such as redundancy and proper coverage of all topics are dealt with to achieve informativeness and representativeness. Grapharizer is a novel approach that can also be used in combination with different machine learning models. The system was tested on the DUC 2004 and Recent News Article datasets against various state-of-the-art techniques. Using Grapharizer with machine learning increased accuracy by up to 23.05% on ROUGE scores compared with different baseline techniques. Expert evaluation of the proposed system indicated an accuracy of more than 55%.
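
    As a rough illustration of the general graph-based extractive approach (not Grapharizer's actual pipeline, which additionally performs lemmatization, synonym and multi-word expression mapping, and anaphora/cataphora resolution), a minimal sentence-graph ranker might look like the following sketch; the word-overlap similarity and degree-centrality scoring are assumptions for illustration.

        # Minimal sketch of a generic graph-based extractive step: sentences are nodes,
        # edges carry word-overlap similarity, and the most central sentences are kept.
        import itertools

        def similarity(s1, s2):
            """Word-overlap (Jaccard) similarity between two sentences."""
            a, b = set(s1.lower().split()), set(s2.lower().split())
            return len(a & b) / len(a | b) if (a | b) else 0.0

        def graph_summarize(sentences, num_sentences=3):
            # Build a weighted, fully connected sentence graph and score each node
            # by the sum of its edge weights (degree centrality).
            scores = {i: 0.0 for i in range(len(sentences))}
            for i, j in itertools.combinations(range(len(sentences)), 2):
                w = similarity(sentences[i], sentences[j])
                scores[i] += w
                scores[j] += w
            top = sorted(scores, key=scores.get, reverse=True)[:num_sentences]
            return [sentences[i] for i in sorted(top)]  # Preserve original order.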

    AI-Based Wormhole Attack Detection Techniques in Wireless Sensor Networks

    The popularity of wireless sensor networks for establishing different communication systems is increasing daily. A wireless sensor network consists of sensor nodes that are prone to various security threats, making the network vulnerable to denial-of-service attacks. One of these is the wormhole attack, which uses a low-latency link between two malicious sensor nodes and affects the routing paths of the entire network. This attack is particularly severe because it is resistant to many cryptographic schemes and hard to observe within the network. This paper provides a comprehensive review of the literature on the detection and mitigation of wormhole attacks in wireless sensor networks. Existing surveys are also explored to find gaps in the literature. Several existing schemes based on different methods are evaluated critically in terms of throughput, detection rate, energy consumption, packet delivery ratio, and end-to-end delay. As artificial intelligence and machine learning have massive potential for the efficient management of sensor networks, this paper presents AI- and ML-based schemes as optimal solutions for the identified state-of-the-art problems in wormhole attack detection. To the best of the authors' knowledge, this is the first in-depth review of AI- and ML-based techniques for wormhole attack detection in wireless sensor networks. Finally, we explore the open research challenges for detecting and mitigating wormhole attacks in wireless networks.
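
    Since the paper is a survey rather than a single algorithm, the following sketch only illustrates the common AI/ML detection pattern it reviews: training a classifier on per-route features to flag wormhole-like links. The feature set (hop count, end-to-end delay, neighbor count) and the toy data are assumptions for illustration, not values from the paper.

        # Illustrative only: flag suspiciously short, low-latency routes as wormhole-like
        # using a standard classifier over assumed per-route features.
        from sklearn.ensemble import RandomForestClassifier

        # Each row: [hop_count, end_to_end_delay_ms, neighbor_count]; label 1 = wormhole.
        train_X = [
            [6, 42.0, 5], [7, 48.5, 6], [5, 39.0, 4],      # normal routes
            [2, 12.0, 14], [3, 15.5, 12], [2, 10.0, 15],   # wormhole-shortened routes
        ]
        train_y = [0, 0, 0, 1, 1, 1]

        model = RandomForestClassifier(n_estimators=50, random_state=0)
        model.fit(train_X, train_y)

        # A short, low-latency route with many neighbors is expected to be flagged.
        print(model.predict([[2, 11.0, 13]]))  # expected: [1]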