89 research outputs found

    A Marketplace for Efficient and Secure Caching for IoT Applications in 5G Networks

    As the communication industry progresses toward the fifth generation (5G) of cellular networks, the traffic it carries is also shifting from high-data-rate traffic from cellular users to a mixture of high- and low-data-rate traffic from Internet of Things (IoT) applications. Moreover, the need to access Internet data efficiently is also increasing across 5G networks. Caching content at the network edge is considered a promising approach to reducing delivery time. In this paper, we propose a marketplace that provides a number of caching options for a broad range of applications. In addition, we propose a security scheme that secures cached content while simultaneously reducing duplicate content on the caching server by dividing each file into smaller chunks. We model different caching scenarios in NS-3 and present a performance evaluation of our proposal in terms of latency and throughput gains for various chunk sizes.
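The chunk-based deduplication idea in the abstract can be sketched as follows: split each file into fixed-size chunks, index chunks by a cryptographic hash, and store each unique chunk only once. This is a minimal illustration of the technique, not the paper's scheme; the function names and the chunk size are assumptions.

```python
import hashlib

CHUNK_SIZE = 4096  # hypothetical chunk size; the paper evaluates several


def store_file(data: bytes, store: dict) -> list:
    """Split a file into fixed-size chunks and cache only unique ones.

    Returns the file's "recipe": the ordered list of chunk digests,
    from which the file can later be reassembled.
    """
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        # Duplicate chunks (same digest) occupy only one cache entry.
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return recipe


def load_file(recipe: list, store: dict) -> bytes:
    """Reassemble a file from its recipe of chunk digests."""
    return b"".join(store[d] for d in recipe)
```

With two identical 4 KiB chunks in a file, the cache holds one copy of that chunk, which is the duplicate-reduction effect the abstract describes.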

    Daily weather direct readout microprocessor study

    The work completed included a study of the requirements, and of hardware and software implementation techniques, for NIMBUS ESMR and TWERLE direct readout applications using microprocessors. Many microprocessors were studied for this application. Because of the available Interdata development capabilities, it was concluded that future implementations should be on an Interdata microprocessor, which was found adequate for the task.

    SqORAM: Read-Optimized Sequential Write-Only Oblivious RAM

    Oblivious RAM protocols (ORAMs) allow a client to access data on an untrusted storage device without revealing its access patterns. Typically, the ORAM adversary can observe both read and write accesses. Write-only ORAMs target a more practical, multi-snapshot adversary that monitors only client writes, as is typical for plausible deniability and censorship-resilient systems. This allows write-only ORAMs to achieve significantly better asymptotic performance. However, these apparent gains do not materialize in real deployments, primarily because of the random data placement strategies used to break correlations between the logical and physical namespaces, a property required for write access privacy. Random access performs poorly on both rotational disks and SSDs (often increasing wear significantly and interfering with wear-leveling mechanisms). In this work, we introduce SqORAM, a new locality-preserving write-only ORAM that preserves write access privacy without requiring random data access. Data blocks close to each other in the logical domain land in close proximity on the physical media. Importantly, SqORAM maintains this data locality property over time, significantly increasing read throughput. A full Linux kernel-level implementation of SqORAM is 100x faster than non-locality-preserving solutions for standard workloads and is 60-100% faster than the state of the art for typical file system workloads.
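For context, here is a minimal sketch of the baseline random-placement write-only ORAM idea that SqORAM improves upon (this is not the SqORAM construction itself): each logical write touches k randomly chosen physical blocks, so a snapshot adversary cannot tell which one received real data. The class, parameters, and position-map handling are illustrative assumptions; a real system would also encrypt every block.

```python
import os
import random


class ToyWriteOnlyORAM:
    """Toy random-placement write-only ORAM (illustration only).

    Each logical write touches k random physical blocks; only one
    secretly receives the real data, the rest get fresh random bytes
    (standing in for re-encrypted dummies). A private position map
    records where each logical block currently lives. The random
    placement is exactly what destroys locality on disk, which is the
    problem SqORAM's sequential layout addresses.
    """

    def __init__(self, n_physical=64, block_size=16, k=4):
        self.block_size = block_size
        self.k = k
        # Physical medium starts as random bytes, indistinguishable
        # from ciphertext in a snapshot.
        self.disk = [os.urandom(block_size) for _ in range(n_physical)]
        self.free = set(range(n_physical))
        self.pos = {}  # private position map: logical id -> physical index

    def write(self, logical_id, data):
        assert len(data) == self.block_size
        # The adversary observes writes to all k targets.
        targets = random.sample(sorted(self.free), self.k)
        real = random.choice(targets)
        for t in targets:
            self.disk[t] = data if t == real else os.urandom(self.block_size)
        old = self.pos.get(logical_id)
        if old is not None:
            self.free.add(old)  # previous copy becomes reusable garbage
        self.pos[logical_id] = real
        self.free.discard(real)

    def read(self, logical_id):
        # Reads are unobserved under the write-only adversary model,
        # so the client may read the mapped block directly.
        return self.disk[self.pos[logical_id]]
```

Because `real` is drawn uniformly from random free blocks, logically adjacent writes scatter across the physical medium; preserving privacy while keeping them physically adjacent is the property the abstract claims for SqORAM.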

    Cleaning Web pages for effective Web content mining.

    Web pages usually contain many noisy blocks, such as advertisements, navigation bars, and copyright notices. These noisy blocks can seriously affect web content mining because their contents are irrelevant to the main content of the page. Eliminating noisy blocks before performing web content mining is therefore very important for improving mining accuracy and efficiency. A few existing approaches detect noisy blocks with exactly the same contents but are weak at detecting near-duplicate blocks, such as navigation bars. This thesis proposes a new system, WebPageCleaner, which, given a collection of web pages from a web site, eliminates noisy blocks from those pages so as to improve the accuracy and efficiency of web content mining. WebPageCleaner detects both noisy blocks with exactly the same contents and those with near-duplicate contents. It is based on the observation that noisy blocks usually share common contents and appear frequently across a given web site. WebPageCleaner consists of three modules: block extraction, block importance retrieval, and cleaned-file generation. A vision-based technique is employed to extract blocks from web pages. Blocks are assigned an importance degree according to features such as block position and the level of similarity of block contents to each other. A collection of cleaned files with high importance degrees is finally generated and used for web content mining. The proposed technique is evaluated using Naive Bayes text classification. Experiments show that WebPageCleaner leads to more efficient and accurate web page classification than existing approaches.
    Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .L5. Source: Masters Abstracts International, Volume: 45-01, page: 0359. Thesis (M.Sc.)--University of Windsor (Canada), 2006.
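The near-duplicate block detection described above can be approximated with word shingles and Jaccard similarity: a block that recurs near-identically on many pages of a site is likely noise. The helper names and both thresholds below are hypothetical, not the thesis's actual parameters, and this sketch omits the vision-based extraction and block-position features.

```python
def shingles(text, k=3):
    """k-word shingles of a text block (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}


def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 1.0


def noisy_blocks(pages, sim_threshold=0.8, freq_threshold=0.5):
    """Flag blocks that recur (near-)identically on many pages of a site.

    `pages` is a list of pages, each a list of block strings. A block is
    considered noisy if a near-duplicate of it appears on more than
    `freq_threshold` of the pages. Thresholds are illustrative.
    """
    all_blocks = [(p, b, shingles(b))
                  for p, page in enumerate(pages) for b in page]
    noisy = set()
    for p, b, s in all_blocks:
        # Pages containing a near-duplicate of this block.
        hit_pages = {q for q, _b, t in all_blocks
                     if jaccard(s, t) >= sim_threshold}
        if len(hit_pages) / len(pages) > freq_threshold:
            noisy.add(b)
    return noisy
```

On a site where the same navigation bar appears on every page, that bar is flagged while page-specific article text is kept.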

    Is YouTube a quality source of information on sarcopenia?

    Background: While sarcopenia is a prevalent disorder that affects muscle mass and quality, patients have limited knowledge of it. On the other hand, patients often use social media to obtain health-specific information. Therefore, the aim of this study was to investigate YouTube videos about sarcopenia in terms of the knowledge they present and to identify which of them can be considered quality sources of such information. Methods: The descriptive study included 53 videos retrieved by searching the keywords ‘sarcopenia’, ‘loss of muscle strength’, ‘sarcopenia treatment’, ‘sarcopenia physiotherapy’, and ‘sarcopenia rehabilitation’ on YouTube. The instructive characteristics of the videos were assessed with the Global Quality Scale, by which three quality groups were identified: poor-, moderate-, and high-quality videos. The DISCERN score was used to determine reliability. The sources of upload were identified as physicians, non-physician health personnel, health-related websites, universities and academic organizations, patients, and independent users. Finally, the lengths of the videos, the numbers of views, likes, dislikes, and comments, and the DISCERN scores were compared across groups. Results: There were 18 poor-quality, 16 moderate-quality, and 19 high-quality videos. Considering the sources of upload, physicians had the highest share of the high-quality group (83.3%). The lengths of the videos and the DISCERN scores showed significant differences between groups (p < 0.01). The numbers of views, likes, dislikes, and comments were similar across both quality and source groups. Conclusion: Most of the videos uploaded by physicians and academic organizations fell into the high-quality group. Overall, the results suggest that high quality may be related to reliability. Furthermore, healthcare professionals and academics should consider using YouTube to increase knowledge and raise awareness of sarcopenia among patients. © 2020, European Geriatric Medicine Society
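The grouping step in the Methods can be sketched as mapping each video's Global Quality Scale score to a quality group and summarizing DISCERN scores per group. The GQS cut-offs (1-2 poor, 3 moderate, 4-5 high) are a common convention but an assumption here, and the data are made up; this is not the study's analysis.

```python
from statistics import mean


def quality_group(gqs):
    """Map a Global Quality Scale score (1-5) to a quality group.

    Cut-offs are an illustrative convention, not necessarily the
    exact ones used in the study.
    """
    if gqs <= 2:
        return "poor"
    if gqs == 3:
        return "moderate"
    return "high"


def summarize(videos):
    """Per-group mean DISCERN score.

    `videos` is a list of dicts with keys "gqs" and "discern".
    """
    groups = {}
    for v in videos:
        groups.setdefault(quality_group(v["gqs"]), []).append(v["discern"])
    return {g: round(mean(xs), 2) for g, xs in groups.items()}
```

A full replication would additionally compare video length, views, likes, dislikes, and comments across both quality and uploader-source groups, as the abstract describes.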

    Dissatisfaction Theory

    I propose a new theory of semantic presupposition, which I call dissatisfaction theory. I first briefly review a cluster of problems, known collectively as the proviso problem, for most extant theories of presupposition, arguing that the main pragmatic response to them faces a serious challenge. I avoid these problems by adopting two changes in perspective on presupposition. First, I propose a theory of projection according to which presuppositions project unless they are locally entailed. Second, I reject the standard assumption that presuppositions are contents which must be entailed by the input context; instead, I propose that presuppositions are contents which are marked as backgrounded. I show that, together, these commitments allow us to avoid the proviso problem altogether and generally make plausible predictions about presupposition projection out of connectives and attitude predicates. I close by sketching a two-dimensional implementation of my theory which makes further welcome predictions about attitude predicates and quantifiers.

    Identifying reputation collectors in community question answering (CQA) sites: Exploring the dark side of social media

    This research aims to identify users who post, as well as encourage others to post, low-quality and duplicate content on community question answering sites. The good guys, called Caretakers, and the bad guys, called Reputation Collectors, are characterised by their behaviour, answering patterns, and reputation points. The proposed system is developed and analysed over the publicly available Stack Exchange data dump. A graph-based methodology is employed to derive the characteristics of Reputation Collectors and Caretakers. Results reveal that Reputation Collectors are the primary sources of low-quality answers as well as of answers to duplicate questions posted on the site. Caretakers answer a limited number of questions of a challenging nature and earn maximum reputation from those questions, whereas Reputation Collectors answer many low-quality and duplicate questions to accumulate reputation points. We have developed algorithms to identify the Caretakers and Reputation Collectors of a site. Our analysis finds that 1.05% of Reputation Collectors post 18.88% of low-quality answers. This study extends previous research by identifying Reputation Collectors and how they collect their reputation points.
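The user classification described above might be approximated by the share of a user's answers that are low-quality or target duplicate questions. The thresholds, field names, and labels below are hypothetical illustrations; the paper's actual method is graph-based and also uses reputation points.

```python
def classify_users(answers, low_ratio=0.5, min_answers=5):
    """Label users as 'reputation_collector' or 'caretaker'.

    A user is labelled a Reputation Collector when at least `low_ratio`
    of their answers are low-quality or answer duplicate questions.
    Users with fewer than `min_answers` answers are left unlabelled.

    `answers`: list of dicts with keys "user", "low_quality" (bool),
    and "duplicate" (bool). Thresholds are illustrative, not the
    paper's calibrated values.
    """
    per_user = {}
    for a in answers:
        total, bad = per_user.get(a["user"], (0, 0))
        bad += a["low_quality"] or a["duplicate"]
        per_user[a["user"]] = (total + 1, bad)
    labels = {}
    for user, (total, bad) in per_user.items():
        if total < min_answers:
            continue  # too little activity to classify reliably
        labels[user] = ("reputation_collector"
                        if bad / total >= low_ratio else "caretaker")
    return labels
```

A user with mostly clean answers is labelled a Caretaker, while one whose answers are dominated by low-quality or duplicate-question posts is flagged as a Reputation Collector.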