    A study of students’ information searching strategies

    Concerns have been expressed about students’ ability to search for information using electronic search engines and databases. This research adopted a structured method combining questionnaire surveys, an observational study and a ‘sense-making’ interview to assess the information-searching skills of a group of 14 students undertaking their final-year dissertation studies on undergraduate programmes within the Department of Civil and Building Engineering at Loughborough University. The findings reveal that the participants encountered problems with each type of search engine used (Google, Metalib, the Library OPAC system, and individual databases) and lacked knowledge of advanced search strategies. All the participants formulated queries using simple words or free text, and there was no evidence of structured word searching using systematically selected keywords. The results indicate priority areas for additional tuition in information literacy.
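
    As an illustration of the structured keyword searching the participants did not attempt, the sketch below shows how a query might be built from systematically selected keywords combined with Boolean operators; the topic and synonym sets are hypothetical, not drawn from the study.

```python
# Illustrative only: building a structured Boolean query from
# systematically selected keywords, in contrast to free-text input.
# The concept groups below are hypothetical examples.
concepts = [
    ["concrete", "cement"],             # material
    ["durability", "service life"],     # property of interest
    ["marine", "offshore", "coastal"],  # exposure environment
]

def build_query(groups):
    """OR together the synonyms within each concept, AND the concepts."""
    parts = []
    for synonyms in groups:
        quoted = ['"%s"' % s if " " in s else s for s in synonyms]
        parts.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(parts)

print(build_query(concepts))
# (concrete OR cement) AND (durability OR "service life")
# AND (marine OR offshore OR coastal)
```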

    Identity And Influence Control For Cipher Schema For Defining Strings

    A hierarchical clustering technique has been proposed to augment search semantics and to satisfy the demand for fast searching of encrypted text in a big data environment. Search efficiency and security are also assessed under two common threat models. One challenge is that the relationships between documents are usually hidden by file encryption, which can greatly degrade search accuracy. In addition, the volume of data in data centres has grown dramatically, making it more difficult to design ciphertext search schemes that provide efficient and reliable online information retrieval over large amounts of encrypted data; the experimental platform must therefore evaluate the efficiency, accuracy, and security of the search classification. The experimental results show that the proposed architecture not only correctly solves the multi-keyword ranked search problem, but also makes a noticeable difference to search efficiency, rank security, and the relevance between retrieved documents. During search, the method achieves linear complexity in the face of exponential growth in the size of the document set. Without an adequate sorting mechanism, users must spend time working out what they need when many retrieved documents contain the query keyword, so conservation techniques are used to perform the sorting. To allow users to verify search-engine results, a structure known as the minimum sub-hash tree is introduced. Moreover, the proposed method has an advantage over the standard method with respect to the privacy and relevance of the retrieved documents.
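
    The scheme itself is not specified in enough detail here to reproduce, but the core idea of a hierarchical clustering index — comparing a query against cluster centroids first so that only the best-matching cluster is ranked — can be sketched in plaintext. The k-means construction, vector representation and parameters below are assumptions for illustration; the encryption layer and the minimum sub-hash tree verification are omitted.

```python
# A minimal plaintext sketch of a cluster-based search index: documents
# are grouped into clusters, a query is compared against the cluster
# centroids first, and only the best-matching cluster is scanned, which
# is how such schemes avoid ranking the whole document set per query.
import numpy as np

def build_index(doc_vectors, n_clusters=4, n_iter=20, seed=0):
    """Naive k-means producing (centroids, per-cluster member indices)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(doc_vectors), n_clusters, replace=False)
    centroids = doc_vectors[idx].astype(float)
    for _ in range(n_iter):
        # Assign each document to its nearest centroid.
        labels = np.argmin(
            np.linalg.norm(doc_vectors[:, None] - centroids[None], axis=2),
            axis=1)
        for k in range(n_clusters):
            members = doc_vectors[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    clusters = [np.flatnonzero(labels == k) for k in range(n_clusters)]
    return centroids, clusters

def search(query_vec, doc_vectors, centroids, clusters, top_k=3):
    """Descend into the best cluster, then rank only its members."""
    best = np.argmax(centroids @ query_vec)        # most relevant cluster
    members = clusters[best]
    scores = doc_vectors[members] @ query_vec      # inner-product relevance
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(members[i]), float(scores[i])) for i in order]

# Toy demo with random keyword-count vectors and a two-keyword query.
docs = np.random.default_rng(0).integers(0, 5, size=(40, 10)).astype(float)
centroids, clusters = build_index(docs)
query = np.zeros(10)
query[[2, 7]] = 1.0
print(search(query, docs, centroids, clusters))
```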

    The quantitative measure and statistical distribution of fame

    Fame and celebrity play an ever-increasing role in our culture. However, despite the cultural and economic importance of fame and its gradations, there exists no consensus method for quantifying the fame of an individual, or for comparing that of two individuals. We argue that, even if fame is difficult to measure with precision, one may develop useful metrics for fame that correlate well with intuition and remain reasonably stable over time. Using datasets of recently deceased individuals who were highly renowned, we have evaluated several internet-based methods for quantifying fame. We find that some widely used internet-derived metrics, such as search engine results, correlate poorly with human-subject judgments of fame. However, other metrics exist that agree well with human judgments and appear to offer workable, easily accessible measures of fame. Using such a metric, we perform a preliminary investigation of the statistical distribution of fame, which has some of the power-law character seen in other natural and social phenomena such as landslides and market crashes. To demonstrate how such findings can generate quantitative insight into celebrity culture, we assess some folk ideas regarding the frequency distribution and apparent clustering of celebrity deaths.
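
    The power-law claim can be made concrete with the standard maximum-likelihood (Hill) estimator for a tail exponent. The sketch below uses synthetic Pareto-distributed “fame scores” rather than the paper’s data; the threshold x_min and sample size are arbitrary.

```python
# A sketch of estimating a power-law tail exponent from a fame metric
# via the maximum-likelihood (Hill) estimator. Synthetic data only.
import numpy as np

def powerlaw_alpha(samples, x_min):
    """MLE for the exponent of p(x) ~ x^(-alpha), for x >= x_min."""
    tail = samples[samples >= x_min]
    return 1.0 + len(tail) / np.sum(np.log(tail / x_min))

rng = np.random.default_rng(1)
# Draw synthetic "fame scores" from a classical Pareto distribution
# whose density exponent is alpha = 2.5 above the minimum of 100.
fame = (rng.pareto(1.5, size=10_000) + 1.0) * 100.0
print(powerlaw_alpha(fame, x_min=100.0))  # should be close to 2.5
```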

    Skill Rating by Bayesian Inference

    Systems engineering often involves computer modelling of the behaviour of proposed systems and their components. Where a component is human, fallibility must be modelled by a stochastic agent. The identification of a model of decision-making over quantifiable options is investigated using the game domain of chess. Bayesian methods are used to infer the distribution of players’ skill levels from the moves they play rather than from their competitive results. The approach is used on large sets of games by players across a broad FIDE Elo range, and is in principle applicable to any scenario where high-value decisions are being made under pressure.
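
    As a toy illustration of the inference (not the paper’s actual model), skill can be treated as a latent variable over a grid of candidate levels, with each level implying a likelihood for the move actually chosen — here a softmax over hypothetical engine evaluations that sharpens as skill increases. The Elo grid, sensitivity scaling and evaluations below are invented.

```python
# Toy sketch: infer a posterior over candidate skill levels from the
# moves a player chooses, updating by Bayes' rule after each move.
import numpy as np

skills = np.array([1200.0, 1600.0, 2000.0, 2400.0])   # candidate Elo levels
posterior = np.full(len(skills), 1.0 / len(skills))   # uniform prior

def move_likelihood(evals, chosen, skill):
    """P(chosen move | skill): stronger players pick good moves more sharply."""
    beta = skill / 400.0                      # assumed sensitivity scaling
    p = np.exp(beta * (evals - evals.max()))  # softmax over move evaluations
    return (p / p.sum())[chosen]

# Each observation: hypothetical engine evaluations of the legal moves
# (pawns, from the mover's side) and the index of the move played.
observations = [
    (np.array([0.3, 0.1, -0.5, -1.2]), 0),
    (np.array([0.8, 0.7, -0.2]), 1),
    (np.array([1.5, -0.3, -0.4, -1.0, -2.1]), 0),
]

for evals, chosen in observations:
    likelihood = np.array([move_likelihood(evals, chosen, s) for s in skills])
    posterior = posterior * likelihood
    posterior /= posterior.sum()              # Bayes' rule, renormalised

print(dict(zip(skills, np.round(posterior, 3))))
```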

    Enhanced Trustworthy and High-Quality Information Retrieval System for Web Search Engines

    The WWW is the most important source of information, but there is no guarantee of correctness: search engines retrieve a great deal of conflicting information, and the quality of what they provide varies from low to high. We provide enhanced trustworthiness for both specific (entity) and broad (content) queries in web search. Trustworthiness is filtered on five factors: provenance, authority, age, popularity, and related links. A trustworthiness score is calculated from these five factors and stored, thereby improving the performance of retrieving trustworthy websites; the calculated score is stored only for static websites. Quality is assessed according to policies selected by the user, and quality-based ranking of the retrieved trusted information is provided using the WIQA (Web Information Quality Assessment) framework.
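
    A minimal sketch of how such a score might be combined and cached follows; the weights, the 0–1 factor scores and the caching details are assumptions for illustration, not the paper’s actual calculation.

```python
# Hedged sketch: combine the five trust factors into one score via an
# assumed weighted sum, and cache the result only for static websites,
# mirroring the storage policy described in the abstract.
WEIGHTS = {
    "provenance": 0.30,
    "authority": 0.25,
    "age": 0.10,
    "popularity": 0.20,
    "related_links": 0.15,
}

def trustworthiness(factors):
    """Weighted sum of normalised (0-1) factor scores."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

trust_cache = {}  # url -> score, kept only for static websites

def trust_for(url, factors, is_static):
    if is_static and url in trust_cache:
        return trust_cache[url]        # reuse the stored score
    score = trustworthiness(factors)
    if is_static:
        trust_cache[url] = score       # store only for static websites
    return score

print(trust_for("http://example.org", {
    "provenance": 0.9, "authority": 0.8, "age": 0.6,
    "popularity": 0.7, "related_links": 0.5}, is_static=True))
```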

    Deriving query suggestions for site search

    Modern search engines have been moving away from simplistic interfaces that aimed to satisfy a user's need with a single-shot query. Interactive features are now integral parts of web search engines. However, generating good query modification suggestions remains a challenging issue. Query log analysis is one of the major strands of work in this direction. Although much research has been performed on query logs collected on the web as a whole, query log analysis to enhance search on smaller and more focused collections has attracted less attention, despite its increasing practical importance. In this article, we report on a systematic study of different query modification methods applied to a substantial query log collected on a local website that already uses an interactive search engine. We conducted experiments in which we asked users to assess the relevance of potential query modification suggestions constructed using a range of log analysis methods and different baseline approaches. The experimental results demonstrate the usefulness of log analysis for extracting query modification suggestions. Furthermore, our experiments demonstrate that a more fine-grained approach than grouping search requests into sessions allows better refinement terms to be extracted from query log files.
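
    One simple fine-grained log-analysis method — more specific than whole-session grouping — is to treat consecutive queries from the same user within a short time window as reformulations and count the terms added in the second query as candidate suggestions for the first. The sketch below assumes a (user, timestamp, query) log format and a 300-second window; neither is taken from the study.

```python
# Simplified sketch of mining query refinement suggestions from a log:
# consecutive queries by the same user within a time window count as
# reformulations, and newly added terms become candidate suggestions.
from collections import Counter, defaultdict

WINDOW = 300  # seconds between queries to count as a reformulation

def mine_suggestions(log):
    """log: iterable of (user_id, timestamp, query_string), time-sorted."""
    last = {}                               # user -> (time, query, terms)
    suggestions = defaultdict(Counter)      # query -> added-term counts
    for user, ts, query in log:
        terms = set(query.lower().split())
        if user in last:
            prev_ts, prev_query, prev_terms = last[user]
            if ts - prev_ts <= WINDOW:
                for term in terms - prev_terms:
                    suggestions[prev_query][term] += 1
        last[user] = (ts, query.lower(), terms)
    return suggestions

log = [
    ("u1", 0,  "opening hours"),
    ("u1", 40, "library opening hours"),
    ("u2", 10, "opening hours"),
    ("u2", 95, "opening hours christmas"),
]
print(mine_suggestions(log)["opening hours"].most_common(2))
# [('library', 1), ('christmas', 1)]
```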

    An optimization method for nacelle design

    A multi-objective optimisation method is demonstrated using an evolutionary genetic algorithm. The applicability of this method to preliminary nacelle design is demonstrated by coupling it with a response surface model of a wide range of nacelle designs. These designs were modelled using computational fluid dynamics, and a Kriging interpolation was carried out on the results. The NSGA-II algorithm was tested and verified on established multi-dimensional problems. Optimisation on the nacelle model provided three-dimensional Pareto surfaces of optimal designs at both cruise and off-design conditions. In setting up this methodology, several adaptations to the basic NSGA-II algorithm were tested, including constraint handling, weighted objective functions and initial sample size. The influence of these operators is demonstrated in terms of the hypervolume of the determined Pareto set.
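
    At the core of NSGA-II-style ranking is the Pareto-dominance test and extraction of the non-dominated set, which can be sketched independently of the nacelle model. The objective pairs below are hypothetical (e.g. cruise drag and off-design drag, both minimised); the Kriging response surface is not reproduced.

```python
# Minimal sketch of Pareto dominance and non-dominated filtering, the
# building block of NSGA-II's ranking (minimisation on all objectives).
def dominates(a, b):
    """True if a is no worse than b everywhere and strictly better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (cruise drag, off-design drag) pairs for candidate nacelles.
designs = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (1.5, 3.5)]
print(pareto_front(designs))   # [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
```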