Seahawk: moving beyond HTML in Web-based bioinformatics analysis
Abstract

Background: Traditional HTML interfaces for input to and output from bioinformatics analyses on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analyses.

Results: We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric, approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology or of the relevant services is required. In stark contrast to other MOBY-S clients, Seahawk users simply load the Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements, which imports existing user data into the MOBY-S format.

Conclusion: As an easily accessible applet, Seahawk moves beyond standard Web-browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantics-oriented ways of doing Web-based analysis, which empower them to create more complicated, ad hoc analysis workflows without the assistance of a programmer.
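The abstract's data-import idea (regular expressions and XPath pulling user data into MOBY-S XML) can be sketched roughly as below. This is a minimal illustration, not Seahawk's actual engine: the regex, the `AminoAcidSequence` element layout, and the attribute names are assumptions standing in for the real MOBY-S schema.

```python
import re
import xml.etree.ElementTree as ET

def wrap_as_moby(text):
    """Find a FASTA-style record in pasted text with a regular expression
    and wrap it in a MOBY-S-like XML envelope (element names illustrative)."""
    m = re.search(r"(?m)^>(\S+)\n([A-Z\n]+)", text)  # crude FASTA match
    if not m:
        return None
    ident, seq = m.group(1), m.group(2).replace("\n", "")
    obj = ET.Element("mobyData")
    simple = ET.SubElement(obj, "Simple")
    aa = ET.SubElement(simple, "AminoAcidSequence", {"id": ident})
    ET.SubElement(aa, "String", {"articleName": "SequenceString"}).text = seq
    ET.SubElement(aa, "Integer", {"articleName": "Length"}).text = str(len(seq))
    return ET.tostring(obj, encoding="unicode")

fasta = ">fdx1\nMATYKVTLIN\nEAEGLNETID"
moby_xml = wrap_as_moby(fasta)
```

The point of the pattern is that the user never sees the XML: familiar input (a pasted FASTA record, a Web page) is recognised and lifted into the ontology's format behind the scenes.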
CYCLOSA: Decentralizing Private Web Search Through SGX-Based Browser Extensions
By regularly querying Web search engines, users (unconsciously) disclose large amounts of their personal data as part of their search queries, some of which may reveal sensitive information (e.g. health issues, or sexual, political or religious preferences). Several solutions exist that allow users to query search engines with improved privacy protection. However, these solutions suffer from a number of limitations: some are subject to user re-identification attacks, while others lack scalability or are unable to provide accurate results. This paper presents CYCLOSA, a secure, scalable and accurate private Web search solution. CYCLOSA improves security by relying on trusted execution environments (TEEs) as provided by Intel SGX. Further, CYCLOSA proposes a novel adaptive privacy protection solution that reduces the risk of user re-identification. CYCLOSA sends fake queries to the search engine and dynamically adapts their count according to the sensitivity of the user query. In addition, CYCLOSA achieves scalability because it is fully decentralized, spreading the load of distributing fake queries among other nodes. Finally, CYCLOSA achieves accurate Web search results because it handles the real query and the fake queries separately, in contrast to other existing solutions that mix fake and real query results.
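The adaptive idea described above, more fake queries for more sensitive user queries, can be sketched as follows. The linear scaling, the bounds `k_min`/`k_max`, and the sensitivity scale in [0, 1] are assumptions for illustration; the paper's actual policy may differ.

```python
def fake_query_count(sensitivity, k_min=1, k_max=7):
    """Number of fake cover queries to send alongside one real query.

    sensitivity: assumed score in [0, 1], higher = more sensitive query.
    A simple linear interpolation between k_min and k_max; CYCLOSA's
    real adaptation rule is not specified here.
    """
    if not 0.0 <= sensitivity <= 1.0:
        raise ValueError("sensitivity must be in [0, 1]")
    return k_min + round(sensitivity * (k_max - k_min))

# A sensitive query (e.g. a health topic) triggers more cover traffic
# than an innocuous one, so protection cost tracks actual risk.
high = fake_query_count(0.9)
low = fake_query_count(0.1)
```

Sending the fake queries from other decentralized nodes, rather than from the user's own machine, is what spreads the load and frustrates re-identification by query-source correlation.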
Application of Natural Language Processing and Evidential Analysis to Web-Based Intelligence Information Acquisition
The quality of decisions made in business and government relates directly to the quality of the information used to formulate them. This information may be retrieved from an organization's knowledge base (Intranet) or from the World Wide Web. Information held on an intelligence service's Intranet can be efficiently manipulated by technologies based either on semantics, such as ontologies, or on statistics, such as meaning-based computing. These technologies require complex processing of large amounts of textual information. However, they cannot currently be applied effectively to Web-based search, owing to obstacles such as the lack of semantic tagging. The new approach proposed in this paper supports Web-based search for intelligence information using evidence-based natural language processing (NLP). It combines traditional NLP methods for filtering Web-search results, Grounded Theory to test the completeness of the evidence, and Evidential Analysis to test the quality of the gathered information. The enriched information derived from the Web search is transferred to the intelligence service's knowledge base for handling by an effective Intranet search system, thus substantially increasing the information available for intelligence analysis. The paper shows that the quality of the retrieved information is significantly enhanced by the discovery of previously unknown facts derived from known facts.
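The NLP filtering stage described above can be illustrated with a toy sketch: keep only search-result snippets that mention enough evidence terms. The scoring scheme (raw term counts against a fixed threshold) is an assumption for illustration, far simpler than the paper's combination of NLP, Grounded Theory and Evidential Analysis.

```python
import re

def filter_results(snippets, evidence_terms, threshold=2):
    """Keep Web-search snippets that mention at least `threshold`
    occurrences of the given evidence terms, best-scoring first.
    A toy stand-in for the paper's evidence-based filtering stage."""
    kept = []
    for snippet in snippets:
        tokens = re.findall(r"[a-z]+", snippet.lower())
        score = sum(tokens.count(term) for term in evidence_terms)
        if score >= threshold:
            kept.append((score, snippet))
    return [snippet for _, snippet in sorted(kept, reverse=True)]

snippets = [
    "the manifest lists a shipment arriving at the port",
    "local sports results from the weekend",
    "port authority confirms the shipment",
]
relevant = filter_results(snippets, ["shipment", "port", "manifest"])
```

In the paper's pipeline, results passing such a filter would then be tested for completeness and quality before being added to the Intranet knowledge base.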