2,902 research outputs found
Policy issues in interconnecting networks
To support the activities of the Federal Research Internet Coordinating Committee (FRICC) in creating an interconnected set of networks to serve the research community, two workshops were held to address the technical support of the policy issues that arise when interconnecting such networks. The workshops addressed the required and feasible technologies and architectures that could be used to satisfy the desired interconnection policies. The results of the workshops are documented.
Dealing with uncertain entities in ontology alignment using rough sets
Ontology alignment facilitates the exchange of knowledge among heterogeneous data sources. Many approaches to ontology alignment use multiple similarity measures to map entities between ontologies. A key challenge, however, is dealing with uncertain entities, for which the employed similarity measures produce conflicting results on the similarity of the mapped entities. This paper presents OARS, a rough-set-based approach to ontology alignment that achieves a high degree of accuracy in situations where uncertainty arises from the conflicting results generated by different similarity measures. OARS employs a combinational approach, considering both lexical and structural similarity measures. OARS is extensively evaluated with the benchmark ontologies of the Ontology Alignment Evaluation Initiative (OAEI) 2010, and performs best in recall in comparison with a number of alignment systems while generating comparable precision.
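The abstract does not specify how OARS blends its measures, but the general idea of combining a lexical and a structural similarity score can be sketched as follows. This is a minimal illustration, not the OARS algorithm: the character-ratio lexical measure, the Jaccard structural measure over neighbor labels, and the equal weights are all assumptions made for the example.

```python
from difflib import SequenceMatcher

def lexical_similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two entity labels (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def combined_similarity(a, b, a_neighbors, b_neighbors,
                        w_lex=0.5, w_struct=0.5):
    """Weighted blend of a lexical and a structural similarity score.

    Structural similarity here is the Jaccard overlap of neighboring
    entity labels -- a stand-in for the structural measures an
    alignment system would actually use.
    """
    lex = lexical_similarity(a, b)
    na, nb = set(a_neighbors), set(b_neighbors)
    struct = len(na & nb) / len(na | nb) if (na | nb) else 0.0
    return w_lex * lex + w_struct * struct

# Hypothetical entities from two bibliographic ontologies.
score = combined_similarity("Author", "Writer",
                            ["Publication", "Name"],
                            ["Publication", "PenName"])
```

In a real aligner, the interesting case is exactly the one the paper targets: when `lex` and `struct` disagree sharply, a fixed weighting is unreliable, which motivates treating the entity as uncertain and applying rough-set reasoning instead.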
Nurturing the Accumulation of Innovations: Lessons from the Internet
The innovations that became the foundation for the Internet originate from two eras that illustrate two distinct models for accumulating innovations over the long haul. The pre-commercial era illustrates the operation of several useful non-market institutional arrangements. It also illustrates a potential drawback to government sponsorship: in this instance, the truncation of exploratory activity. The commercial era offers a rather different set of lessons. It highlights the extraordinary power of market-oriented and widely distributed investment and adoption, demonstrating how market experimentation fosters innovative activity. It also illustrates a few of the conditions necessary to unleash value creation from such accumulated lessons, such as standards development and competition, and nurturing legal and regulatory policies.
Worldnet
The expanding use of powerful workstations coupled to ubiquitous networks is transforming scientific and engineering research and the ways organizations around the world do business. By the year 2000, few enterprises will be able to succeed without mastery of this technology, which will be embodied in an information infrastructure based on a worldwide network. A recurring theme in all the discussions of what might be possible within the emerging Worldnet is people and machines working together in new ways across distance and time. A review is presented of the basic concepts on which the architecture of Worldnet must be built: coordination of action, authentication, privacy, and naming. Worldnet must provide additional functions to support the ongoing processes of suppliers and consumers: help services, aids for designing and producing subsystems, spinning off new machines, and resistance to attack. This discussion begins to reveal the constituent elements of a theory for Worldnet, a theory focused on what people will do with computers rather than on what computers do.
Analyzing Social and Stylometric Features to Identify Spear Phishing Emails
Spear phishing is a complex targeted attack in which an attacker harvests information about the victim prior to the attack. This information is then used to create sophisticated, genuine-looking attack vectors, drawing the victim to compromise confidential information. What makes spear phishing different from, and more powerful than, normal phishing is this contextual information about the victim. Online social media services can be one such source for gathering vital information about an individual. In this paper, we characterize and examine a true positive dataset of spear phishing, spam, and normal phishing emails from Symantec's enterprise email scanning service. We then present a model to detect spear phishing emails sent to employees of 14 international organizations, using social features extracted from LinkedIn. Our dataset consists of 4,742 targeted attack emails sent to 2,434 victims, 9,353 non-targeted attack emails sent to 5,912 non-victims, and publicly available information from their LinkedIn profiles. We applied various machine learning algorithms to this labeled data and achieved an overall maximum accuracy of 97.76% in identifying spear phishing emails, using a combination of social features from LinkedIn profiles and stylometric features extracted from email subjects, bodies, and attachments. However, we achieved a slightly better accuracy of 98.28% without the social features. Our analysis revealed that social features extracted from LinkedIn do not help in identifying spear phishing emails. To the best of our knowledge, this is one of the first attempts to combine stylometric features extracted from emails with social features extracted from an online social network to detect targeted spear phishing emails.
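The abstract does not enumerate the stylometric features used, but the feature-extraction step it describes can be sketched as follows. This is an illustrative stand-in: the five features below (subject length, word count, average word length, exclamation/question-mark count, all-caps word count) are assumptions for the example, and the paper's actual feature set is richer and also covers attachments.

```python
import re
from statistics import mean

def stylometric_features(subject: str, body: str) -> list:
    """Extract a small vector of stylometric features from an email.

    The resulting fixed-length vector is what would be fed, alongside
    many other features, to a supervised classifier trained on labeled
    spear-phishing vs. benign emails.
    """
    words = re.findall(r"[A-Za-z']+", body)
    return [
        float(len(subject)),                          # subject length
        float(len(words)),                            # body word count
        mean(len(w) for w in words) if words else 0.0,  # avg word length
        float(body.count("!") + body.count("?")),     # urgency punctuation
        float(sum(w.isupper() for w in words)),       # all-caps words
    ]

# Hypothetical example email.
feats = stylometric_features(
    "Urgent: verify your account",
    "Dear user, CLICK the link NOW! Your account will be locked.",
)
```

Vectors like this, one per email, would then be passed to an off-the-shelf learner (the paper reports trying several machine learning algorithms) to separate targeted attack emails from the rest.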
Natural language processing
Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems (text summarization, information extraction, information retrieval, etc.), including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.