A New Scheme for Minimizing Malicious Behavior of Mobile Nodes in Mobile Ad Hoc Networks
The performance of Mobile Ad hoc networks (MANET) depends on the cooperation
of all active nodes. However, supporting a MANET is a cost-intensive activity
for a mobile node. From a single mobile node perspective, the detection of
routes as well as forwarding packets consume local CPU time, memory,
network bandwidth, and, last but not least, energy. We believe this is one
of the main factors that strongly motivates a mobile node to deny packet
forwarding for others while at the same time using their services to deliver its
own data. This behavior of an independent mobile node is commonly known as
misbehaving or selfishness. A vast amount of research has already been done on
minimizing the malicious behavior of mobile nodes. However, most of it has focused
on methods, techniques, and algorithms to remove such nodes from the MANET. We
believe that the frequent elimination of such misbehaving nodes has hindered
the free and rapid growth of MANETs. This paper provides a critical analysis of
the recent research work and its impact on the overall performance of a MANET.
In this paper, we clarify some of the misconceptions in the understanding of
selfishness and misbehavior of nodes. Moreover, we propose a mathematical
model, based on a time-division technique, that minimizes the malicious
behavior of mobile nodes by avoiding the unnecessary elimination of bad nodes. Our
proposed approach not only improves resource sharing but also creates a
consistent trust and cooperation (CTC) environment among the mobile nodes. The
simulation results demonstrate the success of the proposed approach, which
significantly reduces malicious behavior and consequently achieves higher
overall throughput for the MANET than other well-known schemes.
Comment: 10 pages, IEEE format, International Journal of Computer Science and
Information Security (IJCSIS), July 2009, ISSN 1947-5500, Impact Factor 0.42
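The abstract names a time-division scheme that throttles misbehaving nodes rather than eliminating them, but gives no formal details. The sketch below is purely illustrative of that idea, not the paper's actual model: the trust update rule, the 0.5 threshold, and the cooldown length are all invented parameters.

```python
# Illustrative sketch (not the paper's model): forwarding duty is observed
# per time slot; a node whose trust drops is throttled for a cooldown
# period instead of being permanently removed from the MANET.

from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    trust: float = 1.0   # hypothetical trust score in [0, 1]
    cooldown: int = 0    # time slots remaining in penalty mode

    def observe_slot(self, forwarded: int, expected: int) -> None:
        """Update trust from one time slot of observed forwarding behavior."""
        ratio = forwarded / expected if expected else 1.0
        # Exponential moving average: history matters, but recovery is possible.
        self.trust = 0.8 * self.trust + 0.2 * ratio
        if self.trust < 0.5 and self.cooldown == 0:
            self.cooldown = 3  # throttle the selfish node temporarily

    def tick(self) -> None:
        if self.cooldown > 0:
            self.cooldown -= 1

    @property
    def may_use_network(self) -> bool:
        return self.cooldown == 0

n = Node(1)
for _ in range(5):
    n.observe_slot(forwarded=0, expected=10)  # consistently selfish
    n.tick()
print(n.trust, n.may_use_network)  # trust has decayed below 0.5; access denied
```

Because the penalty expires, a node that resumes forwarding regains network access, which is the "avoid unnecessary elimination" property the abstract argues for.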
A classification of emerging and traditional grid systems
The grid has evolved in numerous distinct phases. It started in the early '90s as a model of metacomputing in which supercomputers share resources; subsequently, researchers added the ability to share data. This is usually referred to as the first-generation grid. By the late '90s, researchers had outlined the framework for second-generation grids, characterized by their use of grid middleware systems to 'glue' different grid technologies together. Third-generation grids originated in the early millennium when Web technology was combined with second-generation grids. As a result, the invisible grid, in which grid complexity is fully hidden through resource virtualization, started receiving attention. Subsequently, grid researchers identified the requirement for semantically rich knowledge grids, in which middleware technologies are more intelligent and autonomic. Recently, the necessity for grids to support and extend the ambient intelligence (AmI) vision has emerged. In AmI, humans are surrounded by computing technologies that are unobtrusively embedded in their surroundings.
However, third-generation grids' current architecture doesn't meet the requirements of next-generation grids (NGG) and service-oriented knowledge utility (SOKU). A few years ago, a group of independent experts, convened by the European Commission, identified these shortcomings as a way to identify potential European grid research priorities for 2010 and beyond. The experts envision grid systems' information, knowledge, and processing capabilities as a set of utility services. Consequently, new grid systems are emerging to materialize these visions. Here, we review emerging grids and classify them to motivate further research and help establish a solid foundation in this rapidly evolving area.
GRIDKIT: Pluggable overlay networks for Grid computing
A 'second generation' approach to the provision of Grid middleware is now emerging, built on service-oriented architecture and web services standards and technologies. However, advanced Grid applications have significant demands that are not addressed by present-day web services platforms. As one prime example, current platforms do not support the rich diversity of communication 'interaction types' demanded by advanced applications (e.g. publish-subscribe, media streaming, peer-to-peer interaction). In this paper we describe the Gridkit middleware, which augments the basic service-oriented architecture to address this particular deficiency. We particularly focus on the communications infrastructure required to support multiple interaction types in a unified, principled and extensible manner, which we present in terms of the novel concept of pluggable overlay networks.
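The core idea the abstract describes, interaction types provided by interchangeable overlay plugins behind one framework interface, can be sketched as follows. All class and method names here are hypothetical; this is not the actual Gridkit API.

```python
# Hypothetical sketch of pluggable overlays: each interaction type
# (publish-subscribe, streaming, ...) is a plugin behind one interface.

from abc import ABC, abstractmethod

class Overlay(ABC):
    """One communication interaction type, pluggable at configuration time."""
    @abstractmethod
    def deliver(self, message: str) -> list[str]: ...

class PubSubOverlay(Overlay):
    """Fan-out delivery to every registered subscriber."""
    def __init__(self) -> None:
        self.subscribers: list[str] = []

    def subscribe(self, node: str) -> None:
        self.subscribers.append(node)

    def deliver(self, message: str) -> list[str]:
        return [f"{node} <- {message}" for node in self.subscribers]

class Framework:
    """Minimal plugin registry standing in for the middleware kernel."""
    def __init__(self) -> None:
        self.overlays: dict[str, Overlay] = {}

    def plug(self, name: str, overlay: Overlay) -> None:
        self.overlays[name] = overlay

    def send(self, name: str, message: str) -> list[str]:
        return self.overlays[name].deliver(message)

fw = Framework()
ps = PubSubOverlay()
ps.subscribe("nodeA")
ps.subscribe("nodeB")
fw.plug("pubsub", ps)
print(fw.send("pubsub", "hello"))  # ['nodeA <- hello', 'nodeB <- hello']
```

A media-streaming or peer-to-peer overlay would implement the same `deliver` contract, which is what makes the interaction types extensible without changing the framework.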
Smart PIN: utility-based replication and delivery of multimedia content to mobile users in wireless networks
Next generation wireless networks rely on heterogeneous connectivity technologies to support various rich media services such as personal information storage, file sharing and multimedia streaming. Due to users' mobility and the dynamic characteristics of wireless networks, data availability across collaborating devices is a critical issue. In this context, Smart PIN was proposed as a personal information network that focuses on delivery performance and cost efficiency. Smart PIN uses a novel data replication scheme based on individual and overall system utility to best balance the requirements of static data and multimedia content delivery with variable device availability due to user mobility. Simulations show improved results in comparison with other general-purpose data replication schemes in terms of data availability.
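The utility-based replication decision the abstract refers to can be illustrated with a toy rule: replicate an item only when the estimated utility gain outweighs the cost. The utility function and every parameter below are invented for illustration; the abstract does not specify Smart PIN's actual formula.

```python
# Illustrative sketch (parameters invented): decide whether to place one
# more replica of an item, trading availability benefit against cost.

def replica_utility(popularity: float, availability: float, cost: float) -> float:
    """Hypothetical utility: an extra replica helps most for popular items
    whose current hosting devices are rarely available."""
    benefit = popularity * (1.0 - availability)
    return benefit - cost

def should_replicate(popularity: float, availability: float,
                     cost: float, threshold: float = 0.0) -> bool:
    return replica_utility(popularity, availability, cost) > threshold

# Popular item, hosting devices often offline -> worth replicating.
print(should_replicate(popularity=0.9, availability=0.3, cost=0.2))  # True
# Unpopular item, hosts almost always reachable -> not worth it.
print(should_replicate(popularity=0.2, availability=0.9, cost=0.2))  # False
```

Summing such per-replica utilities across devices would give the "overall system utility" the scheme balances against individual device utility.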
Quality assessment technique for ubiquitous software and middleware
The new paradigm of computing and information systems is ubiquitous computing. The technology-oriented issues of ubiquitous computing systems have led researchers to pay much attention to feasibility studies of the technologies rather than to building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the one recognised standard quality model (ISO/IEC 9126) is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is very unlikely that this scheme will be directly applicable to ubiquitous computing environments, which differ considerably from conventional software, so much attention is being given to reformulating existing methods and, especially, to elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, in some cases with measures. In addition, this quality model has been applied to an industrial setting of the ubiquitous computing environment. This has revealed that while the approach is sound, some parts need further development in the future.
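The three-level structure the abstract attributes to ISO/IEC 9126 (characteristics, sub-characteristics, metrics) maps naturally onto a nested data structure. The specific sub-characteristics and metric names below are illustrative examples, not the paper's selected set for ubiquitous computing.

```python
# Sketch of the three-level quality-model layering mentioned in the
# abstract: characteristic -> sub-characteristics -> metrics.
# Entries are illustrative examples only.

quality_model: dict[str, dict[str, list[str]]] = {
    "reliability": {
        "maturity": ["mean time between failures"],
        "recoverability": ["restart time after failure"],
    },
    "efficiency": {
        "time behaviour": ["response time", "throughput"],
    },
}

def metrics_for(characteristic: str) -> list[str]:
    """Flatten all metrics defined under one quality characteristic."""
    subs = quality_model.get(characteristic, {})
    return [m for metric_list in subs.values() for m in metric_list]

print(metrics_for("reliability"))
# ['mean time between failures', 'restart time after failure']
```

An evaluation process would walk this structure top-down, attaching the paper's evaluation questions at the sub-characteristic level and measured values at the metric level.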