3,851 research outputs found
Enhancing Data Security by Making Data Disappear in P2P Systems
This paper addresses the problem of securing data by making it disappear
after a time limit, so that it cannot be recovered by an unauthorized party.
The method responds to the need to keep data secure and to protect the
privacy of data archived on servers and in Cloud and Peer-to-Peer
architectures. Because of the distributed nature of these architectures, it
is impossible to destroy the data completely. Instead, the data is stored
encrypted and only the key is managed, which is easier to do because the key
is small and can be hidden in a distributed hash table (DHT). Even if the
keys in the DHT and the encrypted data were compromised, the data would
still be secure. This paper describes existing solutions, points out their
limitations and proposes improvements in the form of a new secure
architecture. We implemented and evaluated this architecture on the Java
platform and showed that it is more secure than the other architectures.
Comment: 18 pages
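To make the encrypt-then-manage-the-key idea concrete, here is a minimal Python sketch, an illustration rather than the paper's architecture: the payload is encrypted locally, and the key is split into XOR shares that would be scattered across DHT nodes (a plain dict stands in for the DHT; all names are hypothetical).

```python
import os
from cryptography.fernet import Fernet  # symmetric authenticated encryption

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n XOR shares; all n are needed to rebuild it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def join_key(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

# Encrypt the data and keep only the ciphertext.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"archived record")

# Scatter key shares into a stand-in DHT; discarding them "destroys" the data.
dht = {f"node-{i}": share for i, share in enumerate(split_key(key, 5))}
recovered = join_key([dht[f"node-{i}"] for i in range(5)])
assert Fernet(recovered).decrypt(ciphertext) == b"archived record"
```

Once the nodes expire or discard their shares, only unreadable ciphertext remains, which is the disappearing-data property the abstract describes.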
Inference Under Convex Cone Alternatives for Correlated Data
In this research, inferential theory for hypothesis testing under general
convex cone alternatives for correlated data is developed. While there exists
extensive theory for hypothesis testing under smooth cone alternatives with
independent observations, extension to correlated data under general convex
cone alternatives remains an open problem. This long-standing problem is
addressed by (1) establishing that a "generalized quasi-score" statistic is
asymptotically equivalent to the squared length of the projection of the
standard Gaussian vector onto the convex cone and (2) showing that the
asymptotic null distribution of the test statistic is a weighted chi-squared
distribution, where the weights are "mixed volumes" of the convex cone and its
polar cone. Explicit expressions for these weights are derived using the
volume-of-tube formula around a convex manifold in the unit sphere.
Furthermore, an asymptotic lower bound is constructed for the power of the
generalized quasi-score test under a sequence of local alternatives in the
convex cone. Applications to testing under order restricted alternatives for
correlated data are illustrated.
Comment: 31 pages
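The weighted chi-squared (chi-bar-squared) null distribution can be checked numerically for a simple cone. The Python sketch below is an illustration rather than the paper's general result: it projects standard Gaussian vectors onto the nonnegative orthant in R^p, where the projection is the coordinatewise max(z, 0) and the mixing weights are known to be Binomial(p, 1/2) probabilities.

```python
import numpy as np
from scipy.stats import binom, chi2

rng = np.random.default_rng(0)
p, n_sim = 4, 200_000

# Squared length of the projection of N(0, I_p) onto the orthant cone.
z = rng.standard_normal((n_sim, p))
stat = (np.maximum(z, 0.0) ** 2).sum(axis=1)

# Chi-bar-squared CDF: weights w_j = P(Binomial(p, 1/2) = j) on chi2 with j
# degrees of freedom, where chi2_0 is a point mass at zero.
def chibar_cdf(x: float) -> float:
    w = binom.pmf(np.arange(p + 1), p, 0.5)
    return w[0] * (x >= 0) + sum(w[j] * chi2.cdf(x, df=j) for j in range(1, p + 1))

for x in (1.0, 3.0, 6.0):
    print(f"x={x}: empirical={np.mean(stat <= x):.4f}, theory={chibar_cdf(x):.4f}")
```

For general convex cones the weights are the mixed-volume quantities described in the abstract, but the simulation pattern is the same: project, take the squared length, compare.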
Enhancing the photomixing efficiency of optoelectronic devices in the terahertz regime
A method to reduce the transit time of the majority of carriers in
photomixers and photodetectors to picoseconds is proposed. Enhanced optical
fields associated with surface plasmon polaritons, coupled with the velocity
overshoot phenomenon, result in a net decrease of the carrier transit time.
As an example, model calculations demonstrating the improvement in the THz
power generation efficiency of a photomixer based on low-temperature-grown
GaAs (2800 and 31.8 W at 1 and 5 THz, respectively) are presented. Owing to
its minimal dependence on the carrier recombination time, the proposed
method is anticipated to pave the way for enhancing the speed and efficiency
of photomixers and detectors covering UV to far-infrared communication
wavelengths (300 to 1600 nm).
Comment: 5 pages, 4 figures
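As background for why a shorter carrier response time matters, the sketch below evaluates the standard two-pole photomixer response P(f) proportional to 1/[(1 + (2*pi*f*tau)^2)(1 + (2*pi*f*R*C)^2)], a textbook model rather than the paper's calculation; the lifetime, resistance, and capacitance values are illustrative assumptions.

```python
import numpy as np

def photomixer_rolloff(f_hz: float, tau_s: float, r_ohm: float, c_farad: float) -> float:
    """Relative THz output power of a photomixer (two-pole roll-off model)."""
    w = 2 * np.pi * f_hz
    return 1.0 / ((1 + (w * tau_s) ** 2) * (1 + (w * r_ohm * c_farad) ** 2))

R, C = 50.0, 1e-15                  # assumed antenna resistance and electrode capacitance
tau_slow, tau_fast = 1e-12, 1e-13   # carrier response time: ~1 ps vs sub-ps (assumed)

for f in (1e12, 5e12):  # 1 THz and 5 THz
    gain = photomixer_rolloff(f, tau_fast, R, C) / photomixer_rolloff(f, tau_slow, R, C)
    print(f"{f/1e12:.0f} THz: x{gain:.1f} relative output from the faster response")
```

Shortening the effective response time pushes the roll-off to higher frequencies, with the largest relative gains at the highest frequencies; that is the lever the proposed transit-time reduction pulls.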
On large-sample estimation and testing via quadratic inference functions for correlated data
Hansen (1982) proposed the class of "generalized method of moments" (GMM)
estimators for estimating a vector of regression parameters from a set of
score functions, and established that, under certain regularity conditions,
the GMM estimator is consistent, asymptotically normal and asymptotically
efficient. In the generalized estimating equation framework, extending the
GMM principle to implicitly estimate the underlying correlation structure
leads to a "quadratic inference function" (QIF) for the analysis of
correlated data. The main objectives of this research are to (1) formulate an
appropriate estimated covariance matrix for the set of extended score functions
defining the inference functions; (2) develop a unified large-sample
theoretical framework for the QIF; (3) derive a generalization of the QIF test
statistic for a general linear hypothesis problem involving correlated data
while establishing the asymptotic distribution of the test statistic under the
null and local alternative hypotheses; (4) propose an iteratively reweighted
generalized least squares algorithm for inference in the QIF framework; and (5)
investigate the effect of basis matrices, defining the set of extended score
functions, on the size and power of the QIF test through Monte Carlo simulated
experiments.
Comment: 32 pages, 2 figures
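As one way to see what a QIF looks like in practice, here is a minimal Python sketch for a linear marginal model, an illustration under simplifying assumptions (identity link, unit marginal variances, two common basis matrices: the identity and an exchangeable off-diagonal pattern), not the estimator developed in this research.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, t, p = 200, 4, 2                      # clusters, cluster size, parameters

# Simulated clustered data with exchangeable within-cluster correlation.
X = rng.standard_normal((n, t, p))
beta_true = np.array([1.0, -0.5])
cov = 0.5 * np.eye(t) + 0.5              # compound-symmetry covariance
y = X @ beta_true + rng.multivariate_normal(np.zeros(t), cov, size=n)

M = [np.eye(t), np.ones((t, t)) - np.eye(t)]   # basis matrices

def qif(beta: np.ndarray) -> float:
    """Q(beta) = n * gbar' C^{-1} gbar for the stacked extended scores."""
    g = np.empty((n, len(M) * p))
    for i in range(n):
        r = y[i] - X[i] @ beta                  # cluster residual vector
        g[i] = np.concatenate([X[i].T @ Mk @ r for Mk in M])
    gbar = g.mean(axis=0)
    C = (g.T @ g) / n                           # estimated covariance of the scores
    return float(n * gbar @ np.linalg.solve(C, gbar))

fit = minimize(qif, x0=np.zeros(p), method="Nelder-Mead")
print("QIF estimate:", fit.x)                   # should be close to beta_true
```

Minimizing Q(beta) is the QIF analogue of solving the generalized estimating equations; the minimized value and its counterparts under restricted hypotheses drive the test statistics discussed in the abstract.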
Using agent-based simulation to empirically examine complexity in the carbon footprint business process
A critical analysis of the extant literature shows that simulation is widely used as a research method in the natural sciences, engineering and the social sciences, alongside argumentation and formalisation, as a third way of carrying out research. Simulation is not as widely used in business and management research as it ought to be, though this is changing for the better with technological advances in computers and their computational power. These advances enhance the capability of theoretical research models, both in defining a problem and in empirically examining a solution to it in simulated reality. Management journal searches for "Simulation and Complexity Theory" returned no results, which indicates that this combination is not popular in management research, even though each is used individually. The major objective of this paper is to analyse some of the conceptual (theoretical) and methodological (empirical) contributions that agent-based simulation and complexity theory can make to business process research in the business and management community. To that end, the paper discusses basic ideas for using agent-based simulation as a method in business and management studies and shows how an agent-based model can be applied to a business process as complex as the carbon footprint, as sketched below; it is in this context that the use of complexity as the base theory for empirically examining a business process is discussed. Throughout the article, our research on complex adaptive systems (e.g., an accounting information system) in continuously changing organisations managing complex business processes (e.g., the carbon footprint business process) serves as the basis for illustrating the concepts, and avenues for further management research using these tools and methodology are suggested.
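To make the agent-based framing concrete, the following plain-Python sketch is a hypothetical illustration rather than the paper's model: business units are agents whose emissions adapt to a firm-wide average (a 5% abatement rule is assumed), and the emergent total footprint is the quantity such a simulation would let a researcher examine empirically.

```python
import random

random.seed(42)

class BusinessUnit:
    """An agent with its own emissions level and a simple adaptation rule."""
    def __init__(self, emissions: float):
        self.emissions = emissions

    def step(self, firm_average: float) -> None:
        # Units above the firm-wide average invest in abatement (assumed 5% cut);
        # the rest drift with ordinary business variation.
        if self.emissions > firm_average:
            self.emissions *= 0.95
        else:
            self.emissions *= random.uniform(0.99, 1.03)

units = [BusinessUnit(random.uniform(50, 150)) for _ in range(100)]

for period in range(20):
    average = sum(u.emissions for u in units) / len(units)
    for u in units:
        u.step(average)
    footprint = sum(u.emissions for u in units)
    if period % 5 == 0:
        print(f"period {period:2d}: total footprint = {footprint:8.1f}")
```

Even this toy rule set produces path-dependent aggregate behaviour from local interactions, which is the kind of emergent complexity the authors argue agent-based simulation can expose in a business process.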