2,278 research outputs found

    Data Mining in Social Networks

    The objective of this study is to examine the idea of Big Data and its applications in data mining. The volume of data in the world is expanding year by year and accumulating into big data, which can be exploited through a range of data mining tasks. In short, Big Data can be regarded as an "asset," and data mining as the technique employed to extract useful results from it. This paper implements an HACE algorithm that analyzes the structure of big data and presents an efficient data mining technique. The framework model incorporates a mixture of information sources, mining techniques, user interests, security, and data protection mechanisms. The study also analyzes and presents the challenges and issues faced in the big data model.
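    As a rough illustration of the kind of layered framework the abstract describes, the sketch below wires heterogeneous record sources through a privacy layer into a trivial mining step. It is a minimal sketch of the layered idea only, not the paper's HACE implementation; the stage names (`load_sources`, `anonymize`, `mine`) and the record fields are hypothetical.

```python
from collections import Counter
from typing import Iterable

def load_sources() -> Iterable[dict]:
    """Stand-in for heterogeneous sources (databases, logs, social feeds)."""
    yield {"user": "u1", "tags": ["sports", "news"]}
    yield {"user": "u2", "tags": ["news", "music"]}
    yield {"user": "u1", "tags": ["news"]}

def anonymize(record: dict) -> dict:
    """Privacy layer: drop direct identifiers before mining."""
    return {k: v for k, v in record.items() if k != "user"}

def mine(records: Iterable[dict]) -> Counter:
    """Mining layer: a trivial frequency count over tags."""
    counts = Counter()
    for rec in records:
        counts.update(rec.get("tags", []))
    return counts

if __name__ == "__main__":
    cleaned = (anonymize(r) for r in load_sources())
    print(mine(cleaned).most_common(3))
```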

    Multilateral Transparency for Security Markets Through DLT

    For decades, changing technology and policy choices have worked to fragment securities markets, rendering them so dark that neither the ownership nor the real-time price of securities is generally visible to all parties multilaterally. The policies in the U.S. National Market System and the EU Markets in Financial Instruments Directive, together with universal adoption of the indirect holding system, have pushed Western securities markets into a corner from which escape to full transparency has seemed either impossible or prohibitively expensive. Although the reader has a right to skepticism given the exaggerated promises surrounding blockchain in recent years, we demonstrate in this paper that distributed ledger technology (DLT) has the potential to return fragmented securities markets to multilateral transparency. Leading markets generally lack transparency in two ways that derive from their basic structure: (1) multiple platforms on which trades in the same security are matched have separate bid/ask queues that are not consolidated in real time (fragmented pricing), and (2) high-speed transfers of securities are enabled by placing ownership of the securities in financial institutions, thus preventing transparent ownership (depository or street-name ownership). The distributed nature of DLT allows multiple copies of the same pricing queue to be held simultaneously by a large number of order-matching platforms, curing the problem of fragmented pricing. The same distributed nature would allow the issuers of securities to be nodes in a DLT network, returning control over securities ownership and transfer to those issuers and thus restoring transparent ownership through direct holding with the issuer. A serious objection to DLT is its high latency, with each Bitcoin blockchain transaction taking up to ten minutes. To remedy this, we first propose a private network without cumbersome proof-of-work cryptography. Second, we introduce into our model the quickly evolving technology of "lightning networks": advanced two-layer off-chain networks that conduct high-speed transactions with only periodic memorialization in the permanent DLT network. Against the background of existing securities trading and settlement, this Article demonstrates that a DLT network could bring multilateral transparency and thus represent the next step in the evolution of markets in their current configuration.
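    To make the "multiple copies of the same pricing queue" idea concrete, here is a minimal sketch, assuming nothing about the authors' actual design: two order-matching nodes each hold an identical hash-chained copy of the order ledger, so their consolidated views cannot silently diverge. The `Ledger` class and order fields are illustrative inventions only.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class Ledger:
    """A hash-chained ledger; every node holds an identical copy."""
    blocks: list = field(default_factory=list)

    def append(self, entry: dict) -> None:
        prev = self.blocks[-1]["hash"] if self.blocks else "genesis"
        payload = json.dumps(entry, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.blocks.append({"prev": prev, "entry": entry, "hash": digest})

    def head(self) -> str:
        return self.blocks[-1]["hash"] if self.blocks else "genesis"

# Two order-matching platforms replicate the same bid/ask queue:
# each broadcasts every order, so both copies stay byte-identical.
node_a, node_b = Ledger(), Ledger()
for order in [{"sym": "XYZ", "side": "bid", "px": 10.0},
              {"sym": "XYZ", "side": "ask", "px": 10.2}]:
    node_a.append(order)   # platform A records the order ...
    node_b.append(order)   # ... and gossips it to platform B
assert node_a.head() == node_b.head()  # one consolidated pricing view
```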

    Implementation of Captcha as Graphical Passwords For Multi Security

    Passwords play a vital role in computer security for validating human users. Graphical passwords offer more security than text-based passwords: typical users choose regular, memorable text passwords that are easy to guess, while harder-to-guess passwords impose greater mathematical or computational burden on the user, and automated attacks on stronger graphical schemes run into hard Artificial Intelligence (AI) problems. Building on such hard AI problems, a new Captcha technology known as Captcha as Graphical Password (CaRP), from a novel family of graphical password systems, has been developed. CaRP is both a Captcha and a graphical password scheme in one. CaRP leverages hard AI problems to address security issues such as online guessing attacks, relay attacks, and, if combined with dual-view technologies, shoulder-surfing attacks. Pass-points, a methodology within CaRP, addresses the image hotspot problem in graphical password systems, which leads to weak passwords. CaRP can also combine images or colors with text to generate session passwords, which aid authentication because a new password is generated for each session and used only once. Against shoulder surfing, CaRP provides inexpensive security and good usability, and thus improves online security. CaRP is not a panacea; however, it offers protection and usability for some online applications seeking to improve online security.
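    The session-password idea can be illustrated with a small sketch. Assuming a hypothetical scheme in which the server shuffles a color grid each session and the user's secret is a set of memorized grid positions, both sides can derive a one-time password that changes every session. The function names and grid layout below are inventions for illustration, not CaRP's actual protocol.

```python
import hashlib
import secrets

# Hypothetical 3x3 grid of color cells; the server shuffles it per session,
# so the same memorized positions yield a different password each time.
COLORS = ["red", "green", "blue", "yellow", "cyan", "magenta",
          "black", "white", "orange"]

def new_session():
    """Server side: shuffle the grid and issue a per-session nonce."""
    grid = COLORS[:]
    secrets.SystemRandom().shuffle(grid)
    return grid, secrets.token_hex(8)

def session_password(grid, nonce, secret_cells):
    """Both sides derive the one-time password from secret cells + nonce."""
    picked = "".join(grid[i] for i in secret_cells)
    return hashlib.sha256((picked + nonce).encode()).hexdigest()[:8]

# The user memorizes positions (e.g. cells 0, 4, 8), never a fixed string.
grid, nonce = new_session()
otp_client = session_password(grid, nonce, secret_cells=(0, 4, 8))
otp_server = session_password(grid, nonce, secret_cells=(0, 4, 8))
assert otp_client == otp_server  # valid for this session only
```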

    Ubiquitous Computing

    The aim of this book is to give a treatment of the actively developing domain of Ubiquitous computing. Originally proposed by Mark D. Weiser, the concept of Ubiquitous computing enables real-time global sensing, context-aware information retrieval, multi-modal interaction with the user, and enhanced visualization capabilities. In effect, Ubiquitous computing environments give fundamentally new abilities to observe and interact with one's habitat at any time and from anywhere. In this domain, researchers are confronted with many foundational, technological, and engineering issues that were unknown before; detailed cross-disciplinary coverage of these issues is needed today for further progress and for widening the range of applications. This book collects twelve original works by researchers from eleven countries, clustered into four sections: Foundations, Security and Privacy, Integration and Middleware, and Practical Applications.

    A digital signature and watermarking based authentication system for JPEG2000 images

    In this thesis, a digital signature based authentication system is introduced that protects JPEG2000 images in different flavors, including fragile authentication and semi-fragile authentication. Fragile authentication protects the image at the code-stream level, while semi-fragile authentication protects it at the content level. Semi-fragile authentication can be further classified into lossy and lossless authentication; with lossless authentication, the original image can be recovered after verification. Lossless authentication and the new image compression standard, JPEG2000, are the main focus of this thesis.
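    Fragile, code-stream-level authentication amounts to binding a signature to the exact compressed bytes, so that any bit flip is detected. The sketch below illustrates only that idea; it uses an HMAC as a stand-in for a true public-key digital signature, and the byte string is a placeholder rather than a real JPEG2000 code-stream.

```python
import hashlib
import hmac

def sign_codestream(codestream: bytes, key: bytes) -> bytes:
    """Fragile authentication: tag the exact code-stream bytes.
    HMAC-SHA256 stands in here for a real public-key signature."""
    return hmac.new(key, codestream, hashlib.sha256).digest()

def verify_codestream(codestream: bytes, key: bytes, tag: bytes) -> bool:
    """Any single-bit change in the code-stream invalidates the tag."""
    return hmac.compare_digest(sign_codestream(codestream, key), tag)

key = b"demo-key"
stream = b"\xff\x4f\xff\x51"   # placeholder for JPEG2000 code-stream bytes
tag = sign_codestream(stream, key)
assert verify_codestream(stream, key, tag)
assert not verify_codestream(stream + b"\x00", key, tag)  # tamper detected
```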

    Large-scale image collection cleansing, summarization and exploration

    A perennially interesting topic in the field of large-scale image collection organization is how to effectively and efficiently conduct image cleansing, summarization, and exploration. The primary objective of such an image organization system is to enhance the user's exploration experience through redundancy removal and summarization of a large-scale image collection. An ideal system would discover and utilize the visual correlation among the images, reduce the redundancy in the collection, organize and visualize its structure, and facilitate exploration and knowledge discovery. In this dissertation, a novel system is developed for exploiting and navigating large-scale image collections. The system consists of the following key components: (a) junk image filtering by incorporating bilingual search results; (b) near-duplicate image detection using a coarse-to-fine framework; (c) concept network generation and visualization; (d) image collection summarization via dictionary learning for sparse representation; and (e) a multimedia practice of graffiti image retrieval and exploration.
    For junk image filtering, bilingual image search results, obtained for the same keyword-based query, are integrated to automatically identify the clusters of junk images and the clusters of relevant images. Within the relevant image clusters, the results are further refined by removing duplications under a coarse-to-fine structure. Duplicate pairs are detected with both a global feature (partition-based color histogram) and local features (CPAM and a SIFT Bag-of-Words model). The duplications are detected and removed from the collection to facilitate further exploration and visual correlation analysis.
    After junk image filtering and duplication removal, the visual concepts are organized and visualized in the proposed concept network. An automatic algorithm is developed to generate this visual concept network, which characterizes the visual correlation between pairs of image concepts. Multiple kernels are combined, and a kernel canonical correlation analysis algorithm is used to characterize the diverse visual similarity contexts between the image concepts. The FishEye visualization technique is implemented to facilitate navigation of image concepts through the concept network.
    To better assist exploration of a large-scale collection, we design an efficient summarization algorithm to extract representative exemplars. For this summarization task, a sparse dictionary (a small set of the most representative images) is learned to represent all the images in the given set; this sparse dictionary is treated as the summary of the image set. A simulated annealing algorithm is adopted to learn the sparse dictionary (image summary) by minimizing an explicit optimization function.
    To handle large-scale image collections, we have evaluated both the accuracy of the proposed algorithms and their computational efficiency. For each of the above tasks, we have conducted experiments on multiple publicly available image collections, such as ImageNet, NUS-WIDE, and LabelMe, and have observed very promising results compared to existing frameworks. The computational performance is also satisfactory for large-scale image collection applications.
The original intention in designing such a large-scale image collection exploration and organization system was to better serve the tasks of information retrieval and knowledge discovery. To this end, we apply the proposed system to a graffiti retrieval and exploration application and have received positive feedback.
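    Of the components above, the dictionary-learning summarizer is the easiest to caricature in code. The sketch below selects k exemplar images by simulated annealing on a k-medoids-style cost (each image's distance to its nearest exemplar); the cost function, cooling schedule, and toy features are assumptions for illustration, not the dissertation's actual objective.

```python
import math
import random
import numpy as np

def summary_cost(features, summary_idx):
    """Total distance from every image to its nearest summary exemplar."""
    d = np.linalg.norm(
        features[:, None, :] - features[summary_idx][None, :, :], axis=2)
    return d.min(axis=1).sum()

def anneal_summary(features, k, steps=2000, t0=1.0, seed=0):
    """Pick k exemplar images by simulated annealing on the k-medoids cost."""
    rng = random.Random(seed)
    n = len(features)
    current = rng.sample(range(n), k)
    cost = summary_cost(features, current)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9          # linear cooling schedule
        cand = current[:]
        cand[rng.randrange(k)] = rng.randrange(n)   # swap one exemplar
        if len(set(cand)) < k:
            continue                                # skip duplicate exemplars
        c = summary_cost(features, cand)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            current, cost = cand, c                 # accept (maybe uphill)
    return current, cost

# Toy stand-in for image descriptors (e.g. color histograms): 3 clusters.
gen = np.random.default_rng(0)
feats = np.vstack([gen.normal(c, 0.2, (30, 8)) for c in (0.0, 1.0, 2.0)])
idx, cost = anneal_summary(feats, k=3)
print("summary images:", sorted(idx), "cost:", round(cost, 2))
```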