
    Analysis of Multi-Threading and Cache Memory Latency Masking on Processor Performance Using Thread Synchronization Technique

    Multithreading is a technique in which a single processor executes multiple threads concurrently. It allows a task to be divided into separate threads that run simultaneously, increasing the utilization of available system resources and enhancing performance. When multiple threads share an object and one or more of them modify it, unpredictable outcomes may occur. Threads that exhibit poor locality of memory reference, such as those in database applications, often stall while waiting for a response from the memory hierarchy, an observation that points toward better management of pipeline contention. To assess the impact of memory latency on processor performance, a dual-core multithreaded (MT) machine with four thread contexts per core is used, with benchmarks chosen so that the workload includes programs with both favorable and unfavorable cache locality. To avoid wasting wake-up signals, this work proposes storing every wake-up call: a signal sent to the consumer or the producer is recorded in a variable rather than discarded. A semaphore serves exactly this purpose: it is a value kept in operating-system (kernel) storage that each process can check, and whose read and update operations execute atomically. It cannot be implemented in user mode, since a race condition can arise whenever two or more processors attempt to access the variable at the same time. This study includes code that measures the time taken to execute two functions, both sequentially and with threads, and plots the results. Note that sending multiple requests to a website simultaneously can trigger a flag that ultimately blocks access to the data, which necessitates some computation on the collected statistics. The execution time is reduced to one third when using threads compared to executing the functions sequentially, exemplifying the power of multithreading.
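
    The wake-up storage idea described above is what a counting semaphore provides. The following minimal sketch, in Python and with illustrative names not taken from the paper, shows how a release() issued before the consumer waits is counted rather than lost:

```python
# Minimal producer/consumer sketch (illustrative names, not from the paper).
# A counting semaphore "stores" wake-up calls: each release() increments a
# kernel-maintained counter, so a signal sent before the consumer waits is
# not lost, and acquire() consumes one stored signal or blocks if none exist.
import threading
import time

items = []
available = threading.Semaphore(0)  # starts at 0: no stored wake-ups yet

def producer():
    for i in range(5):
        items.append(i)
        available.release()  # record a wake-up even if nobody is waiting yet
        time.sleep(0.01)

def consumer():
    for _ in range(5):
        available.acquire()  # take one stored wake-up, or block until one arrives
        print("consumed", items.pop(0))

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```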

    Privacy preserving algorithms for newly emergent computing environments

    Privacy-preserving data usage ensures appropriate use of data without compromising sensitive information. Data privacy is a primary requirement, since customers' data is an asset to any organization and contains their private information. Simply sequestering data is not a solution: data must be shared, for purposes such as company welfare, research, and business, while also being kept private. Industries where data privacy is mandatory include healthcare, aviation, education, and federal law enforcement. In this dissertation we focus on data privacy schemes in emerging fields of computer science, namely health informatics, data mining, distributed cloud, biometrics, and mobile payments. Linking and mining medical records across different medical service providers is important to enhancing health care quality, and under HIPAA regulation medical records must be kept private. In real-world health care databases, records may well contain errors, and linking error-prone data while preserving privacy at the same time is very difficult. We introduce a privacy-preserving Error-Tolerant Linking Algorithm that enables linkage of error-prone medical records. Mining frequent sequential patterns, such as patient paths and treatment patterns, across multiple medical sites helps improve health care quality and research; we propose a privacy-preserving sequential pattern mining scheme across multiple medical sites. In a distributed cloud environment, resources are provided by users who are geographically distributed over a large area; since resources come from regular users, data privacy and security are the main concerns. We propose a privacy-preserving data storage mechanism among different users in a distributed cloud. Because managing a secret key for encryption is difficult in a distributed cloud, we propose a multilevel threshold secret sharing mechanism to protect it. Biometric authentication verifies user identity by means of the user's biometric traits; since biometrics are unique and can be stolen or misused by an adversary, they must be protected. We present a secure and privacy-preserving biometric authentication scheme using a watermarking technique. Mobile payments have become popular with the extensive use of mobile devices, and mobile payment applications need to be both very secure and efficient. We design and develop a mobile application for secure mobile payments, focusing on the user's biometric authentication as well as secure bank transactions, and propose a novel privacy-preserving biometric authentication algorithm for secure mobile payments.
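
    The multilevel threshold mechanism mentioned above builds on classic threshold secret sharing. As a point of reference only, here is a minimal sketch of standard Shamir (t, n) sharing in Python; the dissertation's multilevel variant is more involved, and all names and parameters here are illustrative:

```python
# Minimal sketch of Shamir (t, n) threshold secret sharing: the secret is the
# constant term of a random degree-(t-1) polynomial over a prime field, and
# any t shares reconstruct it by Lagrange interpolation at x = 0.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for demonstration secrets

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares such that any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```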

    MITRA: Robust Architecture for Distributed Metadata Indexing

    For storage systems in the post-exascale era, a fundamental challenge faced by the research community is efficient and scalable access to stored information while meeting the high-performance requirements of big data applications. In this dissertation, we study the limitations of existing state-of-the-art architectures and propose a system that addresses the challenges of scalability and high performance. Our proposed solution, called MITRA, supports several scientific formats, i.e., Hierarchical Data Format (HDF), network Common Data Form (netCDF), and Comma-Separated Values (CSV), and is composed of several software components that work together to provide high I/O throughput to user applications. The key novelty of MITRA lies in supporting a variety of file formats, generating and indexing metadata for scientific datasets, and optimizing data lookup time while keeping the storage subsystem scalable as the amount of data grows. MITRA generates and manages indices using a relational database that can be accessed effectively through conventional application programming interfaces (APIs). We evaluated the performance of MITRA and compared it with traditional approaches for ingestion speed, content processing, lookup time, and scalability of the generated indices. Our evaluation reveals that MITRA's rich metadata indices improve lookup performance by pruning the search space whenever the requested metadata is absent from the indices. Moreover, MITRA outperforms the existing approach in scalability as indices grow in size, by balancing the load across the available hardware resources.
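
    As a rough illustration of the general idea, not MITRA's actual schema or API, the sketch below stores metadata harvested from scientific files in a relational index so lookups can prune the search space without touching the files themselves (the file paths, variables, and attributes are hypothetical):

```python
# Toy relational metadata index: queries hit the database index only, never
# the underlying HDF/netCDF/CSV datasets.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metadata (
    file_path TEXT, fmt TEXT, var_name TEXT, attr_key TEXT, attr_val TEXT)""")
conn.execute("CREATE INDEX idx_attr ON metadata (attr_key, attr_val)")

# Hypothetical records, as might be harvested from file headers at ingestion.
rows = [
    ("/data/run1.h5", "HDF5",   "temperature", "units", "K"),
    ("/data/run1.h5", "HDF5",   "pressure",    "units", "Pa"),
    ("/data/obs.nc",  "netCDF", "temperature", "units", "K"),
]
conn.executemany("INSERT INTO metadata VALUES (?, ?, ?, ?, ?)", rows)

# Lookup: find every variable stored in Kelvin, across all indexed files.
hits = conn.execute(
    "SELECT file_path, var_name FROM metadata WHERE attr_key=? AND attr_val=?",
    ("units", "K")).fetchall()
print(hits)  # [('/data/run1.h5', 'temperature'), ('/data/obs.nc', 'temperature')]
```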

    COSPO/CENDI Industry Day Conference

    The conference's objective was to provide a forum where government information managers and industry information technology experts could have an open exchange, discuss their respective needs, and compare them to the available, or soon to be available, solutions. Technical summaries and points of contact are provided for the following sessions: secure products, protocols, and encryption; information providers; electronic document management and publishing; information indexing, discovery, and retrieval (IIDR); automated language translators; IIDR - natural language capabilities; IIDR - advanced technologies; IIDR - distributed heterogeneous and large database support; and communications - speed, bandwidth, and wireless.

    Enabling Hyperscale Web Services

    Modern web services such as social media, online messaging, web search, video streaming, and online banking often support billions of users, requiring data centers that scale to hundreds of thousands of servers, i.e., hyperscale. In fact, the world continues to expect hyperscale computing to drive more futuristic applications such as virtual reality, self-driving cars, conversational AI, and the Internet of Things. This dissertation presents technologies that will enable tomorrow’s web services to meet the world’s expectations. The key challenge in enabling hyperscale web services arises from two important trends. First, over the past few years, there has been a radical shift in hyperscale computing due to an unprecedented growth in data, users, and web service software functionality. Second, modern hardware can no longer support this growth in hyperscale trends due to a decline in hardware performance scaling. To enable this new hyperscale era, hardware architects must become more aware of hyperscale software needs and software researchers can no longer expect unlimited hardware performance scaling. In short, systems researchers can no longer follow the traditional approach of building each layer of the systems stack separately. Instead, they must rethink the synergy between the software and hardware worlds from the ground up. This dissertation establishes such a synergy to enable futuristic hyperscale web services. This dissertation bridges the software and hardware worlds, demonstrating the importance of that bridge in realizing efficient hyperscale web services via solutions that span the systems stack. The specific goal is to design software that is aware of new hardware constraints and architect hardware that efficiently supports new hyperscale software requirements. This dissertation spans two broad thrusts: (1) a software and (2) a hardware thrust to analyze the complex hyperscale design space and use insights from these analyses to design efficient cross-stack solutions for hyperscale computation. In the software thrust, this dissertation contributes uSuite, the first open-source benchmark suite of web services built with a new hyperscale software paradigm, which is used in academia and industry to study hyperscale behaviors. Next, this dissertation uses uSuite to study software threading implications in light of today’s hardware reality, identifying new insights in the age-old research area of software threading. Driven by these insights, this dissertation demonstrates how threading models must be redesigned at hyperscale by presenting an automated approach and tool, uTune, that makes intelligent run-time threading decisions. In the hardware thrust, this dissertation architects both commodity and custom hardware to efficiently support hyperscale software requirements. First, this dissertation characterizes commodity hardware’s shortcomings, revealing insights that influenced commercial CPU designs. Based on these insights, this dissertation presents an approach and tool, SoftSKU, that enables cheap commodity hardware to efficiently support new hyperscale software paradigms, improving the efficiency of real-world web services that serve billions of users, saving millions of dollars, and meaningfully reducing the global carbon footprint. This dissertation also presents a hardware-software co-design, uNotify, that redesigns commodity hardware with minimal modifications by using existing hardware mechanisms more intelligently to overcome new hyperscale overheads.
Next, this dissertation characterizes how custom hardware must be designed at hyperscale, resulting in industry-academia benchmarking efforts, commercial hardware changes, and improved software development. Based on this characterization’s insights, this dissertation presents Accelerometer, an analytical model that estimates gains from hardware customization. Multiple hyperscale enterprises and hardware vendors use Accelerometer to make well-informed hardware decisions.
    Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169802/1/akshitha_1.pd
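
    As a toy illustration of run-time threading decisions, and not uTune's actual algorithm, the sketch below measures how batch latency varies with thread-pool size so a runtime could pick a size per load level (the workload and candidate sizes are invented for the example):

```python
# Toy run-time threading experiment: time a batch of I/O-bound requests under
# different thread-pool sizes. A runtime could repeat such measurements per
# load level and cache the best-performing pool size for each.
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(_):
    time.sleep(0.001)  # stand-in for an I/O-bound microservice request

def measure(pool_size: int, n_requests: int) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        list(pool.map(handle_request, range(n_requests)))
    return time.perf_counter() - start

for size in (1, 4, 16, 64):  # candidate threading configurations
    print(f"{size:>3} workers: {measure(size, 256):.3f}s")
```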

    Graphics Processing Unit Accelerated Coarse-Grained Protein-Protein Docking

    Graphics processing unit (GPU) architectures are increasingly used for general-purpose computing, providing the means to migrate algorithms from the SISD paradigm, synonymous with CPU architectures, to the SIMD paradigm. Generally programmable commodity multi-core hardware can yield significant speed-ups for migrated codes. Because of their computational complexity, molecular simulations in particular stand to benefit from GPU acceleration. Coarse-grained molecular models offer reduced complexity compared to the traditional, computationally expensive, all-atom models. However, while coarse-grained models are much less computationally expensive than the all-atom approach, the pairwise energy calculations required at each iteration of the algorithm remain a computational bottleneck for a serial implementation. In this work, we describe a GPU implementation of the Kim-Hummer coarse-grained model for protein docking simulations using a Replica Exchange Monte Carlo (REMC) method. Our highly parallel implementation vastly increases the size and time scales accessible to molecular simulation. We describe in detail the complex process of migrating the algorithm to a GPU, as well as the effect of various GPU approaches and optimisations on algorithm speed-up. Our benchmarking and profiling show that the GPU implementation scales very favourably compared to a CPU implementation. Small reference simulations benefit from a modest speed-up of between 4 and 10 times, while large simulations, containing many thousands of residues, benefit from asynchronous GPU acceleration to a far greater degree and exhibit speed-ups of up to 1400 times. We demonstrate the utility of our system on some model problems. We investigate the effects of macromolecular crowding, using a repulsive crowder model, and find our results to agree with those predicted by scaled particle theory. We also perform initial studies into the simulation of viral capsid assembly, demonstrating the crude assembly of capsid pieces into a small fragment. This is the first implementation of REMC docking on a GPU, and the speed-ups achieved alter the tractability of large-scale simulations: simulations that would otherwise require months or years can be performed in days or weeks using a GPU.
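
    To illustrate why the pairwise energy evaluation dominates the cost and parallelizes well, the sketch below computes an O(N^2) pairwise sum over coarse-grained beads in vectorized Python; the Lennard-Jones-style potential and all parameters are illustrative stand-ins, not the Kim-Hummer model:

```python
# Every bead pair contributes one energy term, so the work grows as O(N^2);
# each term is independent, which is exactly the structure SIMD hardware likes.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.random((500, 3)) * 50.0  # 500 coarse-grained beads in a 50 A box

def pairwise_energy(coords: np.ndarray, sigma=4.0, eps=0.2) -> float:
    diff = coords[:, None, :] - coords[None, :, :]     # all pair displacement vectors
    r = np.sqrt((diff ** 2).sum(-1))                   # all pair distances
    iu = np.triu_indices(len(coords), k=1)             # count each pair once
    sr6 = (sigma / r[iu]) ** 6
    return float(np.sum(4.0 * eps * (sr6 ** 2 - sr6))) # illustrative LJ 12-6 sum

print(pairwise_energy(coords))
```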

    Information Flow Control in Spring Web Applications

    Companies rely extensively on frameworks and APIs when developing their systems, as these mechanisms are quite advantageous. Two of the most conspicuous benefits are their ease of use and the workload reduction they bring, allowing for shorter and more responsive development cycles. However, most frameworks do not provide security properties such as data confidentiality in the way other tools do. A prime example is Spring, the most heavily used Java web development framework, which hosts a vast array of functionalities, ranging from data-layer features (e.g., Hibernate and JPA) and security providers to metrics providers that report statistics on the application itself, as well as a layer for REST communication. However, to achieve such advanced functionality, Spring resorts to bytecode manipulation and generation during its startup period, hindering the use of formal analysis tools that rely on similar processes in their execution. In a broader sense, we provide a comprehensive approach to the static analysis of Spring-based web applications. We introduce hooks in the Spring pipeline, making feasible the formal analysis and manipulation of the complete, run-time-generated application bytecode through a well-defined interface. The hooks provide not only access to the entire web application's bytecode but also allow for the replacement of the application's components, enabling more complex analyses that require instrumentation of the application. To address data confidentiality issues in web applications developed with this framework, we propose integrating information flow control tools into the framework's pipeline. Namely, we combine Spring with Snitch, a tool for hybrid information flow control in Java bytecode, which is used as a case study.

    Securing Fog Federation from Behavior of Rogue Nodes

    As the technological revolution advanced, information security evolved with an increased need for confidential data protection on the internet. Individuals and organizations typically prefer outsourcing their confidential data to the cloud for processing and storage. As promising as the cloud computing paradigm is, it creates challenges, from data security to latency in data computation and delivery to end-users. In response to these challenges, Cisco introduced the fog computing paradigm in 2012, with the intent of overcoming issues such as latency and communication overhead by bringing computing and storage resources close to the ground and the end-users. Fog computing was, however, considered an extension of cloud computing and, as such, inherited the same security and privacy challenges encountered by traditional cloud computing. These challenges accelerated the research community's efforts to find practical solutions. In this dissertation, we present three approaches for individual and organizational data security and protection while the data is stored in fog nodes or in the cloud. We also consider protecting these data while in transit between fog nodes and the cloud, and against rogue fog nodes, man-in-the-middle attacks, and curious cloud service providers. The techniques described successfully satisfy each of the main security objectives of confidentiality, integrity, and availability. Further, we study the impact of rogue fog nodes on end-user devices. These approaches include a new concept, the Fog Federation (FF), whose purpose is to minimize communication overhead and latency between the Fog Nodes (FNs) and the Cloud Service Provider (CSP) during the period the system is unavailable while a rogue Fog Node (FN) is being ousted. Further, we consider minimizing the amount of data in danger of breach by rogue fog nodes. We demonstrate the efficiency and feasibility of each approach by implementing simulations and analyzing security and performance.