
    Analysis and Design Security Primitives Based on Chaotic Systems for eCommerce

    Security is considered the most important requirement for the success of electronic commerce, which is built on the security of hash functions, encryption algorithms and pseudorandom number generators. Chaotic systems and security algorithms share similar properties, including sensitivity to any change in the initial parameters, unpredictability, deterministic nature and random-like behaviour. Several security algorithms based on chaotic systems have been proposed; unfortunately, some of them were found to be insecure and/or slow. In view of this, designing new secure and fast security algorithms based on chaotic systems which guarantee integrity, authentication and confidentiality is essential for electronic commerce development. In this thesis, we comprehensively explore the analysis and design of security primitives based on chaotic systems for electronic commerce: hash functions, encryption algorithms and pseudorandom number generators. Novel hash functions, encryption algorithms and pseudorandom number generators based on chaotic systems are proposed for electronic commerce. The security of the proposed algorithms is analyzed using well-known statistical tests in this field. In addition, a new one-dimensional triangle-chaotic map (TCM) with perfect chaotic behaviour is presented. We have compared the proposed chaos-based hash functions, block cipher and pseudorandom number generator with well-known algorithms, and the comparison results show that the proposed algorithms outperform some existing algorithms. Several analyses and computer simulations are performed on the proposed algorithms to verify their characteristics, confirming that they satisfy the characteristics and conditions of security algorithms. The algorithms proposed in this thesis have high potential for adoption in e-commerce applications and protocols.
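
    As a rough illustration of the kind of chaos-based pseudorandom generation discussed above, the following Python sketch iterates the classic logistic map and thresholds each state to emit bits. The thesis proposes its own triangle-chaotic map (TCM) and generator designs, which are not reproduced here; the logistic map, the parameter r and the thresholding rule below are generic stand-ins.

    # Minimal sketch of a chaos-based pseudorandom bit generator. The logistic
    # map is used as a stand-in for the thesis' triangle-chaotic map (TCM),
    # whose definition is not given in this abstract.
    def chaotic_prbg(x0, r=3.99, n_bits=128):
        """Generate n_bits by iterating x_{k+1} = r*x_k*(1-x_k) and
        thresholding each state at 0.5."""
        assert 0.0 < x0 < 1.0, "seed must lie in the open interval (0, 1)"
        x, bits = x0, []
        for _ in range(n_bits):
            x = r * x * (1.0 - x)              # one chaotic iteration
            bits.append(1 if x >= 0.5 else 0)  # extract one bit per state
        return bits

    print(chaotic_prbg(0.6180339887, n_bits=32))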

    A Low Computational Cost RGB Color Image Encryption Scheme Process based on PWLCM Confusion, Z/nZ Diffusion and ECBC Avalanche Effect

    In this work, three sub-processes are serially integrated into a single process in order to construct a robust new image encryption scheme for all types of images, especially color images. This integration architecture aims to create a strong avalanche effect while respecting the confusion and diffusion constraints identified by Claude Shannon as properties required of a secure encryption scheme. The performance of the proposed encryption scheme is measured and discussed through several analyses, including computational cost analysis, key space analysis, randomness metrics analysis, histogram analysis, adjacent pixel correlation and entropy analysis. The experimental results demonstrate and validate the performance and robustness of the proposed scheme.
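
    The paper's exact PWLCM confusion, Z/nZ diffusion and ECBC avalanche stages are not reproduced here; the hedged Python sketch below only illustrates the general pattern, using the standard piecewise linear chaotic map (PWLCM) to drive a pixel permutation (confusion) followed by chained modular addition in Z/256Z (diffusion). The seed x0, the control parameter p and the chaining rule are illustrative assumptions.

    # Illustrative confusion/diffusion pass over a flat list of 8-bit pixels.
    def pwlcm(x, p=0.37):
        """One iteration of the piecewise linear chaotic map on [0, 1]."""
        if x < p:
            return x / p
        if x < 0.5:
            return (x - p) / (0.5 - p)
        if x < 1.0 - p:
            return (1.0 - p - x) / (0.5 - p)
        return (1.0 - x) / p

    def encrypt_pixels(pixels, x0=0.123456, p=0.37):
        """Confuse pixel order with a PWLCM-driven permutation, then diffuse
        each pixel with its predecessor modulo 256 (Z/256Z)."""
        n, x, keystream = len(pixels), x0, []
        for _ in range(n):
            x = pwlcm(x, p)
            keystream.append(x)
        order = sorted(range(n), key=lambda i: keystream[i])    # confusion
        shuffled = [pixels[i] for i in order]
        out, prev = [], 0
        for i, px in enumerate(shuffled):                       # diffusion
            prev = (px + prev + int(keystream[i] * 255)) % 256
            out.append(prev)
        return out

    print(encrypt_pixels([10, 200, 37, 94, 255, 0, 128, 64]))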

    Energy cost minimization with job security guarantee in Internet data center

    With the proliferation of big data applications and the resource demand of Internet data centers (IDCs), energy costs have been skyrocketing, attracting a great deal of attention and raising many energy optimization and management issues. However, security, which has often been overlooked across a wide range of applications, is another critical concern and is even ranked as the greatest challenge in IDCs. In this paper, we propose an energy cost minimization (ECM) algorithm with a job security guarantee for IDCs in deregulated electricity markets. Randomly arriving jobs are routed to a FIFO queue, and a heuristic algorithm is devised to select security levels that guarantee the job risk probability constraint. The energy optimization problem is then formulated by taking the temporal diversity of the electricity price into account. Finally, an online energy cost minimization algorithm is designed to solve the problem within the Lyapunov optimization framework, which offers provable energy cost optimization and delay guarantees. This algorithm aggressively and adaptively seizes periods of low electricity prices to process workloads and defers the execution of delay-tolerant workloads when the price is high. Simulations based on real-life electricity prices demonstrate the feasibility and effectiveness of the proposed algorithm.
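
    As a hedged sketch of the drift-plus-penalty idea behind Lyapunov-based online scheduling (not the paper's exact ECM algorithm), the toy Python routine below serves the FIFO backlog only when queue pressure outweighs the electricity price weighted by a tunable parameter V, which trades energy cost against queueing delay. The prices, arrival counts and service capacity are made-up illustrative values.

    def schedule_slot(queue_len, price, arrivals, max_service, V=200.0):
        """Return (served_jobs, new_queue_len) for one time slot.
        Serve at full capacity when Q > V * price, otherwise defer the work
        until the electricity price drops."""
        served = min(max_service if queue_len > V * price else 0, queue_len)
        return served, queue_len - served + arrivals

    Q = 0
    prices = [0.12, 0.35, 0.08, 0.40, 0.10]   # assumed per-slot prices ($/kWh)
    arrivals = [30, 20, 25, 40, 10]           # assumed per-slot job arrivals
    for price, a in zip(prices, arrivals):
        served, Q = schedule_slot(Q, price, a, max_service=60)
        print(f"price={price:.2f} served={served} backlog={Q}")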

    Digital Design of New Chaotic Ciphers for Ethernet Traffic

    In recent years there has been great progress in the field of cryptography, and many encryption algorithms as well as other cryptographic functions have been proposed. Nevertheless, despite this progress, there is still great interest today in creating new cryptographic primitives or improving existing ones. Some of the reasons are the following:
    • First, owing to the development of communication technologies, the amount of information being transmitted is constantly increasing. In this context, many applications require encrypting large amounts of data in real time or within a very short time interval; one example is the real-time encryption of high-resolution video. Unfortunately, most of the encryption algorithms in use today cannot encrypt large amounts of data at high speed while maintaining high security standards.
    • Because of the large increase in the computing power of modern computers, many algorithms traditionally considered secure can now be attacked by brute-force methods in a reasonable amount of time. For example, when the DES (Data Encryption Standard) algorithm was first released, its key size was only 56 bits, whereas today the NIST (National Institute of Standards and Technology) recommends that symmetric encryption algorithms use keys of at least 112 bits. In addition, significant advances are being made in quantum computing, and large-scale quantum computers are expected to be developed in the future. If so, it has been shown that some algorithms in current use, such as RSA (Rivest Shamir Adleman), could be attacked successfully.
    • Alongside the progress in cryptography, there has also been great progress in cryptanalysis, so new vulnerabilities are found and new attacks proposed constantly. It is therefore necessary to look for new algorithms that are robust against all known attacks in order to replace algorithms in which vulnerabilities have been found. In this regard, it is worth noting that algorithms such as RSA and ElGamal rely on the assumption that certain problems, such as factoring the product of two large primes or computing discrete logarithms, are hard to solve. However, it cannot be ruled out that algorithms able to solve these problems quickly (in polynomial time) will be developed in the future.
    • Ideally, the keys used to encrypt data should be generated randomly so that they are completely unpredictable. Since the sequences produced by pseudorandom number generators (PRNGs) are predictable, they are potentially vulnerable to cryptanalysis, and keys are therefore usually generated with true random number generators (TRNGs). Unfortunately, TRNGs normally generate bits at a lower rate than PRNGs, and the generated sequences tend to have worse statistical properties, making a post-processing stage necessary. Using a low-quality TRNG to generate keys can compromise the security of the whole encryption system, as has already happened on several occasions. The design of new TRNGs with good statistical properties is therefore a topic of great interest.
    In summary, there are clearly many important research lines in cryptography. Since the field is very broad, this thesis focuses on three of them: the design of new TRNGs, the design of new fast and secure chaotic stream ciphers and, finally, the implementation of new cryptosystems for Gigabit Ethernet optical communications at 1 Gbps and 10 Gbps. These cryptosystems are based on the proposed chaotic algorithms, but they have been adapted to perform the encryption at the physical layer while preserving the line-coding format. In this way, the systems not only encrypt the data but also prevent an attacker from knowing whether a communication is taking place at all. The main aspects covered in this thesis are the following:
    • A study of the state of the art, including the encryption algorithms currently in use. This part analyses the main problems of current standard encryption algorithms and the solutions that have been proposed; such a study is necessary in order to design new algorithms that solve these problems.
    • The proposal of new TRNGs suitable for key generation. Two different possibilities are explored: the noise generated by a MEMS (Microelectromechanical Systems) accelerometer and the noise generated by DNOs (Digital Nonlinear Oscillators). Both cases are analysed in detail by applying several statistical tests to sequences obtained at different sampling frequencies. A simple post-processing algorithm is also proposed and implemented to improve the randomness of the generated sequences. Finally, the possibility of using these TRNGs as key generators is discussed.
    • The proposal of new encryption algorithms that are fast, secure and implementable with a reduced amount of resources. Among all the possibilities, this thesis focuses on chaotic systems since, thanks to intrinsic properties such as ergodicity and random-like behaviour, they can be a good alternative to classical encryption schemes. To overcome the problems that arise when these systems are digitized, several strategies are proposed and studied: using a multi-encryption scheme, changing the control parameters of the chaotic systems and perturbing the chaotic orbits.
    • The implementation of the proposed algorithms on a Virtex 7 FPGA. The different implementations are analysed and compared in terms of power consumption, area usage, encryption speed and achieved security level. One of these designs is selected for implementation as an ASIC (Application Specific Integrated Circuit) in a 0.18 µm technology. In any case, the proposed solutions can also be implemented on other platforms and technologies.
    • Finally, the proposed algorithms are adapted and applied to Gigabit Ethernet optical communications. In particular, cryptosystems that perform the encryption at the physical layer are implemented for 1 Gbps and 10 Gbps. To encrypt at the physical layer, the algorithms proposed in the previous sections are adapted so that they preserve the line-coding format, 8b/10b in the case of 1 Gb Ethernet and 64b/66b in the case of 10 Gb Ethernet. In both cases, the cryptosystems are implemented on a Virtex 7 FPGA and an experimental setup is built, including two SFP (Small Form-factor Pluggable) modules capable of transmitting at up to 10.3125 Gbps over multimode fiber at 850 nm. With this setup, it is verified that the encryption systems operate correctly and synchronously, that the encryption is strong (it passes all the security tests) and that the traffic pattern is hidden.
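
    Among the TRNG work described above, the thesis proposes a simple post-processing algorithm whose details are not given in this abstract; as an illustration of the kind of lightweight debiasing stage such a pipeline typically uses, the Python sketch below implements the classic von Neumann corrector.

    def von_neumann_corrector(raw_bits):
        """Map non-overlapping bit pairs 01 -> 0 and 10 -> 1, discarding 00
        and 11; removes bias from independent but biased raw TRNG bits at the
        cost of a reduced output rate."""
        out = []
        for i in range(0, len(raw_bits) - 1, 2):
            a, b = raw_bits[i], raw_bits[i + 1]
            if a != b:
                out.append(0 if (a, b) == (0, 1) else 1)
        return out

    print(von_neumann_corrector([1, 1, 0, 1, 0, 0, 1, 0, 0, 1]))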

    Investigation of Some Self-Optimizing Control Problems for Net-Zero Energy Buildings

    Green buildings are sustainable buildings designed to be environmentally responsible and resource efficient. The Net-Zero Energy Building (NZEB) concept is anchored on two pillars: reducing energy consumption and enhancing local energy generation. In other words, efficient operation of existing building equipment and efficient power generation from building-integrated renewable energy sources are two important factors in NZEB development. Heating, ventilation and air conditioning (HVAC) systems are an important class of building equipment responsible for a large portion of building energy usage, while the building-integrated photovoltaic (BIPV) system is well received as the key technology for local generation of clean power. Building system operation is a low-investment practice that aims at low operation and maintenance costs. However, building HVAC and BIPV systems are subject to complicated intrinsic processes and highly variable environmental conditions and occupant behavior. Control, optimization and monitoring of such systems call for simple and effective approaches that require minimal model information and rely on a small number of robust sensor measurements. Self-optimizing control strategies promise a competitive platform for control, optimization and control-integrated monitoring of building systems, and especially for the development of cost-effective NZEBs. This dissertation endorses this statement with three pieces of work relevant to building HVAC and BIPV, contributing several small steps towards the broader application of the self-optimizing control paradigm. The study applies self-optimizing control techniques to improve the energy efficiency of NZEBs in two respects. First, regarding building HVAC efficiency, a dither-based extremum seeking control (DESC) scheme is proposed for energy-efficient operation of the chilled-water system typically used in commercial building ventilation and air conditioning (VAC) systems. To evaluate the effectiveness of the proposed control strategy, a Modelica-based dynamic simulation model of a chilled-water chiller-tower plant is developed, consisting of a screw chiller and a mechanical-draft counter-flow wet cooling tower. The steady-state performance of the cooling tower model is validated against experimental data from a classic paper, and good agreement is observed. The DESC scheme takes the total power consumption of the chiller compressor and the tower fan as feedback, and uses the fan speed setting as the control input. The inner-loop controllers for the chiller operation include two proportional-integral (PI) control loops that regulate the evaporator superheat and the chilled water temperature. Simulations conducted on the complete dynamic model under different environmental conditions demonstrated the effectiveness of the proposed ESC strategy under abrupt changes of ambient conditions and load, and the energy-saving potential in these cases is also evaluated. A back-calculation based anti-windup ESC is also simulated to handle the integral windup problem caused by actuator saturation. Second, both maximum power point tracking (MPPT) and control-integrated diagnostics are investigated for BIPV with two different extremum seeking control strategies, both of which would contribute to reducing the cost of energy (COE). In particular, adaptive extremum seeking control (AESC) is applied to PV MPPT, based on a PV model with known structure but an unknown nonlinear current-voltage characteristic. The nonlinear uncertainty is approximated by a radial basis function neural network (RBFNN), and a Lyapunov-based inverse optimal design technique is applied to achieve parameter estimation and gradient-based extremum seeking. A simulation study is performed for scenarios of temperature change, irradiance change and combined change of temperature and irradiance, with successful results observed in all cases. Furthermore, the AESC simulation is compared to the DESC simulation, and AESC demonstrates much faster transient responses under the various scenarios of ambient change. Many PV degradation mechanisms manifest as changes in the internal resistance. A scheme for detecting changes in the PV internal shunt resistance is therefore proposed using the signals already available in the DESC-based MPPT with a square-wave dither, and the impact of the internal resistance on the transient characteristics of step responses is justified using small-signal transfer-function analysis. A simulation study is performed for both single-string and multi-string PV examples, and both cases demonstrate successful results; a monotonic relationship between the integral error indices and the shunt internal resistance is clearly observed. In particular, for the multi-string case the inter-channel coupling is weak, which indicates that the monitoring remains consistent in multi-string operation. The proposed scheme enables online monitoring of the internal resistance condition without any additional sensor, which benefits the further development of PV degradation detection techniques.
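
    As a minimal, hedged illustration of how dither-based extremum seeking works in this setting, the Python sketch below drives a scalar input toward the maximum of an assumed static performance map by injecting a sinusoidal dither, demodulating the measured objective and integrating the gradient estimate. The map J(u), the gains and the dither parameters are placeholders, not the chiller-tower or PV models used in the dissertation.

    import math

    def J(u):
        """Assumed static performance map with its maximum at u = 3,
        standing in for total plant power (sign-flipped) or PV power."""
        return -(u - 3.0) ** 2 + 10.0

    u_hat = 0.0          # current estimate of the optimal input
    a, omega = 0.2, 5.0  # dither amplitude and frequency (rad/s)
    k, dt = 0.8, 0.01    # integrator gain and simulation time step
    y_avg = J(u_hat)     # slow low-pass estimate acting as a washout filter

    for step in range(20000):
        t = step * dt
        dither = a * math.sin(omega * t)
        y = J(u_hat + dither)                         # measured objective
        y_avg += (y - y_avg) * dt / 2.0               # strip the DC component
        grad_est = (y - y_avg) * math.sin(omega * t)  # demodulate the gradient
        u_hat += k * grad_est * dt                    # integrate (gradient ascent)

    print(f"u_hat = {u_hat:.2f} (true optimum at u = 3)")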

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application that requires extracting information from images. The analysis calls for highly sophisticated numerical and analytical methods, particularly in medicine, security and other fields where the results of the processing are of vital importance. This is evident from the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
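
    As a small, self-contained example of the entropy measure many of these analyses build on, the Python sketch below computes the Shannon entropy of an 8-bit pixel list from its grey-level frequencies; the pixel data are synthetic and purely illustrative.

    import math
    from collections import Counter

    def shannon_entropy(pixels):
        """H = -sum(p_i * log2(p_i)) over the grey-level frequencies p_i."""
        counts, n = Counter(pixels), len(pixels)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # A uniformly distributed 8-bit image reaches the maximum of 8 bits,
    # while a strongly biased one falls far below it.
    print(round(shannon_entropy(list(range(256)) * 4), 4))       # 8.0
    print(round(shannon_entropy([0] * 900 + [255] * 124), 4))    # about 0.53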