15 research outputs found

    WARDOG: Awareness detection watchdog for Botnet infection on the host device

    Botnets nowadays constitute one of the most dangerous security threats worldwide. High volumes of infected machines are controlled by a malicious entity and perform coordinated cyber-attacks. The problem will become even worse in the era of the Internet of Things (IoT), as the number of insecure devices is expected to increase exponentially. This paper presents WARDOG – an awareness and digital forensic system that informs the end-user of a botnet infection, exposes the botnet infrastructure, and captures verifiable data that can be utilized in a court of law. The responsible authority gathers all information and automatically generates unified documentation for the case. The document contains undisputed forensic information, tracking all involved parties and their roles in the attack. The deployed security mechanisms and the overall administration setting ensure non-repudiation of performed actions and enforce accountability. The provided properties are verified through theoretical analysis. In a simulated environment, the effectiveness of the proposed solution in mitigating botnet operations is also tested against real attack strategies captured by the FORTHcert honeypots, where it outperforms state-of-the-art solutions. Moreover, a preliminary version is implemented on real computers and IoT devices, highlighting the low computational and communication overheads of WARDOG in the field.

    Enabling Efficient Communications Over Millimeter Wave Massive MIMO Channels Using Hybrid Beamforming

    The use of massive multiple-input multiple-output (MIMO) over millimeter wave (mmWave) channels is the new frontier for fulfilling the exigent requirements of next-generation wireless systems and solving the impending wireless network capacity crunch. Massive MIMO systems and mmWave channels offer large numbers of antennas, higher carrier frequencies, and wider signaling bandwidths. Unleashing the full potential of these tremendous degrees of freedom (dimensions) hinges on the practical deployment of those technologies. Hybrid analog and digital beamforming is considered a stepping-stone to the practical deployment of mmWave massive MIMO systems, since it significantly reduces their operating and implementation costs, energy consumption, and system design complexity. The prevalence of mmWave and massive MIMO technologies in next-generation wireless systems necessitates developing agile and cost-efficient hybrid beamforming solutions that match the various use-cases of these systems. In this thesis, we propose hybrid precoding and combining solutions that are tailored to the needs of these specific cases and account for the main limitations of hybrid processing. The proposed solutions leverage the sparsity and spatial correlation of mmWave massive MIMO channels to reduce the feedback overhead and computational complexity of hybrid processing. Real-time use-cases of next-generation wireless communication, including connected cars, virtual/augmented reality, and high-definition video transmission, require high-capacity and low-latency wireless transmission. On the physical-layer level, this entails adopting near-capacity-achieving transmission schemes with very low computational delay. Motivated by this, we propose low-complexity hybrid precoding and combining schemes for massive MIMO systems with partially and fully-connected antenna array structures.
Leveraging the disparity in the dimensionality of the analog and digital processing matrices, we develop a two-stage channel diagonalization design approach that reduces the computational complexity of hybrid precoding and combining while maintaining high spectral efficiency. In particular, the analog processing stage is designed to maximize the antenna array gain, avoiding computationally intensive operations such as matrix inversion and singular value decomposition in high dimensions. The low-dimensional digital processing stage, on the other hand, is designed to maximize the spectral efficiency of the system. Computational complexity analysis shows that the proposed schemes offer significant savings compared to prior works, with asymptotic computational complexity reductions ranging between 80% and 98%. Simulation results validate that the spectral efficiency of the proposed schemes is near-optimal; in certain scenarios the signal-to-noise-ratio (SNR) gap to the optimal fully-digital spectral efficiency is less than 1 dB. On the other hand, integrating mmWave and massive MIMO into cellular use-cases requires hybrid beamforming schemes that utilize limited channel state information at the transmitter (CSIT) in order to adapt the transmitted signals to the current channel. This is mainly because obtaining perfect CSIT in the frequency division duplexing (FDD) architecture, which dominates cellular systems, poses serious concerns due to its large training and excessive feedback overhead. Motivated by this, we develop low-overhead hybrid precoding algorithms for selecting the baseband digital and radio frequency (RF) analog precoders from statistically skewed DFT-based codebooks.
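The two-stage idea can be sketched as follows. This is a minimal single-user, narrowband illustration with arbitrary dimensions and an i.i.d. channel standing in for a mmWave one; it is not the thesis's exact algorithm, only the general pattern of a phase-only analog stage followed by a low-dimensional digital stage:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Nrf, Ns = 64, 16, 4, 2   # TX antennas, RX antennas, RF chains, streams

# Narrowband channel (i.i.d. Gaussian here purely for illustration).
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Stage 1 (analog): phase-only precoder matched to the dominant right
# singular directions; extracting phases keeps the constant-modulus
# constraint of analog phase shifters while harvesting array gain.
_, _, Vh = np.linalg.svd(H)
F_rf = np.exp(1j * np.angle(Vh.conj().T[:, :Nrf])) / np.sqrt(Nt)

# Stage 2 (digital): operate on the low-dimensional effective channel,
# so the expensive SVD is Nr x Nrf instead of Nr x Nt.
H_eff = H @ F_rf
_, _, Vh_eff = np.linalg.svd(H_eff)
F_bb = Vh_eff.conj().T[:, :Ns]

F = F_rf @ F_bb                    # overall hybrid precoder
F /= np.linalg.norm(F, 'fro')      # total transmit power normalization
```

The computational saving comes from the second SVD being Nrf-dimensional rather than Nt-dimensional, which is the dimensionality disparity exploited above.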
The proposed algorithms aim at maximizing the spectral efficiency, by minimizing the chordal distance between the optimal unconstrained precoder and the hybrid beamformer in the single-user case and maximizing the signal-to-interference-plus-noise ratio in the multi-user case. Mathematical analysis shows that the proposed algorithms are asymptotically optimal as the number of transmit antennas goes to infinity and the mmWave channel has a limited number of paths. Moreover, it shows that the performance gap between the lower and upper bounds depends heavily on how many DFT columns are aligned with the largest eigenvectors of the transmit antenna array response of the mmWave channel, or equivalently the transmit channel covariance matrix when only statistical channel knowledge is available at the transmitter. Further, we verify the performance of the proposed algorithms numerically; the obtained results illustrate that the spectral efficiency of the proposed algorithms can approach that of the optimal precoder in certain scenarios. Furthermore, these results illustrate that the proposed hybrid precoding schemes achieve superior spectral efficiency while requiring lower (or at most comparable) channel feedback overhead in comparison with the prior art.
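The chordal-distance criterion can be made concrete with a small sketch. The greedy DFT-column selection below is an illustrative stand-in (with made-up dimensions and an unskewed codebook), not the thesis's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Ns = 32, 2

# Optimal unconstrained precoder: top right singular vectors of the channel.
H = (rng.standard_normal((8, Nt)) + 1j * rng.standard_normal((8, Nt))) / np.sqrt(2)
_, _, Vh = np.linalg.svd(H)
F_opt = Vh.conj().T[:, :Ns]              # Nt x Ns, orthonormal columns

# Unitary DFT codebook: column i steers toward the i-th uniform direction.
D = np.fft.fft(np.eye(Nt)) / np.sqrt(Nt)

# Greedy selection: keep the DFT columns best aligned with span(F_opt);
# maximizing ||F_opt^H d_i|| approximately minimizes the chordal distance
# between the chosen subspace and the optimal one.
scores = np.linalg.norm(F_opt.conj().T @ D, axis=0)
idx = np.argsort(scores)[::-1][:Ns]
F_rf = D[:, idx]

# Chordal distance between the two Ns-dimensional subspaces
# (both matrices have orthonormal columns, so this closed form applies).
d_chordal = np.sqrt(max(Ns - np.linalg.norm(F_opt.conj().T @ F_rf, 'fro')**2, 0.0))
```

As the abstract notes, the quality of such a selection hinges on how well the DFT columns align with the channel's dominant eigendirections.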

    Strongly Interacting Quantum Systems out of Equilibrium

    The main topic of this thesis is the study of many-body effects in strongly correlated one-dimensional or quasi-one-dimensional condensed matter systems. These systems are characterized by strong quantum and thermal fluctuations, which make mean-field methods fail and call for a fully numerical approach. Fortunately, a numerical method exists that allows one to treat unusually large one-dimensional systems at very high precision: the density-matrix renormalization group method (DMRG), introduced by Steve White in 1992. Originally limited to the study of static problems, time-dependent DMRG has since been developed, allowing one to investigate non-equilibrium phenomena in quantum mechanics. In this thesis I present the solution of three conceptually different problems, which have been addressed using mostly the Krylov-subspace version of time-dependent DMRG. My findings are directly relevant to recent experiments with ultracold atoms, also carried out at LMU in the group of Prof. Bloch. The first project aims at the ultimate goal of atoms in optical lattices, namely the possibility to act as a quantum simulator of more complicated condensed matter systems. The underlying idea is to simulate a magnetic model using ultracold bosonic atoms of two different hyperfine states in an optical superlattice. The system, which is captured by a two-species Bose-Hubbard model, realizes in a certain parameter range the physics of a spin-1/2 Heisenberg chain, where the spin exchange constant is given by second-order processes. Tuning the superlattice parameters allows one to control the effect of the fast first-order processes versus the slower second-order ones. The analysis is motivated by recent experiments where coherent two-particle dynamics of ultracold bosonic atoms in isolated double wells were detected. My project investigates the coherent many-particle dynamics which take place after coupling the double wells.
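The second-order spin-exchange constant mentioned above has the textbook superexchange form in standard Bose-Hubbard notation (tunneling amplitude t, on-site interaction U; this is the generic expression, not necessarily the thesis's exact parametrization):

```latex
% Second-order superexchange coupling of the effective Heisenberg chain
J_{\mathrm{ex}} \simeq \frac{4 t^{2}}{U}
```

Lowering the superlattice barrier increases t and thereby strengthens the effective Heisenberg coupling relative to the interaction scale.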
I provide the theoretical background for the next step, the observation of coherent many-particle dynamics after coupling the double wells. The tunability between the Bose-Hubbard model and the Heisenberg model in this setup could be used to study experimentally the differences in equilibration processes between non-integrable and Bethe-ansatz-integrable models. It turns out that the relaxation in the Heisenberg model is connected to a phase-averaging effect, in contrast to the typical scattering-driven thermalization in non-integrable models. In the second project I study a many-body generalization of the original Landau-Zener formula. This formula gives the transition probability between the two states of a quantum mechanical two-level system whose energy offset varies linearly in time. In a recent experiment this framework was extended to a many-body system consisting of pairwise tunnel-coupled one-dimensional Bose liquids. It was found that the tunnel coupling between the tubes and the intertube interactions strongly modify the original Landau-Zener picture. After an introduction to the two-level and three-level Landau-Zener problems, I present my own results for the quantum dynamics of the microscopic model and the comparison to the experimental results. I have calculated both Landau-Zener sweeps and the time evolution after sudden quenches of the energy offset. A major finding is that, for sufficiently large initial density, quenches can be efficiently used to create quasi-thermal states of arbitrary temperatures. The third project is more mathematical and connects the fields of quantum computation and quantum information. Here, the main purpose is to analyse systematically the effects of decoherence on maximally entangled multi-partite states, which arise typically during quantum computation processes. The larger the number of entangled qubits, the more fragile the entanglement is under the influence of decoherence.
As a starting point I first consider two entangled qubits, whereby one qubit interacts with an arbitrary environment. For this particular case I have derived a factorization law for the disentanglement. Next, I calculate the decrease of entanglement of two, three, and four entangled qubits (general W and GHZ states) coupled to a global spin-1/2 bath or to several independent spin-1/2 baths, one for each qubit. Although there is no appropriate entanglement measure for three and more qubits, it turns out that this decrease is directly related to the increase of entanglement between the central system and the bath. This implies the formation of a much bigger multipartite entangled network. Thus, using the von Neumann entropy and the Wootters concurrence, I derive a simple upper bound for the bath-induced entanglement-breaking power on the initially maximally entangled multi-partite states.
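For reference, the original two-level Landau-Zener result that the second project generalizes has the textbook closed form (standard notation, not necessarily the thesis's): for a linear sweep with diabatic coupling Δ and sweep rate α, the probability of remaining in the initial diabatic state is

```latex
% Diabatic two-level Hamiltonian with coupling \Delta and sweep rate \alpha
H(t) = \begin{pmatrix} \alpha t/2 & \Delta \\ \Delta & -\alpha t/2 \end{pmatrix},
\qquad
P_{\mathrm{diabatic}} = \exp\!\left( -\frac{2\pi \Delta^{2}}{\hbar \alpha} \right)
```

The many-body results in the thesis quantify how tunnel coupling and intertube interactions modify this single-particle picture.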

    Supercomputing Frontiers

    This open access book constitutes the refereed proceedings of the 7th Asian Conference, SCFA 2022, which took place in Singapore in March 2022. The 8 full papers presented in this book were carefully reviewed and selected from 21 submissions. They cover a range of topics including file systems, memory hierarchy, HPC cloud platforms, container image configuration workflows, large-scale applications, and scheduling.

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data.” That project ran from October 2014 to March 2020 in Japan. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I, a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV, which review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; and Part V, which presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
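To give a concrete flavor of the paradigm (an illustrative textbook example, not one of the book's own results), Morris's approximate counter counts a stream of n events in sublinear space, storing only an exponent of roughly log log n bits instead of the full count:

```python
import random

def morris_count(n, rng):
    """Morris approximate counter: a classic sublinear-space sketch."""
    x = 0  # store only the exponent, ~log2(log2(n)) bits
    for _ in range(n):
        if rng.random() < 2.0 ** -x:  # increment with probability 2^-x
            x += 1
    return 2 ** x - 1  # unbiased estimate of n

# A single counter has large variance, so in practice one averages
# several independent counters to sharpen the estimate.
rng = random.Random(42)
estimate = sum(morris_count(1000, rng) for _ in range(300)) / 300
```

The trade-off is characteristic of sublinear computation: a small, controllable loss of accuracy in exchange for resource usage far below the size of the input.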

    Trustworthiness in Mobile Cyber Physical Systems

    Computing and communication capabilities are increasingly embedded in diverse objects and structures in the physical environment, linking the ‘cyberworld’ of computing and communications with the physical world. Such applications are called cyber-physical systems (CPS). The increased involvement of real-world entities leads to a greater demand for trustworthy systems. Hence, we use "system trustworthiness" here to mean the ability to guarantee continuous service in the presence of internal errors or external attacks. Mobile CPS (MCPS) is a prominent subcategory of CPS in which the physical component has no permanent location. Mobile Internet devices already provide ubiquitous platforms for building novel MCPS applications. The objective of this Special Issue is to contribute to research on modern and future trustworthy MCPS, including design, modeling, simulation, dependability, and so on. It is imperative to address the issues that are critical to their mobility, report significant advances in the underlying science, and discuss the challenges of development and implementation in various applications of MCPS.

    2nd International Conference on Numerical and Symbolic Computation

    The Organizing Committee of SYMCOMP2015 – 2nd International Conference on Numerical and Symbolic Computation: Developments and Applications – welcomes all participants and acknowledges the contribution of the authors to the success of this event. This Second International Conference on Numerical and Symbolic Computation is promoted by APMTAC – Associação Portuguesa de Mecânica Teórica, Aplicada e Computacional – and was organized in the context of IDMEC/IST – Instituto de Engenharia Mecânica. With this ECCOMAS Thematic Conference it is intended to bring together academic and scientific communities that are involved with Numerical and Symbolic Computation in the most diverse scientific areas.

    A Secure High-Order Lanczos-Based Orthogonal Tensor SVD for Big Data Reduction in Cloud Environment


    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas is presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative for estimating the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
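The averaged-rule idea can be sketched numerically. The following minimal illustration uses Laurie's anti-Gauss construction for the Legendre weight on [-1, 1], a simple instance of the general approach rather than the optimal generalized averaged formulas of the paper:

```python
import numpy as np

def legendre_beta(k):
    # Three-term recurrence coefficients for the Legendre weight on [-1, 1].
    return k * k / (4.0 * k * k - 1.0)

def rule_from_jacobi(offdiag, mu0=2.0):
    # Golub-Welsch: nodes are the Jacobi matrix's eigenvalues, weights are
    # mu0 times the squared first components of its eigenvectors.
    J = np.diag(offdiag, 1) + np.diag(offdiag, -1)
    nodes, vecs = np.linalg.eigh(J)
    return nodes, mu0 * vecs[0] ** 2

def gauss(n):
    off = np.sqrt([legendre_beta(k) for k in range(1, n)])
    return rule_from_jacobi(off)

def anti_gauss(n):
    # Laurie's anti-Gauss rule A_{n+1}: same Jacobi matrix as G_{n+1}
    # except the last recurrence coefficient beta_n is doubled.
    beta = [legendre_beta(k) for k in range(1, n + 1)]
    beta[-1] *= 2.0
    return rule_from_jacobi(np.sqrt(beta))

def averaged(f, n):
    # Averaged rule (G_n + A_{n+1}) / 2; |avg - G_n| then serves as a
    # computable estimate of the error of the Gauss rule G_n.
    xg, wg = gauss(n)
    xa, wa = anti_gauss(n)
    Gn = wg @ f(xg)
    avg = 0.5 * (Gn + wa @ f(xa))
    return avg, Gn

# Error estimation for the 5-point Gauss rule applied to exp on [-1, 1].
avg, G5 = averaged(np.exp, 5)
exact = np.exp(1) - np.exp(-1)
```

This mirrors the role the paper assigns to averaged rules: a practical error estimator in situations where a real positive Gauss-Kronrod extension may be unavailable.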