12 research outputs found

    A Scalable matrix computing unit architecture for FPGA and SCUMO user design interface

    High-dimensional matrix algebra is essential in numerous signal processing and machine learning algorithms. This work describes a scalable square matrix-computing unit designed on the basis of circulant matrices. It optimizes the data flow for the computation of any sequence of matrix operations, removing the need for data movement for intermediate results, and performs individual matrix operations in direct or transposed form (the transpose operation only requires a data-addressing modification). The supported matrix operations are matrix-matrix addition, subtraction, dot product, and multiplication; matrix-vector multiplication; and matrix-scalar multiplication. The proposed architecture is fully scalable, with the maximum matrix dimension limited only by the available resources. In addition, a design environment is developed that assists the user, through a friendly interface, from the customization of the hardware computing unit to the generation of the final synthesizable IP core. For N x N matrices, the architecture requires N ALU-RAM blocks and has O(N^2) complexity, requiring N^2 + 7 and N + 7 clock cycles for matrix-matrix and matrix-vector operations, respectively. On the tested Virtex-7 FPGA device, computation on 500 x 500 matrices allows a maximum clock frequency of 346 MHz, achieving an overall performance of 173 GOPS. This architecture shows higher performance than other state-of-the-art matrix computing units.
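    The reported figures are internally consistent: with N ALU-RAM blocks each completing one operation per clock cycle, peak throughput is roughly N times the clock frequency. A back-of-envelope check (hypothetical helper functions, not part of the paper):

```python
# Hypothetical sanity-check helpers for the figures quoted in the abstract;
# these are not part of the paper's design environment.

def peak_gops(n_alus: int, f_clk_mhz: float) -> float:
    """Peak throughput in GOPS for n_alus ALUs, one op/cycle, at f_clk_mhz."""
    return n_alus * f_clk_mhz * 1e6 / 1e9

def matrix_matrix_cycles(n: int) -> int:
    """Cycle count for an N x N matrix-matrix operation (N^2 + 7)."""
    return n * n + 7

def matrix_vector_cycles(n: int) -> int:
    """Cycle count for an N x N matrix-vector operation (N + 7)."""
    return n + 7

print(peak_gops(500, 346))        # 173.0 GOPS, matching the abstract
print(matrix_matrix_cycles(500))  # 250007 cycles
print(matrix_vector_cycles(500))  # 507 cycles
```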

    Overlay virtualized wireless sensor networks for application in industrial internet of things: a review

    In recent times, Wireless Sensor Networks (WSNs) have been broadly applied in the Industrial Internet of Things (IIoT) to enhance the productivity and efficiency of existing and prospective manufacturing industries. In particular, an area of interest concerning the use of WSNs in IIoT is the concept of sensor network virtualization and overlay networks. Both network virtualization and overlay networks are considered contemporary because they provide the capacity to create services and applications at the edge of existing virtual networks without changing the underlying infrastructure. This capability makes both network virtualization and overlay network services highly beneficial, particularly for the dynamic needs of IIoT-based applications such as smart industry, smart city, and smart home applications. Consequently, the study of both WSN virtualization and overlay networks has attracted considerable attention in the literature, leading to the growth and maturity of the research area. In line with this growth, this paper provides a review of the development made thus far concerning virtualized sensor networks, with emphasis on the application of overlay networks in IIoT. Principally, the process of virtualization in WSNs is discussed along with its importance in IIoT applications. Different challenges in WSNs are also presented, along with possible solutions offered by the use of virtualized WSNs. Further details are presented concerning the use of overlay networks as the next step to supporting virtualization in shared sensor networks. Our discussion closes with an exposition of the existing challenges in the use of virtualized WSNs for IIoT applications. In general, because overlay networks will contribute to the future development and advancement of smart industrial and smart city applications, this review may serve as a reference point for researchers particularly interested in this growing field.

    Switchable wideband receiver frontend for 5G and satellite applications

    Modern-day communication architectures impose the requirement for interconnected devices offering very high data rates (more than 10 Gbps), low latency, and support for multiple service integration across existing communication generations with wideband spectrum coverage. This thesis presents a switchable receiver frontend for an integrated satellite and 5G architecture, consisting of a single-pole double-throw (SPDT) switch and two low-noise amplifiers (LNAs) spanning X-band and K/Ka-band frequencies. The stand-alone X-band LNA (8-12 GHz) has a gain of 38 dB at a centre design frequency of 9.8 GHz, while the K/Ka-band LNA (23-28 GHz) has a gain of 29 dB at a centre design frequency of 25.4 GHz. Both LNAs are three-stage cascaded designs with separate gate and drain lines for each transistor stage. The broadband, high-isolation SPDT switch, based on a 0.15 μm gate-length Indium Gallium Arsenide (InGaAs) pseudomorphic high-electron-mobility transistor (pHEMT), is designed to operate over DC-50 GHz with less than 3 dB insertion loss and more than 40 dB isolation. The switch is designed to improve the overall stability and gain of the system. A gain of about 25 dB is achieved at 9.8 GHz when the X-band arm is turned on and the K/Ka-band arm is turned off, and a gain of about 23 dB is achieved at 25.4 GHz when the K/Ka-band arm is on and the X-band arm is off. The presented switchable receiver frontend is suitable for radar applications, 5G mobile applications, and future broadband receivers in the millimetre-wave frequency range.
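    How the switch's insertion loss enters the overall frontend gain and noise can be illustrated with the standard cascade (Friis) formulas. The per-stage noise figures below are assumed for illustration only; the thesis reports gains and the switch's loss and isolation, not per-stage noise figures.

```python
import math

def db_to_lin(db: float) -> float:
    return 10 ** (db / 10)

def lin_to_db(x: float) -> float:
    return 10 * math.log10(x)

def cascade_gain_nf(stages):
    """stages: list of (gain_dB, noise_figure_dB) in signal order.
    Returns (total_gain_dB, total_NF_dB) via the Friis formula.
    For a passive stage (e.g. the SPDT switch), NF equals its loss."""
    f_total = 0.0
    g_run = 1.0  # running linear gain ahead of the current stage
    for i, (g_db, nf_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        f_total = f if i == 0 else f_total + (f - 1) / g_run
        g_run *= db_to_lin(g_db)
    return lin_to_db(g_run), lin_to_db(f_total)

# Assumed example: 3 dB-loss switch in front of the 38 dB X-band LNA
# (LNA noise figure of 2 dB is an illustrative guess).
g, nf = cascade_gain_nf([(-3.0, 3.0), (38.0, 2.0)])
print(round(g, 1), round(nf, 1))  # 35.0 5.0
```

Note how the lossy switch ahead of the LNA adds its full loss to the system noise figure, which is why switch insertion loss is kept below 3 dB.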

    Countering internet packet classifiers to improve user online privacy

    Internet traffic classification, or packet classification, is the act of classifying packets using statistical data extracted from the packets transmitted on a computer network. Internet traffic classification is an essential tool for Internet service providers to manage network traffic, provide users with the intended quality of service (QoS), and perform surveillance. QoS measures prioritize one traffic type over others based on preset criteria; for instance, they give higher priority or bandwidth to video traffic than to website browsing traffic. Internet packet classification methods are also used for automated intrusion detection: they analyze incoming traffic patterns and identify malicious packets used for denial-of-service (DoS) or similar attacks. Internet traffic classification may also be used for website fingerprinting attacks, in which an intruder analyzes the encrypted traffic of a user to find behavior or usage patterns and infer the user's online activities. Protecting users' online privacy against traffic classification attacks is the primary motivation of this work. This dissertation shows the effectiveness of machine learning algorithms in identifying user traffic by comparing 11 state-of-the-art classifiers, and proposes three anonymization methods for masking generated user network traffic to counter Internet packet classifiers: equalized packet length, equalized packet count, and equalized inter-arrival times of TCP packets. This work compares the results of these anonymization methods to show their effectiveness in reducing the performance of machine learning algorithms for traffic classification. The results are validated using newly generated user traffic. Additionally, a novel model based on a generative adversarial network (GAN) is introduced to automate countering adversarial traffic classifiers.
    This model, called the GAN tunnel, generates pseudo traffic patterns imitating the distributions of real traffic generated by actual applications and encapsulates the actual network packets inside the generated traffic packets. The GAN tunnel's performance is tested against random forest and extreme gradient boosting (XGBoost) traffic classifiers. These classifiers are shown to be unable to detect the actual source application of the data exchanged through the GAN tunnel in the scenarios tested in this thesis.
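    The first of the anonymization methods named above, equalized packet length, can be sketched as a simple padding scheme: every emitted packet carries exactly the same number of bytes, so observed lengths leak no information. The function name and the 1400-byte target are illustrative assumptions, not the dissertation's implementation.

```python
# A minimal sketch of "equalized packet length": split oversized payloads
# and zero-pad the rest so every emitted packet is exactly `target` bytes.
# TARGET_LEN is an assumed constant, chosen near a common MTU payload size.

TARGET_LEN = 1400  # bytes

def equalize_length(payload: bytes, target: int = TARGET_LEN) -> list[bytes]:
    """Return fixed-size packets covering `payload`; empty input still
    yields one all-padding packet."""
    chunks = [payload[i:i + target] for i in range(0, len(payload), target)] or [b""]
    return [c.ljust(target, b"\x00") for c in chunks]

packets = equalize_length(b"GET /index.html HTTP/1.1\r\n\r\n")
print(len(packets), len(packets[0]))  # 1 1400: one fixed-size packet
```

A real deployment would also need framing metadata so the receiver can strip the padding, which this sketch omits.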

    Behavioral analysis in cybersecurity using machine learning: a study based on graph representation, class imbalance and temporal dissection

    The main goal of this thesis is to improve behavioral cybersecurity analysis using machine learning, exploiting graph structures and temporal dissection, and addressing imbalance problems. This main objective is divided into four specific goals.
    OBJ1: To study the influence of the temporal resolution on highlighting micro-dynamics in the entity behavior classification problem. In real use cases, time-series information may not be enough to describe entity behavior. For this reason, we plan to exploit graph structures to integrate both structured and unstructured data in a representation of entities and their relationships. In this way, it becomes possible to appreciate not only single temporal communications but the whole behavior of these entities. Nevertheless, entity behaviors evolve over time, and a static graph may not be enough to describe all these changes. For this reason, we propose to use temporal dissection to create temporal subgraphs and, therefore, to analyze the influence of the temporal resolution on graph creation and on the entity behaviors within. Furthermore, we propose to study how the temporal granularity should be chosen to highlight network micro-dynamics and short-term behavioral changes, which can be a hint of suspicious activity.
    OBJ2: To develop novel sampling methods that work with disconnected graphs, addressing imbalance problems while avoiding component topology changes. The graph imbalance problem is a very common and challenging task, and traditional graph sampling techniques that work directly on these structures cannot be used without modifying the graph's intrinsic information or introducing bias. Furthermore, existing techniques have shown to be limited when disconnected graphs are used. For this reason, novel resampling methods that balance the number of nodes and can be applied directly to disconnected graphs, without altering component topologies, need to be introduced. In particular, we propose to take advantage of the existence of disconnected graphs to detect and replicate the most relevant graph components without changing their topology, while applying traditional data-level strategies to the entity behaviors within.
    OBJ3: To study the usefulness of generative adversarial networks for addressing the class imbalance problem in cybersecurity applications. Although traditional data-level pre-processing techniques have proven effective for addressing class imbalance problems, they have also shown downsides when highly variable datasets are used, as happens in cybersecurity. For this reason, new techniques that can exploit the overall data distribution for learning highly variable behaviors should be investigated. In this sense, GANs have shown promising results in the image and video domains; however, their extension to tabular data is not trivial. For this reason, we propose to adapt GANs to work with cybersecurity data and exploit their ability to learn and reproduce the input distribution for addressing the class imbalance problem (as an oversampling technique). Furthermore, since it is not possible to find a unique GAN solution that works for every scenario, we propose to study several GAN architectures with several training configurations to determine the best option for a cybersecurity application.
    OBJ4: To analyze temporal data trends and performance drift for enhancing cyber threat analysis. Temporal dynamics and incoming new data can affect the quality of the predictions, compromising model reliability; this phenomenon makes models become outdated without notice. In this sense, it is very important to be able to extract more insightful information from the application domain by analyzing data trends, learning processes, and performance drifts over time. For this reason, we propose to develop a systematic approach for analyzing how data quality and quantity affect the learning process. Moreover, in the context of CTI, we propose to study the relations between temporal performance drifts and the input data distribution to detect possible model limitations, enhancing cyber threat analysis.
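    The temporal dissection described in OBJ1 can be sketched as bucketing timestamped interaction edges into fixed windows, one subgraph per window, so that varying the window size varies the temporal resolution. Field names and the window length below are illustrative assumptions, not the thesis's pipeline.

```python
from collections import defaultdict

def temporal_subgraphs(edges, window):
    """edges: iterable of (timestamp, src_entity, dst_entity).
    Returns {window_index: list of (src, dst)} — the edge set of one
    temporal subgraph per window of length `window` seconds."""
    buckets = defaultdict(list)
    for t, src, dst in edges:
        buckets[t // window].append((src, dst))
    return dict(buckets)

# Toy entity interactions over ~2 minutes, dissected into 60 s windows;
# a finer window would expose shorter-term micro-dynamics.
events = [(0, "A", "B"), (30, "A", "C"), (70, "B", "C"), (130, "A", "B")]
print(temporal_subgraphs(events, 60))
# {0: [('A', 'B'), ('A', 'C')], 1: [('B', 'C')], 2: [('A', 'B')]}
```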

    Demand-based optimization for adaptive multi-beam satellite communication systems

    Satellite operators use multiple spot beams of high-throughput satellite systems to provide Internet services to broadband users. However, in recent years, new mobile broadband users with diverse demand requisites have been growing, and satellite operators are obliged to provide the services agreed in Service Level Agreements (SLAs) to remote rural locations, mid-air aeroplanes, and mid-ocean ships. Furthermore, the expected demand is spatio-temporal, varying with the geographical location of the mobile users over time and hence creating more dynamic, non-uniformly distributed, and time-sensitive demand profiles. Current satellite systems, however, are designed to perform uniformly irrespective of changes in demand profiles. Hence, a practical approach to meeting such heterogeneous demand is to design adaptive systems by exploiting advancements in recently developed technologies such as precoding, active antenna arrays, digital beamforming networks, digital transparent payloads, and onboard signal processing. Accordingly, in this work, we investigate and develop advanced demand-based resource optimization modules that fit future payload capabilities and satisfy the satellite operators' interests. Instead of boosting the satellite throughput (capacity maximization), the goal is to optimize the available resources such that the satellite capacity offered on the ground continuously matches the geographic distribution of the traffic demand and follows its variations in time. Adaptability can be introduced at multiple levels of the satellite system's transmission chain, either with long-term flexibility (optimization over frequency, time, power, beam pattern, and footprint) or short-term flexibility (optimization over user scheduling). These techniques can be optimized standalone, in parallel, or even jointly for maximum demand satisfaction.
    Within the scope of this thesis, we have designed real-time optimizations for a subset of the radio resource schemes. Firstly, we explore beam densification, where increasing the number of beams improves the antenna gain values at high-demand hot-spot regions. However, such an increase in the number of beams also increases inter-beam interference and degrades SINR performance. Hence, in the first part of Chapter 2 of this thesis, we focus on finding the optimal number of beams for a given high-demand hot-spot region of a demand distribution profile. Steering the beams towards high-demand regions further increases demand satisfaction, but the positioning of the beams needs to be carefully planned: closely placed beams result in poor SINR performance, while beams placed far apart have poor antenna gain for users away from the beam centres. Hence, in the second part of Chapter 2, we focus on finding optimized beam positions for maximum demand satisfaction in high-demand hot-spot regions. We also propose a dynamic frequency colour-coding strategy for efficient spectrum and interference management in demand-driven adaptive systems. Another solution is the proposed Adaptive Multi-beam Pattern and Footprint (AMPF) design, where we fix the number of beams and, based on the demand profile, configure adaptive beam shapes and sizes along with their positions. Such an approach distributes the total demand across all beams more evenly, avoiding overloaded or underused beams; this optimization is carried out in Chapter 3 using cluster analysis. Furthermore, demand satisfaction at both the beam and user level is achieved by carefully performing demand-driven user scheduling. On one hand, scheduling the most orthogonal users at the same time may yield better capacity but may not provide demand satisfaction.
    This is mainly because users with high demand need to be scheduled more often than users with low demand, irrespective of channel orthogonality. On the other hand, scheduling high-demand users that are least orthogonal creates strong inter-beam interference and degrades precoding performance. Accordingly, two demand-driven scheduling algorithms, Weighted Semi-Orthogonal Scheduling (WSOS) and interference-aware demand-based user scheduling, are discussed in Chapter 4. Lastly, in Chapter 5, we verify the impact of the parallel implementation of two different demand-based optimization techniques, the AMPF design and WSOS user scheduling. The numerical results presented throughout this thesis validate the effectiveness of the proposed demand-based optimization techniques in terms of demand-matching performance compared to conventional non-demand-based approaches.
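    The beam-positioning idea in Chapter 2 can be illustrated with a demand-weighted k-means toy in which beam centres gravitate toward high-demand hot spots. This is an illustrative stand-in under assumed data, not the thesis's optimization; initializing centres from the first k users is a further simplification.

```python
def place_beams(users, k, iters=20):
    """users: list of (x, y, demand); returns k beam-centre (x, y) tuples.
    Lloyd-style iteration with demand-weighted centroids, so high-demand
    users pull beam centres toward themselves. Initialization from the
    first k users is a simplification of proper seeding."""
    centres = [u[:2] for u in users[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x, y, d in users:
            i = min(range(k),
                    key=lambda j: (x - centres[j][0]) ** 2 + (y - centres[j][1]) ** 2)
            groups[i].append((x, y, d))
        for j, g in enumerate(groups):
            w = sum(d for *_, d in g)
            if w > 0:  # demand-weighted centroid
                centres[j] = (sum(x * d for x, _, d in g) / w,
                              sum(y * d for _, y, d in g) / w)
    return centres

# Two hot spots of equal demand near (0, 0) and (10, 10):
users = [(0.0, 0.0, 5.0), (10.0, 10.0, 5.0), (0.5, 0.5, 5.0), (9.5, 9.5, 5.0)]
print(place_beams(users, 2))  # [(0.25, 0.25), (9.75, 9.75)]
```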

    Indoor Positioning and Navigation

    In recent years, rapid development in robotics, mobile, and communication technologies has encouraged many studies in the field of localization and navigation in indoor environments. An accurate localization system that can operate in an indoor environment has considerable practical value, because it can be built into autonomous mobile systems or a personal navigation system on a smartphone for guiding people through airports, shopping malls, museums, other public institutions, etc. Such a system would be particularly useful for blind people. Modern smartphones are equipped with numerous sensors (such as inertial sensors, cameras, and barometers) and communication modules (such as WiFi, Bluetooth, NFC, LTE/5G, and UWB capabilities), which enable the implementation of various localization algorithms, namely visual localization, inertial navigation systems, and radio localization. For the mapping of indoor environments and the localization of autonomous mobile systems, LIDAR sensors are also frequently used in addition to smartphone sensors. Visual localization and inertial navigation systems are sensitive to external disturbances; therefore, sensor fusion approaches can be used to implement robust localization algorithms. These have to be optimized in order to be computationally efficient, which is essential for real-time processing and low energy consumption on a smartphone or robot.
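    As a sketch of the sensor-fusion idea mentioned above, a one-dimensional complementary filter blends a drifting gyroscope rate with a noisy but drift-free accelerometer tilt estimate. The weight `alpha` and the bias value are assumed tuning constants, not taken from any particular system.

```python
# Minimal 1-D complementary filter: trust the gyro at short time scales
# (high-pass) and the accelerometer at long time scales (low-pass).
# alpha = 0.98 is an illustrative tuning constant.

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step: integrate the gyro rate, then pull the estimate
    toward the accelerometer-derived angle."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Stationary device with an assumed 0.5 deg/s gyro bias: pure integration
# would drift to 0.5 deg after 1 s, but the fused estimate stays bounded
# near the filter's fixed point (~0.245 deg here).
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.5, accel_angle=0.0, dt=0.01)
print(round(angle, 3))
```

In practice `alpha` is chosen from the desired crossover time constant, roughly alpha = tau / (tau + dt); full smartphone pipelines use richer fusion such as Kalman filtering, which this sketch deliberately avoids.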