28 research outputs found

    Wide-band and Wide-Angle Scanning Phased Array Antenna for Mobile Communication Systems

    A wide-band phased array antenna with wide-angle scanning capability for mobile communication systems is proposed in this article. An air cavity is embedded into the substrate under each array element. This simple method efficiently enhances wide-angle scanning performance by improving the wide-angle impedance matching (WAIM) and broadening the beam-width of the elements in the array. The operating bandwidth is also extended with the proposed approach. The wide-angle scanning capability is analyzed and verified in detail with linear arrays on large ground planes and with a planar array. Two 1×8 linear arrays and one 8×8 planar array are demonstrated, achieving beam scanning of around ±60° with a realized gain reduction under 3.5 dB over a wide operating bandwidth (37.4%). Furthermore, the beam in the E-plane can scan over ±70° with a realized gain reduction of less than 3 dB. Two linear array prototypes with wide-angle scanning capability in two planes are fabricated and characterized, yielding good performance across the full operational bandwidth. The measured results align well with the simulations. The proposed wideband phased array with large scanning coverage is a promising candidate for 5G mobile communications.
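    The ±60° scanning behaviour of a uniform 1×8 array can be illustrated with the textbook array factor (an idealised sketch only: it ignores the element pattern, the air cavity, and mutual coupling, and the half-wavelength spacing is assumed for the example):

```python
import math
import cmath

def array_factor(n, d_lambda, steer_deg, theta_deg):
    """Normalised array-factor magnitude of an n-element uniform linear array.

    d_lambda: element spacing in wavelengths. A progressive phase shift
    steers the main beam toward steer_deg (measured from broadside).
    """
    k_d = 2 * math.pi * d_lambda
    steer = math.radians(steer_deg)
    theta = math.radians(theta_deg)
    # phase difference per element between observation and steering directions
    psi = k_d * (math.sin(theta) - math.sin(steer))
    af = sum(cmath.exp(1j * m * psi) for m in range(n))
    return abs(af) / n  # normalised so the main-beam peak is 1.0

# 1x8 array, half-wavelength spacing, steered to 60 degrees:
# the pattern peaks at the steering angle and falls off at broadside.
print(array_factor(8, 0.5, 60, 60))
print(array_factor(8, 0.5, 60, 0))
```

    Realized-gain roll-off at wide scan angles, which the paper bounds at 3.5 dB, comes from the element pattern and mutual coupling that this idealised model leaves out.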

    Performance Analysis of Quickreduct, Quick Relative Reduct Algorithm and a New Proposed Algorithm

    Feature selection is the process of selecting a subset of relevant features from a large dataset that satisfies method-dependent criteria, thereby minimizing cardinality while ensuring that accuracy and precision are not affected, and thus approximating the original class distribution of the data from the selected features. Feature selection and feature extraction are the two problems we face when we want to select the best and most important attributes from a given dataset. Feature selection is a step in data mining performed prior to other steps, and it is very effective at removing unimportant attributes, increasing the storage efficiency and accuracy of the dataset. From the huge pool of data available, we want to extract useful and relevant information. The problem is not the unavailability of data; it is the quality of the data that we lack. Rough set theory is very useful for extracting relevant attributes and helps to increase the value of the information system we have. It works on the principle of classifying similar objects into classes with respect to some features, and those features may collectively be termed reducts.
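    The rough-set machinery behind QuickReduct can be sketched in a few lines: compute the dependency degree of the decision on a set of attributes, then greedily add the attribute that raises it most. The toy decision table, attribute names, and tie-breaking below are invented for illustration; the algorithms compared in the paper differ in how they rank attributes:

```python
from collections import defaultdict

def dependency(rows, attrs, decision):
    """Rough-set dependency degree gamma: the fraction of objects whose
    equivalence class under `attrs` maps to a single decision value."""
    blocks = defaultdict(list)
    for row in rows:
        blocks[tuple(row[a] for a in attrs)].append(row[decision])
    consistent = sum(len(d) for d in blocks.values() if len(set(d)) == 1)
    return consistent / len(rows)

def quickreduct(rows, conditional, decision):
    """Greedy QuickReduct: grow the reduct with whichever attribute
    raises the dependency degree most, until it matches gamma(all)."""
    target = dependency(rows, conditional, decision)
    reduct = []
    while dependency(rows, reduct, decision) < target:
        best = max((a for a in conditional if a not in reduct),
                   key=lambda a: dependency(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct

# toy decision table: 'flu' is fully determined by 'temp' alone
table = [
    {"temp": "high", "cough": "yes", "ache": "no",  "flu": "yes"},
    {"temp": "high", "cough": "no",  "ache": "no",  "flu": "yes"},
    {"temp": "low",  "cough": "yes", "ache": "yes", "flu": "no"},
    {"temp": "low",  "cough": "no",  "ache": "no",  "flu": "no"},
]
print(quickreduct(table, ["temp", "cough", "ache"], "flu"))  # → ['temp']
```

    The reduct {temp} preserves the full dependency degree of 1.0, so the other two attributes can be dropped without changing the classification.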

    A Join Index for XML Data Warehouses

    XML data warehouses form an interesting basis for decision-support applications that exploit complex data. However, native-XML database management systems (DBMSs) currently offer limited performance, and it is necessary to investigate ways to optimize them. In this paper, we propose a new join index that is specifically adapted to the multidimensional architecture of XML warehouses. It eliminates join operations while preserving the information contained in the original warehouse. A theoretical study and experimental results demonstrate the efficiency of our join index. They also show that native XML DBMSs can compete with XML-compatible relational DBMSs when warehousing and analyzing XML data. Comment: 2008 International Conference on Information Resources Management (Conf-IRM 08), Niagara Falls, Canada (2008).
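    The core idea of a join index is that the fact-to-dimension pairing is materialised once, so analytic queries perform no join at query time. The relational-style toy below is only a stand-in (the schema and names are invented; the paper's index is adapted to the multidimensional architecture of XML warehouses):

```python
# toy star schema: fact rows reference a dimension table by key
facts = [
    {"sale_id": 1, "cust_key": "c1", "amount": 120.0},
    {"sale_id": 2, "cust_key": "c2", "amount": 80.0},
    {"sale_id": 3, "cust_key": "c1", "amount": 40.0},
]
customers = {"c1": {"city": "Lyon"}, "c2": {"city": "Paris"}}

# join index: the fact->dimension pairing, precomputed once at load time
join_index = {f["sale_id"]: customers[f["cust_key"]] for f in facts}

# a query can now group by a dimension attribute without any join
totals = {}
for f in facts:
    city = join_index[f["sale_id"]]["city"]
    totals[city] = totals.get(city, 0.0) + f["amount"]
print(totals)  # → {'Lyon': 160.0, 'Paris': 80.0}
```

    The trade-off is classic: the index costs storage and maintenance on updates, but removes the dominant join cost from read-mostly analytic workloads.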

    A 1 x 8 Linear Ultra-Wideband Phased Array With Connected Dipoles and Hyperbolic Microstrip Baluns

    A 1×8 linear, single-polarized, ultra-wideband connected-dipole phased array with a wide scan range is proposed. The dipoles in the array are connected to each other in the E-plane to improve impedance matching at the low end of the frequency band. The operating band and E-plane scan range are 2-9 GHz for broadside radiation, 2-8 GHz for a 30° scan, 2-7 GHz for a 45° scan, and 2-6.5 GHz for a 60° scan. The VSWR is better than 2.0 across the 2-9 GHz band for broadside radiation, and the cross-polarization level is below -10 dB. A hyperbolic microstrip balun is used as an impedance transformer to connect the 50 Ω SMA connector to the 150 Ω broadband dipoles in the array. The antenna structure is fully planar and low-profile, making it easy to integrate with PCB boards. To eliminate surface-wave blindness, no additional dielectric layer is used in the array. The proposed balun supports a common-mode (CM) current, and the radiation of this CM current cancels the radiation of the dipole at some frequencies for certain scan angles, which results in feed blindness. Adding H-plane PEC walls lowers the feed-blindness frequency in the design.
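    Why the 50-to-150 Ω transformation matters can be seen from the standard reflection-coefficient formula: feeding the 150 Ω dipole directly from a 50 Ω connector would already violate the VSWR < 2.0 spec. This is an idealised lossless calculation, not a model of the hyperbolic microstrip balun itself:

```python
def vswr(z_load, z_ref):
    """Voltage standing-wave ratio seen at an impedance step,
    from the magnitude of the reflection coefficient."""
    gamma = abs((z_load - z_ref) / (z_load + z_ref))
    return (1 + gamma) / (1 - gamma)

# 150-ohm dipole fed directly from a 50-ohm line: badly mismatched
print(vswr(150, 50))  # → 3.0
# an ideal 50->150 ohm transformer presents 50 ohms back to the feed
print(vswr(50, 50))   # → 1.0
```

    The real balun must hold this transformation, and reject the problematic common-mode current, over the full 2-9 GHz band rather than at a single frequency.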

    DTCM: Deep Transformer Capsule Mutual Distillation for Multivariate Time Series Classification

    This paper proposes a dual-network-based feature extractor, the perceptive capsule network (PCapN), for multivariate time series classification (MTSC), comprising a local feature network (LFN) and a global relation network (GRN). The LFN has two heads (Head_A and Head_B), each containing two squash CNN blocks and one dynamic routing block, to extract local features from the data and mine the connections among them. The GRN consists of two capsule-based transformer blocks and one dynamic routing block to capture the global patterns of each variable and correlate the useful information of multiple variables. Unfortunately, it is difficult to deploy PCapN directly on mobile devices due to its strict computing-resource requirements. This paper therefore designs a lightweight capsule network (LCapN) to mimic the cumbersome PCapN. To promote knowledge transfer from PCapN to LCapN, this paper proposes a deep transformer capsule mutual (DTCM) distillation method. It is targeted and offline, using one-way and two-way operations to supervise the knowledge distillation process for the dual-network-based student and teacher models. Experimental results show that the proposed PCapN and DTCM achieve excellent performance on the UEA2018 datasets in terms of top-1 accuracy.
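    The mutual flavour of such distillation can be sketched as a symmetric KL term between the softened predictions of the two networks, so each model is pulled toward the other. This is a generic sketch with invented logits and temperature; DTCM's actual one-way and two-way supervision is more involved than a single output-level loss:

```python
import math

def softmax(logits, temperature=2.0):
    """Temperature-softened class probabilities."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Two-way (mutual) distillation term: the symmetric KL between
    the teacher's and student's softened predictions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return kl(p, q) + kl(q, p)

loss = mutual_distillation_loss([2.0, 0.5, -1.0], [1.0, 1.0, -0.5])
print(loss)  # positive while the two networks disagree, zero when they match
```

    In training, this term would be added to each network's task loss, letting the compact student absorb the teacher's behaviour while the teacher is regularised in return.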

    Voting Systems with Trust Mechanisms in Cyberspace: Vulnerabilities and Defenses

    With the popularity of voting systems in cyberspace, there is growing evidence that current voting systems can be manipulated by fake votes. This problem has attracted many researchers, who work on guarding voting systems in two areas: relieving the effect of dishonest votes by evaluating the trust of voters, and limiting the resources available to attackers, such as the number of voters and the number of votes. In this paper, we argue that powering voting systems with trust and limiting attack resources are not enough. We present a novel attack named Reputation Trap (RepTrap). Our case study and experiments show that this new attack needs far fewer resources to manipulate voting systems and has a much higher success rate than existing attacks. We further identify the reasons behind this attack and propose two defense schemes accordingly. In the first scheme, we hide correlation knowledge from attackers to reduce their chance of affecting honest voters. In the second scheme, we introduce a new metric, robustness-of-evidence, into the trust calculation to reduce their effect on honest voters. We conduct extensive experiments to validate our approach. The results show that our defense schemes not only reduce the success rate of attacks but also significantly increase the amount of resources an adversary needs to launch a successful attack.
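    The trust mechanism such an attack targets can be sketched minimally: votes are weighted by voter trust, and trust is updated by agreement with the aggregated outcome. All names, initial trust values, and the update rule below are invented for illustration; it is exactly this feedback loop that an attacker can game by first farming trust on easy targets:

```python
def aggregate(votes, trust):
    """Trust-weighted score of an object: each vote (+1 / -1)
    is weighted by the voter's current trust in [0, 1]."""
    return sum(v * trust[user] for user, v in votes)

def update_trust(trust, user, agreed, step=0.1):
    """Reward voters who agree with the final outcome and penalise
    those who do not, keeping trust clamped to [0, 1]."""
    delta = step if agreed else -step
    trust[user] = min(1.0, max(0.0, trust[user] + delta))

trust = {"alice": 0.9, "bob": 0.9, "mallory": 0.5}
votes = [("alice", +1), ("bob", +1), ("mallory", -1)]

score = aggregate(votes, trust)      # 0.9 + 0.9 - 0.5 = 1.3 -> accepted
outcome = score > 0
for user, v in votes:
    update_trust(trust, user, agreed=((v > 0) == outcome))
print(score, trust["mallory"])       # mallory's dissenting vote costs trust
```

    A RepTrap-style adversary exploits the update rule: by voting "correctly" on low-value objects it inflates its trust cheaply, then spends that trust to flip the outcome on the object it actually cares about.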

    BiTTM: A Core Biterms-Based Topic Model for Targeted Analysis

    While most existing topic models perform a full analysis on a set of documents to discover all topics, it has recently been noticed that in many situations users are interested only in fine-grained topics related to some specific aspects. As a result, targeted analysis (or focused analysis) has been proposed to address this problem. Given a corpus of documents from a broad area, targeted analysis discovers only topics related to user-interested aspects, which are expressed by a set of user-provided query keywords. Existing approaches for targeted analysis suffer from problems such as topic loss and topic suppression because of their inherent assumptions and strategies. Moreover, existing approaches are not designed for computational efficiency, while targeted analysis is expected to respond to user queries as quickly as possible. In this paper, we propose a core BiTerms-based Topic Model (BiTTM). By modelling topics from core biterms that are potentially relevant to the target query, BiTTM on the one hand captures context information across documents to alleviate topic loss and suppression; on the other hand, it enables the efficient modelling of topics related to specific aspects. Our experiments on nine real-world datasets demonstrate that BiTTM outperforms existing approaches in terms of both effectiveness and efficiency.
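    A biterm is simply an unordered pair of words co-occurring in the same short document, and the targeted setting keeps only biterms that touch the query. The filter below is an invented stand-in for BiTTM's core-biterm selection, with a toy corpus:

```python
from itertools import combinations
from collections import Counter

def biterms(tokens):
    """All unordered word pairs (biterms) co-occurring in one short document."""
    return [tuple(sorted(pair)) for pair in combinations(sorted(set(tokens)), 2)]

def core_biterms(docs, query, min_count=1):
    """Keep biterms that contain a query keyword: a simple stand-in
    for selecting query-relevant 'core' biterms before topic modelling."""
    counts = Counter(b for doc in docs for b in biterms(doc))
    return {b: c for b, c in counts.items()
            if c >= min_count and (b[0] in query or b[1] in query)}

docs = [
    ["battery", "phone", "charge"],
    ["battery", "life", "phone"],
    ["fruit", "juice"],
]
core = core_biterms(docs, query={"battery"})
print(sorted(core.items()))  # only battery-related biterms survive
```

    Because topics are then inferred only over this pruned biterm set, documents irrelevant to the query (the fruit/juice one here) add no cost, which is where the efficiency claim comes from.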