867 research outputs found

    Retrieval of Free Space Radiation Patterns through Measured Data in a Non-Anechoic Environment

    Antenna pattern measurements are usually carried out in an anechoic chamber. However, a good anechoic chamber is very expensive to construct. Previous research has attempted to compensate for the effects of extraneous fields measured in a non-anechoic environment to obtain the free space pattern that would be measured in an anechoic chamber. Existing compensation techniques include the Test Zone Field compensation method, the Fast-Fourier-Transform-based method, the Matrix Pencil method, and the Antenna Pattern Comparison technique. This work illustrates and extends a deconvolution methodology that allows an antenna to be measured in a non-anechoic test environment and its free space radiation pattern to be retrieved from the measured data, making antenna measurements easier and more affordable. In this work, we model the extraneous fields as the system impulse response of the test environment and use a reference antenna to extract that impulse response. The extracted response is then used to remove the extraneous fields from the measurement of a desired antenna in the same environment and to retrieve its ideal pattern. The advantage of this process is that it does not require calculating the time delay to gate out reflections; therefore, it is independent of the bandwidth of the antenna, and no prior knowledge of the test environment is required. This work contributes to the field not by proposing a new methodology for pattern reconstruction but by showing that the deconvolution methodology can analytically remove the effects of extraneous fields in antenna pattern measurements and by extending the method to antenna pattern measurements in three-dimensional environments. A discussion of the parameters that affect the deconvolution methodology is also given. Extensive simulation examples with different environmental settings and different antennas are presented to demonstrate the applicability of the deconvolution method.
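    The core step can be sketched as a frequency-domain deconvolution. The snippet below is a minimal illustration only: the function names, the FFT-domain formulation, and the regularization term eps are assumptions made here for readability, not the paper's exact procedure.

        import numpy as np

        def extract_impulse_response(ref_measured, ref_free_space, eps=1e-6):
            # Estimate the environment's impulse response H from a reference
            # antenna whose free-space pattern is known: measured = free_space * H
            # (convolution), so H is recovered by regularized spectral division.
            M = np.fft.fft(ref_measured)
            F = np.fft.fft(ref_free_space)
            return M * np.conj(F) / (np.abs(F) ** 2 + eps)

        def retrieve_free_space_pattern(aut_measured, H, eps=1e-6):
            # Remove the environment response from the antenna under test (AUT)
            # by deconvolving with H, then return to the original domain.
            M = np.fft.fft(aut_measured)
            F_est = M * np.conj(H) / (np.abs(H) ** 2 + eps)
            return np.fft.ifft(F_est)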

    Phosphorylated AKT1 is associated with poor prognosis in esophageal squamous cell carcinoma

    BACKGROUND: The epidermal growth factor receptor (EGFR) signaling pathway is important in regulating biological behaviors in many malignancies. We explored whether the expression and activation of EGFR and several components of its downstream pathways have prognostic significance in patients with esophageal squamous cell carcinoma (ESCC). METHODS: Expression of EGFR, phosphorylated (p)-EGFR, AKT1, p-AKT1, AKT2, p-AKT2, ERK1, ERK2, p-ERK1/2, STAT3, and p-STAT3 was assessed by immunohistochemical analysis of tissue microarrays for 275 ESCC patients who had undergone complete three-field lymphadenectomy. Spearman rank correlation tests were used to determine the relationships among protein expression levels, and Cox regression analyses were performed to identify prognostic factors for overall survival (OS). RESULTS: p-EGFR expression was statistically correlated with all of the other phosphorylated markers. Gender, N stage, and p-AKT1 expression were found to be independent prognostic factors for OS. Increased expression of p-AKT1 was associated with decreased patient survival. EGFR and p-EGFR expression was not significantly associated with patient survival. CONCLUSION: Activation of AKT1 was associated with poor prognosis in ESCC.
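    For readers unfamiliar with the survival analysis referred to above, a proportional-hazards fit of marker expression against overall survival can be sketched as follows. The file name, column names, and the use of the lifelines library are illustrative assumptions, not details taken from the study.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Hypothetical per-patient table with follow-up time in months, an event
        # flag (1 = death observed), and marker / clinical covariates.
        df = pd.read_csv("escc_cohort.csv")  # columns: os_months, event, p_akt1, n_stage, gender

        cph = CoxPHFitter()
        cph.fit(df[["os_months", "event", "p_akt1", "n_stage", "gender"]],
                duration_col="os_months", event_col="event")
        cph.print_summary()  # hazard ratio per covariate; HR > 1 implies worse OS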

    A Unified Scheme of ResNet and Softmax

    Large language models (LLMs) have brought significant changes to human society. Softmax regression and residual neural networks (ResNet) are two important techniques in deep learning: they not only serve as significant theoretical components supporting the functionality of LLMs but are also related to many other machine learning and theoretical computer science fields, including but not limited to image classification, object detection, semantic segmentation, and tensors. Previous research studied these two concepts separately. In this paper, we provide a theoretical analysis of the regression problem $\| \langle \exp(Ax) + Ax , {\bf 1}_n \rangle^{-1} ( \exp(Ax) + Ax ) - b \|_2^2$, where $A$ is a matrix in $\mathbb{R}^{n \times d}$, $b$ is a vector in $\mathbb{R}^n$, and ${\bf 1}_n$ is the $n$-dimensional vector whose entries are all $1$. This regression problem is a unified scheme that combines softmax regression and ResNet, which has not been done before. We derive the gradient, Hessian, and Lipschitz properties of the loss function. The Hessian is shown to be positive semidefinite, and its structure is characterized as the sum of a low-rank matrix and a diagonal matrix. This enables an efficient approximate Newton method. As a result, this unified scheme helps to connect two fields previously thought to be unrelated and provides novel insight into the loss landscape and optimization of emerging over-parameterized neural networks, which is meaningful for future research in deep learning models.
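    As a quick reading aid, the objective above can be evaluated directly. The snippet below is a minimal numerical sketch of the loss with shapes and names chosen here for illustration; it is not the paper's approximate Newton solver.

        import numpy as np

        def unified_loss(x, A, b):
            # u(x) = exp(Ax) + Ax combines the softmax-regression term exp(Ax)
            # with a ResNet-style skip term Ax.
            u = np.exp(A @ x) + A @ x
            # Normalize by <u(x), 1_n> and measure the squared distance to b.
            f = u / u.sum()
            return float(np.sum((f - b) ** 2))

        # Example usage with small random data (n = 6, d = 3).
        rng = np.random.default_rng(0)
        A = rng.standard_normal((6, 3))
        x = rng.standard_normal(3)
        b = rng.standard_normal(6)
        print(unified_loss(x, A, b))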

    Entity Alignment

    This open access book systematically investigates the topic of entity alignment, which aims to detect equivalent entities that are located in different knowledge graphs. Entity alignment represents an essential step in enhancing the quality of knowledge graphs, and hence is of significance to downstream applications, e.g., question answering and recommender systems. Recent years have witnessed a rapid increase in the number of entity alignment frameworks, while the relationships among them remain unclear. This book aims to fill that gap by elaborating on the concept and categorization of entity alignment, reviewing recent advances in entity alignment approaches, and introducing novel scenarios and corresponding solutions. Specifically, the book includes comprehensive evaluations and detailed analyses of state-of-the-art entity alignment approaches and strives to provide a clear picture of the strengths and weaknesses of the currently available solutions, so as to inspire follow-up research. In addition, it identifies novel entity alignment scenarios and explores the issues of large-scale data, long-tail knowledge, scarce supervision signals, lack of labelled data, and multimodal knowledge, offering potential directions for future research. The book offers a valuable reference guide for junior researchers, covering the latest advances in entity alignment, and an asset for senior researchers, sharing novel entity alignment scenarios and their solutions. Accordingly, it will appeal to a broad audience in the fields of knowledge bases, database management, artificial intelligence, and big data.

    A Fast Optimization View: Reformulating Single Layer Attention in LLM Based on Tensor and SVM Trick, and Solving It in Matrix Multiplication Time

    Large language models (LLMs) have played a pivotal role in revolutionizing various facets of our daily existence. Solving attention regression is a fundamental task in optimizing LLMs. In this work, we focus on giving a provable guarantee for the one-layer attention network objective function $L(X,Y) = \sum_{j_0 = 1}^n \sum_{i_0 = 1}^d ( \langle \langle \exp( \mathsf{A}_{j_0} x ) , {\bf 1}_n \rangle^{-1} \exp( \mathsf{A}_{j_0} x ), A_{3} Y_{*,i_0} \rangle - b_{j_0,i_0} )^2$. Here $\mathsf{A} \in \mathbb{R}^{n^2 \times d^2}$ is the Kronecker product of $A_1 \in \mathbb{R}^{n \times d}$ and $A_2 \in \mathbb{R}^{n \times d}$, $A_3$ is a matrix in $\mathbb{R}^{n \times d}$, and $\mathsf{A}_{j_0} \in \mathbb{R}^{n \times d^2}$ is the $j_0$-th block of $\mathsf{A}$. The matrices $X, Y \in \mathbb{R}^{d \times d}$ are the variables we want to learn. $B \in \mathbb{R}^{n \times d}$ is a target matrix, $b_{j_0,i_0} \in \mathbb{R}$ is its entry in the $j_0$-th row and $i_0$-th column, $Y_{*,i_0} \in \mathbb{R}^d$ is the $i_0$-th column of $Y$, and $x \in \mathbb{R}^{d^2}$ is the vectorization of $X$. In a multi-layer LLM, the matrix $B \in \mathbb{R}^{n \times d}$ can be viewed as the output of a layer and $A_1 = A_2 = A_3 \in \mathbb{R}^{n \times d}$ as its input. The matrix version of $x$ can be viewed as $QK^\top$, and $Y$ can be viewed as $V$. We provide an iterative greedy algorithm that trains the loss function $L(X,Y)$ to accuracy $\epsilon$ and runs in $\widetilde{O}( ({\cal T}_{\mathrm{mat}}(n,n,d) + {\cal T}_{\mathrm{mat}}(n,d,d) + d^{2\omega}) \log(1/\epsilon) )$ time, where ${\cal T}_{\mathrm{mat}}(a,b,c)$ denotes the time needed to multiply an $a \times b$ matrix by a $b \times c$ matrix and $\omega \approx 2.37$ denotes the exponent of matrix multiplication.
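    In matrix form, the objective above is the squared Frobenius-norm error of a row-normalized one-layer attention map. The snippet below is a minimal sketch of that evaluation; the matrix-form rewriting and the variable names are assumptions made here for illustration, not the paper's algorithm.

        import numpy as np

        def one_layer_attention_loss(X, Y, A1, A2, A3, B):
            # Under the usual vectorization convention, exp(A_{j0} x) with
            # x = vec(X) corresponds to row j0 of exp(A1 @ X @ A2.T).
            S = np.exp(A1 @ X @ A2.T)
            # Row-wise normalization plays the role of the <., 1_n>^{-1} factor.
            P = S / S.sum(axis=1, keepdims=True)
            # The prediction P @ A3 @ Y is compared entrywise with the target B.
            return float(np.sum((P @ A3 @ Y - B) ** 2))

        # Example usage with small random shapes (n = 5, d = 3).
        rng = np.random.default_rng(1)
        n, d = 5, 3
        A1, A2, A3 = (rng.standard_normal((n, d)) for _ in range(3))
        X, Y = rng.standard_normal((d, d)), rng.standard_normal((d, d))
        B = rng.standard_normal((n, d))
        print(one_layer_attention_loss(X, Y, A1, A2, A3, B))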