Hybrid Coding Technique for Pulse Detection in an Optical Time Domain Reflectometer
The paper introduces a novel hybrid coding technique for improved pulse detection in an optical time domain reflectometer (OTDR). The hybrid scheme combines Simplex codes with signal averaging, yielding a coding technique that considerably reduces the processing time needed to extract a specified coding gain in comparison to existing techniques. The paper quantifies the coding gain of the hybrid scheme mathematically and provides simulation results in direct agreement with the theoretical performance. Furthermore, the hybrid scheme has been tested on our self-developed OTDR.
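The paper's exact hybrid-gain expression is not reproduced here, but as a minimal sketch under standard assumptions: a Simplex code of length L gives an SNR coding gain of (L+1)/(2√L), and averaging N independent traces gives a further √N, so a multiplicative combination of the two can be computed as follows (illustrative only, not the paper's derivation):

```python
import math

def simplex_coding_gain(L):
    # Standard SNR coding gain of a Simplex code of length L (L = 2**k - 1)
    return (L + 1) / (2 * math.sqrt(L))

def averaging_gain(N):
    # SNR improvement from averaging N independent OTDR traces
    return math.sqrt(N)

def hybrid_gain(L, N):
    # Combined gain, assuming the two mechanisms are independent
    return simplex_coding_gain(L) * averaging_gain(N)
```

For example, a length-7 Simplex code with 4 averaged traces yields roughly twice the single-mechanism gain of either component alone.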
Time-Modulated Array Co-Simulation with the Aid of Commercial Software
This thesis presents an integrated electromagnetic/circuit-level technique for analyzing time-modulated arrays. I reproduce, in commercial software, work that was previously developed in the CAD domain, i.e. combining an exact harmonic-balance analysis of the non-linear switches with an electromagnetic characterization of the radiating elements. In this commercial software I perform a full-wave co-simulation to compute the radiated far-field envelope and show the array behavior. Simulation in this new software allows a precise evaluation of several non-linear performance aspects of the radiating system, such as its power usage capabilities and the upper limit on the switch modulation frequency.
Secondly, I discuss a TMA wireless power transfer technique based on a two-step procedure. Here I carry out the full-wave co-simulation in the same commercial software to analyze an antenna array with a modulated non-linear feeding network; a Schottky-diode-based network provides properly modulated RF excitations of the array elements. The array architecture is therefore extremely simple compared to phased arrays, requiring only a simple control circuit board.
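As background to the TMA analysis (a textbook sketch, not the thesis's co-simulation), the harmonic content produced by periodically switching an element on and off follows from the Fourier coefficients of its rectangular switching pulse; the function below computes them for a pulse that is ON over [t_on, t_on + tau] within one modulation period T:

```python
import cmath
import math

def tma_harmonic_coeff(k, t_on, tau, T=1.0):
    # k-th Fourier coefficient of a unit rectangular switching pulse,
    # ON over [t_on, t_on + tau] within one modulation period T
    # (the classic time-modulated-array switching model).
    if k == 0:
        return tau / T  # DC term: the duty cycle
    x = math.pi * k * tau / T
    # (tau/T) * sinc term * phase factor from the pulse position
    return (tau / T) * (math.sin(x) / x) * cmath.exp(-1j * math.pi * k * (2 * t_on + tau) / T)
```

With a 50% duty cycle the fundamental sideband magnitude comes out to 1/pi, the familiar first-harmonic level of a square switching waveform.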
Data Scarcity in Event Analysis and Abusive Language Detection
Lack of data is almost always the cause of suboptimal performance in neural networks. Even though data-scarce scenarios can be simulated for any task by assuming limited access to training data, we study two problem areas where data scarcity is a practical challenge: event analysis and abusive content detection. Journalists, social scientists, and political scientists need to retrieve and analyze event mentions in unstructured text to compute useful statistics for understanding society. We claim that it is hard to specify an information need about events using a keyword-based representation and propose a Query by Example (QBE) setting for event retrieval. In the QBE setting, we assume there are a few example sentences mentioning the event class a user is interested in, and we aim to retrieve relevant events using only those examples as a query. Traditional event detection approaches are not applicable in this setting because event detection datasets are constructed from pre-defined schemas, which limits them to a small set of event and event-argument types. Moreover, the amount of annotated data in event detection datasets is so limited that it only allows us to build a retrieval corpus for evaluation. We therefore assume that there are no relevance judgments to train an event retrieval model -- except for the few examples of a specific event type. We create three QBE evaluation settings from three event detection datasets: PoliceKilling, ACE, and IndiaPoliceEvents. For the PoliceKilling dataset, where a relevant sentence describes a police killing event, we show that a query model constructed from NLP features extracted from the few given examples is effective compared to event detection baselines. For the ACE dataset, where there are thirty-three types of events, we construct a QBE setting for each type and show that a sentence embedding approach transfers effectively for event matching.
Finally, we conduct a unified evaluation over all three datasets using the sentence-embedding-based model and show that it outperforms strong baselines.
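The QBE setting described above can be sketched very simply: mean-pool the embeddings of the few example sentences into a query vector and rank the corpus by cosine similarity. The sketch below assumes sentence embeddings have already been computed (by any encoder); the function name and shapes are illustrative, not the thesis's implementation:

```python
import numpy as np

def qbe_rank(example_embs, corpus_embs):
    # Build a query vector by mean-pooling the few example sentence embeddings
    query = np.asarray(example_embs, dtype=float).mean(axis=0)
    query = query / np.linalg.norm(query)
    # L2-normalize corpus embeddings so the dot product is cosine similarity
    corpus = np.asarray(corpus_embs, dtype=float)
    corpus = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = corpus @ query
    return np.argsort(-scores)  # corpus indices, most relevant first
```

Given two toy example vectors near the x-axis, a corpus sentence also near the x-axis ranks above one orthogonal to it.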
We further examine the effect of data scarcity in abusive language detection. We first study a specific type of abusive language -- hate speech. Neural hate speech detection models trained on one dataset generalize poorly to a dataset from a different domain, because the characteristics of hate speech vary with racial and cultural context. Our data scarcity scenario assumes that we have a hate speech dataset from one domain and that it needs to generalize to a test set from another domain using only unlabeled data from the test domain; thus we assume zero labeled target-domain data. To tackle this data scarcity, we propose an unsupervised domain adaptation approach to augment labeled data for hate speech detection. We evaluate the approach with three different models (character CNNs, BiLSTMs, and BERT) on three different collections. We show our approach improves the area under the Precision/Recall curve by as much as 42% and recall by as much as 278%, with no loss (and in some cases a significant gain) in precision.
Finally, we examine the cross-lingual abusive language detection problem. Abusive language is a superclass of hate speech that includes profanity, aggression, offensiveness, cyberbullying, toxicity, and hate speech itself. There are large abusive language detection datasets in English, such as Jigsaw; for other languages datasets exist but with very limited data. We propose a cross-lingual transfer learning approach to learn an effective neural abusive language classifier for such low-resource languages with the help of a dataset from a resource-rich language. The framework is based on a nearest-neighbor architecture and is thus interpretable by design. It is a modern instantiation of the classic k-nearest neighbor model, as we use transformer representations in all its components. Unlike prior work on neighborhood-based approaches, we encode the neighborhood information based on query-neighbor interactions. We propose two encoding schemes and show their effectiveness using both qualitative and quantitative analyses. Our evaluation results on eight languages from two different datasets for abusive language detection show sizable improvements in F1 over strong baselines.
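The nearest-neighbor backbone of such a framework can be sketched as a similarity-weighted k-NN vote over embedding vectors; the thesis's query-neighbor interaction encoding is richer than this, so treat the following as a baseline illustration only (all names and shapes are hypothetical):

```python
import numpy as np

def knn_predict(query_emb, neighbor_embs, neighbor_labels, k=3):
    # Cosine similarity between the query and each labeled neighbor embedding
    q = np.asarray(query_emb, dtype=float)
    q = q / np.linalg.norm(q)
    n = np.asarray(neighbor_embs, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    sims = n @ q
    top = np.argsort(-sims)[:k]
    # Similarity-weighted vote over the k nearest labeled neighbors
    votes = {}
    for i in top:
        lbl = neighbor_labels[i]
        votes[lbl] = votes.get(lbl, 0.0) + sims[i]
    return max(votes, key=votes.get)
```

Because the prediction is a weighted vote over retrieved labeled examples, the retrieved neighbors themselves serve as the model's explanation, which is the interpretability-by-design property the abstract refers to.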
Bäcklund maps and related connections
The aim of this thesis is to show a link between solutions of differential equations and the integral submanifolds of sets of forms defined on jet bundles. The original idea behind Bäcklund maps was discovered by A. V. Bäcklund around 1875 during research into pseudospherical surfaces, i.e. surfaces of constant negative curvature. The central idea of this thesis is the Bäcklund map, which is a smooth map of jet bundles parameterised by the target manifold of its co-domain. The original system of differential equations appears as a system of integrability conditions for the Bäcklund map. The map induces a horizontal distribution on its domain from the natural contact structure of its co-domain, which makes possible a geometrical description in terms of a connection, called here the Bäcklund connection; the system of integrability conditions reappears as the vanishing of the curvature of this connection. The paper by Bäcklund was very obscurely written and perhaps for that reason was ignored for nearly a hundred years. Published work in applied mathematics, hydrodynamics, mechanics, and fluid mechanics has revived interest in Bäcklund maps and related topics. Chapter I gives a brief account of jet bundles (following Pirani) and the contact module on jet bundles. Chapter II summarises different ways of describing the integrability conditions associated with Bäcklund maps; it also explains the hash operator and the use of the contact module, with some examples. Chapter III introduces the idea of prolongation and explains it with some examples. Chapter IV discusses the idea of connections associated with Bäcklund maps: the pull-back of the contact module of forms on J¹(M,N₂) determines a Cartan connection, and solutions of the differential equation correspond to the vanishing of the curvature tensor of this connection. I conclude the introduction with a summary of my notation and conventions. All objects and maps are assumed to be C^∞; in the applications they are generally real-analytic.
If f: M→N is a map, then the domain of f is an open set in M, not necessarily the whole of M. If M is any set, then Δ_M denotes the diagonal map M→M×M given by m ↦ (m,m) for every m∈M. If ϕ is a map of manifolds, then ϕ* is the induced map of forms and functions. If Θ is any collection of exterior forms, then ϕ*Θ means {ϕ*θ | θ∈Θ}, dΘ means {dθ | θ∈Θ}, and I(Θ) means the ideal {η∧θ | θ∈Θ, η any form} generated by Θ, where ∧ denotes the exterior product; X⌟Θ means {X⌟θ | θ∈Θ}, where X is any vector field and ⌟ denotes the interior product of a vector field and a form. Projection of a cartesian product onto the i-th factor is denoted by pr_i. The end of an example is denoted by □.
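As a classical illustration of the objects this thesis studies (a textbook example, not taken from the thesis itself), the pseudospherical surfaces that motivated Bäcklund correspond to the sine-Gordon equation, and its Bäcklund transformation with parameter λ relates a known solution u to a new solution u':

```latex
u_{xt} = \sin u, \qquad
\begin{cases}
\partial_x\!\left(\dfrac{u'-u}{2}\right) = \lambda \,\sin\!\left(\dfrac{u'+u}{2}\right),\\[6pt]
\partial_t\!\left(\dfrac{u'+u}{2}\right) = \dfrac{1}{\lambda}\,\sin\!\left(\dfrac{u'-u}{2}\right).
\end{cases}
```

Cross-differentiating the two first-order equations shows that u and u' each satisfy u_{xt} = sin u; in the thesis's language, the original equation reappears as the integrability condition of the Bäcklund map.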
An Analysis of Public Expenditure on Education in Pakistan
Achieving economic growth is an important goal of any country. However, in recent years it has increasingly been realised that economic growth is a necessary but not a sufficient condition for human development. Pakistan provides a good example of a country which has historically enjoyed a respectable GDP growth rate and yet failed to translate this positive development into a satisfactory level of human development. Since its independence in 1947, Pakistan’s development policies have focused primarily on realising high economic growth and only incidentally on the task of providing social necessities. Such a process has given rise to a structure of production and distribution which has been only indirectly responsive to social goals. However, there is now a growing realisation that we could have done much better had we stressed human resource investments relatively more.
Measuring Impact of Demographic and Environmental Factors on Small Business Performance: A case study of D.I.Khan KPK (Pakistan)
Small businesses play a vital role in economic development, as they can provide the economy with efficiency, innovation, competition, and job growth. The environment and the entrepreneur are responsible for the success of a business. To determine the impact of the environment and various characteristics of the entrepreneur on small businesses, data were collected through a structured questionnaire from 60 randomly selected respondents in D.I.Khan. Regression analysis showed significant positive effects of investment, entrepreneurial experience, business profile, and culture, with R² = 0.638 and F = 11.222. The researcher suggests providing opportunities to develop skills for business promotion.
Bankruptcy prediction: static logit and discrete hazard models incorporating macroeconomic dependencies and industry effects
In this thesis, we present firm default prediction models based on firm financial statements
and macroeconomic variables. We seek to develop reliable models to forecast out-of-sample
default probability, and we are particularly interested in exploring the impact of
incorporating macroeconomic variables and industry effects. To the best of our knowledge,
this is the first study to account for both macroeconomic dependencies and industry effects
in one analysis. Additionally, we investigate the impact of the 2008 financial crisis on
bankruptcies.
We develop five models, one static logit model and four hazard models, and compare the
out-of-sample predictive performance of these models. To explore the impact of industry
effects and the financial crisis, our study includes 562 U.S. public companies across all
sectors (except financial) that filed for bankruptcy between 2003 and 2013. These were
matched to a control group of non-bankrupt firms.
We find that cash flow, profitability, leverage, liquidity, solvency, and firm size are all
significant determinants of bankruptcy. The ratio of cash flow from operations to total
liabilities, and total debt to total assets, are the most significant variables in the static logit
model. In addition to these ratios, cash to total assets and net income to total assets are
also among the most important covariates in the hazard models. Next, we find that the
forecasting results are improved by incorporating macroeconomic variables. Finally, we find
that the hazard model with macroeconomic variables and industry effects has the best out-of-sample
accuracy.
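The static logit model described above maps a firm's financial ratios to a default probability through a logistic link. The sketch below uses the two most significant covariates named in the abstract, but the coefficient values are purely hypothetical placeholders, not the thesis's estimates:

```python
import math

# Hypothetical coefficients for illustration only -- NOT the thesis's estimates.
# CFO_TL: cash flow from operations / total liabilities (higher -> less risky)
# TD_TA:  total debt / total assets            (higher -> more risky)
BETA = {"intercept": -2.0, "CFO_TL": -3.5, "TD_TA": 2.8}

def default_probability(cfo_tl, td_ta):
    # Static logit: P(default) = 1 / (1 + exp(-x'beta))
    z = BETA["intercept"] + BETA["CFO_TL"] * cfo_tl + BETA["TD_TA"] * td_ta
    return 1.0 / (1.0 + math.exp(-z))
```

A hazard-model variant would evaluate this link each period on time-varying covariates (including the macroeconomic variables) rather than once per firm, which is the key structural difference the thesis compares.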