Computing Network of Diseases and Pharmacological Entities through the Integration of Distributed Literature Mining and Ontology Mapping
The proliferation of -omics (e.g., genomics, proteomics) and -ology (e.g., systems biology, cell biology, pharmacology) disciplines has spawned new frontiers of research in drug discovery and personalized medicine. A vast number (21 million) of published research results are archived in PubMed, and the archive is continually growing. To improve the accessibility and utility of such a large body of literature, it is critical to develop a suite of semantics-sensitive technologies capable of discovering knowledge and inferring possible new relationships from statistical co-occurrences of meaningful terms or concepts. In this context, this thesis presents a unified framework for mining a large body of literature through the integration of latent semantic analysis (LSA) and ontology mapping. In particular, a parameter-optimized, robust, scalable, and distributed LSA (DiLSA) technique was designed and implemented on a carefully selected set of 7.4 million PubMed records related to pharmacology. The DiLSA model was integrated with MeSH to make it effective and efficient for a specific domain, and an optimized multi-gram dictionary, customized by mapping to MeSH, was used to build the model. A fully integrated web-based application, called PharmNet, was developed to bridge the gap between biological knowledge and clinical practice. Preliminary analysis using PharmNet shows improved performance over a global LSA model. A limited expert evaluation was performed to validate the retrieved results and networks against the biological literature. A thorough performance evaluation and validation of results is in progress.
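As a rough illustration of the LSA step underlying DiLSA, the sketch below builds a term-document matrix from a few toy "abstracts", reduces it with truncated SVD, and compares documents in the resulting latent space. The documents, the naive tokenization, and k=2 are illustrative only; the thesis operates on 7.4 million PubMed records with a MeSH-derived multi-gram dictionary.

```python
import numpy as np

# Toy "abstracts" standing in for PubMed records (illustrative only).
docs = [
    "aspirin inhibits platelet aggregation",
    "aspirin reduces inflammation and pain",
    "statin lowers cholesterol levels",
    "statin therapy reduces cholesterol",
]

# Term-document count matrix (rows: terms, columns: documents).
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD: keep k latent dimensions (the "semantic" space).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per document

def cos(a, b):
    # Cosine similarity in the latent space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two aspirin abstracts land closer to each other than to the statin ones.
```

The same idea scales to millions of documents by replacing the dense SVD with a distributed or randomized solver, which is the engineering problem DiLSA addresses.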
Distributed Denial of Service Attack Detection
Distributed Denial of Service (DDoS) attacks on web applications have been a persistent threat. Successful attacks can render services inaccessible to legitimate users and damage business reputation. Most research effort on DDoS has focused on network-layer attacks, and existing approaches to application-layer DDoS mitigation have limitations, such as the inability to detect low-rate DDoS attacks or attacks targeting resource files. In this work, we propose DDoS attack detection using concepts from information retrieval and machine learning. We include two popular concepts from information retrieval: Term Frequency-Inverse Document Frequency (TF-IDF) and Latent Semantic Indexing (LSI). We analyzed web server log data generated in a distributed environment. Our evaluation results indicate that while all the approaches can detect various ranges of attacks, the information retrieval approaches can identify attacks ongoing in a given session. All the approaches can detect three well-known application-level DDoS attacks (trivial, intermediate, advanced). Further, these approaches can help an administrator identify new patterns of DDoS attacks.
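As a rough sketch of one of the information-retrieval ideas named above, the snippet below computes TF-IDF weights over per-session request sequences from a toy web log: a session that hammers a single rare resource file concentrates its weight on that URL, a signal a detector could threshold on. The log lines and layout are illustrative, not the paper's dataset or exact method.

```python
import math
from collections import Counter

# Toy per-session request sequences (each "document" = one client session).
sessions = {
    "s1": ["/index", "/about", "/products", "/contact"],
    "s2": ["/index", "/products", "/cart", "/checkout"],
    "s3": ["/index", "/about", "/cart"],
    # A suspicious session hammering a single heavy resource file:
    "bot": ["/report.pdf"] * 50,
}

def tf_idf(sessions):
    n = len(sessions)
    df = Counter()  # document frequency: in how many sessions each URL occurs
    for reqs in sessions.values():
        df.update(set(reqs))
    vectors = {}
    for sid, reqs in sessions.items():
        tf = Counter(reqs)
        vectors[sid] = {
            url: (count / len(reqs)) * math.log(n / df[url])
            for url, count in tf.items()
        }
    return vectors

vecs = tf_idf(sessions)
# The bot session's weight is concentrated on one rare URL; normal sessions
# spread low weights over common pages.
```

LSI would go one step further, factorizing the session-by-URL matrix to compare whole sessions in a latent space rather than URL by URL.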
Cross-language Ontology Learning: Incorporating and Exploiting Cross-language Data in the Ontology Learning Process
Hans Hjelm. Cross-language Ontology Learning: Incorporating and Exploiting Cross-language Data in the Ontology Learning Process. NEALT Monograph Series, Vol. 1 (2009), 159 pages.
© 2009 Hans Hjelm. Published by the Northern European Association for Language Technology (NEALT), http://omilia.uio.no/nealt.
Electronically published at Tartu University Library (Estonia): http://hdl.handle.net/10062/10126
LEARNING FROM MULTIPLE VIEWS OF DATA
This dissertation takes inspiration from the ability of our brain to extract information and learn from multiple sources of data, and tries to mimic this ability for some practical problems. It explores the hypothesis that the human brain can extract and store information from raw data in a form, termed a common representation, suitable for cross-modal content matching. Human-level performance on this task requires: a) the ability to extract sufficient information from raw data, and b) algorithms to obtain a task-specific common representation from multiple sources of extracted information. This dissertation addresses both requirements and develops novel content extraction and cross-modal content matching architectures.
The first part of the dissertation proposes a learning-based visual information extraction approach, the Recursive Context Propagation Network (RCPN), for semantic segmentation of images. It is a deep neural network that exploits contextual information from the entire image through bottom-up followed by top-down context propagation. This improves the feature representation of every super-pixel in an image for better classification into semantic categories. Analysis of RCPN reveals that the presence of bypass-error paths can hinder effective context propagation, and it is shown that bypass errors can be tackled by also including the classification loss of internal nodes. Secondly, a novel tree-MRF structure is developed over the parse trees to model the hierarchical dependency present in the output.
The second part of this dissertation develops algorithms to obtain and match common representations across different modalities. A novel Partial Least Squares (PLS)-based framework is proposed to learn a common subspace from multiple modalities of data. It is used for multi-modal face biometric problems such as pose-invariant face recognition and sketch-face recognition. The sensitivity of this approach to noise in pose variation is analyzed, and a two-stage discriminative model is developed to tackle it. Finally, a generalized framework is proposed to extend various popular feature extraction techniques that can be solved as a generalized eigenvalue problem to their multi-modal counterparts. It is termed Generalized Multiview Analysis (GMA) and is used for pose- and lighting-invariant face recognition and text-image retrieval.
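The core of a PLS-style common subspace can be sketched in a few lines: the first pair of projection directions is given by the top singular vectors of the cross-covariance between the two centered modalities (the PLS-SVD formulation). The sketch below uses synthetic data with a shared latent factor; the dimensions, noise level, and 1-D subspace are illustrative, not the dissertation's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "modalities" of the same 100 samples (e.g. photo features and
# sketch features), both driven by one shared latent factor plus noise.
latent = rng.normal(size=(100, 1))
X = latent @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(100, 5))
Y = latent @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(100, 4))

# Center each modality.
Xc, Yc = X - X.mean(0), Y - Y.mean(0)

# First pair of PLS weight vectors: top singular vectors of the
# cross-covariance matrix between the two modalities.
U, s, Vt = np.linalg.svd(Xc.T @ Yc)
wx, wy = U[:, 0], Vt[0]

# Project both modalities into the 1-D common subspace.
px, py = Xc @ wx, Yc @ wy

# The two projections are strongly correlated: both recover the shared factor.
corr = np.corrcoef(px, py)[0, 1]
```

GMA generalizes this pattern: each such technique becomes a generalized eigenvalue problem whose solution couples the per-modality projections.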
Vector representation of Internet domain names using word embedding techniques
Word embeddings are a well-known set of techniques widely used in natural language processing (NLP). This thesis explores their use in a new scenario: a vector space model (VSM) for Internet domain names (DNS) is created by taking core ideas from NLP techniques and applying them to real, anonymized DNS log queries from a large Internet Service Provider (ISP). The main goal is to find semantically similar domains using only information from DNS queries, without any other knowledge about the content of those domains.
A set of transformations, organized as a detailed preprocessing pipeline with eight specific steps, is defined to map the original problem onto a problem in the NLP field. Once the preprocessing pipeline is applied and the DNS log files are transformed into a standard text corpus, we show that state-of-the-art word embedding techniques can be successfully applied to build what we call a DNS-VSM (a vector space model for Internet domain names).
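The log-to-corpus transformation described above can be sketched in miniature: group queries by client and emit each client's domain sequence as one "sentence". The field layout and the two cleanup steps shown are illustrative stand-ins, not the thesis's exact eight-step pipeline.

```python
from collections import defaultdict

# Toy DNS query log: timestamp, anonymized client id, queried domain.
raw_log = [
    "1510000001 client-a www.news-site.com",
    "1510000002 client-a cdn.news-site.com",
    "1510000003 client-b mail.example.org",
    "1510000009 client-a www.sports-site.com",
    "1510000010 client-b www.example.org",
]

# Group queries by client, normalizing each domain along the way
# (lowercase, strip any trailing dot - two typical cleanup steps).
sentences = defaultdict(list)
for line in raw_log:
    ts, client, domain = line.split()
    sentences[client].append(domain.lower().rstrip("."))

# One "sentence" of domains per client: a plain text corpus ready for any
# off-the-shelf word-embedding trainer (Word2Vec, FastText, ...).
corpus = [" ".join(domains) for domains in sentences.values()]
```

Domains queried close together in a client's sequence play the role of words in a sentence, which is what lets the embedding models pick up on co-visitation patterns.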
Several word embedding techniques are evaluated in this work: Word2Vec (with the Skip-Gram and CBOW architectures), App2Vec (with a CBOW architecture, adding time gaps between DNS queries), and FastText (which includes sub-word information). The obtained results are compared using various metrics from Information Retrieval theory, and the quality of the learned vectors is validated against a third-party source, namely the similar-sites service offered by Alexa Internet, Inc.
Due to intrinsic characteristics of domain names, we found FastText to be the best option for building a vector space model for DNS. Furthermore, its performance (considering the top 3 most similar learned vectors for each domain) is compared against two baseline methods: Random Guessing (returning any domain name from the dataset at random) and Zero Rule (always returning the same most popular domains), outperforming both considerably.
The results presented in this work can be useful in many engineering activities, with practical applications in many areas. Examples include website recommendations based on similar sites, competitive analysis, identification of fraudulent or risky sites, parental-control systems, UX improvements (based on recommendations, spell correction, etc.), click-stream analysis, representation and clustering of users' navigation profiles, and optimization of cache systems in recursive DNS resolvers, among others.
Finally, as a contribution to the research community, a set of vectors of the DNS-VSM, trained on a dataset similar to the one used in this thesis, is released and made available for download through the GitHub page in [1]. With this we hope that further work and research can be done using these vectors.
- …