
    Sensing real-world events using Arabic Twitter posts

    In recent years, there has been increased interest in event detection using data posted to social media sites. Automatically transforming user-generated content into information about events is a challenging task due to the short, informal language used within the content and the variety of topics discussed on social media. Recent advances in detecting real-world events in English and other languages have been published; however, event detection in the Arabic language has been limited to date. To address this task, we present an end-to-end event detection framework comprising six main components: data collection, pre-processing, classification, feature selection, topic clustering, and summarization. Large-scale experiments over millions of Arabic Twitter messages show the effectiveness of our approach for detecting real-world event content in Twitter posts.
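    The six-stage pipeline described in this abstract could be sketched roughly as a chain of simple functions. This is only an illustrative skeleton, not the authors' implementation; all function names and the keyword-overlap "classifier" are hypothetical stand-ins for the trained components:

    ```python
    import re

    def preprocess(tweet: str) -> str:
        """Minimal normalisation: strip URLs and @mentions, collapse whitespace."""
        tweet = re.sub(r"https?://\S+|@\w+", " ", tweet)
        return re.sub(r"\s+", " ", tweet).strip()

    def classify(tweet: str, event_keywords: set) -> bool:
        """Toy event/non-event decision: keyword overlap stands in for a trained model."""
        return bool(set(tweet.split()) & event_keywords)

    def cluster(tweets: list) -> dict:
        """Group event tweets by their first token (placeholder for topic clustering)."""
        groups = {}
        for t in tweets:
            key = t.split()[0] if t.split() else ""
            groups.setdefault(key, []).append(t)
        return groups

    def detect_events(raw_tweets: list, event_keywords: set) -> dict:
        """Collection is assumed done upstream; run the remaining stages in order."""
        cleaned = [preprocess(t) for t in raw_tweets]
        event_tweets = [t for t in cleaned if classify(t, event_keywords)]
        return cluster(event_tweets)
    ```

    A real system would replace the keyword filter with a supervised classifier and the grouping step with proper topic clustering; the point here is only the staged data flow.
    
    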

    EveTAR: Building a Large-Scale Multi-Task Test Collection over Arabic Tweets

    This article introduces a new language-independent approach for creating a large-scale, high-quality test collection of tweets that supports multiple information retrieval (IR) tasks without running a shared-task campaign. The adopted approach (demonstrated over Arabic tweets) designs the collection around significant (i.e., popular) events, which enables the development of topics that represent frequent information needs of Twitter users for which rich content exists. That inherently facilitates the support of multiple tasks that generally revolve around events, namely event detection, ad-hoc search, timeline generation, and real-time summarization. The key highlights of the approach include diversifying the judgment pool via interactive search and multiple manually-crafted queries per topic, collecting high-quality annotations via crowd-workers for relevancy and in-house annotators for novelty, filtering out low-agreement topics and inaccessible tweets, and providing multiple subsets of the collection for better availability. Applying our methodology to Arabic tweets resulted in EveTAR, the first freely-available tweet test collection for multiple IR tasks. EveTAR includes a crawl of 355M Arabic tweets and covers 50 significant events for which about 62K tweets were judged with substantial average inter-annotator agreement (Kappa value of 0.71). We demonstrate the usability of EveTAR by evaluating existing algorithms in the respective tasks. Results indicate that the new collection can support reliable ranking of IR systems comparable to similar TREC collections, while providing strong baseline results for future studies over Arabic tweets.
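    The reported inter-annotator agreement (Kappa of 0.71) is typically computed per pair of annotators with Cohen's kappa. A minimal sketch of that statistic for two annotators labelling the same items (the exact averaging scheme used by the authors is not specified here):

    ```python
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa: chance-corrected agreement between two annotators."""
        assert len(labels_a) == len(labels_b) and labels_a
        n = len(labels_a)
        # Observed agreement: fraction of items the two annotators label identically.
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Expected agreement under independence, from each annotator's label frequencies.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)
    ```

    Values around 0.61–0.80 are conventionally read as "substantial" agreement, which matches how the abstract characterises 0.71.
    
    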

    Classification of colloquial Arabic tweets in real-time to detect high-risk floods

    Twitter has eased real-time information flow for decision makers, and it is also one of the key enablers of open-source intelligence (OSINT). Tweet mining has recently been used in incident response to estimate the location and damage caused by hurricanes and earthquakes. We aim to detect a specific type of high-risk natural disaster that frequently occurs and causes casualties in the Arabian Peninsula, namely floods. We investigate how to achieve accurate classification of the short, informal (colloquial) Arabic text commonly used on Twitter, which is highly inconsistent and has received very little attention in this field. First, we provide a thorough technical demonstration consisting of the following stages: data collection (Twitter REST API), labelling, text pre-processing, data division and representation, and model training, implemented in R. We then evaluate classifier performance via four experiments that measure the impact of different stemming techniques on the following classifiers: SVM, J48, C5.0, NNET, NB, and k-NN. The dataset used consisted of 1,434 tweets in total. Our findings show that the Support Vector Machine (SVM) was prominent in terms of accuracy (F1 = 0.933). Furthermore, applying McNemar's test shows that using SVM without stemming on colloquial Arabic is significantly better than using stemming techniques.
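    McNemar's test, used above to compare SVM with and without stemming, looks only at the items on which the two classifiers disagree. A minimal sketch of the test statistic (with the usual continuity correction; this is the standard formulation, not code from the paper):

    ```python
    def mcnemar_statistic(y_true, pred_a, pred_b):
        """McNemar's chi-squared statistic comparing two classifiers on the same items.

        b = items classifier A gets right and B gets wrong; c = the reverse.
        Under the null hypothesis (equal error rates), the statistic is
        approximately chi-squared with 1 degree of freedom.
        """
        b = sum(t == a and t != p for t, a, p in zip(y_true, pred_a, pred_b))
        c = sum(t != a and t == p for t, a, p in zip(y_true, pred_a, pred_b))
        if b + c == 0:
            return 0.0  # no discordant pairs: the classifiers are indistinguishable
        return (abs(b - c) - 1) ** 2 / (b + c)
    ```

    A statistic above about 3.84 corresponds to significance at the 5% level, which is the kind of threshold behind the paper's "significantly better" claim.
    
    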

    DEVELOPMENT OF BASIC CONCEPT OF ICT PLATFORMS DEPLOYMENT STRATEGY FOR SOCIAL MEDIA MARKETING CONSIDERING TECTONIC THEORY

    This paper presents the authors' analytical view of social impacts, such as targeted advertisements, in the network environment, using Omori's tectonic (aftershock) theory to describe how audience response evolves. This is important in the modern world for realising a desirable e-Government information policy under emerging hybrid threats, which is especially relevant for the information space and for achieving cyber-supremacy. Mathematical and algorithmic foundations are contributed for describing how the architectural deployment of information and communications technologies (ICT) can be used for external regulation of audience response via Social Media Marketing (SMM) principles. This can be performed by the controlled distribution of specified digital content containing particular key phrases, for example social advertisements, and by analysing the resulting feedback. Results of an empirical study of live audience response as a function of controlled impacts are discussed. Election-process data and recent media recordings were analysed as a preliminary proof of the feasibility of the proposed concept. Using the gathered empirical data sets, it is shown that the effect of impacts on the intensity of the targeted audience's response can be subject to external regulation. An index is contributed for assessing the efficiency of an impact's propagation through the audience, computed as the correlation between keyword occurrence and audience response intensity. The approaches suggested in the article can be useful both for building effective interactive systems of state-society interaction and for detecting manipulative traits in attempts to influence a specific audience.
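    Two pieces of the abstract lend themselves to short formulas: the Omori aftershock law, n(t) = K / (c + t)^p, applied by analogy to how audience response decays after a content "impact", and the proposed efficiency index, a correlation between keyword occurrence and response intensity. A sketch under those readings (parameter values and the use of Pearson correlation are illustrative assumptions, not taken from the paper):

    ```python
    import math

    def omori_rate(t, K=100.0, c=1.0, p=1.0):
        """Omori-law decay rate n(t) = K / (c + t)**p, used here by analogy
        for audience-response intensity at time t after an impact."""
        return K / (c + t) ** p

    def pearson_corr(xs, ys):
        """Pearson correlation, e.g. between per-interval keyword-occurrence
        counts and audience-response intensity in the same intervals."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)
    ```

    A correlation near 1 would indicate that the distributed key phrases and the audience response move together, which is what the proposed index is meant to capture.
    
    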

    Can we predict a riot? Disruptive event detection using Twitter

    In recent years, there has been increased interest in real-world event detection using publicly accessible data made available through Internet technology such as Twitter, Facebook, and YouTube. In these highly interactive systems, the general public are able to post real-time reactions to "real world" events, thereby acting as social sensors of terrestrial activity. Automatically detecting and categorizing events, particularly small-scale incidents, using streamed data is a non-trivial task, but would be of high value to public safety organisations such as local police, who need to respond accordingly. To address this challenge, we present an end-to-end integrated event detection framework that comprises five main components: data collection, pre-processing, classification, online clustering, and summarization. The integration between classification and clustering enables the detection of events as well as related smaller-scale "disruptive events": incidents that threaten social safety and security or could disrupt social order. We present an evaluation of the effectiveness of detecting events using a variety of features derived from Twitter posts, namely temporal, spatial, and textual content. We evaluate our framework on a large-scale, real-world dataset from Twitter. Furthermore, we apply our event detection system to a large corpus of tweets posted during the August 2011 riots in England. We use ground-truth data based on intelligence gathered by the London Metropolitan Police Service, which provides a record of actual terrestrial events and incidents during the riots, and show that our system can perform as well as terrestrial sources, and even better in some cases.
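    The "online clustering" component in such a streaming pipeline is often a single-pass, threshold-based scheme: each incoming tweet joins the first sufficiently similar cluster or starts a new one. A minimal sketch (Jaccard similarity over token sets and the 0.3 threshold are illustrative choices, not the paper's exact method):

    ```python
    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity between two token sets."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def online_cluster(tweets, threshold=0.3):
        """Single-pass clustering: each tweet joins the first existing cluster
        whose accumulated token set overlaps enough, else it starts a new one."""
        clusters = []  # list of (token_set, member_tweets) pairs
        for t in tweets:
            tokens = set(t.lower().split())
            for rep, members in clusters:
                if jaccard(tokens, rep) >= threshold:
                    members.append(t)
                    rep |= tokens  # grow the cluster's token representation
                    break
            else:
                clusters.append((tokens, [t]))
        return [members for _, members in clusters]
    ```

    Single-pass schemes like this suit streams because each tweet is examined once; the trade-off is sensitivity to arrival order and to the similarity threshold.
    
    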

    Developing a Methodology Based on Soft Frequent Pattern Mining for Detecting Significant Events in Arabic Microblogs

    Recently, microblogs have become a new communication medium between users, allowing millions of users to post and share content about their own activities and their opinions on different topics. Posting about real-world events as they occur has attracted people to follow events through microblogs instead of mainstream media. As a result, there is an urgent need to detect events in microblogs so that users can identify events quickly and, more importantly, so that higher authorities can respond faster to occurring events by taking proper action. While considerable research has been conducted on event detection in English, the Arabic context has not received much attention, even though there are millions of Arabic-speaking users. Moreover, existing approaches rely on platform-dependent features such as hashtags, mentions, and retweets, which causes them to fail when these features are not present. In addition, approaches that depend only on the presence of frequently used words do not always detect real events, because they cannot differentiate events from general viral topics. In this thesis, we propose an approach for Arabic event detection in microblogs. We first collect the data, then apply a pre-processing step to enhance data quality and reduce noise. The sentence text is analysed and part-of-speech tags are identified. A set of rules is then used to extract event-indicating keywords, called event triggers. The frequency of each event trigger is calculated; triggers with frequencies higher than the average are kept, and the rest are removed. We detect events by clustering similar event triggers together, applying an adapted soft frequent pattern mining algorithm to the remaining triggers. We used a dataset called EveTAR to evaluate the proposed approach. The dataset contains tweets covering different types of Arabic events that occurred within a one-month period. We split the dataset into subsets using different time intervals to mimic the streaming behaviour of microblogs. We used precision, recall, and F-measure as evaluation metrics. The highest average F-measure achieved was 0.717. Our results are acceptable compared to three popular approaches applied to the same dataset.
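    The trigger-filtering step described in this abstract (keep only event triggers whose frequency exceeds the average) can be sketched directly; the function name and the exact definition of "average" (mean occurrences per distinct trigger) are one plausible reading of the text, not the thesis' verified implementation:

    ```python
    from collections import Counter

    def filter_event_triggers(triggers):
        """Keep only triggers whose occurrence count exceeds the mean count
        over all distinct triggers, discarding the rest."""
        counts = Counter(triggers)
        avg = sum(counts.values()) / len(counts)
        return {t: n for t, n in counts.items() if n > avg}
    ```

    The surviving high-frequency triggers would then be fed to the adapted soft frequent pattern mining step, which groups triggers that tend to co-occur into event clusters.
    
    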

    Building a Test Collection for Significant-Event Detection in Arabic Tweets

    With the increasing popularity of microblogging services like Twitter, researchers discovered a rich medium for tackling real-life problems like event detection. However, event detection in Twitter is often obstructed by the lack of public evaluation mechanisms such as test collections (sets of tweets, labels, and queries used to measure the effectiveness of an information retrieval system). The problem is more evident where non-English languages, e.g., Arabic, are concerned. With the recent surge of significant events in the Arab world, news agencies and decision makers rely on Twitter's microblogging service to obtain recent information on events. In this thesis, we address the problem of building a test collection of Arabic tweets (named EveTAR) for the task of event detection. To build EveTAR, we first adopted an adequate definition of an event: a significant occurrence that takes place at a certain time, where an occurrence is significant if there are news articles about it. We collected Arabic tweets using Twitter's streaming API. Then, we identified a set of events from the Arabic data collection using Wikipedia's current events portal. Corresponding tweets were extracted by querying the Arabic data collection with a set of manually-constructed queries. To obtain relevance judgments for those tweets, we leveraged the CrowdFlower crowdsourcing platform. Over a period of 4 weeks, we crawled over 590M tweets, from which we identified 66 events covering 8 different categories, and gathered more than 134k relevance judgments. Each event contains an average of 779 relevant tweets. Over all events, we obtained an average Kappa of 0.6, which denotes substantial agreement.
    EveTAR was used to evaluate three state-of-the-art event detection algorithms. The best-performing algorithms achieved 0.60 in F1 measure and 0.80 in both precision and recall. We plan to make our test collection available for research, including event descriptions, the manually-crafted queries used to extract potentially-relevant tweets, and all judgments per tweet. EveTAR is the first Arabic test collection built from scratch for the task of event detection. Additionally, we show in our experiments that it supports other tasks like ad-hoc search.
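    Turning the 134k crowdsourced judgments into one relevance label per tweet is commonly done by majority vote over each tweet's worker labels. A minimal sketch of that aggregation (a common convention; the thesis' exact aggregation rule is not stated in this abstract):

    ```python
    from collections import Counter

    def aggregate_judgments(worker_labels):
        """Majority vote per tweet: map each tweet's list of worker labels
        to the single most frequent label."""
        return {tweet: Counter(labels).most_common(1)[0][0]
                for tweet, labels in worker_labels.items()}
    ```

    Low-agreement topics, where votes split nearly evenly across many tweets, were filtered out of EveTAR entirely rather than resolved by such a vote.
    
    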

    Unlocking Free-Text Fields with Text Mining, and the Quality of the Information Obtained

    Within companies, unstructured data in the form of free text increasingly accumulates alongside the easily analysed structured data. This work presents techniques for structuring free text, related work, and the advantages and disadvantages of using free-text fields. The focus is on representing the data as vectors and on filtering stopwords. In addition, a prototype for clustering free-text fields is presented and applied to a dataset from the NHTSA. Applying the prototype to the NHTSA dataset clarifies to what extent the free-text fields contain information that is not present in the structured data, and whether the clustering leads to more complete information, that is, to higher data quality. These questions are answered through data analyses on the dataset enriched by the prototype. The prototype is additionally applied to, and evaluated on, a dataset from industry.
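    The vector representation with stopword filtering that this work focuses on is typically a bag-of-words TF-IDF weighting. A minimal sketch (the stop list and the plain tf * log(N/df) weighting are illustrative simplifications, not the prototype's exact configuration):

    ```python
    import math
    from collections import Counter

    STOPWORDS = {"the", "a", "of", "and", "in"}  # illustrative stop list

    def tfidf_vectors(docs):
        """Bag-of-words TF-IDF vectors with stopword filtering.

        Each document becomes a sparse dict {term: tf * log(N / df)}; terms
        occurring in every document get weight 0 and thus carry no signal."""
        tokenised = [[w for w in d.lower().split() if w not in STOPWORDS]
                     for d in docs]
        n = len(docs)
        df = Counter(w for toks in tokenised for w in set(toks))
        vectors = []
        for toks in tokenised:
            tf = Counter(toks)
            vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
        return vectors
    ```

    These vectors would then be fed to a clustering algorithm (e.g. k-means) to group similar free-text entries, which is the role of the prototype described above.
    
    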