
    FCAIR 2012: Formal Concept Analysis Meets Information Retrieval. Workshop co-located with the 35th European Conference on Information Retrieval (ECIR 2013), March 24, 2013, Moscow, Russia

    Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification. The area came into being in the early 1980s and has since spawned over 10,000 scientific publications and a variety of practically deployed tools. FCA allows one to build, from a data table with objects in rows and attributes in columns, a taxonomic data structure called a concept lattice, which can be used for many purposes, especially for Knowledge Discovery and Information Retrieval. The Formal Concept Analysis Meets Information Retrieval (FCAIR) workshop, co-located with the 35th European Conference on Information Retrieval (ECIR 2013), was intended, on the one hand, to attract researchers from the FCA community to a broad discussion of FCA-based research on information retrieval and, on the other hand, to promote the ideas, models, and methods of FCA in the Information Retrieval community.
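    To make the construction described above concrete (a concept lattice derived from an object-attribute table), the following minimal Python sketch enumerates the formal concepts of a tiny invented context. The toy objects, attributes, and helper names are our own illustration, not taken from the workshop proceedings.

```python
from itertools import combinations

# Toy formal context: objects in rows, attributes in columns (invented example).
objects = ["doc1", "doc2", "doc3"]
attributes = ["retrieval", "lattice", "classification"]
incidence = {
    ("doc1", "retrieval"), ("doc1", "lattice"),
    ("doc2", "lattice"), ("doc2", "classification"),
    ("doc3", "retrieval"), ("doc3", "classification"),
}

def common_attributes(objs):
    # Attributes shared by every object in objs (derivation operator on objects).
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def common_objects(attrs):
    # Objects possessing every attribute in attrs (derivation operator on attributes).
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

# A formal concept is a pair (extent, intent) closed under both derivations;
# enumerating the closures of all object subsets yields every concept.
concepts = set()
for r in range(len(objects) + 1):
    for subset in combinations(objects, r):
        intent = common_attributes(set(subset))
        extent = common_objects(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

# Ordered by extent size, these pairs form the concept lattice of the context.
for extent, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), "<->", sorted(intent))
```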

    Proceedings of the First Workshop on Computing News Storylines (CNewsStory 2015)

    This volume contains the proceedings of the 1st Workshop on Computing News Storylines (CNewsStory 2015), held in conjunction with the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2015) at the China National Convention Center in Beijing, on July 31st, 2015. Narratives are at the heart of information sharing. Ever since people began to share their experiences, they have connected them to form narratives. The study of storytelling and the field of literary theory called narratology have developed complex frameworks and models related to various aspects of narrative, such as plot structures, narrative embeddings, characters’ perspectives, reader response, point of view, narrative voice, narrative goals, and many others. These notions from narratology have been applied mainly in Artificial Intelligence and to model formal semantic approaches to narratives (e.g. Plot Units developed by Lehnert (1981)). In recent years, computational narratology has qualified as an autonomous field of study and research. Narrative has been the focus of a number of workshops and conferences (AAAI Symposia, the Interactive Storytelling Conference (ICIDS), Computational Models of Narrative). Furthermore, reference annotation schemes for narratives have been proposed (NarrativeML by Mani (2013)). The workshop aimed at bringing together researchers from different communities working on representing and extracting narrative structures in news, a text genre which is widely used in NLP but which has received little attention with respect to narrative structure, representation and analysis. Advances in NLP technology have made it feasible to look beyond scenario-driven, atomic extraction of events from single documents and to work towards extracting story structures from multiple documents published over time as news streams. Policy makers, NGOs, information specialists (such as journalists and librarians) and others are increasingly in need of tools that support them in finding salient stories in large amounts of information, in order to implement policies more effectively, monitor the actions of “big players” in society and check facts. Their tasks often revolve around reconstructing cases with respect to either specific entities (e.g. persons or organizations) or events (e.g. hurricane Katrina). Storylines represent explanatory schemas that enable us to make better selections of relevant information but also projections into the future. They form a valuable potential for exploiting news data in innovative ways.

    Exploring Pattern Structures of Syntactic Trees for Relation Extraction

    In this paper we explore the possibility of defining an original pattern structure for managing syntactic trees. More precisely, we are interested in the extraction of relations such as drug-drug interactions (DDIs) in medical texts, where sentences are represented as syntactic trees. In this specific pattern structure, called STPS, the similarity operator is based on rooted tree intersection. Moreover, we introduce "Lazy Pattern Structure Classification" (LPSC), a symbolic method able to extract and classify DDI sentences w.r.t. STPS. To decrease computation time, a projection and a set of tree-simplification operations are proposed. We evaluated the method by means of a 10-fold cross-validation on the corpus of the DDI extraction challenge 2011, and we obtained very encouraging results, which are reported at the end of the paper.
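    Since the abstract centers on a similarity operator defined as rooted tree intersection, here is a minimal, simplified Python sketch of intersecting two rooted, labeled trees. It is our own illustration under simplifying assumptions (greedy child matching, invented toy labels), not the STPS operator or the LPSC method from the paper.

```python
from typing import List, Optional, Tuple

# A rooted, labeled tree as (node label, list of child subtrees).
Tree = Tuple[str, List["Tree"]]

def tree_intersection(t1: Tree, t2: Tree) -> Optional[Tree]:
    """Common rooted structure of t1 and t2, or None if the roots differ."""
    label1, children1 = t1
    label2, children2 = t2
    if label1 != label2:
        return None
    common_children = []
    used = set()  # greedy matching: each right-hand child is used at most once
    for c1 in children1:
        for i, c2 in enumerate(children2):
            if i in used:
                continue
            shared = tree_intersection(c1, c2)
            if shared is not None:
                common_children.append(shared)
                used.add(i)
                break
    return (label1, common_children)

# Two toy dependency-like trees for sentences mentioning drug pairs (invented).
s1 = ("interact", [("drug", [("aspirin", [])]), ("drug", [("warfarin", [])])])
s2 = ("interact", [("drug", [("ibuprofen", [])]), ("with", [])])

print(tree_intersection(s1, s2))
# ('interact', [('drug', [])])  -- the rooted structure shared by both trees
```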

    Workshop Notes of the International Workshop "What can FCA do for Artificial Intelligence?" (FCA4AI 2015)

    This volume includes the proceedings of the fourth edition of the FCA4AI ("What can FCA do for Artificial Intelligence?") workshop, co-located with the IJCAI 2015 Conference in Buenos Aires (Argentina). Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification. FCA allows one to build a concept lattice and a system of dependencies (implications) which can be used for many AI needs, e.g. knowledge discovery, learning, knowledge representation, reasoning, ontology engineering, as well as information retrieval and text processing. There are many "natural links" between FCA and AI, and the present workshop was organized to discuss these links and, more generally, to improve the connections between knowledge discovery based on FCA and knowledge management in artificial intelligence.

    Workshop Notes of the Seventh International Workshop "What can FCA do for Artificial Intelligence?"

    These are the proceedings of the seventh edition of the FCA4AI workshop (http://www.fca4ai.hse.ru/), co-located with the IJCAI 2019 Conference in Macao (China). Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at classification and knowledge discovery that can be used for many purposes in Artificial Intelligence (AI). The objective of the FCA4AI workshop is to investigate two main issues: how FCA can support various AI activities (knowledge discovery, knowledge engineering, machine learning, data mining, information retrieval, recommendation...), and how FCA can be extended in order to help AI researchers solve new and complex problems in their domains.

    9th International Workshop "What can FCA do for Artificial Intelligence?" (FCA4AI 2021)

    Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at classification and knowledge discovery that can be used for many purposes in Artificial Intelligence (AI). The objective of the ninth edition of the FCA4AI workshop (see http://www.fca4ai.hse.ru/) is to investigate several issues, such as: how FCA can support various AI activities (knowledge discovery, knowledge engineering, machine learning, data mining, information retrieval, recommendation...), how FCA can be extended in order to help AI researchers solve new and complex problems in their domains, and how FCA can play a role in current trends in AI such as explainable AI and the fairness of algorithms in decision making. The workshop was held in co-location with IJCAI 2021, Montréal, Canada, on August 28, 2021.

    ρan-ρan

    "With the peristaltic gurglings of this gastēr-investigative procedural – a soooo welcomed addition to the ballooning corpus of slot-versatile bad eggs The Confraternity of Neoflagellants (CoN) – [users] and #influencers everywhere will be belly-joyed to hold hands with neomedieval mutter-matter that literally sticks and branches, available from punctum in both frictionless and grip-gettable boke-shaped formats. A game-changer in Brownian temp-controlled phoneme capture, ρan-ρan’s writhing paginations are completely oxygen-soaked, overwriting the flavour profiles of 2013’s thN Lng folk 2go with no-holds-barred argumentations on all voice-like and lung-adjacent functions. Rumoured by experts to be dead to the Worldℱ, CoN has clearly turned its ear canal arrays towards the jabbering OMFG feedback signals from their scores of naive listeners, scrapping all lenticular exegesis and content profiles to construct taped-together vernacular dwellings housing ‘shrooming atmospheric awarenesses and pan-dimensional cross-talkers, making this anticipatory sequel a serious competitor across ambient markets, and a crowded kitchen in its own right. An utterly mondegreen-infested deep end may deter would-be study buddies from taking the plunge, but feet-wetted Dog Heads eager to sniff around for temporal folds and whiff past the stank of hastily proscribed future fogs ought to ©k no further than the roll-upable-rim of ρan-ρan’s bleeeeeding premodern lagoon. Arrange yerself cannonball-wise or lead with the #gut and you’ll be kersplashing in no times.

    Artificial intelligence as a tool for research and development in European patent law

    Artificial intelligence (“AI”) is increasingly fundamental for research and development (“R&D”). Thanks to its powerful analytical and generative capabilities, AI is arguably changing how we invent. According to several scholars, this finding calls into question the core principles of European patent law, the field of law devoted to protecting inventions. In particular, the AI revolution might have an impact on the notions of “invention”, “inventor”, “inventive step”, and “skilled person”. The present dissertation examines how AI might affect each of those fundamental concepts. It concludes that European patent law is a flexible legal system capable of adapting to technological change, including the advent of AI. First, this work finds that “invention” is a purely objective notion. Inventions consist of technical subject-matter. Whether artificial intelligence had a role in developing the invention is therefore irrelevant as such. Nevertheless, de lege lata, the inventor is necessarily a natural person. There is no room for attributing inventorship to an AI system. In turn, the notion of “inventor” comprises whoever makes an intellectual contribution to the inventive concept. And patent law has always embraced “serendipitous” inventions, those that one stumbles upon by accident. Therefore, at a minimum, the natural person who recognizes an invention developed through AI would qualify as its inventor. By contrast, lacking a human inventor, the right to the patent would not arise at all. Besides, the consensus among scholars is that, de facto, AI cannot invent “autonomously” at the current state of technology. The likelihood of an “invention without an inventor” is thus remote. AI is rather a tool for R&D, albeit a potentially sophisticated one. As for the “skilled person”, this is the average expert in the field, who can rely on the standard tools for routine research and experimentation. Hence, this work finds that if and when AI becomes a “standard” research tool, it should be framed as part of the skilled person. Since AI is an umbrella term for a myriad of different technologies, the assessment of what is truly “standard” for the skilled person, and of what would be considered inventive against that figure, demands a precise case-by-case analysis that takes into account the different AI techniques that exist, the degree of human involvement and skill required to use them, and the crucial relevance of data for many AI tools. However, while AI might cause increased complexity and require adaptations, especially to the inventive step assessment, the fundamental principles of European patent law stand the test of time.

    Chomskyan (R)evolutions

    It is not unusual for contemporary linguists to claim that “Modern Linguistics began in 1957” (with the publication of Noam Chomsky’s Syntactic Structures). Some of the essays in Chomskyan (R)evolutions examine the sources, the nature and the extent of the theoretical changes Chomsky introduced in the 1950s. Other contributions explore the key concepts and disciplinary alliances that have evolved considerably over the past sixty years, such as the meanings given to “Universal Grammar”, the relationship of Chomskyan linguistics to other disciplines (Cognitive Science, Psychology, Evolutionary Biology), and the interactions between mainstream Chomskyan linguistics and other linguistic theories active in the late 20th century: Functionalism, Generative Semantics and Relational Grammar. This broad understanding of the recent history of linguistics points the way towards new directions and methods that the field can pursue in the future.