
    Prediction, detection, and correction of misunderstandings in interactive tasks

    Technology has brought all kinds of devices and software into our lives. Advances in GPS, Virtual Reality, and wearable computers with increased computing power and Internet connectivity open the doors for interactive systems that were considered science fiction less than a decade ago, and that are capable of guiding us in a variety of environments. This increased accessibility brings with it an increase both in the scale of problems that can be realistically tackled and in the capabilities that we expect from such systems. Indoor navigation is an example of such a task: although guiding a car is a solved problem, guiding humans, for instance inside a museum, is much more challenging. Unlike cars, pedestrians use landmarks rather than absolute distances. They must discriminate among a larger number of distractors, and they expect sentences of higher complexity than those appropriate for a car driver. A car driver prefers short, simple instructions that do not distract them from traffic. A tourist inside a museum, on the contrary, can afford the mental effort that a detailed grounding process requires. Both car and indoor navigation are specific examples of a wider family of collaborative tasks known as “Instruction Following”. In these tasks, agents with the two clearly defined roles of Instruction Giver and Instruction Follower must cooperate to achieve a joint objective. The former has access to all required information about the environment, including (but not limited to) a detailed map of the environment, a clear list of objectives, and a profound understanding of the effect that specific actions have on the environment. The latter is tasked with following the instructions, interacting with the environment, and moving the undertaking forward. It is then the Instruction Giver’s responsibility to devise a detailed plan of action, segment it into smaller subgoals, and present instructions to the Instruction Follower in language that is clear and understandable.
No matter how carefully crafted the Instruction Giver’s utterances are, it is expected that misunderstandings will take place. Although some of these misunderstandings are easy to detect and repair, others can be very difficult or even impossible to solve. It is therefore important for the Instruction Giver to generate instructions that are as clear as possible, to detect misunderstandings as early as possible, and to correct them in the most effective way. This thesis introduces several algorithms and strategies designed to tackle the aforementioned problems from end to end, presenting the individual aspects of a system that successfully predicts, detects, and corrects misunderstandings in interactive Instruction Following tasks. We focus on one particular type of instruction: those involving Referring Expressions. A Referring Expression identifies a single object out of many, such as “the red button” or “the tall plant”. Generating Referring Expressions is a key component of Instruction Following tasks, since any kind of object manipulation is likely to require a description of the object. Due to its importance and complexity, this is one of the most widely studied areas of Natural Language Generation. In this thesis we use Semantically Interpreted Grammars, an approach that integrates both Referring Expression Generation (identifying which properties are required for a unique description) and Surface Realization (combining those properties into a concrete Noun Phrase). The complexity of performing, recording, and analyzing Instruction Following tasks in the real world is one of the major challenges of Instruction Following research.
In order to simplify both the development of new algorithms and the access to those results by the research community, our work is evaluated in what we call a Virtual Environment—an environment that mimics the main aspects of the real world and abstracts away distractions, while preserving enough characteristics of the real world to be useful for research. Selecting the appropriate virtual environment for a research task ensures that results will be applicable in the real world. We have selected the Virtual Environment of the GIVE Challenge, an environment designed for an Instruction Following task in which a human Instruction Follower is paired with an automated Instruction Giver in a maze-like 3D world. Completing the task requires navigating the space, avoiding alarms, interacting with objects, generating instructions in Natural Language, and preventing mistakes that can bring the task to a premature end. Even under these simplified conditions, the task presents several computational challenges: performing these tasks in real time requires fast algorithms, and ensuring the efficiency of our approaches remains a priority at every step. Our first experimental study identifies the most challenging types of mistakes that our system is expected to find. Creating an Instruction Following system that leverages previously recorded human data and follows instructions using a simple greedy algorithm, we clearly separate those situations for which no further study is warranted from those that are of interest for our research. We test our algorithm with similarity metrics of varying complexity, ranging from overlap measures such as Jaccard and edit distances to advanced machine learning algorithms such as Support Vector Machines. The best-performing algorithms not only achieve good accuracy; we show, in fact, that their mistakes are highly correlated with situations that are also challenging for human annotators.
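As an illustration of the overlap measures named above (a sketch, not the thesis's actual implementation), the two simplest metrics can be written in a few lines; the instruction strings below are invented examples:

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over word sets: |A ∩ B| / |A ∪ B| (1.0 for two empty strings)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def edit_distance(a: str, b: str) -> int:
    """Levenshtein (edit) distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

jaccard("press the red button", "the red button")  # word overlap, order-insensitive
edit_distance("kitten", "sitting")                 # character-level distance
```

Jaccard ignores word order entirely, while edit distance penalizes every character-level change; cheap measures like these are worth trying before heavier machine learning models such as the Support Vector Machines mentioned above.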
Going a step further, we also study the type of improvement that can be expected from our system if we give it the chance to retry after a mistake has been made. This system has no prior beliefs about which actions are more likely to be selected next, and our results make a good case that this assumption is one of its weakest points. Moving away from a paradigm where all actions are considered equally likely, and towards a model in which the Instruction Follower’s own actions are taken into account, our subsequent step is the development of a system that explicitly models the listener’s understanding. Given an instruction containing a Referring Expression, we approach the Instruction Follower’s understanding of it with a combination of two probabilistic models. The Semantic model uses features of the Referring Expression to identify which object is more likely to be selected: if the instruction mentions a red button, it is unlikely that the Instruction Follower will select a blue one. The Observational model, on the other hand, predicts which object will be selected by the Instruction Follower based on their behavior: if the user is walking straight towards a specific object, it is very likely that this object will be selected. These two log-linear, probabilistic models were trained with recorded human data from the GIVE Challenge, resulting in a model that can effectively predict that a misunderstanding is about to take place several seconds before it actually happens. Using our Combined model, we can easily detect and predict misunderstandings: if the Instruction Giver tells the Instruction Follower to “click the red button”, and the Combined model detects that the Instruction Follower will select a blue one, we know that a misunderstanding took place, we know what the misunderstood object is, and we know both facts early enough to generate a correction that will stop the Instruction Follower from making the mistake in the first place.
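The core idea of combining two probability distributions over candidate objects can be sketched as follows; this is a hypothetical simplification (the probabilities, object names, and uniform weights are all invented), not the trained model from the thesis:

```python
import math

def combine(p_semantic: dict, p_observational: dict,
            w_sem: float = 1.0, w_obs: float = 1.0) -> dict:
    """Log-linear combination: P(o) ∝ P_sem(o)^w_sem * P_obs(o)^w_obs, renormalized."""
    scores = {o: w_sem * math.log(p_semantic[o]) + w_obs * math.log(p_observational[o])
              for o in p_semantic}
    z = sum(math.exp(s) for s in scores.values())
    return {o: math.exp(s) / z for o, s in scores.items()}

# "click the red button": the semantics favour red_button, but the follower
# is walking straight towards blue_button -- the combined model flags the mismatch.
p = combine({"red_button": 0.8, "blue_button": 0.2},
            {"red_button": 0.1, "blue_button": 0.9})
```

In this invented example the combined distribution favours "blue_button", so a system comparing that prediction against the instructed referent could raise a misunderstanding alarm before the wrong object is selected.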
A follow-up study extends the Observational model by introducing features based on the gaze of the Instruction Follower. Gaze has been shown to correlate with human attention, and our study explores whether gaze-based features can improve the accuracy of the Observational model. Using previously collected data from the GIVE Environment in which gaze was recorded with eye-tracking equipment, the resulting Extended Observational model improves the accuracy of predictions in challenging scenes where the number of distractors is high. Having a reliable method for the detection of misunderstandings, we turn our attention towards corrections. A corrective Referring Expression is one designed not only to identify a single object out of many, but to re-identify an object that was previously identified incorrectly. The simplest possible corrective Referring Expression is repetition: if the user misunderstood the expression “the red button” the first time, it is possible that they will understand it correctly the second time. A smarter approach, however, is to reformulate the Referring Expression in a way that makes it easier for the Instruction Follower to understand. We designed and evaluated two different strategies for the generation of corrective feedback. The first of these strategies exploits the pragmatic concept of a Context Set, according to which human attention can be segmented into objects that are being attended to (that is, those inside the Context Set) and those that are ignored. According to our theory, we could virtually ignore all objects outside the Context Set and generate Referring Expressions that would not be uniquely identifying with respect to the entire context, but would still be identifying enough for the Instruction Follower. As an example, if the user is undecided between a red button and a blue one, we could generate the Referring Expression “the red one” even if there are other red buttons in the scene that the user is not paying attention to.
Using our probabilistic models as a measure of which elements to include in the Context Set, we modified our Referring Expression Generation algorithm to build sentences that explicitly account for this behavior. We performed experiments in the GIVE Challenge Virtual Environment, crowdsourcing the data collection process, with mixed results: even if our definition of a Context Set were correct (a point that our results can neither confirm nor deny), our strategy generates Referring Expressions that prevent some mistakes but are in general harder to understand than those of the baseline approach. The results are presented along with an extensive error analysis of the algorithm. They imply that corrections can cause the Instruction Follower to re-evaluate the entire situation in a new light, making our previous definition of the Context Set impractical. Our approach also fails at identifying previously grounded referents, compounding the number of pragmatic effects that conspire against this approach. The second strategy for corrective feedback consists of adding Contrastive Focus to a second, corrective Referring Expression. In a scenario in which the user receives the Referring Expression “the red button” and yet mistakenly selects a blue one, an approach with contrastive focus would generate “no, the RED button” as a correction. Such a Referring Expression makes it clear to the Instruction Follower that, on the one hand, their selection of an object of type “button” was correct, and that, on the other hand, it is the property “color” that needs re-evaluation. In our approach, we model a misunderstanding as a noisy channel corruption: the Instruction Giver generates a correct Referring Expression for a given object, but it is corrupted in transit and reaches the Instruction
Follower in the form of an altered, incorrect Referring Expression. We correct this misconstrual by generating a new, corrective Referring Expression: starting from the original Referring Expression and the misunderstood object, we identify the constituents of the Referring Expression that were corrupted and place contrastive focus on them. Our hypothesis states that the minimum edit sequence between the original and the misunderstood Referring Expression correctly identifies the constituents requiring contrastive focus, a claim that we verify experimentally. We perform crowdsourced preference tests over several variations of this idea, evaluating Referring Expressions that either present contrast side by side (as in “no, not the BLUE button, the RED button”) or attempt to remove redundant information (as in “no, the RED one”). We evaluate our approaches using both simple scenes from the GIVE Challenge and more complicated ones showing pictures from the more challenging TUNA people corpus. Our results show that human users significantly prefer our most straightforward contrastive algorithm. In addition to detailing models and strategies for misunderstanding detection and correction, this thesis also includes practical considerations that must be taken into account when dealing with tasks similar to those discussed here. We pay special attention to Crowdsourcing, a practice in which data about tasks can be collected from participants all over the world at a lower cost than traditional alternatives. Researchers interested in using crowdsourced data must often deal both with unmotivated players and with players whose main motivation is to complete as many tasks as possible in the least amount of time.
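As a rough sketch of the minimum-edit-sequence idea described above, a word-level diff between the intended and the misunderstood Referring Expression can mark the constituents that need focus; this simplified version uses Python's difflib rather than the thesis's actual algorithm, and the function name is invented:

```python
from difflib import SequenceMatcher

def contrastive_correction(intended: str, misunderstood: str) -> str:
    """Upper-case (focus) the words of the intended RE that the edit sequence
    says were substituted or dropped in the misunderstood RE."""
    a, b = intended.split(), misunderstood.split()
    out = []
    for op, i1, i2, j1, j2 in SequenceMatcher(a=a, b=b).get_opcodes():
        # words shared by both expressions keep their form; corrupted
        # constituents receive contrastive focus
        out.extend(w.upper() if op in ("replace", "delete") else w
                   for w in a[i1:i2])
    return "no, " + " ".join(out)

print(contrastive_correction("the red button", "the blue button"))
# -> no, the RED button
```

Here the edit sequence is equal("the"), replace("red"/"blue"), equal("button"), so only the colour word is focused, matching the “no, the RED button” pattern from the example above.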
Designing a crowdsourced experiment requires a multifaceted approach: the task must be designed in such a way as to motivate honest players, discourage cheating, implement technical measures that detect bad data, and prevent undesired behavior by looking at the entire pipeline with a Security mindset. We dedicate a Chapter to this issue, presenting a full example that will undoubtedly be of help for future research. We also include sections dedicated to the theory behind our implementations. Background literature includes the pragmatics of dialogue, misunderstandings, and focus, the link between gaze and visual attention, the evolution of approaches towards Referring Expression Generation, and reports on the motivations of crowdsourced workers that borrow from fields such as psychology and economics. This background contextualizes our methods and results with respect to wider fields of study, enabling us to explain not only that our methods work but also why they work. We finish our work with a brief overview of future areas of study. Research on the prediction, detection, and correction of misunderstandings in a multitude of environments is already underway. With the introduction of more advanced virtual environments, with modern spoken, dialogue-based tools revolutionizing the market of home devices, and with computing power and data readily available, we expect that the results presented here will prove useful for researchers in several areas of Natural Language Processing for many years to come.

    Grammars for generating isiXhosa and isiZulu weather bulletin verbs

    The Met Office has investigated the use of natural language generation (NLG) technologies to streamline the production of weather forecasts. Their approach would be of great benefit in South Africa, because there is no fast and large-scale producer, automated or otherwise, of textual weather summaries for Nguni languages. This is due to, among other things, the complexity of Nguni languages. The structure of these languages is very different from that of Indo-European languages, and therefore we cannot reuse existing technologies that were developed for the latter group. Traditional NLG techniques such as templates are not compatible with 'Bantu' languages, and existing works that document scaled-down 'Bantu' language grammars are also not sufficient to generate weather text. To generate weather text in isiXhosa and isiZulu, we restricted our text to verbs only in order to ensure a manageable scope. In particular, we developed a corpus of weather sentences in order to determine verb features. We then created context-free verbal grammar rules using an incremental approach. The quality of these rules was evaluated by two linguists. We then investigated the grammatical similarity of isiZulu verbs with their isiXhosa counterparts, and the extent to which a single merged set of grammar rules can be used to produce correct verbs for both languages. The similarity analysis of the two languages was done through the developed rules' parse trees, and by applying binary similarity measures to the sets of verbs generated by the rules. The parse trees show that the differences between the verbs' components are minor, and the similarity measures indicate that the verb sets are at most 59.5% similar (Driver-Kroeber metric). We also examined the importance of the phonological conditioning process by developing functions that calculate the ratio of verbs that will require conditioning out of the total strings that can be generated.
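The Driver-Kroeber metric mentioned above (also known as the Ochiai coefficient) scores two sets as a / sqrt((a+b)(a+c)), where a counts shared elements and b and c count the elements unique to each set, which simplifies to a / sqrt(|X||Y|). A minimal sketch, with invented placeholder strings standing in for generated verbs:

```python
import math

def driver_kroeber(verbs_x: set, verbs_y: set) -> float:
    """Driver-Kroeber / Ochiai binary similarity between two generated verb sets:
    shared count divided by the geometric mean of the set sizes."""
    if not verbs_x or not verbs_y:
        return 0.0
    shared = len(verbs_x & verbs_y)                     # a
    return shared / math.sqrt(len(verbs_x) * len(verbs_y))  # a / sqrt((a+b)(a+c))

# placeholder sets, not real isiXhosa/isiZulu output
driver_kroeber({"v1", "v2", "v3"}, {"v2", "v3", "v4"})
```

Because the denominator is the geometric mean of the two set sizes, the score stays between 0 and 1 even when the generated verb sets differ in size.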
We have found that the phonological conditioning process affects at least 45% of strings for isiXhosa, and at least 67% of strings for isiZulu, depending on the type of verb root that is used. Overall, this work shows that the differences between isiXhosa and isiZulu verbs are minor. However, exploiting these similarities to create a unified rule set for both languages cannot be done without significant maintainability compromises, because there are dependencies between the verb's 'modules' that exist in one language and not the other. Furthermore, the phonological conditioning process should be implemented in order to improve the generated text, due to the high ratio of verbs it affects.

    Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009)


    EIB-Information 3 - 1997 No. 93


    Supporting process model validation through natural language generation

    The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill sets of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of the process models that system analysts create, validation often has to fall back on a discourse in natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.

    A Study Towards Spanish Abstract Meaning Representation

    Taking into account the increasing attention that researchers in Natural Language Understanding (NLU) and Natural Language Generation (NLG) are paying to Computational Semantics, we analyze the feasibility of annotating Spanish Abstract Meaning Representations. The Abstract Meaning Representation (AMR) project aims to create a large-scale sembank of simple structures that represent unified, complete semantic information contained in English sentences. Although AMR is not intended to be an interlingua, one of its key features is its ability to focus on events rather than on word forms. It does this, for instance, by abstracting away from morpho-syntactic idiosyncrasies. In this thesis, based on the premise that many of these idiosyncrasies mark differences between languages, we investigate the requirements for annotating Spanish AMRs and put forward a proposal for doing so. To our knowledge, this is the first work towards the development of Abstract Meaning Representation for Spanish.

    Bridging the gap between textual and formal business process representations

    Thesis presented as a compendium of publications. In the era of digital transformation, an increasing number of organizations are starting to think in terms of business processes. Processes are at the very heart of each business, and must be understood and carried out by a wide range of actors, from both technical and non-technical backgrounds alike. When embracing digital transformation practices, there is a need for all involved parties to be aware of the underlying business processes in an organization. However, the representational complexity and biases of the state-of-the-art modeling notations pose a challenge to understandability. On the other hand, plain language representations, accessible by nature and easily understood by everyone, are often frowned upon by technical specialists due to their ambiguity. The aim of this thesis is precisely to bridge this gap: between the world of technical, formal languages and the world of simpler, accessible natural languages. Structured as an article compendium, in this thesis we present four main contributions addressing specific problems at the intersection of natural language processing and business process management.

    Contexts and Contributions: Building the Distributed Library

    This report updates and expands on A Survey of Digital Library Aggregation Services, originally commissioned by the DLF as an internal report in summer 2003, and released to the public later that year. It highlights major developments affecting the ecosystem of scholarly communications and digital libraries since the last survey and provides an analysis of OAI implementation demographics, based on a comparative review of repository registries and cross-archive search services. Secondly, it reviews the state of practice for a cohort of digital library aggregation services, grouping them in the context of the problem space to which they most closely adhere. Based in part on responses collected in fall 2005 from an online survey distributed to the original core services, the report investigates the purpose, function, and challenges of next-generation aggregation services. On a case-by-case basis, the advances in each service are of interest in isolation from each other, but the report also attempts to situate these services in a larger context and to understand how they fit into a multi-dimensional and interdependent ecosystem supporting the worldwide community of scholars. Finally, the report summarizes the contributions of these services thus far and identifies obstacles requiring further attention to realize the goal of an open, distributed digital library system.

    An investigation into expression of hydrogen sulfide synthesising enzymes in placentas from normal and complicated pregnancies

    Hydrogen sulfide (H2S) has recently attracted substantial interest as an endogenous gaseous signalling molecule. Like nitric oxide (NO) and carbon monoxide (CO), it promotes vasodilation and exhibits cytoprotective and anti-inflammatory properties. It is involved in diverse physiological and pathophysiological processes such as neurogenesis, regulation of blood pressure, atherosclerosis, and inflammation. Endogenous H2S is synthesised predominantly by three enzymes: cystathionine β-synthase (CBS), cystathionine γ-lyase (CSE), and 3-mercaptopyruvate sulfurtransferase (3-MST). Recently, the endogenous production of H2S in human myometrium and placental tissues, and its role in the pathophysiology of pregnancies complicated by pre-eclampsia (PE) and fetal growth restriction (FGR), has been studied. In PE, inadequate remodelling of the spiral arteries by trophoblasts causes an ischemia-reperfusion insult to the placenta, which is one source of oxidative stress, and it causes reductions in utero-placental blood flow, often resulting in FGR. The limited research undertaken to study the expression of H2S-synthesising enzymes in normal and abnormal placental tissues is characterised by contradictory findings on the mRNA and protein expression of these enzymes. All previous studies used a random placental sampling method, which could mask spatial variation in gene expression. Therefore, this study aimed to examine the expression of these enzymes using a systematic sampling method in which placental samples were taken from different identified zones of each placenta. The expression of these enzymes was studied at both the mRNA and protein levels using quantitative polymerase chain reaction (qPCR) and western blotting techniques. Extensive testing of several anti-CBS and anti-CSE antibodies for western blotting revealed persistent non-specific binding of these antibodies to multiple unidentified proteins in all sample types.
The antibodies tested included all those used in previous western blotting and in situ studies. Those publications showed only high-quality images of CBS and CSE bands and lacked any supplementary data on antibody specificity testing, raising concerns about the previous research findings and identifying another potential source of the inconsistency in the conclusions they reached. CRISPR knockout clones of each protein were generated in two different cell lines to test the specificity of these antibodies, which confirmed that the antibodies did detect proteins of the expected size. This enabled identification of the correct CBS and CSE bands for studying expression at the protein level in the comparative groups. Additionally, these clones identified isoforms not predicted by gene databases, allowed retrospective specificity testing of the in situ procedures used, and revealed an intriguing regulation of CBS and CSE by 3-MST which may be of relevance to the placenta. The spatial expression of CBS, CSE and 3-MST was examined in normal placentas obtained from women who delivered by caesarean section (CS) and were not in labour, in placentas from women who delivered spontaneously, and in placentas from women with pregnancies complicated by PE, FGR or high body mass index (BMI). The study showed significant spatial differences in the expression of CSE and 3-MST, with an up-regulation in the labour group compared to the non-labour group at a particular placental site. It also showed a significant increase in 3-MST mRNA and protein abundance in the FGR group compared to the healthy control group at the outer placental site. Only CBS and CSE mRNA abundances were significantly increased in PE compared to healthy controls, at the inner and middle placental sites, respectively. 
The spatial differences in gene expression in labour or complicated pregnancies at precise zones suggest a controlled spatial change in expression, or in susceptibility to change, which may be due to the vascular biology of the placenta. The physiological and pathological significance of these differences remains to be elucidated, but oxidative stress and inflammatory pathways are the common links. Also, the reduction in CBS and CSE protein abundance in the absence of 3-MST may suggest a complex of regulatory mechanisms acting on these enzymes. Taken together, these results suggest that H2S is involved in labour and in the pathophysiology of PE and FGR. However, further investigation with highly specific antibodies is required, especially as the data in this study showed significant differences in expression between controls and the targeted groups. To conclude, the present study highlights the possibility that ordinary placental sampling methods may mask altered expression of some genes, and therefore represents a further step toward developing a systematic sampling method for placental research. In addition, the present study illustrates how simple CRISPR knockout technology can help to test the specificity of primary antibodies, and presents a real example of how incomplete reporting of research antibodies compromises the reproducibility of research results. Furthermore, it emphasises the importance of full documentation of experimental procedures, including the data supporting antibody specificity validation, to help researchers collaborate and build on each other's work.