
    The role of interferon‐gamma and its signaling pathway in pediatric hematological disorders

    Interferon‐gamma (IFN‐γ) plays a key role in the pathophysiology of hemophagocytic lymphohistiocytosis (HLH), and available evidence also points to a role in other conditions, including aplastic anemia (AA) and graft failure following allogeneic hematopoietic stem cell transplantation. Recently, the therapeutic potential of IFN‐γ inhibition has been documented; emapalumab, an anti‐IFN‐γ monoclonal antibody, has been approved in the United States for the treatment of primary HLH that is refractory, recurrent or progressive, or in patients with intolerance to conventional therapy. Moreover, ruxolitinib, an inhibitor of JAK/STAT intracellular signaling, is currently being investigated for treating HLH. In AA, IFN‐γ inhibits hematopoiesis by disrupting the interaction between thrombopoietin and its receptor, c‐MPL. Eltrombopag, a small‐molecule agonist of c‐MPL, acts at a binding site different from that of IFN‐γ and is thus able to circumvent its inhibitory effects. Ongoing trials will elucidate the role of IFN‐γ neutralization in secondary HLH, and future studies could explore this strategy for controlling hyperinflammation due to CAR T cells.

    Modeling temporal dimensions of semistructured data

    In this paper we propose an approach to correctly manage valid-time semantics for semistructured temporal clinical information. In particular, we use a graph-based data model to represent radiological clinical data, focusing on the patient model of the well-known DICOM standard, and define the set of (graphical) constraints needed to guarantee that the history of the given application domain is consistent.
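    As a rough illustration of the kind of constraint involved, the sketch below attaches valid-time intervals to the edges of a graph-based patient model and checks that each child element's validity is contained in its parent's (e.g., a Series within its Study). The class and function names are hypothetical and are not taken from the paper or from the DICOM standard.

```python
# Minimal sketch (not the paper's model): valid-time intervals on graph edges,
# with a containment constraint that a child's validity must lie within its parent's.
from dataclasses import dataclass

@dataclass(frozen=True)
class ValidTime:
    start: int  # e.g., days since an epoch
    end: int    # inclusive; use a large sentinel for "now"

    def contains(self, other: "ValidTime") -> bool:
        return self.start <= other.start and other.end <= self.end

@dataclass
class Edge:
    parent: str
    child: str
    vt: ValidTime

def check_temporal_containment(edges):
    """Return pairs (parent_edge, child_edge) that violate containment."""
    by_child = {e.child: e for e in edges}
    violations = []
    for e in edges:
        parent_edge = by_child.get(e.parent)
        if parent_edge and not parent_edge.vt.contains(e.vt):
            violations.append((parent_edge, e))
    return violations

edges = [
    Edge("Patient1", "Study1", ValidTime(0, 100)),
    Edge("Study1", "Series1", ValidTime(10, 120)),  # ends after its Study: flagged
]
print(check_temporal_containment(edges))
```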

    Promoting data provenance tracking in the archaeological interpretation process

    In this paper we propose a model and a set of derivation rules for tracking data provenance during the archaeological interpretation process. The interpretation process is the main task performed by an archaeologist who, starting from ground data about evidence and findings, tries to derive knowledge about an ancient object or event. In particular, in this work we concentrate on the dating process used by archaeologists to assign one or more time intervals to a finding in order to define its lifespan on the temporal axis, and we propose a framework to represent such information and infer new knowledge, including provenance of data. Archaeological data, and in particular their temporal dimension, are typically vague, since many different interpretations can coexist; thus we use Fuzzy Logic to assign a degree of confidence to values and Fuzzy Temporal Constraint Networks to model relationships between the datings of different findings.
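    A minimal sketch of the fuzzy part of this idea, under a deliberately simple representation: each interpretation maps candidate dating intervals to a confidence degree in [0, 1], and two interpretations of the same finding are combined by intersecting intervals and taking the minimum confidence (a t-norm). The names and the combination rule are illustrative, not the paper's model.

```python
# Illustrative only: fuzzy confidence degrees attached to candidate dating
# intervals for a finding, combined with min when two interpretations must both hold.
def combine_interpretations(a: dict, b: dict) -> dict:
    """Each dict maps a candidate interval (start, end) to a confidence in [0, 1].
    Keep overlapping intervals, with the minimum of the two confidences."""
    combined = {}
    for (s1, e1), c1 in a.items():
        for (s2, e2), c2 in b.items():
            s, e = max(s1, s2), min(e1, e2)
            if s <= e:  # the intervals overlap
                combined[(s, e)] = max(combined.get((s, e), 0.0), min(c1, c2))
    return combined

interp_1 = {(-50, 50): 0.9, (100, 150): 0.4}   # first archaeologist's dating
interp_2 = {(0, 120): 0.8}                      # later refinement by a second one
print(combine_interpretations(interp_1, interp_2))
# {(0, 50): 0.8, (100, 120): 0.4}
```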

    Tracking Data Provenance of Archaeological Temporal Information in Presence of Uncertainty

    The interpretation process is one of the main tasks performed by archaeologists who, starting from ground data about evidence and findings, incrementally derive knowledge about ancient objects or events. Very often more than one archaeologist contributes, at different time instants, to discovering details about the same finding, and it is therefore important to keep track of the history and provenance of the overall knowledge discovery process. To this aim, we propose a model and a set of derivation rules for tracking and refining data provenance during the archaeological interpretation process. In particular, among all the possible interpretation activities, we concentrate on the dating that archaeologists perform to assign one or more time intervals to a finding in order to define its lifespan on the temporal axis. In this context, we propose a framework to represent and derive updated provenance data about temporal information after the mentioned derivation process. Archaeological data, and in particular their temporal dimension, are typically vague, since many different interpretations can coexist; thus we use Fuzzy Logic to assign a degree of confidence to values and Fuzzy Temporal Constraint Networks to model relationships between the datings of different findings, represented as a graph-based dataset. The derivation rules used to infer more precise temporal intervals are enriched to also manage provenance information and its updates after a derivation step. A MapReduce version of the path consistency algorithm is also proposed to improve the efficiency of the refining process on big graph-based datasets.
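    For the refinement step, the sketch below shows the classical sequential path-consistency tightening over binary temporal constraints, here expressed as ranges on the time distance between the datings of two findings. The paper's fuzzy and MapReduce formulations are more general; this is only an illustration of the underlying idea.

```python
# Classical sequential path consistency over binary temporal constraints,
# not the paper's fuzzy / MapReduce version.
def path_consistency(n: int, C: dict) -> dict:
    """C[(i, j)] = (lo, hi): allowed range for date(j) - date(i).
    Missing pairs default to an unconstrained range."""
    INF = float("inf")
    get = lambda i, j: C.get((i, j), (-INF, INF))
    changed = True
    while changed:
        changed = False
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    lo_ik, hi_ik = get(i, k)
                    lo_kj, hi_kj = get(k, j)
                    lo_ij, hi_ij = get(i, j)
                    # compose through k, then intersect with the direct constraint
                    lo, hi = max(lo_ij, lo_ik + lo_kj), min(hi_ij, hi_ik + hi_kj)
                    if (lo, hi) != (lo_ij, hi_ij):
                        C[(i, j)] = (lo, hi)
                        changed = True
    return C

# Finding 1 is 10-50 years after finding 0; finding 2 is 0-20 years after finding 1.
constraints = {(0, 1): (10, 50), (1, 2): (0, 20), (0, 2): (0, 100)}
print(path_consistency(3, constraints)[(0, 2)])  # tightened to (10, 70)
```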

    Operational and abstract semantics of the query language G-Log

    The amount and variety of data available electronically have dramatically increased in the last decade; however, data and documents are stored in different ways and do not usually expose their internal structure. In order to take full advantage of the topological structure of digital documents, and particularly web sites, their hierarchical organization should be exploited by introducing a notion of query similar to the one used in database systems. A good approach, in that respect, is the one provided by graphical query languages, originally designed to model object bases and later proposed for semistructured data, like G-Log. The aim of this paper is to provide suitable graph-based semantics to this language, supporting both data structure variability and topological similarity between queries and document structures. A suite of operational semantics based on the notion of bisimulation is introduced both at the concrete level (instances) and at the abstract level (schemata), giving rise to a semantic framework that benefits from the cross-fertilization of tools originally designed in quite different research areas (databases, concurrency, static analysis).
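    To make the central notion concrete, the following is a naive fixpoint check of bisimilarity between two rooted, edge-labelled graphs; it is only a sketch of the standard definition, not the operational or abstract semantics defined in the paper.

```python
# Naive greatest-fixpoint bisimilarity check between two edge-labelled graphs.
def bisimilar(g1: dict, g2: dict, r1, r2) -> bool:
    """g[node] = set of (label, successor) pairs; r1, r2 are the roots."""
    # start from the full relation and drop pairs that fail the transfer condition
    rel = {(a, b) for a in g1 for b in g2}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            forth = all(any(l == m and (x, y) in rel for (m, y) in g2[b])
                        for (l, x) in g1[a])
            back = all(any(l == m and (x, y) in rel for (l, x) in g1[a])
                       for (m, y) in g2[b])
            if not (forth and back):
                rel.discard((a, b))
                changed = True
    return (r1, r2) in rel

g1 = {"p": {("child", "q")}, "q": set()}
g2 = {"u": {("child", "v"), ("child", "w")}, "v": set(), "w": set()}
print(bisimilar(g1, g2, "p", "u"))  # True: both roots offer a 'child' edge to a leaf
```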

    A graph-based meta-model for heterogeneous data management

    The wave of interest in data-centric applications has spawned a wide variety of data models, making it extremely difficult to evaluate, integrate or access them in a uniform way. Moreover, many recent models are too specific to allow immediate comparison with the others and do not easily support incremental model design. In this paper, we introduce GSMM, a meta-model based on the use of a generic graph that can be instantiated to a concrete data model simply by providing values for a restricted set of parameters and some high-level constraints, themselves represented as graphs. In GSMM, the concept of data schema is replaced by that of constraint, which allows the designer to impose structural restrictions on data in a very flexible way. GSMM includes GSL, a graph-based language for expressing queries and constraints which, besides being applicable to data represented in GSMM, can in principle be specialised and used for existing models that lack such a language. We show some sample applications of GSMM for deriving and comparing classical data models such as the relational model, plain XML data, XML Schema, and time-varying semistructured data. We also show how GSMM can represent more recent modelling proposals: triple stores, the BigTable model and Neo4j, a graph-based model for NoSQL data. A prototype showing the potential of the approach is also described.
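    A toy sketch of the "generic graph plus parameters" idea: a concrete model is obtained by restricting which node kinds and edge labels may appear in the graph. The parameter names below are hypothetical and do not reflect GSMM's actual definition.

```python
# Hypothetical illustration: the meta-model is a labelled graph, and a concrete
# model is an instantiation that fixes admissible node kinds and edge labels.
from dataclasses import dataclass, field

@dataclass
class GenericGraph:
    node_kinds: set            # parameter: admissible node kinds
    edge_labels: set           # parameter: admissible edge labels
    nodes: dict = field(default_factory=dict)   # id -> kind
    edges: list = field(default_factory=list)   # (src, label, dst)

    def add_node(self, nid, kind):
        assert kind in self.node_kinds, f"kind {kind!r} not allowed by this instantiation"
        self.nodes[nid] = kind

    def add_edge(self, src, label, dst):
        assert label in self.edge_labels, f"label {label!r} not allowed"
        self.edges.append((src, label, dst))

# Instantiation resembling a relational model: relations, tuples, attribute values.
relational = GenericGraph(node_kinds={"relation", "tuple", "value"},
                          edge_labels={"has_tuple", "attribute"})
relational.add_node("Emp", "relation")
relational.add_node("t1", "tuple")
relational.add_edge("Emp", "has_tuple", "t1")
```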

    Semi-automatic support for evolving functional dependencies

    During the life of a database, systematic and frequent violations of a given constraint may suggest that the represented reality is changing and thus that the constraint should evolve with it. In this paper we propose a method and a tool to (i) find the functional dependencies that are violated by the current data, and (ii) support their evolution when it is necessary to update them. The method relies on the use of confidence, a measure associated with each dependency that allows us to understand “how far” the dependency is from correctly describing the current data, and of goodness, a measure of the balance between the data satisfying the antecedent of the dependency and those satisfying its consequent. Our method compares favorably with literature that approaches the same problem in a different way, and performs effectively and efficiently, as shown by our tests on both real and synthetic databases.
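    One common way to quantify such a confidence is sketched below: for a functional dependency X → Y, group the tuples by X, keep the most frequent Y-value in each group, and divide the number of kept tuples by the table size. The paper's exact definitions of confidence and goodness may differ; this is only an illustration.

```python
# Sketch of one way to measure how far a table is from satisfying X -> Y.
from collections import Counter, defaultdict

def fd_confidence(rows, X, Y):
    groups = defaultdict(Counter)
    for row in rows:
        x_val = tuple(row[a] for a in X)
        y_val = tuple(row[a] for a in Y)
        groups[x_val][y_val] += 1
    # for each X-group, keep only the tuples with the most frequent Y-value
    kept = sum(counts.most_common(1)[0][1] for counts in groups.values())
    return kept / len(rows)

rows = [
    {"zip": "37100", "city": "Verona"},
    {"zip": "37100", "city": "Verona"},
    {"zip": "37100", "city": "Verrona"},   # a violation of zip -> city
    {"zip": "20100", "city": "Milano"},
]
print(fd_confidence(rows, ["zip"], ["city"]))  # 0.75
```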

    Top-N recommendations on Unpopular Items with Contextual Knowledge

    Traditional recommender systems provide recommendations of items to users; recently, some of them also consider the context in which predictions are made. In this paper we propose a technique that relies on classical recommendation algorithms and post-filters their recommendations on the basis of the available contextual information. Association rules are exploited to identify the most significant correlations between context and item characteristics. The mined rules are used to filter the predictions produced by traditional recommender systems and provide contextualized recommendations. Our experimental results show that the proposed approach improves the output of classical algorithms from the literature, especially in the case of unpopular items.
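    The post-filtering idea can be sketched as follows: a base recommender produces a ranked list, and association rules of the form context → item feature (mined offline) are used to keep and boost the items whose features match the current context. All names and the boosting scheme are illustrative, not the paper's algorithm.

```python
# Illustrative post-filtering sketch; rule mining (e.g., Apriori) is assumed done offline.
def post_filter(ranked_items, item_features, rules, context, min_conf=0.6):
    """ranked_items: [(item_id, score), ...] from a classical recommender.
    rules: [({context attrs}, item feature, confidence), ...]."""
    applicable = [(feat, conf) for ctx, feat, conf in rules
                  if ctx <= context and conf >= min_conf]
    filtered = []
    for item, score in ranked_items:
        # boost items whose features are supported by rules firing in this context
        boost = sum(conf for feat, conf in applicable if feat in item_features[item])
        if boost > 0:
            filtered.append((item, score * (1 + boost)))
    return sorted(filtered, key=lambda p: p[1], reverse=True)

rules = [({"weekend"}, "outdoor", 0.8), ({"rainy"}, "indoor", 0.7)]
item_features = {"museum": {"indoor"}, "park": {"outdoor"}, "cinema": {"indoor"}}
ranked = [("park", 0.9), ("museum", 0.6), ("cinema", 0.5)]
print(post_filter(ranked, item_features, rules, context={"weekend", "rainy"}))
```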

    CoPart: a context-based partitioning technique for big data

    The MapReduce programming paradigm is frequently used to process and analyse huge amounts of data. This paradigm relies on the ability to apply the same operation in parallel to independent chunks of data. As a consequence, overall performance greatly depends on the way data are partitioned among the various computation nodes. The default partitioning technique provided by systems like Hadoop or Spark performs an essentially random subdivision of the input records, without considering their nature or the correlations between them. While such an approach can be appropriate in the simplest case, where all input records must always be analyzed, it becomes a limitation for more sophisticated analyses, in which correlations between records can be exploited to prune unnecessary computations in advance. In this paper we design a context-based multi-dimensional partitioning technique, called COPART, which takes data correlation into account in order to determine how records are subdivided between splits (i.e., units of work assigned to a computation node). More specifically, it considers not only the correlation of data w.r.t. contextual attributes, but also the distribution of each contextual dimension in the dataset. We experimentally compare our approach with existing ones, considering both quality criteria and query execution times.
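    A toy example of the underlying idea (not the COPART algorithm itself): instead of hashing records randomly, records are routed to splits through a grid over their contextual attributes, so that a query with a contextual predicate can skip entire splits.

```python
# Context-based partitioning in miniature: route records to splits by a grid
# over spatial and temporal attributes instead of a random hash.
def grid_partition(records, cell_size=10.0, hours_per_bucket=6):
    """records: [(x, y, hour, payload), ...]; returns {split_key: [records]}."""
    splits = {}
    for x, y, hour, payload in records:
        key = (int(x // cell_size), int(y // cell_size), hour // hours_per_bucket)
        splits.setdefault(key, []).append((x, y, hour, payload))
    return splits

records = [(3.2, 7.9, 2, "a"), (4.1, 8.0, 3, "b"), (55.0, 7.5, 14, "c")]
splits = grid_partition(records)
# A query about the early morning of cell (0, 0) only reads one split:
print(splits[(0, 0, 0)])  # [(3.2, 7.9, 2, 'a'), (4.1, 8.0, 3, 'b')]
```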
