
    DECIMAL: A requirements engineering tool for product families

    Today, many software organizations are utilizing product families as a way of improving productivity, improving quality and reducing development time. When a new member is added to a product family, there must be a way to verify whether the new member's specific requirements are met within the reuse constraints of its product family. The contribution of this paper is to demonstrate such a verification process by describing a requirements engineering tool called DECIMAL. DECIMAL is an interactive, automated, GUI-driven verification tool that automatically checks for completeness (checking to see if all commonalities are satisfied) and consistency (checking to see if dependencies between variabilities are satisfied) of the new member's requirements with the product family's requirements. DECIMAL also checks that variabilities are within the range and data type specified for the product family. The approach is to perform the verification using a database as the underlying analysis engine. A pilot study of a virtual reality device driver product family is also described which investigates the feasibility of this approach by evaluating the tool.
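The completeness and range checks described above map naturally onto database queries, matching the paper's use of a database as the analysis engine. Below is a minimal sketch of what such checks might look like; the schema, table names, and sample data are illustrative assumptions, not DECIMAL's actual design.

```python
# Hypothetical schema: commonality(requirement), member_req(requirement),
# variability(name, min_val, max_val), member_var(name, value).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE commonality (requirement TEXT);
CREATE TABLE member_req (requirement TEXT);
CREATE TABLE variability (name TEXT, min_val REAL, max_val REAL);
CREATE TABLE member_var (name TEXT, value REAL);
INSERT INTO commonality VALUES ('logging'), ('auth');
INSERT INTO member_req VALUES ('logging'), ('auth'), ('gui');
INSERT INTO variability VALUES ('buffer_kb', 4, 64);
INSERT INTO member_var VALUES ('buffer_kb', 128);
""")

# Completeness: every commonality must appear in the new member's requirements.
missing = con.execute("""
    SELECT requirement FROM commonality
    EXCEPT SELECT requirement FROM member_req
""").fetchall()

# Range check: each variability value must lie within its declared range.
out_of_range = con.execute("""
    SELECT m.name, m.value FROM member_var m
    JOIN variability v ON v.name = m.name
    WHERE m.value < v.min_val OR m.value > v.max_val
""").fetchall()

print(missing)       # empty -> all commonalities satisfied
print(out_of_range)  # buffer_kb violates its declared 4..64 range
```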

    InTune: Reinforcement Learning-based Data Pipeline Optimization for Deep Recommendation Models

    Deep learning-based recommender models (DLRMs) have become an essential component of many modern recommender systems. Several companies are now building large compute clusters reserved only for DLRM training, driving new interest in cost- and time-saving optimizations. The systems challenges faced in this setting are unique; while typical deep learning training jobs are dominated by model execution, the most important factor in DLRM training performance is often online data ingestion. In this paper, we explore the unique characteristics of this data ingestion problem and provide insights into DLRM training pipeline bottlenecks and challenges. We study real-world DLRM data processing pipelines taken from our compute cluster at Netflix to observe the performance impacts of online ingestion and to identify shortfalls in existing pipeline optimizers. We find that current tooling either yields sub-optimal performance, frequent crashes, or else requires impractical cluster re-organization to adopt. Our studies lead us to design and build a new solution for data pipeline optimization, InTune. InTune employs a reinforcement learning (RL) agent to learn how to distribute the CPU resources of a trainer machine across a DLRM data pipeline to more effectively parallelize data loading and improve throughput. Our experiments show that InTune can build an optimized data pipeline configuration within only a few minutes, and can easily be integrated into existing training workflows. By exploiting the responsiveness and adaptability of RL, InTune achieves higher online data ingestion rates than existing optimizers, thus reducing idle times in model execution and increasing efficiency. We apply InTune to our real-world cluster, and find that it increases data ingestion throughput by as much as 2.29X versus state-of-the-art data pipeline optimizers while also improving both CPU and GPU utilization. Comment: Accepted at RecSys 2023. 11 pages plus 2 pages of references, 8 figures, 2 tables.
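The core allocation problem InTune targets can be illustrated with a toy model: given a per-core throughput for each pipeline stage, assign a fixed CPU budget so the slowest stage (the bottleneck) is as fast as possible. InTune learns this with an RL agent; the greedy loop below, which hands each next core to the current bottleneck, is only a stand-in to make the problem concrete. All stage names and rates are illustrative, not taken from the paper.

```python
# Toy bottleneck model of CPU allocation across data-pipeline stages.
def allocate_cores(per_core_rate, total_cores):
    alloc = {s: 1 for s in per_core_rate}          # every stage needs >= 1 core
    for _ in range(total_cores - len(alloc)):
        # Current bottleneck = stage with the lowest aggregate throughput.
        bottleneck = min(alloc, key=lambda s: per_core_rate[s] * alloc[s])
        alloc[bottleneck] += 1
    return alloc

# Hypothetical per-core throughputs (samples/sec/core) for three stages.
stages = {"read": 500.0, "decode": 120.0, "shuffle": 300.0}
alloc = allocate_cores(stages, 16)
print(alloc)  # the slow decode stage absorbs most of the budget
```

A real optimizer also has to cope with rates that shift at runtime, which is where an adaptive RL policy earns its keep over a one-shot greedy plan.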

    Managing Data Replication in Mobile Ad-Hoc Network Databases

    A Mobile Ad-hoc Network (MANET) is a collection of wireless autonomous nodes without any fixed backbone infrastructure. All nodes in a MANET are mobile and power-restricted, and thus disconnection and network partitioning occur frequently. In addition, many MANET database transactions have time constraints. In this paper, a Data REplication technique for real-time Ad-hoc Mobile databases (DREAM) is proposed that addresses all those issues. It improves data accessibility while considering the issue of energy limitation by replicating hot data items at servers that have higher remaining power. It addresses disconnection and network partitioning by introducing new data and transaction types and by considering the stability of wireless links. It handles the real-time transaction issue by replicating data items that are accessed frequently by firm transactions before those accessed frequently by soft transactions. DREAM is prototyped on laptops and PDAs and compared with two existing replication techniques using a military database application. The results show that DREAM performs the best in terms of percentage of successfully executed transactions, servers' and clients' energy consumption, and balance of energy consumption distribution among servers.
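DREAM's replication priorities described above (firm-transaction items before soft ones, hot items first, replicas placed on the servers with the most remaining power) can be sketched as a simple ordering problem. The item records, server list, and `copies` parameter below are hypothetical illustrations, not DREAM's actual data model.

```python
# Sketch of DREAM-style replica planning: order items by (firm before soft,
# then descending access frequency) and place copies on the highest-power servers.
def plan_replication(items, servers, copies=2):
    order = sorted(items, key=lambda i: (i["txn"] != "firm", -i["freq"]))
    powered = sorted(servers, key=lambda s: -s["power"])
    return [(i["name"], [s["name"] for s in powered[:copies]]) for i in order]

items = [
    {"name": "map_tiles", "freq": 40, "txn": "soft"},
    {"name": "orders",    "freq": 25, "txn": "firm"},
    {"name": "positions", "freq": 90, "txn": "firm"},
]
servers = [{"name": "s1", "power": 30}, {"name": "s2", "power": 80},
           {"name": "s3", "power": 55}]
print(plan_replication(items, servers))
```

Firm items jump the queue regardless of frequency, reflecting the paper's real-time ordering; a fuller model would also track link stability and re-plan as server power drains.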

    Effect of amino acid substitutions at the subunit interface on the stability and aggregation properties of a dimeric protein: role of Arg 178 and Arg 218 at the dimer interface of thymidylate synthase

    The significance of two interface arginine residues for the structural integrity of an obligatory dimeric enzyme, thymidylate synthase (TS) from Lactobacillus casei, was investigated by thermal and chemical denaturation. While the R178F mutant showed apparent stability to thermal denaturation by its decreased tendency to aggregate, the Tm of the R218K mutant was lowered by 5°C. Equilibrium denaturation studies in guanidinium chloride (GdmCl) and urea indicate that in both mutants, replacement of the Arg residues results in more labile quaternary and tertiary interactions. Circular dichroism studies in aqueous buffer suggest that the protein interior in R218K may be less well-packed compared to the wild-type protein. The results emphasize that quaternary interactions may influence the stability of the tertiary fold of TS. The amino acid replacements also lead to notable alteration in the ability of the unfolding intermediate of TS to aggregate. The aggregated state of the partially unfolded intermediate in the R178F mutant is stable over a narrower range of denaturant concentrations. In contrast, there is an exaggerated tendency on the part of R218K to aggregate at intermediate concentrations of the denaturant. The 3 Å crystal structure of the R178F mutant reveals no major structural change as a consequence of the amino acid substitution. The results may be rationalized in terms of mutational effects on both the folded and unfolded states of the protein. Site-specific amino acid substitutions are useful in identifying specific regions of TS involved in association of non-native protein structures.

    Influence of sonication on the physicochemical and biological characteristics of selenium-substituted hydroxyapatites

    Although the material hydroxyapatite (HAP) has excellent porous, biocompatible, and biodegradable properties, its mechanical strength and microbial inhibition rate are not adequate for its direct use in bone tissue engineering or in constructing artificial teeth. To overcome some of its limitations, in the present study, we have formed an organic-inorganic composite with an altered internal structure via doping selenium (Se) cations into the lattice of HAP. We have synthesized Se-substituted HAP (Se-HAP) composites with different Se/P ratios (0.01, 0.05, and 0.1 M) via a wet chemical route in which two different sets of samples were collected (1) after only precipitation (referred to as the precipitation method) and (2) after precipitation followed by sonication (referred to as the sonochemical method). FTIR and Raman spectroscopic analyses confirmed the successful doping of Se into the HAP matrices, while powder XRD studies indicated their highly crystalline nature, which was significantly influenced by Se doping. The XRD data also showed that the Se-HAP particles formed by the precipitation method have a size of 56 nm and those formed by the sonochemical method have a size of 29 nm. Morphological analysis by means of SEM and TEM indicated that the sonochemical method produces well-defined rod-shaped particles, while the precipitation method produces particles with agglomerated structures. Hemolytic studies confirmed that the Se-HAP particles are biocompatible, and that the hemolytic ratio increases with the Se content. In addition, antibacterial studies indicated that Se-HAP responds quite well against a Gram-positive strain (S. aureus), on a par with the response to a Gram-negative strain (P. aeruginosa). Finally, in vitro cell viability and proliferation studies indicated an increase in the proliferation capacity of non-cancer cells (NIH-3T3 fibroblasts) and a considerable reduction in the viability of cancer cells (MG-63 osteosarcoma). Based on the overall analysis, the Se-HAP samples formed by the sonochemical approach could have potential for biomedical applications in bone cell repair, growth, and regeneration.

    DECIMAL: a requirements engineering tool for product families

    Today, many software organizations are utilizing product lines as a way of improving productivity, improving quality and reducing development time. When a product family evolves (a new member is added to it), there must be a way to verify whether the new member's specific requirements are met within the reuse constraints of its product family. The contribution of this paper is to demonstrate such a verification process by describing a requirements engineering tool called DECIMAL. DECIMAL is an interactive, automated, GUI-driven verification tool that automatically checks for completeness (checking to see if all commonalities are satisfied) and consistency (checking to see if dependencies between variabilities are satisfied) of the new member's requirements with the product family's requirements. DECIMAL also checks that variabilities are within the range and data type specified for the product family. The approach is to perform the verification using a database as the underlying analysis engine. Finally, a pilot study of a virtual reality device driver product family is described which investigates the feasibility of this approach by evaluating the tool.

    Abstract

    Several algorithms based on link analysis have been developed to measure the importance of nodes on a graph, such as pages on the World Wide Web. PageRank and HITS are the most popular ranking algorithms to rank the nodes of any directed graph. But both these algorithms assign equal importance to all edges and nodes, ignoring the semantically rich information in nodes and edges. Therefore, in the case of a graph containing natural clusters, these algorithms do not differentiate between inter-cluster edges and intra-cluster edges. Based on this observation, we propose a Weighted Inter-Cluster Edge Ranking for clustered graphs that weights edges (by whether each is an inter-cluster or an intra-cluster edge) and nodes (by the number of clusters each connects). We introduce a parameter 'α' which can be adjusted depending on the bias desired in a clustered graph. Our experiments were twofold. We applied our algorithm to a relationship set representing legal entities and documents, and the results indicate the significance of the weighted edge approach. We also generated biased and random walks to quantitatively study the performance.
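The cluster-aware weighting described above can be sketched as an ordinary weighted PageRank in which edges crossing clusters carry weight α while intra-cluster edges carry weight 1, so α > 1 biases the random walk toward bridge edges. The graph, cluster assignment, and α value below are illustrative assumptions, not the paper's experimental setup.

```python
# Weighted PageRank with cluster-dependent edge weights (power iteration).
def weighted_pagerank(edges, cluster, alpha=2.0, d=0.85, iters=100):
    nodes = sorted({u for e in edges for u in e})
    # Inter-cluster edges weighted alpha, intra-cluster edges weighted 1.
    w = {(u, v): (alpha if cluster[u] != cluster[v] else 1.0) for u, v in edges}
    out_w = {u: sum(w[e] for e in edges if e[0] == u) for u in nodes}
    rank = {u: 1.0 / len(nodes) for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / len(nodes) for u in nodes}
        for (u, v), wt in w.items():
            new[v] += d * rank[u] * wt / out_w[u]   # mass split by edge weight
        rank = new
    return rank

edges = [("a", "b"), ("b", "a"), ("b", "c"), ("c", "a")]
cluster = {"a": 0, "b": 0, "c": 1}
ranks = weighted_pagerank(edges, cluster)
print(max(ranks, key=ranks.get))
```

Note this sketch assumes every node has at least one out-edge; a production version would also redistribute the mass of dangling nodes, as standard PageRank implementations do.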