30 research outputs found

    An Improved & Adaptive Software Development Methodology

    Software development methods have evolved considerably since the field began. From the original waterfall model to current agile methodologies, every approach still has drawbacks, and software delivery therefore remains a challenging, labor-intensive task. In this paper, we propose a new software development methodology that is easy to implement and that helps software development companies achieve secure and robust software releases. The proposed SDLC process is known as 4A. Empirical results show that the proposed methodology is more adaptive and flexible for developers and project managers.

    Data Lakes for Digital Humanities

    Traditional data in Digital Humanities projects come in various formats (structured, semi-structured, textual) and need substantial transformations (encoding and tagging, stemming, lemmatization, etc.) to be managed and analyzed. To fully master this process, we propose the use of data lakes as a solution to data siloing and big data variety problems. We describe data lake projects we currently run in close collaboration with researchers in humanities and social sciences and discuss the lessons learned running these projects. Comment: Data and Digital Humanities Track
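    The transformations this abstract lists (encoding, stemming, lemmatization) are the kind of normalization a data lake ingestion step would apply to textual sources. A minimal sketch of such a step is below; the class name is hypothetical, and the suffix rule is a crude stand-in for a real stemmer or lemmatizer.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of a text-normalization step in a data lake
// ingestion pipeline: lowercase, tokenize, and apply a naive stem rule.
// A production pipeline would use a proper stemmer or lemmatizer.
public class TextNormalizer {
    public static List<String> normalize(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .map(TextNormalizer::naiveStem)
                .collect(Collectors.toList());
    }

    // Crude stand-in for stemming: drop a trailing "s" from longer words.
    private static String naiveStem(String word) {
        return word.length() > 3 && word.endsWith("s")
                ? word.substring(0, word.length() - 1)
                : word;
    }
}
```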

    A Java Graphical User Interface for Large-Scale Scientific Computations in Distributed Systems

    Large-scale scientific applications present great challenges to computational scientists in terms of obtaining high performance and managing large datasets. These applications (most of which are simulations) may employ multiple techniques and resources in a heterogeneously distributed environment. Working effectively in such an environment is crucial for modern large-scale simulations. In this paper, we present an integrated Java graphical user interface (IJ-GUI) that provides a control platform for easily managing complex programs and their large datasets. As far as performance is concerned, we present and evaluate our initial implementation of two optimization schemes: data replication and data prediction. Data replication can take advantage of 'temporal locality' by caching the remote datasets on local disks; data prediction, on the other hand, provides prefetch hints based on the datasets' past activities that are kept in databases. We first introduce the data contiguity concept in such an environment that guides data prediction. The relationship between the two approaches is discussed.
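    The replication scheme described above, caching remote datasets on local disk to exploit temporal locality, can be sketched as a bounded LRU cache. The class and method names here are illustrative assumptions, not the IJ-GUI API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a local dataset cache exploiting temporal
// locality: recently used datasets stay local, the least-recently-used
// one is evicted when capacity is exceeded.
public class DatasetCache {
    private final int capacity;
    private final LinkedHashMap<String, byte[]> cache;

    public DatasetCache(int capacity) {
        this.capacity = capacity;
        // accessOrder = true makes iteration order reflect recency of use
        this.cache = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > DatasetCache.this.capacity;
            }
        };
    }

    // Return the dataset from the local cache, fetching remotely on a miss.
    public byte[] get(String datasetId) {
        return cache.computeIfAbsent(datasetId, this::fetchRemote);
    }

    public boolean isCached(String datasetId) {
        return cache.containsKey(datasetId);
    }

    // Placeholder for a remote transfer; a real system would copy the file.
    private byte[] fetchRemote(String datasetId) {
        return datasetId.getBytes();
    }
}
```

    A prefetch scheme like the paper's data prediction would call `get` ahead of time for datasets whose past access patterns suggest they will be needed soon.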

    based on K-water case

    Thesis (Master) -- KDI School: Master of Public Management, 2020.
    1. Introduction
    2. Big data in water resources sector
    3. Big data Policy and Technology Trends
    4. Big data in water resources
    5. Changes of the Data Usage in the Big Data Era
    6. In the Big Data Era, the limitations of traditional systems development and management
    7. Development Method and Governance in Big Data System
    8. Big Data Operation Case in K-watermaste

    NoSQL Storage Systems – A Review

    NoSQL systems have grown in popularity for storing big data because these systems offer high availability, i.e., operations with high throughput and low latency. However, metadata in these systems is handled today in ad-hoc ways. We present Wasef, a system that treats metadata in a NoSQL database system as first-class citizens. Metadata may include information such as: operational history for a database table (e.g., columns), placement information for ranges of keys, and operational logs for data items (key-value pairs). Wasef allows the NoSQL system to store and query this metadata efficiently. We integrate Wasef into Apache Cassandra, one of the most popular key-value stores.