GOES-R Algorithms: A Common Science and Engineering Design and Development Approach for Delivering Next Generation Environmental Data Products
GOES-R, the next generation of the National Oceanic and Atmospheric Administration's (NOAA) Geostationary Operational Environmental Satellite (GOES) System, represents a new technological era in operational geostationary environmental satellite systems. GOES-R will provide advanced products that describe the state of the atmosphere, land, oceans, and solar/space environments over the western hemisphere. The Harris GOES-R Ground Segment team will provide the software, based on government-supplied algorithms, and the engineering infrastructure designed to produce and distribute these next-generation data products. The Harris GOES-R team has adopted an integrated applied science and engineering approach that combines rigorous systems engineering methods with modern software design elements to facilitate the transition of algorithms for Level 1 and 2+ products to operational software. The Harris team's GOES-R GS algorithm framework, which includes a common data model interface, provides general design principles and standardized methods for developing general algorithm services, interfacing to external data, generating intermediate, L1b, and L2 products, and implementing common algorithm features such as metadata generation and error handling.
This work presents the suite of GOES-R products, their properties, and the process by which the related requirements are maintained during the complete design/development life cycle. It also describes the algorithm architecture and engineering approach that will be used to deploy these algorithms, provides a preliminary implementation road map for the development of the GOES-R GS software infrastructure, and gives a view into the integration of the framework and data model into the final design.
A component-based product line architecture for workflow management systems
This paper presents a component-based product line for workflow management systems. The process followed to design the product line was based on the Catalysis method, with extensions made to represent variability across the process. The domain of workflow management systems has been shown to be appropriate for the product line approach, as there are a standard architecture and models established by a regulatory board, the Workflow Management Coalition. In addition, there is demand for similar workflow management systems that differ in some features. The product line architecture was evaluated with Rapide simulation tools. The evaluation was based on selected scenarios, thus avoiding implementation issues. The strategy that has been used to populate the architecture and experiment with the product line is shown. In particular, the design of the workflow execution manager component is described.
Management of e-technology in China
"e" technology is bringing about many challenges for companies, in particular for their managers. This concerns a vast range of business processes in many sectors of the economy and in nearly every country of the world. In rapidly industrializing China, companies and other organizations are actively finding their way by adapting, developing and exploiting new e-technologies. The paper's focus is the identification of the management issues in implementing e-technology in China. The paper reports on research into difficulties of establishing and operating e-business in China. In particular, it discusses management related to e-technology sharing and application. A brief review of literature is followed by the analysis of three recent case studies: an international IT services alliance, a financial services provider and an international manufacturing joint venture. All case companies are applying e-technology in China, but the role of e-technology differs in the three cases: adding a service line to the existing business processes; developing a new business process; and increasing efficiency and effectiveness in business processes. The conclusions present the emerging management issues: cooperation is a key asset in networking; the choice of business models plays an important role; adequate management attention for details such as a training program is require
Deep Space Network information system architecture study
The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
Programming patterns and development guidelines for Semantic Sensor Grids (SemSorGrid4Env)
The web of Linked Data holds great potential for the creation of semantic applications that can combine self-describing structured data from many sources, including sensor networks. Such applications build upon the success of an earlier generation of "rapidly developed" applications that utilised RESTful APIs. This deliverable details experience, best practice, and design patterns for developing high-level web-based APIs in support of semantic web applications and mashups for sensor grids. Its main contributions are a proposal for combining Linked Data with RESTful application development, summarised through a set of design principles, and the application of these design principles to Semantic Sensor Grids through the development of a High-Level API for Observations. These are supported by implementations of the High-Level API for Observations in software, and example semantic mashups that utilise the API.
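The core idea above, that an observation served over a RESTful API should also be self-describing Linked Data, can be illustrated with a small sketch. This is not the deliverable's actual High-Level API; the resource URIs and the builder function are hypothetical, and the vocabulary terms merely follow the style of the W3C SOSA/SSN ontology.

```python
import json

def observation_resource(obs_id, sensor_uri, value, unit, timestamp):
    """Hypothetical sketch: build a JSON-LD document for one sensor
    observation. Because the payload carries its own @context and a
    dereferenceable @id, the same REST response doubles as Linked Data."""
    return {
        "@context": {
            # SOSA/SSN-style terms, used here for illustration only
            "sosa": "http://www.w3.org/ns/sosa/",
            "value": "sosa:hasSimpleResult",
            "madeBySensor": {"@id": "sosa:madeBySensor", "@type": "@id"},
            "resultTime": "sosa:resultTime",
        },
        "@id": f"http://example.org/observations/{obs_id}",  # resource URI (hypothetical)
        "@type": "sosa:Observation",
        "madeBySensor": sensor_uri,  # link to another resource, not an embedded copy
        "value": f"{value} {unit}",
        "resultTime": timestamp,
    }

doc = observation_resource("42", "http://example.org/sensors/t1",
                           21.5, "Cel", "2010-06-01T12:00:00Z")
print(json.dumps(doc, indent=2))
```

The design choice being illustrated is that linking by URI (the `madeBySensor` field) rather than embedding lets independently developed mashups follow links across sources, which is what the combination of REST and Linked Data buys.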
A Quality Model for Actionable Analytics in Rapid Software Development
Background: Accessing relevant data on the product, process, and usage perspectives of software, as well as integrating and analyzing such data, is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called the Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. They also considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that would take more time to find manually and adds transparency across the perspectives of system, process, and usage.
Comment: This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online.
- …