Do DL models and training environments have an impact on energy consumption?
Current research in the computer vision field mainly focuses on improving
Deep Learning (DL) correctness and inference time performance. However, there
is still little work on the huge carbon footprint of training DL models.
This study aims to analyze the impact of the model architecture and training
environment when training greener computer vision models. We divide this goal
into two research questions. First, we analyze the effects of model
architecture on achieving greener models while keeping correctness at optimal
levels. Second, we study the influence of the training environment on producing
greener models. To investigate these relationships, we collect multiple metrics
related to energy efficiency and model correctness during the models' training.
Then, we outline the trade-offs between the measured energy efficiency and the
models' correctness regarding model architecture, and their relationship with
the training environment. We conduct this research in the context of a computer
vision system for image classification. In conclusion, we show that selecting
the proper model architecture and training environment can reduce energy
consumption dramatically (up to 98.83%) at the cost of negligible decreases in
correctness. Also, we find evidence that GPUs should scale with the models'
computational complexity for better energy efficiency.
Comment: 49th Euromicro Conference Series on Software Engineering and Advanced Applications (SEAA). 8 pages, 3 figures
Verification of model transformations
Model transformations are a central element of model-driven
development (MDD) approaches such as the model-driven architecture (MDA). The correctness of model transformations is critical to their effective use in practical software development, since users must be able
to rely on the transformations correctly preserving the semantics of models. In this paper we define a formal semantics for model transformations, and provide techniques for proving the termination, confluence, and correctness of model transformations.
Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet which began as a research
experiment was never designed to handle the users and applications it hosts
today. The lack of formalization of the Internet architecture meant limited
abstractions and modularity, especially for the control and management planes,
thus requiring for every new need a new protocol built from scratch. This led
to an unwieldy ossified Internet architecture resistant to any attempts at
formal verification, and an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence
in interest of applying formal methods to specification, verification, and
synthesis of networking protocols and applications. In this paper, we present a
self-contained tutorial of the formidable amount of work that has been done in
formal methods, and present a survey of their applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning their computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
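The fault-masking scheme the abstract describes — running identical computations on several processors and taking a majority vote over their results — can be sketched in a few lines. This is a minimal illustration of the voting idea only, not the SIFT implementation; all names here are hypothetical.

```python
# Minimal sketch of SIFT-style fault masking by majority voting.
# Each replicated task runs on several (possibly faulty) processors;
# a strict majority over the reported results masks a minority of faults.
# All names are illustrative, not taken from the SIFT system.
from collections import Counter

def majority_vote(results):
    """Return the value reported by a strict majority of replicas,
    or None if no strict majority exists (vote is inconclusive)."""
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else None

def mask_faults(replicas, task_input):
    """Run the same computation on every replica and mask faulty
    outputs by voting over the collected results."""
    return majority_vote([replica(task_input) for replica in replicas])

# Two correct replicas outvote one faulty replica:
good = lambda x: x * 2
faulty = lambda x: x * 2 + 1   # simulated fault
print(mask_faults([good, good, faulty], 21))  # prints 42
```

With three replicas, any single faulty processor is outvoted; with no strict majority (e.g. three disagreeing results), the vote is inconclusive, which in a real system would trigger reconfiguration rather than silently picking a value.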
Analysis and Design of Intelligent Logistics System Based on Internet of Things
Based on the Internet of Things, .NET software development technology, and GIS technology, this paper analyzes and designs an intelligent distribution information system, guided by software engineering life-cycle theory, to address the high complexity and low efficiency of manual operation in logistics and distribution, raise the level of intelligent operation, and thereby improve operating efficiency. It analyzes the business requirements of the system, then designs its physical architecture, software architecture, and system structure, and constructs a dynamic model of terminal-node distribution along transmission routes, realizing the system's main function modules and verifying the correctness and effectiveness of the system's results through systematic and comprehensive tests.
DOI: 10.17762/ijritcc2321-8169.15065