Stochastic Minimum Principle for Partially Observed Systems Subject to Continuous and Jump Diffusion Processes and Driven by Relaxed Controls
In this paper we consider nonconvex control problems for stochastic
differential equations driven by relaxed controls. We establish the existence of
optimal controls and then develop necessary conditions of optimality. We cover
both continuous diffusion and jump processes. Comment: 23 pages. Submitted to SIAM Journal on Control and Optimization.
Centralized Versus Decentralized Team Games of Distributed Stochastic Differential Decision Systems with Noiseless Information Structures-Part II: Applications
In this second part of our two-part paper, we invoke the stochastic maximum
principle, conditional Hamiltonian and the coupled backward-forward stochastic
differential equations of the first part [1] to derive team optimal
decentralized strategies for distributed stochastic differential systems with
noiseless information structures. We present examples of such team games of
nonlinear as well as linear quadratic forms. In some cases we obtain closed
form expressions of the optimal decentralized strategies.
Through the examples, we illustrate the effect of information signaling among
the decision makers in reducing the computational complexity of optimal
decentralized decision strategies. Comment: 39 pages. Submitted to IEEE Transactions on Automatic Control.
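The conditional Hamiltonian and the coupled backward-forward stochastic differential equations invoked above take, in a generic single-agent, fully observed form, roughly the following shape. The notation here is illustrative only and is not the paper's exact formulation; sign conventions for the adjoint equation vary across references.

```latex
% Controlled forward SDE
dx_t = f(t, x_t, u_t)\,dt + \sigma(t, x_t)\,dW_t, \qquad x_0 = x.

% Hamiltonian (running cost \ell, adjoint variable \psi)
H(t, x, u, \psi) = \langle f(t, x, u), \psi \rangle + \ell(t, x, u).

% Adjoint backward SDE (terminal cost \varphi)
d\psi_t = -\,\partial_x H(t, x_t, u_t, \psi_t)\,dt + q_t\,dW_t,
\qquad \psi_T = \partial_x \varphi(x_T).

% Pointwise optimality condition (minimum principle for a cost to be minimized)
H(t, x_t, u_t^{*}, \psi_t) = \min_{u \in \mathbb{U}} H(t, x_t, u, \psi_t).
```

In the decentralized team setting of the paper, each decision maker's strategy satisfies such a condition relative to its own (conditional) information, which is what couples the forward and backward equations across agents.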
Pragmatic Implicature Functions in the Hillary Clinton and Donald Trump Presidential Debates
This thesis examines the utterances made by the presidential candidates Hillary Clinton and Donald Trump during their debates as they sought audience support in the race for the presidency of the United States. Each candidate offered suggestions and made negative comments to persuade the audience that he or she was the best choice for president.
The writer uses a qualitative method of data collection, since the material analyzed is textual; the data were gathered from all relevant information and sources. Library research, a kind of qualitative method, is used to analyze the data. Sarwono (2006:226) notes that library research is suited to the systematic analysis of textual data, so in the writer's view this method is appropriate for the present study. The data were collected and categorized into primary and secondary data using three pragmatic implicature functions (the assertive, directive, and expressive functions), and only a sample of the data was selected for analysis.
The results of this thesis show that, across the three presidential debates between Hillary Clinton and Donald Trump, the pragmatic implicature function used most frequently is the expressive function.
 
Software Engineering Approaches for TinyML based IoT Embedded Vision: A Systematic Literature Review
Internet of Things (IoT) has catapulted human ability to control our
environments through ubiquitous sensing, communication, computation, and
actuation. Over the past few years, IoT has joined forces with Machine Learning
(ML) to embed deep intelligence at the far edge. TinyML (Tiny Machine Learning)
has enabled the deployment of ML models for embedded vision on extremely lean
edge hardware, bringing the power of IoT and ML together. However, TinyML
powered embedded vision applications are still in a nascent stage, and they are
just starting to scale to widespread real-world IoT deployment. To harness the
true potential of IoT and ML, it is necessary to provide product developers
with robust, easy-to-use software engineering (SE) frameworks and best
practices that are customized for the unique challenges faced in TinyML
engineering. Through this systematic literature review, we aggregated the key
challenges reported by TinyML developers and identified state-of-the-art SE
approaches in large-scale Computer Vision, Machine Learning, and Embedded
Systems that can help address key challenges in TinyML based IoT embedded
vision. In summary, our study draws synergies between SE expertise that
embedded systems developers and ML developers have independently developed to
help address the unique challenges in the engineering of TinyML based IoT
embedded vision. Comment: 8 pages, 3 figures.
Developers' Perception of Peer Code Review in Research Software Development
Context Research software is software developed by and/or used by researchers, across a wide variety of domains, to perform their research. Because of the complexity of research software, developers cannot conduct exhaustive testing. As a result, researchers have lower confidence in the correctness of the output of the software. Peer code review, a standard software engineering practice, has helped address this problem in other types of software.
Objective Peer code review is less prevalent in research software than it is in other types of software. In addition, the literature does not contain any studies about the use of peer code review in research software. Therefore, through analyzing developers' perceptions, the goal of this work is to understand the current practice of peer code review in the development of research software, identify challenges and barriers associated with peer code review in research software, and present approaches to improve peer code review in research software.
Method We conducted interviews and a community survey of research software developers to collect information about their current peer code review practices, difficulties they face, and how they address those difficulties.
Results We received 84 unique responses from the interviews and surveys. The results show that while research software teams review a large amount of their code, they lack formal process, proper organization, and adequate people to perform the reviews.
Conclusions Use of peer code review is promising for improving the quality of research software and thereby improving the trustworthiness of the underlying research results. In addition, by using peer code review, research software developers produce more readable and understandable code, which will be easier to maintain.
Analyzing the Effects of CI/CD on Open Source Repositories in GitHub and GitLab
Numerous articles emphasize the benefits of implementing Continuous
Integration and Delivery (CI/CD) pipelines in software development. These
pipelines are expected to improve the reputation of a project and decrease the
number of commits and issues in the repository. Although CI/CD adoption may be
slow initially, it is believed to accelerate service delivery and deployment in
the long run. This study aims to investigate the impact of CI/CD on commit
velocity and issue counts in two open-source repositories, GitLab and GitHub.
By analyzing more than 12,000 repositories and recording every commit and
issue, it was discovered that CI/CD enhances commit velocity by 141.19 percent,
but also increases the number of issues by 321.21 percent.Comment: This paper has been accepted at the 20th IEEE/ACIS International
Conference on Software Engineering, Management and Applications (SERA 2022
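The percentage figures reported above are relative changes in commit velocity and issue counts before versus after CI/CD adoption. A minimal sketch of that metric, with purely illustrative numbers (not the study's data or code), might look like:

```python
def percent_change(before: float, after: float) -> float:
    """Relative change, in percent, from `before` to `after`."""
    return (after - before) / before * 100.0

# Illustrative values only: commits per week before and after CI/CD adoption.
before_velocity = 10.0
after_velocity = 24.119

print(round(percent_change(before_velocity, after_velocity), 2))  # 141.19
```

A change of +141.19 percent thus means the post-adoption rate is roughly 2.4 times the pre-adoption rate, which is why a large positive value can coexist with a similarly large rise in issue counts.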
Automatic Transformation of Natural to Unified Modeling Language: A Systematic Review
Context: Manually processing Software Requirement Specifications (SRS) is
time-consuming for requirement analysts in software engineering.
Researchers have been working on making an automatic approach to ease this
task. Most of the existing approaches require some intervention from an analyst
or are challenging to use. Some automatic and semi-automatic approaches were
developed based on heuristic rules or machine learning algorithms. However,
there are various constraints to the existing approaches of UML generation,
such as restriction on ambiguity, length or structure, anaphora,
incompleteness, atomicity of input text, requirements of domain ontology, etc.
Objective: This study aims to better understand the effectiveness of existing
systems and provide a conceptual framework with further improvement guidelines.
Method: We performed a systematic literature review (SLR). We conducted our
study selection into two phases and selected 70 papers. We conducted
quantitative and qualitative analyses by manually extracting information,
cross-checking, and validating our findings. Result: We described the existing
approaches and revealed the issues observed in these works. We identified and
clustered both the limitations and benefits of selected articles. Conclusion:
This research upholds the necessity of a common dataset and evaluation
framework to extend the research consistently. It also describes the
significance of natural language processing obstacles researchers face. In
addition, it creates a path forward for future research.