Security considerations in the open source software ecosystem
Open source software plays an important role in the software supply chain, allowing stakeholders to
utilize open source components as building blocks in their software, tooling, and infrastructure. But
relying on the open source ecosystem introduces unique challenges in terms of security and trust, as well as supply chain reliability.
In this dissertation, I investigate the approaches, considerations, and challenges encountered by stakeholders in the context of security, privacy, and trustworthiness of the open source software supply
chain. Overall, my research aims to empower and support software experts with the knowledge and
resources necessary to achieve a more secure and trustworthy open source software ecosystem. In the
first part of this dissertation, I describe a research study investigating the security and trust practices
in open source projects by interviewing 27 owners, maintainers, and contributors from a diverse set
of projects to explore their behind-the-scenes processes, guidance and policies, incident handling, and
encountered challenges, finding that participants’ projects are highly diverse in terms of their deployed
security measures and trust processes, as well as their underlying motivations. On the consumer side of the open source software supply chain, I investigated the use of open source components in
industry projects by interviewing 25 software developers, architects, and engineers to understand their
projects’ processes, decisions, and considerations in the context of external open source code, finding
that open source components play an important role in many of the industry projects, and that most
projects have some form of company policy or best practice for including external code. On the side of
end-user-focused software, I present a study investigating the use of software obfuscation in Android
applications, which is a recommended practice to protect against plagiarism and repackaging. The
study leveraged a multi-pronged approach including a large-scale measurement, a developer survey, and
a programming experiment, finding that only 24.92% of apps are obfuscated by their developers, that developers do not fear theft of their own apps, and that they have difficulties obfuscating their apps. Lastly,
to involve end users themselves, I describe a survey with 200 users of cloud office suites to investigate
their security and privacy perceptions and expectations, with findings suggesting that users are generally
aware of basic security implications, but lack the technical knowledge to envision some threat models.
The key findings of this dissertation include that open source projects have highly diverse security measures, trust processes, and underlying motivations; that projects' security and trust needs are likely best met in ways that consider their individual strengths, limitations, and project stage, especially for smaller projects with limited access to resources; and that open source components play an important role in industry projects, and those projects often have some form of company policy or best practice for including external code, although developers wish for more resources to better audit included components.
This dissertation emphasizes the importance of collaboration and shared responsibility in building and maintaining the open source software ecosystem, with developers, maintainers, end users, researchers, and other stakeholders working together to ensure that the ecosystem remains a secure, trustworthy, and healthy resource for everyone to rely on.
Security and Privacy for Modern Wireless Communication Systems
This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design in addressing emerging security and privacy issues for modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretical foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node–edge–cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; and security and privacy design for underwater communications networks.
Developing an architecture for a test sequencer
Testing is an important aspect of developing systems and products. It ensures the quality of the product and that the product works as specified. However, testing is an expensive and time-consuming process. It can be made more effective by automating it with test automation tools.
In this work, a test automation tool called DriveTest2 is developed. DriveTest2 is a test sequencer that can be used to test different kinds of devices. A test sequencer is a program that controls a test setup by sending command sequences to the device under test and to the other devices in the setup. DriveTest2 is capable of data acquisition during tests; this data is used for monitoring the sequence and for test reports. DriveTest2 can be used for different kinds of testing, such as verification, reliability, accelerated-lifetime, and stress testing. DriveTest2 will replace an existing tool called DriveTest, which is hard to maintain and update and has no documentation. The objective of this work is to find out what kind of software architecture a test sequencer should have.
DriveTest2 and its architecture are developed using an iterative development process. Development starts from gathering and analyzing requirements for the software, which was done in previous work. The first iteration of the architecture is created from these requirements. Then the first design and implementation of DriveTest2 is created and given to users for feedback. Based on the feedback and the requirements, refinements to the architecture and implementation are made. This process is repeated until the software is satisfactory.
DriveTest2 is developed using the C# programming language. DriveTest2 uses the Model-View-ViewModel (MVVM) architectural pattern to separate the user interface from the business logic. The business logic of DriveTest2 is designed to be object-oriented and is modelled using UML class diagrams. Throughout the iterations, the architecture and design were refined with changes to the initial architecture and new functionality, such as new interfaces to handle device-specific features.
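The MVVM separation mentioned above can be sketched in a few lines. This is a hypothetical illustration of the pattern itself, shown in Python for brevity rather than the C# used by DriveTest2, and the `status` property and `start_sequence` command are invented names:

```python
class SequenceViewModel:
    """Exposes model state as bindable properties and commands; the view
    subscribes to change notifications instead of touching the model."""

    def __init__(self, model):
        self._model = model        # business-logic state (the Model)
        self._listeners = []       # callbacks registered by bound views

    def subscribe(self, callback):
        self._listeners.append(callback)

    @property
    def status(self):              # property the view binds to
        return f"Sequence: {self._model['state']}"

    def start_sequence(self):      # command invoked from the view
        self._model['state'] = "running"
        for notify in self._listeners:
            notify("status")       # tell bound views which property changed

# Usage: a "view" observes the view model purely through the subscription.
events = []
vm = SequenceViewModel({'state': "idle"})
vm.subscribe(events.append)
vm.start_sequence()
print(vm.status, events)  # → Sequence: running ['status']
```

The point of the pattern is visible even at this scale: the view never reaches into the model, so the business logic can be tested and evolved independently of the user interface.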
As a result of this work, we have an iteration of DriveTest2 that is used successfully in test laboratories. The software architecture of DriveTest2 enables it to satisfy its critical functional and non-functional requirements. We can conclude that a test sequencer's software architecture should enable the software to satisfy its requirements, have decoupled components with clear responsibilities, be flexible enough to accommodate architectural changes during development, be extensible so that new functionality can be added, and make use of abstraction over lower-level components and the actual hardware.
Towards Expressive and Versatile Visualization-as-a-Service (VaaS)
The rapid growth of data in scientific visualization has posed significant challenges to the scalability and availability of interactive visualization tools. These challenges can be largely attributed to the limitations of traditional monolithic applications in handling large datasets and accommodating multiple users or devices. To address these issues, the Visualization-as-a-Service (VaaS) architecture has emerged as a promising solution. VaaS leverages cloud-based visualization capabilities to provide on-demand and cost-effective interactive visualization. Existing VaaS designs have been simplistic, focusing on task parallelism with single-user-per-device tasks for predetermined visualizations. This dissertation aims to extend the capabilities of VaaS by exploring data-parallel visualization services with multi-device support and hypothesis-driven explorations. By incorporating stateful information and enabling dynamic computation, VaaS' performance and flexibility for various real-world applications are improved. This dissertation explores the history of monolithic and VaaS architectures, the design and implementation of three new VaaS applications, and a final exploration of the future of VaaS. This research contributes to the advancement of interactive scientific visualization, addressing the challenges posed by large datasets and remote collaboration scenarios.
Data-Driven Methods for Data Center Operations Support
During the last decade, cloud technologies have been evolving at
an impressive pace, such that we are now living in a cloud-native
era where developers can leverage an unprecedented landscape
of (possibly managed) services for orchestration, compute, storage,
load-balancing, monitoring, etc. The possibility to have on-demand
access to a diverse set of configurable virtualized resources allows
for building more elastic, flexible and highly-resilient distributed
applications. Behind the scenes, cloud providers sustain the heavy
burden of maintaining the underlying infrastructures, consisting of large-scale distributed systems partitioned and replicated among many geographically dispersed data centers to guarantee scalability,
robustness to failures, high availability and low latency. The larger the
scale, the more cloud providers have to deal with complex interactions
among the various components, such that monitoring, diagnosing and
troubleshooting issues become incredibly daunting tasks.
To keep up with these challenges, development and operations
practices have undergone significant transformations, especially in
terms of improving the automation that makes releasing new software and responding to unforeseen issues faster and more sustainable at scale.
The resulting paradigm is nowadays referred to as DevOps. However,
while such automations can be very sophisticated, traditional DevOps
practices fundamentally rely on reactive mechanisms that typically
require careful manual tuning and supervision from human experts.
To minimize the risk of outages—and the related costs—it is crucial to
provide DevOps teams with suitable tools that can enable a proactive
approach to data center operations.
This work presents a comprehensive data-driven framework to address
the most relevant problems that can be experienced in large-scale
distributed cloud infrastructures. These environments are characterized by the availability of large volumes of diverse data, collected at each
level of the stack, such as: time-series (e.g., physical host measurements,
virtual machine or container metrics, networking components
logs, application KPIs); graphs (e.g., network topologies, fault graphs
reporting dependencies among hardware and software components,
performance issues propagation networks); and text (e.g., source code,
system logs, version control system history, code review feedback).
Such data are also typically updated with relatively high frequency,
and subject to distribution drifts caused by continuous configuration
changes to the underlying infrastructure. In such a highly dynamic scenario,
traditional model-driven approaches alone may be inadequate
at capturing the complexity of the interactions among system components. DevOps teams would certainly benefit from having robust
data-driven methods to support their decisions based on historical
information. For instance, effective anomaly detection capabilities may
also help in conducting more precise and efficient root-cause analysis.
Likewise, leveraging accurate forecasting and intelligent control
strategies would improve resource management.
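As a concrete illustration of the kind of data-driven support described above, consider anomaly detection over a host metric time-series. The following is a hypothetical, minimal sketch using a trailing-window z-score, not the dissertation's actual method, and the metric values are invented:

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the mean of the preceding
    `window` samples by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.stdev(hist) or 1e-9  # avoid division by zero
        if abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Usage: CPU-utilization samples with one sudden spike at index 6.
cpu = [0.30, 0.32, 0.31, 0.29, 0.30, 0.31, 0.95, 0.30]
print(detect_anomalies(cpu))  # → [6]
```

A trailing window like this adapts to slow drifts in the baseline, which matters in environments whose metric distributions shift with continuous configuration changes; production systems would of course use far more robust models than this sketch.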
Given their ability to deal with high-dimensional, complex data,
Deep Learning-based methods are the most straightforward option for
the realization of the aforementioned support tools. On the other hand,
because of their complexity, these models often require huge processing power, and suitable hardware, to be operated effectively
at scale. These aspects must be carefully addressed when applying
such methods in the context of data center operations. Automated
operations approaches must be dependable and cost-efficient, so as not to degrade the services they are built to improve.
Towards sustainable e-learning platforms in the context of cybersecurity: A TAM-driven approach
The rapid growth of electronic learning (e-learning) platforms has raised concerns about cybersecurity risks. The vulnerability of university students to cyberattacks and privacy concerns within e-learning platforms presents a pressing issue. Students’ frequent and intense internet presence, coupled with their extensive computer usage, puts them at higher risk of becoming victims of cyberattacks. This problem necessitates a deeper understanding in order to enhance cybersecurity measures and safeguard students’ privacy and intellectual property in educational environments. This dissertation work addresses the following research questions: (a) To what extent do cybersecurity perspectives affect students’ intention to use e-learning platforms? (b) To what extent do students’ privacy concerns affect their intention to use e-learning platforms? (c) To what extent does students’ cybersecurity awareness affect their intention to use e-learning platforms? (d) To what extent do academic integrity concerns affect their intention to use e-learning platforms? and (e) To what extent does students’ computer self-efficacy affect their intention to use e-learning platforms? This study was conducted using an enhanced version of the technology acceptance model (TAM3) to examine the factors influencing students’ intention to use e-learning platforms. The study involved undergraduate and graduate students at Eastern Michigan University, and data were collected through a web-based questionnaire. The questionnaire was developed using the Qualtrics tool and included validated measures and scales with close-ended questions. The collected data were analyzed using SPSS 28, and the significance level for hypothesis testing was set at 0.05. Out of 6,800 distributed surveys, 590 responses were received, and after data cleaning, 582 responses were included in the final sample.
The findings revealed that cybersecurity perspectives, cybersecurity awareness, academic integrity concerns, and computer self-efficacy significantly influenced students’ intention to use e-learning platforms. The study has implications for practitioners, educators, and researchers involved in designing secure e-learning platforms, emphasizing the importance of cybersecurity and recommending effective cybersecurity training programs to enhance user engagement. Overall, the study highlights the role of cybersecurity in promoting the adoption and usage of e-learning platforms, providing valuable insights for developers and educators to create secure e-learning environments and benefiting stakeholders in the e-learning industry.
Advancements in Real-Time Simulation of Power and Energy Systems
Modern power and energy systems are characterized by the wide integration of distributed generation, storage, and electric vehicles, the adoption of ICT solutions, the interconnection of different energy carriers, and consumer engagement, posing new challenges and creating new opportunities. Advanced testing and validation methods are needed to efficiently validate power equipment and controls in the contemporary complex environment and support the transition to a cleaner and sustainable energy system. Real-time hardware-in-the-loop (HIL) simulation has proven to be an effective method for validating and de-risking power system equipment in highly realistic, flexible, and repeatable conditions. Controller hardware-in-the-loop (CHIL) and power hardware-in-the-loop (PHIL) are the two main HIL simulation methods used in industry and academia that contribute to system-level testing enhancement by exploiting the flexibility of digital simulations in testing actual controllers and power equipment. This book addresses recent advances in real-time HIL simulation in several domains (including new and promising areas), as well as technique improvements to promote its wider use. It is composed of 14 papers dealing with advances in HIL testing of power electronic converters, power system protection, modeling for real-time digital simulation, co-simulation, geographically distributed HIL, and multiphysics HIL, among other topics.
Applications in Electronics Pervading Industry, Environment and Society
This book features the manuscripts accepted for the Special Issue “Applications in Electronics Pervading Industry, Environment and Society—Sensing Systems and Pervasive Intelligence” of the MDPI journal Sensors. Most of the papers come from a selection of the best papers of the 2019 edition of the “Applications in Electronics Pervading Industry, Environment and Society” (APPLEPIES) Conference, which was held in November 2019. All these papers have been significantly enhanced with novel experimental results. The papers give an overview of the trends in research and development activities concerning the pervasive application of electronics in industry, the environment, and society. The focus of these papers is on cyber-physical systems (CPS), with research proposals for new sensor acquisition and ADC (analog-to-digital converter) methods, high-speed communication systems, cybersecurity, big data management, and data processing including emerging machine learning techniques. Physical implementation aspects are discussed, as well as the trade-offs found between functional performance and hardware/system costs.