120 research outputs found

    Enabling Constraint-based Binary Reconfiguration by Binary Analysis

    Today’s adaptable architectures require configurability and adaptability to be supported already at design level. However, modern software products are often constructed out of reusable but non-adaptable and/or legacy software artifacts (e.g. libraries) to meet early time-to-market requirements. Thus, modern adaptable architectures are rarely used in commercial applications, because the effort to add adaptability to the reused software artifacts is too high. In this paper, we propose a methodology to semi-automatically configure existing binaries based on a given set of constraints. It is based on building the annotated control flow graph to identify and remove unused code at the level of static basic blocks, depending on different execution requirements given as a set of constraints. This allows for adaptation of binaries after compile time without the use of source code. We then propose a way of adding additional reconfiguration support to these configured binary objects. With this approach, adaptation and reconfiguration can be added with low effort to non-adaptive software.
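
    To illustrate the kind of constraint-driven pruning described above, the following minimal Python sketch marks which basic blocks of an annotated control flow graph remain reachable under a given set of constraints and which could be removed; the block names, feature annotations, and constraint format are invented for the example and are not taken from the paper.

        # Minimal sketch (not the paper's tool): prune unused basic blocks of a
        # control-flow graph according to a set of configuration constraints.
        from collections import deque

        # Annotated CFG: block -> (required feature or None, successor blocks)
        CFG = {
            "entry":     (None,      ["init", "net_init"]),
            "init":      (None,      ["main_loop"]),
            "net_init":  ("NETWORK", ["main_loop"]),   # only needed if NETWORK is enabled
            "main_loop": (None,      ["log", "crypto", "exit"]),
            "log":       ("LOGGING", ["main_loop"]),
            "crypto":    ("CRYPTO",  ["main_loop"]),
            "exit":      (None,      []),
        }

        def reachable_blocks(cfg, enabled_features):
            """Walk the CFG from 'entry', skipping blocks whose feature is disabled."""
            seen, work = set(), deque(["entry"])
            while work:
                block = work.popleft()
                if block in seen:
                    continue
                feature, successors = cfg[block]
                if feature is not None and feature not in enabled_features:
                    continue  # block violates the constraints -> candidate for removal
                seen.add(block)
                work.extend(successors)
            return seen

        constraints = {"LOGGING"}         # deployment needs logging, no network or crypto
        keep = reachable_blocks(CFG, constraints)
        remove = set(CFG) - keep
        print("keep:  ", sorted(keep))
        print("remove:", sorted(remove))  # these blocks could be overwritten with traps/NOPs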

    Flattening Hierarchical Scheduling.

    Recently, the application of virtual-machine technology to integrate real-time systems into a single host has received significant attention and caused controversy. Drawing two examples from mixed-criticality systems, we demonstrate that current virtualization technology, which handles guest scheduling as a black box, is incompatible with this modern scheduling discipline. However, there is a simple solution: exporting sufficient information for the host scheduler to overcome this problem. We describe the problem and the modifications required on the guest, and show, using two practical real-time operating systems as examples, how flattening the hierarchical scheduling problem resolves the issue. We conclude by showing the limitations of our technique at the current state of our research.
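
    The toy Python comparison below illustrates the problem sketched above: a black-box host scheduler that sees only static VM priorities can end up running a lower-priority guest thread, whereas a "flattened" scheduler that sees exported guest thread priorities picks the right one. The guest names, priorities, and data layout are invented for the illustration and are not the paper's interface.

        # Hierarchical (black-box) vs. flattened scheduling on made-up data.
        # Each guest exports the priorities of its currently runnable threads.
        guests = {
            "guest_A": {"vm_priority": 10, "runnable_thread_prios": [3, 7]},
            "guest_B": {"vm_priority": 5,  "runnable_thread_prios": [9]},  # critical task
        }

        def hierarchical_pick(guests):
            """Black-box host: run the VM with the highest static VM priority."""
            return max(guests, key=lambda g: guests[g]["vm_priority"])

        def flattened_pick(guests):
            """Flattened host: run the VM hosting the highest-priority runnable thread."""
            return max(guests,
                       key=lambda g: max(guests[g]["runnable_thread_prios"], default=-1))

        print("hierarchical choice:", hierarchical_pick(guests))  # guest_A (its prio-7 thread runs)
        print("flattened choice:   ", flattened_pick(guests))     # guest_B (the prio-9 thread runs)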

    Just-in-time Analytics Over Heterogeneous Data and Hardware

    Industry and academia are continuously becoming more data-driven and data-intensive, relying on the analysis of a wide variety of datasets to gain insights. At the same time, data variety increases continuously across multiple axes. First, data comes in multiple formats, such as the binary tabular data of a DBMS, raw textual files, and domain-specific formats. Second, different datasets follow different data models, such as the relational and the hierarchical one. Data location also varies: Some datasets reside in a central "data lake", whereas others lie in remote data sources. In addition, users execute widely different analysis tasks over all these data types. Finally, the process of gathering and integrating diverse datasets introduces several inconsistencies and redundancies in the data, such as duplicate entries for the same real-world concept. In summary, heterogeneity significantly affects the way data analysis is performed. In this thesis, we aim for data virtualization: Abstracting data out of its original form and manipulating it regardless of the way it is stored or structured, without a performance penalty. To achieve data virtualization, we design and implement systems that i) mask heterogeneity through the use of heterogeneity-aware, high-level building blocks and ii) offer fast responses through on-demand adaptation techniques. Regarding the high-level building blocks, we use a query language and algebra to handle multiple collection types, such as relations and hierarchies, express transformations between these collection types, as well as express complex data cleaning tasks over them. In addition, we design a location-aware compiler and optimizer that masks away the complexity of accessing multiple remote data sources. Regarding on-demand adaptation, we present a design to produce a new system per query. The design uses customization mechanisms that trigger runtime code generation to mimic the system most appropriate to answer a query fast: Query operators are thus created based on the query workload and the underlying data models; the data access layer is created based on the underlying data formats. In addition, we exploit emerging hardware by customizing the system implementation based on the available heterogeneous processors (CPUs and GPGPUs). We thus pair each workload with its ideal processor type. The end result is a just-in-time database system that is specific to the query, data, workload, and hardware instance. This thesis redesigns the data management stack to natively cater for data heterogeneity and exploit hardware heterogeneity. Instead of centralizing all relevant datasets, converting them to a single representation, and loading them in a monolithic, static, suboptimal system, our design embraces heterogeneity. Overall, our design decouples the type of performed analysis from the original data layout; users can perform their analysis across data stores, data models, and data formats, but at the same time experience the performance offered by a custom system that has been built on demand to serve their specific use case.
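
    As a rough illustration of the per-query code generation idea mentioned above, the sketch below emits and compiles a small scan function specialised to a data format at query time; the formats, field names, and helper API are illustrative assumptions, not the system described in the thesis.

        # Runtime code generation of a format-specific data access function.
        import io

        def generate_scan(fmt, column):
            """Return a scan(file_like) function specialised for one format and column."""
            if fmt == "csv":
                src = (
                    "def scan(f):\n"
                    "    import csv\n"
                    f"    return [row[{column!r}] for row in csv.DictReader(f)]\n"
                )
            elif fmt == "jsonl":
                src = (
                    "def scan(f):\n"
                    "    import json\n"
                    f"    return [json.loads(line)[{column!r}] for line in f if line.strip()]\n"
                )
            else:
                raise ValueError(f"unsupported format: {fmt}")
            namespace = {}
            exec(compile(src, f"<generated:{fmt}>", "exec"), namespace)  # generate at query time
            return namespace["scan"]

        csv_data = io.StringIO("name,age\nada,36\nalan,41\n")
        jsonl_data = io.StringIO('{"name": "ada", "age": 36}\n{"name": "alan", "age": 41}\n')
        print(generate_scan("csv", "name")(csv_data))     # ['ada', 'alan']
        print(generate_scan("jsonl", "age")(jsonl_data))  # [36, 41]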

    Secure Virtualization of Latency-Constrained Systems

    Virtualization is a mature technology in server and desktop environments, where multiple systems are consolidated onto a single physical hardware platform, increasing the utilization of today's multi-core systems as well as saving resources such as energy, space, and cost compared to multiple single systems. Looking at embedded environments reveals that many systems internally use multiple separate computing systems, with requirements for real-time behavior and isolation properties. For example, modern high-comfort cars use up to a hundred embedded computing systems. Consolidating such diverse configurations promises to save resources such as energy and weight. In my work I propose a secure software architecture that allows consolidating multiple embedded software systems with timing constraints. The basis of the architecture is a microkernel-based operating system that supports a variety of different virtualization approaches through a generic interface, supporting hardware-assisted virtualization and paravirtualization as well as multiple architectures. Studying guest systems with latency constraints with regard to virtualization showed that standard techniques such as high-frequency time-slicing are not a viable approach. Generally, guest systems are a combination of best-effort and real-time work and thus form a mixed-criticality system. Further analysis showed that such systems need to export relevant internal scheduling information to the hypervisor to support multiple guests with latency constraints. I propose a mechanism to export those relevant events that is secure, flexible, performs well, and is easy to use. The thesis concludes with an evaluation covering the virtualization approach on the ARM and x86 architectures and two guest operating systems, Linux and FreeRTOS, as well as an evaluation of the export mechanism.
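
    The sketch below illustrates, in simplified Python, the kind of event-export mechanism referred to above: the guest pushes "thread became runnable" events into a shared buffer, and the hypervisor consults them to decide whether to preempt the running guest. The event fields, the buffer, and the preemption rule are assumptions made for this example, not the actual interface described in the thesis.

        from collections import deque
        from dataclasses import dataclass

        @dataclass
        class SchedEvent:
            guest: str
            thread: str
            priority: int          # priority of the thread that just became runnable

        shared_buffer = deque(maxlen=64)   # stands in for a shared-memory ring buffer

        def guest_post_event(guest, thread, priority):
            """Guest side: export a 'thread became runnable' event to the hypervisor."""
            shared_buffer.append(SchedEvent(guest, thread, priority))

        def hypervisor_check_preemption(current_guest, current_priority):
            """Hypervisor side: switch guests if an exported event outranks the running one."""
            best = None
            while shared_buffer:
                ev = shared_buffer.popleft()
                if best is None or ev.priority > best.priority:
                    best = ev
            if best is not None and best.priority > current_priority:
                return best.guest          # the guest with the more urgent thread runs next
            return current_guest

        guest_post_event("rtos_guest", "sensor_deadline_task", priority=50)
        print(hypervisor_check_preemption("linux_guest", current_priority=10))  # rtos_guest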

    Security and trust in cloud computing and IoT through applying obfuscation, diversification, and trusted computing technologies

    Cloud computing and the Internet of Things (IoT) are very widely spread and commonly used technologies nowadays. The advanced services offered by cloud computing have made it a highly demanded technology. Enterprises and businesses rely more and more on the cloud to deliver services to their customers. The prevalent use of the cloud means that more data is stored outside the organization's premises, which raises concerns about the security and privacy of the stored and processed data. This highlights the significance of effective security practices to secure the cloud infrastructure. The number of IoT devices is growing rapidly and the technology is being employed in a wide range of sectors including smart healthcare, industry automation, and smart environments. These devices collect and exchange a great deal of information, some of which may contain critical and personal data of the users of the device. Hence, it is highly important to protect the data collected and shared over the network; notwithstanding, studies indicate that attacks on these devices are increasing, while a high percentage of IoT devices lack proper security measures to protect the devices, the data, and the privacy of the users. In this dissertation, we study the security of cloud computing and IoT and propose software-based security approaches, supported by hardware-based technologies, to provide robust measures for enhancing the security of these environments. To achieve this goal, we use obfuscation and diversification as software security techniques. Code obfuscation protects the software from malicious reverse engineering, and diversification mitigates the risk of large-scale exploits. We study trusted computing and Trusted Execution Environments (TEE) as the hardware-based security solutions. The Trusted Platform Module (TPM) provides security and trust through a hardware root of trust and assures the integrity of a platform. We also study Intel SGX, a TEE solution that guarantees the integrity and confidentiality of the code and data loaded into its protected container, the enclave. More precisely, through obfuscation and diversification of the operating systems and APIs of IoT devices, we secure them at the application level, and by obfuscation and diversification of the communication protocols, we protect the communication of data between them at the network level. To secure cloud computing, we employ obfuscation and diversification techniques for securing the cloud software at the client side. For an enhanced level of security, we employ the hardware-based security solutions TPM and SGX. These solutions, in addition to security, ensure layered trust across the various layers from the hardware to the application. As a result of this PhD research, this dissertation addresses a number of security risks targeting IoT and cloud computing through the delivered publications and presents a brief outlook on future research directions.
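
    As a toy illustration of the interface-diversification idea described above (not the dissertation's implementation), the Python sketch below derives a device-unique renaming of an internal API from a secret seed, so an exploit hard-coded against the standard names does not transfer across devices; the API names and seed handling are invented for the example.

        import hashlib

        STANDARD_API = ["read_sensor", "send_packet", "write_flash"]

        def diversified_name(api_name, device_seed):
            """Derive a device-specific alias for an API symbol."""
            digest = hashlib.sha256(f"{device_seed}:{api_name}".encode()).hexdigest()
            return f"{api_name}_{digest[:8]}"

        def build_dispatch_table(device_seed, implementations):
            """Expose the implementations only under the diversified names."""
            return {diversified_name(name, device_seed): impl
                    for name, impl in implementations.items()}

        impls = {name: (lambda n=name: f"{n} called") for name in STANDARD_API}
        table_dev1 = build_dispatch_table("device-1-secret", impls)
        table_dev2 = build_dispatch_table("device-2-secret", impls)

        print(sorted(table_dev1))           # aliases differ from device to device
        print(sorted(table_dev2))
        print("send_packet" in table_dev1)  # False: the standard name no longer exists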

    Protecting Software through Obfuscation: Can It Keep Pace with Progress in Code Analysis?

    Software obfuscation has always been a controversial research area. While theoretical results indicate that provably secure obfuscation in general is impossible, its widespread application in malware and commercial software shows that it is nevertheless popular in practice. Still, it remains largely unexplored to what extent today’s software obfuscations keep up with state-of-the-art code analysis and where we stand in the arms race between software developers and code analysts. The main goal of this survey is to analyze the effectiveness of different classes of software obfuscation against the continuously improving deobfuscation techniques and off-the-shelf code analysis tools. The answer very much depends on the goals of the analyst and the available resources. On the one hand, many forms of lightweight static analysis have difficulties with even basic obfuscation schemes, which explains the unbroken popularity of obfuscation among malware writers. On the other hand, more expensive analysis techniques, in particular when used interactively by a human analyst, can easily defeat many obfuscations. As a result, software obfuscation for the purpose of intellectual property protection remains highly challenging.
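
    To make the notion of a "basic obfuscation scheme" concrete, the small Python example below combines an XOR-encoded string with an opaque predicate: simple static string extraction no longer finds the literal, while merely running the code, as a dynamic analyst would, reveals it at once. The key, predicate, and payload are arbitrary choices for this illustration.

        def _decode(blob, key):
            return bytes(b ^ key for b in blob).decode()

        # XOR-encoded form of "secret-api-token" (key 0x5A); the plaintext
        # literal never appears in the source.
        _BLOB = b")?9(?.w;*3w.51?4"

        def get_token(x: int) -> str:
            # Opaque predicate: (x * x) % 4 is never 2 for any integer x, so the
            # first branch is dead, but that is not obvious to a quick static scan.
            if (x * x) % 4 == 2:
                return "decoy-value"
            return _decode(_BLOB, 0x5A)

        print(get_token(7))   # a dynamic analyst immediately sees "secret-api-token"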

    Security and Privacy in Unified Communication

    The use of unified communication (video conferencing, audio conferencing, and instant messaging) has skyrocketed during the COVID-19 pandemic. However, security and privacy considerations have often been neglected. This paper provides a comprehensive survey of security and privacy in Unified Communication (UC). We systematically analyze security and privacy threats and mitigations in a generic UC scenario. Based on this, we analyze the security and privacy features of the major UC market leaders and draw conclusions on the overall UC landscape. While confidentiality in communication channels is generally well protected through encryption, other privacy properties are mostly lacking on UC platforms.

    Process Management for the Mobile Internet of Things (Mobiilse värkvõrgu protsessihaldus)

    The Internet of Things (IoT) promotes solutions such as a smart city, where everyday objects connect with information systems and with each other. One example is a road condition monitoring system, where connected vehicles, such as buses, capture video, which is then processed to detect potholes and snow build-up. Building such a solution typically involves establishing a complex centralised system. The centralised approach may become a bottleneck as the number of IoT devices keeps growing: it relies on constant connectivity to all involved devices to make decisions, such as which vehicles to involve in the process. Designing, automating, managing, and monitoring such processes can be greatly supported by the standards and software systems provided by the field of Business Process Management (BPM). However, BPM techniques are not directly applicable to new computing paradigms, such as Fog Computing and Edge Computing, on which the future of IoT relies. Here, much of the decision-making and processing is moved from central data centers to devices at the network edge, near the end users and IoT sensors. For example, video could be processed in mini-datacenters deployed throughout the city, e.g., at bus stops. This load distribution reduces the risk of the ever-growing number of IoT devices overloading the data center. This thesis studies how to reorganise process execution in this decentralised fashion, where processes must dynamically adapt to a volatile edge environment filled with moving devices. Namely, connectivity is intermittent, so decision-making and planning need to take into account factors such as the movement trajectories of mobile devices. We examine this issue in simulations and with a prototype for Android smartphones. We also present the STEP-ONE toolset, which allows researchers to conveniently simulate and analyse these issues in different realistic scenarios, such as a smart city. https://www.ester.ee/record=b552551
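
    As a small illustration of trajectory-aware decision-making in such an edge setting (not the actual algorithm or the STEP-ONE API from the thesis), the Python sketch below picks the roadside edge node whose coverage a vehicle's predicted trajectory occupies the longest; the coordinates, radii, and trajectory are made-up example data.

        import math

        edge_nodes = {
            "bus_stop_A": {"pos": (0.0, 0.0),  "radius": 3.0},
            "bus_stop_B": {"pos": (10.0, 0.0), "radius": 4.0},
        }

        # Predicted trajectory: one (x, y) sample per time step; the bus drives east.
        trajectory = [(float(x), 0.0) for x in range(2, 14)]

        def coverage_steps(node, trajectory):
            """Count trajectory samples that fall inside the node's coverage circle."""
            (nx, ny), r = node["pos"], node["radius"]
            return sum(1 for (x, y) in trajectory if math.hypot(x - nx, y - ny) <= r)

        def pick_offload_target(edge_nodes, trajectory):
            """Choose the edge node with the longest predicted connectivity window."""
            return max(edge_nodes, key=lambda n: coverage_steps(edge_nodes[n], trajectory))

        for name, node in edge_nodes.items():
            print(name, "covered steps:", coverage_steps(node, trajectory))
        print("offload video processing to:", pick_offload_target(edge_nodes, trajectory))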