2 research outputs found

    AI Planning Assistant for Scheduling Daily Activities

    Artificial conversational agents are software agents that can interact with humans the way humans do. Siri, Cortana, and Alexa are examples of intelligent agents that can help us with most basic tasks. These agents are smart enough to handle basic tasks, but fall short on complex ones, such as analyzing traffic data, reviewing scheduling conflicts, rescheduling meetings while resolving conflicts, and offering suggestions based upon data analyses (e.g., traffic patterns, weather). The full potential of dialogue-based task agents remains untapped, largely because such agents lack the ability to fully understand human language. Natural language processing (NLP) makes it possible for these agents to understand humans. The current research project explores the development of a conversational agent, Schedulio, that connects to the user's calendar and assists with the scheduling, deletion, and modification of events through natural language conversation. Additionally, the research explores the implementation of a recommendation agent that provides suggestions based upon time slot and date recommendations, location-specific weather data, and traffic patterns.
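    Purely as an illustration of the kind of pipeline described above, the following minimal Python sketch turns a natural-language scheduling request into a calendar event and performs a naive conflict check. The class and function names, the regular-expression stand-in for NLP, and the in-memory calendar are hypothetical simplifications, not Schedulio's actual implementation.

        import re
        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import List

        @dataclass
        class CalendarEvent:
            title: str
            start: datetime

        @dataclass
        class Calendar:
            """In-memory stand-in for the user's calendar backend."""
            events: List[CalendarEvent] = field(default_factory=list)

            def add(self, event: CalendarEvent) -> None:
                # Naive conflict check: reject events that share an exact start time.
                if any(e.start == event.start for e in self.events):
                    raise ValueError(f"Conflict: another event already starts at {event.start}")
                self.events.append(event)

        def parse_schedule_request(utterance: str) -> CalendarEvent:
            """Tiny rule-based stand-in for the NLP step: extracts a title, date, and time
            from utterances like 'schedule dentist appointment on 2024-05-03 at 14:00'."""
            match = re.search(r"schedule (.+) on (\d{4}-\d{2}-\d{2}) at (\d{2}:\d{2})", utterance, re.IGNORECASE)
            if match is None:
                raise ValueError("Could not understand the scheduling request")
            title, date_str, time_str = match.groups()
            return CalendarEvent(title=title.strip(), start=datetime.fromisoformat(f"{date_str} {time_str}"))

        if __name__ == "__main__":
            calendar = Calendar()
            event = parse_schedule_request("Schedule dentist appointment on 2024-05-03 at 14:00")
            calendar.add(event)
            print(f"Added '{event.title}' at {event.start}")

    A real system would replace the regular expression with a trained NLP model and the in-memory calendar with the user's actual calendar service.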

    Evidence-based Accountability Audits for Cloud Computing

    Cloud computing is known for its on-demand service provisioning and has now become mainstream. Many businesses as well as individuals use cloud services on a daily basis. The variety of services is large, ranging from the provision of computing resources to services such as productivity suites and social networks. The nature of these services varies heavily in terms of what kind of information is outsourced to the cloud provider. Often, that data is sensitive, for instance when personally identifiable information (PII) is shared by an individual. Also, businesses that move (parts of) their processes to the cloud are actively participating in a major paradigm shift from keeping data on-premises to transferring it to a third-party provider. However, this trend comes with many new challenges, which are closely tied to the loss of control over data. When moving to the cloud, direct control over the geographical storage location, who has access to the data, and how it is shared and processed is given up. Because of this loss of control, cloud customers have to trust cloud providers to treat their data in an appropriate and responsible way. Cloud audits can be used to check how data has been processed in the cloud (i.e., by whom, for what purpose) and whether or not this happened in compliance with agreed-upon privacy and data storage, usage, and maintenance (i.e., data handling) policies. This way, a cloud customer can regain some of the control given up by moving to the cloud.

    In this thesis, accountability audits are presented as a way to strengthen trust in cloud computing by providing assurance that data is processed in the cloud according to data handling and privacy policies. In cloud accountability audits, various distributed evidence sources need to be considered. The research presented in this thesis discusses the use of heterogeneous evidence sources on all cloud layers. This way, a complete picture of the actual data handling practices, based on hard facts, can be presented to the cloud consumer. Furthermore, this strengthens the transparency of data processing in the cloud, which can lead to improved trust in cloud providers, if they choose to adopt these mechanisms in order to assure their customers that their data is handled according to their expectations. The system presented in this thesis enables continuous auditing of a cloud provider's adherence to data handling policies in an automated way that shortens audit intervals and is based on evidence produced by cloud subsystems.

    An important aspect of many cloud offerings is the combination of multiple distinct cloud services offered by independent providers, between which data is frequently exchanged. This also includes trans-border flows of data, where one provider may be required to adhere to stricter data protection requirements than the others. The system presented in this thesis addresses such scenarios by enabling the collection of evidence at the providers and evaluating it during audits. Securing evidence quickly becomes a challenge in the system design when information needed for the audit is itself deemed sensitive or confidential. This means that securing the evidence at rest as well as in transit is of utmost importance, so as not to introduce a new liability by building an insecure data heap.
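    To make the at-rest protection concrete, here is a minimal, hypothetical sketch (not the thesis's actual architecture) that encrypts an evidence record before it is written to the evidence store, using the symmetric Fernet scheme from the widely used Python cryptography package; key management and the record layout are assumptions made for the example.

        from cryptography.fernet import Fernet  # third-party package: cryptography

        # Key management (HSM, KMS, rotation) is out of scope for this sketch.
        key = Fernet.generate_key()
        cipher = Fernet(key)

        # A hypothetical evidence record as it might be emitted by a cloud subsystem.
        evidence = b'{"provider": "provider-a", "action": "store", "location": "EU"}'

        # Encrypt before the record is written to the evidence store (at-rest protection);
        # the ciphertext can also travel over an authenticated channel (in-transit protection).
        token = cipher.encrypt(evidence)

        # Only the audit component holding the key can recover the original record.
        assert cipher.decrypt(token) == evidence
        print("evidence record protected, ciphertext length:", len(token))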
    This research identifies security and privacy protection requirements and proposes solutions that enable the development of an architecture for secure, automated, policy-driven, and evidence-based accountability audits.
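    As a rough, self-contained illustration of what a policy-driven, evidence-based audit check could look like, the following Python sketch evaluates collected evidence records against a simple data handling policy; the record fields, policy attributes, and provider names are invented for this example and do not reflect the system's actual data model.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class EvidenceRecord:
            provider: str      # which provider produced the record
            data_subject: str  # whose data was touched
            action: str        # e.g. "store", "share", "process"
            location: str      # region where the action took place

        @dataclass
        class DataHandlingPolicy:
            allowed_locations: List[str]  # regions where handling is permitted
            allowed_actions: List[str]    # operations the policy covers

        def audit(records: List[EvidenceRecord], policy: DataHandlingPolicy) -> List[str]:
            """Return human-readable violations found in the collected evidence."""
            violations = []
            for r in records:
                if r.location not in policy.allowed_locations:
                    violations.append(f"{r.provider}: '{r.action}' on {r.data_subject}'s data in disallowed region {r.location}")
                if r.action not in policy.allowed_actions:
                    violations.append(f"{r.provider}: action '{r.action}' on {r.data_subject}'s data is not covered by the policy")
            return violations

        if __name__ == "__main__":
            policy = DataHandlingPolicy(allowed_locations=["EU"], allowed_actions=["store", "process"])
            trail = [
                EvidenceRecord("provider-a", "alice", "store", "EU"),
                EvidenceRecord("provider-b", "alice", "share", "US"),  # violates both rules
            ]
            for violation in audit(trail, policy):
                print(violation)

    In the setting described by the abstract, checks of this kind would run continuously over evidence gathered from all cloud layers and from every provider involved, rather than over a small in-memory list.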