5 research outputs found

    Provision of caller ring back tones for IP multimedia platforms

    Get PDF
    The conference aimed to support and stimulate active, productive research that strengthens the technical foundations and skills of engineers and scientists on the continent, leading to new small and medium enterprises within the African sub-continent. It also sought to encourage the emergence of functionally skilled technocrats within the continent.

    Customised Caller Ring Back Tones (CRBT) entertain callers by playing a media clip while the callee’s phone is ringing. With CRBT, the mobile operator replaces the standard ringing tone with a clip selected by the user, in this case the callee. The service may be offered by third-party application providers, but can also be offered by mobile operators themselves. CRBT can be supported by different mobile network infrastructures, including circuit-switched GSM networks and IP multimedia networks such as IMS; these networks require additional components to be integrated in order to provide the service. 3GPP has standardized the IMS architecture, which comprises transport, control and application planes. A SIP interface to applications enables third-party providers to offer value-added services such as IPTV and CRBT, with the RTP packets conveying the media for these applications streamed across transport-plane connections. This paper presents the design and implementation of CRBT on IMS networks, including considerations for deploying both CRBT and reverse CRBT. The design adopts an architecture in which an IMS application server controls the CRBT service, while the media is stored on and served from an RTSP media server. We use the Fraunhofer FOKUS open source IMS core and the UCT IMS client for the implementation. Test results are geared towards proof of concept; performance tests show a minimal added call setup delay of 15 milliseconds.

    Strathmore University; Institute of Electrical and Electronics Engineers (IEEE)
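    The architecture outlined in the abstract (an application server in the SIP signalling path, with the tone streamed from an RTSP media server as early media) can be illustrated with a short, simplified sketch. The class and message names below (CrbtAppServer, RtspClient, SipMessage) and the SDP handling are assumptions made for illustration only; they are not the interfaces of the Fraunhofer FOKUS IMS core or the UCT IMS client.

```python
# Hypothetical sketch of the CRBT call flow seen from an IMS application server.
from dataclasses import dataclass, field


@dataclass
class SipMessage:
    kind: str                          # e.g. "INVITE", "180", "183", "200"
    headers: dict = field(default_factory=dict)
    sdp: str = ""                      # session description carried in the body


class RtspClient:
    """Minimal stand-in for the RTSP control channel to the media server."""

    def __init__(self, server_ip: str):
        self.server_ip = server_ip

    def play(self, clip: str) -> str:
        # A real client would issue DESCRIBE/SETUP/PLAY requests here.
        print(f"RTSP PLAY rtsp://{self.server_ip}/{clip}")
        return f"session-{clip}"

    def teardown(self, session: str) -> None:
        print(f"RTSP TEARDOWN {session}")


class CrbtAppServer:
    """Terminating-side application server inserted into the SIP path."""

    def __init__(self, media: RtspClient, clip_db: dict):
        self.media = media
        self.clip_db = clip_db          # callee URI -> chosen ring back clip
        self.sessions = {}              # Call-ID -> active RTSP session

    def on_invite(self, invite: SipMessage) -> SipMessage:
        callee = invite.headers["To"]
        clip = self.clip_db.get(callee)
        if clip is None:
            # No CRBT subscription: ordinary ringing, stay out of the media path.
            return SipMessage("180", {"To": callee})
        # Start early media: reply 183 with SDP pointing at the media server and
        # ask the RTSP server to stream the callee's chosen clip to the caller.
        self.sessions[invite.headers["Call-ID"]] = self.media.play(clip)
        sdp = f"c=IN IP4 {self.media.server_ip}\r\nm=audio 49170 RTP/AVP 0\r\n"
        return SipMessage("183", {"To": callee}, sdp=sdp)

    def on_answer(self, ok: SipMessage) -> SipMessage:
        # Callee picked up: stop the tone; the 200 OK re-establishes media
        # directly between caller and callee.
        session = self.sessions.pop(ok.headers["Call-ID"], None)
        if session is not None:
            self.media.teardown(session)
        return ok
```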

    A Sandboxing based security model to contain malicious traffic in smart homes

    No full text
    Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in Information Technology at Strathmore University.

    The Internet of Things (IoT) is a developing Next Generation Network (NGN) paradigm that aims to connect more devices to the Internet and to allow these devices to communicate with each other autonomously. These devices mainly use wireless links to communicate, with little or no flow control, error checking or security monitoring. While this helps support mobility and optimize performance, the compromise in flow control and security monitoring renders them more vulnerable to attacks from malicious users. This poses security threats to the data exchanged between devices, especially in a smart home environment, and necessitates mechanisms that guard against malicious messages and unauthorized modification of information so as to limit attacks on the integrity and confidentiality of data. Isolation mechanisms are well suited to cushioning individual devices and the IoT network as a whole. Sandboxing isolates suspect data, processes, applications or devices from the rest of the system, restricting their access to further system resources and thereby preserving the continuity and availability of the whole system. This research therefore proposed a sandboxing-based model for comprehensive data security in a smart home. The model provides an isolating environment that contains malicious traffic by evaluating authorization levels and restricting communicating nodes to the actions they are permitted to perform, giving a proactive approach to data security in smart home IoT networks. Linux Security Module implementations were used to provide a custom sandbox at the kernel level. Instant Contiki, a virtual version of the IoT operating system Contiki, was used to emulate IoT communication, with Cooja as the emulation module.
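    As a rough illustration of the containment idea described above, the sketch below models nodes with authorization levels and quarantines any traffic that exceeds the sender's permissions. The node names, actions and policy levels are purely illustrative assumptions; the thesis itself realizes the sandbox at the kernel level with Linux Security Modules and evaluates it under Contiki/Cooja, not in Python.

```python
# Illustrative sandbox gateway: traffic whose sender lacks the required
# authorization level is quarantined rather than forwarded. All names and
# levels below are assumptions for illustration only.
from dataclasses import dataclass

# Actions ordered by the authorization level required to perform them.
REQUIRED_LEVEL = {"read_sensor": 1, "report_status": 1, "actuate": 2, "reconfigure": 3}


@dataclass
class Node:
    name: str
    auth_level: int                    # granted by the smart-home controller
    quarantined: bool = False


@dataclass
class Message:
    src: str
    dst: str
    action: str


class SandboxGateway:
    """Chokepoint that every smart-home message passes through."""

    def __init__(self, nodes: dict):
        self.nodes = nodes
        self.quarantine_log = []

    def handle(self, msg: Message) -> bool:
        src = self.nodes.get(msg.src)
        required = REQUIRED_LEVEL.get(msg.action)
        # Unknown senders, unknown actions, already-quarantined nodes, or
        # insufficient authorization all route the message into the sandbox.
        if src is None or required is None or src.quarantined or src.auth_level < required:
            self.quarantine_log.append(msg)
            if src is not None:
                src.quarantined = True
            return False
        return True                    # permitted: forward to msg.dst


# Example: a light sensor trying to reconfigure the door lock gets contained.
nodes = {"light_sensor": Node("light_sensor", 1), "door_lock": Node("door_lock", 2)}
gateway = SandboxGateway(nodes)
print(gateway.handle(Message("light_sensor", "door_lock", "report_status")))  # True
print(gateway.handle(Message("light_sensor", "door_lock", "reconfigure")))    # False
```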

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Get PDF
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.

    Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench
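    As a rough illustration of how a JSON-formatted benchmark task of this kind can be scored, the sketch below computes exact-match accuracy over a list of input/target pairs. The task.json layout assumed here is a simplification rather than the actual BIG-bench schema, and `model` stands in for whatever text-generation function is being evaluated.

```python
# Exact-match scoring over a simplified task.json; the "examples" list of
# input/target pairs is an assumed layout, not the official BIG-bench schema.
import json
from typing import Callable


def exact_match_accuracy(task_path: str, model: Callable[[str], str]) -> float:
    """Fraction of task examples whose generated answer matches the target."""
    with open(task_path) as f:
        task = json.load(f)

    examples = task["examples"]
    correct = 0
    for ex in examples:
        prediction = model(ex["input"]).strip().lower()
        target = ex["target"].strip().lower()
        correct += int(prediction == target)
    return correct / len(examples)


# Usage with a trivial placeholder "model" that always answers "yes":
if __name__ == "__main__":
    score = exact_match_accuracy("task.json", lambda prompt: "yes")
    print(f"exact-match accuracy: {score:.3f}")
```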