14 research outputs found

    Everything counts in small amounts

    This paper describes an encoding tool which utilises the "data is code" principle of symbolic expressions, available in Lisp-like languages, to allow the scripting of tightly packed, cross-platform network protocols. This dynamic approach is particularly flexible when working on embedded systems, as it reduces the number of cross-compilation and deployment cycles incurred by more traditional development approaches. In addition, separating how the data is encoded from the compiled application makes the network protocol extensible without requiring special handling
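
    The paper's tool is Lisp-based and its exact interface is not shown here; as a rough, hypothetical sketch of the underlying "data is code" idea, the snippet below treats the wire layout as a plain data structure that is interpreted at runtime, so the packed protocol can be changed without a cross-compile-and-deploy cycle:

        import struct

        # A protocol layout expressed as data ("data is code"): each field is a
        # (name, format) pair that can be edited or shipped at runtime.
        TELEMETRY_LAYOUT = [
            ("version", "B"),   # unsigned 8-bit
            ("node_id", "H"),   # unsigned 16-bit
            ("reading", "f"),   # 32-bit float
        ]

        def encode(layout, message):
            """Pack a message dict into tightly packed network-order bytes."""
            fmt = ">" + "".join(f for _, f in layout)   # big-endian, no padding
            return struct.pack(fmt, *(message[name] for name, _ in layout))

        def decode(layout, payload):
            fmt = ">" + "".join(f for _, f in layout)
            return dict(zip((name for name, _ in layout), struct.unpack(fmt, payload)))

        # Adding a field to the layout changes the wire format without recompiling.
        packet = encode(TELEMETRY_LAYOUT, {"version": 1, "node_id": 42, "reading": 21.5})
        assert decode(TELEMETRY_LAYOUT, packet)["node_id"] == 42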

    A tone driven offline information kiosk

    In this paper we introduce the concept of a low-cost, offline information kiosk that is controlled through a sound-based interface. More specifically, we describe how we use a mobile phone to control a kiosk by transmitting DTMF phone tones. Our main use case is deployment within developing countries, where we intend to examine issues related to cross-cultural interface design
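
    The DTMF keypad frequencies are standardized (each key mixes one low-group and one high-group tone), so the phone-to-kiosk channel can be sketched as follows; the sampling parameters and the kiosk-side detector mentioned in the comment are illustrative assumptions, not the paper's implementation:

        import numpy as np

        # Standard DTMF keypad: each key is the sum of one low and one high tone (Hz).
        DTMF = {
            "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
            "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
            "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
            "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
        }

        def dtmf_samples(key, duration=0.2, rate=8000):
            """Audio samples for one key press; duration and rate are illustrative."""
            low, high = DTMF[key]
            t = np.arange(int(duration * rate)) / rate
            return 0.5 * np.sin(2 * np.pi * low * t) + 0.5 * np.sin(2 * np.pi * high * t)

        # The kiosk would detect the two dominant frequencies in the microphone
        # signal (e.g. with Goertzel filters) and map them back to the key pressed.
        samples = dtmf_samples("5")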

    On the performance of markup language compression

    Data compression is used in our everyday life to improve computer interaction or simply for storage purposes. Lossless data compression refers to techniques that compress a file in such a way that the decompressed output is an exact replica of the original. These techniques, unlike lossy data compression, are necessary and heavily used to reduce resource usage and improve storage and transmission speeds. Prior research has led to huge improvements in compression performance and efficiency for general-purpose tools, which are mainly based on statistical and dictionary encoding techniques. Extensible Markup Language (XML) is highly redundant, and general-purpose compressors parse it as plain text. Several tools for compressing XML data have been developed, improving compression size and speed using different compression techniques; these tools are mostly based on algorithms that rely on variable-length encoding. XML Schema is a language used to define the structure and data types of an XML document. As a result, it provides XML compression tools with additional information that can be used to improve compression efficiency; XML Schema is also used for validating XML data. For document compression, the schema must be generated dynamically for each XML file, and this solution can be applied to improve the efficiency of XML compressors. This research investigates a dynamic approach to compressing XML data using a hybrid compression tool. The model compresses XML data using variable- and fixed-length encoding techniques, switching to whichever is best suited. The aim of this research is to investigate the use of fixed-length encoding techniques to support general-purpose XML compressors. The results demonstrate the possibility of improving compression size when a fixed-length encoder is used to compress most XML data types
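
    As a minimal illustration of why schema knowledge helps, the hypothetical snippet below encodes a schema-typed integer with a fixed-length encoder instead of as XML text; the element name and sizes are assumptions for the example:

        import struct

        # Text form: "<temp>23</temp>" spends 15 bytes on a small integer.
        # If the schema declares the element as a 16-bit integer, a fixed-length
        # encoder can emit exactly two bytes for the same value.
        text_form = b"<temp>23</temp>"
        fixed_form = struct.pack(">h", 23)        # 2 bytes, big-endian
        print(len(text_form), len(fixed_form))    # 15 2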

    On the performance of emerging wireless mesh networks

    Wireless networks are increasingly used within pervasive computing. The recent development of low-cost sensors, coupled with the decline in prices of embedded hardware and improvements in low-power, low-rate wireless networks, has made them ubiquitous. The sensors are becoming smaller and smarter, enabling them to be embedded inside tiny hardware. They are already being used in areas such as health care, industrial automation and environmental monitoring, so the data to be communicated can include room temperature, heartbeat, a user's activities or seismic events. Such networks have been deployed over a wide range of areas and at various scales, from a couple of sensors inside the human body to hundreds of sensors monitoring the environment. The sensors can generate a huge amount of information when data is sensed regularly, and this information has to be communicated to a central node in the sensor network or to the Internet. A sensor may be connected directly to the central node, but it may also be connected via other sensor nodes acting as intermediate routers/forwarders. The bandwidth of a typical wireless sensor network is already small, and the use of forwarders to pass data to the central node decreases the network capacity even further; wireless networks also suffer from high packet loss ratios on top of the low bandwidth. The data transfer time from the sensor nodes to the central node increases with network size, so it becomes challenging to communicate the sensed data regularly, especially as the network grows, and it is therefore very difficult to build a scalable sensor network that can regularly communicate sensor data. The problem can be tackled either by improving the available network bandwidth or by reducing the amount of data communicated in the network. Improving the network bandwidth is not possible, as power limitations on the devices restrict the use of faster network standards, and reducing the quality of the sensed data, losing information before communication, is not acceptable. However, the data can be reduced without losing any information using compression techniques, and the improving processing power of embedded devices makes this feasible.

    In this research, the challenges and impacts of data compression on embedded devices are studied with the aim of improving the network performance and scalability of sensor networks. To evaluate this, messaging protocols suitable for embedded devices are first studied and a messaging model for communicating sensor data is determined. Then data compression techniques that can be implemented on devices with limited resources and are suitable for typical sensor data are studied. Although compression can reduce the amount of data to be communicated over a wireless network, the time and energy costs of the process must be considered to justify the benefits. In other words, the combined compression and data transfer time must be smaller than the uncompressed data transfer time, and the compression and data transfer process must consume less energy than the uncompressed transfer; network communication is known to be more expensive than on-device computation in terms of energy consumption. A data sharing system is created to study the time and energy consumption trade-off of compression techniques, and a mathematical model is used to study the impact of compression on the overall network performance of sensor networks of various scales
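
    The break-even condition stated above (combined compression and transfer time smaller than the uncompressed transfer time) can be written as a small check; the link speed, compression ratio and CPU times below are illustrative numbers, not measurements from the thesis:

        def compression_pays_off(raw_bytes, ratio, bandwidth_bps,
                                 compress_s, decompress_s=0.0):
            """True if compress-then-send beats sending the raw data.

            ratio is compressed size / raw size (0.4 = 60% reduction).
            """
            t_raw = raw_bytes * 8 / bandwidth_bps
            t_comp = compress_s + raw_bytes * ratio * 8 / bandwidth_bps + decompress_s
            return t_comp < t_raw

        # A 10 kB reading over a 250 kbit/s 802.15.4 link, compressed to 40%
        # of its size in 50 ms of CPU time: 0.178 s vs 0.32 s, so it pays off.
        print(compression_pays_off(10_000, 0.4, 250_000, 0.05))   # True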

    A context- and template-based data compression approach to improve resource-constrained IoT systems interoperability.

    The goal of the Internet of Things (IoT) is to interconnect all kinds of things, from simple devices, such as a light bulb or a thermostat, to more complex and abstract elements such as a machine or a house. These devices and elements vary enormously, especially in the capabilities they possess and the types of technology they use. This heterogeneity makes integration processes highly complex where interoperability is concerned. A common approach to addressing interoperability at the data-representation level in IoT systems is to structure the data according to a standard data model and text-based data formats (e.g., XML). However, the devices typically used in IoT systems have limited capabilities and scarce processing and communication resources. Owing to these limitations, text-based data formats cannot be integrated simply and efficiently on resource-constrained devices and networks. In this thesis, we present a novel data compression solution for text-based data formats that is designed specifically around the limitations of resource-constrained devices and networks. We call this solution Context- and Template-based Compression (CTC). CTC improves the data-level interoperability of IoT systems while requiring very few resources in terms of communication bandwidth, memory size and processing power
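
    The thesis's CTC scheme is not specified in this abstract; the following hypothetical sketch only illustrates the general template idea, where sender and receiver share the static message skeleton out-of-band and only the variable values cross the constrained network:

        import struct

        # Hypothetical template shared out-of-band: the static skeleton never
        # crosses the network, only the variable values do.
        TEMPLATE = "<obs><id>{}</id><temp>{}</temp></obs>"

        def ctc_compress(sensor_id, temp):
            return struct.pack(">Hf", sensor_id, temp)   # 6 bytes on the wire

        def ctc_decompress(payload):
            sensor_id, temp = struct.unpack(">Hf", payload)
            return TEMPLATE.format(sensor_id, temp)      # full document restored

        print(ctc_decompress(ctc_compress(7, 21.5)))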

    SDF-Pack: Towards Compact Bin Packing with Signed-Distance-Field Minimization

    Robotic bin packing is very challenging, especially when considering practical needs such as object variety and packing compactness. This paper presents SDF-Pack, a new approach based on the signed distance field (SDF) to model the geometric condition of objects in a container and to compute object placement locations and packing orders that achieve a more compact bin packing. Our method adopts a truncated SDF representation to localize the computation and, based on it, formulates an SDF-minimization heuristic to find optimized placements that pack objects compactly with the existing ones. To further improve space utilization, if the packing sequence is controllable, our method can suggest which object to pack next. Experimental results on a large variety of everyday objects show that our method consistently achieves higher packing compactness over 1,000 packing cases, enabling us to pack more objects into the container compared with existing heuristics under various packing settings
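
    A minimal 2D sketch of the core heuristic, assuming a voxelized container: candidate placements are scored by the truncated distance-to-occupied field under the object's footprint, and the lowest score (snuggest fit) wins. The grid size, truncation value and scoring details are assumptions, not the paper's 3D formulation:

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def truncated_sdf(occupied, truncation=5.0):
            """Per-cell distance (in voxels) to the nearest occupied cell,
            truncated to localize the computation."""
            return np.minimum(distance_transform_edt(~occupied), truncation)

        def placement_score(occupied, obj_mask, x, y):
            """Lower summed SDF under the object's footprint = snugger fit."""
            h, w = obj_mask.shape
            region_occ = occupied[x:x + h, y:y + w]
            if region_occ.shape != obj_mask.shape or region_occ[obj_mask].any():
                return np.inf                        # out of bounds or colliding
            return truncated_sdf(occupied)[x:x + h, y:y + w][obj_mask].sum()

        # Toy 2D container: the best candidate hugs the already-packed region.
        container = np.zeros((10, 10), dtype=bool)
        container[:, :3] = True                      # previously packed cells
        box = np.ones((2, 2), dtype=bool)
        scores = {(x, y): placement_score(container, box, x, y)
                  for x in range(9) for y in range(9)}
        print(min(scores, key=scores.get))           # e.g. (0, 3)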

    Exploring mobile learning opportunities and challenges in Nepal: the potential of open-source platforms

    With the increasing access to mobile devices in developing countries, the number of pilots and projects embracing mobile devices as learning tools is also growing. The important role mobile technology can play in improving education is positively received within education communities, but providing a successful mobile learning service is still significantly challenging. Considerable problems arise from existing pedagogical, technological, political, social and cultural challenges, and there has been a shortage of research on how to deploy and sustain this technology in resource-constrained educational environments. Studies conducted mainly in sub-Saharan countries, India and Latin America provide some guidelines for incorporating technology into the existing educational process. However, given the contextual differences between those regions and other countries in Asia, such as Nepal, a broader study in Nepal's own challenging socio-cultural context is required. In response, the aims of this exploratory research are to study the distinct challenges of school education in Nepal and to evaluate the use of open-source devices to provide offline access to learning materials, in order to recommend a sustainable mobile learning model. A developmental study was conducted at the University of West London to assess the feasibility of these devices. The main study in Nepal explored i) the overall challenges to education in the difficult learning environment of schools with limited or no access to ICT, ii) how ICT might be helping teaching and learning in rural public schools, and iii) how an offline mobile learning solution based on open-source platforms may facilitate English language teaching and learning. Data collection primarily involved interviews, questionnaires and observations, supplemented by other methods. This thesis presents a sustainable model for deploying and supporting mobile technology for education, based on the findings emerging from the completed exploratory studies in Nepal. It highlights the aspects that need to be addressed to ensure sustainability; however, translating this understanding into a design is a complex challenge. For a mobile learning solution to be used in such challenging learning contexts, the need is to develop simple and innovative solutions that provide access to relevant digital learning resources and to train teachers to embed technology in education. The thesis discusses these findings and limitations, and presents implications for the design of future mobile learning in the context of Nepal

    Learning Multi-step Robotic Manipulation Tasks through Visual Planning

    Multi-step manipulation tasks in unstructured environments are extremely challenging for a robot to learn. Such tasks interlace high-level reasoning, which identifies the expected states to attain in achieving an overall task, with low-level reasoning, which decides what actions will yield those states. A model-free deep reinforcement learning method is proposed to learn multi-step manipulation tasks. This work introduces a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel image input at real-time speeds (20 ms). The proposed model architecture achieves state-of-the-art accuracy on three standard grasping datasets. The adaptability of the approach is demonstrated by directly transferring the trained model to a 7-DoF robotic manipulator, with grasp success rates of 95.4% and 93.0% on novel household and adversarial objects, respectively. A novel vision-based model architecture, the Robotic Manipulation Network (RoManNet), is introduced to learn action-value functions and predict manipulation action candidates. A Task Progress based Gaussian (TPG) reward function computes the reward based on actions that lead to successful motion primitives and on progress towards the overall task goal. To balance exploration and exploitation, this research introduces a Loss Adjusted Exploration (LAE) policy that selects actions from the candidates according to the Boltzmann distribution of loss estimates. The effectiveness of the approach is demonstrated by training RoManNet to learn several challenging multi-step robotic manipulation tasks in both simulation and the real world. Experimental results show that the proposed method outperforms existing methods and achieves state-of-the-art performance in terms of success rate and action efficiency. Ablation studies show that TPG and LAE are especially beneficial for tasks like multiple block stacking
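
    A small sketch of the Boltzmann selection step that LAE is described as using, with lower loss estimates mapped to higher sampling probability; the temperature value and loss figures here are placeholders, not the paper's schedule:

        import numpy as np

        def boltzmann_select(loss_estimates, temperature=1.0, rng=None):
            """Sample an action index; lower estimated loss -> higher probability."""
            rng = rng if rng is not None else np.random.default_rng()
            logits = -np.asarray(loss_estimates, dtype=float) / temperature
            logits -= logits.max()                   # numerical stability
            probs = np.exp(logits) / np.exp(logits).sum()
            return rng.choice(len(probs), p=probs)

        # Three candidate actions with placeholder loss estimates.
        action = boltzmann_select([0.9, 0.2, 0.5], temperature=0.3)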

    LEGO: linear embedding via Green's operators

    Reduction of lead time has long been an important target in product development. Owing to the advance of computer power, product optimization has moved from the production stage to the preceding design stage. In particular, the full electromagnetic behavior of the final product can now be predicted through numerical methods. However, for the tuning of device parameters in the optimization stage, commercial software packages often rely on brute-force parameter sweeps, and for each set of parameter values a full recomputation of the entire configuration is usually required. In the case of stringent product specifications or large complex structures, the computational burden may become severe. Recently, "marching on in anything" has been introduced to accelerate parameter sweeps. Nevertheless, it remains necessary to further reduce the computational costs of electromagnetic device design; this is the main goal of this thesis.

    As an alternative to existing electromagnetic modeling methods, we propose a modular modeling technique called linear embedding via Green's operators (LEGO). It is a so-called diakoptic method based on the Huygens principle, involving equivalent boundary current sources by which simply connected scattering domains of arbitrary shape may be fully characterized. Mathematically this may be achieved using either Love's or Schelkunoff's equivalence principle (LEP or SEP, respectively). LEGO may be considered the electromagnetic generalization of decomposing an electric circuit into a system of multi-port subsystems. We have captured the pertaining equivalent current distributions in a lucid Green's operator formalism. For instance, our scattering operator expresses the equivalent sources that would produce the scattered field exterior to a scattering domain in terms of the equivalent sources that would produce the incident field inside that domain. The enclosed scattering objects may be of arbitrary shape and composition. The scattering domains together with their scattering operators constitute the LEGO building blocks, and we have employed various alternative electromagnetic solution methods to construct the scattering operators. In its most elementary form, LEGO is a generalization of an embedding procedure introduced in inverse scattering to describe multiple scattering between adjacent blocks, by considering one of the blocks as the environment of the other and vice versa. To establish an interaction between current distributions on disjoint domain boundaries, we define a source transfer operator. Through such transfer operators we obtain a closed loop that connects the scattering operators of both domains and describes the total field, including the multiple scattering. Subsequently, a combined scattering block is composed by merging the separate scattering operators via transfer operators and removing common boundaries. We have validated the LEGO approach for both 2D and 3D configurations. In the field of electromagnetic bandgap (EBG) structures, we have demonstrated that a cascade of embedding steps can be employed to form electromagnetically large, complex composite blocks. LEGO is a modular method, in that previously combined blocks may be stored in a database for possible reuse in subsequent LEGO building steps. Besides scattering operators that account for the exterior scattered field, we also use interior field operators by which the field may be reproduced within (sub)domains that have been combined at an earlier stage. Only the subdomains of interest are stored and updated to account for the presence of additional domains added in subsequent steps.

    We have also shown how the scattering operator can be utilized to compute the band diagram of EBG structures. Two alternative methods have been proposed to solve the pertaining eigenvalue problem, and we have validated the results via a comparison with results from a plane-wave method for 2D EBG structures. In addition, we have demonstrated that our method also applies to unit cells containing scattering objects that are perfectly conducting or extend across the boundary of the unit cell. The optimization stage of a design process often involves tuning local medium properties. In LEGO we accommodate this through a transfer of the equivalent sources on the boundary of a large scattering operator to the boundary of a relatively small designated domain in which local structure variations are to be tested. As a result, subsequent LEGO steps can be carried out with great efficiency. As demonstrators, we have locally tuned the transmission properties at the Y-junction of both a power splitter and a mode splitter in EBG waveguide technology. In these design examples the computational advantages of the LEGO approach become clearly manifest, as computation times reduce from hours to minutes. This efficient optimization stage of the LEGO method may also be integrated with existing software packages as an additional design tool. In addition to the acceleration of the computations, the reusability of the composite building blocks constitutes an important advantage.

    The Green's operators are expressed in terms of equivalent boundary currents and have been obtained using integral equations. In the numerical implementation of the LEGO method we have discretized the operators via the method of moments with a flat-faceted mesh, using local test and expansion functions for the fields and currents, respectively. In the 2D case we have investigated the influence of using piecewise constant and piecewise linear functions; for the 3D implementation, we have applied the Rao-Wilton-Glisson (RWG) functions in combination with rotated RWG functions. After discretization, operators and operator compositions become matrices and matrix multiplications, respectively. Since the matrix multiplications in a LEGO step dominate the computational costs, we aim at a maximum accuracy of the field for a minimum mesh density. For LEGO with SEP, we have determined the unknown currents through inverse field propagators, whereas with LEP the currents are directly obtained from the tangential field components via inverse Gram matrices. After a careful assessment of the computational costs of the LEGO method, it turns out that, owing to the removal of common boundaries and the reusability of scattering domains, the most efficient application of LEGO involves a closely packed configuration of identical blocks. In terms of the number of array elements N, the complexity of a sequence of LEGO steps for 2D and 3D applications increases as O(N^1.5) and O(N^2), respectively. We have discussed possible improvements that can be expected from "marching on in anything" or multi-level fast-multipole algorithms. From an evaluation of the resulting scattered field, it turns out that LEGO with SEP is more accurate than with LEP. However, the spurious interior resonance effect common to SEP in the construction of composite building blocks cannot simply be avoided through a combined field integral equation; by contrast, LEGO based on LEP is robust. Further, we have demonstrated that additional errors due to the choice of domain shape or building sequence, or the accumulation of errors over long LEGO sequences, are negligible. We have also investigated integral equations for the scattering from 2D and 3D perfectly conducting and dielectric objects. The discretized integral operators apply directly to the LEGO method, and for scattering objects that are not canonical these integral equations are used in the construction of the elementary LEGO blocks. Since we aim at a maximum accuracy of the field for a minimum mesh density, the regular parts of the test and expansion integrals are primarily determined through adaptive quadrature rules, while analytic expressions are used for the singular parts. It turns out that the convergence of the scattered field is a direct measure for the accuracy of the scattered field computed with LEGO based on SEP or LEP. As an alternative to the PMCHW and the Müller integral equations, we have proposed a new integral equation formulation which leads to cubic convergence in the 2D case, irrespective of the mesh density and object shape. For scattering objects with a regular boundary, domain scaling may be used to improve the convergence rate of the scattered field
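
    After discretization, combining two LEGO blocks is matrix algebra; the sketch below only mirrors that structure, solving the closed multiple-scattering loop between two scattering matrices coupled by transfer matrices. The shapes, normalization and random operators are purely illustrative and do not reproduce the thesis's actual discretization:

        import numpy as np

        def combine_blocks(S1, S2, T12, T21, q1_inc, q2_inc):
            """Solve the closed multiple-scattering loop between two blocks:
            q1 = S1 (q1_inc + T21 q2) and q2 = S2 (q2_inc + T12 q1)."""
            A = np.eye(S1.shape[0]) - S1 @ T21 @ S2 @ T12
            q1 = np.linalg.solve(A, S1 @ (q1_inc + T21 @ S2 @ q2_inc))
            q2 = S2 @ (q2_inc + T12 @ q1)
            return q1, q2   # equivalent sources including all multiple scattering

        # Small random matrices just to exercise the algebra.
        rng = np.random.default_rng(0)
        S1, S2 = 0.1 * rng.standard_normal((2, 8, 8))
        T12, T21 = 0.1 * rng.standard_normal((2, 8, 8))
        q1, q2 = combine_blocks(S1, S2, T12, T21,
                                rng.standard_normal(8), rng.standard_normal(8))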