
    Analysis of Uplink Scheduling for Haptic Communications

    While new mechanisms and configurations of the 5G radio offer a step forward in the delivery of ultra-reliable low-latency communication services in general, and haptic communications in particular, they can adversely impact the remaining traffic services. In this paper, we investigate the uplink access procedure, how different advances in this procedure enhance the delivery of haptic communication, and how they affect the remaining traffic services in the network. We model this impact as the remainder of service, using stochastic network calculus. Our results show how the trade-off between faster and more resource-efficient uplink access can best be made depending on the rate of haptic data, which is directly relevant to the application domain of haptic communication. Comment: 8 pages, 14 figures, conference paper

    Maximum likelihood estimation of local stellar kinematics

    Context. Kinematical data such as the mean velocities and velocity dispersions of stellar samples are useful tools to study galactic structure and evolution. However, observational data are often incomplete (e.g., lacking the radial component of the motion) and may have significant observational errors. For example, the majority of faint stars observed with Gaia will not have their radial velocities measured. Aims. Our aim is to formulate and test a new maximum likelihood approach to estimating the kinematical parameters for a local stellar sample when only the transverse velocities are known (from parallaxes and proper motions). Methods. Numerical simulations using synthetically generated data as well as real data (based on the Geneva-Copenhagen survey) are used to investigate the statistical properties (bias, precision) of the method, and to compare its performance with the much simpler "projection method" described by Dehnen & Binney (1998). Results. The maximum likelihood method gives more correct estimates of the dispersion when observational errors are important, and guarantees a positive-definite dispersion matrix, which is not always obtained with the projection method. Possible extensions and improvements of the method are discussed. Comment: 7 pages, 2 figures. Accepted for publication in Astronomy & Astrophysics
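    The key point of the abstract — that a maximum likelihood fit which models known measurement errors recovers the dispersion correctly, while a naive sample dispersion is biased upward — can be illustrated with a simplified one-dimensional analogue. The paper itself works with the full three-dimensional velocity distribution and transverse projections; all names and numbers below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Hypothetical 1-D analogue: true velocities drawn from N(mu, sigma^2),
# observed with known per-star measurement errors eps_i.
n = 5000
mu_true, sigma_true = 10.0, 20.0            # km/s, assumed for the demo
errs = rng.uniform(5.0, 15.0, n)            # known observational errors
v_obs = rng.normal(mu_true, sigma_true, n) + rng.normal(0.0, errs)

def neg_log_like(params):
    # Each observation is N(mu, sigma^2 + eps_i^2); log-sigma
    # parameterisation keeps the fitted dispersion positive.
    mu, log_sigma = params
    var = np.exp(2.0 * log_sigma) + errs ** 2
    return 0.5 * np.sum(np.log(var) + (v_obs - mu) ** 2 / var)

res = minimize(neg_log_like, x0=[0.0, np.log(v_obs.std())])
mu_ml, sigma_ml = res.x[0], np.exp(res.x[1])

naive_sigma = v_obs.std()   # ignores errors, so biased upward
```

    The naive dispersion absorbs the measurement noise (here inflating it from 20 to roughly 22–23 km/s), whereas the ML estimate deconvolves it, mirroring the comparison with the projection method described in the abstract.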

    Reliable and Low-Latency Fronthaul for Tactile Internet Applications

    With the emergence of Cloud-RAN as one of the dominant architectural solutions for next-generation mobile networks, the reliability and latency on the fronthaul (FH) segment become critical performance metrics for applications such as the Tactile Internet. Ensuring FH performance is further complicated by the switch from point-to-point dedicated FH links to packet-based multi-hop FH networks. This change is largely justified by the fact that packet-based fronthauling allows the deployment of FH networks on existing Ethernet infrastructure. This paper proposes to improve the reliability and latency of packet-based fronthauling by means of multi-path diversity and erasure coding of the MAC frames transported by the FH network. Under a probabilistic model that assumes a single service, the average latency required to obtain reliable FH transport and the reliability-latency trade-off are first investigated. The analytical results are then validated and complemented by a numerical study that accounts for the coexistence of enhanced Mobile BroadBand (eMBB) and Ultra-Reliable Low-Latency (URLLC) services in 5G networks by comparing orthogonal and non-orthogonal sharing of FH resources. Comment: 11 pages, 13 figures, 3 bio photos
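    The erasure-coding idea can be sketched in its simplest form: protect k MAC frames with one XOR parity frame and spread the k+1 coded frames across independent FH paths, so any single loss is repaired without retransmission. The paper's coding scheme and parameters are more general; this is an illustrative toy, not the authors' implementation.

```python
def xor_frames(frames):
    """Byte-wise XOR of equal-length frames."""
    out = bytearray(len(frames[0]))
    for f in frames:
        for i, b in enumerate(f):
            out[i] ^= b
    return bytes(out)

def encode(frames):
    """Append a single XOR parity frame to k data frames."""
    return frames + [xor_frames(frames)]

def decode(received):
    """Repair one erased frame; `received` holds exactly one None."""
    missing = received.index(None)
    survivors = [f for f in received if f is not None]
    repaired = xor_frames(survivors)   # XOR of survivors = erased frame
    out = list(received)
    out[missing] = repaired
    return out[:-1]                    # drop parity, return the k data frames

# Example: lose one coded frame in transit and recover it locally.
frames = [bytes([i] * 8) for i in range(4)]
coded = encode(frames)
lost = list(coded)
lost[2] = None                         # erasure on one path
recovered = decode(lost)
```

    Because the repair is local to the receiver, it avoids a retransmission round trip on the FH segment, which is the mechanism behind the reliability-latency trade-off the abstract investigates.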

    Latency Bounds of Packet-Based Fronthaul for Cloud-RAN with Functionality Split

    The emerging Cloud-RAN architecture within the fifth generation (5G) of wireless networks plays a vital role in enabling higher flexibility and granularity. On the other hand, the Cloud-RAN architecture introduces an additional link between the central, cloudified unit and the distributed radio unit, namely the fronthaul (FH). Therefore, the reliability and latency foreseen for 5G services must also be provisioned over the FH link. In this paper, focusing on Ethernet as the FH, we present reliable packet-based FH communication and demonstrate the upper and lower bounds of latency that can be offered. These bounds yield insights into the trade-off between reliability and latency, and enable architecture design through the choice of splitting point, focusing on a high-layer split between PDCP and RLC and a low-layer split between MAC and PHY, under different FH bandwidths and traffic properties. The presented model is then analyzed both numerically and through simulation, with two classes of 5G services: ultra-reliable low-latency (URLL) and enhanced mobile broadband (eMBB). Comment: 6 pages, 7 figures, 3 tables, conference paper (ICC 2019)

    A Comparison of Bridge Deterioration Models

    Predicting how bridges will deteriorate is key to budgeting financial and personnel resources. Deterioration models exist for specific components of a bridge, but no models exist for the sufficiency rating, an overall measure of the condition and relevance of a bridge used to determine eligibility for federal funds. We have 25 years' worth of data collected by the Georgia Department of Transportation from 1992 to 2016 on all bridges in the State of Georgia. More precisely, each row in this data set includes the characteristics of a bridge along with its sufficiency rating, as assessed by inspecting engineers, in a specific year. In this thesis we introduce two models (a Mixed Logistic-Gamma model and a Poisson model) that predict the change in the sufficiency rating of each bridge and tell us how much a specific bridge deteriorates over a specific period of time. We then compare these two models.
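    One way to read the Poisson model described above is to treat the yearly drop in a bridge's sufficiency rating as a Poisson count with rate λ, whose maximum likelihood estimate is simply the mean observed drop per inspection interval. The thesis's actual models condition on bridge characteristics; the sketch below, with simulated data and an assumed rate, only illustrates the basic Poisson idea.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 1.5                           # assumed rating points lost per year
drops = rng.poisson(true_rate, size=200)  # simulated yearly rating drops

lam_hat = drops.mean()                    # Poisson MLE for the rate

def predict_deterioration(lam, years):
    """Expected total sufficiency-rating loss over a planning horizon."""
    return lam * years
```

    A fitted rate of this kind lets an agency project, for example, the expected rating loss over a ten-year budget cycle; per-bridge covariates would enter through a Poisson regression on the rate.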

    Effect of Plant-Based Filtration and Bio-Treatment on Toxicity of Bio-Oil Process Water

    This study evaluated physical and biological treatments of bio-oil process water to decrease organic contaminants. A three-sequential-column filtration system compared four treatments: three columns filled with kenaf only; three columns filled with wood shavings only; the first column filled with wood shavings and two with kenaf; and the first column filled with kenaf and two with wood shavings. The kenaf and wood shavings were composted after filtration. The filtrate water underwent further bio-treatment through aeration and the addition of selected bacteria. After filtration and bio-treatment, oil and grease concentrations were reduced by over 80 percent and toxicity by over 90 percent. There were no significant differences among the filtration treatments. Most of the oil and grease was removed by the first column. Aeration significantly decreased the concentration of oil and grease and the toxicity of the filtrate water. Composting of the biofiltration matrices significantly reduced the oil and grease concentrations, by 80 percent at day 45.

    A comparison of two approaches for estimating local stellar kinematics: The Projection Method versus Maximum Likelihood

    We test a new approach to calculating the mean velocity and velocity dispersion of a sample of nearby stars when their radial velocities are not available. The most commonly used method (here called the projection method, PM) was introduced in a paper by Dehnen & Binney (1998). That method is here compared, theoretically and numerically, with an application of the Maximum Likelihood (ML) method. The two methods are tested on synthetically generated samples as well as on real samples from the Hipparcos Catalogue. In general, ML turns out not to be significantly more accurate than PM, except that ML makes it possible to take observational errors into account and therefore gives more correct dispersions when the uncertainty in the proper motions is significant. Applying PM and ML to samples from the Geneva-Copenhagen survey (Nordström et al. 2004), we find that both methods give results very similar to those obtained when the published three-dimensional velocities are used.

    Towards a Simplified LCA Tool: Minimising the Life Cycle Environmental Impact at the Early Stages of Building Design

    Minimising the direct and indirect environmental impact of buildings over their life cycle has become a growing concern worldwide. Life Cycle Assessment (LCA) has been effective in addressing this goal. However, it is constrained by several factors. First, little detailed information for calculating an LCA is available at the early stages of building design, when the most important decisions are made. Second, the LCA method is too complex to apply at this stage, both in terms of the knowledge required to operate the conventional calculation software and in terms of the inputs. Streamlining the application of LCA at the early design phase has been hampered by the lack of reliable, available, comparable and consistent information on the life cycle environmental performance of buildings. Most previous LCA studies are based on a quantification method that requires the use of complex tools and an enormous amount of data, and that is best employed when the building is complete. The time and expense involved in this type of analysis make it unfeasible for small or individual projects. This thesis is a first step toward the development of a tool that allows designers to employ LCA in the early design stages of a building. It aims to allow designers to more easily apply the logic of LCA by minimising data requirements and identifying the most effective parameters, those that promise to make the most difference. A review of the relevant literature has further identified the external criteria most closely associated with the effectiveness of LCA, namely: climate, location, building type and building lifetime. These parameters have been used to define a system boundary for generating results that reflect the characteristics of a specific building. Common rules have been extracted from the results of previous LCA studies of building envelopes and classified in relation to the most effective external criteria. The process shows that the results of quantitative LCA provide a ready-to-use database.

    Internet Usage among Female Undergraduates at Ferdowsi University, Iran

    The public use of the Internet has become noticeably more important in people's daily lives. With its wide range of possibilities, the Internet has attracted considerable attention from users, especially students, a significant portion of whom are female. They engage with the Internet to access information, pursue leisure activities, share information, communicate with others, and so forth. This study applied the Uses and Gratifications theory framework to understand Internet usage among female undergraduate students. The objectives of the study are to identify the relationships of the pattern of Internet usage, attitude towards the Internet, English language knowledge, field of study, Internet skills, problems, and purpose of using the Internet with the gratifications of Internet usage. The study used a survey design with non-probability sampling. The purposive sampling method was chosen because subjects were selected on the basis of specific demographic characteristics such as gender, age, education level, not working, and Internet use. A total of 319 respondents participated in the study, of whom 62 were from the field of English language and 257 from other fields of the humanities. Five categories of gratification for using the Internet were identified: Escape, Affective, Cognitive, Social Integration, and Personal Integration. Most of the female undergraduate students used the Internet to search for and acquire knowledge; finding relevant information for research was their most important purpose, and they mostly searched the Persian version of Google.com. The most common problem in using the Internet was that pages take too long to load.
    The most common gratification of using the Internet was related to gathering information and learning new things. The study found no significant relationship between the number of years of Internet use and the gratifications of Internet usage. However, the relationships between attitude, purpose, and frequency of Internet usage and the gratifications of Internet usage are significant and positive.