Potential of Computer-Vision Cellular-Phone Based System to Extract Traffic Parameters
This research investigates the potential of computer-vision, cellular-phone-based systems to extract traffic and pedestrian parameters using handheld smartphones with different camera characteristics, i.e. resolution, sensor size and image depth. Various locations with different geometries and functions were selected. Several traffic parameters were extracted: vehicle spot speed and three-state vehicle speed profiles (steady, acceleration and deceleration), the vehicle speed and time headway relation, vehicle classification, roadway level of service, and pedestrian walking and crossing speeds in light and congested traffic. The difference between actual and measured parameters was defined as the error, and the relationship between error and camera characteristics was investigated. Linear regression models were also developed to express the actual parameters as functions of the smartphone measurements and the error as a function of the camera characteristics. Analysis of the extracted parameters showed a high correlation between camera characteristics and the accuracy of the measured parameters; increasing camera resolution and sensor size improved accuracy for all studied parameters. The percentage error fell within consistent ranges: 1.4%-10% for vehicle speeds, 0.5%-9% for pedestrian speeds and 10%-25% for vehicle dimensions. These results demonstrate the high potential accuracy of smartphone-based vision systems in extracting traffic parameters and open the door to integrating smartphones into various transportation engineering and civil engineering applications.
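As a rough illustration of the kind of regression described above, the following sketch fits measurement error against camera resolution and sensor size using ordinary least squares. The column choices and numbers are hypothetical placeholders, not the paper's data.

```python
# Minimal sketch (not the authors' code): a linear model expressing measurement
# error as a function of camera characteristics. All values are illustrative.
import numpy as np

# Hypothetical observations: resolution (MP), sensor area (mm^2) -> speed error (%)
resolution_mp = np.array([2.0, 5.0, 8.0, 12.0, 16.0])
sensor_mm2    = np.array([15.0, 17.0, 24.0, 28.0, 35.0])
error_pct     = np.array([9.5, 7.1, 4.8, 2.9, 1.6])

# Design matrix with an intercept column: error ~ b0 + b1*resolution + b2*sensor
X = np.column_stack([np.ones_like(resolution_mp), resolution_mp, sensor_mm2])
coeffs, *_ = np.linalg.lstsq(X, error_pct, rcond=None)
b0, b1, b2 = coeffs

predicted = X @ coeffs
r2 = 1 - np.sum((error_pct - predicted) ** 2) / np.sum((error_pct - error_pct.mean()) ** 2)
print(f"error% ~ {b0:.2f} + {b1:.2f}*MP + {b2:.2f}*mm^2  (R^2 = {r2:.3f})")
```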
Digital Mapping of Urban Arterial Roads Pavement Conditions
This paper explores integrating Geographic Information Systems (GIS), Computer Vision (CV) and Artificial Intelligence (AI) using cellular phones to evaluate and manage pavement conditions, enabling non-expert users to assess and maintain pavements effectively. The study develops CV-based mapping and a knowledge-based system for distress detection and management, and statistical regression models to predict pavement sustainability, ultimately empowering users to make informed maintenance decisions. Camera resolutions ranging from 0.2 to 16 MP were used to build an intelligent framework for evaluating pavement conditions and managing suitable maintenance works by non-expert users. A CV-based system was developed for pavement surface mapping using cellular phones. A macro-scale mapping of pavement surface conditions in the presence of flexible pavement distresses was produced with cellular phones in normal-based configurations and various camera resolutions on arterial roads in Irbid city, Jordan. GIS layers were built for pavement conditions with various parameters, ratings, distress types, severities and repair options, based on Global Positioning System (GPS) determination of distress locations. The developed GIS system was established by integrating a set of computerized programs as part of the GIS software. New parameters were introduced to the system to expedite pavement distress classification, detection, management and maintenance, taking into account distress types, severities and geometrical measurements. A knowledge-based system (KBS) for pavement maintenance was also developed, taking into consideration distress type, severity and pavement condition. A criterion for image enhancement, based on image processing techniques, was developed for pavement distress detection and management. Surface measurements, pavement conditions and decision-making tasks were supported for all distress types. Statistical regression models for predicting pavement sustainability, serviceability and condition were developed from measured values of the Pavement Condition Index (PCI), Present Serviceability Index (PSI) and Sustainability Index (SI) extracted from various sources; for the collected field data these indices took the values 84.8, 2.988 and 0.6057, respectively. Regression results were statistically significant for all models, with normal probability plots close to a straight line.
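The sketch below illustrates the kind of knowledge-based lookup described above, mapping a detected distress type and severity to a suggested repair. The distress categories, repair rules and coordinates are hypothetical examples, not the paper's KBS.

```python
# Minimal sketch (hypothetical rules, not the paper's KBS): map a detected distress
# type and severity to a suggested repair, as a knowledge-based system might.
from dataclasses import dataclass

@dataclass
class Distress:
    kind: str        # e.g. "alligator_cracking", "pothole", "rutting"
    severity: str    # "low", "medium", "high"
    lat: float       # GPS latitude of the distress location
    lon: float       # GPS longitude of the distress location

# Illustrative decision table; a real KBS would encode agency maintenance standards.
REPAIR_RULES = {
    ("pothole", "low"): "crack seal / shallow patch",
    ("pothole", "high"): "full-depth patch",
    ("alligator_cracking", "medium"): "surface treatment",
    ("alligator_cracking", "high"): "mill and overlay",
    ("rutting", "high"): "overlay after leveling course",
}

def suggest_repair(d: Distress) -> str:
    """Return the repair action for a distress, or flag it for expert review."""
    return REPAIR_RULES.get((d.kind, d.severity), "refer to pavement engineer")

print(suggest_repair(Distress("pothole", "high", 32.556, 35.850)))
```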
Experimental Investigation of Biogas Production from Kitchen Waste Mixed with Chicken Manure
Biogas produced from solid kitchen waste (KW) mixed with chicken manure (M) at different mass ratios was investigated. The effect of the ratio of the amount of water to the mixed solid waste on the amount of biogas produced was studied. The results showed that, at a fixed water-to-solid-waste ratio, the amount of biogas increased as the amount of chicken M increased. At a fixed M-to-KW ratio, the amount of biogas produced increased as the solid content increased and then decreased, reaching its maximum value at a solid waste-to-water ratio of 1:1. The pH of the bioreactor containing the KW-M mixture dropped with time, resulting in a decrease in the amount of biogas produced. Controlling the pH value by titrating with NaOH solution improved the production of biogas. Investigating biogas produced from sludge showed that the pH of the reactor decreased slightly and then increased slightly. The results also showed that the amount of biogas produced from sludge containing 3% solid waste was larger than the amount produced from sludge containing 6% solid waste.
Quality of service optimization in IoT driven intelligent transportation system
High mobility in intelligent transportation systems (ITS), especially in V2V communication networks, allows increased coverage and quick assistance to users and neighboring networks, but it also degrades the performance of the entire system due to fluctuations in the wireless channel. Obtaining better QoS during multimedia transmission in V2V over future-generation networks (i.e., edge computing platforms) is very challenging because of the high mobility of vehicles and the heterogeneity of future IoT-based edge computing networks. In this context, this article makes three distinct contributions: it develops a QoS-aware, green, sustainable, reliable and available (QGSRA) algorithm to support multimedia transmission in V2V over future IoT-driven edge computing networks; it implements a novel QoS optimization strategy for V2V multimedia transmission over IoT-based edge computing platforms; and it proposes QoS metrics, namely greenness (i.e., energy efficiency), sustainability (i.e., lower battery charge consumption), reliability (i.e., lower packet loss ratio) and availability (i.e., greater coverage), to analyze the performance of V2V networks. Finally, the proposed QGSRA algorithm is validated on extensive real-time vehicle datasets, demonstrating that it outperforms conventional techniques and making it a potential candidate for multimedia transmission in V2V over self-adaptive edge computing platforms.
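To make the four proposed metric families concrete, the sketch below computes them from a hypothetical per-vehicle transmission log. This is an illustration of the metric definitions only, not the QGSRA algorithm itself; the log fields and values are assumptions.

```python
# Minimal sketch (illustrative only, not QGSRA): computing the four QoS metrics
# named above from a hypothetical per-vehicle transmission log.
from dataclasses import dataclass

@dataclass
class LinkLog:
    packets_sent: int
    packets_lost: int
    bits_delivered: float     # payload successfully delivered (bits)
    energy_joules: float      # radio energy spent during the session
    battery_drain_pct: float  # battery charge consumed during the session
    covered_seconds: float    # time with usable V2V/edge connectivity
    total_seconds: float      # total observation time

def qos_metrics(log: LinkLog) -> dict:
    return {
        # greenness: useful bits delivered per joule of radio energy
        "energy_efficiency_bpj": log.bits_delivered / log.energy_joules,
        # sustainability: battery charge consumed (lower is better)
        "battery_drain_pct": log.battery_drain_pct,
        # reliability: packet loss ratio (lower is better)
        "packet_loss_ratio": log.packets_lost / log.packets_sent,
        # availability: fraction of time the vehicle had coverage
        "coverage_ratio": log.covered_seconds / log.total_seconds,
    }

print(qos_metrics(LinkLog(10_000, 240, 4.8e8, 36.0, 2.5, 540.0, 600.0)))
```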
Measurement of energy efficiency metrics of data centers. Case study: Higher Education Institution of Barranquilla
Data centers have become fundamental pillars of the network infrastructures of companies and organizations of all sizes, since they support the processing, analysis and assurance of the data generated in the network and by cloud applications, whose volume increases every day thanks to diverse and sophisticated technologies. Managing and storing this large volume of information makes data centers consume a great deal of energy, which is a major concern for their owners and administrators. The Green Data Center (GDC) is a solution to this problem: it reduces the environmental impact of data centers through monitoring and control and through the application of standards based on metrics. Although each data center has its own particularities and requirements, metrics are the tools that allow us to measure its energy efficiency and evaluate whether it is environmentally friendly (1. Adv. Intell. Syst. Comput. 574:329–340). The objective of this study is to calculate these metrics for the data centers of a Higher Education Institution in Barranquilla, on both campuses, and to analyze the results. The study is planned to be extended by reviewing several metrics in order to conclude which is the most informative and which allows defining guidelines to upgrade or convert the data center into an environmentally friendly one. The research methodology used for the development of the project is descriptive and non-experimental.
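For reference, the most widely used energy efficiency metric for data centers is Power Usage Effectiveness (PUE), together with its inverse, DCiE. The sketch below computes both from meter readings; the power values are hypothetical, not the institution's measurements.

```python
# Illustrative calculation of two standard data-center efficiency metrics
# (PUE and DCiE, as defined by The Green Grid); the readings below are hypothetical.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power (ideal -> 1.0)."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: the inverse of PUE, as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

total_kw, it_kw = 95.0, 52.0   # hypothetical campus meter readings
print(f"PUE  = {pue(total_kw, it_kw):.2f}")
print(f"DCiE = {dcie(total_kw, it_kw):.1f}%")
```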
Non-Hermitian Delocalization and Eigenfunctions
Recent literature on delocalization in non-Hermitian systems has stressed criteria based on the sensitivity of eigenvalues to boundary conditions and the existence of a non-zero current. We emphasize here that delocalization also shows up clearly in eigenfunctions, provided one studies the product of left- and right-eigenfunctions, as required on physical grounds, and not simply the squared moduli of the eigenfunctions themselves. We also discuss the right- and left-eigenfunctions of the ground state in the delocalized regime and suggest that the behavior of these functions, when considered separately, may be viewed as "intermediate" between localized and delocalized.
Comment: 8 pages, 11 figures included
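As a rough numerical illustration of the point about left- and right-eigenfunctions, the following sketch diagonalizes a Hatano-Nelson-type non-Hermitian lattice with random on-site disorder and compares the site profile of |psi_R|^2 with the left-right product psi_L* psi_R. The model choice, disorder and parameters are assumptions for illustration, not the paper's specific system.

```python
# Minimal numerical sketch (assumed Hatano-Nelson-type model, illustrative parameters):
# compare |psi_R|^2 with the biorthogonal product psi_L* psi_R discussed above.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
N, t, g = 100, 1.0, 0.5               # sites, hopping amplitude, non-Hermitian asymmetry
disorder = rng.uniform(-1.0, 1.0, N)  # random on-site potential

# Non-Hermitian hopping matrix with periodic boundary conditions.
H = np.diag(disorder).astype(complex)
for i in range(N):
    H[i, (i + 1) % N] += t * np.exp(+g)   # hopping to the right, enhanced
    H[(i + 1) % N, i] += t * np.exp(-g)   # hopping to the left, suppressed

w, vl, vr = eig(H, left=True, right=True)
k = np.argmin(w.real)                     # "ground state": lowest real part of the spectrum
psi_R, psi_L = vr[:, k], vl[:, k]

product = psi_L.conj() * psi_R            # left-right product per site
product /= product.sum()                  # biorthogonal normalization
print("max site weight of |psi_R|^2:     ",
      np.max(np.abs(psi_R) ** 2 / np.sum(np.abs(psi_R) ** 2)))
print("max site weight of |psi_L* psi_R|:", np.max(np.abs(product)))
```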
Vortex Flow and Transverse Flux Screening at the Bose Glass Transition
We investigate the vortex phase diagram in untwinned YBaCuO single crystals with columnar defects. These randomly distributed defects, produced by heavy ion irradiation, are expected to induce a "Bose Glass" phase of localized vortices characterized by a vanishing resistance and a Meissner effect for magnetic fields transverse to the defect axis. We directly observe the transverse Meissner effect using an array of Hall probe magnetometers. As predicted, the Meissner state breaks down at temperatures Ts that decrease linearly with increasing transverse magnetic field. However, Ts falls well below the conventional melting temperature Tm determined by a vanishing resistivity, suggesting an intermediate regime where flux lines are effectively localized even when rotated off the columnar defects.
Comment: 15 pages, 5 figures
Surgical management of low grade isthmic spondylolisthesis: a randomized controlled study of surgical fixation with and without reduction
Background: Spondylolisthesis is a condition in which a vertebra slips out of its proper position onto the bone below it as a result of a pars interarticularis defect. The slipped segment produces abnormal positioning of the vertebrae relative to each other along the spinal column and causes mechanical back pain and neural breach.
Materials and methods: A randomized, double-blinded study included 41 patients aged 36-69 years (18 females and 28 males) treated for symptomatic spondylolisthesis between December 2006 and December 2009. All patients were randomly distributed into two groups, I and II. Twenty patients were in Group I; they underwent reduction of the slipped vertebrae using the Reduction-Screw Technique and posterior lumbar interbody fixation (PLIF). Group II consisted of twenty-one patients who underwent surgical fixation (PLIF) only, without reduction. All patients in this study had the same pre- and postoperative management.
Results: Only one case, in Group I, had a broken rod that required revision. Superficial wound infection occurred in two patients, and one patient from Group II developed a wound hematoma. The outcome in both groups was variable in the short term but was almost the same at long-term follow-up.
Conclusion: Surgical management of symptomatic low grade spondylolisthesis should include neural decompression and surgical fixation. Reduction of the slipped vertebral bodies is unnecessary as the ultimate outcome is likely to be similar.
Continuous and transparent multimodal authentication: reviewing the state of the art
Individuals, businesses and governments undertake an ever-growing range of activities online and via various Internet-enabled digital devices. Unfortunately, these activities, services, information and devices are the targets of cybercrimes. Verifying a user's legitimacy to use or access a digital device or service has therefore become of the utmost importance. Authentication is the frontline countermeasure for ensuring that only the authorized user is granted access; however, it has historically suffered from a range of issues related to the security and usability of its approaches. Most approaches also still function only at the point of entry, and those that perform some form of re-authentication do so in an intrusive manner. It is therefore apparent that a more innovative, convenient and secure user authentication solution is vital. This paper reviews authentication methods along with the current use of authentication technologies, aiming to establish the current state of the art, identify the open problems to be tackled and the available solutions to be adopted, and investigate whether these technologies can fill the gap between high security and user satisfaction. This is followed by a literature review of the existing research on continuous and transparent multimodal authentication. The review concludes that providing users with adequate protection and convenience requires innovative, robust authentication mechanisms to be utilized at a universal level. Ultimately, a potential federated biometric authentication solution is presented; however, it still needs to be developed and extensively evaluated so that it operates in a transparent, continuous and user-friendly manner.
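The sketch below illustrates the score-level fusion idea underlying continuous, transparent multimodal authentication: each modality contributes a confidence score in the background, and the fused score decides whether the session stays open. The modalities, weights and threshold are hypothetical, not a scheme from the reviewed literature.

```python
# Minimal sketch (hypothetical modalities, weights and threshold) of score-level
# fusion for continuous, transparent multimodal authentication.
from typing import Dict

# Assumed modality weights; a deployed system would tune these per user and device.
WEIGHTS: Dict[str, float] = {
    "face": 0.40,        # periodic camera samples
    "keystroke": 0.35,   # typing dynamics collected in the background
    "gait": 0.25,        # accelerometer-based gait while walking
}
THRESHOLD = 0.6          # illustrative acceptance threshold

def fused_confidence(scores: Dict[str, float]) -> float:
    """Weighted average over the modalities that produced a score in this window."""
    total_weight = sum(WEIGHTS[m] for m in scores)
    return sum(WEIGHTS[m] * s for m, s in scores.items()) / total_weight

def keep_session_alive(scores: Dict[str, float]) -> bool:
    """Keep the user logged in only while the fused confidence stays above threshold."""
    return fused_confidence(scores) >= THRESHOLD

# One monitoring window: confident face sample, weaker keystroke score, no gait data.
print(keep_session_alive({"face": 0.82, "keystroke": 0.55}))
```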
- …