-
Seufert, M.: Statistical Methods and Models based on Quality of Experience Distributions. Quality and User Experience. 6, (2020).
Due to biased assumptions about the underlying ordinal rating scale in subjective Quality of Experience (QoE) studies, Mean Opinion Score (MOS)-based evaluations yield results that are hard to interpret and can be misleading. This paper proposes to consider the full QoE distribution for evaluating, reporting, and modeling QoE results instead of relying on MOS-based metrics derived from ordinal rating scales. The QoE distribution can be represented concisely by the parameters of a multinomial distribution without losing any information about the underlying QoE ratings, and even keeps backward compatibility with previous, biased MOS-based results. Considering QoE results as a realization of a multinomial distribution makes it possible to rely on a well-established theoretical background, which enables meaningful evaluations also for ordinal rating scales. Moreover, QoE models based on QoE distributions retain detailed information from the results of a QoE study of a technical system, and thus give an unprecedented richness of insights into the end users’ experience with that system. In this work, existing and novel statistical methods for QoE distributions are summarized and exemplary evaluations are outlined. Furthermore, using the novel concept of quality steps, simulative and analytical QoE models based on QoE distributions are presented and showcased. The goal is to demonstrate the fundamental advantages of considering QoE distributions over MOS-based evaluations when the underlying rating data is ordinal in nature.
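A minimal Python sketch of the core idea (the ratings and function names below are illustrative, not from the paper): the multinomial parameters retain the full rating information, while the familiar MOS remains derivable from them for backward compatibility.

```python
from collections import Counter

def qoe_distribution(ratings, scale=(1, 2, 3, 4, 5)):
    """Represent ordinal QoE ratings as multinomial probability parameters."""
    counts = Counter(ratings)
    n = len(ratings)
    return {k: counts.get(k, 0) / n for k in scale}

def mos(distribution):
    """Backward-compatible Mean Opinion Score derived from the distribution."""
    return sum(k * p for k, p in distribution.items())

# hypothetical ratings from a subjective study on a 5-point ACR scale
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
dist = qoe_distribution(ratings)
print(dist)       # full distribution: no information about the ratings lost
print(mos(dist))  # MOS ≈ 3.9, still recoverable from the distribution
```

Note that the reverse direction is impossible: the MOS alone cannot recover the distribution, which is exactly the information loss the paper argues against.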
-
Wassermann, S., Seufert, M., Casas, P., Li, G., Kuang, L.: ViCrypt to the Rescue: Real-time, Machine-Learning-driven Video-QoE Monitoring for Encrypted Streaming Traffic. IEEE Transactions on Network and Service Management. 17, 2007-2023 (2020).
Video streaming is the killer application of the Internet today. In this paper, we address the problem of real-time, passive Quality-of-Experience (QoE) monitoring of HTTP Adaptive Video Streaming (HAS), from the Internet-Service-Provider (ISP) perspective - i.e., relying exclusively on in-network traffic measurements. Given the wide adoption of end-to-end encryption, we resort to machine-learning (ML) models to estimate multiple key video-QoE indicators (KQIs) from the analysis of the encrypted traffic. We present ViCrypt, an ML-driven monitoring solution able to infer the most important KQIs for HTTP Adaptive Streaming (HAS), namely stalling, initial delay, video resolution, and average video bitrate. ViCrypt performs estimations in real-time, during the playback of an ongoing video-streaming session, with a fine-grained temporal resolution of just one second. For this, it relies on lightweight, stream-like features continuously extracted from the encrypted stream of packets. Empirical evaluations on a large and heterogeneous corpus of YouTube measurements show that ViCrypt can infer the targeted KQIs with high accuracy, enabling large-scale passive video-QoE monitoring and proactive QoE-aware traffic management. Different from the state of the art, and besides real-time operation, ViCrypt is not bound to coarse-grained KQI-classes, providing better and sharper insights than other solutions. Finally, ViCrypt does not require chunk-detection approaches for feature extraction, significantly reducing the complexity of the monitoring approach, and potentially improving generalization to different HAS protocols used by other video-streaming services such as Netflix and Amazon.
-
Wehner, N., Seufert, M., Schüler, J., Wassermann, S., Casas, P., Hoßfeld, T.: Improving Web QoE Monitoring for Encrypted Network Traffic through Time Series Modeling. ACM SIGMETRICS Performance Evaluation Review. (2020).
This paper addresses the problem of Quality of Experience (QoE) monitoring for web browsing. In particular, the inference of common Web QoE metrics such as Speed Index (SI) is investigated. Based on a large dataset collected with open web-measurement platforms on different device types, a unique feature set is designed and used to estimate the RUMSI -- an efficient approximation to SI -- with machine-learning-based regression and classification approaches. Results indicate that it is possible to estimate the RUMSI accurately, and that recurrent neural networks in particular are highly suitable for the task, as they capture the network dynamics more precisely.
-
Kunz, F., Hirth, M., Schweitzer, T., Linz, C., Goetz, B., Stellzig-Eisenhauer, A., Borchert, K., Böhm, H.: Subjective perception of craniofacial growth asymmetries in patients with deformational plagiocephaly. Clinical Oral Investigations. 24, (2020). https://doi.org/10.1007/s00784-020-03417-y
-
Hoßfeld, T., Heegaard, P.E., Skorin-Kapov, L., Varela, M.: Deriving QoE in systems: from fundamental relationships to a QoE-based Service-level Quality Index. Quality and User Experience. 5, (2020). https://rdcu.be/b483m
With Quality of Experience (QoE) research having made significant advances over the years, service and network providers aim at user-centric evaluation of the services provided in their system. The question arises how to derive QoE in systems. In the context of subjective user studies conducted to derive relationships between influence factors and QoE, user diversity leads to varying distributions of user rating scores for different test conditions. Such models are commonly exploited by providers to derive various QoE metrics in their system, such as expected QoE, or the percentage of users rating above a certain threshold. The question then becomes how to combine (a) user rating distributions obtained from subjective studies, and (b) system parameter distributions, so as to obtain the actual observed QoE distribution in the system? Moreover, how can various QoE metrics of interest in the system be derived? We prove fundamental relationships for the derivation of QoE in systems, thus providing an important link between the QoE community and the systems community. In our numerical examples, we focus mainly on QoE metrics. We furthermore provide a more generalized view on quantifying the quality of systems by defining a QoE-based Service-level Quality Index. This index exploits the fact that quality can be seen as a proxy measure for utility. Following the assumption that not all user sessions should be weighted equally, we aim to provide a generic framework that can be utilized to quantify the overall utility of a service delivered by a system.
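The combination of (a) user rating distributions and (b) system parameter distributions can be illustrated with a small, hypothetical example (the condition classes and all probabilities below are invented for illustration): the QoE distribution observed in the system follows from the law of total probability over the system's condition distribution.

```python
# P(rating | test condition) from a subjective study, ratings 1..5;
# conditions could be, e.g., throughput classes (values are hypothetical)
rating_dist = {
    "good":   [0.00, 0.05, 0.15, 0.40, 0.40],
    "medium": [0.05, 0.15, 0.40, 0.30, 0.10],
    "bad":    [0.30, 0.35, 0.25, 0.08, 0.02],
}
# P(test condition) as observed in the running system
condition_dist = {"good": 0.6, "medium": 0.3, "bad": 0.1}

# Law of total probability: P(r) = sum_c P(c) * P(r | c)
system_qoe = [
    sum(condition_dist[c] * rating_dist[c][r] for c in condition_dist)
    for r in range(5)
]

# Example QoE metrics derived from the system-level distribution
expected_qoe = sum((r + 1) * p for r, p in enumerate(system_qoe))
good_or_better = sum(system_qoe[3:])  # fraction of users rating >= 4

print(system_qoe, expected_qoe, good_or_better)
```

The same mixing step yields any metric of interest, e.g. the percentage of users rating above a threshold, directly from the system-level distribution.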
-
Wamser, F., Alay, Ö., Metzger, F., Valentin, S.: IJNM Special Issue - Editorial - QoE-centric Analysis and Management of Communication Networks. International Journal of Network Management. Special Issue: QoE-centric Analysis and Management of Communication Networks, (2020).
The heterogeneity and variability of Internet applications have increased considerably in recent years. Applications such as video streaming are responsible for a large part of data traffic on the Internet. Internet telephony and video conferencing systems have become part of our daily lives. At the same time, the Internet of Things is striving to exceed previous expectations regarding the number of devices. Furthermore, the proliferation of video games, virtual reality applications, and 360° video applications is increasing. All this leads to specific yet differing requirements imposed by applications on frameworks, service platforms, and networks. For each service, users desire particular service criteria, such as smooth interactivity, fast downloads, high availability, or extensive content. Such requirements can usually be summarized under the term Quality of Experience, i.e., the overall satisfaction of a user with the system currently in use. In the age of big data and dynamic networks, Quality of Experience is still looking for its place, and good solutions are in high demand. This Special Issue addresses the latest advances and challenges in analysis, design, modeling, measurement, and performance evaluation of Quality of Experience and Quality of Experience-oriented metrics and management.
-
Borchert, K., Seufert, A., Gamboa, E., Hirth, M., Hoßfeld, T.: In vitro vs in vivo: does the study’s interface design influence crowdsourced video QoE? Quality and User Experience. (2020).
Evaluating the Quality of Experience (QoE) of video streaming and its influence factors has become paramount for streaming providers, as they want to maintain high satisfaction among their customers. In this context, crowdsourced user studies have become a valuable tool to evaluate, on a large scale, different factors which can affect the perceived user experience. In general, most of these crowdsourcing studies use what we refer to as either an in vivo or an in vitro interface design. In vivo design means that the study participant has to rate the QoE of a video that is embedded in an application similar to a real streaming service, e.g., YouTube or Netflix. In vitro design refers to a setting in which the video stream is separated from a specific service and thus the video plays on a plain background. Although these interface designs vary widely, the results are often compared and generalized. In this work, we use a crowdsourcing study to investigate the influence of three interface design alternatives, an in vitro and two in vivo designs with different levels of interactiveness, on the perceived video QoE. Contrary to our expectations, the results indicate that there is no significant influence of the study’s interface design on the video experience in general. Furthermore, we found that the in vivo design does not reduce the test takers’ attentiveness. However, we observed that participants who interacted with the test interface reported a higher video QoE than other groups.
-
Göritz, A., Borchert, K., Hirth, M.: Using Attention Testing to Select Crowdsourced Workers and Research Participants. Social Science Computer Review. (2019).
-
Linguaglossa, L., Lange, S., Pontarelli, S., Retvari, G., Rossi, D., Zinner, T., Bifulco, R., Jarschel, M., Bianchi, G.: Survey of Performance Acceleration Techniques for Network Function Virtualization. Proceedings of the IEEE. (2019).
The ongoing network softwarization trend holds the promise to revolutionize network infrastructures by making them more flexible, reconfigurable, portable, and more adaptive than ever. Still, the migration from hard-coded/hardwired network functions towards their software-programmable counterparts comes along with the need for tailored optimizations and acceleration techniques, so as to avoid, or at least mitigate, the throughput/latency performance degradation with respect to fixed function network elements. The contribution of this article is twofold. First, we provide a comprehensive overview of the host-based Network Function Virtualization (NFV) ecosystem, covering a broad range of techniques, from low level hardware acceleration and bump-in-the-wire offloading approaches, to high-level software acceleration solutions, including the virtualization technique itself. Second, we derive guidelines regarding the design, development, and operation of NFV-based deployments that meet the flexibility and scalability requirements of modern communication networks.
-
Schwind, A., Midoglu, C., Alay, Ö., Griwodz, C., Wamser, F.: Dissecting the performance of YouTube video streaming in mobile networks. International Journal of Network Management. (2019).
Video streaming applications constitute a significant portion of the Internet traffic today, with mobile accounting for more than half of the online video views. The high share of video in the current Internet traffic mix has prompted many studies that examine video streaming through measurements. However, streaming performance depends on many different factors at different layers of the TCP/IP stack. For example, browser selection at the application layer or the choice of protocol at the transport layer can have significant impact on the video performance. Furthermore, video performance heavily depends on the underlying network conditions (e.g., network and link layers). For mobile networks, the conditions vary significantly, since each operator has a different deployment strategy and configuration. In this paper, we focus on YouTube and carry out a comprehensive study investigating the influence of different factors on streaming performance. Leveraging the Measuring Mobile Broadband Networks in Europe (MONROE) test bed that enables experimentation with 13 different network configurations in four countries, we collect more than 1800 measurement samples in operational mobile networks. With this campaign, our goal is to quantify the impact of parameters from different layers on YouTube's streaming quality of experience (QoE). More specifically, we analyze the role of the browser (e.g., Firefox and Chrome), the impact of the transport protocol (e.g., TCP or QUIC), and the influence of network bandwidth and signal coverage on streaming QoE. Our analysis reveals that all these parameters need to be taken into account jointly for network management practices, in order to ensure a high end-user experience.
-
Metzger, F., Hoßfeld, T., Bauer, A., Kounev, S., Heegaard, P.E.: Modeling of Aggregated IoT Traffic and Its Application to an IoT Cloud. Proceedings of the IEEE. 107, 679-694 (2019).
As the Internet of Things (IoT) continues to gain traction in telecommunication networks, a very large number of devices are expected to be connected and used in the near future. In order to appropriately plan and dimension the network, as well as the back-end cloud systems and the resulting signaling load, traffic models are employed. These models are designed to accurately capture and predict the properties of IoT traffic in a concise manner. To achieve this, Poisson process approximations, based on the Palm–Khintchine theorem, have often been used in the past. Due to the scale (and the difference in scales in various IoT networks) of the modeled systems, the fidelity of this approximation is crucial, as, in practice, it is very challenging to accurately measure or simulate large-scale IoT deployments. The main goal of this paper is to understand the level of accuracy of the Poisson approximation model. To this end, we first survey both common IoT network properties and network scales as well as traffic types. Second, we explain and discuss the Palm–Khintchine theorem, how it is applied to the problem, and which inaccuracies can occur when using it. Based on this, we derive guidelines as to when a Poisson process can be assumed for aggregated periodic IoT traffic. Finally, we evaluate our approach in the context of an IoT cloud scaler use case.
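The accuracy question can be illustrated with a small simulation sketch (all parameters are hypothetical, and this is not the paper's evaluation): superpose many periodic IoT sources with independent random phases and check how close the aggregate per-window count statistics come to a Poisson process, for which the index of dispersion (variance divided by mean) equals 1.

```python
import math
import random

def aggregate_counts(n_sources, period, window, horizon, seed=0):
    """Arrival counts per window for superposed periodic sources with random phases."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, period) for _ in range(n_sources)]
    counts = []
    t = 0.0
    while t + window <= horizon:
        c = 0
        for phase in phases:
            # first arrival of this periodic source at or after time t
            first = phase + period * math.ceil((t - phase) / period)
            while first < t + window:
                c += 1
                first += period
        counts.append(c)
        t += window
    return counts

# e.g., 500 sensors, each reporting once per minute, observed in 1 s windows
counts = aggregate_counts(n_sources=500, period=60.0, window=1.0, horizon=600.0)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# For a Poisson process var/mean = 1; with many independent sources
# the aggregate should come close (Palm-Khintchine regime).
print(mean, var / mean)
```

Shrinking the number of sources (or synchronizing their phases) pushes the dispersion index away from 1, which is exactly the regime where the Poisson approximation becomes questionable.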
-
Seufert, M., Wassermann, S., Casas, P.: Considering User Behavior in the Quality of Experience Cycle: Towards Proactive QoE-aware Traffic Management. IEEE Communications Letters. 23, 1145-1148 (2019).
The concept of Quality of Experience (QoE) of Internet services is widely recognized by service providers and network operators. They strive to deliver the best experience to their customers in order to increase revenues and avoid churn. Therefore, QoE is increasingly considered as an integral part of the reactive traffic management cycle of network operators. Additionally, QoE also constitutes a cycle of its own, which includes the user behavior and the service requirements. This work describes this QoE cycle, which is not widely taken into account yet, discusses the interactions of the two cycles, and derives implications towards an improved and proactive QoE-aware traffic management. A showcase on how network operators can obtain hints on the change of network requirements from detecting user behavior in encrypted video traffic is also presented in this paper.
-
Seufert, M., Wehner, N., Casas, P.: A Fair Share for All: TCP-inspired Adaptation Logic for QoE Fairness among Heterogeneous HTTP Adaptive Video Streaming Clients. IEEE Transactions on Network and Service Management. 16, 475-488 (2019).
-
Wassermann, S., Wehner, N., Casas, P.: Machine Learning Models for YouTube QoE and User Engagement Prediction in Smartphones. ACM SIGMETRICS Performance Evaluation Review. 46, 155-158 (2019).
Measuring and monitoring YouTube Quality of Experience is a challenging task, especially when dealing with cellular networks and smartphone users. Using a large-scale database of crowdsourced YouTube-QoE measurements in smartphones, we conceive multiple machine-learning models to infer different YouTube-QoE-relevant metrics and user-behavior-related metrics from network-level measurements, without requiring root access to the smartphone, video-player embedding, or any other reverse-engineering-like approaches. The dataset includes measurements from more than 360 users worldwide, spanning the last five years. Our preliminary results suggest that QoE-based monitoring of YouTube mobile can be realized through machine-learning models with high accuracy, relying only on network-related features and without accessing any higher-layer metric to perform the estimations.
-
Geissler, S., Herrnleben, S., Bauer, R., Grigorjew, A., Jarschel, M., Zinner, T.: The Power of Composition: Abstracting a Multi-Device SDN Data Path Through a Single API. IEEE Transactions on Network and Service Management. (2019).
-
Skorin-Kapov, L., Varela, M., Hoßfeld, T., Chen, K.-T.: A Survey of Emerging Concepts and Challenges for QoE Management of Multimedia Services. ACM Transactions on Multimedia Computing, Communications and Applications. (2018).
-
Hoßfeld, T., Skorin-Kapov, L., Heegaard, P.E., Varela, M.: A new QoE fairness index for QoE management. Quality and User Experience. (2018).
-
Burger, V., Zinner, T., Dinh-Xuan, L., Wamser, F., Tran-Gia, P.: A Generic Approach to Video Buffer Modeling using Discrete-Time Analysis. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM). 14, 23 (2018).
-
Chang, H.-S., Hsu, C.-F., Hoßfeld, T., Chen, K.-T.: Active Learning for Crowdsourced QoE Modeling. IEEE Transactions on Multimedia. (2018).
-
Schütz, A., Fertig, T., Weber, K., Vu, H., Hirth, M., Tran-Gia, T.: Vertrauen ist gut, Blockchain ist besser - Einsatzmöglichkeiten von Blockchain für Vertrauensprobleme im Crowdsourcing. HMD Praxis der Wirtschaftsinformatik. (2018).
In crowdsourcing, companies post tasks on dedicated marketplaces, where they can be processed by a large number of anonymous workers. In contrast to traditional employment relationships, the worker delivers in advance: based on the submitted work, the employer decides whether the worker is paid or not. Trust between both parties is therefore a decisive success factor for crowdsourcing. This article describes the trust problems in crowdsourcing and presents a concept for a novel reputation system for crowdsourcing platforms, which is intended to increase trust in crowdsourcing. The concept is implemented by means of smart contracts on the Ethereum blockchain in the CrowdPrecision project. The profiles of employer and worker are linked to a reputation score that is stored transparently and tamper-proof in the blockchain. The individual tasks are mapped via smart contracts, which guarantee automated payment and rating in compliance with the agreed conditions.
-
Dinh-Xuan, L., Popp, C., Burger, V., Wamser, F., Hoßfeld, T.: Impact of VNF Placements on QoE Monitoring in the Cloud. International Journal of Network Management. (2018).
-
Hoßfeld, T., Skorin-Kapov, L., Varela, M., Chen, K.-T.: Guest Editorial: Special Issue on “QoE Management for Multimedia Services”. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM). (2018).
-
Sieber, C., Hagn, K., Moldovan, C., Hoßfeld, T., Kellerer, W.: Towards Machine Learning-Based Optimal HAS. arXiv preprint arXiv:1808.08065. (2018).
-
Hoßfeld, T., Timmerer, C.: Quality of experience column: an introduction. ACM SIGMultimedia Records. 10 (2018).
-
Hoßfeld, T., Heegaard, P.E., Varela, M., Skorin-Kapov, L.: Confidence Interval Estimators for MOS Values. arXiv preprint arXiv:1806.01126. (2018).
-
Lorenz, C., Hock, D., Scherer, J., Durner, R., Kellerer, W., Gebert, S., Gray, N., Zinner, T., Tran-Gia, P.: An SDN/NFV-enabled Enterprise Network Architecture Offering Fine-Grained Security Policy Enforcement. IEEE Communications Magazine. 55, 217 - 223 (2017).
In recent years, attacks and threat vectors against enterprise networks have been constantly increasing in number and variety. In addition, new challenges arise not only regarding the level of security provided, but also the scalability and manageability of the deployed countermeasures, such as firewalls and intrusion detection systems. Despite these developments, the main security systems, e.g., network firewalls, have remained rather unchanged. Due to their tight integration into the physical network infrastructure, dynamically allocating resources to adapt the security measures to the current network conditions is a difficult undertaking. Therefore, in this work, we analyze and compare different architectural design patterns for the integration of SDN/NFV-based security solutions into enterprise networks.
-
Hoffmann, M., Jarschel, M., Pries, R., Schneider, P., Jukan, A., Bziuk, W., Gebert, S., Zinner, T., Tran-Gia, P.: SDN and NFV as Enabler for the Distributed Network Cloud. Mobile Networks and Applications. (2017).
-
Metter, C., Seufert, M., Wamser, F., Zinner, T., Tran-Gia, P.: Analytical Model for SDN Signaling Traffic and Flow Table Occupancy and its Application for Various Types of Traffic. IEEE Transactions on Network and Service Management. 14, 603-615 (2017).
Software Defined Networking (SDN) has emerged as a promising networking paradigm overcoming various drawbacks of current communication networks. The control and data plane of switching devices is decoupled and control functions are centralized at the network controller. In SDN, each new flow introduces additional signaling traffic between the switch and the controller. Based on this traffic, rules are created in the flow table of the switch, which specify the forwarding behavior. To avoid table overflows, unused entries are removed after a predefined time-out period. Given a specific traffic mix, the choice of this time-out period affects the trade-off between signaling rate and table occupancy. As a result, network operators have to adjust this parameter to enable a smooth and efficient network operation. Due to the complexity of this problem caused by the various traffic flows in a network, a suitable abstraction is necessary in order to derive valid parameter values in time. The contribution of this work is threefold. Firstly, we formulate a simple analytical model that allows optimizing the network performance with respect to the table occupancy and the signaling rate. Secondly, we validate the model by means of simulation. Thirdly, we illustrate the impact of the time-out period on the signaling traffic and the flow table occupancy for different data-plane traffic mixes and characteristics. This includes scenarios with single application instances, as well as multiple application instances of different application types in an SDN-enabled network.
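A much-simplified sketch of this trade-off (not the paper's model; it assumes each flow's packets form an independent Poisson process, and all names and traffic mixes are hypothetical): an entry is present at time t iff a packet of that flow arrived within the last timeout seconds, and a packet whose preceding idle gap exceeds the timeout causes a table miss and hence signaling to the controller.

```python
import math

def tradeoff(flow_rates, timeout):
    """Return (signaling messages/s, mean flow-table occupancy in entries).

    For a Poisson packet process of rate lam, a packet's preceding gap
    exceeds the timeout with probability exp(-lam * timeout), and the
    entry is installed at a random instant with probability
    1 - exp(-lam * timeout).
    """
    signaling = sum(lam * math.exp(-lam * timeout) for lam in flow_rates)
    occupancy = sum(1 - math.exp(-lam * timeout) for lam in flow_rates)
    return signaling, occupancy

# hypothetical traffic mix: 100 mice flows (0.2 pkt/s) and 10 elephants (50 pkt/s)
flows = [0.2] * 100 + [50.0] * 10
for timeout in (1, 5, 10, 60):
    s, o = tradeoff(flows, timeout)
    print(f"timeout={timeout:3}s  signaling={s:7.2f}/s  occupancy={o:6.1f} entries")
```

Scanning the timeout makes the trade-off visible directly: longer timeouts monotonically reduce the signaling rate but keep more entries occupied, which is the tension an operator has to balance against the table size.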
-
Schwarzmann, S., Zinner, T., Dobrijevic, O.: Quantitative Comparison of Application-Network Interaction: A Case Study of Adaptive Video Streaming. Quality and User Experience. (2017).
Managing quality of experience (QoE) is now widely accepted as a critical objective for multimedia applications and the supporting communication systems. In general, QoE management encompasses: (i) monitoring of the key influence factors and QoE indicators, and (ii) deciding on the appropriate control actions as specified by the management goal. Many multimedia applications, e.g., video streaming and audio conferencing, are able to adjust their operational parameters so as to react to variations in the network performance. However, such an adaptation feature is mostly based on a local client view of the network conditions, which may lead to an unfair allocation of network resources among heterogeneous clients and, thus, an unfair QoE distribution. In order to tackle this issue, there have been calls for cooperation between the applications and the underlying network, which includes application-network interaction (App-Net) in terms of: (1) exchanging information on the monitored QoE indicators, and (2) coordinating the QoE control actions. Various App-Net mechanisms focusing on specific use cases and applications have been proposed to date. This paper gives an overview of App-Net mechanisms and proposes a generic App-Net model that provides the means to realize a coordinated QoE-centric management. Based on the App-Net model, we develop an evaluation methodology to compare three App-Net mechanisms for managing QoE of HTTP adaptive streaming (HAS) against a baseline HAS service. The aim of this quantitative comparison is to explore the trade-offs between QoE gains and the complexity of App-Net implementation, with respect to the number of monitoring and control messages, achieved video quality, and QoE fairness among heterogeneous clients. Our ultimate goal is to set up reproducible experiments that facilitate a holistic evaluation of different App-Net mechanisms.
-
Hoßfeld, T., Chan, S.-H.G., Mark, B.L., Timm-Giel, A.: Softwarization and Caching in NGN. Computer Networks. 125, 1-3 (2017).
-
Cofano, G., De Cicco, L., Zinner, T., Nguyen-Ngoc, A., Tran-Gia, P., Mascolo, S.: Design and Performance Evaluation of Network-assisted Control Strategies for HTTP Adaptive Streaming. ACM Transactions on Multimedia Computing, Communications, and Applications. (2017).
-
Zinner, T., Geissler, S., Lange, S., Gebert, S., Seufert, M., Tran-Gia, P.: A Discrete-Time Model for Optimizing the Processing Time of Virtualized Network Functions. Computer Networks. 125, 4-14 (2017).
The softwarization of networks promises cost savings and better scalability of network functions by moving functionality from specialized devices into commercial off-the-shelf hardware. Generalized computing hardware offers many degrees of adjustment and tuning, which can affect performance and resource utilization. One of these adjustments are interrupt moderation techniques implemented by modern network interface cards and operating systems. Using these, an administrator can optimize either for low latencies or low CPU overhead for processing of network traffic. In this work, an analytical model that allows the computation of relevant performance metrics like packet processing time and packet loss for generic virtualized network functions running on commodity hardware is presented. Based on this model, impact factors like average packet interarrival time, interarrival time distribution, and duration of the interrupt aggregation interval are studied. Furthermore, we significantly improve the computational tractability of this discrete-time model by proving and leveraging a property regarding its limit behavior. We also demonstrate that using this property does not affect the accuracy of the model in the context of realistic parameter combinations. Finally, the improved runtime for numerical evaluations allows administrators to dynamically adapt their interrupt mitigation settings to changing network conditions by recalculating optimal parameters.
-
Hoßfeld, T.: 2016 International Teletraffic Congress (ITC 28) Report. ACM SIGCOMM Computer Communication Review. (2017).
-
Seufert, M., Burger, V., Lorey, K., Seith, A., Loh, F., Tran-Gia, P.: Assessment of Subjective Influence and Trust with an Online Social Network Game. Computers in Human Behavior. 64, 233-246 (2016).
Deducing influence and trust between two individuals solely from objective data in online social networks (OSNs) is a rather vague approach. Subjective assessments via surveys produce better results, but are harder to conduct considering the vast number of friendships of OSN users. This work presents a framework for personalized surveys on relationships in OSNs, which follows a gamification approach. A Facebook game was developed and used to subjectively assess social influence and interpersonal trust based on models from psychology. The results show that it is possible to obtain subjective opinions and (limited) objective data about relationships with an OSN game. An implicit assessment of influence and trust with subcategory questions is also feasible in this case.
-
Wamser, F., Casas, P., Seufert, M., Moldovan, C., Tran-Gia, P., Hoßfeld, T.: Modeling the YouTube Stack: from Packets to Quality of Experience. Computer Networks. 109, 211-224 (2016).
YouTube is one of the most popular and volume-dominant services in today’s Internet, and has changed the Web forever. Consequently, network operators are forced to consider it in the design, deployment, and optimization of their networks. Taming YouTube requires a good understanding of the complete YouTube stack, from the network streaming service to the application itself. Understanding the interplay between individual YouTube functionalities and their implications for traffic and user Quality of Experience (QoE) has become paramount. In this paper we characterize and model the YouTube stack at different layers, going from the generated network traffic to the QoE perceived by the users watching YouTube videos. Firstly, we present a network traffic model for the YouTube flow control mechanism, which makes it possible to understand how YouTube provisions video traffic flows to users. Secondly, we investigate how traffic is consumed at the client side, deriving a simple model for the YouTube application. Thirdly, we analyze the implications for the end user, and present a model for the quality as perceived by them. This model is finally integrated into a system for real-time QoE-based YouTube monitoring, which is highly useful for operators to assess the performance of their networks for provisioning YouTube videos. The central parameter for all the presented models is the buffer level at the YouTube application layer. This paper provides an extensive compendium of objective tools and models for network operators to better understand the YouTube traffic in their networks, to predict the playback behavior of the video player, and to assess in practice how well they satisfy their customers watching YouTube.
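The role of the buffer level as the central parameter can be illustrated with a deliberately simplified fill-and-drain sketch (all parameters and names are hypothetical, not the paper's model): the buffer fills with downloaded video seconds, drains in real time during playback, and a stall occurs whenever it runs empty.

```python
def simulate_buffer(throughput_kbps, bitrate_kbps=1000, dt=1.0,
                    initial_buffer=2.0):
    """Track buffered video seconds over time; return (buffer trace, stall count)."""
    buffer, playing, stalls = 0.0, False, 0
    trace = []
    for bw in throughput_kbps:
        buffer += bw / bitrate_kbps * dt       # seconds of video downloaded
        if not playing and buffer >= initial_buffer:
            playing = True                     # (re)start after (re)buffering
        if playing:
            if buffer < dt:                    # buffer underrun: playback stalls
                playing, stalls = False, stalls + 1
            else:
                buffer -= dt                   # one interval played back
        trace.append(buffer)
    return trace, stalls

# hypothetical session: a mid-session throughput drop below the video bitrate
bandwidth = [2000] * 10 + [250] * 20 + [2000] * 10
trace, stalls = simulate_buffer(bandwidth)
print(f"stalls={stalls}, final buffer={trace[-1]:.2f}s")
```

Even this toy version shows why monitoring the buffer level suffices to predict the dominant QoE impairment (stalling) from throughput alone.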
-
Tavakoli, S., Egger, S., Seufert, M., Schatz, R., Brunnström, K., García, N.: Perceptual Quality of HTTP Adaptive Streaming Strategies: Cross-Experimental Analysis of Multi-Laboratory and Crowdsourced Subjective Studies. IEEE Journal on Selected Areas in Communications. 34, 2141-2153 (2016).
Today’s packet-switched networks are subject to bandwidth fluctuations that cause degradation of the user experience of multimedia services. In order to cope with this problem, HTTP Adaptive Streaming (HAS) has been proposed in recent years as a video delivery solution for the future Internet and is being adopted by an increasing number of streaming services such as Netflix and YouTube. HAS enables service providers to improve users’ Quality of Experience (QoE) and network resource utilization by adapting the quality of the video stream to the current network conditions. However, the resulting time-varying video quality caused by adaptation introduces a new type of impairment and thus novel QoE research challenges. Despite various recent attempts to investigate these challenges, many fundamental questions regarding HAS perceptual performance are still open. In this paper, the QoE impact of different technical adaptation parameters, including chunk length, switching amplitude, switching frequency, and temporal recency, is investigated. In addition, the influence of content on the perceptual quality of these parameters is analyzed. To this end, a large number of adaptation scenarios have been subjectively evaluated in four laboratory experiments and one crowdsourcing study. Statistical analysis of the combined dataset reveals results that partly contradict widely held assumptions and provide novel insights into the perceptual quality of adapted video sequences, e.g., interaction effects between quality switching direction (up/down) and switching strategy (smooth/abrupt). The large variety of experimental configurations across the different studies ensures the consistency and external validity of the presented results, which can be utilized for enhancing the perceptual performance of adaptive streaming services.
-
Casas, P., Seufert, M., Wamser, F., Gardlo, B., Sackl, A., Schatz, R.: Next to You: Monitoring Quality of Experience in Cellular Networks from the End-devices. IEEE Transactions on Network and Service Management. 13, 181-196 (2016).
A quarter of the world population will be using smartphones to access the Internet in the near future. In this context, understanding the Quality of Experience (QoE) of popular apps on such devices becomes paramount to cellular network operators, who need to offer high quality levels to reduce the risk of customers churning due to quality dissatisfaction. In this paper, we address the problem of QoE provisioning in smartphones from a double perspective, combining the results obtained from subjective lab tests with end-device passive measurements and crowdsourced QoE feedback obtained in operational cellular networks. The study addresses the impact of both access bandwidth and latency on the QoE of five different services and mobile apps: YouTube, Facebook, web browsing through Chrome, Google Maps, and WhatsApp. We evaluate the influence of both constant and dynamically changing network access conditions, tackling in particular the case of fluctuating downlink bandwidth, which is typical in cellular networks. As a main contribution, we show that the results obtained in the lab are highly applicable in the live scenario, as the derived mappings track the QoE reported by users in real networks. We additionally provide hints and bandwidth thresholds for good QoE levels in such apps, as well as a discussion of end-device passive measurements and analysis. The results presented in this paper provide a sound basis for better understanding the QoE requirements of popular mobile apps, as well as for monitoring the underlying provisioning network. To the best of our knowledge, this is the first paper providing such a comprehensive analysis of QoE on mobile devices, combining network measurements with users’ QoE feedback in lab tests and operational networks.
-
Burger, V., Seufert, M., Hoßfeld, T., Tran-Gia, P.: Performance Evaluation of Backhaul Bandwidth Aggregation Using a Partial Sharing Scheme. Physical Communication. 19, 135-144 (2016).
To cope with the increasing demand of mobile devices and the limited capacity of cellular networks, mobile connections are offloaded to WiFi. The access capacity is further increased by aggregating the bandwidth of WiFi access links. To analyse the performance of aggregated access links, we model the simplest case of two cooperating systems interchanging capacities using an offloading scheme. The resulting analytic model is computed by means of a two-dimensional birth-and-death process. It can be used to seamlessly evaluate the performance of systems between partitioning and complete sharing. This allows optimizing the setting of thresholds depending on the load of the cooperating system. Furthermore, the benefit of aggregating bandwidth in different scenarios with homogeneous and heterogeneous workloads is quantified, and the performance of more than two cooperating systems is evaluated by simulation.
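The partitioning-versus-complete-sharing spectrum described above can be illustrated with a toy numerical sketch, not the paper's actual model: two small loss systems with a threshold-based offloading rule, solved as a two-dimensional birth-and-death process. All parameters (capacities, rates, and the threshold semantics) are illustrative assumptions.

```python
import itertools

def steady_state(Q):
    # Solve pi * Q = 0 with sum(pi) = 1 via Gaussian elimination.
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # transpose: balance equations
    b = [0.0] * n
    A[-1] = [1.0] * n  # replace one balance equation by the normalization condition
    b[-1] = 1.0
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def class1_blocking(lam1=1.0, lam2=1.0, mu=1.0, C1=2, C2=2, T=1):
    # Class-1 jobs blocked at their own system overflow to system 2 only while
    # system 2 keeps more than T servers free (partial sharing). T=0 is complete
    # sharing of spare capacity; T=C2 is strict partitioning.
    states = list(itertools.product(range(C1 + 1), range(C2 + 1)))
    idx = {s: k for k, s in enumerate(states)}
    Q = [[0.0] * len(states) for _ in states]

    def add(s, t, rate):
        Q[idx[s]][idx[t]] += rate
        Q[idx[s]][idx[s]] -= rate

    for n1, n2 in states:
        if n1 < C1:
            add((n1, n2), (n1 + 1, n2), lam1)
        elif C2 - n2 > T:
            add((n1, n2), (n1, n2 + 1), lam1)  # offloaded class-1 arrival
        if n2 < C2:
            add((n1, n2), (n1, n2 + 1), lam2)
        if n1 > 0:
            add((n1, n2), (n1 - 1, n2), n1 * mu)
        if n2 > 0:
            add((n1, n2), (n1, n2 - 1), n2 * mu)

    pi = steady_state(Q)
    return sum(p for (n1, n2), p in zip(states, pi)
               if n1 == C1 and C2 - n2 <= T)
```

With `T=C2` the two systems decouple and the class-1 blocking probability collapses to the Erlang-B formula; lowering `T` towards 0 reduces it, tracing the spectrum between partitioning and complete sharing.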
-
Seufert, M., Lange, S., Hoßfeld, T.: More than Topology: Joint Topology and Attribute Sampling and Generation of Social Network Graphs. Computer Communications. 73, 176-187 (2016).
Graph sampling refers to the process of deriving a small subset of nodes from a possibly huge graph in order to estimate properties of the whole graph from examining the sample. Whereas topological properties can already be obtained accurately by sampling, current approaches do not take possibly hidden dependencies between node topology and attributes into account. Especially in the context of online social networks, node attributes are of importance as they correspond to properties of the social network's users. Therefore, existing sampling algorithms can be extended to attribute sampling, but they still lack the ability to capture structural properties. Analyzing topology (e.g., node degree, clustering coefficient) and attribute properties (e.g., age, location) jointly can provide valuable insights into the social network and allows for a better understanding of social processes. As a major contribution, this work proposes a novel sampling algorithm which provides unbiased and reliable estimates of joint topological and attribute-based graph properties in a resource-efficient fashion. Furthermore, the obtained samples allow for the generation of synthetic graphs, which show high similarity to the original graph with respect to topology and attributes. The proposed sampling and generation algorithms are evaluated on real-world social network graphs, for which they prove to be effective.
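To illustrate why unbiasedness matters when topology and attributes are correlated, the sketch below (an illustrative example, not the paper's algorithm) compares a plain random walk with a Metropolis-Hastings random walk on a wheel graph whose high-degree hub carries a distinctive attribute. The graph and the binary attribute are invented for the demonstration.

```python
import random
from collections import defaultdict

def wheel_graph(n):
    # Ring of n nodes plus a hub (node n) connected to every ring node.
    adj = defaultdict(list)
    hub = n
    for i in range(n):
        j = (i + 1) % n
        adj[i].append(j)
        adj[j].append(i)
        adj[i].append(hub)
        adj[hub].append(i)
    return adj

def walk(adj, start, steps, rng, metropolis):
    # A plain random walk samples nodes proportional to degree; the
    # Metropolis-Hastings acceptance step corrects this to a uniform sample.
    u, visits = start, []
    for _ in range(steps):
        v = rng.choice(adj[u])
        if not metropolis or rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v
        visits.append(u)
    return visits

def estimate_mean_attribute(adj, attr, steps=200_000, seed=7, metropolis=True):
    rng = random.Random(seed)
    visits = walk(adj, start=0, steps=steps, rng=rng, metropolis=metropolis)
    return sum(attr[v] for v in visits) / len(visits)
```

On this graph the true mean attribute is 1/21; the degree-biased walk overestimates it by a factor of roughly five because the hub is visited far too often.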
-
Seufert, M., Griepentrog, T., Burger, V., Hoßfeld, T.: A Simple WiFi Hotspot Model for Cities. IEEE Communications Letters. 20, 384 - 387 (2016).
WiFi offloading has become increasingly popular. Many private and public institutions (e.g., libraries, cafes, restaurants) already provide an alternative free Internet link via WiFi, but also commercial services emerge to mitigate the load on mobile networks. Moreover, smart cities start to establish WiFi infrastructure for current and future civic services. In this work, the hotspot locations of ten diverse large cities are characterized, and a surprisingly simple model for the distribution of WiFi hotspots in an urban environment is derived.
-
Wamser, F.: Leistungsbewertung von Ressourcenmanagementstrategien für zelluläre und drahtlose Mesh-Netzwerke. Lecture Notes in Informatics (LNI). Ausgezeichnete Informatikdissertationen 2015, (2016).
Today's communication networks have to shoulder a large number of heterogeneous applications and services. This comes on top of the challenges that they should be cost-effective and must offer fast, high-quality Internet access. A specialized resource management can help in many of these cases and can represent a win-win situation for both parties, the user and the network. In my dissertation, I investigate several new resource management approaches for optimizing performance and increasing resource efficiency in access networks. The investigated approaches operate on different communication layers and pursue different goals. Finally, this work presents recommendations for network operators on how resource management can be designed for different network types and objectives, and what benefit can be expected in comparison with the required effort and the increased complexity.
-
Hoßfeld, T., Heegaard, P.E., Varela, M., Möller, S.: QoE beyond the MOS: an in-depth look at QoE via better metrics and their relation to MOS. Quality and User Experience. 1, (2016).
-
Hoßfeld, T., Heegaard, P.E., Varela, M., Möller, S.: Formal definition of QoE metrics. arXiv preprint arXiv:1607.00321. (2016).
-
Hoßfeld, T., Skorin-Kapov, L., Heegaard, P.E., Varela, M.: Definition of QoE Fairness in Shared Systems. IEEE Communications Letters. (2016).
-
Metzger, F., Liotou, E., Moldovan, C., Hoßfeld, T.: TCP video streaming and mobile networks: Not a love story, but better with context. Computer Networks. 109, 246--256 (2016).
-
Lange, S., Gebert, S., Zinner, T., Tran-Gia, P., Hock, D., Jarschel, M., Hoffmann, M.: Heuristic Approaches to the Controller Placement Problem in Large Scale SDN Networks. IEEE Transactions on Network and Service Management - Special Issue on Efficient Management of SDN and NFV-based Systems. 12, 4 - 17 (2015).
Software Defined Networking (SDN) marks a paradigm shift towards an externalized and logically centralized network control plane. A particularly important task in SDN architectures is that of controller placement, i.e., the positioning of a limited number of resources within a network in order to meet various requirements. These requirements range from latency constraints to failure tolerance and load balancing. In most scenarios, at least some of these objectives are competing, thus no single best placement is available and decision makers need to find a balanced trade-off. This work presents POCO, a framework for Pareto-based Optimal COntroller placement that provides operators with Pareto optimal placements with respect to different performance metrics. In its default configuration, POCO performs an exhaustive evaluation of all possible placements. While this is practically feasible for small and medium sized networks, realistic time and resource constraints call for an alternative in the context of large scale networks or dynamic networks whose properties change over time. For these scenarios, the POCO toolset is extended by a heuristic approach that is less accurate, but yields faster computation times. An evaluation of this heuristic is performed on a collection of real world network topologies from the Internet Topology Zoo. Utilizing a measure for quantifying the error introduced by the heuristic approach allows an analysis of the resulting trade-off between time and accuracy. Additionally, the proposed methods can be extended to solve similar virtual functions placement problems which appear in the context of Network Functions Virtualization (NFV).
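The exhaustive Pareto evaluation that POCO performs in its default configuration can be sketched in a few lines (a simplified illustration, not the POCO toolset itself): enumerate all k-controller placements on a small unweighted topology and keep those not dominated with respect to two example metrics, worst-case node-to-controller latency and controller load imbalance. Both metrics and the toy topology are simplifying assumptions.

```python
import itertools
from collections import deque

def bfs_dist(adj, src):
    # Hop distances from src in an unweighted graph.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def pareto_placements(adj, k):
    nodes = sorted(adj)
    dist = {u: bfs_dist(adj, u) for u in nodes}
    cands = []
    for placement in itertools.combinations(nodes, k):
        # Every node attaches to its nearest controller.
        latency = max(min(dist[c][v] for c in placement) for v in nodes)
        loads = {c: 0 for c in placement}
        for v in nodes:
            loads[min(placement, key=lambda c: dist[c][v])] += 1
        imbalance = max(loads.values()) - min(loads.values())
        cands.append((placement, latency, imbalance))
    # Keep placements that no other placement dominates in (latency, imbalance).
    return [p for p in cands
            if not any(q[1] <= p[1] and q[2] <= p[2]
                       and (q[1], q[2]) != (p[1], p[2]) for q in cands)]
```

The heuristic extension discussed in the paper replaces exactly this exhaustive `itertools.combinations` loop, which grows combinatorially with network size.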
-
Hoßfeld, T., Seufert, M., Sieber, C., Zinner, T., Tran-Gia, P.: Identifying QoE Optimal Adaptation of HTTP Adaptive Streaming Based on Subjective Studies. Computer Networks. 81, 320-332 (2015).
HTTP Adaptive Streaming (HAS) technologies, e.g., Apple HLS or MPEG-DASH, automatically adapt the delivered video quality to the available network. This reduces stalling of the video but additionally introduces quality switches, which also influence the user-perceived Quality of Experience (QoE). In this work, we conduct a subjective study to identify the impact of adaptation parameters on QoE. The results indicate that the video quality has to be maximized first, and that the number of quality switches is less important. Based on these results, a method to compute the QoE-optimal adaptation strategy for HAS on a per-user basis with mixed-integer linear programming is presented. This QoE-optimal adaptation enables the benchmarking of existing adaptation algorithms for any given network condition. Moreover, the investigated concept is extended to a multi-user IPTV scenario, and the question is answered whether video quality, and thereby QoE, can be shared in a fair manner among the involved users.
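The paper's benchmark is a mixed-integer linear program; the toy dynamic-programming sketch below only illustrates the same lexicographic objective (maximize the played-out quality level first, then minimize the number of quality switches) on a drastically simplified model that ignores buffer dynamics. The bitrate ladder and bandwidth trace are invented examples.

```python
def qoe_optimal_adaptation(bandwidth, bitrates):
    """Pick one quality level per segment: maximize the total quality level,
    breaking ties by the fewest quality switches (no buffer model)."""
    feasible = [[q for q, b in enumerate(bitrates) if b <= bw] for bw in bandwidth]
    assert all(feasible), "each segment needs at least one feasible level"
    # best[q] = ((quality_sum, -switches), path) over paths ending in level q;
    # tuples compare lexicographically, matching the objective's priority order.
    best = {q: ((q, 0), [q]) for q in feasible[0]}
    for t in range(1, len(bandwidth)):
        best = {
            q: max(((sq + q, sw - (q != p)), path + [q])
                   for p, ((sq, sw), path) in best.items())
            for q in feasible[t]
        }
    return max(best.values())[1]
```

For a bandwidth dip in the middle of a session, the sketch returns the intuitive optimum: stay at the top level, drop only for the constrained segment, and return immediately.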
-
Wamser, F., Blenk, A., Seufert, M., Zinner, T., Kellerer, W., Tran-Gia, P.: Modeling and Performance Analysis of Application-Aware Resource Management. International Journal of Network Management. 25, 223-241 (2015).
Application-aware resource management is the approach to tailor access networks to have characteristics beneficial for the running applications and services. This is achieved through the monitoring and integration of key performance indicators from the application layer within the network resource management. The aim is to increase user-perceived quality and network resource efficiency by traffic engineering with the help of these indicators. Using analytic and simulative approaches, this paper provides analysis methods for network operators to quantify the performance gains of alternative resource allocation algorithms that implement the application-aware concept. Network operators can use the proposed methods to evaluate possible performance gain trade-offs between investing in a pure capacity increase (over-provisioning) and the realization of an application-aware resource allocation. For this purpose, we model and analyze the application quality trade-offs of four algorithms for application-aware resource management at a single link in varying traffic situations. The algorithms are chosen with respect to different complexity and implementation level in order to cover the design space in a systematic way. The study of the algorithms focuses on the application-layer performance for the most used applications today, namely web browsing and video streaming with constant bit-rate as well as HTTP progressive streaming with variable bit-rate. Application quality trade-offs are analyzed in particular for a high resource utilization at a bottleneck link. The results confirm that application-aware resource management outperforms best-effort resource management in terms of QoE. Moreover, our study provides guidelines for the selection and configuration of the evaluated algorithms.
-
Hirth, M., Hoßfeld, T., Mellia, M., Schwartz, C., Lehrieder, F.: Crowdsourced Network Measurements: Benefits and Best Practices. Computer Networks Special Issue on Crowdsourcing. (2015).
Network measurements are of high importance both for the operation of networks and for the design and evaluation of new management mechanisms. Therefore, several approaches exist for running network measurements, ranging from analyzing live traffic traces from campus or Internet Service Provider (ISP) networks to performing active measurements on distributed testbeds, e.g., PlanetLab, or involving volunteers. However, each method falls short, offering only a partial view of the network. For instance, the scope of passive traffic traces is limited to an ISP’s network and customers’ habits, whereas active measurements might be biased by the population or node location involved. To complement these techniques, we propose to use (commercial) crowdsourcing platforms for network measurements. They permit a controllable, diverse and realistic view of the Internet and provide better control than do measurements with voluntary participants. In this study, we compare crowdsourcing with traditional measurement techniques, describe possible pitfalls and limitations, and present best practices to overcome these issues. The contribution of this paper is a guideline for researchers to understand when and how to exploit crowdsourcing for network measurements.
-
Seufert, M., Egger, S., Slanina, M., Zinner, T., Hoßfeld, T., Tran-Gia, P.: A Survey on Quality of Experience of HTTP Adaptive Streaming. IEEE Communications Surveys & Tutorials. 17, 469-492 (2015).
Changing network conditions pose severe problems to video streaming in the Internet. HTTP adaptive streaming (HAS) is a technology employed by numerous video services which relieves these issues by adapting the video to the current network conditions. It enables service providers to improve resource utilization and Quality of Experience (QoE) by incorporating information from different layers in order to deliver and adapt a video in its best possible quality. Thereby, it allows taking into account end user device capabilities, available video quality levels, current network conditions, and current server load. For end users, the major benefits of HAS compared to classical HTTP video streaming are reduced interruptions of the video playback and higher bandwidth utilization, which both generally result in a higher QoE. Adaptation is possible by changing the frame rate, resolution, or quantization of the video, which can be done with various adaptation strategies and related client- and server-side actions. The technical development of HAS, existing open standardized solutions, but also proprietary solutions are reviewed in this article as a foundation for deriving the QoE influence factors which emerge as a result of adaptation. The main contribution is a comprehensive survey of QoE-related works from the human-computer interaction and networking domains, which are structured according to the QoE impact of video adaptation. To be more precise, subjective studies which cover QoE aspects of adaptation dimensions and strategies are revisited. As a result, QoE influence factors of HAS and corresponding QoE models are identified, but also open issues and conflicting results are discussed. Furthermore, technical influence factors, which are often ignored in the context of HAS, affect perceptual QoE influence factors and are consequently analyzed. This survey gives the reader an overview of the current state of the art and recent developments. At the same time, it targets networking researchers who develop new solutions for HTTP video streaming or assess video streaming from a user-centric point of view. Therefore, the article is a major step towards truly improving HAS.
-
Hoßfeld, T., Metzger, F., Jarschel, M.: QoE for Cloud Gaming. IEEE Communications Society E-Letter. (2015).
Cloud Gaming combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users by processing the game in the cloud and streaming the contents to the player. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. However, at the same time the end device needs a broadband Internet connection and the ability to display a video stream properly. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for Quality of Service (QoS) in terms of bandwidth and latency for the underlying network. In particular, there is a strong interest in the player's Quality of Experience (QoE) by the involved stakeholders, i.e., the game providers and the network operators. Given similar pricing schemes, players are likely to be influenced by expected and experienced quality. Thus, a provider is interested in understanding QoE and in reacting to QoE problems by managing or adapting the service. There is also a strong academic interest, since QoE for cloud gaming, as well as managing QoE for cloud gaming, addresses a multitude of fascinating challenges in QoE research. One might think that the topic of online video games is equally popular in research, but efforts are often solely focused on cloud gaming and its subjective QoE through user studies. Compared to plain video streaming, the inner properties of video games are not that straightforward to observe from the outside. But to conduct proper measurements, it is essential to understand them.
-
Hoßfeld, T., Tran-Gia, P., Vukovic, M.: Special issue on crowdsourcing. Computer Networks: The International Journal of Computer and Telecommunications Networking. (2015).
-
Blenk, A., Basta, A., Kellerer, W., Zinner, T., Wamser, F., Tran-Gia, P.: Network Functions Virtualization (NFV) und Software Defined Networking (SDN): Forschungsfragen und Anwendungsfälle. ITG Mitgliederbeiträge / VDE dialog (invited article). 10-13 (2015).
-
Hoßfeld, T.: On Training the Crowd for Subjective Quality Studies. VQEG eLetter. 1, (2014).
“On Training the Crowd for Subjective Quality Studies” by Tobias Hoßfeld from the University of Würzburg presents new possibilities for quality evaluation by conducting subjective studies with the crowd of Internet users. The challenges of conducting training sessions for different methods of crowdsourcing are also elaborated in the article. The eLetter is online at the <a href='http://www.its.bldrdoc.gov/vqeg/eletter.aspx'>Video Quality Experts Group (VQEG) homepage</a>.
-
Hoßfeld, T., Timmerer, C.: Quality of Experience Assessment using Crowdsourcing. IEEE COMSOC MMTC R-Letter. 5, (2014).
A short review of the article 'Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing', edited by Tobias Hoßfeld and Christian Timmerer. Online at: http://committees.comsoc.org/mmc/r-letters/MMTC-RLetter-Jun2014.pdf. Reviewed article: Tobias Hoßfeld, Christian Keimel, Matthias Hirth, Bruno Gardlo, Julian Habigt, Klaus Diepold, Phuoc Tran-Gia, 'Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing', IEEE Transactions on Multimedia, vol. 16, no. 2, pp. 541-558, Feb. 2014.
-
Hoßfeld, T., Keimel, C., Hirth, M., Gardlo, B., Habigt, J., Diepold, K., Tran-Gia, P.: Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing. Transactions on Multimedia. 16, (2014).
Quality of Experience (QoE) in multimedia applications is closely linked to the end users’ perception and therefore its assessment requires subjective user studies in order to evaluate the degree of delight or annoyance as experienced by the users. QoE crowdtesting refers to QoE assessment using crowdsourcing, where anonymous test subjects conduct subjective tests remotely in their preferred environment. The advantages of QoE crowdtesting lie not only in the reduced time and costs for the tests, but also in a large and diverse panel of international, geographically distributed users in realistic user settings. However, conceptual and technical challenges emerge due to the remote test settings. Key issues arising from QoE crowdtesting include the reliability of user ratings, the influence of incentives, payment schemes and the unknown environmental context of the tests on the results. In order to counter these issues, strategies and methods need to be developed, included in the test design, and also implemented in the actual test campaign, while statistical methods are required to identify reliable user ratings and to ensure high data quality. This contribution therefore provides a collection of best practices addressing these issues based on our experience gained in a large set of conducted QoE crowdtesting studies. The focus of this article is in particular on the issue of reliability and we use video quality assessment as an example for the proposed best practices, showing that our recommended two-stage QoE crowdtesting design leads to more reliable results.
-
Hoßfeld, T., Burger, V., Hinrichsen, H., Hirth, M., Tran-Gia, P.: On the computation of entropy production in stationary social networks. Social Network Analysis and Mining. 4, (2014).
Completing their initial phase of rapid growth, social networks are expected to reach a plateau from where on they are in a statistically stationary state. Such stationary conditions may have different dynamical properties. For example, if each message in a network is followed by a reply in the opposite direction, the dynamics is locally balanced. Otherwise, if messages are ignored or forwarded to a different user, one may reach a stationary state with a directed flow of information. To distinguish between the two situations, we propose a quantity called entropy production that was introduced in statistical physics as a measure for non-vanishing probability currents in nonequilibrium stationary states. The proposed quantity closes a gap in characterizing online social networks. As a major contribution, we show the relation and difference between entropy production and existing metrics. The comparison shows that computationally intensive metrics like centrality can be approximated by entropy production for typical online social networks. To compute the entropy production from real-world measurements, the need for Bayesian inference and the limits of naïve estimates for those probability currents are shown. As a further contribution, a general scheme is presented to measure the entropy production in small-world networks using Bayesian inference. The scheme is then applied to a specific example of the R mailing list.
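The distinction between locally balanced and driven dynamics can be made concrete with a small numerical sketch (an illustration of the textbook quantity, not the paper's Bayesian estimation scheme): for a stationary Markov chain, the entropy production e = ½ Σ (π_i p_ij − π_j p_ji) ln(π_i p_ij / π_j p_ji) vanishes exactly under detailed balance and is positive for a directed probability current. The two example transition matrices are made up.

```python
import math

def stationary(P, iters=5000):
    # Power iteration for the stationary distribution of a transition matrix.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_production(P):
    pi = stationary(P)
    n = len(P)
    e = 0.0
    for i in range(n):
        for j in range(n):
            fwd, bwd = pi[i] * P[i][j], pi[j] * P[j][i]
            if fwd > 0 and bwd > 0:  # one-way transitions would give infinite EP
                e += 0.5 * (fwd - bwd) * math.log(fwd / bwd)
    return e
```

A symmetric chain (every message flow balanced by its reverse) yields zero; a chain that cycles 0 → 1 → 2 → 0 with high probability yields a clearly positive value.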
-
Schwerdel, D., Reuther, B., Zinner, T., Müller, P., Tran-Gia, P.: Future Internet Research and Experimentation: The G-Lab Approach. Computer Networks. (2014).
The German Lab (G-Lab) project aims to investigate architectural concepts and technologies for a new inter-networking architecture as an integrated approach between theoretic and experimental studies. Thus, G-Lab consists of two major fields of activities: research studies of future network components and the design and setup of experimental facilities. Both are controlled by the same community to ensure that the experimental facility meets the demands of the researchers. Researchers gain access to virtualized resources or may gain exclusive access to resources if necessary. We present the current setup of the experimental facility, describing the available hardware, the management of the platform, the utilization of the PlanetLab software, and the user management. Moreover, a new approach to set up and deploy virtual network topologies is described.
-
Zinner, T., Hoßfeld, T., Fiedler, M., Liers, F., Volkert, T., Khondoker, R., Schatz, R.: Requirement Driven Prospects for Realizing User-Centric Network Orchestration. Multimedia Tools and Applications. (2014).
The Internet’s infrastructure shows severe limitations when an optimal end user experience for multimedia applications should be achieved in a resource-efficient way. In order to realize truly user-centric networking, an information exchange between applications and networks is required. To this end, network-application interfaces need to be deployed that enable a better mediation of application data through the Internet. For smart multimedia applications and services, the application and the network should directly communicate with each other and exchange information in order to ensure an optimal Quality of Experience (QoE). In this article, we follow a use-case-driven approach towards user-centric network orchestration. We derive user, application, and network requirements for three complementary use cases: HD live TV streaming, video-on-demand streaming, and user authentication with high security and privacy demands, as typically required for paid multimedia services. We provide practical guidelines for achieving an optimal QoE efficiently in the context of these use cases. Based on these results, we demonstrate how to overcome one of the main limitations of today’s Internet by introducing the major steps required for user-centric network orchestration. Finally, we show conceptual prospects for realizing these steps by discussing a possible implementation with an inter-network architecture based on functional blocks.
-
Hoßfeld, T., Keimel, C., Timmerer, C.: Crowdsourcing Quality-of-Experience Assessments. IEEE Computer. 47, 98-102 (2014).
Crowdsourced quality-of-experience (QoE) assessments are more cost-effective and flexible than traditional in-lab evaluations, but require careful test design, innovative incentive mechanisms, and technical expertise to address various implementation challenges.
-
Hoßfeld, T., Seufert, M., Sieber, C., Zinner, T., Tran-Gia, P.: Close to Optimum? User-centric Evaluation of Adaptation Logics for HTTP Adaptive Streaming. PIK - Praxis der Informationsverarbeitung und Kommunikation. 37, 275-285 (2014).
HTTP Adaptive Streaming (HAS) is the de-facto standard for over-the-top (OTT) video streaming services. It allows reacting to fluctuating network conditions on short time scales by adapting the video bit rate in order to avoid stalling of the video playback. With HAS, the video content is split into small segments of a few seconds playtime each, which are available in different bit rates, i.e., quality level representations. Depending on the current conditions, the adaptation algorithm on the client side chooses the appropriate quality level and downloads the respective segment. This allows stalling, which is seen as the worst possible disturbance of HTTP video streaming, to be avoided to the greatest possible extent. Nevertheless, the user-perceived Quality of Experience (QoE) may be affected, namely by playing back lower qualities and by switching between different qualities. Therefore, adaptation algorithms are desired which maximize the user's QoE for the currently available network resources. Many downloading strategies have been proposed in the literature, but a solid user-centric comparison of these mechanisms among each other and with the global optimum is missing. The major contributions of this work are as follows. A proper analysis of the influence of quality switches and played-out representations on QoE is conducted by means of subjective user studies. The results suggest that, in order to optimize QoE, first, the quality level of the video stream has to be maximized and second, the number of quality switches should be minimized. Based on our findings, a QoE optimization problem is formulated and the performance of our proposed algorithm is compared to other algorithms and to the QoE-optimal adaptation.
-
Pham Ngoc, N., Nguyen Huu, T., Vu Quang, T., Tran Hoang, V., Truong Thu, H., Tran-Gia, P., Schwartz, C.: A new power profiling method and power scaling mechanism for energy-aware NetFPGA gigabit router. Computer Networks. (2014).
Today the ICT industry accounts for 2–4% of the worldwide carbon emissions, which are estimated to double in a business-as-usual scenario by 2020. A remarkable part of the large energy volume consumed in the Internet today is due to the over-provisioning of network resources such as routers, switches, and links to meet the stringent requirements on reliability. Therefore, performance and energy issues are important factors in designing gigabit routers for future networks. However, the design and prototyping of energy-efficient routers is challenging for multiple reasons, such as the lack of power measurements from live networks and of a good understanding of how the energy consumption varies under different traffic loads and switch/router configuration settings. Moreover, the exact energy saving gained by adopting different energy-efficient techniques in different hardware prototypes is often poorly known. In this article, we first propose a measurement framework that is able to quantify and profile the detailed energy consumption of sub-components in the NetFPGA OpenFlow switch. We then propose a new power-scaling algorithm that can adapt the operational clock frequencies, as well as the corresponding energy consumption of the FPGA core and the Ethernet ports, to the actual traffic load. We also propose a new energy profiling method, which allows studying the detailed power performance of network devices. Results show that our energy-efficient solution achieves a higher level of energy efficiency than some existing approaches, as the upper and lower bounds of the power consumption of the NetFPGA OpenFlow switch are shown to be 30% lower than those of the commercial HP Enterprise switch. Moreover, the new switch architecture can save up to 97% of the dynamic power consumption of the FPGA chip in the lowest frequency mode.
-
Jarschel, M., Zinner, T., Hoßfeld, T., Tran-Gia, P., Kellerer, W.: Interfaces, Attributes, and Use Cases: A Compass for SDN. IEEE Communications Magazine. 52, 210-217 (2014).
The term Software Defined Networking (SDN) is prevalent in today’s discussion about future communication networks. As with any new term or paradigm, however, no consistent definition regarding this technology has formed. The fragmented view on SDN results in legacy products being passed off by equipment vendors as SDN, academics mixing up the attributes of SDN with those of network virtualization, and users not fully understanding the benefits. Therefore, establishing SDN as a widely adopted technology beyond laboratories and insular deployments requires a compass to navigate the multitude of ideas and concepts that make up SDN today. The contribution of this article represents an important step toward such an instrument. It gives a thorough definition of SDN and its interfaces as well as a list of its key attributes. Furthermore, a mapping of interfaces and attributes to SDN use cases is provided, highlighting the relevance of the interfaces and attributes for each scenario. This compass gives guidance to a potential adopter of SDN on whether SDN is in fact the right technology for a specific use case.
-
Metzger, F., Rafetseder, A., Romirer-Maierhofer, P., Tutschku, K.: Exploratory Analysis of a GGSN’s PDP Context Signaling Load. Journal of Computer Networks and Communications. (2014).
-
Hoefling, M., Menth, M., Hartmann, M.: A Survey of Mapping Systems for Locator/Identifier Split Internet Routing. IEEE Communications Surveys & Tutorials. 14, 1842 - 1858 (2013).
The locator/identifier split is a core principle of many recently proposed routing architectures for a scalable future Internet. It splits the function of today’s IP addresses into two separate pieces. End-hosts are addressed using identifiers which are not globally routable while network attachment points have globally routable locators assigned. In most architectures, either the sending host or an intermediate node has to query a mapping system to obtain locators for identifiers. Such a mapping system must be fast, reliable, secure, and may be able to relay data packets. In this paper, we propose requirements and a general taxonomy for mapping systems and use it to provide a survey on recent proposals. We address general aspects of mapping systems and point out remaining research opportunities.
-
Lehrieder, F., Menth, M.: RCFT: A Termination Method for Simple PCN-Based Flow Control. Journal of Network and Systems Management. (2013).
Pre-congestion notification (PCN) conveys information about load conditions in Differentiated Services IP networks to boundary nodes. This information is currently used for admission control and flow termination. Flow termination complements admission control, e.g., in case of failures when admitted traffic is rerouted and causes overload on backup paths. Existing approaches for PCN-based admission control and flow termination operate on ingress-egress aggregates and rely on a signalling protocol that regularly reports measured PCN feedback from all egress nodes to all ingress nodes. However, this signalling protocol is neither defined nor available, and the methods also have other intrinsic shortcomings that result from their operation on ingress-egress aggregates. While there is already a PCN-based admission control method that works without additional signalling of measured PCN feedback, a solid flow termination method with that property is still missing. In this paper we present the novel regular-check-based flow termination method (RCFT). It does not rely on measured PCN feedback, fills the identified gap, and allows for a PCN architecture without signalling of measured feedback. We explain RCFT in detail and investigate its termination behavior under various conditions. Moreover, we study the use of PCN-based flow control for on/off traffic. These results are of a general nature and apply to any system using PCN-based flow termination.
-
Schwartz, C., Hoßfeld, T., Lehrieder, F., Tran-Gia, P.: Angry Apps: The Impact of Network Timer Selection on Power Consumption, Signalling Load, and Web QoE. Journal of Computer Networks and Communications. 1 - 22 (2013).
The popularity of smartphones and mobile applications has experienced a considerable growth during recent years and this growth is expected to continue in the future. Since smartphones have only very limited energy resources, battery efficiency is one of the determining factors for a good user experience. Therefore, some smartphones tear down connections to the mobile network soon after a completed data transmission to reduce the power consumption of their transmission unit. However, frequent connection re-establishments caused by apps which send or receive small amounts of data often lead to a heavy signalling load within the mobile network. One of the major contributions of this article is the investigation of the resulting trade-off between energy consumption at the smartphone and the generated signalling traffic in the mobile network. We explain that this trade-off can be controlled by the connection release timeout and study the impact of this parameter for a number of popular apps that cover a wide range of traffic characteristics in terms of bandwidth requirements and resulting signalling traffic. Finally, we study the impact of the timer settings on QoE for web traffic. This is an important aspect since connection establishments do not only lead to signalling traffic, but they also increase the load time of web pages.
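The timeout trade-off described above can be sketched with a toy model: every idle gap shorter than the release timer keeps the radio on for the whole gap, while a longer gap costs one timer's worth of tail time plus a connection re-establishment (one signalling event). The gap values and the simple two-state radio model below are illustrative assumptions, not the paper's measurement setup.

```python
# Illustrative sketch (not the paper's model): how the connection release
# timeout trades radio-on tail time (battery) against signalling events.
def timer_tradeoff(gaps, timeout):
    """gaps: idle gaps between transmissions (s); timeout: release timer (s).
    Returns (tail_time, reconnections)."""
    tail_time, reconnections = 0.0, 0
    for g in gaps:
        if g > timeout:
            tail_time += timeout      # radio stays on until the timer fires
            reconnections += 1        # next transfer re-establishes the connection
        else:
            tail_time += g            # radio never released during this gap
    return tail_time, reconnections

gaps = [0.5, 12.0, 0.2, 30.0, 1.0, 45.0]
for t in (1, 5, 15):
    print(t, timer_tradeoff(gaps, t))
```

A small timeout minimizes tail time but maximizes reconnections; sweeping the timer exposes exactly the trade-off the article investigates.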
-
Wamser, F., Hock, D., Seufert, M., Staehle, B., Pries, R., Tran-Gia, P.: Using Buffered Playtime for QoE-Oriented Resource Management of YouTube Video Streaming. Transactions on Emerging Telecommunications Technologies. 24, 288–302 (2013).
YouTube is the most important online platform for streaming video clips. The popularity and the continuously increasing number of users pose new challenges for Internet Service Providers (ISPs). In particular, in access networks where the transmission resources are limited and the providers are interested in reducing their operational expenditure, it is worthwhile to optimize the network efficiently for popular services such as YouTube. In this paper, we propose different resource management mechanisms to improve the Quality of Experience (QoE) of YouTube users. In particular, we investigate the benefit of cross-layer resource management actions at the client and in the access network for YouTube video streaming. The proposed algorithms are evaluated in a wireless mesh testbed. The results show how to improve the YouTube QoE for the users with the help of client-based or network-based control actions.
-
Menth, M., Hartmann, M., Klein, D.: Global Locator, Local Locator, and Identifier Split (GLI-Split). Future Internet. 5, 67-94 (2013).
The locator/identifier split is an approach for a new addressing and routing architecture to make routing in the core of the Internet more scalable. Based on this principle, we developed the GLI-Split framework, which separates the functionality of current IP addresses into a stable identifier and two independent locators, one for routing in the Internet core and one for edge networks. This makes routing in the Internet more stable and provides more flexibility for edge networks. GLI-Split can be incrementally deployed and it is backward-compatible with the IPv6 Internet. We describe its architecture, compare it to other approaches, present its benefits, and finally present a proof-of-concept implementation of GLI-Split.
-
Hock, D., Wamser, F., Seufert, M., Pries, R., Tran-Gia, P.: OC²E²AN: Optimized Control Center for Experience Enhancements in Access Networks. PIK - Praxis der Informationsverarbeitung und Kommunikation. 36, 40 (2013).
-
Klein, D., Tran-Gia, P., Hartmann, M.: Aktuelles Schlagwort: Big Data. Informatik-Spektrum. 36, 319-323 (2013).
Alongside Cloud Computing and Crowdsourcing, Big Data is one of the most important new technology drivers and is therefore examined more closely in this Aktuelles Schlagwort. We first discuss the definition of Big Data and explain the differences from traditional approaches. We then present the underlying technologies and give a brief overview of the scientific challenges in this area.
-
Tran-Gia, P., Hoßfeld, T., Hartmann, M., Hirth, M.: Crowdsourcing and its Impact on Future Internet Usage. it - Information Technology. 55, 139-145 (2013).
Crowdsourcing is an emerging service platform and business model in the Internet. In contrast to outsourcing, where a job is performed by a designated contractor, with Crowdsourcing, jobs are outsourced to a large, anonymous crowd of workers, the so-called human cloud. The rise of Crowdsourcing and its seamless integration into current workflows may have a huge impact on the Internet and on society, and will be a guiding paradigm that can shape the evolution of work in the years to come. In this article, we discuss applications and use cases of Crowdsourcing to demonstrate the impact on Internet usage. Novel measurement approaches are presented and the impact of Crowdsourcing on Internet traffic is evaluated by measuring the activity of a particular Crowdsourcing platform. New technical solutions are necessary for the operation of efficient, distributed Crowdsourcing platforms. Special attention is drawn to the integration of machine clouds and human crowds, and appropriate inter-cloud solutions. Finally, we discuss current research challenges from a scientific and from the platform provider’s point of view.
-
Casas, P., Seufert, M., Schatz, R.: YOUQMON: A System for On-line Monitoring of YouTube QoE in Operational 3G Networks. ACM SIGMETRICS Performance Evaluation Review. 41, 44-46 (2013).
YouTube is changing the way operators manage network performance monitoring. In this paper we introduce YOUQMON, a novel on-line monitoring system for assessing the Quality of Experience (QoE) of HSPA/3G customers watching YouTube videos, using network-layer measurements only. YOUQMON combines passive traffic analysis techniques to detect stalling events in YouTube video streams with a QoE model to map stallings into a Mean Opinion Score reflecting the end-user experience. We evaluate the stalling detection performance of YOUQMON with hundreds of YouTube video streams, and present results showing the feasibility of performing real-time YouTube QoE monitoring in an operational mobile broadband network.
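A QoE model of the kind mentioned here maps stalling statistics to a Mean Opinion Score. One published exponential mapping for YouTube stalling (from related work by Hoßfeld et al.) has the shape sketched below; whether YOUQMON uses exactly these coefficients is an assumption on my part, so treat the numbers as illustrative.

```python
import math

# One published stalling-to-MOS mapping of this general form (Hossfeld et al.);
# the exact model used inside YOUQMON may differ.
def stalling_mos(avg_length_s, num_stallings):
    """avg_length_s: mean stalling duration (s); num_stallings: event count."""
    return 3.5 * math.exp(-(0.15 * avg_length_s + 0.19) * num_stallings) + 1.5

print(round(stalling_mos(0, 0), 2))   # no stalling -> MOS 5.0 (excellent)
print(round(stalling_mos(3, 2), 2))   # two 3-second stallings hurt badly
```

The exponential decay captures the empirical finding that even a few stalling events push user ratings toward the bottom of the scale.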
-
Zinner, T., Hoßfeld, T., Tran-Gia, P., Kellerer, W.: Software Defined Networks - Das Internet flexibler gestalten und dynamischer steuern. ITG Mitgliederbeilage / VDE dialog (invited article). 6-9 (2013).
The flexibility of today's Internet technology is limited, above all by its rigid architecture and poor resource utilization. This could change with the adoption of Software Defined Networks (SDN), in which the control of networks and data flows is moved from the existing network components to a central logical entity.
-
Hoßfeld, T., Hirth, M., Tran-Gia, P.: Crowdsourcing - Modell einer neuen Arbeitswelt im Internet. Informatik Spektrum, Wirtschaftsinformatik & Management. 5, (2013).
The Internet has already produced many highly successful business models. Until now, however, most new applications or services have been based on technical innovations, for example faster computers, shorter connection times, or novel algorithmic approaches such as Google's PageRank algorithm. Only with social media such as YouTube or Facebook did users become an integral part of the value chain, without which the "product" does not work. Users have a similarly strong influence on the success of a company in business models based on the Crowdsourcing paradigm, which is examined in more detail in the following.
-
Nguyen, H.-T., Pham Ngoc, N., Truong, T.-H., Tran Ngoc, T., Nguyen Minh, D., Giang Nguyen, V., Nguyen, T.-H., Ngo Quynh, T., Hock, D., Schwartz, C.: Modeling and Experimenting Combined Smart Sleep and Power Scaling Algorithms in Energy-aware Data Center Networks. Simulation Modelling Practice and Theory. 39, (2013).
-
Rafetseder, A., Metzger, F., Pühringer, L., Tutschku, K., Zhuang, Y., Cappos, J.: Sensorium – A Generic Sensor Framework. PIK - Praxis der Informationsverarbeitung und Kommunikation. 36, (2013).
-
Hock, D., Hartmann, M., Menth, M., Pióro, M., Tomaszewski, A., Żukowski, C.: Comparison of IP-Based and Explicit Paths for One-to-One Fast Reroute in MPLS Networks. Telecommunication Systems (TS) Journal. 52, 947-958 (2013).
Primary and backup paths in MPLS fast reroute (FRR) may be established as shortest paths according to the administrative link costs of the IP control plane, or as explicitly calculated arbitrary paths. In both cases, the path layout can be optimized so that the maximum link utilization for a specific traffic matrix and for a set of considered failure scenarios is minimized. In this paper, we propose a linear program for the optimization of the path layout for explicitly calculated paths, which can either produce single paths and route entire traffic along those paths, or generate multiple paths and spread the traffic among those paths providing load balancing. We compare the resulting lowest maximum link utilization in both cases with the lowest maximum link utilization that can be obtained by optimizing unique IP-based paths. Our results quantify the gain in resource usage efficiency provided by optimized explicit multiple paths or explicit single paths as compared to optimized IP-based paths. Furthermore, we investigate whether explicit path layouts cause an increased configuration effort compared to IP-based layouts and, if so, to what extent.
-
Menth, M., Lehrieder, F.: Performance of PCN-Based Admission Control under Challenging Conditions. IEEE/ACM Transactions on Networking. 20(2), 422-435 (2012).
Pre-congestion notification (PCN) is a packet marking technique for IP networks to notify egress nodes of a so-called PCN domain whether the traffic rate on some links exceeds certain configurable bounds. This feedback is used by decision points for admission control (AC) to block new flows when the traffic load is already high. PCN-based AC is simpler than other AC methods because interior routers do not need to keep per-flow state. Therefore, it is currently being standardized by the IETF. We discuss various realization options and analyze their performance in the presence of flash crowds or with multipath routing by means of simulation and mathematical modeling. Such situations can be aggravated by insufficient flow aggregation, long round-trip times, on/off traffic, delayed media, inappropriate marker configuration, and smoothed feedback.
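The marking idea behind PCN can be illustrated with a token-bucket meter: the bucket refills at the configured admissible rate, and packets that find too few tokens are marked as excess traffic. The packet trace, sizes, and bucket depth below are illustrative assumptions, not the standardized marker configuration.

```python
# Minimal sketch of PCN-style threshold marking (illustrative, not the
# standardized algorithm): a meter on an interior link marks packets once
# the measured rate exceeds the configured admissible rate.
def pcn_mark(packet_times, packet_size, admissible_rate):
    """Token bucket refilled at admissible_rate (bytes/s); a packet is
    PCN-marked when the bucket holds fewer tokens than its size."""
    bucket = admissible_rate            # bucket depth: one second's worth
    tokens, last = bucket, 0.0
    marks = []
    for t in packet_times:
        tokens = min(bucket, tokens + (t - last) * admissible_rate)
        last = t
        if tokens + 1e-6 >= packet_size:
            tokens -= packet_size
            marks.append(False)         # within the admissible rate
        else:
            marks.append(True)          # excess-rate packet: marked
    return marks

# 20 packets/s of 100 B = 2000 B/s offered against 1000 B/s admissible:
marks = pcn_mark([i * 0.05 for i in range(100)], 100, 1000)
print(sum(marks), "of", len(marks), "packets marked")
```

Once the initial bucket drains, the marked share approaches the excess fraction of the offered rate; an egress node would report this fraction to the decision point, which blocks new admissions while it stays above zero.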
-
Fiedler, M., Hoßfeld, T., Norros, I., Rodrigues, J., Rogério Pereira, P.: The Network of Excellence Euro-NF and its Specific Joint Research Projects. ICST Global Community Magazine. (2012).
This article presents the European FP7 Network of Excellence “Euro-NF” (Networks of the Future) and reviews its set of activities. Specific attention is paid to the concept of Specific Joint Research Projects (SJRP), a series of small but focused projects, integrating at least three Euro-NF partners and targeting joint seminal work, publications as well as full-size follow-up projects. Further to the description of the SJRP concept, a set of three selected SJRP from different areas are presented in detail with respect to motivation, goal, contents, results, and impact.
-
Hirth, M., Hoßfeld, T., Tran-Gia, P.: Analyzing Costs and Accuracy of Validation Mechanisms for Crowdsourcing Platforms. Mathematical and Computer Modelling. (2012).
Crowdsourcing is becoming more and more important for commercial purposes. With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge work force and a large knowledge base can be easily accessed and utilized. But due to the anonymity of the workers, they are encouraged to cheat the employers in order to maximize their income. Thus, in this paper we analyze two widely used crowd-based approaches to validate the submitted work. Both approaches are evaluated with regard to their detection quality, their costs, and their applicability to different types of typical crowdsourcing tasks.
-
Lehrieder, F., Dán, G., Hoßfeld, T., Oechsner, S., Singeorzan, V.: Caching for BitTorrent-like P2P Systems: A Simple Fluid Model and its Implications. IEEE/ACM Transactions on Networking. 20(4), (2012).
Peer-to-peer file-sharing systems are responsible for a significant share of the traffic between Internet service providers (ISPs) in the Internet. In order to decrease their peer-to-peer-related transit traffic costs, many ISPs have deployed caches for peer-to-peer traffic in recent years. We consider how the different types of peer-to-peer caches (those already available on the market and those expected to become available in the future) can possibly affect the amount of inter-ISP traffic. We develop a fluid model that captures the effects of the caches on the system dynamics of peer-to-peer networks and show that caches can have adverse effects on the system dynamics depending on the system parameters. We combine the fluid model with a simple model of inter-ISP traffic and show that the impact of caches cannot be accurately assessed without considering the effects of the caches on the system dynamics. We identify scenarios when caching actually leads to increased transit traffic. Motivated by our findings, we propose a proximity-aware peer-selection mechanism that avoids the increase of the transit traffic and improves the cache efficiency. We support the analytical results by extensive simulations and experiments with real BitTorrent clients.
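The flavor of such fluid models can be seen in the classic Qiu–Srikant model for BitTorrent, which tracks leecher and seed populations with two coupled differential equations; the paper extends models of this type with cache effects. Below, a cache is naively represented as extra upload bandwidth, and all parameter values are illustrative, not taken from the paper.

```python
# Euler integration of a Qiu-Srikant-style BitTorrent fluid model.
# x: leechers, y: seeds; a cache is crudely modeled as extra upload
# bandwidth mu_cache. All parameters are illustrative assumptions.
def fluid_model(lam=1.0, mu=0.5, c=10.0, theta=0.001, gamma=2.0,
                eta=1.0, mu_cache=0.0, dt=0.01, horizon=200.0):
    x = y = 0.0
    for _ in range(int(horizon / dt)):
        # downloads complete at the rate of the binding constraint:
        # aggregate download capacity vs. aggregate upload capacity (+ cache)
        service = min(c * x, mu * (eta * x + y) + mu_cache)
        x += dt * (lam - theta * x - service)   # arrivals minus aborts/finishes
        y += dt * (service - gamma * y)         # finishes minus seed departures
    return x, y

x_plain, _ = fluid_model()
x_cached, _ = fluid_model(mu_cache=0.6)
print(round(x_plain, 2), round(x_cached, 2))
```

In this upload-constrained setting the extra cache bandwidth shrinks the steady-state leecher population (faster downloads); the paper's point is that with other parameter choices and peer behaviors a cache can also have adverse effects on the dynamics.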
-
Hoßfeld, T., Schatz, R., Varela, M., Timmerer, C.: Challenges of QoE Management for Cloud Applications. IEEE Communications Magazine. April issue, (2012).
Cloud computing is currently gaining enormous momentum due to a number of promised benefits: ease of use in terms of deployment, administration and maintenance, high scalability and flexibility to create new services. However, as more personal and business applications migrate to the Cloud, the service quality will become an important differentiator between providers. In particular, Quality of Experience (QoE) as perceived by users has the potential to become the guiding paradigm for managing quality in the Cloud. In this article, we discuss technical challenges emerging from shifting services to the Cloud, as well as how this shift impacts QoE and QoE management. Thereby, a particular focus is on multimedia Cloud applications. Together with a novel QoE-based classification scheme of cloud applications, these challenges drive the research agenda on QoE management for Cloud applications.
-
Jarschel, M., Schlosser, D., Scheuring, S., Hoßfeld, T.: Gaming in the clouds: QoE and the users’ perspective. Mathematical and Computer Modelling. (2012).
Cloud Gaming is a new kind of service which combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users remotely from a data center. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. The end device only needs a broadband internet connection and the ability to display High Definition (HD) video. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for service quality in terms of bandwidth and latency for the underlying network. In this paper we present the results of a subjective user study we conducted into the user-perceived quality of experience (QoE) in Cloud Gaming. We design a measurement environment that emulates this new type of service, define tests for users to assess the QoE, and derive Key Influence Factors (KFI) as well as the influence of content and perception from our results.
-
Pussep, K., Lehrieder, F., Gross, C., Oechsner, S., Guenther, M., Meyer, S.: Cooperative Traffic Management for Video Streaming Overlays. Computer Networks. 56(3), 1118–1130 (2012).
Peer-to-Peer (P2P) based overlays often ignore the boundaries of network domains and make traffic management challenging for network operators. Locality-aware techniques are a promising approach to alleviate this impact, but often benefit only network operators and fail to provide similar benefits to end-users and overlay providers. This is especially severe with video streaming overlays, which are responsible for a large amount of Internet traffic. In this paper we present and evaluate a collaborative approach where a network operator measures the behavior of overlay users and promotes a subset of them in terms of up- and download bandwidth. This creates an incentive for users and overlay providers to cooperate by using locality awareness according to the operator's policies. We evaluate our approach both with a real application and via extensive simulations to analyze the user selection metrics and the impact on different network operators. Our study shows that this cooperative traffic management approach leads to a situation that is beneficial for users, content providers, and network operators.
-
Biernacki, A., Metzger, F., Tutschku, K.: On the influence of network impairments on YouTube video streaming. Journal of Telecommunications and Information Technology. (2012).
Video sharing services like YouTube have become very popular, which results in a drastic shift in Internet traffic statistics. When transmitting video content over packet based networks, stringent quality of service (QoS) constraints must be met in order to provide a level of quality comparable to traditional broadcast television. However, the packet transmission is influenced by delays and losses of data packets, which can have a devastating influence on the perceived quality of the video. Therefore, we conducted an experimental evaluation of HTTP based video transmission, focusing on how it reacts to packet delay and loss. Through this analysis we investigated how long video playback is stalled and how often re-buffering events take place. Our analysis revealed threshold levels for packet delay, packet loss, and network throughput which should not be exceeded in order to preserve smooth video transmission.
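The stalling behavior studied here follows from simple playout-buffer dynamics, which can be sketched as follows. The rebuffering threshold, the rates, and the resume rule are illustrative assumptions, not the paper's measurement setup.

```python
# Toy playout-buffer model: video arrives at net_rate and is consumed at
# video_rate (same units, e.g. kbit/s); playback stalls when the buffer
# empties and resumes once `threshold` seconds are buffered again.
def simulate_playout(net_rate, video_rate, video_len, threshold=2.0, dt=0.01):
    fill = net_rate / video_rate   # seconds of video fetched per wall second
    buffered = played = stall_time = 0.0
    playing, stalls = False, 0     # initial buffering is not counted as a stall
    while played < video_len - 1e-9:
        remaining = video_len - played - buffered
        buffered += min(fill * dt, max(0.0, remaining))
        if playing:
            step = min(dt, buffered)
            played += step
            buffered -= step
            if buffered <= 0 and played < video_len - 1e-6:
                playing, stalls = False, stalls + 1   # buffer underrun
        else:
            stall_time += dt
            if buffered >= threshold or played + buffered >= video_len - 1e-9:
                playing = True
    return stalls, round(stall_time, 1)

print(simulate_playout(1200, 1000, 63))  # throughput above bitrate: no stalls
print(simulate_playout(800, 1000, 63))   # 20% short on bandwidth: rebuffering
```

Note that `stall_time` includes the initial buffering delay, while `stalls` counts only mid-playback interruptions; sweeping `net_rate` downward reproduces the kind of throughput threshold the paper reports.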
-
Hoßfeld, T., Liers, F., Schatz, R., Staehle, B., Staehle, D., Volkert, T., Wamser, F.: Quality of Experience Management for YouTube: Clouds, FoG and the AquareYoum. PIK - Praxis der Informationsverarbeitung und Kommunikation (PIK). (2012).
Over the last decade, Quality of Experience (QoE) has become a new, central paradigm for understanding the quality of networks and services. In particular, the concept has attracted the interest of communication network and service providers, since being able to guarantee good QoE to customers provides an opportunity for differentiation. In this paper we investigate the potential as well as the implementation challenges of QoE management in the Internet. Using the YouTube video streaming service as example, we discuss the different elements that are required for the realization of the paradigm shift towards truly user-centric network orchestration. To this end, we elaborate QoE management requirements for two complementary network scenarios (wireless mesh Internet access networks vs. global Internet delivery) and provide a QoE model for YouTube taking into account impairments like stalling and initial delay. We present two YouTube QoE monitoring approaches operating on the network and the end user level. Finally, we demonstrate how QoE can be dynamically optimized in both network scenarios with two exemplary concepts, AquareYoum and FoG, respectively. Our results show how QoE management can truly improve the user experience while at the same time increasing the efficiency of network resource allocation.
-
Hoßfeld, T., Hirth, M., Tran-Gia, P.: Aktuelles Schlagwort: Crowdsourcing. Informatik Spektrum. 35, (2012).
Since the Internet was opened to the general public in the early 1990s, it has developed rapidly. New paradigms such as Peer-to-Peer (P2P), Web 2.0, or Cloud Computing have led to novel services and applications that have long been established among users and account for a large share of Internet traffic. Examples include P2P applications such as BitTorrent for exchanging huge amounts of data, Skype for voice and video conferencing, social media such as Facebook or Twitter, and cloud applications such as DropBox as a synchronized network file system for distributed computers, or Cloud Gaming. Currently, a new buzzword is emerging on the Internet: "Crowdsourcing". Some tasks and problems that are relatively easy for humans to solve cannot yet be handled algorithmically, even by modern machine clouds. These include text and image recognition, the verification, analysis, and categorization of video content, the creation of knowledge, the improvement and creation of products, and scientific research. These constitute application areas of Crowdsourcing. Instead of (or in addition to) machine clouds, the mass of Internet users is integrated into the value chain; one also speaks of human clouds. Alongside social media, Crowdsourcing is one of the most important currently emerging technologies and business models on the Internet, and it will fundamentally change the future of work and its organization. The economic and societal importance of Crowdsourcing platforms is growing steadily and fosters the emergence of new forms of work organization. Jobs on Crowdsourcing platforms have a much smaller granularity than those in the traditional outsourcing or out-tasking domain.
This Aktuelles Schlagwort examines the term "Crowdsourcing" in more detail, first introducing important concepts before considering the application areas of Crowdsourcing as well as its practical significance and future development.
-
Felipe Botero, J., Hesselbach, X., Duelli, M., Schlosser, D., Fischer, A., de Meer, H.: Energy Efficient Virtual Network Embedding. IEEE Communications Letters. PP, (2012).
Waste of energy due to over-provisioning and over-dimensioning of network infrastructures has recently stimulated Internet Service Providers' (ISPs) interest in reducing energy consumption. By means of resource consolidation, network virtualization based architectures will enable energy savings. In this letter, we extend the well-known virtual network embedding (VNE) problem to energy awareness and propose a mixed integer program (MIP) which provides optimal energy-efficient embeddings. Simulation results show the energy gains of the proposed MIP over the existing cost-based VNE approach.
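The VNE idea (without the MIP machinery) can be illustrated by brute force on a toy instance: enumerate node mappings, keep those satisfying CPU and bandwidth constraints, and prefer the mapping that powers on the fewest additional substrate nodes. The topology, capacities, and the restriction to single-hop substrate links are simplifying assumptions purely for illustration.

```python
import itertools

# Brute-force sketch of energy-aware virtual network embedding on a toy
# substrate (illustrative; the paper solves this exactly with a MIP).
def embed(vnodes, vlinks, snodes, slinks, already_on=frozenset()):
    """vnodes: {v: cpu}, vlinks: {(v1, v2): bw}, snodes: {s: cpu},
    slinks: {(s1, s2): bw} (undirected, single-hop paths only).
    Returns the feasible mapping that powers on the fewest new nodes."""
    best, best_new = None, float("inf")
    vs = list(vnodes)
    for perm in itertools.permutations(snodes, len(vs)):
        m = dict(zip(vs, perm))              # each vnode on a distinct snode
        if any(vnodes[v] > snodes[m[v]] for v in vs):
            continue                         # CPU capacity violated
        if any(slinks.get(tuple(sorted((m[a], m[b]))), 0) < bw
               for (a, b), bw in vlinks.items()):
            continue                         # bandwidth on the direct edge
        newly_on = len(set(m.values()) - already_on)
        if newly_on < best_new:              # energy objective: fewest new nodes
            best, best_new = m, newly_on
    return best

snodes = {"A": 4, "B": 4, "C": 2, "D": 4}
slinks = {("A", "B"): 10, ("B", "C"): 5, ("C", "D"): 5, ("A", "D"): 2}
print(embed({"x": 2, "y": 3}, {("x", "y"): 4}, snodes, slinks,
            already_on=frozenset({"C"})))
```

With node C already powered on, the cheapest feasible embedding reuses it and activates only one extra node, which is exactly the consolidation effect the energy-aware objective rewards.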
-
Duelli, M., Ott, J., Qin, X., Weber, E.: MuLaNEO: Planung und Optimierung von Mehrdienstnetzen. PIK - Praxis der Informationsverarbeitung und Kommunikation. 34, 138-139 (2011).
Telecommunication providers need networks that support multiple services/technologies and that are resilient and at the same time cost-efficient within given requirement profiles. Planning such networks can be reduced to a combinatorial optimization problem of exponential complexity. In this contribution we present an open and modular software for the simplified development and comparison of such planning methods.
-
Zinner, T., Tutschku, K., Nakao, A., Tran-Gia, P.: Performance Evaluation of Packet Re-ordering on Concurrent Multipath Transmissions for Transport Virtualization. JCNDS Special Issue on: Network Virtualization - Concepts and Performance Aspects. Vol. 6, 322-340 (2011).
From the viewpoint of communication networks, Network Virtualization (NV) extends beyond pure operational issues and addresses many impasses of the current Internet. The idea of Transport Virtualization (TV) advances the capabilities of NV and enables independence from a specific network transport resource. The independence is achieved by pooling multiple transport resources and selecting the best resources for exclusive or concurrent use. However, the application and selection of concurrent paths is rather complex and introduces inevitable packet re-ordering due to different stochastic delay characteristics on the used paths. Packets arriving at the destination out-of-order have to be stored in a re-sequencing buffer before reassembled packets are forwarded to the application. We provide a simulation framework based on discrete event simulation which allows an evaluation of the re-sequencing buffer occupancy. Further, we perform an analysis of the fundamental behaviors and factors for packet re-ordering in concurrent multipath transmissions.
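The re-sequencing effect can be reproduced with a few lines of simulation: stripe packets round-robin over two paths with different random delays, release them in sequence-number order at the receiver, and record the buffer occupancy. The delay distributions and striping policy below are illustrative assumptions, not the paper's framework.

```python
import random

# Toy simulation (illustrative, not the paper's framework): packets striped
# round-robin over two paths; the receiver buffers out-of-order arrivals
# and we record the peak re-sequencing buffer occupancy.
def resequencing_occupancy(n_packets=10000, seed=1):
    random.seed(seed)
    arrivals = []
    for seq in range(n_packets):           # one packet per second, alternating
        delay = random.uniform(5, 10) if seq % 2 == 0 else random.uniform(20, 40)
        arrivals.append((seq * 1.0 + delay, seq))
    arrivals.sort()                        # order of arrival at the receiver
    buffer, next_seq, max_occ = set(), 0, 0
    for _, seq in arrivals:
        buffer.add(seq)
        while next_seq in buffer:          # release the in-order head of line
            buffer.remove(next_seq)
            next_seq += 1
        max_occ = max(max_occ, len(buffer))
    return max_occ

print(resequencing_occupancy())
```

The peak occupancy grows with the delay difference between the paths divided by the packet spacing, which is the fundamental relationship the paper analyzes.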
-
Lehrieder, F., Oechsner, S., Hoßfeld, T., Staehle, D., Despotovic, Z., Kellerer, W., Michel, M.: Mitigating Unfairness in Locality-Aware Peer-to-Peer Networks. International Journal of Network Management (IJNM), Special Issue on Economic Traffic Management. 21(1), (2011).
Locality-awareness is considered as a promising approach to increase the efficiency of content distribution by peer-to-peer (P2P) networks, e.g., BitTorrent. It is intended to reduce the inter-domain traffic which is costly for Internet service providers (ISPs) and to simultaneously increase the performance from the viewpoint of the P2P users, i.e., to shorten download times. This win-win situation should be achieved by a preferred exchange of information between peers which are located close to each other in the underlying network topology. A set of studies shows that these approaches can lead to a win-win situation under certain conditions, and to a win-no lose situation in most cases. However, the scenarios used mostly assume homogeneous peer distributions. This is not the case in practice according to recent measurement studies. Therefore, we extend previous work in this paper by studying scenarios with real-life, skewed peer distributions. We show that even a win-no lose situation is difficult to achieve under those conditions and that the actual impact for a specific peer heavily depends on the locality-aware peer selection used and the specific scenario. This contradicts the principle of economic traffic management (ETM) which aims for a solution where all involved players benefit and consequently have an incentive to adopt locality-awareness. Therefore, we propose and evaluate refinements of current proposals, ensuring that all users of P2P networks can be sure that their application performance is not reduced. This mitigates the unfairness introduced by current proposals, which is a key requirement for a broad acceptance of the concept of locality-awareness in the user community of P2P networks.
-
Hoßfeld, T., Lehrieder, F., Hock, D., Oechsner, S., Despotovic, Z., Kellerer, W., Michel, M.: Characterization of BitTorrent Swarms and their Distribution in the Internet. Computer Networks. 55(5), 1197-1215 (2011).
The optimization of overlay traffic resulting from applications such as BitTorrent is a challenge addressed by several recent research initiatives. However, the assessment of such optimization techniques and their performance in the real Internet remains difficult. Despite a considerable set of works measuring real-life BitTorrent swarms, several characteristics of those swarms relevant for the optimization of overlay traffic have not yet been investigated. In this work, we address this lack of realistic swarm statistics by presenting our measurement results. In particular, we provide a statistical characterization of the swarm sizes, the distribution of peers over autonomous systems (ASs), the fraction of peers in the largest AS, and the size of the shared files. To this end, we consider different types of shared content and identify particular characteristics of regional swarms. The selection of the presented data is inspired by ongoing discussions in the IETF working group on application layer traffic optimization (ALTO). Our study is intended to provide input for the design and the assessment of ALTO solutions for BitTorrent, but the applicability of the results is not limited to that purpose.
-
Pióro, M., Zotkiewicz, M., Staehle, B., Staehle, D., Yuan, D.: On Max-Min Fair Flow Optimization in Wireless Mesh Networks. Ad Hoc Networks, Special Issue on Models and Algorithms for Wireless Mesh Networks. (2011).
The paper is devoted to WMN modeling using mixed-integer programming (MIP) formulations that allow a precise characterization of the link data rate capacity and transmission scheduling within time slots. Such MIP models are formulated for several cases of the modulation and coding scheme (MCS) assignment. We present a general way of solving the max-min fair (MMF) traffic objective for WMNs and use it for the formulated capacity models. Thus the paper combines WMN radio link modeling with a non-standard way of dealing with uncertain traffic, a combination that, to our knowledge, has not been considered so far in terms of exact optimization models. This combination, involving integer programming, forms the main contribution of the paper. We discuss several ways of solving the considered MMF problems and present an extensive numerical study that illustrates the running time efficiency of the different solution approaches, and the influence of the MCS selection options and the number of time slots on the traffic performance of a WMN.
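The MMF objective used here is classically computed by progressive filling (water-filling): raise all flows' rates equally, freeze the flows crossing a saturated link, and repeat. The sketch below ignores the paper's radio-capacity and scheduling constraints and uses fixed-capacity links purely to illustrate the MMF principle; paths and capacities are made up.

```python
# Progressive filling: the textbook way to compute a max-min fair (MMF)
# allocation on capacitated links (topology and numbers are illustrative).
def max_min_fair(paths, capacity):
    """paths: list of link-sets, one per flow; capacity: dict link -> capacity."""
    alloc = {f: 0.0 for f in range(len(paths))}
    active = set(alloc)                     # flows not yet bottlenecked
    cap = dict(capacity)
    while active:
        # raise all active flows equally until some link saturates
        step = min(cap[l] / sum(1 for f in active if l in paths[f])
                   for l in cap
                   if any(l in paths[f] for f in active))
        for f in active:
            alloc[f] += step
        for l in cap:
            cap[l] -= step * sum(1 for f in active if l in paths[f])
        # freeze flows that now cross a saturated link
        active -= {f for f in active if any(cap[l] <= 1e-12 for l in paths[f])}
    return alloc

paths = [{"a"}, {"a", "b"}, {"b"}]
print(max_min_fair(paths, {"a": 10.0, "b": 6.0}))  # -> {0: 7.0, 1: 3.0, 2: 3.0}
```

Flows 1 and 2 are bottlenecked at 3.0 by link b, after which flow 0 alone absorbs the rest of link a; this lexicographic "raise the poorest first" behavior is exactly what the paper's MIP-based approach computes under the much harder radio constraints.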
-
Staehle, B., Wamser, F., Hirth, M., Stezenbach, D., Staehle, D.: AquareYoum: Application and Quality of Experience-Aware Resource Management for YouTube in Wireless Mesh Networks. PIK - Praxis der Informationsverarbeitung und Kommunikation. (2011).
The browser has become the users’ interface to a plethora of Internet applications which are accessible from nearly everywhere and every device. The price for the simple and cheap access over the Internet is often a reduced end-user quality of experience (QoE). The reason for this is that the network ignores the content of the packets it transports and thereby neither knows which services it supports, nor whether and which quality requirements have to be met. In addition, the needs of the applications can be time varying, and the network might not be able to give strict quality guarantees. We therefore advocate the idea of an application-network interaction in order to dynamically adapt the network resources if a QoE degradation is imminent. The software suite AquareYoum implements this approach and enables a smooth YouTube video playback in a congested wireless mesh Internet access network by dynamically selecting the least congested Internet gateway.
-
Hoßfeld, T., Tran-Gia, P.: EuroView 2010: Visions of Future Generation Networks. Computer Communications Review CCR. Volume 41, Number 2, (2011).
On August 2nd – 3rd, 2010, the EuroView 2010 workshop on 'Visions of Future Generation Networks' was held at the University of Würzburg. The event was sponsored by the European Network of Excellence Euro-NF, the German Information Technology Society ITG, and the International Teletraffic Congress ITC. EuroView 2010 brought together Internet and network technology researchers, network providers, as well as equipment and device manufacturers. In 2010, the focus was on 'Future Internet Design and Experimental Facilities' and on current efforts towards a Future Internet. Special sessions were organized reflecting the latest results of selected testbed expert groups as well as current and future national and international collaborative projects: (1) the German G-Lab project offering a national platform for Future Internet studies, (2) the Future Internet Activities in the European Framework FP7 organized by Max Lemke, and (3) the GENI project in US organized by Aaron Falk. A keynote talk was given by Lawrence Landweber on the challenges and paradigms emerging in the Future (Inter)Network.
-
Menth, M., Lehrieder, F.: PCN-Based Marked Flow Termination. Computer Communications. 34, 2082-2093 (2011).
Pre-congestion notification (PCN) uses packet metering and marking to notify boundary nodes of a Differentiated Services IP network if configured rate thresholds have been exceeded on some links. This feedback is used for PCN-based admission control and flow termination. While admission control is rather well understood, flow termination is a new flow control function, useful especially in case of failures or during flash crowds. We present marked flow termination as a new class of termination algorithms which terminate overload traffic gradually and work well with multipath routing. We study their termination behavior, give recommendations for their configuration, and discuss their benefits and shortcomings.
-
Meier, S., Barisch, M., Kirstädter, A., Schlosser, D., Duelli, M., Jarschel, M., Hoßfeld, T., Hoffmann, K., Hoffmann, M., Kellerer, W., Khan, A., Jurca, D., Kozu, K.: Provisioning and Operation of Virtual Networks. Electronic Communications of the EASST, Kommunikation in Verteilten Systemen 2011. 37, (2011).
In today’s Internet, requirements of services regarding the underlying transport network are very diverse. In the future, this diversity will increase and make it harder to accommodate all services in a single network. A possible approach to keep up with this diversity in future networks is the deployment of isolated, custom tailored networks on top of a single shared physical substrate. The COMCON (COntrol and Monitoring of COexisting Networks) project aims to define a reference architecture for setup, control, and monitoring of virtual networks on a provider- and operator-grade level. In this paper, we present the building blocks and interfaces of our architecture.
-
Fischer, A., Felipe Botero, J., Duelli, M., Schlosser, D., Hesselbach, X., de Meer, H.: ALEVIN - A Framework to Develop, Compare, and Analyze Virtual Network Embedding Algorithms. Electronic Communications of the EASST, Kommunikation in Verteilten Systemen 2011. 37, (2011).
Network virtualization is recognized as an enabling technology for the Future Internet. Applying virtualization of network resources leads to the problem of mapping virtual resources to physical resources, known as “Virtual Network Embedding” (VNE). Several algorithms attempting to solve this problem have been discussed in the literature so far. However, comparing VNE algorithms is hard, as each algorithm focuses on different criteria. To address this, we introduce a framework that compares different algorithms according to a set of metrics, allowing the algorithms to be evaluated and their results computed on a given scenario for arbitrary parameters.
-
Hock, D., Hartmann, M., Schwartz, C., Menth, M.: ResiLyzer: Ein Werkzeug zur Analyse der Ausfallsicherheit in paketvermittelten Kommunikationsnetzen. PIK - Praxis der Informationsverarbeitung und Kommunikation. 34, 158-159 (2011).
-
Żukowski, C., Tomaszewski, A., Pióro, M., Hock, D., Hartmann, M., Menth, M.: Compact node-link formulations for the optimal single path MPLS Fast Reroute layout. Advances in Electronics and Telecommunications. 2, (2011).
This paper discusses compact node-link formulations for the optimal single path layout of MPLS fast reroute. We propose mathematical formulations for MPLS fast reroute local protection mechanisms. In particular, we compare one-to-one (also called detour) and many-to-one (also called facility backup) local protection mechanisms with respect to minimizing the maximum link utilization. The optimal results provided by the node-link formulations are compared with the suboptimal results provided by algorithms based on a non-compact linear programming (path generation) approach and an IP-based approach.
-
Wamser, F., Pries, R., Staehle, D., Heck, K., Tran-Gia, P.: Traffic characterization of a residential wireless Internet access. Special Issue of the Telecommunication Systems (TS) Journal. 48: 1-2, (2010).
Traffic characterization is an important means for Internet Service Providers (ISPs) to adapt and optimize their networks to the requirements of their customers. Most network measurements are performed in the backbone of these ISPs, capturing both residential and business Internet traffic. However, the traffic characteristics of business and home users differ significantly. Therefore, we have performed measurements of home users at a broadband wireless access service provider in order to reflect only home user traffic characteristics. In this paper, we present the results of these measurements, showing daily traffic fluctuations, flow statistics, as well as application distributions. The results differ from backbone traffic characteristics. Furthermore, we observed a shift from web and Peer-to-Peer (P2P) file sharing traffic to streaming applications.
-
Menth, M., Martin, R., Hartmann, M., Spörlein, U.: Efficiency of Routing and Resilience Mechanisms in Packet-Switched Communication Networks. European Transactions on Telecommunications. 21, 108-120 (2010).
In this work we compare the efficiency of various routing and resilience mechanisms. Their path layout determines the utilization of links in the network under normal operation and in failure scenarios. For the comparison, the performance measure is the maximum utilization r_S of all links for a set of protected failures S. A routing mechanism is considered more efficient than another if it leads to a lower maximum link utilization r_S. We consider standard and optimized versions of IP routing and rerouting, optimized routing using explicit paths and end-to-end protection switching, as well as standard and optimized versions of MPLS fast reroute. The results show that routing optimization reduces the maximum link utilization significantly both with and without failure protection. The optimization potential for resilient routing is limited by the applied mechanism and depends heavily on the network structure and the set of protected failure scenarios S.
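The performance measure used for the comparison can be sketched directly: r_S is the worst-case link utilization over normal operation and all protected failure scenarios. The per-link utilization values below are hypothetical placeholders, not measurements from the paper:

```python
# Sketch of the comparison metric r_S from the abstract: the maximum link
# utilization over a set of protected failure scenarios S (including normal
# operation). All utilization values here are illustrative.

def max_utilization(util_per_scenario):
    """util_per_scenario: dict scenario -> {link: utilization}.
    Returns r_S, the worst link utilization over all scenarios."""
    return max(
        u
        for link_utils in util_per_scenario.values()
        for u in link_utils.values()
    )

# Example: normal operation plus two single-link-failure scenarios,
# where the surviving link carries the rerouted traffic.
scenarios = {
    "no_failure":   {"l1": 0.40, "l2": 0.35},
    "fail_link_l1": {"l2": 0.80},
    "fail_link_l2": {"l1": 0.70},
}
r_S = max_utilization(scenarios)
print(r_S)
```

A routing mechanism with a lower r_S on the same scenario set counts as more efficient under this measure.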
-
Menth, M., Hartmann, M., Martin, R., Cicic, T., Kvalbein, A.: Loop-Free Alternates and Not-Via Addresses: A Proper Combination for IP Fast Reroute? Computer Networks. 54, 1300-1315 (2010).
The IETF currently discusses fast reroute mechanisms for IP networks (IP FRR). IP FRR accelerates the recovery in case of network element failures and avoids micro-loops during re-convergence. Several mechanisms are proposed. Loop-free alternates (LFAs) are simple but cannot cover all single link and node failures. Not-via addresses can protect against these failures but are more complex, in particular, they use tunneling techniques to deviate backup traffic. In the IETF it has been proposed to combine both mechanisms to merge their advantages: simplicity and full failure coverage. This work analyzes LFAs and classifies them according to their abilities. We qualitatively compare LFAs and not-via addresses and develop a concept for their combined application to achieve 100% single failure coverage, while using simple LFAs wherever possible. The applicability of existing LFAs depends on the resilience requirements of the network. We study the backup path length and the link utilization for both IP FRR methods and quantify the decapsulation load and the increase of the routing table size caused by not-via addresses. We conclude that the combined usage of both methods has no advantage compared to the application of not-via addresses only.
-
Fiedler, M., Hoßfeld, T., Tran-Gia, P.: A Generic Quantitative Relationship between Quality of Experience and Quality of Service. IEEE Network Special Issue on Improving QoE for Network Services. (2010).
Quality of Experience (QoE) ties together user perception, experience, and expectations to application and network performance, typically expressed by Quality of Service (QoS) parameters. Quantitative relationships between QoE and QoS are required in order to be able to build effective QoE control mechanisms onto measurable QoS parameters. On this background, this paper proposes a generic formula in which QoE and QoS parameters are connected through an exponential relationship, called the IQX hypothesis. The formula relates changes of QoE with respect to QoS to the current level of QoE, is simple to match, and its limit behaviours are straightforward to interpret. The paper validates the IQX hypothesis for streaming services, where QoE in terms of Mean Opinion Scores (MOS) is expressed as functions of loss and reordering ratio, the latter of which is caused by jitter. For web surfing as the second application area, matchings provided by the IQX hypothesis are shown to outperform previously published logarithmic functions. We conclude that the IQX hypothesis is a strong candidate to be taken into account when deriving relationships between QoE and QoS parameters.
-
Menth, M., Lehrieder, F.: PCN-Based Measured Rate Termination. Computer Networks. 54(13), (2010).
Overload in a packet-based network can be prevented by admitting or blocking new flows depending on the network's load conditions. However, overload can occur in spite of admission control due to unforeseen events, e.g., when admitted traffic is rerouted in the network after a failure. To restore quality of service for the majority of admitted flows in such cases, flow termination has been proposed as a novel control function. We present several flow termination algorithms that measure so-called pre-congestion notification (PCN) feedback. We analyze their advantages and shortcomings, in particular under challenging conditions. The results improve the understanding of PCN technology, which is currently being standardized by the Internet Engineering Task Force (IETF).
-
Menth, M., Hartmann, M., Hoefling, M.: FIRMS: A Mapping System for Future Internet Routing. IEEE Journal on Selected Areas in Communications (JSAC), Special Issue on Internet Routing Scalability. 28, 1326-1331 (2010).
The locator/identifier split is a design principle for new routing architectures that make Internet routing more scalable. To find the location of a host, it requires a mapping system that returns appropriate locators in response to map-requests for specific identifiers. In this paper, we propose FIRMS, a 'Future Internet Routing Mapping System'. It is fast, scalable, reliable, secure, and it is able to relay initial packets. We introduce its design, show how it deals with partial failures, explain its security concept, and evaluate its scalability.
-
Menth, M., Lehrieder, F., Briscoe, B., Eardley, P., Moncaster, T., Babiarz, J., Charny, A., (Joy) Zhang, X., Taylor, T., Chan, K.-H., Satoh, D., Geib, R., Karagiannis, G.: A Survey of PCN-Based Admission Control and Flow Termination. IEEE Communications Surveys & Tutorials. 12(3), (2010).
Pre-congestion notification (PCN) provides feedback about load conditions in a network to its boundary nodes. The PCN working group of the IETF discusses the use of PCN to implement admission control (AC) and flow termination (FT) for prioritized real-time traffic in a DiffServ domain. Admission control (AC) is a well-known flow control function that blocks admission requests of new flows when they need to be carried over a link whose admitted PCN rate already exceeds an admissible rate. Flow termination (FT) is a new flow control function that terminates some already admitted flows when they are carried over a link whose admitted PCN rate exceeds a supportable rate. The latter condition can occur in spite of AC, e.g., when traffic is rerouted due to network failures. This survey gives an introduction to PCN and is a primer for this new technology. It presents and discusses the multitude of architectural design options in an early stage of the standardization process in a comprehensive and streamlined way, before only a subset of them is standardized by the IETF. It brings PCN from the IETF to the research community and serves as a historical record.
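The two PCN control functions described in the abstract follow a simple threshold logic, sketched below. AC blocks new flows once a link's admitted PCN rate exceeds its admissible rate; FT removes already-admitted traffic once the rate exceeds the higher supportable rate, e.g. after a failure reroutes flows. The rate values are illustrative:

```python
# A minimal sketch of PCN admission control (AC) and flow termination (FT)
# as two rate thresholds per link, admissible < supportable. The numbers
# below are illustrative Mbit/s values, not from the survey.

def admission_allowed(admitted_rate, admissible_rate):
    """AC: admit new flows only while the admitted PCN rate is below
    the admissible rate of the link."""
    return admitted_rate < admissible_rate

def termination_excess(admitted_rate, supportable_rate):
    """FT: traffic volume to terminate so the link drops back to its
    supportable rate (zero if the link is not overloaded)."""
    return max(0.0, admitted_rate - supportable_rate)

# Example link: admissible rate 80, supportable rate 100.
print(admission_allowed(70, 80))      # new flows still admitted
print(admission_allowed(85, 80))      # AC now blocks further admissions
print(termination_excess(120, 100))   # FT terminates the excess after a reroute
```

The gap between the two thresholds is what lets AC alone handle normal operation, while FT only acts in the exceptional rerouting cases the abstract mentions.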
-
Ciszkowski, T., Mazurczyk, W., Kotulski, Z., Hoßfeld, T., Fiedler, M., Collange, D.: Towards Quality of Experience-based Reputation Models for Future Web Service Provisioning. Special Issue of the Springer Telecommunication Systems Journal: Future Internet Services and Architectures - Trends and Visions, print available in 2013. 51, 283-295 (2010).
This paper concerns the applicability of reputation systems for assessing Quality of Experience (QoE) for web services in the Future Internet. Reputation systems provide mechanisms to manage subjective opinions in societies and yield a general scoring of a particular behavior. Thus, they are likely to become an important ingredient of the Future Internet. Parameters under evaluation by a reputation system may vary greatly and, particularly, may be chosen to assess the users' satisfaction with (composite) web services. Currently, this satisfaction is usually expressed by QoE, which represents subjective users' opinions. The goal of this paper is to present a novel framework of web services where a reputation system is incorporated for tracking and predicting users' satisfaction. This approach is a beneficial tool which enables providers to facilitate service adaptation according to users' expectations and maintain QoE at a satisfactory level. The presented reputation systems operate in an environment of composite services that integrates client and server side. This approach is highly suitable for effectively differentiating QoE and maximizing user experience for specific customer profiles, even when service and network resources are shared.
-
Pries, R., Staehle, D., Staehle, B., Tran-Gia, P.: On Optimization of Wireless Mesh Networks using Genetic Algorithms. International Journal On Advances in Internet Technology. 1&2, (2010).
-
Wamser, F., Mittelstädt, D., Staehle, D., Tran-Gia, P.: Impact of Electrical and Mechanical Antenna Downtilt on a WiMAX System with Fractional Frequency Reuse. FREQUENZ - Journal of RF-Engineering and Telecommunications. September/October, (2010).
In an interference-limited mobile WiMAX network, efficient cell planning and tuning is an essential task to ensure a functioning network. This includes selecting optimal antenna settings that, on the one hand, provide good coverage and, on the other hand, achieve cell isolation. Two different antenna tilting methods are examined in this article, namely the mechanical and the electrical vertical downtilt. The evaluation is done with an advanced WiMAX IEEE 802.16e simulator. In particular, the impact of the antenna configuration on WiMAX fractional frequency reuse (FFR) is studied. The results show a strong dependency of FFR on the downtilt configuration, since the inter-cell interference level changes significantly with different settings.
-
Dán, G., Stamoulis, G.D., Hoßfeld, T., Oechsner, S., Cholda, P., Stankiewicz, R., Papafili, I.: Interaction Patterns between P2P Content Distribution Systems and ISPs. to be published in IEEE Communications Magazine. (2010).
Peer-to-peer (P2P) content distribution systems are a major source of traffic in the Internet, but the application layer protocols they use are mostly unaware of the underlying network, in accordance with the layered structure of the Internet’s protocol stack. Nevertheless, the need for improved network efficiency and the business interests of Internet service providers (ISPs) are both strong drivers towards a cross-layer approach in peer-to-peer protocol design, calling for P2P systems that would in some way interact with the ISPs. Recent research shows that the interaction, which can rely on information provided by both parties, can be mutually beneficial. In this paper, we first give an overview of the kinds of information that could potentially be exchanged between the P2P systems and the ISPs, and discuss their usefulness and the ease of obtaining and exchanging them. We also present a classification of the possible approaches for interaction based on the level of involvement of the ISPs and the P2P systems, and we discuss the potential strengths and weaknesses of these approaches.
-
Pries, R., Hock, D., Staehle, D.: QoE based Bandwidth Management Supporting Real Time Flows in IEEE 802.11 Mesh Networks. PIK - Praxis der Informationsverarbeitung und Kommunikation. 32, (2010).