
Abstract

Modern communication networks have become highly complex and dynamic, which makes them hard to model, predict, and control. In this paper, we develop a novel experience-driven approach that can learn to control a communication network well from its own experience rather than from an accurate mathematical model, just as a human learns a new skill (such as driving or swimming). Specifically, we propose, for the first time, to leverage emerging Deep Reinforcement Learning (DRL) to enable model-free control in communication networks, and present a novel and highly effective DRL-based control framework, DRL-TE, for a fundamental networking problem: Traffic Engineering (TE). The proposed framework maximizes a widely-used utility function by jointly learning the network environment and its dynamics, and by making decisions under the guidance of powerful Deep Neural Networks (DNNs). We propose two new techniques, TE-aware exploration and actor-critic-based prioritized experience replay, to optimize the general DRL framework specifically for TE. To validate and evaluate the proposed framework, we implemented it in ns-3 and tested it comprehensively with both representative and randomly generated network topologies. Extensive packet-level simulation results show that 1) compared to several widely-used baseline methods, DRL-TE significantly reduces end-to-end delay and consistently improves network utility, while offering better or comparable throughput; 2) DRL-TE is robust to network changes; and 3) DRL-TE consistently outperforms a state-of-the-art DRL method for continuous control, Deep Deterministic Policy Gradient (DDPG), which does not offer satisfactory performance.
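The abstract names actor-critic-based prioritized experience replay but gives no implementation details; the following is only a minimal sketch of the general idea under stated assumptions: transitions are replayed in proportion to a priority derived from the critic's TD error, so that "surprising" transitions are revisited more often. All names and parameters below (PrioritizedReplayBuffer, alpha, the priority exponent) are illustrative assumptions, not the authors' code.

```python
# Sketch of prioritized experience replay driven by a critic's TD error.
# Assumption: priorities follow the common |TD error|^alpha rule; the exact
# rule used by DRL-TE is described in the paper, not in this abstract.
import random


class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity      # max transitions kept
        self.alpha = alpha            # how strongly priority skews sampling
        self.buffer = []              # (state, action, reward, next_state) tuples
        self.priorities = []          # one priority per stored transition

    def add(self, transition, td_error):
        # Priority grows with the critic's TD error, so transitions the
        # critic predicts poorly are replayed more often.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Draw indices with probability proportional to priority.
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return idxs, [self.buffer[i] for i in idxs]

    def update_priorities(self, idxs, td_errors):
        # After a training step, refresh priorities with the new TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```

In a DDPG-style actor-critic loop, the TD error passed to add() and update_priorities() would come from the critic's Bellman residual for each sampled transition; how DRL-TE tailors this mechanism to TE is beyond what the abstract states.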

Comment: 9 pages, 12 figures; accepted as a conference paper at IEEE INFOCOM 2018


Original document

The different versions of the original document can be found at:

http://dx.doi.org/10.1109/infocom.2018.8485853
https://ieeexplore.ieee.org/document/8485853
https://arxiv.org/abs/1801.05757
https://ui.adsabs.harvard.edu/abs/2018arXiv180105757X/abstract
https://doi.org/10.1109/INFOCOM.2018.8485853
https://experts.syr.edu/en/publications/experience-driven-networking-a-deep-reinforcement-learning-based-
https://academic.microsoft.com/#/detail/2963549123

Document information

Published on 01/01/2018

Volume 2018
DOI: 10.1109/infocom.2018.8485853
Licence: CC BY-NC-SA
