Iroko: A Data Center Emulator for Reinforcement Learning
Fabian Ruffy, Michael Przystupa, Ivan Beschastnikh
University of British Columbia
https://github.com/dcgym/iroko

Reinforcement Learning and Networking

The Data Center: A perfect use case

DC challenges are optimization problems:
- Traffic control
- Resource management
- Routing

Operators have complete control:
- Automation is possible
- Lots of data can be collected

Cho, Inho, Keon Jang, and Dongsu Han. "Credit-scheduled delay-bounded congestion control for datacenters." SIGCOMM 2017

Two problems

Typical reinforcement learning is not viable for data center operators!
- Fragile stability
- Questionable reproducibility
- Unknown generalizability

Prototyping RL is complicated:
- Cannot interfere with live production traffic
- Offline traces are limited in expressivity
- Deployment is tedious and slow

Our work: A platform for RL in data centers

Iroko: an open reinforcement learning gym for data center scenarios
- Inspired by Pantheon* for WAN congestion control
- Deployable on a local Linux machine
- Can scale to topologies with many hosts
- Approximates real data center conditions
- Allows arbitrary definitions of reward, state, and actions

*Yan, Francis Y., et al. "Pantheon: the training ground for Internet congestion-control research." ATC 2018
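As a rough sketch of what driving such a gym looks like, the standard OpenAI Gym loop applies. The environment id below is a hypothetical placeholder, not necessarily Iroko's actual entry point; see the repository for the real interface:

```python
# Minimal sketch of interacting with an Iroko-style Gym environment.
# "iroko-v0" is an assumed id for illustration; consult the repository
# for the actual environment registration and configuration.
import gym

env = gym.make("iroko-v0")              # hypothetical environment id
obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # stand-in for a learned policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```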

Iroko in one slide

[Architecture diagram: a Policy interacts with the OpenAI Gym interface, which composes a Reward Model, State Model, and Action Model; Data Collectors gather statistics; a Traffic Pattern drives load over a Topology (Rack, Dumbbell, or Fat-Tree).]
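To make these pluggable components concrete, a configuration in this spirit might look as follows; every key and value here is an illustrative assumption, not Iroko's actual schema:

```python
# Hypothetical environment configuration mirroring the components above.
env_config = {
    "topology": "dumbbell",                   # also: "rack", "fat-tree"
    "traffic_pattern": "all_to_one",          # assumed pattern name
    "state_model": ["bw", "queue", "drops"],  # features from data collectors
    "reward_model": "bw_minus_queue",         # how collector stats score a step
    "action_model": "host_rate_limit",        # how agent output is applied
}
```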

Use Case: Congestion Control

An ideal data center should have:
- Low latency and high utilization
- No packet loss or queuing delay
- Fairness

Existing CC variants derive from reactive TCP:
- Queueing latency dominates
- Frequent retransmits reduce goodput
- Data center performance may be unstable
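A reward in the spirit of these goals trades utilization off against queueing and loss. A minimal sketch, assuming normalized collector statistics (the formula and weights are illustrative, not Iroko's actual reward):

```python
# Illustrative congestion-control reward: reward utilization, penalize
# queueing and packet loss. Weights and normalization are assumptions
# for exposition only.
def reward(bw_per_host, queue_per_port, drop_rate, bw_max, queue_max,
           w_queue=0.5, w_drop=2.0):
    utilization = sum(b / bw_max for b in bw_per_host) / len(bw_per_host)
    queueing = sum(q / queue_max for q in queue_per_port) / len(queue_per_port)
    return utilization - w_queue * queueing - w_drop * drop_rate
```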

Predicting Networking Traffic: Bandwidth Allocation

[Diagram, animated over three slides: a policy, fed by data collection on the flow pattern, assigns bandwidth allocations to hosts behind a switch. Three 10 Mbps senders contend for a 10 Mbps link; the policy rebalances their allocations to roughly 3.3, 3.3, and 3.4 Mbps.]

Can we learn to allocate traffic fairly?

Two environments:
- env_iroko: a centralized rate-limiting arbiter; the agent can set the sending rate of hosts (PPO, DDPG, REINFORCE)
- env_tcp: raw TCP; contains implementations of TCP algorithms (TCP Cubic, TCP New Vegas, DCTCP)

Goal: avoid congestion
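In env_iroko the policy's continuous action vector is interpreted as per-host sending rates. A minimal sketch under that assumption (the Host class and set_rate_limit hook are hypothetical stand-ins; the real environment applies rate limits internally):

```python
# Sketch: a centralized arbiter mapping a [0, 1] action per host onto
# sending rates. All names are hypothetical illustrations.
import numpy as np

class Host:
    """Stand-in for an emulated host with a traffic-shaping hook."""
    def __init__(self, name):
        self.name = name
        self.rate_mbps = None

    def set_rate_limit(self, rate_mbps):
        # A real implementation would configure tc/qdisc on the host.
        self.rate_mbps = rate_mbps

def apply_action(action, hosts, link_capacity_mbps=10.0):
    rates = np.clip(action, 0.0, 1.0) * link_capacity_mbps
    for host, rate in zip(hosts, rates):
        host.set_rate_limit(rate)
    return rates

hosts = [Host(f"h{i}") for i in range(3)]
apply_action(np.array([0.33, 0.33, 0.34]), hosts)  # fair split of a 10 Mbps link
```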

Experiment Setup
- 50,000 timesteps
- Linux default UDP as the base transport
- 5 runs (~7 hours per run)
- Bottleneck at the central link

Results – Dumbbell UDP

[Results plots for the dumbbell topology with UDP transport.]

Results – Takeaways
- Challenging real-time environment: noisy observations and a strong credit-assignment problem
- RL algorithms show expected behavior in our gym and achieve better performance than TCP New Vegas
- More robust algorithms are required to learn a good policy: DDPG and PPO achieve near-optimal performance, while REINFORCE fails to learn a good policy

Contributions
- Data center reinforcement learning is gaining traction, but it is difficult to prototype and evaluate
- Iroko is a platform for experimenting with RL for data centers, intended to train on live traffic
- Early-stage work, but experiments are promising
- Available on GitHub: https://github.com/dcgym/iroko
