CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search
Published at ECCV 2020
Xin Chen*, Yawen Duan*, Zewei Chen, Hang Xu,
Zihao Chen, Xiaodan Liang, Tong Zhang, Zhenguo Li
Abstract
Neural Architecture Search (NAS) has achieved many breakthroughs in recent years. In spite of this remarkable progress, many algorithms are restricted to particular search spaces. They also lack efficient mechanisms to reuse knowledge when confronting multiple tasks. These challenges limit their applicability and motivate our proposal of CATCH, a novel Context-bAsed meTa reinforcement learning (RL) algorithm for transferrable arChitecture searcH. The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces. CATCH utilizes a probabilistic encoder to encode task properties into latent context variables, which then guide CATCH's controller to quickly "catch" top-performing networks. The contexts also assist a network evaluator in filtering inferior candidates and speeding up learning. Extensive experiments demonstrate CATCH's universality and search efficiency over many other widely-recognized algorithms. It is also capable of handling cross-domain architecture search, identifying competitive networks on ImageNet, COCO, and Cityscapes. To our knowledge, this is the first work to propose an efficient transferrable NAS solution while maintaining robustness across various settings.
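The abstract above outlines three learnable pieces: a probabilistic encoder that turns task evaluations into a latent context, a controller that samples architectures conditioned on that context, and an evaluator that filters weak candidates. The PyTorch sketch below is a minimal illustration of how such components could be wired together; the module sizes, the mean-pooled Gaussian aggregation, and the flat categorical search-space encoding are assumptions made for the sketch, not the paper's exact design.

```python
import torch
import torch.nn as nn


class ContextEncoder(nn.Module):
    """Probabilistic encoder: maps (architecture, reward) records from the
    current task to a Gaussian latent context variable z."""
    def __init__(self, record_dim, latent_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(record_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),   # per-record mean and log-variance
        )

    def forward(self, records):                  # records: (N, record_dim)
        stats = self.net(records)
        mu, log_var = stats.chunk(2, dim=-1)
        # Average the per-record statistics as a simple permutation-invariant
        # aggregation (an assumption of this sketch), then reparameterize.
        mu, log_var = mu.mean(0), log_var.mean(0)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
        return z, mu, log_var


class Controller(nn.Module):
    """Policy that samples one categorical decision per search-space slot,
    conditioned on the task context z."""
    def __init__(self, latent_dim, num_slots, num_ops, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_ops) for _ in range(num_slots)]
        )

    def forward(self, z):
        h = self.body(z)
        decisions, log_probs = [], []
        for head in self.heads:
            dist = torch.distributions.Categorical(logits=head(h))
            a = dist.sample()
            decisions.append(a)
            log_probs.append(dist.log_prob(a))
        return torch.stack(decisions), torch.stack(log_probs).sum()


class Evaluator(nn.Module):
    """Predicts a performance score for an encoded architecture given z,
    used to filter out unpromising candidates before full training."""
    def __init__(self, latent_dim, num_slots, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_slots, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, arch):
        return self.net(torch.cat([z, arch.float()], dim=-1)).squeeze(-1)
```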
CATCH vs. other methods
Video
The main algorithm flow of CATCH
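To connect these pieces into the search flow named above, here is a rough, hypothetical adaptation loop for a single new task, reusing the classes from the previous snippet: evaluation records are encoded into a context, the controller proposes candidates, the evaluator filters them, and the observed reward drives a REINFORCE-style update. The `train_and_evaluate` stub, candidate count, baseline, and learning rate are placeholder choices for illustration; CATCH's actual procedure and hyperparameters are those reported in the paper.

```python
import torch

# Hypothetical stand-in for training a sampled architecture on the new task
# and returning its validation score; in practice this is the expensive step.
def train_and_evaluate(arch):
    return torch.rand(1).item()

NUM_SLOTS, NUM_OPS, LATENT = 6, 5, 8           # illustrative search-space sizes
RECORD_DIM = NUM_SLOTS + 1                     # architecture encoding + reward

encoder = ContextEncoder(RECORD_DIM, LATENT)
controller = Controller(LATENT, NUM_SLOTS, NUM_OPS)
evaluator = Evaluator(LATENT, NUM_SLOTS)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(controller.parameters()), lr=1e-3
)

history = [torch.zeros(RECORD_DIM)]            # seed record for the new task
baseline = 0.0

for step in range(100):
    z, _, _ = encoder(torch.stack(history))    # task context from past trials

    # Sample a few candidates and keep the one the evaluator prefers.
    # (Fitting the evaluator to observed rewards is omitted for brevity.)
    candidates = [controller(z) for _ in range(4)]
    scores = torch.stack([evaluator(z, arch) for arch, _ in candidates])
    arch, log_prob = candidates[int(scores.argmax())]

    reward = train_and_evaluate(arch)
    history.append(torch.cat([arch.float(), torch.tensor([reward])]))

    # REINFORCE-style update with a moving-average baseline.
    baseline = 0.9 * baseline + 0.1 * reward
    loss = -(reward - baseline) * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```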
Team

Xin Chen - The University of Hong Kong
Yawen Duan - The University of Hong Kong
Zewei Chen - Huawei Noah's Ark Lab
Hang Xu - Huawei Noah's Ark Lab
Zihao Chen - Huawei Noah's Ark Lab
Xiaodan Liang - Sun Yat-sen University
Tong Zhang - The Hong Kong University of Science and Technology
Zhenguo Li - Huawei Noah's Ark Lab