
Volume 15, No. 5

Hippo: Sharing Computations in Hyper-Parameter Optimization

Authors:
Ahnjae Shin (Seoul National University)*, Joo Seong Jeong (Seoul National University), Do Yoon Kim (Seoul National University), Soyoung Jung (Seoul National University), Byung-Gon Chun (Seoul National University)

Abstract

Hyper-parameter optimization is crucial for pushing the accuracy of a deep learning model to its limits. However, a hyper-parameter optimization job, referred to as a study, involves numerous trials of training a model with different training knobs, and is therefore very computation-heavy, typically taking hours to days to finish. We observe that trials issued by hyper-parameter optimization algorithms often share common hyper-parameter sequence prefixes. Based on this observation, we propose Hippo, a hyper-parameter optimization system that reuses computation across trials to significantly reduce the overall amount of computation. Instead of treating each trial independently as existing hyper-parameter optimization systems do, Hippo breaks down the hyper-parameter sequences into stages and merges common stages to form a tree of stages (a stage tree). Hippo maintains an internal data structure, the search plan, to manage the current status and history of a study, and employs a critical-path-based scheduler to minimize the overall study completion time. Hippo applies not only to single studies but also to multi-study scenarios. Evaluations show that Hippo's stage-based execution strategy outperforms trial-based methods for several models and hyper-parameter optimization algorithms, reducing end-to-end training time by up to 2.76× (3.53×) and GPU-hours by up to 4.81× (6.77×) for single (multiple) studies.
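To make the prefix-merging idea concrete, the following is a minimal, illustrative Python sketch of building a stage tree, assuming each trial is represented as a sequence of stages (a hyper-parameter setting applied for some number of epochs). The Stage and StageNode types, build_stage_tree function, and the example values are hypothetical and not taken from Hippo's actual implementation; the sketch only shows how trials sharing a stage prefix collapse onto a shared tree path so the shared computation would be performed once.

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Stage:
    """One segment of training: a hyper-parameter setting and its duration (illustrative)."""
    hyperparams: tuple  # e.g. (("lr", 0.1), ("momentum", 0.9))
    epochs: int


@dataclass
class StageNode:
    stage: Stage | None = None                          # None for the root
    children: dict[Stage, StageNode] = field(default_factory=dict)


def build_stage_tree(trials: list[list[Stage]]) -> StageNode:
    """Merge the trials' common stage prefixes into a single stage tree (a trie of stages)."""
    root = StageNode()
    for trial in trials:
        node = root
        for stage in trial:
            node = node.children.setdefault(stage, StageNode(stage))
    return root


if __name__ == "__main__":
    # Two trials that share their first stage: the shared stage appears only once in the tree.
    s1 = Stage((("lr", 0.1),), epochs=30)
    trial_a = [s1, Stage((("lr", 0.01),), epochs=30)]
    trial_b = [s1, Stage((("lr", 0.001),), epochs=30)]
    tree = build_stage_tree([trial_a, trial_b])
    print(len(tree.children))                # 1: shared first stage
    print(len(tree.children[s1].children))   # 2: diverging second stages

In this toy example, the two trials' identical first stage would be trained once and its result reused, while only the diverging second stages would require separate training, which is the source of the computation savings the abstract describes.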

