Abstract
Evolutionary neural architecture search (ENAS) and differentiable architecture search (DARTS) are two prominent paradigms in neural architecture search (NAS), enabling the automated design of deep neural networks. Continuous ENAS is a framework that leverages the strengths of both, alternating between gradient descent to optimize the supernet and evolutionary algorithms to optimize the architecture encodings. However, continuous ENAS suffers from premature convergence driven by the small model trap, a common issue in NAS. To address this, this paper proposes a self-adaptive differential evolution algorithm for neural architecture search (SaDENAS), which reduces the interference of small models with other individuals during optimization and thereby avoids premature convergence. Specifically, SaDENAS represents architectures in the search space as encodings and uses vector differences between encodings as the basis for its evolutionary operators. To trade off exploration and exploitation, it integrates local and global search strategies and adaptively balances them with a mutation scaling factor. Empirical results demonstrate that the proposed algorithm achieves better performance and superior convergence compared to other algorithms.
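To make the abstract's description concrete, below is a minimal Python sketch of the general idea: differential-evolution mutation over continuous architecture encodings, with a scaling factor F that shifts weight between a best-guided (local) move and a random-difference (global) move. This is an illustrative assumption, not the paper's exact SaDENAS update rule; all names, the adaptation schedule, and the dummy objective are hypothetical.

```python
# Sketch only: DE-style search over architecture encodings with an
# assumed adaptive scaling factor. Not the authors' exact algorithm.
import numpy as np

rng = np.random.default_rng(0)

def de_mutate(pop, fitness, i, gen, max_gen):
    """Produce a mutant encoding for individual i (DE-style)."""
    n = len(pop)
    best = pop[np.argmin(fitness)]          # assume lower fitness = better
    r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
    # Assumed self-adaptation: move emphasis from global exploration
    # toward local exploitation as the generation count grows.
    F = 0.9 - 0.5 * gen / max_gen
    local = best - pop[i]                   # exploit: pull toward the best
    global_ = pop[r2] - pop[r3]             # explore: random vector difference
    return pop[r1] + F * (local + global_)

def crossover(target, mutant, cr=0.5):
    """Binomial crossover; keep at least one gene from the mutant."""
    d = len(target)
    mask = rng.random(d) < cr
    mask[rng.integers(d)] = True
    return np.where(mask, mutant, target)

def dummy_fitness(x):
    """Stand-in for supernet-based validation error (hypothetical)."""
    return float(np.sum(x ** 2))

# Toy usage: evolve 8 encodings of dimension 4 against the dummy objective.
pop = rng.uniform(0, 1, size=(8, 4))
fit = np.array([dummy_fitness(x) for x in pop])
max_gen = 20
for gen in range(max_gen):
    for i in range(len(pop)):
        trial = np.clip(crossover(pop[i], de_mutate(pop, fit, i, gen, max_gen)), 0, 1)
        f = dummy_fitness(trial)
        if f < fit[i]:                      # greedy one-to-one selection
            pop[i], fit[i] = trial, f
print("best fitness:", fit.min())
```

In an actual continuous-ENAS setting, the dummy objective would be replaced by an evaluation of the encoding on the gradient-trained supernet, and the two optimization steps would alternate as the abstract describes.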
| Original language | English |
|---|---|
| Article number | 101736 |
| Journal | Swarm and Evolutionary Computation |
| Volume | 91 |
| DOIs | |
| Publication status | Published - Dec 2024 |
| Publication type | A1 Journal article-refereed |
Keywords
- Differentiable architecture search
- Differential evolution
- Evolutionary computation
- Neural architecture search
Publication forum classification
- Publication forum level 2
ASJC Scopus subject areas
- General Computer Science
- General Mathematics