Searching Stateful Spaces

Optimizing a nonlinear, multidimensional, stateful system is equivalent to searching the space of (performance-affecting) actions and system states.

Recurrent neural networks (RNNs) have proved to be extremely efficient at searching function spaces. But they come with baggage.

For a given stateful transformation Y(t) = F(X(t), S(t)), there’s an RNN space – a function space in its own right, inversely defined by the original system function F.
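To make the setup concrete, here is a minimal sketch (PyTorch, purely illustrative) of an RNN cell standing in for the stateful transformation, with the hidden state playing the role of S(t). The class name, layer sizes, and wiring are assumptions for the sketch, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for Y(t) = F(X(t), S(t)):
# the RNN's hidden state plays the role of the system state S(t).
class StatefulApproximator(nn.Module):
    def __init__(self, input_dim: int, state_dim: int, output_dim: int):
        super().__init__()
        self.cell = nn.RNNCell(input_dim, state_dim)  # updates S(t) -> S(t+1)
        self.head = nn.Linear(state_dim, output_dim)  # reads Y(t) off the state

    def forward(self, x_t: torch.Tensor, s_t: torch.Tensor):
        s_next = self.cell(x_t, s_t)  # next system state
        y_t = self.head(s_next)       # observed output
        return y_t, s_next

# One step of the recurrence on a toy input (all sizes are arbitrary)
model = StatefulApproximator(input_dim=4, state_dim=8, output_dim=2)
x_t = torch.randn(1, 4)
s_t = torch.zeros(1, 8)
y_t, s_t = model(x_t, s_t)
```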

The question then is: how do we search for the optimal RNN? The one that converges fastest and, simultaneously, incurs minimal cost/loss?
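One naive way to picture this meta-search is a random search over candidate RNN configurations, scored jointly on convergence speed and final loss. The sketch below is a toy illustration only: the `train_and_evaluate` callback, the configuration fields, and the score weighting are hypothetical placeholders, not the method described in the paper.

```python
import random

def search_rnn_space(train_and_evaluate, num_candidates: int = 20, seed: int = 0):
    """Rank sampled RNN configurations by convergence speed and final loss.

    `train_and_evaluate(config)` is assumed to train an RNN with the given
    configuration and return (epochs_to_converge, final_loss).
    """
    rng = random.Random(seed)
    best_score, best_config = None, None
    for _ in range(num_candidates):
        config = {
            "state_dim": rng.choice([8, 16, 32, 64]),
            "learning_rate": 10 ** rng.uniform(-4, -2),
        }
        epochs, loss = train_and_evaluate(config)
        # Lower is better on both axes; the 0.01 weighting is arbitrary.
        score = loss + 0.01 * epochs
        if best_score is None or score < best_score:
            best_score, best_config = score, config
    return best_config
```

Random search is only the simplest baseline; the keywords below (reinforcement learning, meta-learning) point to more structured ways of steering this search.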

The ability to quickly search the RNN domain becomes crucial in the presence of real-time requirements (e.g., when optimizing the performance of a running storage system), where the difference between “the best” and “the rest” is the difference between usable and unusable…

  • PDF (full paper)
  • Keywords: RNN, reinforcement learning, hybrid storage, meta-learning, NFL