Searching Stateful Spaces

Optimizing a nonlinear, multidimensional, stateful system is equivalent to searching the space of its (performance-affecting) actions and system states.

Recurrent neural networks (RNNs) have proven extremely efficient at searching function spaces. But they come with baggage.

For a given stateful transformation Y(t) = F(X(t), S(t)), there’s an RNN space – a function space in its own right, inversely defined by the original system function F.
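A minimal sketch may help make the setup concrete: the stateful transformation above, realized as a vanilla RNN cell in plain NumPy. The dimensions, weight names, and toy input are illustrative assumptions, not anything from the paper.

```python
import numpy as np

# Illustrative sketch: y(t) = F(x(t), s(t)) as a vanilla RNN cell.
# All dimensions and weights below are arbitrary placeholders.

rng = np.random.default_rng(0)

DIM_X, DIM_S, DIM_Y = 4, 8, 2                      # input, state, output sizes
W_x = rng.normal(scale=0.1, size=(DIM_S, DIM_X))   # input-to-state weights
W_s = rng.normal(scale=0.1, size=(DIM_S, DIM_S))   # recurrent (state-to-state) weights
W_y = rng.normal(scale=0.1, size=(DIM_Y, DIM_S))   # state-to-output weights
b = np.zeros(DIM_S)

def step(x, s):
    """One application of F: consume input x(t) and state s(t); emit y(t), s(t+1)."""
    s_next = np.tanh(W_x @ x + W_s @ s + b)  # next hidden state
    y = W_y @ s_next                         # observable output
    return y, s_next

# Drive the cell with a random input sequence; each output depends on the
# entire input history through the carried state - the "stateful" part.
s = np.zeros(DIM_S)
for t in range(5):
    y, s = step(rng.normal(size=DIM_X), s)
    print(t, y)
```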

The question then is: how do we search for the optimal RNN? The one that converges fastest while, simultaneously, minimizing the cost/loss?

The ability to quickly search the RNN domain becomes crucial in the presence of real-time requirements (e.g., when optimizing the performance of a running storage system), where the difference between “the best” and “the rest” is the difference between usable and unusable…
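To illustrate what such a search could look like (this is not the paper’s method), here is a naive brute-force search over a tiny RNN architecture space, scoring each candidate by both how fast it converges and how low its final loss lands. The toy task, score weighting, and search ranges are all placeholder assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task (placeholder): predict the running mean of a random sequence.
X = torch.randn(64, 20, 1)                                # 64 sequences, 20 steps
Y = X.cumsum(dim=1) / torch.arange(1, 21).view(1, 20, 1)

class Net(nn.Module):
    def __init__(self, hidden, layers):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.rnn(x)
        return self.head(out)

def train_and_score(hidden, layers, epochs=30, threshold=0.05):
    """Train a candidate briefly; return a joint score (lower is better)."""
    model = Net(hidden, layers)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    first_hit = epochs                    # epoch when loss first dips below threshold
    for epoch in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), Y)
        loss.backward()
        opt.step()
        if loss.item() < threshold and first_hit == epochs:
            first_hit = epoch
    # Joint objective: final loss plus a (placeholder) penalty for slow convergence.
    return loss.item() + 0.01 * first_hit

candidates = [(h, l) for h in (4, 8, 16) for l in (1, 2)]
best = min(candidates, key=lambda c: train_and_score(*c))
print("best (hidden, layers):", best)
```

Under real-time constraints, the brute-force loop above is exactly what one cannot afford; the point of the sketch is only that “optimal” here is a joint function of convergence speed and final loss.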

  • PDF (full paper)
  • Keywords: RNN, reinforcement learning, hybrid storage, meta-learning, NFL

2 thoughts on “Searching Stateful Spaces”

  1. […] Part I of this post stipulates that selecting the optimal neural network architecture is, or rather, can be a search problem. There are techniques to do massive searches. Training a neural network (NN) can be counted as one such technique, where the search target belongs to the function space defined by both this environment and this NN architecture. The latter includes a certain (and fixed) number of layers and number of neurons per layer. The question then is: would it be possible to use a neural network to search for the optimal NN architecture? To search for it in the entire NN domain, defined only and exclusively by the given environment? […]