Automatic estimation of gaze direction is important for certain applications of human-robot and human-computer interaction. Depending on the application, this information may need to be derived in real time from low-resolution visual input, with as much precision as possible. In this paper we present an algorithm for transforming head pose estimates into gaze direction estimates. The main contribution of this study is the clear distinction it draws between head pose and gaze direction. Unlike some previous works in this field, we do not correct the head pose to correspond to a likely attention fixation point dictated by the experiment scenario; instead, we propose a concrete, environment-independent method for this purpose. To transform head pose estimates into gaze direction, a Gaussian process regression model is proposed, and the reasons validating this choice are discussed in detail.
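The head-pose-to-gaze mapping described above could be sketched with a Gaussian process regression model along the following lines. This is a minimal illustration using scikit-learn, not the paper's actual implementation: the kernel choice, angle ranges, and the synthetic pose-to-gaze relation are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic training data (illustrative only): head pose as (yaw, pitch) in degrees.
head_pose = rng.uniform(-40, 40, size=(200, 2))

# Hypothetical ground-truth relation: gaze yaw deviates nonlinearly from head yaw,
# with a small influence from pitch, plus observation noise.
gaze_yaw = (0.8 * head_pose[:, 0]
            + 5.0 * np.sin(np.deg2rad(3.0 * head_pose[:, 0]))
            + 0.1 * head_pose[:, 1]
            + rng.normal(0.0, 0.5, size=200))

# An RBF kernel captures the smooth nonlinearity; WhiteKernel models sensor noise.
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.25)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(head_pose, gaze_yaw)

# Predict gaze yaw (with predictive uncertainty) for a new head pose estimate.
mean, std = gpr.predict(np.array([[15.0, -5.0]]), return_std=True)
print(mean.shape, std.shape)
```

A practical appeal of this choice is that the GP returns a predictive standard deviation alongside each gaze estimate, which a downstream attention module could use to discount unreliable predictions.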