To get a grasp of what our users are doing, I try to watch at least three FullStory screencasts per day. A FullStory screencast captures only what the user sees and the movements of their mouse; it records no sound (and certainly not what the user is thinking). Yet watching one still feels like reading the user's mind. Seeing a user navigate your application, and, even more importantly, seeing their mouse movements, gives very good insight into what they are trying to achieve. What came as a big surprise to me is how commonly users rely on the mouse as an aid for reading information off the screen. Almost all users read with the mouse, which makes FullStory screencasts much more interesting: you get to see exactly what the user is doing, even in our application, which focuses on displaying information rather than capturing it.
I don't know whether the creators of FullStory started building their service with a full understanding of how important mouse movements would be, or whether they realized it only after building some prototypes. Either way, showing user actions through mouse movements and clicks binds disparate screenshots into a coherent narrative that tells a compelling story. And that's something every product must achieve in order to become successful.