Purposeful practice needed for Apple Vision Pro
The Apple Vision Pro makes use of some new input mechanics - gaze and gestures. Apple has implemented them well, and they are meant to be natural enough that users need little help: stumble around with them, given a few hints at the beginning, and they are supposed to become easy and second nature to use.
I'm dissatisfied with my rate of learning from that approach. I want to use purposeful practice to raise my skill level more quickly and confidently, which means I need some way of practicing systematically while being immediately scored on my performance.
One way is to use games that have a scoring mechanism, although usually the score reflects the particular skill of interest only indirectly. Some simple games, however, serve the purpose much better. On Vision Pro, for example, the game Agile 3 works pretty well: the skills involved are just gaze and tap (and attention), and the score of each short round reflects gaze-and-tap skill directly. With a little external bookkeeping (sketched below) I can keep track of my progress.
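The bookkeeping needs nothing elaborate. Here is a minimal sketch in Swift, assuming I simply append each round's score, with a timestamp, to a CSV file; the `logScore` function and the file format are my own invention, not part of Agile 3 or any Apple API.

```swift
import Foundation

// Hypothetical helper: append one round's score to a CSV log so
// progress can be plotted later. Format: ISO 8601 timestamp, score.
func logScore(_ score: Int, to url: URL) throws {
    let line = "\(ISO8601DateFormatter().string(from: .now)),\(score)\n"
    if FileManager.default.fileExists(atPath: url.path) {
        let handle = try FileHandle(forWritingTo: url)
        defer { try? handle.close() }
        _ = try handle.seekToEnd()
        try handle.write(contentsOf: Data(line.utf8))
    } else {
        try line.write(to: url, atomically: true, encoding: .utf8)
    }
}
```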
In the FPS gaming world there are dedicated "aim trainers", such as Kovaaks or AimLab, which are relentlessly focused on increasing mouse pointing and firing skill through purposeful practice. Experienced gamers find that skill levels - scores - increase only slowly, over periods of weeks or more, showing that improving performance after the initial learning period is not a simple task. The trainers are sophisticated, with different sections focusing on subtle details of the aiming skill set and historical scores to track improvement over time.
Agile 3 could start down this road, for example, by adding a setting for target size, with smaller targets requiring more accuracy. Or there could be a set of small (invisible?) buttons immediately surrounding the target, which would allow a finer accuracy grade than a binary hit-or-miss score, as well as an indication of the direction of the error. I understand that for privacy reasons Apple does not give developers access to gaze location directly, only whether a specified target has been hit, which necessitates this kind of indirect measurement; a sketch of the idea follows.
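To make the indirect measurement concrete, here is a rough SwiftUI sketch of the invisible-ring idea. This is my guess at an implementation, not Agile 3's code or a documented visionOS pattern; in particular, I am assuming that a clear view given an explicit content shape remains a valid gaze-and-tap target.

```swift
import SwiftUI
import Foundation

// Hypothetical practice target: a visible bullseye surrounded by eight
// invisible tap zones. A tap landing on a zone rather than the bullseye
// reveals the direction of the error without exposing gaze location.
struct PracticeTarget: View {
    let radius: CGFloat = 30          // bullseye radius, in points
    var onResult: (String) -> Void    // "hit", or a direction like "NE"

    var body: some View {
        ZStack {
            // Invisible zones at 45-degree intervals around the target.
            ForEach(0..<8, id: \.self) { i in
                let angle = Double(i) * .pi / 4
                Circle()
                    .fill(.clear)
                    .contentShape(Circle())   // keep the clear zone hit-testable
                    .frame(width: radius * 1.5, height: radius * 1.5)
                    .offset(x: cos(angle) * radius * 1.8,
                            y: sin(angle) * radius * 1.8)
                    .onTapGesture { onResult(Self.directions[i]) }
            }
            // The visible target itself; a tap here is a clean hit.
            Circle()
                .fill(.blue)
                .frame(width: radius * 2, height: radius * 2)
                .onTapGesture { onResult("hit") }
        }
    }

    // SwiftUI's y axis points down, so positive sin is south.
    static let directions = ["E", "SE", "S", "SW", "W", "NW", "N", "NE"]
}
```

Concentric rings of zones would grade the error magnitude as well; the point is just that both direction and rough distance can be recovered from which zone absorbed the tap.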
A different task would be to move objects around in 3D space, scored on accuracy and time, thus expanding the practiced skill set; a possible scoring scheme is sketched below. Or resizing and moving windows. You get the idea - focus on the skills necessary for effortless and confident use of the interface elements.
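Scoring such a drill needs no gaze data at all, since the app knows where the object ends up. A sketch, with an invented `PlacementTrial` type and an arbitrary weighting of placement error against elapsed time:

```swift
import Foundation
import simd

// Hypothetical scoring for a "move the object onto the goal" drill.
// Lower scores are better; the weights are placeholders to be tuned.
struct PlacementTrial {
    let goal: SIMD3<Float>     // goal position, in meters
    let start = Date.now       // trial begins when the object appears

    func score(droppedAt position: SIMD3<Float>) -> Double {
        let error = Double(simd_distance(position, goal))   // meters
        let elapsed = Date.now.timeIntervalSince(start)     // seconds
        return 100 * error + elapsed   // 1 cm of error costs as much as 1 s
    }
}
```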
I would certainly like to have such a practice lab to refine my interactions with Vision Pro apps. I cannot yet tell how reliable the basic Apple interface really is when my own shortcomings are not an issue; scored practice would help separate system error from user error.
Some questions
I have a lot of questions about the details of the Apple interaction system, as much out of curiosity as any operational need.
Is the gaze-tracking algorithm fixed, or adaptive to the individual? I imagine there is quite a bit of variation in user characteristics, and taking it into account could tighten things up considerably.
Alternatively, are personal settings possible, just as a mouse sensitivity setting is provided now?
What are the statistics on accuracy, reaction time, and so on - all the classic psychophysics?
I would love to see an Apple paper about all this - they have surely done careful studies that inform their decisions about settings. What tradeoffs were chosen, and are we now close to the effective limits, or is there significant improvement to come?