I have been thinking about approaches to evaluating an AR authoring tool
we are developing (albeit slowly).
I want to start by making a list of 'typical' AR applications to set as
benchmark tasks.
For example: "make a 'magic book'-style assembly manual for this
disassembled piece of furniture".
"Make a guide for this historical location. It must include the following
information about each of these items."
Can someone suggest a few? And/or point me to some papers that outline
evaluation tasks used for testing AR systems?
I want to test for two main things.
1. The system's applicability to general AR tasks.
2. The system's effectiveness for designing and implementing an AR project,
compared with another authoring tool or set of tools.
Of course, it would be easy to dream up tasks that show off the
particular features of our system, but I'd like to test its
applicability to more general AR applications.
At the moment, I am building the tool to do the kinds of things I like doing,
but that won't be very useful for what other people want to do,
so I need to make it more general.
Anyway, any suggestions or ideas greatly appreciated.
ATR Media Information Science Laboratories