Low Vision News

For low vision specialists and those who consult them

Journal Article: Functional Tests for Low Vision

I have just read a good paper by Dougherty and colleagues in the August issue of Optometry and Vision Science, describing the development of new functional tests for low vision research.

A key question in low vision rehabilitation is “How useful is intervention x, y or z?” This is surprisingly difficult to answer, even when x is as simple as prescribing magnification.

Historically, we answered this question with easy-to-measure clinical tests, such as visual acuity, contrast sensitivity, reading speed or glare sensitivity. More recently, questionnaires have been used to determine the impact of rehabilitation on vision-related quality of life, using instruments such as the Visual Function Questionnaire (VFQ) or the Massof Activity Inventory.

Dougherty and colleagues have used a third approach: developing a set of real-world tasks that can be performed under controlled conditions to measure visual function. These are:
* Reading rate at set print sizes
* Finding a number in a telephone directory
* Identifying prescription medication from a medicine bottle
* Reading utility bills
* Finding the cooking time on a food packet
* Sorting coins to total a specified amount
* Identifying a playing card
* Recognising facial expression

To investigate the use of these tasks as research outcomes, the authors examined the effect of a single low vision appointment on each measure. They found measurable changes on many of the tests, with the biggest improvements in reading medicine bottle labels and cooking instructions.

Using functional tests to measure visual performance is not new, but the development of a standardised battery of them is an excellent step forward. It would be extremely beneficial for the field if these tests were adopted by other research groups, so that outcomes could be compared between different rehabilitation approaches, in different clinical settings, in different countries.


6 responses to “Journal Article: Functional Tests for Low Vision”

  1. Peter Meijer September 1, 2009 at 1:08 pm

    This is indeed interesting work, although seemingly limited to assessing reading and reading-like visual recognition activities, perhaps mostly relevant to low vision subjects near the legal blindness limit (above or below). What about functional tests for assessing ambulatory vision?

    I do not know of any standardized tests involving, say, negotiating your way without touch between some obstacles such as “randomly” placed chairs and tables, passing doorways, detecting step-downs and holes, etc. That would seem more relevant to judging to what extent someone should use a cane or a guide dog (O&M practices). That is still about low vision, but likely further below the legal blindness limit. This would also be relevant for assessing the compensatory value of still crude artificial vision devices, including retinal implants and sensory substitution devices.

    Thanks,

    Peter Meijer

    • lowvisionnews September 1, 2009 at 6:28 pm

      Hi Peter

      I agree that mobility outcomes are extremely important to measure, but they are difficult to standardise and measure in most clinic or lab settings.

      The absolute gold standard for this is the PAMELA (Pedestrian Accessibility and Movement Environment Lab) facility at University College London. The lab comprises moveable obstacles, kerbs, pavements and so on, with carefully controlled lighting conditions. It has been used for several vision studies, most notably the UCL/Moorfields gene therapy trials (see video 2 at this link: http://content.nejm.org/cgi/content/full/358/21/2231).

      Other, slightly cheaper, approaches include virtual reality environments (there is one in Eli Peli’s lab at Harvard, and Kathy Turano has used this technique too) and taking people to a real road junction and asking them when it would be safe to cross (see e.g. Duane Geruschat’s work at Johns Hopkins, or Shirin Hassan’s at Indiana).

      I think the work by Dougherty and colleagues is a step in the right direction (away from using visual acuity as our only outcome) but there is further to go!

      Best wishes
      lowvisionnews

  2. Peter Meijer September 1, 2009 at 8:36 pm

    Thanks “lowvisionnews”! I had seen that gene therapy trial video before, but had not been aware that it was part of a dedicated lab. Nice. One can only hope that it proves possible to specify a roughly office-sized space that can be replicated at acceptable cost in labs around the world.

    Virtual environments can have some value for assessing existing visual skills, but in training for new artificial vision devices one needs natural tactile and proprioceptive feedback to guide learning and sensorimotor recalibration. My specific interest would be in visually simplified, non-cluttered environments with good contrast between objects and background, for training and evaluation purposes when learning to exploit occlusion, parallax, visual perspective, shading, etc. for ambulatory vision.

  3. Peter Meijer September 1, 2009 at 8:57 pm

    P.S. I have a very basic online example of a virtual environment for navigation at

    http://www.seeingwithsound.com/3dmaze.htm

    but as I indicated, the main thing missing here is tactile and proprioceptive feedback (and of course even this simple maze is already far too large and costly to build in reality).

  4. G F Mueden September 1, 2009 at 9:53 pm

    Delighted to see this. My failing sight has found fault with the legibility of much that I must read: newspapers, magazines and reports. I have been searching for legibility testing, grading and standards, but with few results. It was pressed upon me by professionals that there are too many variables, and that we have a poor handle on what the population can read other than as measured with a high-contrast near vision test card. That, of course, is unrealistic because it omits the effect of contrast sensitivity.

    I am working on a way to do a census of the population (by sampling), and while it is imperfect, it is better than nothing. My test card will be made up of real printed samples of all sorts (fonts, sizes, colors, backgrounds, etc.), more than a hundred per card, as a self-test to be done by individuals, using glasses if they use them, doing it where they do most of their reading, and indicating the samples that they can read without difficulty.

    The card would be distributed widely to include all sorts of people.

    The individual tests would be tabulated, and for each sample the percentage finding it acceptable would be determined.

    The graded examples would be a catalog of printing, with the percentage of the population that could read each example. This could form the basis for setting minimum standards of legibility and serve as a guide for publishers.

    It is not a six-sigma problem; it will be a real, practical test of what people can read. I would like to see McGraw-Hill do it. It would be a wonderful contribution to the larger community and might even make their magazines easier to read.
    Lighthouse might do it. The National Eye Institute might do it. [Ought to do it?]

    Let’s talk about it. gfmueden@verizon.net

  5. G F Mueden September 1, 2009 at 10:15 pm

    Please forgive my errors. The font in this text entry box does not accommodate my software’s choice of a bolder one, as the published result does, so there are typos and stray punctuation marks that ought to be fixed.

    I might add that my test could even have diagnostic value when a person’s cluster has outliers from the center.

    ===gm===
