Low Vision News

For low vision specialists and those who consult them

Category Archives: Devices

The iPad for people with visual impairment

About a year ago, I wrote some blog posts about the Sony Reader and the Kindle for people with visual impairment. This was subsequently written up as a letter in the British Journal of Ophthalmology. I’ve also written briefly on iBooks for the iPhone.

Of course, the big new gadget for electronic books is the Apple iPad. I still haven’t bought one (I can’t really imagine what I’d use it for) but several patients I have seen recently really like it as a low vision reading device. I finally had the chance to measure some screen parameters on a friend’s iPad yesterday.

The good news is that, unlike the electronic-paper-based devices, the screen has very high contrast: its maximum luminance is approximately 270 candelas per square metre, and its minimum is around 0.5 cd/m². This means that the maximum Michelson contrast which can be displayed is very nearly 100%, making this a far better option for reading with reduced contrast sensitivity than a newspaper (Michelson contrast around 70%), a paperback book (about 75%) or an electronic paper device such as the Kindle (around 60%).
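As a quick sanity check, the Michelson figure above follows directly from the two luminance measurements:

```python
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Michelson contrast: (L_max - L_min) / (L_max + L_min)."""
    return (l_max - l_min) / (l_max + l_min)

# The iPad luminances measured above, in cd/m^2:
ipad = michelson_contrast(270.0, 0.5)
print(f"iPad: {ipad:.1%}")  # ~99.6%, i.e. "very nearly 100%"
```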

While the iBooks application does not support reversed contrast, many third-party readers (such as Stanza) allow the contrast to be reversed, so that text can be displayed as white on a black background. This format is often preferred by people with media opacities, such as those who have cataract or corneal dystrophies, and by those who suffer from increased glare (such as people with retinitis pigmentosa). These readers also allow increased text size: the maximum x-height on iBooks is 3.5mm, equivalent to about 2.4M (about two and a half times the size of newsprint).
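For readers unfamiliar with M notation, the conversion used above can be sketched. This assumes the standard clinical definition that a 1.0M letter subtends 5 minutes of arc at 1 metre (roughly 1.45mm of x-height), which agrees with the figures quoted here:

```python
import math

# x-height of a 1.0M letter: 5 arcmin subtended at 1 m (clinical convention)
MM_PER_M_UNIT = 2 * 1000 * math.tan(math.radians(5 / 60) / 2)  # ~1.45 mm

def m_size(x_height_mm: float) -> float:
    """Convert a measured x-height in mm to M units."""
    return x_height_mm / MM_PER_M_UNIT

print(f"{m_size(3.5):.1f}M")  # the 3.5mm iBooks maximum -> 2.4M
```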

I have also been surprised by who has used an iPad: even some of those who refuse to use a computer have mastered downloading and reading books on one. It’s also useful for viewing photographs and web pages with the ‘pinch to enlarge’ touch gesture.

For visually impaired users, I think the iPad certainly beats the competition for now. However, as someone with good contrast sensitivity and visual acuity who likes physical books and newspapers I’m not sure I’ll be buying one yet.


iBooks on the iPhone – quick review

I have previously written on the use of the Sony Reader and Amazon Kindle as low vision aids. Of course the new ‘must-have’ gadget is the iPad. I have had several patients who have mentioned how useful their iPad is and how easily they can read with it, but have not yet had the chance to systematically play with one.

Apple have also rolled out the iBooks software to the iPhone. Again, I have not yet had the chance to look at this with a photometer, but here is a ‘quick review’ of what iBooks is like for the visually impaired user.

On the iPhone, there is a choice of nine magnification levels and six fonts (Baskerville, Cochin, Georgia, Palatino, Times New Roman, and Verdana). There is also a Sepia mode, which reduces the screen contrast but may also reduce glare.

The largest font size on my iPhone 3GS is equivalent to N25 (3.2M): three times the size of newsprint. As a very rough guide, if you can struggle through newsprint you should be able to read text of this size fluently. At this size you get about 12 words per page, and the screen refresh is fairly quick. It is possible to display text in reversed contrast, although this has to be set in the iPhone settings rather than in the app itself.

Unfortunately, none of the fonts available are proportionally spaced, and the maximum text size of N25 still isn’t great for people with poor vision. However, the backlit screen and increased font size compared to other eReaders means that iBooks on the iPhone appears to be a better choice for people with reduced contrast sensitivity or visual acuity. Of course, every person with low vision is different and I would certainly suggest trying each device before committing to buying one.

A full report on iBooks on the iPhone and iPad will follow in time…

iPhone magnifier app

My Apple iPhone is one of my favourite gadgets, and I was interested to see that there is now a magnifier app known as ‘iCanSee’ available through iTunes for 59p (or 99¢).

It has a zoom function which claims to magnify up to 4x, although when I tested it, the maximum magnification setting only equated to 2x at the closest working distance that kept the image sharp. The autofocus works pretty well on my 3GS (I don’t think other versions of the iPhone have focusing cameras, so the app may not work as well on those). On my phone it focuses down to a minimum distance of about 10cm, and the motion blur when the phone is moved along a line of text isn’t bad at all. I should mention that it’s pretty much unusable unless you enable full-screen mode. It’s also pretty good for distance magnification, although it’s much harder to hold a phone steady than a monocular telescope.
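It is worth noting that the ‘2x’ figure undersells the app slightly: clinicians usually quote near magnification relative to a 25cm reference distance, so a 2x zoom viewed at the 10cm working distance measured above gives a larger equivalent magnification. A minimal sketch (the 25cm reference is the usual clinical convention; everything else comes from the figures above):

```python
REFERENCE_DISTANCE_CM = 25  # conventional reference distance for near vision

def equivalent_magnification(zoom: float, working_distance_cm: float) -> float:
    """Transverse (zoom) magnification x relative-distance magnification."""
    return zoom * (REFERENCE_DISTANCE_CM / working_distance_cm)

# 2x zoom held at 10cm: equivalent to 5x at the standard 25cm distance
print(equivalent_magnification(2.0, 10.0))  # 5.0
```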

It’s a shame it doesn’t have any image processing abilities, like contrast reversal or contrast enhancement, as I imagine the iPhone processor should be fast enough to allow this. It’s also a shame that higher magnification isn’t possible.

At present I think the app will be more useful to presbyopes who have forgotten their reading glasses than to people with visual impairment. However, I think we will see more sophisticated software like this in the future: the next generation of electronic magnifiers will probably ‘piggyback’ onto existing technology rather than requiring new hardware. After all, I’ve yet to see an electronic magnifier company produce anything as attractive as an iPhone…

Peripheral prisms for hemianopia

Nearly 1% of people over 50 years of age have visual field problems following a stroke or brain injury. The classic pattern of this visual field loss is losing half of the visual field from each eye, which is known as homonymous (meaning ‘the same side’) hemianopia (meaning ‘a loss of half of the visual field’). The effect is an inability to see anything to the left or right of the central point of the vision. As you can imagine, this has a significant impact on reading, navigating, and walking in crowded environments.

One strategy to overcome this is to make more eye movements into the blind side of the vision: for example, if the right side of the visual field is missing, the person makes repeated head and eye movements to the right in order to move the region of good vision to that side. However, this is a difficult strategy to learn. An obvious alternative approach is to use prism lenses to move the image of objects falling on the non-seeing side into the healthy part of the visual field. The problem is that full-field prisms create central double vision, which can be more disabling than the missing visual field itself.

For the past 10 years or so, Dr Eli Peli from Harvard Medical School has been using an alternative approach – a peripheral prism system which is fitted on the top and bottom of a spectacle lens, leaving the central region clear. The idea of this system is that things which fall into the top or bottom of the nonseeing visual field are moved into the seeing region, alerting the person to make a head or eye movement to examine what is there.
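To give a feel for the size of the image shift these prisms produce: a prism of P prism dioptres displaces an image by P centimetres at 1 metre, i.e. by atan(P/100) degrees. A short sketch (the 40-dioptre example power is my assumption for illustration, not a figure from the article):

```python
import math

def deviation_degrees(prism_dioptres: float) -> float:
    """Angular deviation of a prism: P dioptres = P cm displacement at 1 m."""
    return math.degrees(math.atan(prism_dioptres / 100))

# A hypothetical 40-dioptre peripheral segment shifts the image by ~22 degrees,
# bringing objects well inside the blind hemifield into the seeing field:
print(f"{deviation_degrees(40):.0f} degrees")
```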

A report on this system in Archives of Ophthalmology last year found that 74% of people fitted with the prisms were still using them six weeks after they were dispensed, and that about half were still using them one year later. People with hemianopia report these glasses as being particularly useful for avoiding obstacles at home, in shops and shopping centres, and when walking in unfamiliar environments.

More information on this system can be found at www.hemianopia.org.

It is good news that peripheral prism spectacles seem to be useful for walking and navigation for a reasonable proportion of people with hemianopia. Unfortunately, they are less useful for reading and computer work, which in my experience is a very frequent complaint of people with this type of visual field loss. However, given that 1% of the older population has hemianopia, and that the population is ageing, I would not be surprised if adaptive devices to help reading with visual field loss are developed in the future.

Telescope training: is there evidence?

At the moment there is lots of interest in implanted miniature telescopes for people with low vision (where the lens inside the eye is surgically replaced with a magnifying telescope). I will discuss these in another post, but this topic started me thinking about conventional, hand-held monocular telescopes, and the relative benefits of these.

In particular, I am interested in the role of training people to use these telescopes. There are three key things telescopes can be used for: spotting (“what is the name on this street sign?”); tracking (“what is the number on this moving bus?”); and scanning (“I know there is a sign somewhere near here, where is it?”).

Most people can probably manage to spot something with a telescope with practice and no training, as long as the principles of telescopes are explained to them (e.g. you need to adjust the length of the tube to focus it; you need to make sure you’re standing still when you’re using it; you need to hold it as close to your eye as you can). However, more complex tasks such as tracking and scanning are difficult to perform without some guided practice or training.

In the UK, some local charities for the visually impaired perform this training, but there are still plenty of people who do not receive training and who just “pick up their telescope” and go home and use it. In the USA telescope training is commonly performed by rehabilitation workers, but there are still people with telescopes who have not received training.

Unless my quick literature search missed it, there does not seem to have been a systematic study on telescope training to answer some key questions, such as:
– what can people with low vision do with a telescope without training?
– how much training is needed to be able to perform certain tasks with a monocular telescope?
– how much better is performance with a telescope once training has been performed?

Once we have answers to questions like these, we can make a much stronger case to funding agencies that device training should be made available to more people. This would also make a nice project for someone looking to do a Masters or PhD in vision rehabilitation.

Aesthetics of low vision aids

As a low vision clinician, one of the most dispiriting experiences is demonstrating that a telescope or other low vision aid can enable a task to be performed, only to hear “yes, but I don’t think I’d use it” (or worse, laughter when you show someone a device and explain it may be useful).

It is particularly difficult when children accept a device but their parents are reluctant to let the child use it. In some cases this is because they think that it will lead to their child being bullied; in others because they think their child will become dependent on it; and sometimes because it will change other people’s perception of their child.

There are obviously many reasons which underlie this response, such as the psychological reactions to sight loss (accepting that extra help beyond ‘normal’ glasses is required is a huge, almost life-changing adjustment), but I think some responsibility must lie with device manufacturers.

Electronic devices are great in this regard: systems like the SenseView and Compact+ look more like PDAs or video game players than assistive devices. This will only improve as technology advances: at Envision, Bob Massof showed images of the earliest LVES head-mounted electronic magnifier, and it was ten times the size of today’s best devices.

Whilst optical devices must by their nature be fairly big and bulky, that doesn’t mean they can’t look cool as well. I would welcome an illuminated stand magnifier that didn’t look like a child’s torch, or an increase in the number of pleasingly designed telescopes like the Eschenbach microlux. I hope that device manufacturers think more about this.

There won’t be any updates to lowvisionnews for the next ten days or so, but I will be back with some more scientific posts then…

What is the future of assistive devices for the visually impaired?

At the forthcoming Envision conference, a pre-conference discussion will be held on “Current Trends in Low Vision and Vision Rehabilitation Research: Where and How Should Scientists be Focusing their Efforts?”. Unfortunately I will not be at Envision until after this session has taken place, but it led me to think about what may happen in the future with assistive devices for people with low vision. Here’s what I came up with…

One of the most frequent complaints I hear from people in clinic is “I walk past my friends in the street and they think I’m being rude because I don’t recognise them”. It may sound a bit science-fictiony but I see no reason why we can’t have a small assistive device which identifies faces and communicates to the user who the face belongs to: “I think David is walking towards you, on the right hand side”. The new version of iPhoto on the Mac has a pretty good face identification system which appears to have some Bayesian learning incorporated: I am sure this technology could be adapted to become a low vision aid.

The possibilities are endless. It could include some expression recognition software, and could be linked to your diary on your iPhone: how about a camera mounted on specs with an earphone saying “David is coming towards you. It was his birthday last week. He looks pretty mad.” Then the user can choose whether or not to pretend they haven’t seen David!

I’m sure readers have other ideas for the future of devices for low vision users… why not share your thoughts here? Who knows, maybe your ideas will become reality!

Journal article: A versatile optoelectronic aid for low vision patients

There is an interesting article in this month’s Ophthalmic and Physiological Optics by María Dolores Peláez-Coca, Fernando Vargas-Martín and colleagues which presents an electronic low vision aid that performs image processing on the fly (i.e. in real time). The system is called SERBA (an acronym from the Spanish for ‘Reconfigurable Electro-optic System for Low-vision Aid’). The device fits on a baseball cap for portable use and is relatively inconspicuous.

One of the manipulations it performs is a digital zoom; however, I am more impressed by the augmented view strategy which it incorporates. This enables the user to see a compressed view superimposed upon the “real” view of the scene. The idea is that someone with peripheral visual field loss (as in glaucoma or retinitis pigmentosa) can see a compressed view of a scene within the intact visual field, and can use this view to locate objects of interest. The area of interest can then be seen with the residual field.

The system was shown to expand the visual field by between 1.7 and 4.1 times, as assessed on a tangent screen test.
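These expansion factors follow naturally from the minification applied: if the camera’s view is compressed by a factor m before being overlaid on the intact field, the effective field covered is roughly m times the residual field. A deliberately simplified sketch (the 10-degree residual field is an illustrative assumption, not a figure from the paper):

```python
def effective_field(residual_field_deg: float, minification: float) -> float:
    """Effective field when a view minified by the given factor is
    overlaid on the residual field (a simplification of augmented view)."""
    return residual_field_deg * minification

# e.g. a 10-degree residual field with 4x minification covers ~40 degrees
print(effective_field(10, 4))  # 40
```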

Of course the real test would be to see how useful it is for people with reduced visual fields in the real world: for catching a bus or watching a sporting event, say. However the lab tests of this system are encouraging and I think it will be very interesting to follow the progress of the SERBA.

USA visit report 2/3: Brainport and the OCT-SLO

During my stay in the US, I spent a day visiting Drs Aries Arditi and Bill Sieple at Lighthouse International in New York. Lighthouse is a very well established centre for low vision rehab, advocacy and research (and also includes dance and music studios for the visually impaired, a pre-school, a reading centre, a braille unit, and many other services). Whilst there I saw two very interesting devices.

First, I had a go with a Brainport unit. This system translates visual information (captured through a camera mounted onto spectacles) into electrical pulses on a 20×20 grid. This grid is placed into your mouth, so you can feel the shape of the scene you’re looking at through impulses on your tongue. I must admit that it just made me dribble like a village idiot, but apparently that reflex subsides after time, and people can differentiate large letters with this system (after training). I can see that this may be a useful adjunct to another mobility aid for people with no vision (it could, for example, help find a high contrast doorframe in a plain corridor).

The second instrument I saw was the OTI/OPKO OCT-SLO. This combines an Optical Coherence Tomography (OCT) device with a Scanning Laser Ophthalmoscope (SLO). OCT creates cross-sectional images of the retina (it can detect subretinal fluid, for example, which is very important in differentiating wet from dry macular degeneration). It also measures retinal thickness. The SLO captures a high quality greyscale retinal image even without pupil dilation. What makes this system different is that it also includes an OLED display for the subject to observe. This means it can be used for retina-specific microperimetry: it can test the function of specific parts of the retina. It even includes a module to check visual acuity at different retinal areas (although it can’t measure acuity better than 20/70 (6/21) due to pixel resolution).

It’s a nice instrument and would be good to have in a clinical environment where you are interested in structure and function. Add-on modules enable you to do anterior chamber and optic disc imaging as well. The image quality and display quality are good: far superior to the Nidek MP-1 microperimeter, for example. It appeals to me less as a research instrument, as the software is locked down and you can’t make any changes to it yourself (for example, you can’t switch the VGA input as you can on the MP-1). Also, the visible raster of the SLO imaging system is quite distracting when you sit in the patient’s position.

Anyway, it was an interesting visit to the Lighthouse. Thanks to Aries and Bill for hosting my visit!

USA visit report 1/3: Indoor navigation aids

I have spent the last week in the US on a work trip to the Low Vision Research Labs at the University of Minnesota, where I do a little bit of work with Dr Gordon Legge (a pioneer of low vision research). It has been a great trip: aside from some good collaboration, I managed to visit a couple of different research labs in the US and I’ll report on those here. As an aside, it was great being in the Twin Cities – they are two of my favourite cities in the world. In between work activities I managed to go to the Taste of Minnesota festival (where I had the local delicacy of pork chop on a stick), to lots of good restaurants, and to catch up with friends who live in Minneapolis.

One of the people who work in the Low Vision Research Lab is Paul Beckmann. Dr Beckmann’s particular interest is in indoor navigation for the visually impaired and blind. Whereas outdoor navigation can be assisted with GPS and similar systems, indoor navigation remains a problem for many people with low vision – particularly in unfamiliar environments like office buildings, hotels or museums. Dr Beckmann has been developing a “magic flashlight” system which finds tags around the walls of buildings and gives an audio description to the user of the location and nearby features over an earphone. It also incorporates options to get further information, so for example you may find you are at room 121, then that this is the office of Mrs Jones, then that Mrs Jones is the human resources manager with responsibility for clerical staff.

Although the system is still in prototype stage, it is already very impressive. Paul is currently piloting the system for users with low vision and people with no vision. If you are interested in volunteering to trial this device and live in the Twin Cities area I’m sure Dr Beckmann would be delighted to hear from you!