Haptic Display Glove

Akshay Baweja
Dec 16, 2019

Introduction

The haptic glove display is designed to act as a sensory substitution device. Primarily, it communicates navigation directions from point A to point B as vibration patterns to a person riding a bike or driving a car. The display comprises 16 haptic feedback motors placed on a glove that relay information to the user as static, spatial, and sweeping haptic feedback. The feedback also varies in intensity to communicate soft or hard sensations. Moreover, the glove can act as an extension of our present sensory system, letting us perceive data directly on the body without diverting visual attention to screens. Adding a 9-DoF IMU to the glove would enable its use in virtual reality applications, allowing bidirectional communication between the system and the user. Other uses include data perception, music perception, interactive video feeds, and a haptic display for vision-impaired people.

Concepts

Sensory Substitution

The idea of sensory substitution was pioneered by Paul Bach-Y-Rita in 1969 in his experimental project, Vision Substitution by Tactile Image Projection. Sensory substitution can be defined as the conversion of the characteristics of one sensory modality into stimuli of another sensory modality. In this research, Dr. Bach-Y-Rita modified a dental chair and added a twenty-by-twenty array of solenoids to its back. A television camera recorded the objects presented in front of it, and a commutator converted the recorded images from the camera feed and mechanically projected the objects onto the skin of the backs of blind subjects, who were asked to guess what was being projected. Bach-Y-Rita conducted the experiment on a total of six blind subjects. Initially, the subjects could differentiate between vertical, horizontal, diagonal, and curved lines. After a couple of hours in the chair being trained on different patterns, they could differentiate between shapes such as a square, triangle, and circle, and after approximately a day of training they could distinguish much more complex objects such as a telephone, a cup, and a toy horse. With repeated presentation, the latency, or time-to-recognition, of these objects fell markedly.

Figure 1. Vision Substitution by Tactile Image Projection by Dr. Paul Bach-Y-Rita in 1969. The T.V. camera records the object and sends the stream to a commutator, which translates the image into the motion of vibrators, projecting the image onto the skin of the back of the user sitting in the chair.

According to neuroscience, the brain communicates via a combination of electrical and chemical signals, together known as electrochemical signals. These signals can be termed the language of the brain and body: every bit of information exchanged in the body takes this form. The senses, such as the eyes (vision), skin (touch), ears (sound), and tongue (taste), perceive the surrounding reality and translate that perception into electrochemical signals passed on to the brain. This suggests that the brain can be considered a general-purpose computing device that processes incoming information and correlates it with desired actions or motor movements. It also suggests that every sensory organ is analogous to a plug-and-play device that generates electrochemical signals as output.

The perception of reality can differ between beings; we as humans perceive reality as what we can sense and process. For a sighted human, reality is a world of colors and reflections, sound (speech and noise), touch, taste, and smell, and the correlations among them. For a blind person, reality is entirely different: it contains no colors or reflections, and the correlation between visuals and the other senses does not exist because it cannot be experienced. Across the animal kingdom, reality changes completely from one animal to another. For example:

  • A bat uses echolocation: it generates sound waves and perceives the reflections to locate obstacles while navigating. A bat's reality is built from reflected sound.
  • A snake uses heat pits, the ability to sense infrared thermal radiation, which lets it see in the absence of light and detect warm objects from several meters away. For snakes, the world is the intensity of thermal radiation from different objects.
  • Birds use magnetoreception, sensing Earth's magnetic field to navigate and find direction. They perceive the world through magnetic fields.
  • A dog relies on odor receptors to find its way. A dog's reality is built from odors and the correlations between them.

Losing a sense doesn't restrict us from perceiving the world; it remodels our reality. The brain still receives information from the remaining senses and reconstructs its correlations with motor movements to incorporate the modified electrochemical signals. Similarly, in sensory substitution, the brain's interpretation of the substitute sense improves with continued use.

Cortical Homunculus

A cortical homunculus is a distorted representation of the human body, based on a neurological “map” of the areas and proportions of the human brain dedicated to processing motor functions, or sensory functions, for different parts of the body.

The brain maps each sensory receptor onto the cortex rather than considering the area of the body where the sensor is located. The more receptors there are in a given area of skin, the larger that area’s map will be represented on the surface of the cortex. As a result, the size of each body region in the homunculus is related to the density of sensory receptors.

Figure 2. Sensory Homunculus of a human body illustrating how brain maps different sensory organs of the body according to their allocated proportions in the cortex.

Figure 2 illustrates that the hands dominate the sensory homunculus proportionally compared to other parts of the human body. Hence, we can conclude that the hands are the most sensitive parts of the human body. Theoretically, hands should have much better spatial resolution and sensitivity than other parts of the body.

In medical science, graphesthesia is the ability to recognize writing on the skin purely by the sensation of touch. Graphesthesia testing is used to screen for certain neurological conditions, such as lesions in the brainstem, spinal cord, sensory cortex, or thalamus. The examiner writes single numbers on the patient's palm with a toothpick, and the patient verbally identifies the figure that was drawn. The test is performed on the hand because the human hand has better spatial resolution than the rest of the body.

System Design

Learning and Design Goals

The goal was to build a feedback device that effectively communicates navigational information to the user, primarily bike riders and car drivers, helping them perceive navigation directions without their primary senses, i.e., vision and hearing, being distracted by screen-based displays or audio feedback.

The broader objective of this feedback device is to test the spatial resolution of the human hand and let the user consume data and expressions that were otherwise represented through screen-based visuals. The notion of a non-screen-based display enables the user to perceive reality from a new perspective.

Considering the key concepts discussed above, i.e., sensory substitution, graphesthesia, and the fact that the hand dominates the sensory homunculus, a glove with haptic feedback was designed and implemented. The glove is used to test the spatial resolution of the hand and to justify the idea of a wearable haptic display, as well as to experiment with how the perception of information varies with feedback intensity and with the interval between two points of information.

Technical Implementations

The glove is conceptualized as a 4x4 display comprising disc vibration motors as haptic feedback devices. The matrix is spread across the hand, terminating before the wrist line begins.

Figure 3 (a). Proposed position and placement of disc vibration motors on hand. Each circle in the above illustration represents one disc vibration motor with its corresponding identification number.

To experiment with communicating navigational directions to the user, four directions, namely north, south, east, and west, are conceptualized; each is indicated by rapidly switching different vibration motors on and off.
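As a sketch of how such direction patterns might be represented in software (the exact frame sequences here are illustrative assumptions, not the glove's actual firmware tables), each direction can be encoded as a sequence of 4x4 frames, where each frame lists which of the sixteen motors are active:

```python
# Illustrative direction patterns for the 4x4 motor matrix.
# Motors are numbered 1-16 row by row, as in Figure 3(a).
# Each frame is a set of active motor numbers; a direction is a
# sequence of frames played in order (e.g., an upward row sweep
# for "north"). The sweep orders below are assumptions.

def row(r):
    """Motor numbers for row r (0 = top row, nearest the fingers)."""
    return {r * 4 + c + 1 for c in range(4)}

def col(c):
    """Motor numbers for column c (0 = leftmost column)."""
    return {r * 4 + c + 1 for r in range(4)}

DIRECTIONS = {
    "north": [row(3), row(2), row(1), row(0)],  # sweep bottom to top
    "south": [row(0), row(1), row(2), row(3)],  # sweep top to bottom
    "east":  [col(0), col(1), col(2), col(3)],  # sweep left to right
    "west":  [col(3), col(2), col(1), col(0)],  # sweep right to left
}
```

Playing a direction then amounts to turning on each frame's motors in turn, with a short delay between frames.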

Figure 3 (b). Initial implementation and layout. The right half of the image shows the planned layout of the vibration motors and the placement of the Arduino Nano.

Apart from directions, a basic figure indicating circular motion is also conceptualized to be presented to the user via the vibration motors, a key motion-perception illustration in testing the spatial resolution of the hand. Another key point to experiment with is the time interval between two frames, i.e., at what frame rate the skin starts to perceive the shifting vibration of the motors as if something is being drawn on the user's hand.

The glove implements a total of sixteen disc vibration motors in a 4x4 matrix, equidistant from one another vertically. This vertical placement is calculated to ensure that the vibrations generated by two adjacent motors do not interfere with each other.

Figure 4. Schematic for connecting 16 vibration motors (M-1 to M-16) to the Arduino Nano

The sixteen motors are individually connected to an ATMega328 microcontroller, an 8-bit AVR RISC-based microcontroller with 32 KB of ISP flash memory, 1 KB of EEPROM, and 2 KB of SRAM. The microcontroller receives commands and the motor-control data stream over Universal Serial Bus (USB). The circuit can also connect to external communication modules such as Bluetooth and WiFi, which in turn connect to a primary computing device such as a mobile phone over one of these protocols.

Software Implementation

Figure 5. Implementation Block Diagram

Each vibration motor can be varied in intensity from a very soft to a very hard vibration. The intensity depends on the on time of the vibration motor: the lightest corresponds to an on time of 40 ms and the hardest to an on time of around 250 ms.
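As a rough sketch of this relationship (the 40 ms and 250 ms endpoints come from the text; the linear interpolation between them is an assumption for illustration), a normalized intensity could be mapped to an on time like this:

```python
# Map a normalized intensity (0.0 = lightest, 1.0 = hardest) to a
# motor on time in milliseconds, using the 40 ms - 250 ms range
# described above. Linear scaling is an assumption; the actual
# perceived intensity curve may be nonlinear.

def intensity_to_on_time_ms(intensity):
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be between 0.0 and 1.0")
    return 40 + intensity * (250 - 40)
```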

The intended gesture is split into frames, which are then translated into a linear string of characters whose byte values indicate vibration intensity, defining a range from 1 to 255 for each disc vibration motor. The string begins with motor 1 and proceeds incrementally up to motor 16, as illustrated in Figure 3.

This string of data is enclosed between ‘*’ (asterisk) characters, which serve as start and end indicators for data reception on the microcontroller, telling it when to start and stop receiving the string. The microcontroller uses the incoming string of sixteen characters to determine the vibration intensity for each disc vibration motor by reading each character's byte value.
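On the host side, framing one set of sixteen intensities into this '*'-delimited packet could look like the following sketch (the function names are illustrative, not from the project's code; the byte layout follows the description above):

```python
# Host-side sketch of the '*'-delimited packet format described above:
# one byte per motor (motor 1 first), values 1-255, wrapped in '*'.

def encode_frame(intensities):
    """Pack sixteen motor intensities into a '*'-delimited packet."""
    if len(intensities) != 16:
        raise ValueError("expected exactly 16 intensity values")
    if any(not 1 <= v <= 255 for v in intensities):
        raise ValueError("intensities must be in 1..255")
    return b"*" + bytes(intensities) + b"*"

def decode_frame(packet):
    """Inverse of encode_frame: recover the sixteen intensity values.
    Note: the fixed 18-byte length resolves any ambiguity with an
    intensity value of 42, which is the byte value of '*' itself."""
    if len(packet) != 18 or packet[:1] != b"*" or packet[-1:] != b"*":
        raise ValueError("malformed packet")
    return list(packet[1:-1])
```

A packet like this would be written to the glove's serial port (e.g., with pySerial's `Serial.write`) at each frame interval.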

Figure 6. The motion of indicating a ‘move straight’ gesture. The time interval between each frame is 120 ms and on time for each frame is 100 ms.

Figure 6 illustrates the different frames that sum up to a ‘move forward’ gesture as experienced by the user. The on time for each frame is 100 ms, while the delay between frames is 120 ms. The character received by the microcontroller for each motor to be turned on is ‘d’ (lowercase letter d), which has an ASCII value of 100.
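Following this description, the frames of the ‘move straight’ gesture can be sketched as successive rows of motors firing at intensity 100 (‘d’). The specific row order below is an assumption, since Figure 6 itself is not reproduced here:

```python
# Sketch of the 'move straight' gesture: a row-by-row sweep at
# intensity 100 (ASCII 'd'). Each frame is the 16-character string
# sent to the microcontroller; chr(1) is used for motors that stay
# off, since 1 is the minimum of the 1-255 range described above.

ON = chr(100)   # 'd' -> on time of 100 ms
OFF = chr(1)    # minimum intensity, treated as off

def sweep_frame(active_row):
    """16-char frame with one active row (0 = top, 3 = bottom)."""
    return "".join(ON if m // 4 == active_row else OFF for m in range(16))

# Assumed bottom-to-top sweep; per the text, each frame is shown
# for 100 ms with 120 ms between frames.
FRAMES = [sweep_frame(r) for r in (3, 2, 1, 0)]
```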

Initial User Testing for Haptic Glove. User — Sheenam Khuttan

Aesthetic look and feel

For a disc vibration motor to be felt on the skin even when the slightest vibration is generated, it must maintain good contact with the skin; the less space and material between skin and motor, the better the vibration is experienced. In addition, a glove stretches whenever it is pulled onto the hand.

Considering the above facts, the haptic display glove is made of Dri-FIT activewear material. Dri-FIT's high-performance microfiber construction supports the body's natural cooling system by wicking away sweat and dispersing it evenly across the surface of the garment so that it evaporates quickly. This also makes the glove a stretchable yet tight fit on the hand.

Figure 7. Haptic display glove. (a) shows the glove with exposed layer of electronics while (b) shows the glove with electronics being concealed under layer of fabric giving the glove look and feel of a normal glove.

Despite having electronic components laid out across the hand's surface, the glove is designed to look like any normal glove. To hide the electronics, another layer of fabric is stitched on top of them, concealing the electronics layer and giving the glove a normal look and feel.

Results

Video illustrating use case for the haptic glove to receive navigation directions while riding a bike

Human vision is a cognitive process: we “see” with our brain, not our eyes. For the brain to interpret a series of images as a motion picture, at least sixteen discrete images must be displayed per second. Below that, the brain perceives them as still images; above it, the brain perceives a motion picture.

Similarly, to “see” through the skin, at least seven discrete frames must be projected on the skin per second. Below that, the brain interprets the haptic feedback as static vibrations; above seven frames per second, the brain perceives it as motion across the skin.
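In timing terms, this seven-frames-per-second threshold corresponds to a maximum interval of roughly 143 ms between frame starts, which is consistent with the 120 ms frame delay used for the gestures above. A trivial sketch:

```python
# The minimum tactile "frame rate" for perceived motion, per the
# user-testing results above.

MIN_FPS = 7
MAX_INTERVAL_MS = 1000 / MIN_FPS  # ~142.9 ms between frame starts

def perceived_as_motion(frames_per_second):
    """True if the sweep should read as motion rather than as a
    series of static vibrations, per the threshold found in testing."""
    return frames_per_second >= MIN_FPS
```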

These results were drawn from trial-and-error user testing with different users. Another notable observation was that perceived vibration intensity varied from user to user. A key takeaway for future development of this prototype is to include an intensity control so that vibration intensity can be customized.


Akshay Baweja

A creative technologist interested in exploring non-screen based human computer interactions