Creating a prosthetic hand that can move like a “hand” and operate like a “hand” is much more difficult than it would seem. It is not just a matter of moving and shaping the hand to fit the purpose, for example keeping the fingers sufficiently open to go around an egg and then closing them to pick it up. It is also about “knowing” that you cannot squeeze an egg unless you want to break it.
People with a prosthetic hand need to watch what the hand is doing and signal to it what to do. Even this is quite difficult, since you would need the sensation of pressure on the fingers to finely tune the pressure applied to the object.
Advances in this area have led to embedding pressure sensors in the artificial fingers and joints and relaying their signals, electrically, to the muscles of the arm so that the sensation reaches the brain. Even with these advanced prosthetics, the person wearing them needs to pay close attention, quite unlike the seamless actions we perform continuously with our hands.
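As an illustration of what such pressure feedback makes possible, a closed grip-control loop can be sketched in a few lines. The force values, units, and gain below are hypothetical choices for the sketch, not figures from any prosthetic described here:

```python
def grip_controller(target_force, sensed_force, gain=0.5):
    """Simple proportional controller: return a motor-command
    increment that moves the fingertip force toward the target.

    target_force and sensed_force are in newtons (hypothetical
    units for this sketch).
    """
    error = target_force - sensed_force
    return gain * error

# Simulate a few control steps closing on a fragile object (an egg):
# the grip tightens toward a light 1.0 N target without overshooting it.
force = 0.0   # force currently read from the fingertip pressure sensor
target = 1.0  # enough to lift the egg, not enough to crush it
for _ in range(20):
    force += grip_controller(target, force)

print(round(force, 3))  # converges toward the 1.0 N target
```

Without the pressure sensor there is no `sensed_force` signal, and the wearer has to close this loop visually, by watching the fingers and guessing.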
Now a study at Newcastle University, published as an open-access paper in the Journal of Neural Engineering, reports on the embedding of a video camera in a prosthetic hand whose images can be interpreted (using deep learning software) to understand what the hand is supposed to do and activate the required movements. The goal is to achieve seamless operation of the hand, like the one we are used to with our natural hands.
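The vision-to-grasp pipeline described above can be sketched as follows. The grasp classes, preset apertures and forces are illustrative assumptions, and `classify_image` is a toy size heuristic standing in for the deep-learning model, not a reproduction of the study's software:

```python
# Map each grasp class to a preset hand configuration
# (aperture in cm, grip force in N -- hypothetical values).
GRASP_PRESETS = {
    "pinch":  {"aperture_cm": 3.0, "force_n": 0.5},
    "tripod": {"aperture_cm": 5.0, "force_n": 1.0},
    "palmar": {"aperture_cm": 8.0, "force_n": 2.0},
}

def classify_image(image):
    """Stand-in for the deep-learning classifier: picks a grasp
    class from a toy heuristic on the apparent object width."""
    width = max(len(row) for row in image)
    if width < 4:
        return "pinch"
    if width < 8:
        return "tripod"
    return "palmar"

def plan_grasp(image):
    """Camera image -> grasp class -> hand preshape command."""
    grasp = classify_image(image)
    return grasp, GRASP_PRESETS[grasp]

# A medium-sized toy "object" seen by the camera preshapes the hand
# for a tripod grasp without the wearer issuing any explicit command.
grasp, preset = plan_grasp([[0] * 6] * 6)
print(grasp, preset["aperture_cm"])
```

The point of the design is that the wearer only triggers the overall action; choosing and preshaping the grasp happens automatically from the camera image, which is what makes the operation feel seamless.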
It is yet another example of a symbiosis between human and machine. Indeed, one may note that this innovation could also be used to improve the performance of an autonomous robot.