Communication is an integral part of any smart entity. Without communication an entity cannot be smart: it will not be aware of its context, hence it will not be able to adapt to it. Better communication fosters smarter behaviour; it is an integral component of smart machines (and of smarter human-machine cooperation and interaction).
There are many forms of communication, broadly split into direct and indirect: the former implies an explicit exchange of information among parties, the latter results from a party's awareness of the dynamics of its environment, leading to a change in its behaviour.
We humans have created very sophisticated communications and communication tools that in the last decades have formed an interconnected web spanning the planet, making it possible to communicate seamlessly across distance and, more and more, enabling communication beyond the human species: with objects, machines and artificial intelligence.
Depending on the needs, different communication tools (infrastructures, protocols, interfaces and devices) can be used. This holds both for humans and for machines.
Machine communication comes in greater variety than human communication. We are constrained by our senses, whose capabilities place both an upper and a lower bound on the volume of information (data) we can exchange and on its quality (form).
Sensors, in general, require high network energy efficiency (powering sensors is not always easy, particularly for ambient sensors based on energy scavenging); on the other hand, they do not need to transfer large bulks of data, and latency is often not a big deal (a delay of 500 ms is acceptable in most situations). Some applications may involve a high device density, requiring pervasive coverage and the management of thousands of IoT devices in a single area (think of IoT applications in an industrial environment, such as an assembly line).
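To see why energy efficiency dominates sensor design, here is a back-of-the-envelope estimate of the battery lifetime of a low-power sensor node that sleeps most of the time and wakes briefly to transmit. All the figures (battery capacity, sleep and transmit power, duty cycle) are hypothetical assumptions for illustration, not taken from any specific device.

```python
# Illustrative battery-lifetime estimate for a duty-cycled IoT sensor node.
# All numbers below are hypothetical assumptions, not datasheet values.

BATTERY_MWH = 1000 * 3.0        # assumed 1000 mAh cell at 3 V -> 3000 mWh
SLEEP_MW = 0.01                 # assumed sleep-mode power draw (mW)
TX_MW = 60.0                    # assumed radio-transmit power draw (mW)
TX_SECONDS_PER_HOUR = 0.5       # assumed: one short report per hour

def lifetime_hours(battery_mwh, sleep_mw, tx_mw, tx_s_per_hour):
    """Average power is the duty-cycle-weighted mix of sleep and transmit."""
    tx_fraction = tx_s_per_hour / 3600.0
    avg_mw = tx_mw * tx_fraction + sleep_mw * (1.0 - tx_fraction)
    return battery_mwh / avg_mw

hours = lifetime_hours(BATTERY_MWH, SLEEP_MW, TX_MW, TX_SECONDS_PER_HOUR)
print(f"Estimated lifetime: {hours / (24 * 365):.1f} years")
```

The point of the sketch is the shape of the trade-off: because the node transmits for only a fraction of a second per hour, sleep current, not radio power, dominates the energy budget, which is why a 500 ms latency is a price worth paying.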
In the case of human-machine communication we have so far been relying on human senses, creating machine interfaces that our senses can detect and interact with (screens, keyboards, voice control…). In the last decades research has been focusing on creating direct brain-computer interfaces, leveraging sensors that can detect the brain’s electrical activity and software able to interpret it.
However, in spite of some amazing demonstrations (people able to control a robotic arm with their thoughts), we are still far from real, meaningful communication. Most progress has been made in motor communication, like directly controlling a pointer or a robot’s movement with your brain. This is because it is relatively straightforward to pinpoint the areas of the brain controlling our limbs: by thinking “I need to move my hand to pick up that glass” you are actually sending an electrical pattern to the area controlling the movement of your arm and hand, and this pattern can be detected by signal-processing software and translated into commands to a robotic arm.
We are not yet at the point of detecting a thought like “I am thirsty”. One of the stumbling blocks is that, while the patterns people generate when they want to move a limb are significantly similar across individuals, the same is not true of the patterns generated by a general thought like “I am thirsty”. Even in the case of limb movement, the signal-processing software needs to be trained to detect the pattern specific to that particular person. As a matter of fact, the training goes both ways: the person also needs to be trained to “think” in such a way that the computer can understand.
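The signal-processing step described above can be sketched as a toy decoder. A well-known marker in motor-imagery BCIs is that the mu rhythm (roughly 8–12 Hz over the motor cortex) weakens when a movement is imagined, so a minimal decoder can compare band power against a per-user calibrated threshold. The signals below are synthetic and the pipeline is deliberately simplistic; real systems use multi-channel EEG, per-user calibration sessions and far richer models.

```python
# Toy motor-imagery decoder: band-power feature + threshold.
# Synthetic signals only; illustrative, not a real BCI pipeline.

import numpy as np

FS = 250                      # assumed sampling rate (Hz)
MU_BAND = (8.0, 12.0)         # mu rhythm, attenuated during imagined movement

def band_power(signal, fs, band):
    """Mean spectral power of `signal` within the frequency `band`."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def decode(signal, threshold):
    """Low mu power (desynchronization) -> 'move' command, else 'rest'."""
    return "move" if band_power(signal, FS, MU_BAND) < threshold else "rest"

# Two synthetic 2-second epochs: "rest" shows a strong 10 Hz mu rhythm,
# imagined movement suppresses it (event-related desynchronization).
t = np.arange(2 * FS) / FS
rng = np.random.default_rng(0)
rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
movement = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

# Per-user "training": pick a threshold between the two calibration epochs.
threshold = (band_power(rest, FS, MU_BAND) + band_power(movement, FS, MU_BAND)) / 2
print(decode(rest, threshold), decode(movement, threshold))   # rest move
```

Note that the threshold is computed from this person’s own calibration epochs, which is the simplest possible version of the two-way training mentioned above: the software fits the user, and the user learns to produce a signal the software can separate.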
A human mind upload is still a science-fiction topic. However, machines are getting better and better at picking up “meanings”, as I will be discussing later.
The reverse communication, from the computer to the brain, is today limited to micro-commands. It is based either on electrical spikes delivered to specific micro-areas of the brain, or on optogenetics (still at an experimental stage), which uses genetically modified neurones and light pulses delivered through optical fibres implanted in the brain. Both require invasive surgery and are obviously restricted to relieving specific pathologies (like epilepsy). All communication from a machine (computer, robot, cyberspace) today can only take place through the mediation of our senses. Implants are possible at the sensory level, like electrical stimulation of the aural nerve or a retinal chip implant (like the Argus II). Haptic feedback is also being used in interfaces. Virtual Reality and Augmented Reality technologies can link the cyberspace to us, but that too happens through our senses.
Indirect communication is valuable among life forms (a living being “gets” the ambient situation and changes its behaviour accordingly), and with computer vision and other types of ambient sensors it is becoming important in machine-to-machine communication as well as in human-machine communication.
Indirect communication is based on:
Awareness: taken here in a broad sense, including not just awareness of the situation but also awareness of how the situation might evolve and of what can steer that evolution in a specific direction. Both the awareness of the situation and that of its possible evolution (the latter even more than the former) can occur at different levels of intelligence. We are seeing some interesting evolution in the capability of machines to become aware of the emotional state of a person, and even of a crowd, by observing their faces and their behaviour. In this sense we can say that a machine can “read our mind”.
Adaptation: it follows, at different degrees of effectiveness, the understanding of the situation and aims at changing the behaviour of the entity. Clearly, adaptation is a continuous process that constantly re-evaluates the benefit it brings to the entity. As machines (robots) become more and more software-driven, this flexibility becomes possible, and a new technology area, evolutionary robotics, is looking into it.
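The core loop behind evolutionary robotics can be sketched in a few lines: a controller’s parameters are randomly mutated, and mutations that improve a fitness measure are kept (a (1+1) evolution strategy). In real work the fitness would come from trials on a simulated or physical robot; here a simple stand-in function with a known optimum is used purely for illustration.

```python
# Minimal (1+1) evolution-strategy sketch of the evolutionary-robotics loop.
# The fitness function is a hypothetical stand-in for a robot trial.

import random

def fitness(params):
    """Stand-in for evaluating a controller on a robot; peak at (1.0, -0.5)."""
    x, y = params
    return -((x - 1.0) ** 2 + (y + 0.5) ** 2)

def evolve(generations=500, sigma=0.1, seed=42):
    rng = random.Random(seed)
    parent = [0.0, 0.0]                     # initial controller parameters
    best = fitness(parent)
    for _ in range(generations):
        child = [p + rng.gauss(0, sigma) for p in parent]   # mutate
        f = fitness(child)
        if f >= best:                       # keep the child if no worse
            parent, best = child, f
    return parent, best

params, score = evolve()
print(params, score)
```

The loop makes the continuous nature of adaptation concrete: there is no final answer, only a parent controller that is repeatedly challenged and replaced whenever the environment (the fitness function) rewards a variant more.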
At higher levels of intelligence, machines are expected to come to understand how the environment and its components react to specific actions, and they may become able to influence the behaviour of those components in the direction most beneficial to them. Clearly this is a cat-and-mouse game: since each entity will in the long run acquire this capability, the interaction will become ever more complex.