
For quite some time the “chip” business was restricted to a handful of companies able to invest huge capital in design and manufacturing, and that huge investment in turn required a big market to sell to. This led to the design and production of “one-size-fits-all” chips, in other words chips that could be used in almost any application. The downside of this trend was twofold: on the one hand the available chips were usually under-utilised (a given application area was unlikely to need everything packed into the chip), and on the other hand they were not customised, and hence not efficient, for any specific task.
The world is changing, thanks in part to a revolution in chip design (the big foundries are still there). Companies can now design ASICs (Application Specific Integrated Circuits) that are much more efficient at the task they have to serve.
The announcement by Tesla, a car manufacturer, of the development of the D1 is a case in point. The chip packs 50 billion transistors and was designed to mimic some functionalities of the Dojo supercomputer that Tesla uses to train AI software (Tesla cars are computers on wheels with a brain made of AI). Tesla needed a more effective way to train its AI software, and the D1 was designed to focus specifically on AI training, delivering 362 TFLOPS of processing power at 16-bit floating point precision (watch the clip).
The D1 is not the usual “chip” you may imagine in terms of size; it looks more like a full board (as shown in the clip). It is made up of replicated functional units, connected in such a way as to maximise efficiency for AI training (where the software needs to explore several paths, compare them, and pick the ones that look best, again and again until it converges on a solution pattern).
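That iterate-evaluate-converge loop can be sketched in a deliberately toy form. The snippet below is purely illustrative (it is not Tesla's training code, and all names and the quadratic loss are invented for the example): plain gradient descent repeatedly tries a new candidate, measures how much better it is, and stops once successive updates stop improving, which is the kind of repetitive numerical workload AI-training chips are built to accelerate.

```python
def train(grad, w=0.0, lr=0.1, steps=200, tol=1e-8):
    """Repeatedly step against the gradient until updates converge.

    grad: gradient of the loss with respect to the parameter w.
    lr:   learning rate (size of each corrective step).
    tol:  stop once a step is smaller than this (convergence).
    """
    for _ in range(steps):
        step = lr * grad(w)   # how far, and in which direction, to move
        w -= step             # take the step toward lower loss
        if abs(step) < tol:   # updates have become negligible: converged
            break
    return w

# Toy loss L(w) = (w - 3)^2, so grad L = 2 * (w - 3); the minimum is at w = 3.
w_star = train(lambda w: 2 * (w - 3))
```

Real training does this over billions of parameters and examples at once, which is why throughput figures such as the D1's 362 TFLOPS matter.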