Using a purpose-built deep neural network, researchers have developed a new type of soft robot – one with optimised sensor placement, resulting in superior efficiency.

The majority of work handed off to robots consists of technically complex or accuracy-centric tasks, where machine precision outperforms conventional human operation in many ways. Despite this, the mechanically rigid and precise nature of robotic systems means that interaction with humans can be unsafe.

Soft robots are the likely solution to this problem.

Designed to replicate the properties of biological matter, soft robots are likely to see huge development over the coming decade as human-robot interaction increases and the potential uses of robots in our everyday lives are further realised.

With this in mind, a team of researchers from MIT has taken a further step into the world of algorithm-assisted engineering. A deep learning algorithm was designed to help engineers create more effective soft robot designs.

The research – due to be presented at the IEEE International Conference on Soft Robotics (RoboSoft) in April – shows how the algorithm has helped with the engineering of soft robots. According to the team, the system first learns about a given task. From that, it can then generate what it predicts will be the best design for a robot capable of solving that task.

The algorithm's primary benefit – at this stage, at least – is to aid in the placement of sensors on a robot's body to best suit its environment and task. This is incredibly important when it comes to navigating a particular problem efficiently – such as moving through small spaces – as even a slight misplacement of a sensor can have unintended drawbacks.

Unlike conventional robots – which are restricted to movement along particular axes by their rigid metal structures – soft robots are capable of advanced types of movement, some of which spontaneously suit the situation or task. This flexibility is also problematic for designers attempting to mount visual navigation hardware, such as cameras, on the robot: the view could be blocked should the robot deform itself in a particular manner.

One answer is to place sensors around the body. According to the researchers, their algorithm was fed data on both the robot's design and the task it would be completing. By looking for areas of strength and weakness, the algorithm could then determine how many sensors would be necessary, and where to place them.
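The team's own implementation isn't reproduced here, but one plausible way to frame this kind of search – a minimal sketch under assumptions, not the MIT method – is as a learned importance weighting over candidate sensor sites: train a small task predictor on simulated sensor readings while a sparsity penalty drives most per-site weights towards zero, then keep only the highest-weighted sites. All names and the toy data below are illustrative.

```python
import torch
import torch.nn as nn

# Toy setup (assumed, not from the paper): N_SITES candidate sensor
# locations on the robot's body. Each simulated rollout yields one
# reading per site plus a task label (e.g. an end-effector position).
N_SITES, N_ROLLOUTS, TASK_DIM = 64, 512, 3
readings = torch.randn(N_ROLLOUTS, N_SITES)   # placeholder sensor data
targets = torch.randn(N_ROLLOUTS, TASK_DIM)   # placeholder task labels

class SensorPlacementNet(nn.Module):
    """Task predictor gated by learnable per-site importance weights."""
    def __init__(self, n_sites: int, task_dim: int):
        super().__init__()
        # One weight per candidate site; sparsity in these weights is
        # what ultimately selects the sensor layout.
        self.site_logits = nn.Parameter(torch.zeros(n_sites))
        self.head = nn.Sequential(
            nn.Linear(n_sites, 128), nn.ReLU(), nn.Linear(128, task_dim)
        )

    def forward(self, x):
        gates = torch.sigmoid(self.site_logits)  # each gate in (0, 1)
        return self.head(x * gates), gates

model = SensorPlacementNet(N_SITES, TASK_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sparsity_weight = 1e-2  # trades task accuracy against sensor count

for step in range(2000):
    pred, gates = model(readings)
    # Task loss keeps the layout informative; the L1-style penalty on
    # the gates prunes sites that contribute little to the prediction.
    loss = nn.functional.mse_loss(pred, targets) + sparsity_weight * gates.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Keep the k highest-weighted sites as the suggested placement.
k = 8
placement = torch.topk(torch.sigmoid(model.site_logits), k).indices.tolist()
print(sorted(placement))
```

The sparsity weight sets the trade-off: raising it yields a cheaper layout with fewer sensors, at the cost of task accuracy.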

After this, the team ran the algorithm against human roboticists to see how it would perform. The roboticists were tasked with picking where sensors should be placed, and their choices were run against the algorithm's, which picked where it thought the sensors would be most effective. After running simulations on both, the results pointed to the algorithmic sensor placement being far superior to the human one, as the algorithm was able to pick up on subtleties in how the robot would operate.
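A minimal way to stage that kind of head-to-head comparison – again only a sketch, reusing the toy `readings`, `targets`, and `placement` from the snippet above – is to fit the same simple predictor on each candidate layout and compare held-out task error:

```python
import torch

def task_error(sites, readings, targets, train_frac=0.8):
    """Fit a predictor using only the chosen sites and report held-out
    MSE; a lower score means the layout carries more task information."""
    x = readings[:, sites]
    n_train = int(train_frac * len(x))
    xtr, ytr = x[:n_train], targets[:n_train]
    xte, yte = x[n_train:], targets[n_train:]
    # Closed-form ridge regression keeps the comparison deterministic.
    eye = 1e-3 * torch.eye(xtr.shape[1])
    w = torch.linalg.solve(xtr.T @ xtr + eye, xtr.T @ ytr)
    return torch.mean((xte @ w - yte) ** 2).item()

human_sites = [0, 8, 16, 24, 32, 40, 48, 56]  # a hand-picked layout
print("human  :", task_error(human_sites, readings, targets))
print("learned:", task_error(placement, readings, targets))
```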

Although this technology is new, it offers a glimpse into how AI could become more independent and self-sufficient in the future.

Source: MIT
