OpenAI-funded 1X humanoid robots make impressive progress in autonomous operation (video)

“There is no computer graphics, no video speed-up, and no scripted playback in the demo. Everything is controlled by a neural network. The footage plays at 1X speed.”

This is how the Norwegian artificial intelligence and robotics company describes its androids’ work in a new video. OpenAI previously backed 1X with a $25 million investment as part of a Series A funding round; the subsequent $100 million Series B showed how much weight the ChatGPT developer’s attention carries today.

Compared to the versions Tesla or Agility are working on, the 1X robots look somewhat “unarmed”: the humanoid, named Eve, does not yet have dexterous, human-like hands (it has something like claws instead) or legs (the robot simply rolls on a pair of drive wheels, balancing on a third small wheel at the rear).

In fact, 1X already has a bipedal version – Neo, with apparently well-designed hands. The company likely believes that initial general-purpose work does not require hands with graceful “pianist” fingers, and that rolling on wheels across the concrete floors of factory warehouses is the better option.

At the same time, walking on two legs or delicate manipulation of objects is not the main obstacle to deploying working androids. The bigger challenge is learning tasks quickly and then performing them autonomously (Tesla’s Optimus, for example, raised a lot of questions on this front when it folded a shirt in a recent demo).

In this context, 1X’s advantage can be seen in this video:

The tasks shown are not extremely difficult, but a whole group of robots handles them quite successfully and autonomously: grabbing objects or picking them up from the floor, putting them in boxes, and so on. They also open doors on their own, approach charging stations, and plug themselves in.

Essentially, the company trained 30 Eve robots on a number of separate tasks using video-based imitation learning and teleoperation. The resulting behavior model was then fine-tuned for the capabilities of a particular environment – warehouse tasks, general door manipulation, etc. – and finally trained on the specific tasks the robots were meant to perform.
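1X has not published its training code, but a rough sketch of the staged approach described above might look like the following. Everything here – the model, the data shapes, and the hyperparameters – is hypothetical and purely illustrative (Python with PyTorch):

```python
# Hypothetical sketch of the staged pipeline the article describes:
# 1) pretrain a base behavior model on demonstrations from many tasks,
# 2) fine-tune it for a specific environment (e.g. a warehouse),
# 3) fine-tune again on the exact task the robot will perform.
# All names, shapes, and data below are invented for illustration.

import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 12  # hypothetical observation/action sizes


def make_policy() -> nn.Module:
    # A small behavior-cloning policy: observations in, actions out.
    return nn.Sequential(
        nn.Linear(OBS_DIM, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, ACT_DIM),
    )


def train(policy: nn.Module, demos, epochs: int, lr: float) -> None:
    # Behavior cloning: regress the demonstrated action for each observation.
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for obs, act in demos:
            opt.zero_grad()
            loss_fn(policy(obs), act).backward()
            opt.step()


def fake_demos(n_batches: int):
    # Stand-in for teleoperation / video demonstration data.
    return [(torch.randn(32, OBS_DIM), torch.randn(32, ACT_DIM))
            for _ in range(n_batches)]


policy = make_policy()

# Stage 1: broad base model trained across many mixed tasks.
train(policy, fake_demos(100), epochs=1, lr=1e-3)

# Stage 2: environment-specific fine-tune (warehouse, doors, etc.),
# at a lower learning rate so the base skills are preserved.
train(policy, fake_demos(20), epochs=3, lr=1e-4)

# Stage 3: final fine-tune on the specific job the robot will do.
train(policy, fake_demos(5), epochs=5, lr=1e-5)
```

The design point this sketch tries to capture is the progression from a broad, mixed dataset toward ever-narrower fine-tunes at decreasing learning rates – a common pattern in imitation learning, so that task-specific training does not erase the general skills.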

Source: New Atlas
