The tech that recognizes friends in photos now revolutionizes driving

The Consumer Electronics Show (CES)--the biggest annual technology showcase on the planet--kicked off in Las Vegas a few hours ago. From next-gen displays to wearable computing to every type of consumer product gaining some form of compute power and Web connectivity, this week will see some of the most bleeding-edge technologies being unveiled.

Kicking off the event was Jen-Hsun Huang, President and CEO of Nvidia--the company behind all those pixel-pushing gaming systems and graphically advanced mobile processors. Going by what was unveiled, this year should witness an onslaught of connected systems, not just in traditional computing devices but also in cars and everyday consumer objects.

At the centre of Huang’s keynote was Nvidia’s new Tegra X1: a 64-bit mobile processor that the company bills as the first mobile chip to deliver 1 teraflop of computing power. The implications of that horsepower became evident during his demonstrations, which showcased the future of in-car control, navigation and entertainment systems built on the new chip. It is a processing behemoth, pairing a 256-core GPU (based on Nvidia’s Maxwell architecture) with an 8-core CPU--a combination the company hopes will deliver enough visual computing muscle to revolutionise cars and driving in the near term.
 
This new processor forms the foundation of two newly announced in-car platforms: Drive CX, a digital cockpit computer, and Drive PX, an auto-pilot system that pairs two Tegra X1 chips to deliver 2.3 teraflops of mobile supercomputing power, process 12 real-time camera inputs, and run a neural-network-based vision system that trains itself to recognise visual cues while driving. Nvidia ostensibly already has its graphics chips in 6.2 million cars, including models from Audi, BMW, Porsche and Tesla. The new platforms will enable upcoming cars to recognize everything from road signs to pedestrians to other cars--even the specific type of vehicle.
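For readers curious how a dozen camera feeds might funnel into one recognition system, here is a rough, purely illustrative Python sketch. None of these names or functions are part of Nvidia's actual Drive software; the "classifier" is a stand-in for the GPU-accelerated neural network.

```python
# Illustrative sketch only (not Nvidia's API): frames from a dozen cameras
# are fed through a single classifier that labels what each one sees.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Frame:
    camera_id: int
    pixels: bytes  # stand-in for a decoded camera image

def classify_frames(frames: List[Frame],
                    classifier: Callable[[bytes], str]) -> Dict[int, str]:
    """Run the classifier over every camera frame and collect the labels."""
    return {f.camera_id: classifier(f.pixels) for f in frames}

if __name__ == "__main__":
    # Toy classifier standing in for the real neural network.
    toy_classifier = lambda pixels: "pedestrian" if len(pixels) % 2 else "road sign"
    frames = [Frame(camera_id=i, pixels=bytes(i)) for i in range(12)]
    print(classify_frames(frames, toy_classifier))
```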
 
Apart from the amped-up visuals of the new digital cockpit--it can render a range of ‘textures’ such as aluminium, copper, even bamboo for the instrument-cluster dials--the system also offers a type of visual processing called surround vision, which takes live feeds from several fisheye cameras around the car and stitches the imagery together to make sense of the vehicle’s immediate surroundings for autonomous driving decisions.
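The stitching idea itself is straightforward, even if the real implementation is not. The hedged sketch below shows the gist under a big simplifying assumption: each camera view has already been undistorted, so every camera simply contributes one quadrant of a bird’s-eye composite.

```python
# Hedged sketch of the "surround vision" idea: combine frames from cameras
# mounted around the car into one top-down composite. A real system would
# undistort and warp each fisheye image with calibrated lens models; here
# each (already-undistorted) view just fills a quadrant.
import numpy as np

def stitch_top_view(front, rear, left, right):
    """Place four camera views into a 2x2 bird's-eye composite."""
    top = np.hstack([front, right])
    bottom = np.hstack([left, rear])
    return np.vstack([top, bottom])

if __name__ == "__main__":
    cam = lambda shade: np.full((120, 160), shade, dtype=np.uint8)  # fake greyscale frames
    composite = stitch_top_view(cam(50), cam(100), cam(150), cam(200))
    print(composite.shape)  # (240, 320) mosaic of the car's surroundings
```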
 
The Drive PX system utilises deep neural networks for image processing, leveraging the power of Nvidia’s GPUs to recognise details and features in images. In real time, it can pick out multiple features in a driving environment, including partially obscured pedestrians and objects, and can even detect special traffic situations such as the flashing lights of ambulances or school buses. The learning system is able to ‘teach itself’ and refine its recognition accuracy on its own: it learns continuously, so when it misrecognises something, it communicates with a connected image repository in the cloud, adjustments are made, and every recognition after that improves. This ability lets the visual system evolve far faster than before--what used to take several months of machine training can now happen in days or weeks.
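That cloud feedback loop is easier to picture as a few lines of code. The following is an assumption-laden Python sketch of the workflow described above, not Nvidia’s implementation: a misrecognised image is submitted to a cloud repository, a new model version comes back, and subsequent recognitions use it.

```python
# Illustrative sketch of the continuous-learning loop (hypothetical names,
# not Nvidia's software): hard examples go to the cloud, an updated model
# version is pulled back down.
from typing import List, Tuple

class CloudRepository:
    """Stand-in for the connected image repository in the cloud."""
    def __init__(self):
        self.hard_examples: List[Tuple[str, str]] = []

    def submit(self, image_id: str, correct_label: str) -> None:
        self.hard_examples.append((image_id, correct_label))

    def retrain(self) -> str:
        # In reality this would be lengthy GPU training in a data centre;
        # here it just returns a new model version string.
        return f"model-v{len(self.hard_examples)}"

def drive_loop(recognitions, cloud: CloudRepository, model_version: str) -> str:
    for image_id, predicted, actual in recognitions:
        if predicted != actual:              # an incorrect recognition
            cloud.submit(image_id, actual)   # upload the hard example
            model_version = cloud.retrain()  # fetch the improved model
    return model_version

if __name__ == "__main__":
    cloud = CloudRepository()
    seen = [("img1", "car", "car"), ("img2", "truck", "ambulance")]
    print(drive_loop(seen, cloud, "model-v0"))  # -> model-v1
```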

Also part of this other-worldly car-of-the-future demonstration was the vehicle’s ability to enter a parking lot, drive around, find an empty spot and park itself. With a paired smartphone, the car can then find its way back to you--like an automated valet.
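Stripped of the vision system doing the hard work, the valet behaviour boils down to a simple search-then-summon flow. The toy Python sketch below is purely illustrative; spot detection in the real car would come from the neural-network vision described earlier.

```python
# Toy sketch of the auto-valet flow (illustrative only).
def find_and_park(spots):
    """Drive past spots until an empty one is found, then park."""
    for index, occupied in enumerate(spots):
        if not occupied:
            return f"parked in spot {index}"
    return "no free spot, keep circling"

def summon(current_state):
    """A paired smartphone asks the car to return to the driver."""
    return "returning to driver" if current_state.startswith("parked") else current_state

if __name__ == "__main__":
    state = find_and_park([True, True, False, True])
    print(state)           # parked in spot 2
    print(summon(state))   # returning to driver
```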

Stay tuned as we bring you the best of CES 2015 right here, and find all of the latest updates on our Twitter feed on #DNATech.
