Object recognition: how it’s used and where it’s going

By Ben Allen September 27, 2017

With Apple’s introduction of Face ID, speculation has abounded about where the technology might lead. The biggest potential, it seems, lies in its ability to detect objects that are not actually faces.

Object recognition is on the cards in many forms and is making progress. But given the step forward Apple has just taken in facial recognition, turning it from something that never quite worked into something remarkably adept, other recognition technologies are beginning to look a little dated. Consumers would be forgiven for asking, “If you can detect an individual face, why can’t you detect an object?”

It’s not an unfair question. The difference is that Face ID only needs to know one face, and everything else can safely be treated as “not the person who should unlock the phone.” General object recognition, by contrast, has to tell apart a vast number of different objects, and the computer needs training experience with all of them.
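To make that distinction concrete, here is a minimal Python sketch, with random vectors standing in for the embeddings a real vision model would produce: face verification compares one probe against a single enrolled template and applies a threshold, while object recognition has to pick the best match among many categories.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_face(probe, enrolled, threshold=0.8):
    """Face ID style check: is this the one enrolled face, yes or no?"""
    return cosine_similarity(probe, enrolled) >= threshold

def classify_object(probe, class_embeddings):
    """Object recognition style check: which of many known categories fits best?"""
    scores = {label: cosine_similarity(probe, emb)
              for label, emb in class_embeddings.items()}
    return max(scores, key=scores.get)

# Toy usage: random vectors stand in for real model output.
rng = np.random.default_rng(0)
owner = rng.normal(size=128)
print(verify_face(owner + 0.01 * rng.normal(size=128), owner))   # close to the template -> True
print(classify_object(rng.normal(size=128),
                      {"mug": rng.normal(size=128),
                       "chair": rng.normal(size=128),
                       "laptop": rng.normal(size=128)}))
```

The verification task only ever needs one reference vector per phone; the classification task needs a reference (and training data) for every category it is expected to name, which is why the two problems scale so differently.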

This might come across as strange, or at least unexpected, but retail is becoming the first area to really push forward with object recognition. Earlier this year Yoobic raised $5.3 million in Series A funding for technology that lets retail management check whether stores are adhering to branding guidelines without having to visit them. Managers can then use the resulting data to make adjustments and test changes.

Microsoft has explored something similar and discussed it on its developer blog. Though its tools are more general-purpose, the team was able to train a model to recognize whether a store display was laid out correctly.
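Neither Yoobic nor Microsoft has published its exact pipeline, but a system like this is commonly built by fine-tuning a pretrained image classifier on photos labelled “compliant” or “non-compliant.” The sketch below uses PyTorch and torchvision; the folder path and class names are hypothetical placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: shelf_photos/train/compliant/*.jpg and shelf_photos/train/non_compliant/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
train_data = datasets.ImageFolder("shelf_photos/train", transform=transform)
loader = DataLoader(train_data, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained backbone; newer torchvision prefers the weights= argument.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False            # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: compliant vs. non-compliant

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

With a few hundred labelled shelf photos per class, transfer learning of this kind can produce a usable compliance check without training a network from scratch.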

A company called BrainChip even manufactures circuit boards built specifically for object recognition. They are apparently tailored to law enforcement and intelligence agencies, which invites some uncomfortable thoughts about the uses the technology could be put to.

The LookUp app has managed to make decent progress using nothing more than the camera. Acting like an online dictionary, it lets you point your phone at things in a “what’s that?” kind of way, then trawls its databases and delivers an answer. This feels like it could be the first genuinely useful consumer application of object recognition. It would certainly help in building a database of objects that computers can recognize, and the LookUp team might be sitting on a very saleable dataset before long.
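LookUp’s internals aren’t public, but the basic “point and ask” flow it describes is simple to outline: classify the camera frame, then look the label up in a dictionary-style database. The sketch below is illustrative only; the classifier is a placeholder and the entries are invented.

```python
# A tiny stand-in for the app's dictionary database.
DEFINITIONS = {
    "mug": "A sturdy cup with a handle, used for hot drinks.",
    "laptop": "A portable personal computer with a built-in screen and keyboard.",
}

def classify_image(image_bytes: bytes) -> str:
    # Placeholder: a real app would run an on-device or cloud vision model here.
    return "mug"

def whats_that(image_bytes: bytes) -> str:
    """Point-the-camera-at-it flow: classify, then look up the label."""
    label = classify_image(image_bytes)
    return DEFINITIONS.get(label, f"No entry found for '{label}' yet.")

print(whats_that(b"fake camera frame"))
```

The interesting asset isn’t the lookup code, it’s the labelled images and definitions that accumulate behind it, which is why the database itself could become valuable.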

The uses of object recognition for companies must be huge: any time somebody is sent somewhere just to check up on something, object recognition software could potentially do the job instead. As far as consumers go, it’s unclear whether the hardware behind something like Face ID will be open to app developers for other purposes, and even if it is, that hardware sits only on the front of the phone. Still, if a recognition engine could be trained to use it, the results could be very powerful. Otherwise we may have to wait until we’re all wandering around in smart designer glasses before object recognition takes its defining step.