Happily disgusted or sadly angered: Can Microsoft guess what you feel?
Microsoft has begun a public beta for a new application programming interface (API) that can identify emotions as expressed on a person’s face in a still image.
Chris Bishop, laboratory director at Microsoft Research Cambridge, said that developers and businesses can get their hands on tools based on Microsoft's machine learning work to bring AI capabilities such as speech recognition, vision and language understanding into their apps.
It’s the sort of thing Ryan Galgon, a senior program manager at Microsoft, sees working as a lightweight form of authentication: not as secure as a password or fingerprint, but useful as one signal that someone is who they say they are.
Microsoft released the first set of Project Oxford tools last spring, drawing interest from well-known Fortune 500 companies and startups, according to the software company. Because the tool can only handle static images at the moment, emotions such as happiness can be detected with a higher level of confidence than others, such as contempt or disgust.
Microsoft released MyMoustache earlier this week in honor of Movember; the app uses the technology to identify and rate facial hair.
Microsoft announced new services in its Project Oxford suite of developer tools based on machine learning and artificial intelligence. The suite will also have face-tracking tools that log where people are in each frame of a video, so users can analyze what’s going on.
Developers can now take advantage of an emotion detection service that looks at a photo and lists an array of emotions that it detects on the subjects’ faces.
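For each face it finds, the service reports a bounding rectangle and a set of per-emotion confidence scores. A minimal sketch of consuming such a result, assuming the response shape publicly documented for Project Oxford's Emotion API (the sample values and the `dominant_emotion` helper here are illustrative, not part of the service):

```python
import json

# Illustrative response: one entry per detected face, each with a bounding
# box and eight emotion confidence scores (values here are made up).
sample_response = json.loads("""
[
  {
    "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 64},
    "scores": {
      "anger": 0.003, "contempt": 0.001, "disgust": 0.002,
      "fear": 0.001, "happiness": 0.95, "neutral": 0.04,
      "sadness": 0.002, "surprise": 0.001
    }
  }
]
""")

def dominant_emotion(face):
    """Return the (emotion, confidence) pair with the highest score."""
    return max(face["scores"].items(), key=lambda kv: kv[1])

for face in sample_response:
    emotion, score = dominant_emotion(face)
    print(f"{emotion}: {score:.2f}")  # happiness: 0.95
```

Since the scores are confidences rather than a single label, a developer can threshold them or rank them, which matters given that the service detects some emotions (like happiness) more reliably than others.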
It will also update its face APIs and add a custom recognition tool for noisy public places, such as a shop floor. As with the age estimator, the company has made a public demo available for anyone who wants to try it, though it is a little more involved than the age estimator, providing a handful of data points on the analyzed photo.
Speaker recognition can detect the particulars of an individual voice, and could be used as a security measure, since individual voices are unique. Later this year the company will also roll out a new video feature that lets users trim smartphone videos down to only the portions where people are moving, as well as a spell-checking API that keeps its dictionary updated with new slang words and brand names. This tool will also be available as a beta by the end of the year.