In a slight change of pace from our recent explorations of Internet of Things platforms, we started digging into Amazon’s recently announced Rekognition service. Rekognition is a powerful image processing service capable of detecting faces, people, and objects, as well as subtler photo elements such as scene and sentiment.
We first tested Rekognition’s image detection capabilities with photos from a cell phone camera, then decided to automate the process with photos from a Raspberry Pi camera module. Reusing a motion sensor from one of our previous projects, we began work on a motion-activated identity verification system. We set up the Raspberry Pi and camera on a desk in one of our office rooms, pointed the camera at the door, and got to work.
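As a rough sketch of what the detection step could look like, here is a minimal example built around boto3’s Rekognition client. `detect_faces` is a real Rekognition API operation, but the helper function, region, and file path below are our own illustrative choices, not code from the project.

```python
# Sketch: preparing and sending a DetectFaces request to Amazon Rekognition.
# Assumes boto3 is installed and AWS credentials are configured.

def build_detect_faces_request(image_bytes):
    """Build the parameter dict for rekognition.detect_faces().

    Passing Attributes=["ALL"] asks Rekognition for the full set of
    facial attributes (emotions, estimated age range, etc.), which is
    where the "sentiment" detection mentioned above comes from.
    """
    return {
        "Image": {"Bytes": image_bytes},
        "Attributes": ["ALL"],
    }

# Usage (requires network access and AWS credentials, so commented out here):
# import boto3
# client = boto3.client("rekognition", region_name="us-east-1")
# with open("door.jpg", "rb") as f:  # "door.jpg" is a placeholder path
#     response = client.detect_faces(**build_detect_faces_request(f.read()))
# for face in response["FaceDetails"]:
#     print(face["Emotions"][0]["Type"], face["Confidence"])
```

For a true identity verification system, `compare_faces` (matching a captured photo against a reference image) would be the follow-on call, but the shape of the request is the same idea.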
As with any augmented reality device, the HoloLens can overlay content on the physical world. That content can take a number of forms, from simple 2D gauges to complex 3D models. Whatever the content type, it has to be positioned somewhere in the physical world, and this is where content locking comes in. We have put together an example of each content-locking option available on the HoloLens, along with a few notes on each.
We’ve begun to explore potential use cases for virtual reality, and decided to start by creating a virtual office tour, which we believe can be an effective recruitment tool for prospective employees.
This is part of a running commentary for a project that our Labs team is working on. We are interested in non-visual user interfaces: broadly, interfaces that engage people’s other modes of interacting with the world, such as speaking, hearing, feeling, and gesturing. So we decided to start playing around with the Amazon Echo.
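To give a flavor of what a voice interface for the Echo involves, here is a hedged sketch of an Alexa skill backend. The response envelope follows the published Alexa Skills Kit JSON interface; the handler and intent names (`lambda_handler`, `OfficeTourIntent`) are hypothetical stand-ins, not part of the project described above.

```python
# Sketch: a minimal Alexa skill backend in the style of an AWS Lambda handler.
# The response structure is the Alexa Skills Kit JSON format; intent names
# are illustrative placeholders.

def build_speech_response(text, end_session=True):
    """Wrap plain text in the Alexa response envelope so the Echo speaks it."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Dispatch an incoming Alexa request on its intent name."""
    intent = event["request"].get("intent", {}).get("name")
    if intent == "OfficeTourIntent":  # hypothetical intent for a Labs skill
        return build_speech_response("Welcome to the Labs office tour.")
    return build_speech_response("Sorry, I didn't catch that.")
```

The appeal of this model for non-visual interfaces is that the entire interaction is speech in, speech out: the skill never renders anything, it only decides what the Echo should say next.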