How to use the iPhone TrueDepth camera to distinguish between a real human and a photograph? - ios

How can I use the depth data captured using iPhone true-depth Camera to distinguish between a real human 3D face and a photograph of the same?
The requirement is to use it for authentication.
What I did: Created a sample app to get a continuous stream of AVDepthData of what is in front of the camera.

You can use AVCaptureMetadataOutput together with AVCaptureDepthDataOutput to detect a face and then take the required action.
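A minimal sketch of that idea: one session with the TrueDepth camera as input, a metadata output for face detection, and a depth output for the depth stream. The class and delegate wiring here are illustrative, not a complete liveness check.

```swift
import AVFoundation

// Sketch: report detected faces (hardware-assisted metadata) alongside
// streaming depth from the TrueDepth camera. Error handling omitted.
final class FaceDepthCapture: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    func configure() throws {
        guard let camera = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video, position: .front) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let metadataOutput = AVCaptureMetadataOutput()
        session.addOutput(metadataOutput)
        metadataOutput.setMetadataObjectsDelegate(self, queue: .main)
        metadataOutput.metadataObjectTypes = [.face]   // fast, hardware-based face detection

        let depthOutput = AVCaptureDepthDataOutput()
        session.addOutput(depthOutput)                 // attach your depth delegate as needed
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        for face in metadataObjects.compactMap({ $0 as? AVMetadataFaceObject }) {
            // face.bounds tells you which region of the depth map to sample:
            // a flat photograph should show little depth variation across the face.
            print("Face at \(face.bounds)")
        }
    }
}
```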

Related

iOS: Can I get AVMetadataFaceObject for video from gallery?

For real-time camera processing I use AVCaptureMetadataOutput for getting information about faces.
Can I get AVCaptureMetadataOutput (in particularly, AVMetadataFaceObject) for video from gallery?
Thank you!
AVCaptureMetadataOutput is, as the "Capture" in the name might suggest, only for use in media capture. It's a software interface for hardware-based detectors — e.g. the image signal processor in your device has fast (but not detailed or precise) face detection built-in so that the camera can do face-based autofocus.
If you want to process an already recorded video to detect faces, there are other APIs for that, leveraging software-based detection. The latest and best such API is the Vision framework with its VNDetectFaceRectanglesRequest and VNDetectFaceLandmarksRequest classes.
Here's an Apple sample code project showing the Vision face detection API in action for live video from the camera. (It also shows Vision's object tracking API to enable following the same face as it moves from frame to frame after detection.) The key difference to make it work with prerecorded video would be to replace use of the AVCapture system with an AVAssetReader/AVAssetReaderOutput to sequentially acquire pixel buffers from the video file and feed them to Vision.
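The AVAssetReader approach described above can be sketched as follows. The function name and the completion behavior are illustrative; the video URL is a placeholder.

```swift
import AVFoundation
import Vision

// Sketch: run Vision face detection over a prerecorded video by reading
// pixel buffers with AVAssetReader instead of a capture session.
func detectFaces(in videoURL: URL) throws {
    let asset = AVAsset(url: videoURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings:
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(output)
    reader.startReading()

    // Sequentially pull frames from the file and feed them to Vision.
    while let sampleBuffer = output.copyNextSampleBuffer(),
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let request = VNDetectFaceRectanglesRequest { request, _ in
            let faces = request.results as? [VNFaceObservation] ?? []
            print("Found \(faces.count) face(s)")
        }
        try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            .perform([request])
    }
}
```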

How to use the iPhone X faceID data

Is it possible to use iPhone X Face ID data to create a 3D model of the user's face? If yes, can you please tell me where I should look? I was not really able to find anything related to this. I found a WWDC video about TrueDepth and ARKit, but I am not sure it would help.
Edit:
I just watched a WWDC video, and it says that ARKit provides detailed 3D face geometry. Do you think it's precise enough to create a 3D representation of a person's face? Maybe combined with an image? Any ideas?
Yes and no.
Yes, there are APIs for getting depth maps captured with the TrueDepth camera, for face tracking and modeling, and for using Face ID to authenticate in your own app:
You implement Face ID support using the LocalAuthentication framework. It's the same API you use for Touch ID support on other devices — you don't get any access to the internals of how the authentication works or the biometric data involved, just a simple yes-or-no answer about whether the user passed authentication.
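The LocalAuthentication flow is roughly this. The function name is illustrative; note that the same call transparently uses Touch ID on devices without Face ID.

```swift
import LocalAuthentication

// Sketch: biometric authentication via LocalAuthentication. You only get
// a pass/fail answer — never any access to the biometric data itself.
func authenticateUser(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Check availability first (biometrics may be unavailable or not enrolled).
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false)
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Authenticate to continue") { success, _ in
        completion(success)   // a simple yes-or-no result
    }
}
```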
For simple depth map capture with photos and video, see AVFoundation > Cameras and Media Capture, or the WWDC17 session on such — everything about capturing depth with the iPhone 7 Plus dual back camera also applies to the iPhone X and 8 Plus dual back camera, and to the front TrueDepth camera on iPhone X.
For face tracking and modeling, see ARKit, specifically ARFaceTrackingConfiguration and related API. There's sample code showing the various basic things you can do here, as well as the Face Tracking with ARKit video you found.
Yes, indeed, you can create a 3D representation of a user's face with ARKit. The wireframe you see in that video is exactly that, and is provided by ARKit. With ARKit's SceneKit integration you can easily display that model, add textures to it, add other 3D content anchored to it, etc. ARKit also provides another form of face modeling called blend shapes — this is the more abstract representation of facial parameters, tracking 50 or so muscle movements, that gets used for driving avatar characters like Animoji.
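Displaying that face mesh with ARKit's SceneKit integration can be sketched as below. The view controller and outlet are illustrative, and updating the geometry as the face moves is omitted for brevity.

```swift
import ARKit
import SceneKit

// Sketch: show ARKit's face mesh as a wireframe in an ARSCNView.
class FaceMeshViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!   // assumed to exist in the storyboard

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self
        sceneView.session.run(ARFaceTrackingConfiguration())
    }

    // Provide a node for each detected face anchor.
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor,
              let device = sceneView.device,
              let geometry = ARSCNFaceGeometry(device: device) else { return nil }
        geometry.firstMaterial?.fillMode = .lines   // render the mesh as a wireframe
        return SCNNode(geometry: geometry)
    }
}
```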
All of this works with a generalized face model, so there's not really anything in there about identifying a specific user's face (and you're forbidden from trying to use it that way in the App Store — see §3.3.52 "If your application accesses face data..." in the developer program license agreement).
No, Apple provides no access to the data or analysis used for enrolling or authenticating Face ID. Gaze tracking / attention detection and whatever parts of Apple's face modeling have to do with identifying a unique user's face aren't parts of the SDK Apple provides.

Is there front facing camera support with ARKit?

How can we access Front Facing Camera Images with ARCamera or ARSCNView and is it possible to record ARSCNView just like Camera Recording?
Regarding the front-facing camera: in short, no.
ARKit offers two basic kinds of AR experience:
World Tracking (ARWorldTrackingConfiguration), using the back-facing camera, where a user looks "through" the device at an augmented view of the world around them. (There's also AROrientationTrackingConfiguration, a reduced-quality version of world tracking that likewise uses only the back-facing camera.)
Face Tracking (ARFaceTrackingConfiguration), supported only with the front-facing TrueDepth camera on iPhone X, where the user sees an augmented view of themselves in the front-facing camera view. (As @TawaNicolas notes, Apple has sample code here... which, until iPhone X actually becomes available, you can read but not run.)
In addition to the hardware requirement, face tracking and world tracking are mostly orthogonal feature sets. So even though there's a way to use the front-facing camera (on iPhone X only), it doesn't give you an experience equivalent to what you get with the back-facing camera in ARKit.
Regarding video recording in the AR experience: you can use ReplayKit in an ARKit app same as in any other app.
If you want to record just the camera feed, there isn't a high level API for that, but in theory you might have some success feeding the pixel buffers you get in each ARFrame to AVAssetWriter.
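One way that ARFrame-to-AVAssetWriter idea might look in practice. This is an untested sketch: the class name and output URL are placeholders, and error handling and writer teardown are omitted.

```swift
import ARKit
import AVFoundation

// Sketch: record the raw camera feed from an ARKit session by appending
// each ARFrame's captured pixel buffer to an AVAssetWriter.
final class ARFrameRecorder {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var started = false

    init(outputURL: URL, width: Int, height: Int) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
        writer.add(input)
    }

    // Call this from ARSessionDelegate's session(_:didUpdate:) for each frame.
    func append(_ frame: ARFrame) {
        let time = CMTime(seconds: frame.timestamp, preferredTimescale: 600)
        if !started {
            writer.startWriting()
            writer.startSession(atSourceTime: time)
            started = true
        }
        if input.isReadyForMoreMediaData {
            adaptor.append(frame.capturedImage, withPresentationTime: time)
        }
    }
}
```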
As far as I know, ARKit with Front Facing Camera is only supported for iPhone X.
Here's Apple's sample code regarding this topic.
If you want to access the UIKit or AVFoundation cameras, you still can, but separately from ARSCNView. E.g., I'm loading UIKit's UIImagePickerController from an IBAction and it is a little awkward to do so, but it works for my purposes (loading/creating image and video assets).

Raw Depth map SDK for IPhone X

I did some search and found various examples, documentation on iPhone X Face ID and how it can be used for various stuff like authentication, animated emojis.
Wanted to check if there is an API/SDK to get the raw depth map from iPhone X sensor to the app?
From my understanding the depth calculation is done based on the projected pattern. This can be used to get depth profile of any object in front of the sensor. (Might be dependent on the texture of the object.)
You'll need at least the iOS 11.1 SDK in Xcode 9.1 (both in beta as of this writing). With that, builtInTrueDepthCamera becomes one of the camera types you use to select a capture device:
let device = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front)
Then you can go on to set up an AVCaptureSession with the TrueDepth camera device, and can use that capture session to capture depth information much like you can with the back dual camera on iPhone 7 Plus and 8 Plus:
Turn on depth capture for photos with AVCapturePhotoOutput.isDepthDataDeliveryEnabled, then snap a picture with AVCapturePhotoSettings.isDepthDataDeliveryEnabled. You can read the depthData from the AVCapturePhoto object you receive after the capture, or turn on embedsDepthDataInPhoto if you just want to fire and forget (and read the data from the captured image file later).
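The photo path above can be sketched like this. It assumes `photoOutput` is an AVCapturePhotoOutput already attached to a running session whose input is the TrueDepth camera; the function name is illustrative.

```swift
import AVFoundation

// Sketch: enable depth delivery for a still photo capture.
func captureDepthPhoto(with photoOutput: AVCapturePhotoOutput,
                       delegate: AVCapturePhotoCaptureDelegate) {
    photoOutput.isDepthDataDeliveryEnabled = true   // opt in on the output first

    let settings = AVCapturePhotoSettings()
    settings.isDepthDataDeliveryEnabled = true      // request depth for this shot
    settings.embedsDepthDataInPhoto = true          // "fire and forget": depth goes in the file

    photoOutput.capturePhoto(with: settings, delegate: delegate)
    // In the delegate callback, read photo.depthData from the delivered AVCapturePhoto.
}
```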
Get a live feed of depth maps with AVCaptureDepthDataOutput. That one is like the video data output; instead of recording directly to a movie file, it gives your delegate a timed sequence of image (or in this case, depth) buffers. If you're also capturing video at the same time, AVCaptureDataOutputSynchronizer might be handy for making sure you get coordinated depth maps and color frames together.
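And the live-feed path, as a minimal sketch; the class and queue label are illustrative.

```swift
import AVFoundation

// Sketch: receive a timed sequence of depth maps from the TrueDepth camera.
final class DepthStreamReceiver: NSObject, AVCaptureDepthDataOutputDelegate {
    func attach(to session: AVCaptureSession) {
        let depthOutput = AVCaptureDepthDataOutput()
        guard session.canAddOutput(depthOutput) else { return }
        session.addOutput(depthOutput)
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
    }

    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        // depthData.depthDataMap is a CVPixelBuffer of per-pixel depth/disparity values.
        print("Depth map at \(timestamp.seconds)")
    }
}
```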
As Apple's Device Compatibility documentation notes, you need to select the builtInTrueDepthCamera device to get any of these depth capture options. If you select the front-facing builtInWideAngleCamera, it becomes like any other selfie camera, capturing only photo and video.
Just to emphasize: from an API point of view, capturing depth with the front-facing TrueDepth camera on iPhone X is a lot like capturing depth with the back-facing dual cameras on iPhone 7 Plus and 8 Plus. So if you want a deep dive on how all this depth capture business works in general, and what you can do with captured depth information, check out the WWDC17 Session 507: Capturing Depth in iPhone Photography talk.

Detect presence of objects using OpenCV in live iphone camera

Can anyone help me to detect realtime objects in iPhone camera using OpenCV?
My actual objective is to give an alert to users while an object interfering on a specific location of my application camera view.
My current thinking is to capture an image with respect to my camera overlay view, which represents a specific location of my camera view, and then process that image using OpenCV to detect objects by color. If I can identify an object in that image, I will alert the user in the camera overlay itself. But I couldn't figure out how to detect an object from a UIImage.
Please direct me if anyone knows some other good way to achieve my goal. Thanks in advance.
I solved my issue in the following way:
Created an image capture module with AVFoundation classes (AVCaptureSession)
Captured image buffers continuously through a timer working alongside the camera module
Processed the captured frames to find objects through OpenCV
(cropping, grayscale, thresholding, feature detection, etc.)
Referral Link: http://docs.opencv.org/doc/tutorials/tutorials.html
Alerted the user through an animated camera overlay view
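The capture side of those steps can be sketched as below. The class name is illustrative, and the OpenCV call is a placeholder (OpenCV itself is a C++ library, typically bridged via Objective-C++).

```swift
import AVFoundation

// Sketch: stream camera frames as pixel buffers suitable for handing
// to an OpenCV processing routine.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Hand pixelBuffer to OpenCV here (crop, grayscale, threshold, detect...).
        _ = pixelBuffer
    }
}
```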
Anyway, detecting objects purely through image processing is not very accurate. To detect objects reliably in a live stream, we would need a dedicated sensor (like the depth sensor in a Kinect camera), or perhaps a well-trained machine-learning model.

Resources