Drone Data Collection & Processing

Module 7

Image detection
Step 1: Install face_recognition and OpenCV
!pip install face_recognition opencv-python matplotlib

Downloading face_recognition_models-0.3.0.tar.gz (100.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.1/100.1 MB 7.8 MB/s eta 0:00:00

The install continues until it prints:

Successfully built face-recognition-models
Installing collected packages: face-recognition-models, face_recognition
Successfully installed face-recognition-models-0.3.0 face_recognition-1.3.0

Step 2: Face Recognition Using face_recognition and OpenCV
from IPython.display import display, Javascript
from google.colab.output import eval_js
import numpy as np
import cv2
import base64

def take_photo(filename='photo.jpg', quality=0.8):
    js = Javascript('''
    async function takePhoto(quality) {
      const div = document.createElement('div');
      const capture = document.createElement('button');
      capture.textContent = 'Capture';
      div.appendChild(capture);

      const video = document.createElement('video');
      video.style.display = 'block';
      const stream = await navigator.mediaDevices.getUserMedia({video: true});

      document.body.appendChild(div);
      div.appendChild(video);
      video.srcObject = stream;
      await video.play();

      // Wait until the user clicks "Capture".
      await new Promise((resolve) => capture.onclick = resolve);

      const canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext('2d').drawImage(video, 0, 0);
      stream.getVideoTracks()[0].stop();
      div.remove();
      return canvas.toDataURL('image/jpeg', quality);
    }
    ''')
    display(js)
    data = eval_js(f'takePhoto({quality})')

    # Decode the base64 JPEG returned by the browser into an OpenCV (BGR) image.
    image_bytes = base64.b64decode(data.split(',')[1])
    np_arr = np.frombuffer(image_bytes, np.uint8)
    img = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
    cv2.imwrite(filename, img)
    return img

Step 3: Capture the Known Face (allow camera access when prompted)
import face_recognition

print("Capture KNOWN face")
known_frame = take_photo('known.jpg')

# Reload the saved file so the image is in RGB order, as face_recognition expects.
known_image = face_recognition.load_image_file('known.jpg')
known_encoding = face_recognition.face_encodings(
    known_image,
    num_jitters=50,
    model='large'
)[0]

Step 4: Capture the Test Image
print("Capture TEST face")

test_frame = take_photo('test.jpg')

Step 5: Prediction "Recognized" or "Unrecognized"
import matplotlib.pyplot as plt

# face_recognition expects RGB images, but take_photo() returns a BGR OpenCV frame.
test_rgb = cv2.cvtColor(test_frame, cv2.COLOR_BGR2RGB)

face_locations = face_recognition.face_locations(test_rgb)
face_encodings = face_recognition.face_encodings(
    test_rgb,
    face_locations,
    num_jitters=23,
    model='large'
)

for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
    match = face_recognition.compare_faces([known_encoding], face_encoding)[0]
    label = "Recognized" if match else "Unrecognized"
    color = (0, 255, 0) if match else (0, 0, 255)
    cv2.rectangle(test_frame, (left, top), (right, bottom), color, 2)
    cv2.putText(test_frame, label, (left, top - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.9, color, 2)
    print("Enter..." if match else "Unrecognized")

plt.imshow(cv2.cvtColor(test_frame, cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
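Under the hood, compare_faces decides a match by thresholding the Euclidean distance between 128-dimensional face encodings (the library's default tolerance is 0.6; smaller distances mean more similar faces). The helper below is an illustrative re-implementation of that check on synthetic vectors, not part of the module's code, for experimenting with stricter or looser tolerances:

```python
import numpy as np

# face_recognition encodings are 128-d vectors; compare_faces simply
# thresholds the Euclidean distance (library default tolerance: 0.6).
def is_match(known_encoding, candidate_encoding, tolerance=0.6):
    distance = np.linalg.norm(known_encoding - candidate_encoding)
    return distance <= tolerance

# Synthetic encodings just to exercise the check.
rng = np.random.default_rng(0)
enc = rng.normal(size=128) * 0.05

assert is_match(enc, enc)            # identical encodings: distance 0
assert not is_match(enc, enc + 1.0)  # far-apart encodings exceed the tolerance
```

Lowering the tolerance reduces false accepts at the cost of more false rejects; 0.6 is the library's suggested balance for the `large` model.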

UAV Control Block Diagram

[Block diagram: a cascaded control loop. The desired XYZ reference feeds a Position Controller, which outputs velocity commands ($\dot{x}, \dot{y}, \dot{z}$) to an Attitude Controller; the Attitude Controller commands a Rotor Speed Controller (rotor speeds $\omega$), which drives the UAV Dynamics. Motor dynamics convert PWM signals into thrust, and the frame geometry maps thrust into lift and torque. A sensor block feeds back the full state (position, velocity, acceleration, attitude, and angular velocity: $X, Y, Z, \dot{x}, \dot{y}, \dot{z}, \phi, \theta, \psi, \dot{\phi}, \dot{\theta}, \dot{\psi}$) to close the loop on the XYZ coordinates.]
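The outer (position) and inner (attitude/rate) loops in the diagram are typically simple PID stages in cascade. The sketch below shows one axis of such a cascade; the gains, time step, and single-axis simplification are illustrative assumptions, not the module's actual controller:

```python
# Hypothetical single-axis cascade: position error -> desired climb rate -> thrust command.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.01
position_pid = PID(kp=1.2, ki=0.0, kd=0.3, dt=dt)  # outer loop: z error -> desired climb rate
velocity_pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=dt)  # inner loop: climb-rate error -> thrust command

z_desired, z, z_dot = 10.0, 0.0, 0.0  # illustrative state: 10 m altitude target, at rest

z_dot_desired = position_pid.update(z_desired - z)      # position controller output
thrust_cmd = velocity_pid.update(z_dot_desired - z_dot) # would map to rotor speeds (omega)
```

In the real loop, thrust_cmd would pass through the rotor speed controller and motor dynamics (PWM to thrust), and the sensor feedback would update z and z_dot each time step.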
Drone Live Monitoring Platform

[Dashboard: "Live Drone Data Collection & Processing Platform", displaying real-time telemetry fields: Latitude, Longitude, Altitude (m), Speed (m/s), and Battery (%).]

Drone Data Collection and Processing by Dr Aishwarya Dhara