I am trying to create an application that detects a person's heartbeat using the computer webcam. I started writing the code two weeks ago, and so far I have completed the following. How does it work? As shown in the figure below…
>Use OpenCV to detect the face
>Get an image of the forehead
>Apply a filter to convert it into a grayscale image [can be skipped]
>Find the average intensity of the green pixels in each frame
>Save the average to an array
>Apply FFT (I have used the Minim library) to extract the heartbeat from the FFT spectrum (here, I need some help)
Here, I need help extracting the heartbeat from the FFT spectrum. Can anyone help me? There is a similar application developed in Python, but I cannot understand that code well enough to reproduce it. Can anyone help me understand the heartbeat extraction part of that Python code?
//--------- import required libraries ---------
import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import java.util.*;
import ddf.minim.analysis.*;
import ddf.minim.*;
//--------- create objects ---------
Capture video; // camera object
OpenCV opencv; // opencv object
Minim minim;
FFT fft;
//IIRFilter filt;
//--------- create ArrayList ---------
ArrayList<Float> poop = new ArrayList<Float>(); // buffer of per-frame green averages
float[] sample;
int bufferSize = 128;
int sampleRate = 512;
int bandWidth = 20;
int centerFreq = 80;
//--------------------------------------------------
void setup() {
size(640, 480); // size of the window
minim = new Minim(this);
fft = new FFT( bufferSize, sampleRate);
video = new Capture(this, 640/2, 480/2); // initializing video object
opencv = new OpenCV(this, 640/2, 480/2); // initializing opencv object
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // loading the Haar cascade file for face detection
video.start(); // start video
}
void draw() {
background(0);
// image(video, 0, 0 ); // show video in the background
opencv.loadImage(video);
Rectangle[] faces = opencv.detect();
video.loadPixels();
//------------ finding faces in the video ------------
float gavg = 0;
for (int i = 0; i < faces.length; i++) {
noFill();
stroke(#FFB700); // yellow rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); // rectangle around the face (yellow)
stroke(#0070FF); // blue rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height-2*faces[i].height/3); // blue rectangle around the forehead
//-------------------- storing the forehead (white rectangle) part into an image --------------------
stroke(0, 255, 255);
rect(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15);
PImage img = video.get(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15); // storing the forehead area into an image
img.loadPixels();
img.filter(GRAY); // converting the captured image from RGB to gray
img.updatePixels();
int numPixels = img.width*img.height;
for (int px = 0; px < numPixels; px++) {
final color c = img.pixels[px];
final int luminG = c >> 8 & 0xFF; // extract the green channel (the original octal 010 is just 8)
final float luminRangeG = luminG/255.0;
gavg = gavg + luminRangeG;
}
//----------------------------------------------------------
gavg = gavg/numPixels;
if (poop.size() < bufferSize) {
poop.add(gavg);
}
else {
poop.remove(0); // drop the oldest sample...
poop.add(gavg); // ...so the newest one fits
}
}
sample = new float[poop.size()];
for (int i = 0; i < poop.size(); i++) {
Float f = (Float) poop.get(i);
sample[i] = f;
}
if (sample.length >= bufferSize) {
//fft.window(FFT.NONE);
fft.forward(sample, 0);
// bpf = new BandPass(centerFreq, bandwidth, sampleRate);
// in.addEffect(bpf);
float bw = fft.getBandWidth(); // returns the width of each frequency band in the spectrum (in Hz ).
println(bw); // returns 21.5332031 Hz for spectrum [0] & [512]
for (int i = 0; i < fft.specSize(); i++) {
// println( "Freq" + max(sample));
stroke(0, 255, 0);
float x = map(i, 0, fft.specSize(), 0, width);
line( x, height, x, height-fft.getBand(i)*100);
// text(" FFT FREQ "+ fft.getFreq(i), width/2-100, 10*(i+1));
// text("FFT BAND" + fft.getBand(i), width/2+ 100, 10*(i+1));
}
}
else {
println(sample.length + " " + poop.size());
}
}
void captureEvent(Capture c) {
c.read();
}
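The pixel loop above isolates the green channel by bit-shifting: a Processing color is a packed 32-bit ARGB int, so shifting right by 8 and masking with 0xFF extracts the green byte. A minimal plain-Java sketch of that averaging step (the class name and the two test pixels are made up for illustration):

```java
public class GreenAverage {
    // Average the normalized green channel over an array of packed ARGB pixels.
    static float averageGreen(int[] pixels) {
        float sum = 0;
        for (int c : pixels) {
            int g = (c >> 8) & 0xFF;   // green byte sits at bits 8-15
            sum += g / 255.0f;         // normalize to 0..1
        }
        return sum / pixels.length;
    }

    public static void main(String[] args) {
        int[] px = { 0xFF00FF00, 0xFF000000 }; // pure green + black (ARGB)
        System.out.println(averageGreen(px));  // prints 0.5
    }
}
```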
The FFT is applied to a window of 128 samples.
int bufferSize = 128;
In the draw() method, samples are stored in the array until the buffer is filled, so that the FFT can be applied. After that the buffer stays full: to make room for each new sample, the oldest one is deleted. gavg is the average gray-channel value.
gavg = gavg/numPixels;
if (poop.size() < bufferSize) {
poop.add(gavg);
}
else {
poop.remove(0); // drop the oldest sample so the newest fits
poop.add(gavg);
}
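As a standalone illustration (plain Java; the helper name push is made up), the fixed-size window keeps only the most recent bufferSize values, dropping the oldest before each new append:

```java
import java.util.ArrayList;

public class SlidingWindow {
    // Append v, evicting the oldest sample once the buffer is full.
    static void push(ArrayList<Float> buf, int bufferSize, float v) {
        if (buf.size() >= bufferSize) buf.remove(0); // drop oldest
        buf.add(v);                                  // append newest
    }

    public static void main(String[] args) {
        ArrayList<Float> poop = new ArrayList<Float>();
        for (int i = 0; i < 200; i++) push(poop, 128, (float) i);
        System.out.println(poop.size()); // stays at 128
        System.out.println(poop.get(0)); // oldest surviving sample: 72.0
    }
}
```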
The buffered samples are then copied into the sample array:
sample = new float[poop.size()];
for (int i = 0; i < poop.size(); i++) {
Float f = (Float) poop.get(i);
sample[i] = f;
}
Now you can apply FFT to the sample array
fft.forward(sample, 0);
Only the spectrum result is displayed in the code. The heartbeat frequency must be calculated.
For each band in the FFT, find the band with the maximum value; the index of that band corresponds to the heartbeat frequency.
float maxAmplitude = 0;
int heartBeatBand = 0;
for (int i = 0; i < fft.specSize(); i++) {
// track the strongest frequency band
if (fft.getBand(i) > maxAmplitude) {
maxAmplitude = fft.getBand(i);
heartBeatBand = i;
}
}
Then get the bandwidth to convert that band index into a frequency.
float bw = fft.getBandWidth();
heartBeatFrequency = heartBeatBand * bw;
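Putting the last two steps together: a self-contained plain-Java sketch of the peak-picking (no Minim; the naive DFT, the 16 Hz sample rate, and the 1.25 Hz test signal of about 75 BPM are made-up values for illustration):

```java
public class HeartRateFromSpectrum {
    // Magnitude of bin k of an N-point DFT, computed naively (fine for N = 128).
    static double binMagnitude(double[] x, int k) {
        double re = 0, im = 0;
        for (int n = 0; n < x.length; n++) {
            double phi = 2 * Math.PI * k * n / x.length;
            re += x[n] * Math.cos(phi);
            im -= x[n] * Math.sin(phi);
        }
        return Math.sqrt(re * re + im * im);
    }

    // Find the strongest bin (skipping DC) and convert its index to Hz.
    static double peakFrequencyHz(double[] x, double sampleRateHz) {
        int bestBin = 1;
        double bestMag = 0;
        for (int k = 1; k <= x.length / 2; k++) {
            double m = binMagnitude(x, k);
            if (m > bestMag) { bestMag = m; bestBin = k; }
        }
        double bandWidth = sampleRateHz / x.length; // Hz per bin
        return bestBin * bandWidth;
    }

    public static void main(String[] args) {
        int n = 128;
        double fs = 16.0;                 // assumed effective sample rate
        double[] sample = new double[n];
        for (int i = 0; i < n; i++)       // synthetic 1.25 Hz "pulse"
            sample[i] = Math.sin(2 * Math.PI * 1.25 * i / fs);
        double hz = peakFrequencyHz(sample, fs);
        System.out.println(hz * 60);      // heart rate in BPM, prints 75.0
    }
}
```

The same index-times-bandwidth conversion is what fft.getBandWidth() gives you in Minim; only the peak search and the sample rate differ between this sketch and the real application.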