====== Video capture examples ======
The examples below lean heavily on documentation and code from the [[http://www.processing.org/reference/libraries/video/Capture.html|Processing 2.0 library reference]] and from [[http://www.learningprocessing.com/|Learning Processing]].
===== List available capture devices =====
You'll need at least one available device to do video capture. The following sketch prints a list of available capture devices to the console. On Linux (the only platform I tested), different resolutions show up as different devices.
<code java>
/**
 * Print out a list of available capture (i.e., video) devices.
 * @author Mithat Konar
 */
import processing.video.*;

void setup() {
  String[] cameras = Capture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  }
  println("Available cameras:");
  for (int i = 0; i < cameras.length; i++) {
    println("camera " + i + ": " + cameras[i]);
  }
}
</code>
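The exact strings returned by ''Capture.list()'' vary by platform and camera, with one entry per supported resolution on Linux as noted above. Hypothetically, the output might look something like this (the device name and string format are illustrative only):

<code>
Available cameras:
camera 0: name=UVC Camera (046d:0825),size=640x480,fps=30
camera 1: name=UVC Camera (046d:0825),size=320x240,fps=30
camera 2: name=UVC Camera (046d:0825),size=160x120,fps=30
</code>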
===== Show video =====
There are two ways you can set up video capture: by requesting capture parameters or by specifying the device.
==== By requesting capture parameters ====
<code java>
/**
 * Show video using requested capture parameters.
 * @author Mithat Konar
 */
import processing.video.*;

Capture cam;

final int CAM_WIDTH = 320;
final int CAM_HEIGHT = 240;
final int FPS = 15;

void setup() {
  size(CAM_WIDTH, CAM_HEIGHT);
  if (Capture.list().length == 0) {
    println("There are no cameras available for capture.");
    exit();
  }
  // Instantiate a new Capture object, requesting the specs:
  cam = new Capture(this, CAM_WIDTH, CAM_HEIGHT, FPS);
  cam.start(); // In Processing 2.0, you need to start the capture device.
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
}
</code>
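As an alternative to polling ''cam.available()'' in ''draw()'', the video library can also deliver frames through a ''captureEvent()'' callback, which fires whenever a new frame is ready. A minimal variant of the sketch above using that approach (same ''setup()'' as before):

<code java>
void draw() {
  image(cam, 0, 0);
}

// Called by the video library whenever a new frame
// is available from the capture device.
void captureEvent(Capture c) {
  c.read();
}
</code>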
==== By specifying device ====
Processing will crop the capture to the canvas size if it doesn't fit.
<code java>
/**
 * Show video using a specified capture device.
 * @author Mithat Konar
 */
import processing.video.*;

Capture cam;

final int CANVAS_WIDTH = 320;
final int CANVAS_HEIGHT = 240;
final int camnum = 55; // Pick a camnum that makes sense given the canvas size!

void setup() {
  size(CANVAS_WIDTH, CANVAS_HEIGHT);
  String[] camera_list = Capture.list();
  if (camera_list.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  }
  println("Available cameras:");
  for (int i = 0; i < camera_list.length; i++) {
    println("camera " + i + ": " + camera_list[i]);
  }
  println();
  println("Using camera " + camnum + ", " + camera_list[camnum] + ".");
  // Instantiate a Capture object by specifying the device name stored
  // in the camera_list array:
  cam = new Capture(this, camera_list[camnum]);
  cam.start(); // In Processing 2.0, you need to start the capture device.
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
}
</code>
===== Capture is a PImage =====
Something the [[http://processing.org/reference/libraries/video/Capture.html|documentation]] on the Capture class doesn’t mention is that Capture is derived from [[http://processing.org/reference/PImage.html|PImage]]. This is the key to manipulating Capture output. You need to dig into the Capture source code to figure that out:
<code java>
public class Capture extends PImage implements PConstants { ...
</code>
This means all the methods and fields available to PImage objects should also be available to Capture objects.
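For example, you can call PImage methods like ''get()'' directly on a Capture object. A minimal sketch of the idea, assuming the same ''cam'' setup as in the examples above:

<code java>
void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
  // get() comes from PImage but works on the Capture object directly:
  color c = cam.get(cam.width/2, cam.height/2); // center pixel
  println("Center pixel brightness: " + brightness(c));
}
</code>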
===== Reduce resolution =====
The following example will reduce the resolution of the captured video in both space and time. In addition, it applies a filter that renders the output in grayscale. (This might be useful for determining roughly the minimum resolution required for useful processed-video results.)
<code java>
/**
 * Reduce the resolution of the captured image.
 * @author Mithat Konar
 */
import processing.video.*;

Capture cam; // The video capture device.
PImage img;  // Buffer image.

// Canvas parameters:
final int CANVAS_WIDTH = 320;
final int CANVAS_HEIGHT = 240;
final int CANVAS_FPS = 4;

// Video capture parameters (adjust as needed for your
// platform's available capture options):
final int CAM_WIDTH = 320;
final int CAM_HEIGHT = 240;
final int CAM_FPS = 15;

// Rendering parameters: number of cells the rendered
// image should be in each direction:
final int REDUCED_WIDTH = 128;
final int REDUCED_HEIGHT = 96;

void setup() {
  frameRate(CANVAS_FPS);
  size(CANVAS_WIDTH, CANVAS_HEIGHT);
  if (Capture.list().length == 0) {
    println("There are no cameras available for capture.");
    exit();
  }
  // Instantiate a buffer image used for subsampling, etc.
  img = createImage(REDUCED_WIDTH, REDUCED_HEIGHT, RGB);
  // Instantiate a new Capture object, requesting the specs:
  cam = new Capture(this, CAM_WIDTH, CAM_HEIGHT, CAM_FPS);
  cam.start(); // In Processing 2.0, you need to start the capture device.
}

void draw() {
  // Grab a frame.
  if (cam.available() == true) {
    cam.read();
  }
  // The following should work but doesn't:
  //   cam.resize(REDUCED_WIDTH, REDUCED_HEIGHT);
  // So we use an interim buffer image instead:
  img.copy(cam, 0, 0, CAM_WIDTH, CAM_HEIGHT, 0, 0, REDUCED_WIDTH, REDUCED_HEIGHT);
  // And draw the image (full canvas):
  image(img, 0, 0, width, height);
  filter(GRAY); // Render the canvas in grayscale.
}
</code>
This version walks through each pixel in the reduced image and renders its value as an N×N block (N is called ''OUTPUT_SCALE'' in the code below).
<code java>
/**
 * Reduce the resolution of the captured image. Display results in blocks.
 * @author Mithat Konar
 */
import processing.video.*;

// === Program constants === //
// Rendering parameters:
// number of cells the rendered image should be in each direction:
final int REDUCED_WIDTH = 32;
final int REDUCED_HEIGHT = 24;

// Canvas parameters:
// number of times you want the REDUCED image blown up:
final int OUTPUT_SCALE = 10;
// frame rate of rendered output:
final int CANVAS_FPS = 8;

// Video capture parameters
// (adjust as needed for your platform's available capture options):
final int CAM_WIDTH = 320;
final int CAM_HEIGHT = 240;
final int CAM_FPS = 15;

// === Global variables === //
Capture cam; // The video capture device.
PImage img;  // Buffer image.

// === GO! === //
void setup() {
  frameRate(CANVAS_FPS);
  size(REDUCED_WIDTH*OUTPUT_SCALE, REDUCED_HEIGHT*OUTPUT_SCALE);
  if (Capture.list().length == 0) {
    println("There are no cameras available for capture.");
    exit();
  }
  // Instantiate a buffer image used for subsampling, etc.
  img = createImage(REDUCED_WIDTH, REDUCED_HEIGHT, RGB);
  // Instantiate a new Capture object, requesting the specs:
  cam = new Capture(this, CAM_WIDTH, CAM_HEIGHT, CAM_FPS);
  cam.start(); // In Processing 2.0, you need to start the capture device.
}

void draw() {
  // Grab a frame.
  if (cam.available() == true) {
    cam.read();
  }
  // Using a buffer because
  //   cam.resize(REDUCED_WIDTH, REDUCED_HEIGHT);
  // doesn't work :-(
  img.copy(cam, 0, 0, CAM_WIDTH, CAM_HEIGHT, 0, 0, REDUCED_WIDTH, REDUCED_HEIGHT);
  img.loadPixels();
  // For each column in img:
  for (int col = 0; col < REDUCED_WIDTH; col++) {
    // For each row in img:
    for (int row = 0; row < REDUCED_HEIGHT; row++) {
      // Get the color of the pixel at (col, row).
      color c = img.pixels[col + row*img.width];
      fill(c);
      stroke(c);
      rect(col*OUTPUT_SCALE, row*OUTPUT_SCALE, OUTPUT_SCALE, OUTPUT_SCALE);
    }
  }
}
</code>
You can render grayscale instead of color above by replacing the line
<code java>
color c = img.pixels[col + row*img.width];
</code>
with
<code java>
float c = brightness(img.pixels[col + row*img.width]);
</code>
This works because ''fill()'' and ''stroke()'' interpret a single float argument as a grayscale value.
===== Brightest pixel =====
The following example takes the reduced-resolution camera feed above and draws a colored circle inside the brightest pixel it finds. If more than one pixel shares the maximum brightness, it marks the first one (scanning from top-left to bottom-right). It also adds some code to indicate frame timing.
<code java>
/**
 * Reduce the resolution of the captured image and indicate the brightest pixel.
 * @author Mithat Konar
 */
import processing.video.*;

// === Program constants === //
// Rendering parameters:
// number of cells the rendered image should be in each direction:
final int REDUCED_WIDTH = 32;
final int REDUCED_HEIGHT = 24;

// Canvas parameters:
// number of times you want the REDUCED image blown up:
final int OUTPUT_SCALE = 20;
// frame rate of rendered output
// (should divide evenly into CAM_FPS to avoid jitter):
final int CANVAS_FPS = 6;

// Video capture parameters
// (adjust as needed for your platform's available capture options):
final int CAM_WIDTH = 320;
final int CAM_HEIGHT = 240;
final int CAM_FPS = 30;

// === Global variables === //
Capture cam; // The video capture device.
PImage img;  // Buffer image.
int blink_state = 0;

// === GO! === //
void setup() {
  frameRate(CANVAS_FPS);
  size(REDUCED_WIDTH*OUTPUT_SCALE, REDUCED_HEIGHT*OUTPUT_SCALE);
  ellipseMode(CORNER);
  if (Capture.list().length == 0) {
    println("There are no cameras available for capture.");
    exit();
  }
  // Instantiate a buffer image used for subsampling, etc.
  img = createImage(REDUCED_WIDTH, REDUCED_HEIGHT, RGB);
  // Instantiate a new Capture object, requesting the specs:
  cam = new Capture(this, CAM_WIDTH, CAM_HEIGHT, CAM_FPS);
  cam.start(); // In Processing 2.0, you need to start the capture device.
}

void draw() {
  int brightestCol = 0;
  int brightestRow = 0;
  float brightestIntensity = 0.0;

  // Grab a frame.
  if (cam.available() == true) {
    cam.read();
  }
  // We are using a buffer img because
  //   cam.resize(REDUCED_WIDTH, REDUCED_HEIGHT);
  // doesn't work :-(
  img.copy(cam, 0, 0, CAM_WIDTH, CAM_HEIGHT, 0, 0, REDUCED_WIDTH, REDUCED_HEIGHT);
  img.loadPixels();

  // For each column in img:
  for (int col = 0; col < REDUCED_WIDTH; col++) {
    // For each row in img:
    for (int row = 0; row < REDUCED_HEIGHT; row++) {
      // Draw the pixel intensity.
      float pixelIntensity = brightness(img.pixels[col + row*img.width]);
      fill(pixelIntensity);
      stroke(pixelIntensity);
      rect(col*OUTPUT_SCALE, row*OUTPUT_SCALE, OUTPUT_SCALE, OUTPUT_SCALE);
      // Determine whether this is the brightest pixel in this frame.
      if (pixelIntensity > brightestIntensity) {
        brightestIntensity = pixelIntensity;
        brightestCol = col;
        brightestRow = row;
      }
    }
  }

  // Highlight the brightest pixel.
  fill(#000000, 0);
  stroke(#ff0000);
  strokeWeight(2);
  ellipse(brightestCol*OUTPUT_SCALE, brightestRow*OUTPUT_SCALE,
          OUTPUT_SCALE, OUTPUT_SCALE);

  // Frame timer.
  fill(#000099, 100);
  stroke(#0000ff, 100);
  strokeWeight(1);
  blink_state = (blink_state + 1) % CANVAS_FPS;
  rect(0, 0, blink_state*OUTPUT_SCALE, OUTPUT_SCALE);
}
</code>
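A note on the frame timer: ''blink_state'' advances by one each frame and wraps every ''CANVAS_FPS'' frames, so the translucent bar in the top-left corner grows by one cell per frame and resets nominally once per second. If the reset looks visibly slower or uneven, the sketch is falling behind the requested frame rate.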