Video capture examples

Examples below have been adapted from the Processing 2.0 library reference and from examples at Learning Processing.

List available capture devices

The following sketch prints a list of available capture devices to the console. You'll need at least one entry to do video capture. On Linux (the only platform I tested), different resolutions show up as different devices.

list_capture_devices.pde
/** Print out a list of available capture (i.e., video) devices
 * @author Mithat Konar
 */
 
import processing.video.*;
 
Capture cam;
 
void setup() {
  String[] cameras = Capture.list();
 
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
    return;  // exit() doesn't stop setup() immediately.
  }
 
  println("Available cameras:");
  for (int i = 0; i < cameras.length; i++) {
    println("camera " + i + ": " + cameras[i]);
  }
}

Show video

By requesting capture parameters

show_capture_by_parameters.pde
/** Show video using requested capture parameters
 * @author Mithat Konar
 */
 
import processing.video.*;
 
Capture cam;
final int CAM_WIDTH = 320;
final int CAM_HEIGHT = 240;
final int FPS = 15;
 
void setup() {
  size(CAM_WIDTH, CAM_HEIGHT);
 
  if (Capture.list().length == 0) {
    println("There are no cameras available for capture.");
    exit();
    return;  // exit() doesn't stop setup() immediately.
  }
 
  // Instantiate a new Capture object, requesting the specs:
  cam = new Capture(this, CAM_WIDTH, CAM_HEIGHT, FPS);
  cam.start();  // In Processing 2.0, you need to start the capture device
}
 
void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
}

By specifying device

Note that Processing will crop the capture to the canvas size if it doesn't fit.

show_capture_by_devnum.pde
/** Show video using specified capture device
 * @author Mithat Konar
 */
 
import processing.video.*;
 
Capture cam;
final int CANVAS_WIDTH = 320;
final int CANVAS_HEIGHT = 240;
final int camnum = 55;  // Pick a camnum that makes sense given the canvas size!
 
void setup() {
  size(CANVAS_WIDTH, CANVAS_HEIGHT);
 
  String[] camera_list = Capture.list();
 
  if (camera_list.length == 0) {
    println("There are no cameras available for capture.");
    exit();
    return;  // exit() doesn't stop setup() immediately.
  }
 
  println("Available cameras:");
  for (int i = 0; i < camera_list.length; i++) {
    println("camera " + i + ": " + camera_list[i]);
  }
 
  println();
  if (camnum >= camera_list.length) {
    println("camnum " + camnum + " is out of range for this machine's device list.");
    exit();
    return;
  }
  println("Using camera " + camnum + ", " + camera_list[camnum] + ".");
 
  // Instantiate a Capture object by specifying the device name stored
  // in camera_list array:
  cam = new Capture(this, camera_list[camnum]);
  cam.start();  // In Processing 2.0, you need to start the capture device
}
 
void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
}

Capture is a PImage

Something the documentation on the Capture class doesn’t mention is that Capture is derived from PImage. You need to dig into the Capture source code to figure that out:

public class Capture extends PImage implements PConstants { ...

This means all the methods and fields available to PImage objects should also be available to Capture objects. This is the key to manipulating Capture output.
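The relationship can be illustrated with hypothetical stand-in classes (simplified sketches, not the real Processing sources): a subclass inherits all of its parent's pixel-access machinery, so anything that works on the parent works on the child.

```java
// Illustrative stand-ins for PImage and Capture (hypothetical, simplified).
class PImageLike {
    int width, height;
    int[] pixels;

    PImageLike(int w, int h) {
        width = w;
        height = h;
        pixels = new int[w * h];
    }

    // Row-major pixel access, analogous to PImage.get(x, y).
    int get(int x, int y) {
        return pixels[x + y * width];
    }
}

// "Capture" inherits everything the image class offers.
class CaptureLike extends PImageLike {
    CaptureLike(int w, int h) {
        super(w, h);
    }
}
```

This is why a Capture can be passed anywhere a PImage is expected, e.g. directly to image(cam, 0, 0) in the sketches above.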

Reduce resolution

The following example reduces the resolution of the captured video in both space and time. In addition, it applies a filter that renders the output in black and white. (This might be useful for roughly determining the minimum resolution required for useful processed-video results.)

capture_reduce_res.pde
/** Reduce resolution of captured image.
 * @author Mithat Konar
 */
 
import processing.video.*;
 
Capture cam;  // The video capture device.
PImage img;   // Buffer image.
 
// Canvas parameters
final int CANVAS_WIDTH = 320;
final int CANVAS_HEIGHT = 240;
final int CANVAS_FPS = 4;
 
// Video capture parameters (adjust as needed for your 
// platform's available capture options):
final int CAM_WIDTH = 320;
final int CAM_HEIGHT = 240;
final int CAM_FPS = 15;
 
// Rendering parameters: number of cells the rendered
// image should be in each direction:
final int REDUCED_WIDTH = 128;
final int REDUCED_HEIGHT = 96;
 
void setup() {
  frameRate(CANVAS_FPS);
  size(CANVAS_WIDTH, CANVAS_HEIGHT);
 
  if (Capture.list().length == 0) {
    println("There are no cameras available for capture.");
    exit();
    return;  // exit() doesn't stop setup() immediately.
  }
 
  // Instantiate a buffer image used for subsampling, etc.
  img = createImage(REDUCED_WIDTH, REDUCED_HEIGHT, RGB);
 
  // Instantiate a new Capture object, requesting the specs:
  cam = new Capture(this, CAM_WIDTH, CAM_HEIGHT, CAM_FPS);
  cam.start();  // In Processing 2.0, you need to start the capture device
}
 
void draw() {
  // Grab a frame
  if (cam.available() == true) {
    cam.read();
  }
 
  // The following should work but doesn't.
  //  cam.resize(REDUCED_WIDTH, REDUCED_HEIGHT);
 
  // So we use an interim buffer image instead:
  img.copy(cam, 0, 0, CAM_WIDTH, CAM_HEIGHT, 0, 0, REDUCED_WIDTH, REDUCED_HEIGHT);
 
  // And draw the image (full canvas):
  image(img, 0, 0, width, height);
  filter(GRAY);  // Render image() in B&W.
}
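The img.copy() call above does the spatial downsampling. Its effect is roughly that of a nearest-neighbor resample, which can be sketched in plain Java (an illustration of the idea; Processing's actual copy() may use a different resampling method):

```java
class Resample {
    // Nearest-neighbor downsample of a row-major pixel array.
    static int[] subsample(int[] src, int srcW, int srcH, int dstW, int dstH) {
        int[] dst = new int[dstW * dstH];
        for (int row = 0; row < dstH; row++) {
            for (int col = 0; col < dstW; col++) {
                // Map each destination pixel back to its nearest source pixel.
                int srcCol = col * srcW / dstW;
                int srcRow = row * srcH / dstH;
                dst[col + row * dstW] = src[srcCol + srcRow * srcW];
            }
        }
        return dst;
    }
}
```

Note that pixels not mapped to are simply dropped; averaging neighborhoods instead would reduce aliasing at the cost of a little more work per frame.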

This version walks through each pixel in the reduced image and renders its value as an NxN block. (N is called OUTPUT_SCALE in the code below.)

capture_reduce_res-2.pde
/** Reduce resolution of captured image. Display results in blocks.
 * @author Mithat Konar
 */
 
import processing.video.*;
 
// === Program constants === //
// Rendering parameters:
// number of cells the rendered image should be in each direction:
final int REDUCED_WIDTH = 32;
final int REDUCED_HEIGHT = 24;
 
// Canvas parameters:
// number of times you want REDUCED image blown up:
final int OUTPUT_SCALE = 10;
// frame rate of rendered output:
final int CANVAS_FPS = 8;
 
// Video capture parameters
// (adjust as needed for your platform's available capture options):
final int CAM_WIDTH = 320;
final int CAM_HEIGHT = 240;
final int CAM_FPS = 15;
 
// === Global variables ===//
Capture cam;  // The video capture device.
PImage img;   // Buffer image.
 
// === GO! === //
void setup() {
  frameRate(CANVAS_FPS);
  size(REDUCED_WIDTH*OUTPUT_SCALE, REDUCED_HEIGHT*OUTPUT_SCALE);
 
  if (Capture.list().length == 0) {
    println("There are no cameras available for capture.");
    exit();
    return;  // exit() doesn't stop setup() immediately.
  }
 
  // Instantiate a buffer image used for subsampling, etc.
  img = createImage(REDUCED_WIDTH, REDUCED_HEIGHT, RGB);
 
  // Instantiate a new Capture object, requesting the specs:
  cam = new Capture(this, CAM_WIDTH, CAM_HEIGHT, CAM_FPS);
  cam.start();  // In Processing 2.0, you need to start the capture device
}
 
void draw() {
  // Grab a frame
  if (cam.available() == true) {
    cam.read();
  }
 
  // Using a buffer because
  // cam.resize(REDUCED_WIDTH, REDUCED_HEIGHT);
  // doesn't work :-(
  img.copy(cam, 0, 0, CAM_WIDTH, CAM_HEIGHT, 0, 0, REDUCED_WIDTH, REDUCED_HEIGHT);
  img.loadPixels();
 
  // For each column in img:
  for (int col = 0; col < REDUCED_WIDTH; col++) {
    // For each row in img:
    for (int row = 0; row < REDUCED_HEIGHT; row++) {
      // Get color from pixel at col, row
      color c = img.pixels[col + row*img.width];
      fill(c);
      stroke(c);
      rect(col*OUTPUT_SCALE, row*OUTPUT_SCALE, OUTPUT_SCALE, OUTPUT_SCALE);
    }
  }
}

You can render grayscale instead of color above by replacing the line

color c = img.pixels[col + row*img.width];

with

float c = brightness(img.pixels[col + row*img.width]);
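In the default RGB color mode, Processing's brightness() corresponds to the HSB brightness of the packed ARGB pixel, i.e. the largest of the three color channels. The channel extraction can be sketched in plain Java (an illustration; the real brightness() also respects the sketch's colorMode() settings):

```java
class PixelMath {
    // Extract the channels of a packed 0xAARRGGBB int and return the
    // HSB-style brightness: the largest channel, in the range 0-255.
    static int brightness(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        return Math.max(r, Math.max(g, b));
    }
}
```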
processing/video_capture.1365793841.txt.gz · Last modified: 2013/04/12 19:10 by mithat
