====== Video capture examples ======
The examples on this page demonstrate how to do video capture in Processing.
===== List available capture devices =====
The following sketch will print out a list of available capture devices to the console. You'll need at least one entry to do video capture. On Linux (the only platform I tested), different resolutions show up as different devices.

<file java list_capture_devices.pde>
/** Print out a list of available capture (i.e., video) devices
 * to the console.
 */

import processing.video.*;

void setup() {
  String[] devices = Capture.list();
  println("Available capture devices:");
  println(devices);
  exit();
}
</file>
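If different resolutions show up as separate devices, you can request a particular entry by passing its name to the ''Capture(parent, width, height, name, fps)'' constructor. The following is a minimal sketch of that idea; the ''"640x480"'' substring matched here is only an example, since the exact device description strings vary by platform:

<code java>
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  String[] devices = Capture.list();

  // Prefer a device whose description mentions the resolution we want;
  // fall back to the first device otherwise.
  String chosen = devices[0];
  for (String d : devices) {
    if (d.contains("640x480")) {
      chosen = d;
      break;
    }
  }

  cam = new Capture(this, 640, 480, chosen, 30);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
}
</code>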
===== Capture is a PImage =====
Something the documentation on the ''Capture'' class doesn't make obvious is that ''Capture'' is a subclass of ''PImage'', so a ''Capture'' object can be used anywhere a ''PImage'' can.
<code java>
import processing.video.*;

Capture cam;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  // Because Capture is a PImage, the object can be passed
  // directly to image():
  image(cam, 0, 0);
}
</code>
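Because of this, ''PImage'' methods such as ''get()'' and fields such as ''pixels[]'' are available directly on the capture object. As a quick illustration (a minimal sketch in the same spirit, not part of the original sketches here), you can sample the color under the mouse pointer straight from the capture:

<code java>
import processing.video.*;

Capture cam;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);

  // get() is inherited from PImage: sample the captured frame
  // at the mouse position and echo the swatch in a corner square.
  color c = cam.get(mouseX, mouseY);
  fill(c);
  noStroke();
  rect(0, 0, 32, 32);
}
</code>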
This version walks through each pixel in the reduced image and renders its value as an NxN block. (N is called OUTPUT_SCALE in the code below.)

<file java capture_reduce_res-2.pde>
/** Reduce resolution of captured image. Display results in blocks.
 * @author Mithat Konar
 */

import processing.video.*;

// === Program constants === //
// Rendering parameters:
// number of cells the rendered image should be in each direction:
final int REDUCED_WIDTH = 32;
final int REDUCED_HEIGHT = 24;

// Canvas parameters:
// number of times you want REDUCED image blown up:
final int OUTPUT_SCALE = 10;
// frame rate of rendered output:
final int CANVAS_FPS = 8;

// Video capture parameters
// (adjust as needed for your platform's hardware):
final int CAM_WIDTH = 320;
final int CAM_HEIGHT = 240;
final int CAM_FPS = 15;

// === Global variables === //
Capture cam;  // The video capture device.
PImage img;   // Buffer image.

// === GO! === //
void setup() {
  frameRate(CANVAS_FPS);
  size(REDUCED_WIDTH*OUTPUT_SCALE, REDUCED_HEIGHT*OUTPUT_SCALE);

  if (Capture.list().length == 0) {
    println("No capture devices found.");
    exit();
  }

  // Instantiate a buffer image used for subsampling:
  img = createImage(REDUCED_WIDTH, REDUCED_HEIGHT, RGB);

  // Instantiate a new Capture object, requesting the specs:
  cam = new Capture(this, CAM_WIDTH, CAM_HEIGHT, CAM_FPS);
  cam.start();
}

void draw() {
  // Grab a frame if one is available.
  if (cam.available() == true) {
    cam.read();
  }

  // Using a buffer because
  // cam.resize(REDUCED_WIDTH, REDUCED_HEIGHT)
  // doesn't work (at least on Linux).
  img.copy(cam, 0, 0, cam.width, cam.height, 0, 0, img.width, img.height);
  img.loadPixels();

  // For each column in img:
  for (int col = 0; col < REDUCED_WIDTH; col++) {
    // For each row in img:
    for (int row = 0; row < REDUCED_HEIGHT; row++) {
      // Get color from pixel at col, row
      color c = img.pixels[col + row*img.width];
      fill(c);
      stroke(c);
      rect(col*OUTPUT_SCALE, row*OUTPUT_SCALE, OUTPUT_SCALE, OUTPUT_SCALE);
    }
  }
}
</file>
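The lookup ''img.pixels[col + row*img.width]'' works because Processing stores ''pixels[]'' in row-major order: row ''row'' occupies indices ''row*width'' through ''row*width + width - 1''. The arithmetic can be checked in plain Java, independent of Processing:

```java
public class PixelIndex {
  // Convert (col, row) coordinates into a row-major pixels[] index.
  static int index(int col, int row, int width) {
    return col + row * width;
  }

  public static void main(String[] args) {
    int width = 32, height = 24;  // same as REDUCED_WIDTH/REDUCED_HEIGHT
    int[] pixels = new int[width * height];

    // Tag each pixel with a value that encodes its own (col, row).
    for (int row = 0; row < height; row++) {
      for (int col = 0; col < width; col++) {
        pixels[index(col, row, width)] = col * 1000 + row;
      }
    }

    // Reading back (5, 3) recovers exactly the value written there.
    System.out.println(pixels[index(5, 3, width)]);  // prints 5003
  }
}
```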
+ | |||
+ | You can render grayscale instead of color above by replacing the line <code java> |
processing/video_capture.txt · Last modified: 2013/08/23 04:10 by mithat