Posts Tagged ‘processing’

glitch-sequencer: Free, Processing-Based App from GlitchDS Creator Hearts Netbooks

May 5, 2009

For those of you longing to mutate beats like so many promiscuous Petri dish bacteria, programmer Bret Truchan is a kindred spirit. Bret has created a series of instant experimental classics for the Nintendo DS: glitchDS, a cellular automaton music sequencer; repeaterDS, a visual sample mangler; and cellDS, a grid-based sequencer you can script in Lua.

The Nintendo DS is portable and cute, but it’s not normally open to running software without the Nintendo Seal of Quality. (Insert snickers here.) To run Bret’s creations, you need specialized cartridge hardware that fools the DS into running unsigned code. The DS isn’t entirely stable when it comes to timing, either, and it doesn’t have the flexibility of a computer.

Enter the netbook. The netbook is nearly as portable, completely open to running whatever you like on Windows or Linux, and boasts easy USB connectivity, a big screen, and … well, you know, all the things you like about laptops. When it comes to musical productivity, much as I love the DS, the netbook has a whole lot going for it, and still has that added ultra-portability that makes you feel you can make music anywhere.

Bret recently made the jump to desktop software with Quotile, a step sequencer you can live-code for mighty morphing beats. Quotile is cool, but for many, glitchDS was the star. Now glitch-sequencer puts that idea on the desktop, so you can run it anywhere – just the thing for a laptop you were going to retire, or that new netbook.

Not Sequencing, Glitch Sequencing

Glitch-sequencer is a sequencer, so it needs to talk to either a software synth or external hardware. Bret likes to hook it up to his Elektron Machinedrum and Monomachine. Our own Handmade Music event was the (unofficial) first public outing of the software; his rig there, an HP netbook plus the Machinedrum, makes for a sweet, mobile combination.

Bret’s mobile rig in action at Handmade Music. Photo: Jason Schorr.

Despite the appearance of a grid and sequences of levels, this isn’t an app that works like a conventional sequencer. Here’s the basic breakdown:

  • Cellular Automata via a seed + playback grid
  • Trigger and value sequencers to determine which MIDI events the organically-generated mutations produce
  • Pattern length and clock division settings for metric values
  • Sync settings

There are two grids: a “seed” sequencer that initializes a starting pattern, and a “playback” sequencer that provides feedback and control of the pattern that plays as the software runs. These two grids operate on the principles of cellular automata, specifically John Horton Conway’s Game of Life, an evolutionary grid “game” that has been popular in computer music for its simplicity and the way it animates over time. (The Game of Life is a “zero-player game,” which I suspect is probably the only truly fun way to play Monopoly.)
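
If you’ve never implemented the Life rule yourself, it’s only a few lines. Here’s a minimal sketch of it in Processing (the standard Conway rule the grids are built on, not Bret’s actual code), with a random seed standing in for the seed sequencer:

    // Conway's Game of Life on a 16x16 wrapping grid.
    int cols = 16, rows = 16;
    boolean[][] grid = new boolean[cols][rows];

    void setup() {
      size(320, 320);
      frameRate(4);
      // Random seed, standing in for drawing into the "seed" sequencer.
      for (int x = 0; x < cols; x++)
        for (int y = 0; y < rows; y++)
          grid[x][y] = random(1) < 0.3;
    }

    void draw() {
      background(0);
      boolean[][] next = new boolean[cols][rows];
      for (int x = 0; x < cols; x++) {
        for (int y = 0; y < rows; y++) {
          int n = liveNeighbors(x, y);
          // Live cells survive with 2-3 neighbors; dead cells are born with exactly 3.
          next[x][y] = grid[x][y] ? (n == 2 || n == 3) : (n == 3);
          if (grid[x][y]) rect(x * 20, y * 20, 20, 20);
        }
      }
      grid = next;  // each frame is one generation of the evolving pattern
    }

    int liveNeighbors(int x, int y) {
      int count = 0;
      for (int dx = -1; dx <= 1; dx++)
        for (int dy = -1; dy <= 1; dy++) {
          if (dx == 0 && dy == 0) continue;
          if (grid[(x + dx + cols) % cols][(y + dy + rows) % rows]) count++;
        }
      return count;
    }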

The playback sequencer is just a set of cells. To determine when each cell actually triggers events, you use a neat, color-coded trigger sequencer, which, as the name suggests, is what calls the MIDI events. Using the value sequencer for each color-coded swatch, you determine what that message is. In fact, if you wanted, you could use glitch-sequencer to control only effects parameters or envelopes instead of notes – or visuals, or anything else that can be triggered by MIDI.

As you’ve got seeded grids doing their organic, unpredictable thing, you’ll likely want a little bit of control, too, and you have mechanisms for that. There’s a pattern length grid, which determines length in a more conventional way, plus a clock division setting for the master rhythmic division. There’s also a snapshot feature, itself presented as a grid, so you can make little glitchy song arrangements by triggering different stored settings.

Where all of this gets fancy is the additional trigger settings. In addition to the MIDI event values, you get (see the sketch after this list):

  • Gate percentage for randomized probabilities
  • Clock division
  • Loop length
  • Quantization for pitch (none, Ionian, Phrygian)
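
To make those settings concrete, here’s a toy Processing sketch of two of them, gate probability and pitch quantization. The scale tables and root note are my own illustration, not numbers pulled from glitch-sequencer:

    int[] ionian   = {0, 2, 4, 5, 7, 9, 11};
    int[] phrygian = {0, 1, 3, 5, 7, 8, 10};

    void setup() {
      // Example: a cell fired with raw value 61 and a 75% gate.
      if (gate(0.75)) {
        println("Ionian pitch: " + quantize(61, ionian));
        println("Phrygian pitch: " + quantize(61, phrygian));
      }
    }

    // Fire only gatePercent of the time (randomized probability).
    boolean gate(float gatePercent) {
      return random(1.0) < gatePercent;
    }

    // Snap a raw 0-127 value to the nearest degree of a scale, root = MIDI 36.
    int quantize(int value, int[] scale) {
      int octave = value / 12;
      int degree = round(map(value % 12, 0, 11, 0, scale.length - 1));
      return constrain(36 + octave * 12 + scale[degree], 0, 127);
    }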

You can also manage the color-coded swatches as layers and mix their volume independently.

A Handmade Music attendee gets her hands on the glitchy goodness. Photo: Jason Schorr.

My one-line version of the manual: with that many parameters, screw around a bit and you’ll get something pretty unpredictable and glitchy.

This concept is related to other Game of Life-based sequencers, particularly Lazyfish’s Newschool for Reaktor and (applied to an effect) Audio Damage’s Automaton. Because tiny implementation details can have a big impact on the resulting sound, though, it’s always nice to have a new take, and I think Bret’s creation is unique in its ability to either tightly control the sequence or completely screw things up, thanks to its many parameters.

It’s all built in Processing, the free, open-source, Java-based coding environment. I’m hoping to get a scoop on Bret’s experience with timing and Java, so stay tuned. Processing coders: the MIDI library Bret used is themidibus. There’s a trick to getting MIDI working on the Mac, thanks to the fact that Apple stopped supporting a standard Java API in their implementation (doh!), but once you clear that hurdle, you’ve got Mac + Windows + Linux support – and this could be ported to Android, too, with a little work.
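
For the curious, a bare-bones themidibus sketch looks something like this; the device index is a placeholder, so check what MidiBus.list() prints and point it at your own synth or Elektron box:

    import themidibus.*;

    MidiBus bus;

    void setup() {
      MidiBus.list();                    // prints available MIDI inputs and outputs
      bus = new MidiBus(this, -1, 0);    // no input; output 0 is a placeholder index
    }

    void draw() {
      // Send middle C roughly once a second (channel 0, pitch 60, velocity 100).
      if (frameCount % 60 == 0) {
        bus.sendNoteOn(0, 60, 100);
        delay(100);                      // crude; tight timing in Java is its own story
        bus.sendNoteOff(0, 60, 100);
      }
    }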

http://createdigitalmusic.com/2009/05/01/glitch-sequencer-free-processing-based-app-from-glitchds-creator-hearts-netbooks/


Ableton Live 8 Misuse: Ping Pong Pseudo Scratching Effect Video Tutorial

April 27, 2009
For all the emphasis on learning how to use creative tools the proper way, it’s often when you misuse a feature that it really becomes powerful. So, in the spirit of some of the “mistutorials” from Ableton’s own Dennis DeSantis, here’s our friend Michael Hatsis of New York’s Track Team Audio / Warper Party / Dubspot with a really unusual way to achieve scratching effects.
You know the Ping Pong effect for its clichéd, stereo-panning echoes. But here, it goes in an entirely different direction: now that Live 8 has added new delay modes, you can create special effects that don’t sound like the typical treatment. Mike manages to warp and bend Ping Pong into something that sounds a lot like scratching. He warns that “this is not meant to replace vinyl nor will it produce a totally authentic sounding scratch sound.” On the other hand, you start to get sounds that are reminiscent of scratching but unique, which I think is a Very Good Thing.
Live 8 users, download the template:
http://www.trackteamaudio.com/videos/scratchtemplatelive8.zip
There’s also some nice discussion happening over on the Ableton blog. (Main request: automation / dummy clips for more sound-warping power.)
Video: Total misuse of a ping pong delay – scratch effects
(And those of you Pd/Max/SuperCollider/Chuck/Reaktor users out there, maybe this will inspire some DIY effects along similar lines.)
Previously:
Ableton Live 8 Creative Tutorial Videos: Using and Misusing Groove Extraction
Ableton Live 8 Creative Tutorial Videos: Misusing Frequency Shifter
(And, yes, much as I love Live 8, I welcome other tools, too – any tutorials you’d like to request, or tutorials you’d like to make?)

http://createdigitalmusic.com/2009/04/27/ableton-live-8-misuse-ping-pong-psuedo-scratching-effect-video-tutorial/

Sound to Pixels and Back Again: Isolating Instruments with Photosounder

April 22, 2009

Sound is a wonderful, if invisible, thing. To shape the tiny fluctuations in air pressure that make up what we hear, we always work through some sort of software metaphor. So why not make that metaphor pixels – and why not manipulate the visual element directly?

Translating between sound and image is not a new concept in music software. The deepest tool for these functions is unquestionably the Mac-only classic MetaSynth, which sprang from the imagination of Bryce creator and graphic designer Eric Wenger. To me, one of the most appealing features of MetaSynth has always been its filter tool, the one component that allows you to work directly with sound using imagery and painting tools. The core of the tool, however, turns images into a score for synthesis, which opens up powerful features for microtones and the like but can conversely make simply designing sounds more challenging. (Side note: Leopard users, read this re: MetaSynth.)

Photosounder looks like MetaSynth, but it more directly translates between sound and image. It also has a uniquely straightforward interface for precisely adjusting controls and mappings. Put these together, and you can really use Photosounder as an audio tool. That opens up not only experimental techniques, but even makes conventional tasks more accessible.
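
If you want a feel for the underlying idea (not Photosounder’s actual algorithm, just the image-as-score notion it shares with MetaSynth), here’s a toy Processing sketch that reads columns as time, rows as pitch, and brightness as loudness, then prints note events. The generated image is a stand-in for a real photo:

    PImage img;

    void setup() {
      size(400, 200);
      img = makeTestImage(64, 24);                   // stand-in for a loaded photo
      image(img, 0, 0, width, height);
      for (int x = 0; x < img.width; x++) {          // each column = one time step
        for (int y = 0; y < img.height; y++) {       // each row = one pitch
          float level = brightness(img.get(x, y)) / 255.0;
          if (level > 0.6) {                         // only loud-enough pixels sound
            int pitch = 96 - y * 2;                  // higher rows = higher notes
            println("step " + x + ": pitch " + pitch + ", velocity " + int(level * 127));
          }
        }
      }
    }

    // Procedural noise image so the sketch runs without an external file.
    PImage makeTestImage(int w, int h) {
      PImage p = createImage(w, h, RGB);
      p.loadPixels();
      for (int i = 0; i < p.pixels.length; i++) p.pixels[i] = color(noise(i * 0.05) * 255);
      p.updatePixels();
      return p;
    }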

Photosounder is also under very active development, with recent additions like a lossless mode for better sound fidelity and loop modes. The result is a really compelling-looking tool for audio manipulation.

What can you do with these pixel powers over sound? Users have been experimenting and posting some pretty impressive stuff:

  • Isolating and removing individual instruments – making this an ideal remixing and sampling tool – using Photoshop
  • Making entire tracks from photographs (which, again, was possible with MetaSynth as infamously employed by Aphex Twin, but sounds very different here)
  • Processing sounds with Photoshop filters
  • Making beats by drawing
  • Extreme time processing

Photosounder is currently Windows-only, but Linux and Mac versions are promised. (By the way, I think that’s going to become more commonplace as savvy developers take up cross-platform development tools, toolchains, and frameworks.)

It’s cheap enough to impulse-buy, too, at €25 for non-commercial use or €99 for commercial use.

http://photosounder.com/

Photosounder examples (with video)

I hope to get my hands on Photosounder and show off some of its features soon. Thanks to everyone who sent this in! (And yeah, after four or five people mentioned it, I finally got around to covering it!)

The best way to see what’s possible: check out the videos. Here’s a selection of my favorites:


http://createdigitalmusic.com/2009/04/16/sound-to-pixels-and-back-again-isolating-instruments-with-photosounder/#more-5642

The Video Processing System, Part 1

January 22, 2009
Jitter

By Andrew Benson, Section: Tutorials, Topic: Jitter
Posted on Mon Dec 22, 2008 at 02:57:53 PM EST

Between the tutorials, Jitter Recipes, and all of the example content, there are many Jitter patches floating around that each do one thing pretty well, but very few of them give a sense of how to scale up into a more complex system. Inspired by a recent patching project and Darwin Grosse’s guitar processing articles, this series of tutorials will present a Jitter-based live video processing system using simple reusable modules, a consistent control interface, and optimized GPU-based processes wherever possible. The purpose of these articles is to provide an over-the-shoulder view of my creative process in building more complex Jitter patches for video processing.

Download the patch used in this tutorial.

Designing the System

I want to keep the framework for this project as simple as possible to allow for quickly building new modules and dropping them into place without having to rewrite the whole patch each time. This means staying away from complex send/receive naming systems, encapsulating complex pieces into modules that are easy to manage and rearrange, and using a standardized interface for parameter values. I also want to avoid creating a complex, confusing patching framework that would require a learning curve of its own.

For input, we’ll use a camera input and a movie file playback module with a compositing stage after them to fade between inputs. For display, we’ll create an OpenGL context so that we can take advantage of hardware-accelerated processing and display features to keep our system as efficient as possible. Between input and display, we’ll put in place several simple effect modules to do things like color adjustment and remapping, Gaussian blur, temporal blurring, and masked feedback. In later articles, we’ll also look at ways to incorporate OpenGL geometry into our system.

For a live processing setup, being able to easily control the right things is very important. For this reason, we’re going to build all of our modules to accept floating-point (0.-1.) inputs and toggles to turn things on and off. Maintaining a standardized set of controls makes it much easier to incorporate a new gestural controller module into your rig. For any live situation, whether it’s video, sound, or something else, it’s going to be much more fun and intuitive if you use some sort of physical controller. There are many MIDI controllers, gaming controllers, sensor interface kits, and knob/fader boxes on the market at pretty affordable prices, all of which are fairly straightforward to work with in Max. In addition to creating intuitive controls, we’ll also look at ways to store presets to easily recall the state of your modules.
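
As a rough analogy outside of Max, sketched here in Processing with themidibus (the CC numbers and device index are placeholders), normalizing everything to 0.-1. floats and toggles looks like this:

    import themidibus.*;

    MidiBus bus;
    float blurAmount = 0.0;      // a module parameter, always a 0.-1. float
    boolean feedbackOn = false;  // a module toggle

    void setup() {
      size(200, 200);
      bus = new MidiBus(this, 0, -1);   // input 0 is a placeholder index; no output needed
    }

    void draw() {
      background(blurAmount * 255);     // stand-in for "apply the parameter"
    }

    // themidibus callback for incoming control changes
    void controllerChange(int channel, int number, int value) {
      if (number == 1)  blurAmount = value / 127.0;  // knob -> 0.-1. float
      if (number == 64) feedbackOn  = value > 63;    // pedal/button -> on/off toggle
    }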

The Core

We’re going to start building this rig by first putting together the core features we’ll need before moving on to the effects. The first thing we’ll need is a master rendering context where our patch will display its output. Looking in the “master-context” subpatch, you’ll see the essential ingredients for any OpenGL context (qmetro, jit.gl.render, jit.window, etc.), along with some handy control features.


Notice that we are using a send object (‘s draw’) to send out a bang. This is an easy way for us to trigger anything that needs triggering in our system without running patch cords all over the place. Since we are building a pretty straightforward video processing patch, we don’t need to worry too much about complex rendering sequences. In this patch we’ve also provided an interface to jit.window to quickly set it to predetermined sizes or to make it a floating window, as well as the inclusion of the standard ESC->fullscreen control. In our main patch we place the necessary interface objects to control these features, including a umenu loaded with common video sizes for the jit.window. This little umenu will certainly come in handy in other places, so I’ll go ahead and Save Prototype… as “videodims”. In our main patch you’ll also notice the jit.gl.videoplane object at the end of the processing chain. This will serve as our master display object.

The Inputs

Now that we have the groundwork laid for our OpenGL rendering, we can begin to add functionality to our video processor. The obvious first step will be to add some video inputs to our system. The “camera-input” subpatch contains the essential interface for the jit.qt.grab object (Windows users should use jit.dx.grab instead). One thing you might notice is that we’ve reused the umenu for resizing here. Another thing to note is that we are using @colormode uyvy, which will essentially double the efficiency of sending frames to the GPU. For more information on Jitter colormodes, check out the Tutorials. We’ve also included a gate to turn off the bangs for this input module if we aren’t using it. I’ve also borrowed the device selection interface from the jit.qt.grab helpfile.


In the “movie-file” subpatch, you’ll find the essentials for doing simple movie playback from disk. A lot of elements have been reused from the “camera-input” module, but I’ve also created a simple interface for scrubbing through the movie to find the right cue point. The trick here is to use the message output from the right outlet of jit.qt.movie to gather information about the file. When a new file is loaded, a “read” message comes out of the right outlet, which then triggers a “getframecount” message, the result of which we send to our multiplier object. At this point I should mention another little trick I’m using, which is to set up the slider in the main patch to send out floating-point numbers between 0 and 1. To do this, I went into the inspector and changed the values of the Range and Output Multiplier attributes, while also turning on the Float Output attribute. Once again, I save this object as a Prototype called “videoslider” so that I can use it anywhere I need a floating-point controller.
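
The same scrubbing arithmetic, translated out of Jitter into a Processing sketch using the built-in video library (the clip name is a placeholder): a normalized 0.-1. slider value times the clip length gives the cue point.

    import processing.video.*;

    Movie mov;

    void setup() {
      size(640, 360);
      mov = new Movie(this, "clip.mov");        // placeholder file name
      mov.loop();
    }

    void draw() {
      if (mousePressed) {
        float slider = mouseX / float(width);   // stand-in for the 0.-1. slider
        mov.jump(slider * mov.duration());      // normalized value * length = cue point
      }
      image(mov, 0, 0, width, height);
    }

    void movieEvent(Movie m) {
      m.read();
    }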

The final piece of our input stage is the COMPOSITOR section. This module contains two parts – a texture-conversion/input-swapping patch and a simple crossfading patch. Inside the “texture-conversion” patch, you’ll see some simple logic that allows me to swap which channel the inputs are going to. This allows us to run the movie input even when the camera module is turned off. Each input is then sent to a jit.gl.slab object that has the “cc.uyvy2rgba.jxs” shader loaded into it. This converts the UYVY color information into full-resolution RGBA textures on the GPU.

Mixing Inputs

Once our input streams are on the graphics card as textures, we can use jit.gl.slab to do all sorts of image processing. The most obvious thing to do at this stage is to add a compositing module to combine the two inputs. Inside the “easy-xfade” patch you’ll see a jit.gl.slab with the “co.normal.jxs” shader loaded into it. This does straight linear crossfading, à la jit.xfade.

We’ve also provided an interface to switch to other compositing functions, borrowed from one of the Jitter examples. The bline object is used here to create a ramp between states to provide smooth transitions. The benefit of using bline here is that it is timed using bangs instead of absolute milliseconds. Since video processing happens on a frame-by-frame basis, this proves to be much more useful for creating smooth transitions.
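
Here’s the same frame-counted idea sketched in Processing rather than Max: a crossfade whose ramp advances once per drawn frame instead of per millisecond. The two solid-color images are stand-ins for real inputs.

    PImage a, b;
    float fade = 0;   // 0 = all input A, 1 = all input B
    float step = 0;   // per-frame increment, i.e. a bang-timed ramp

    void setup() {
      size(320, 240);
      a = makeSolid(color(200, 60, 60));
      b = makeSolid(color(60, 60, 200));
    }

    void draw() {
      fade = constrain(fade + step, 0, 1);
      image(a, 0, 0);
      tint(255, fade * 255);   // draw B on top with increasing opacity
      image(b, 0, 0);
      noTint();
    }

    void mousePressed() {
      // The "bang": start a 60-frame ramp toward whichever input isn't showing.
      step = (fade < 0.5) ? 1.0 / 60 : -1.0 / 60;
    }

    PImage makeSolid(color c) {
      PImage p = createImage(width, height, RGB);
      p.loadPixels();
      for (int i = 0; i < p.pixels.length; i++) p.pixels[i] = c;
      p.updatePixels();
      return p;
    }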

This wraps up our first installment of the Video Processing System. Stay tuned for future articles where we will be adding a variety of different effect modules to this patch, as well as looking at parameter management strategies.

http://www.cycling74.com/story/2008/12/22/115753/24