Von Nohrfeldt Ensemble & Naïvi – Live at Rewire & The Rest Is Noise

The software builds on the code written for Year Of The Horse and features individual panel controls in the form of filter stacks. Each filter stack is made up of specific controls and shaders that belong to a single LED panel. All of these elements are driven by frequency bands sampled from the music. All panels share the same (modified) source texture, which can be mapped using a UV layout system.
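
To make the structure concrete, here is a minimal openFrameworks sketch of the idea, not the actual show code: each panel owns a stack of shader passes, and each pass is scaled by one FFT band. The struct names, the "intensity" uniform and the 32-band spectrum are illustrative assumptions.

#include "ofMain.h"

// One effect pass in a panel's filter stack, driven by a single FFT band.
struct FilterPass
{
    ofShader shader;      // effect applied on top of the shared source texture
    int band = 0;         // index of the frequency band that drives this pass
    float influence = 1.0f;
};

// The per-panel filter stack: all passes draw the same shared source texture.
struct PanelFilterStack
{
    std::vector<FilterPass> passes;

    void draw(ofTexture& sharedSource, const ofRectangle& panelRect)
    {
        // sample a 32-band spectrum from the currently playing audio
        float* spectrum = ofSoundGetSpectrum(32);

        for (FilterPass& pass : passes)
        {
            pass.shader.begin();
            pass.shader.setUniform1f("intensity", spectrum[pass.band] * pass.influence);
            sharedSource.draw(panelRect);
            pass.shader.end();
        }
    }
};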

Presets allowed me to create a specific look for a particular part of the show and to switch quickly between these states. Every state is modified live with a UPC-16 and an iPad MIDI controller.

The software is written from scratch in C++ using openFrameworks (OF) and OpenGL; the panels are custom built.

The iPad MIDI controller is used for selecting audio frequency bands per panel, together with intensity and influence. The UPC-16, which has only rotary controls, is used to apply additional shader effects on top of the textures sampled from the video source.

Naivi: Studio 80 Year Of The Horse VJ / Light Performance

As NAIVI we performed with our custom-built LED panel and self-written software at Year Of The Horse in Studio 80 (Amsterdam). The organisation wanted to do something different with their light setup and we (NAIVI) were invited to make that happen.

The LED panel is built from the ground up from high-brightness LEDs shipped in from China. The panel receives its data through a Teensy microcontroller. I wrote a custom C++ library to convert any sort of image (or video) data to a stream of bytes the microcontroller can work with. On top of the library we wrote a VJ application that samples and decomposes a video and audio stream to procedurally generate the image data that is sent over. These images are synced to the music based on a modulation matrix that accepts LFO, recorded channel and audio data. The modulation matrix drives a set of application effects (controls) that are used to render the images.
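
A minimal sketch of what such a conversion step could look like, assuming a plain serial connection and a simple RGB-per-LED protocol (the panel dimensions and the frame marker are made up for the example; the actual library and wire format differ):

#include "ofMain.h"

const int kPanelWidth = 25;   // hypothetical panel resolution in LEDs
const int kPanelHeight = 20;

// Downsample a frame to one RGB triplet per LED and push it over serial.
// Usage: ofSerial serial; serial.setup("COM3", 115200); sendFrame(serial, pixels);
void sendFrame(ofSerial& serial, ofPixels& pixels)
{
    std::vector<unsigned char> frame;
    frame.reserve(kPanelWidth * kPanelHeight * 3);

    // sample the source image at the centre of every LED cell
    for (int y = 0; y < kPanelHeight; y++)
    {
        for (int x = 0; x < kPanelWidth; x++)
        {
            int sx = int((x + 0.5f) / kPanelWidth * pixels.getWidth());
            int sy = int((y + 0.5f) / kPanelHeight * pixels.getHeight());
            ofColor c = pixels.getColor(sx, sy);
            frame.push_back(c.r);
            frame.push_back(c.g);
            frame.push_back(c.b);
        }
    }

    serial.writeByte(255);    // hypothetical start-of-frame marker
    serial.writeBytes(frame.data(), (int)frame.size());
}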

You could think of the application as a video synthesizer that accepts any sort of device input to generate an image using a specific render interface (OpenGL).

The application can also control any existing light in the club. Most lights accept a DMX signal that is sent out by a control panel. The panel in Studio 80 accepts MIDI data that can be mapped to various controls to manipulate the brightness or movement of a light. The already existing modulation matrix (in the software application) is used to calculate a new light value based on the image, audio and LFO data available in the running application. This enabled us to control both the lights and visuals from the same software environment, merging two distinct worlds that are most often controlled separately.
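
A hedged sketch of a single row of such a modulation matrix, assuming ofxMidi for the MIDI output (the channel, CC number and source weights are made up for the example):

#include "ofMain.h"
#include "ofxMidi.h"

// Mix two modulation sources into one normalized value.
float modulate(float lfo, float audioBand, float lfoAmount, float audioAmount)
{
    return ofClamp(lfo * lfoAmount + audioBand * audioAmount, 0.0f, 1.0f);
}

// Convert the modulated value to a MIDI CC message for the light controller.
void updateLight(ofxMidiOut& midiOut, float lfo, float audioBand)
{
    float value = modulate(lfo, audioBand, 0.4f, 0.6f);
    midiOut.sendControlChange(1, 20, int(value * 127)); // channel 1, CC 20
}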

All the software is written using openFrameworks in C++. The Teensy code is native C.

Programmable LED wall

After performing with our custom VJ software last month, I noticed that the video stream felt constrained by the bounds of the canvas and did not light up the venue the way I wanted. Although we received very positive feedback on the visuals themselves, I wanted to get rid of the projector and move towards a more custom, scalable light-emitting video projection source.

After reading up on custom LED panels I decided to give that a go. A panel like this would make the club's own lights redundant and could display the visuals in a more tangible way. The added benefit of the hardware is that there would be no more fighting over what gets priority, the lights or the visuals. That turned out to be my main point of frustration the night we performed.

The final (combined) wall should be approximately 4 meters wide and 2 meters high. The LED screen can be broken up into various parts (panels), all of them addressable through a custom GL interface that renders parts of the video stream to a subset of the screen. This enables us to place multiple panels throughout a venue or space.
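
A minimal sketch of what that mapping might look like, assuming each panel stores a source rectangle (its slice of the video stream) and a target rectangle (the Panel struct is illustrative, not the actual interface):

#include "ofMain.h"

struct Panel
{
    ofRectangle sourceRegion; // region of the video stream to sample (pixels)
    ofRectangle screenRegion; // where this panel's slice ends up on screen
};

// Every panel draws its own subsection of the shared source texture.
void drawPanels(ofTexture& source, const std::vector<Panel>& panels)
{
    for (const Panel& p : panels)
    {
        source.drawSubsection(p.screenRegion.x, p.screenRegion.y,
                              p.screenRegion.width, p.screenRegion.height,
                              p.sourceRegion.x, p.sourceRegion.y,
                              p.sourceRegion.width, p.sourceRegion.height);
    }
}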

But buying that much equipment at once costs a lot of money, so I decided to build a small prototype first, using components recommended online. Further down you can find the various parts I used. The microcontrollers arrived within a week, as did the programmable LED strips, cables, plugs and adapters. All of the equipment was shipped from the United States using UPS.

I ordered 2 Teensy 3.1 microcontrollers, together with the OctoWS2811 Adaptor board, to address the individual LED strips. In combination with the OctoWS2811 LED library this setup can address thousands of LEDs, perfect for a project of this size. I had to solder the components together and used a standard CAT cable to drive the individual strips. The wiring of the strips to the board was rather straightforward. For this prototype 500 individual LEDs were fed a video stream using only 4 of the 8 outputs available on every Teensy board. Every 2 strips (approx 300 LEDs) were given their own 10 amp, 5 volt power supply. The PWM signal was sent from one Teensy board; multiple Teensy boards can be connected to keep the video signal in sync. The LED strips were ordered from Adafruit.
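
For reference, a minimal Teensy sketch along these lines, using the OctoWS2811 library's standard buffer setup (the 150 LEDs per strip and the test pattern are assumptions, not the actual panel code):

#include <OctoWS2811.h>

const int ledsPerStrip = 150; // hypothetical strip length

// OctoWS2811 drives all 8 outputs from these DMA buffers.
DMAMEM int displayMemory[ledsPerStrip * 6];
int drawingMemory[ledsPerStrip * 6];

OctoWS2811 leds(ledsPerStrip, displayMemory, drawingMemory,
                WS2811_GRB | WS2811_800kHz);

void setup()
{
    leds.begin();
    leds.show();
}

void loop()
{
    // simple test pattern: fill every strip with a dim red
    for (int i = 0; i < leds.numPixels(); i++)
        leds.setPixel(i, 0x100000);
    leds.show();
    delay(16);
}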

The LED panels are addressed through a custom C++ interface, compatible with openFrameworks. Using this interface I can drive multiple LED panels from various applications, including my VJ application. The VJ app is controlled using an iPad and an Akai APC20 (MIDI controller). The amount of light emitted is (to say the least) exceptional: perfect for lighting up a medium-sized venue, especially once all the hardware is combined into one blistering light source.

Now that there's a working prototype, we can order the additional LED strips, controllers and adaptors. 8 Teensy boards will be used to drive the individual LED panels. Every Teensy board will feed approximately a 1×1 meter LED panel. All of these panels can be combined to create one big wall, or placed throughout a venue to create a more spatial effect. More on that soon.

One finished LED panel dancing to the music and lighting up the club @ Year of the Horse, Studio 80, Amsterdam

LED wall test WIP (using openFrameworks)

2nd programmable video LED wall test

App Controllers

Akai APC20 and iPad, used for controlling the application that drives the panels.

The 2 Teensy controllers soldered onto the Octo adapter (running)

16 meters of LED strips controlled by one Teensy board, using 4 of the 8 available PWM channels. Every 2 strips (300 LEDs) share one 10 A, 5 V power supply.

Kala Processing Library (C++)

Started a new generative visuals project, where a video stream is layered and fragmented in real time, using a custom library built upon openFrameworks. This library will be used to develop various filters and algorithmic effects that can be combined with video streams, images and geometric data. The current version runs at a stable 60+ fps, with the source data sampled from a video stream.
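
As a rough illustration of the layering idea (this is not the actual Kala API): each layer could render the shared video frame through its own effect shader into an FBO, after which the layers are blended back to front. The Layer struct and its members are assumptions.

#include "ofMain.h"

struct Layer
{
    ofFbo buffer;      // allocate with buffer.allocate(width, height) first
    ofShader effect;   // fragmenting / glitching shader for this layer
    float opacity = 1.0f;

    // render the shared source through this layer's effect
    void render(ofTexture& source)
    {
        buffer.begin();
        ofClear(0, 0, 0, 0);
        effect.begin();
        source.draw(0, 0, buffer.getWidth(), buffer.getHeight());
        effect.end();
        buffer.end();
    }

    // composite the layer over whatever was drawn before it
    void draw()
    {
        ofSetColor(255, 255, 255, int(opacity * 255));
        buffer.draw(0, 0);
    }
};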

The framework will be used on the 17th of May (2014) to render the visuals for the upcoming club night: The Year Of The Horse. The trailer (shown below) was recorded live and uses a custom MIDI interface to control the layers. The MIDI interface hooks into an iPad and an Akai APC20.

Video Analyzer / Sampler

I'm working on a program that analyzes and samples large video files (movies). The first implementation is shaping up nicely: there is an option to rip individual frames from a stream (in sequences) and to analyze shots. The idea is to use this information to create spatial relationship maps based on the information found in a video stream.

The application is written in C++ and fully threaded. Multiple operations can be performed at the same time on the same video stream: for example sampling, analyzing and browsing. The interface is straightforward and should help you find the right sample settings.
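
As a sketch of the threading idea (assumed, not the actual application): a worker thread decodes and saves frames while the UI thread keeps drawing. The FrameSampler class and its sampling step are illustrative; some video players need extra care when used off the main thread.

#include "ofMain.h"

class FrameSampler : public ofThread
{
public:
    // Usage: sampler.setup("movie.mp4", 25); sampler.startThread();
    void setup(const std::string& moviePath, int everyNthFrame)
    {
        mPlayer.load(moviePath);
        mStep = everyNthFrame;
    }

    void threadedFunction() override
    {
        for (int f = 0; f < mPlayer.getTotalNumFrames() && isThreadRunning(); f += mStep)
        {
            mPlayer.setFrame(f);   // seek to the frame we want to rip
            mPlayer.update();
            ofSaveImage(mPlayer.getPixels(), "frame_" + ofToString(f) + ".png");
        }
    }

private:
    ofVideoPlayer mPlayer;
    int mStep = 25;
};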

At some point I hope to be able to share the application and release the source code.

Pacemaker Game Sketch

Last weekend I participated in the Global Game Jam 2013. For me it was the first time making a game from scratch and I liked the idea of returning to my student days of no sleep and sipping beer behind a monitor early in the morning. In the end we had a working platformer where all the elements are synced to a global pulse (beat) that changes over time in the level. The theme was: heartbeat.

As the bpm increases, so does the difficulty level (objects rotate, move and disappear at different intervals). It also causes the audio to slowly change from a heartbeat to a more aggressive hardcore kick.

Unfortunately we had no real level / game designer, and for that reason the game isn't completely polished. Everything came together 3 hours before the deadline, but it's playable and illustrates what we tried to accomplish.

I used Unity for the first time and must say I'm impressed. The component-based object model is flexible and almost everything you can think of is there: audio, shaders, lights, a scripting / coding environment, VS integration, etc. Definitely worth trying out.

If you feel like playing it, I uploaded the game HERE. Use the arrow keys to move and press space to jump. It works better with an actual controller (tried an Xbox 360 pad).

Vertex interpolation using Barycentric Coordinates and Bilinear Filtering

About a week ago I had to get uv coordinates accurately transferred onto a new set of points. I knew these points lay on the existing surface, and that a nearest-neighbour solution would cause problems, because a point can be closest to a vert that doesn't belong to the primitive it sits on. I also knew I had to be able to correctly interpolate quads and triangles.

After consulting the web I found out that you can use a triangle's Barycentric coordinates and a quad's bilinearly filtered uv position to correctly interpolate the vertices that define a primitive (or face). Both methods return a set of uv coordinates that help define the point's relative position on the sampled face.

Remember that with this particular uv set I mean the uv set relative to a certain primitive or face, not the uv set that defines a vert's position in 2d space (relative to other vertices).

In the case of a triangle (defined by A, B and C), the uv set specifies the weight of every vert, say: (U: 0.2, V: 0.5, W: 0.3). Note that for simplicity the last value is often left out, because if the point lies within the triangle the separate weights always add up to one. Coming back to the previous weights, we can write the third weight (W) as: 1-(0.2+0.5).

You could also say that the uv coordinates describe the distance you have to walk along the edges of the triangle, starting from vert A towards verts B and C. If u or v (where u = location on edge A-B, and v = location on edge A-C) drops below zero, or their sum exceeds one, the point lies outside of the triangle.

The point’s position (P) can now easily be calculated: P = (B*U) + (C*V) + (A*W)
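
For example, take the weights above and a triangle with A = (0,0), B = (1,0) and C = (0,1): P = (1,0)*0.2 + (0,1)*0.5 + (0,0)*0.3 = (0.2, 0.5).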

For a quad (four-sided) primitive the uv coordinates don't describe the vert weights directly. The set only describes the relative position of the point in the primitive's local uv space. But by using a simple bilinear interpolation we can use these coordinates to calculate the point's position (or interpolated uv values). We start by defining two new points along the u axis, and use these points to find the final value along the v axis.

Say we have a square defined by A, B, C and D, with A and B on the bottom edge and D and C on the top, and a uv set that defines a point in this square with the coordinates U = 0.2 and V = 0.5. We can figure out the two new points (P1 and P2) in the u direction by interpolating along the primitive's edges: P1 = A + (B-A)*U and P2 = D + (C-D)*U. But this only gets us halfway: we have the interpolated u value as a set of 2 points, and we can now treat these points as an edge to get the final position (PF) (or whatever data you see fit): PF = P1 + (P2-P1)*V. And that's it…
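
A quick worked example, assuming the unit square A = (0,0), B = (1,0), C = (1,1), D = (0,1) with U = 0.2 and V = 0.5: P1 = A + (B-A)*0.2 = (0.2, 0), P2 = D + (C-D)*0.2 = (0.2, 1), and PF = P1 + (P2-P1)*0.5 = (0.2, 0.5), exactly the point you would expect for those coordinates.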

But this doesn't explain how to get those uv coordinates. Houdini has some built-in functions for this that can be accessed with VEX, C++ and Python; for this example I will stick with Python. But you could also write your own. A good example of finding the Barycentric coordinates of a triangle can be found here. Finding the coordinates for a quad primitive is more difficult and depends on the way the primitive is described. A simple Google search on 'parametric surface' will get you halfway. This wiki article is also a good starting point.

Now for some code. I wrote a UV interpolator in Python that correctly interpolates the uv attribute from a set of vertices. I already know the primitive to sample, so I don't perform any unnecessary ray casts. The operator works with both quads and triangles. The function used to get the uv coordinates local to the primitive is hou.Prim.nearestToPosition(Point). In VEX these coordinates are generated when doing an intersect operation.

"""
This operator interpolates the uv coordinates from a primitive, indexed from the second input.
The points defined in the first input should have an integer attribute pointing to the primitive to sample from.
The primitive should have uv coordinates defined on vertices or points.
Only works with quadractic or triangle primitives!
For Quadratic primitives: bilinear interpolation is used to find the new uv coordinates.
For Triangle primitives: the Barycentric coordinates are used to find the new uv coordinates
For a sample reference: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/
"""


# This code is called when instances of this SOP cook.
node = hou.pwd()
geo_one = node.geometry()
geo_two = node.inputs()[1].geometry()

# Sample the parameters
uv_location = node.parm("uv_location").evalAsString()
prim_index_name = node.parm("prim_index_name").eval()
max_distance = node.parm("max_distance").eval()
prim_type = node.parm("prim_type").evalAsString()
group_name = "faulty_samples"

#Attributes-------------------------------------------------------------------------------

# First sample the uv attribute from the second input
uv_attrib = None
if uv_location == "points":
uv_attrib = geo_two.findPointAttrib("uv")
else:
uv_attrib = geo_two.findVertexAttrib("uv")
use_points = (uv_location == "points")
use_triangles = prim_type == "triangle"

# Make sure the attribute was found
if uv_attrib is None:
raise hou.NodeError("Can't find uv attribute")

# Now sample the primitive index attribute from the first input
prim_index_attrib = geo_one.findPointAttrib(prim_index_name)
if prim_index_attrib is None or prim_index_attrib.dataType() != hou.attribData.Int:
raise hou.NodeError("Can't sample primitive index attribute of type Int: %s" % prim_index_name)

# Add a new point uv attrib if necesarry
added_uv_attrib = geo_one.findPointAttrib("uv")
if added_uv_attrib is None:
added_uv_attrib = geo_one.addAttrib(hou.attribType.Point, "uv", (0.0,0.0,0.0), True)

# Create a faulty point group
faulty_point_group = geo_one.findPointGroup(group_name)
if faulty_point_group is None:
faulty_point_group = geo_one.createPointGroup(group_name)

#Methods--------------------------------------------------------------------------------

def getUVCoordinatesFromQuad(inCoordinates, inPrimitive):
"""
From the incoming primitive we first create two new interpolated points on the u axis
From these points we create the final uv coordinate based on the v axis, using bilinear interpolation
"""


verts = inPrimitive.vertices()
vertuvs = []

if len(verts) != 4:
raise hou.NodeError("Primitive: %d does not have excactly 4 verts!" % inPrimitive.number())

# get the uv values from our verts or points
for vert in verts:
if not use_points:
vertuvs.append(vert.attribValue(uv_attrib))
else:
vertuvs.append(vert.point().attribValue(uv_attrib))

# get our final weights in u and v
rv = 1-inCoordinates[1]
ru = 1-inCoordinates[0]
pv = inCoordinates[1]
pu = inCoordinates[0]

# calculate two new uv samples in the u direction
bottom_uv = ((vertuvs[1][0]*pu + vertuvs[0][0]*ru), (vertuvs[1][1]*pu + vertuvs[0][1]*ru))
top_uv = ((vertuvs[2][0]*pu + vertuvs[3][0]*ru), (vertuvs[2][1]*pu + vertuvs[3][1]*ru))

# interpolate over v to get our final value
final_uv = ((top_uv[0]*pv + bottom_uv[0]*rv), top_uv[1]*pv + bottom_uv[1]*rv, 0.0)

return final_uv

def getUVCoordinatesFromTriangle(inCoordinates, inPrimitive):
"""
Compute the new uv coordinates based on the incoming Barycentric coordinates.
The first coordinate maps to the 3rd vert, the second coordinate to the 2nd vert.
The weight of the first vert is computed by complementing the added two weights.
"""


verts = inPrimitive.vertices()
vertuvs = []

if len(verts) != 3:
raise hou.NodeError("Primitive: %d does not have excactly 3 verts!" % inPrimitive.number())

# get the weights
vert_weights = (1-(inCoordinates[0]+inCoordinates[1]), inCoordinates[1], inCoordinates[0])

# get the uv values from our verts or points
for vert in verts:
if not use_points:
vertuvs.append(vert.attribValue(uv_attrib))
else:
vertuvs.append(vert.point().attribValue(uv_attrib))

# compute the new uv values
new_u = (vertuvs[0][0]*vert_weights[0]) + (vertuvs[1][0]*vert_weights[1]) + (vertuvs[2][0]*vert_weights[2])
new_v = (vertuvs[0][1]*vert_weights[0]) + (vertuvs[1][1]*vert_weights[1]) + (vertuvs[2][1]*vert_weights[2])

return (new_u,new_v, 0.0)

#Compute---------------------------------------------------------------------------------

"""
Iterate over every point that we need to interpolate the coordinates for.
"""


points = geo_one.points()
prims = geo_two.prims()

warning_string = ""
warning_occured = False

for point in points:
# Get the primitive
sample_prim = prims[point.attribValue(prim_index_attrib)]

# Make sure the primitive is a poly
if sample_prim.type() != hou.primType.Polygon:
raise hou.NodeError("Primitive: %d is not of type Polygon" % sample_prim.number())

# Get the parametric uv location of the point on the primitive
local_sample_data = sample_prim.nearestToPosition(point.position())
para_uv_coor = (local_sample_data[1], local_sample_data[0])

distance = local_sample_data[2]

# Add an entry if it's too far away from the primitive
if distance > max_distance:
warning_string += "Point: %d appears to be %f units away from indexed primitive: %d\n" % (point.number(), distance, sample_prim.number())
faulty_point_group.add(point)
warning_occured = True

# Sample the uv coordinates
new_uv_coord = None
if not use_triangles:
new_uv_coord = getUVCoordinatesFromQuad(para_uv_coor, sample_prim)
else:
new_uv_coord = getUVCoordinatesFromTriangle(para_uv_coor, sample_prim)

point.setAttribValue(added_uv_attrib, new_uv_coord)

if warning_occured:
raise hou.NodeWarning(warning_string)

C++ Midi Analyser

I've been playing around with openFrameworks recently, trying to get a real-time 2D fluid solver up and running (more on that later). To make it more interactive I decided to attach some of my MIDI controllers, but couldn't find a MIDI library in openFrameworks. Luckily the ofxMidi addon helped me out.

After downloading the addon and getting it compiled, I noticed that the example was actually a handy way to test my various MIDI devices. After tweaking the code a bit to show the selected port and the devices in an ordered list, the app proved quite useful…

It allows you to select any of the attached devices and displays the messages that are received (value, channel, velocity, etc.). When a device gets disconnected it simply disappears without crashing the app, and reconnecting works just as well. Ever tried that in CUBASE? Pretty solid code, thanks Chris O Shea for making a neat addon. Much appreciated! Makes my life a lot easier.
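
The core of the listener pattern looks roughly like this (a minimal sketch against ofxMidi's interface; method names are from a recent version of the addon and differ slightly in older releases):

#include "ofMain.h"
#include "ofxMidi.h"

class MidiProbeApp : public ofBaseApp, public ofxMidiListener
{
public:
    void setup()
    {
        midiIn.listInPorts();     // print the available devices
        midiIn.openPort(0);       // open the first attached device
        midiIn.addListener(this); // receive messages via the callback below
    }

    // called by ofxMidi for every incoming message
    void newMidiMessage(ofxMidiMessage& msg)
    {
        ofLogNotice() << "channel: " << msg.channel
                      << " control: " << msg.control
                      << " value: " << msg.value
                      << " velocity: " << msg.velocity;
    }

private:
    ofxMidiIn midiIn;
};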

I attached my modified code and the compiled x64 executable (Windows). If you have any MIDI devices hooked up, they should simply appear. Use the numeric keys (1-9) to select a port (device).

Note that in order to compile the modded code, you need to download the ofxMidi addon and link to the source code in there, together with the openFrameworks library. I tested mine with v0071, but the exe should be enough.

Midi Analyser

C# Zoom, Pan and Composite Exercise

I was working on a custom control that allows a user to freely pan and zoom in on an image. It also supports a minimal amount of image compositing, in this case a custom grid.

The image is drawn on a custom panel using my own drawing code. The actual image can be composited with a grid overlay that is placed on top of the main form.

The ZoomControl currently holds an Image and an Overlay object. The Overlay object is used to draw the grid and is tied to the main control through a separate event that is dispatched when the number of rows or columns changes. The rows and columns can be updated through the main interface. The Overlay object creates an image that is stored internally but is accessible to other components. This image is composited with the background image in the ZoomControl and drawn to screen. The drawing is based on the current zoom and pan levels.

Drawing could be optimized by storing the composited image in memory, updating it only when one of the images changes. Currently the image is composited every time the draw function is called.

Anyway, the project can be downloaded HERE.
It's built in Visual Studio 2010 but should compile fine in older versions.

Blofeld

After days of coding for fun or at work I figured it was time to pick up my old synth again.

Back in the day (just a couple of years ago), I spent a lot of time producing music. I still enjoy my record collection and am passionate about anything electronic, especially Drum and Bass, only not the type that gave it a bad name. Thank god there are people like D-Bridge, Spectrasoul and Alix Perez to make your life more comfortable.

Anyway. I produced this little track in the wee hours of my spare time; all strings and basses are made with the Waldorf Blofeld. Initial patterns were created in Renoise and finalized in Cubase. Long live the string and pad!

Download: Fomal – Sugar Rush