WIP: Introducing Miko, a real-time VJ solution written in C++

This last month I’ve been slaving over a new tool that lets me VJ in REAL TIME based on images.
It samples pixel data that drives a fluid simulation with image recognition layered on top.
Custom vector maps are derived to let elements `search` for each other.
The results can be displayed using fields, particles, geometry and image data.

My intention was to play with the structure of an image, to find its hidden agenda.
The image is decomposed into key elements and from there recreated using various fields of data.
The results can be quite, ehm, trippy.

It runs stably at around 60 fps, drawing over 100,000 particles plus ordered geometry and scalar fields.
Everything is written in C++ using OpenGL, with the help of openFrameworks and OpenCV. Houdini is used to pre-process certain images.

Multiple images can be loaded when it launches, and everything is controlled on the spot through my own MIDI interface.

It has to be noted that it wasn’t easy, but it was very rewarding!
The reason for doing this is that I’ve always been disappointed by the visuals presented at parties.
Most often it comes down to pre-rendered shapes with some glow, synced to a sloppy beat.

I had my first gig demoing the software last Thursday, during the Amsterdam Dance Event.
As ‘Lange Man’ I tried my best to make the visuals work at the Green Room event, where friends hosted the Nachtwerk stage. The application ran stably for about 5 hours without dropping the frame rate. So far reactions have been very (very) positive and I can’t wait to use it again.

My plan is to improve the application over time, implementing a custom UI and new features that would allow it to run even faster, process larger amounts of data and be more versatile. I’m waiting on some footage that was shot during the night and am working on a small demonstration video. For now I just have some screen grabs.

Hopefully I can release the source code in some usable form this month, so anyone interested can dive in and see what was done or how to do such a thing.

First off, let me thank my girlfriend Maartje Willemijn Smits for shooting some beautiful images to work with.
I’d also like to thank MEMO for getting me started with a very open and scalable fluid solver solution, much appreciated!

Vertex interpolation using Barycentric Coordinates and Bilinear Filtering

About a week ago I had to transfer uv coordinates accurately onto a new set of points.
I knew these points lay on the existing surface, and that a nearest-neighbour solution would cause problems, because a point can be closer to a different vert than to the primitive it sits on. I also knew I had to be able to correctly interpolate quads and triangles.

After consulting the web I found out that you can use the triangle’s Barycentric coordinates and the quadratic’s filtered uv position to correctly interpolate the vertices that define a primitive (or face). Both methods return a set of uv coordinates that describe the point’s relative position on the sampled face.

Remember that with this particular uv set I mean the uv set relative to a certain primitive or face, not the uv set that defines a vert’s position in 2d space (relative to other vertices).

In the case of a triangle (defined by A, B and C), the uv set specifies the weight of every vert, say: (U: 0.2, V: 0.5, W: 0.3). Note that for simplicity the last value is often left out, because if the point lies within the triangle the separate weights always add up to one. Coming back to the previous weights, we can write the third weight (W) as: 1 - (0.2 + 0.5).

You could also say that the uv coordinates describe the distance you have to walk along the edges of the triangle, starting from vert A towards verts B and C. If u or v (where u = location along edge A-B, and v = location along edge A-C) ends up below zero, or u + v ends up above one, the point lies outside of the triangle.

The point’s position (P) can now easily be calculated: P = (B*U) + (C*V) + (A*W)
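To make that concrete, here is a tiny Python sketch (plain Python, nothing Houdini-specific; the example points are made up) that evaluates a position from a (u, v) pair using the formula above:

def barycentric_point(A, B, C, u, v):
    # P = B*u + C*v + A*(1 - u - v); each input is an (x, y, z) tuple
    w = 1.0 - (u + v)
    return tuple(B[i]*u + C[i]*v + A[i]*w for i in range(3))

# example: u = 0.2, v = 0.5 -> w = 0.3
print barycentric_point((0, 0, 0), (1, 0, 0), (0, 1, 0), 0.2, 0.5)   # (0.2, 0.5, 0.0)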

For a quadratic primitive the uv coordinates don’t describe the vert weights directly. The set only describes the relative position of the point in the primitive’s local uv space. But with a simple bilinear interpolation we can use these coordinates to calculate the point’s position (or interpolated uv values). We start by defining two new points along the u axis, and use those points to find the final value along the v axis.

Say we have a square defined by A, B, C and D (ordered so that A-B and C-D form opposite edges), and a uv set that defines a point in this square with the coordinates U = 0.2 and V = 0.5. We can figure out the two new points (P1 and P2) in the u direction by walking along the primitive’s edges: P1 = A + (B-A)*U, P2 = C + (D-C)*U. But this only gets us halfway. We now have the interpolated u value as a set of two points, and we can use those points as an edge to get the final position (PF) (or whatever data you see fit): PF = P1 + (P2-P1)*V. And that’s it…
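The same idea as a minimal Python sketch (corner values are made up); it mirrors the two-step lerp that the quad branch of the operator below performs on the vertex uvs:

def lerp(p, q, t):
    return tuple(p[i] + (q[i] - p[i]) * t for i in range(len(p)))

def bilinear(A, B, C, D, u, v):
    # interpolate along the two u edges first, then across them in v
    p1 = lerp(A, B, u)      # point on edge A-B
    p2 = lerp(C, D, u)      # point on edge C-D
    return lerp(p1, p2, v)  # final point between the two

print bilinear((0, 0), (1, 0), (0, 1), (1, 1), 0.2, 0.5)   # (0.2, 0.5)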

But this doesn’t explain how to get those uv coordinates. Houdini has some built-in functions for this that can be accessed with VEX, C++ and Python. For this example I will stick with Python, but you could also write your own. A good example of finding the Barycentric coordinates of a triangle can be found here. Finding the coordinates for a quadratic primitive is more difficult and depends on the way the primitive is described. A simple Google search on ‘parametric surface’ will get you halfway. This wiki article is also a good starting point.
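If you do want to roll your own for triangles, the usual dot-product formulation looks roughly like this (a sketch, not Houdini’s implementation):

def barycentric_uv(P, A, B, C):
    # returns (u, v) such that P ~= A + u*(B-A) + v*(C-A)
    def sub(p, q): return tuple(p[i] - q[i] for i in range(3))
    def dot(p, q): return sum(p[i] * q[i] for i in range(3))

    v0, v1, v2 = sub(B, A), sub(C, A), sub(P, A)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = float(d00 * d11 - d01 * d01)
    u = (d11 * d20 - d01 * d21) / denom
    v = (d00 * d21 - d01 * d20) / denom
    return u, v

print barycentric_uv((0.2, 0.5, 0.0), (0, 0, 0), (1, 0, 0), (0, 1, 0))   # (0.2, 0.5)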

Now for some code. I wrote a UV Interpolator in Python that correctly interpolates the uv attribute from a set of vertices. I already know the primitive to sample, so I don’t perform any unnecessary ray casts. The operator works with both quads and triangles. The function used to get the uv coordinates local to the primitive is hou.Prim.nearestToPosition(). In VEX these coordinates are generated when doing an intersect operation.

"""
This operator interpolates the uv coordinates from a primitive, indexed from the second input.
The points defined in the first input should have an integer attribute pointing to the primitive to sample from.
The primitive should have uv coordinates defined on vertices or points.
Only works with quadractic or triangle primitives!
For Quadratic primitives: bilinear interpolation is used to find the new uv coordinates.
For Triangle primitives: the Barycentric coordinates are used to find the new uv coordinates
For a sample reference: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/
"""


# This code is called when instances of this SOP cook.
node = hou.pwd()
geo_one = node.geometry()
geo_two = node.inputs()[1].geometry()

# Sample the parameters
uv_location = node.parm("uv_location").evalAsString()
prim_index_name = node.parm("prim_index_name").eval()
max_distance = node.parm("max_distance").eval()
prim_type = node.parm("prim_type").evalAsString()
group_name = "faulty_samples"

#Attributes-------------------------------------------------------------------------------

# First sample the uv attribute from the second input
uv_attrib = None
if uv_location == "points":
    uv_attrib = geo_two.findPointAttrib("uv")
else:
    uv_attrib = geo_two.findVertexAttrib("uv")
use_points = (uv_location == "points")
use_triangles = (prim_type == "triangle")

# Make sure the attribute was found
if uv_attrib is None:
    raise hou.NodeError("Can't find uv attribute")

# Now sample the primitive index attribute from the first input
prim_index_attrib = geo_one.findPointAttrib(prim_index_name)
if prim_index_attrib is None or prim_index_attrib.dataType() != hou.attribData.Int:
    raise hou.NodeError("Can't sample primitive index attribute of type Int: %s" % prim_index_name)

# Add a new point uv attrib if necessary
added_uv_attrib = geo_one.findPointAttrib("uv")
if added_uv_attrib is None:
    added_uv_attrib = geo_one.addAttrib(hou.attribType.Point, "uv", (0.0, 0.0, 0.0), True)

# Create a faulty point group
faulty_point_group = geo_one.findPointGroup(group_name)
if faulty_point_group is None:
    faulty_point_group = geo_one.createPointGroup(group_name)

#Methods--------------------------------------------------------------------------------

def getUVCoordinatesFromQuad(inCoordinates, inPrimitive):
    """
    From the incoming primitive we first create two new interpolated uv samples along the u axis.
    From these samples we create the final uv coordinate along the v axis, using bilinear interpolation.
    """

    verts = inPrimitive.vertices()
    vertuvs = []

    if len(verts) != 4:
        raise hou.NodeError("Primitive: %d does not have exactly 4 verts!" % inPrimitive.number())

    # get the uv values from our verts or points
    for vert in verts:
        if not use_points:
            vertuvs.append(vert.attribValue(uv_attrib))
        else:
            vertuvs.append(vert.point().attribValue(uv_attrib))

    # get our final weights in u and v
    rv = 1 - inCoordinates[1]
    ru = 1 - inCoordinates[0]
    pv = inCoordinates[1]
    pu = inCoordinates[0]

    # calculate two new uv samples in the u direction
    bottom_uv = ((vertuvs[1][0]*pu + vertuvs[0][0]*ru), (vertuvs[1][1]*pu + vertuvs[0][1]*ru))
    top_uv = ((vertuvs[2][0]*pu + vertuvs[3][0]*ru), (vertuvs[2][1]*pu + vertuvs[3][1]*ru))

    # interpolate over v to get our final value
    final_uv = ((top_uv[0]*pv + bottom_uv[0]*rv), (top_uv[1]*pv + bottom_uv[1]*rv), 0.0)

    return final_uv

def getUVCoordinatesFromTriangle(inCoordinates, inPrimitive):
    """
    Compute the new uv coordinates based on the incoming Barycentric coordinates.
    The first coordinate maps to the 3rd vert, the second coordinate to the 2nd vert.
    The weight of the first vert is computed by complementing the other two weights.
    """

    verts = inPrimitive.vertices()
    vertuvs = []

    if len(verts) != 3:
        raise hou.NodeError("Primitive: %d does not have exactly 3 verts!" % inPrimitive.number())

    # get the weights
    vert_weights = (1 - (inCoordinates[0] + inCoordinates[1]), inCoordinates[1], inCoordinates[0])

    # get the uv values from our verts or points
    for vert in verts:
        if not use_points:
            vertuvs.append(vert.attribValue(uv_attrib))
        else:
            vertuvs.append(vert.point().attribValue(uv_attrib))

    # compute the new uv values
    new_u = (vertuvs[0][0]*vert_weights[0]) + (vertuvs[1][0]*vert_weights[1]) + (vertuvs[2][0]*vert_weights[2])
    new_v = (vertuvs[0][1]*vert_weights[0]) + (vertuvs[1][1]*vert_weights[1]) + (vertuvs[2][1]*vert_weights[2])

    return (new_u, new_v, 0.0)

#Compute---------------------------------------------------------------------------------

"""
Iterate over every point that we need to interpolate the coordinates for.
"""


points = geo_one.points()
prims = geo_two.prims()

warning_string = ""
warning_occured = False

for point in points:
# Get the primitive
sample_prim = prims[point.attribValue(prim_index_attrib)]

# Make sure the primitive is a poly
if sample_prim.type() != hou.primType.Polygon:
raise hou.NodeError("Primitive: %d is not of type Polygon" % sample_prim.number())

# Get the parametric uv location of the point on the primitive
local_sample_data = sample_prim.nearestToPosition(point.position())
para_uv_coor = (local_sample_data[1], local_sample_data[0])

distance = local_sample_data[2]

# Add an entry if it's too far away from the primitive
if distance > max_distance:
warning_string += "Point: %d appears to be %f units away from indexed primitive: %d\n" % (point.number(), distance, sample_prim.number())
faulty_point_group.add(point)
warning_occured = True

# Sample the uv coordinates
new_uv_coord = None
if not use_triangles:
new_uv_coord = getUVCoordinatesFromQuad(para_uv_coor, sample_prim)
else:
new_uv_coord = getUVCoordinatesFromTriangle(para_uv_coor, sample_prim)

point.setAttribValue(added_uv_attrib, new_uv_coord)

if warning_occured:
raise hou.NodeWarning(warning_string)

C++ Midi Analyser

Been playing around with openFrameworks recently, trying to get a real-time 2d fluid solver up and running (more on that later). To make it more interactive I decided to attach some of my MIDI controllers, but couldn’t find a MIDI library in openFrameworks. Luckily the ofxMidi addon helped me out.

After downloading the addon and getting it compiled I noticed that the example was actually quite useful for testing my various MIDI devices! After tweaking the code a bit to view the selected port and the devices in an ordered list, the app proved quite handy…

It allows you to select any of the attached devices and display the messages that are received (value, channel, velocity etc.). When a device gets disconnected it simply disappears without crashing the app. The other way round works as well. Ever tried that in Cubase? Pretty solid code, thanks Chris O’Shea for making a neat addon. Much appreciated! Makes my life a lot easier.

I attached my modified code and the compiled x64 executable (Windows). If you have any MIDI devices hooked up, they should simply appear. Use the numeric keys (1-9) to select a port (device).

Note that in order to compile the modded code, you need to download the ofxMidi addon and link to the source code in there, together with the openFrameworks library. I tested mine with v0071, but the exe should be enough.

Midi Analyser

C# Zoom, Pan and Composite Exercise

I was working on a custom control that allows a user to freely pan and zoom in on an image.
It also supports a minimal amount of image compositing, in this case a custom grid.

The image is drawn on a custom panel using my own drawing code.
The actual image can be composited with a grid overlay that is placed on top of the main form.

The ZoomControl currently holds an Image and an Overlay object. The Overlay object is used to draw the grid and is tied to the main control using a separate event that is dispatched when the number of rows or columns changes. The rows and columns can be updated through the main interface. The Overlay object creates an image that is stored internally but is accessible to other components. This image is composited with the background image in the ZoomControl and drawn to screen. The drawing is based on the current zoom and pan levels.

Drawing could be optimized by storing the composited image in memory and updating it only when one of the images changes. Currently the image is composited every time the draw function is called.

Anyway, the project can be downloaded HERE.
It’s built in Visual Studio 2010 but should compile fine in older versions.

Blofeld

After days of coding for fun or at work I figured it was time to pick up my old synth again.

Back in the day (just a couple of years ago), I spent a lot of time producing music. I still enjoy my record collection and am passionate about anything electronic. Especially Drum and Bass, only not the type that gave it a bad name. Thank god there are people like D-Bridge, Spectrasoul and Alix Perez to make your life more comfortable.

Anyway. I produced this little track in the wee hours of my spare time; all strings and basses are made with the Waldorf Blofeld. The initial patterns were created in Renoise and finalized in Cubase. Long live the string and pad!

Download: Fomal – Sugar Rush

Opening Up Houdini

There are times when you want to get information from other applications, or your own, into Houdini. Although there are many useful operators for loading and exporting information, opening up a port offers additional flexibility.

A while back I was asked to create a protocol that would enable us to do exactly that in Houdini. This could be from Maya or any other application. In this post I won’t discuss the various serialization methods, but will try to open a door on how to start your own (simple) server when launching Houdini. You can use this server, which runs in the background (in a separate thread), to directly access information, execute tasks or sample data (in a custom node for example).

A possible use could be to access external data to reconstruct meshes without writing a single file to disk.

To achieve this we need to open a port on startup that can receive data independently from the session the user is working in, so it doesn’t lock up Houdini.

A couple of files are searched for and run when Houdini is launched, most noticeably 123.cmd, 456.cmd, 123.py and 456.py. 123.cmd and 123.py run when Houdini launches without loading a file. 456.py and 456.cmd run whenever the session is cleared or a file is loaded. Another option is to create a file called pythonrc.py. This file is always invoked when Houdini starts up.

For this example I created a file called 123.py and made sure the HOUDINI_SCRIPT_PATH was configured correctly to find that file on launch. If you’re unsure how to do that, simply create the file in your Houdini scripts folder.

For example: $HFS/houdini/scripts/456.py resolves to C:/Program Files/Side Effects Software/Houdini 12.0.581/houdini/scripts/456.py. There should already be a file called 123.cmd (the HScript equivalent) in there. Appending your own custom location to the HOUDINI_SCRIPT_PATH is advisable though, so your scripts remain accessible when you upgrade Houdini.
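As a sketch of what that could look like in a houdini.env file (the folder path below is made up; I’m assuming the houdini.env mechanism here, where the ‘&’ character stands for the default value of a path variable):

# houdini.env (in your Houdini preferences folder)
# '&' keeps the default search path, the custom folder is added in front of it
HOUDINI_SCRIPT_PATH = "C:/houdini_scripts;&"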

Once the file has been created we want to start the server when we launch a new Houdini session. I’ve given the server a fixed port to connect to (2000), but it is possible to assign a different one for every session running. In this case it means that, from wherever you are, you can connect to exactly one Houdini session by sending data over port 2000. Most likely you want to connect to ‘localhost’, which is the computer you are using.

To separate the object from the initialization steps, a separate HPythonServer class is created and instantiated in the 123.py file. There can be only one port open and one instance of this class running, making it a singleton. And although Python’s take on singletons is a bit peculiar, it is preferred in this case.

# import our server module
import hdaemon

# start the python server
thread_name = hdaemon.startHPythonServer('python_server')
print "started python server in thread: %s" % thread_name

These lines of code are executed when Houdini launches. When the Houdini instance exits, the port is closed and the thread stops. But what is actually started, and how? The following piece of code shows what happens when the startHPythonServer function is called.

The module ‘hdaemon’ holds the HPythonServer object and two methods for starting and stopping it. The start method checks for a thread already running with the name ‘python_server’. If a thread with that name is still active, no new port will be opened and no new instance will be created. If the server is not running, a new HPythonServer is created and started in a separate thread.

The socket module is used as the low level network interface.

"""
@package hdaemon
This package defines a Python Server that can be used to communicate with Maya
@author Coen Klosters
"""


import threading
import socket
import uuid
import hou

class HPythonServer(threading.Thread):
    """
    Simple Python Server running in a seperate thread within houdini, listerning to commands
    Connect to Houdini using the HPORT specified.
    """

   
    CLIENT_IDENTIFIER = "HPythonServer"
    HOUDINIPORT = 2000
   
    def __init__(self, inClientIdentifier=None ):
        """
        Constructor
        @param self The object pointer
        @param inClientIdentifier A string that allows the overriding of the client identifier.
        """

       
        threading.Thread.__init__(self)     # initialize the thread
       
        if inClientIdentifier is not None:
            self.CLIENT_IDENTIFIER = inClientIdentifier
       
        self.__thread_running = True        # when set to False, stops the listen loop
        self.__guid = uuid.uuid4()          # unique identifier name
        self.__associated_socket = None     # Initialize the socket
           
        # Start the thread
        self.setDaemon(True)
        self.start()

           
    def stop(self):
        """
        Close the socket and stop the thread
        @param self The object pointer
        """

        self.__thread_running = False
        self.__associated_socket.close()

   
    def setup(self,inConnection):
        """
        Listens to the connection given to receive external data.
        @param self The object pointer
        @param inConnection The incoming server connection
        """

       
        operation = True
        conn, addr = inConnection.accept()
        print 'Connected to: ', addr
        while self.__thread_running:
            data = conn.recv(4096)          #make sure it's a power of two!
            if data == "print":
                print "switching to print mode"
                operation = True
                continue
            if data == "evaluate":
                print "switching to evaluation mode"
                operation = False
                continue
           
            if operation:               # if in print mode, print whatever received
                print data
            else:
                hou.session.dataobject.data = data  # otherwise store the data in the houdini session currently active
               
            if not data: break
           
        print "connection broken with: %s" % str(addr)
       

    def run(self):
        """    
        This is the override of the threading.Thread.run() function which is started on the AssetDaemonPythonClient's thread.
        As long as the __thread_running variable is True we will stay in this function listening for data from the socket.
        @param self The object pointer
        """

       
        HOST = ''                                  # Bind to all interfaces
        PORT = self.__class__.HOUDINIPORT          # Port to connect to Houdini with
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind((HOST, PORT))
            s.listen(1)
        except socket.error:
            s.close()
            s = None
            return -1
       
        while self.__thread_running:
            self.setup(s)


def startHPythonServer(inDaemonThreadName):
    """
    Helper function for setting up and starting a HPythonServer
    @param inDaemonThreadName The name to use for the thread containing the client, this name is used to ensure the client works as a singleton
    @return object instance
    """

    mcd_thread = None
    for thread in threading.enumerate():
        if thread.name == inDaemonThreadName:
            mcd_thread = thread
   
    if mcd_thread is None:
        mcd_thread = HPythonServer()
        mcd_thread.name = inDaemonThreadName
       
    return mcd_thread

     
def stopHPythonServer(inDaemonThreadName):
    """
    Helper function for stopping an HPythonServer
    @param inDaemonThreadName The name of the thread that the client should be running in, by using this name the function will find the thread and stop it.
    """

    mcd_thread = None
    for thread in threading.enumerate():
        if thread.name == inDaemonThreadName:
            mcd_thread = thread
           
    if mcd_thread is not None:
        mcd_thread.stop()

When launching Houdini, a separate thread running alongside Houdini should now be waiting for packets to arrive. To connect to Houdini, simply specify the server and port you want to connect to. Messages that are sent are printed to screen in Houdini.

To connect to Houdini, do the following.

import socket

HOST = 'localhost'                          # The remote host
PORT = 2000                             # The same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))                 # Connect
s.send("Hello Houdini, I am your father")      # Send a message
s.close()                       # Close the connection

Of course, sending simple strings doesn’t get you far. In order to actually do something useful, the data needs to be serialized / deserialized. But that’s something I might address some other time. A handy tip is to look into the hou.session module. This module can be used to store received data and is accessible throughout Houdini! Think of parsing mesh data that can be accessed in a Python SOP. Very useful!
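The server above stashes incoming data on hou.session.dataobject.data, but that data object isn’t defined anywhere in this post. Here is a minimal sketch of what it could look like, and of how a Python SOP might read it back (the names are my own, purely illustrative):

# sketch: a tiny container stored on hou.session (this could live in 123.py as well)
import hou

class DataObject(object):
    def __init__(self):
        self.data = None

if not hasattr(hou.session, "dataobject"):
    hou.session.dataobject = DataObject()

# inside a Python SOP, the received payload could then be picked up like this:
# node = hou.pwd()
# geo = node.geometry()
# payload = hou.session.dataobject.data
# if payload:
#     print "received %d bytes" % len(payload)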

Pyro 2 Work Flow Examples

I totally forgot I had these lying around… Thought I’d share them.
Simple examples of how to use some of the Pyro 2 functionality, such as inheriting object motion, object collisions, clustering etc.
Not very visual, but could be useful. Small post-it notes explain the various subsets present in the network.

Note that the examples only work with Houdini 12+.

Get the package HERE

Automated Edge Flip Operator

Last week I ran into a long-lasting Houdini frustration: the lack of proper edge support.
We get control over various primitive types, points and verts, but not edges.

This becomes frustrating when dealing with complex primitive sorting and optimization techniques. For that purpose I decided to create my own Edge object, a rather simple wrapper that lets me create edges from points, verts and primitives inside Houdini. An edge can be constructed from two points, a primitive or two verts. Functionality includes comparing angles, getting the length, comparing against other existing edges based on point presence (regardless of order), etc.
Just for convenience 🙂
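The Edge wrapper itself isn’t included below (the operator snippet just imports it from Triangulate.hedge), so here is a minimal sketch of roughly what it looks like, reduced to the members the operator actually uses. Treat it as a reconstruction rather than the original class:

class Edge(object):
    """Simple two-point edge wrapper; equality ignores point order."""

    def __init__(self, inPointOne, inPointTwo):
        self.pointOne = inPointOne
        self.pointTwo = inPointTwo
        self.points = (inPointOne, inPointTwo)
        self.pointNumberOne = inPointOne.number()
        self.pointNumberTwo = inPointTwo.number()
        self.edgePrims = []                 # numbers of the prims that share this edge
        self.edgeLength = (inPointTwo.position() - inPointOne.position()).length()

    def addPrim(self, inPrimNumber):
        self.edgePrims.append(inPrimNumber)

    def __eq__(self, other):
        # same edge if it connects the same two points, in any order
        return set((self.pointNumberOne, self.pointNumberTwo)) == \
               set((other.pointNumberOne, other.pointNumberTwo))

    def __ne__(self, other):
        return not self.__eq__(other)

    def __str__(self):
        return "Edge p%d-p%d" % (self.pointNumberOne, self.pointNumberTwo)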

One of the first things I needed it for was an automatic edge flip operator. The SOP samples primitive data and flips edges based on the optimal angle of neighbouring prims, similar to the Delaunay edge orientation principle. Flipping an edge is only possible when the union of the two incident faces forms a convex quadrilateral. Flip constraints are based on the curvature of a reference mesh, optimizing the triangle edge orientation on harsh edges or cutoff points.

To determine if the edge needs to be flipped, both edge lengths are computed, together with the maximum shared angle. If the angle (and thus the edge length) is higher for the incident face, the edge can be flipped. Otherwise the angle is compared based on the maximum difference between the point normals and the prim-to-point normals. If the difference comes close to 0, the edge follows the curvature. If it is higher than the specified threshold, the edge should be flipped (conforming to the proposed mesh orientation and curvature).

One of the implementation issues: how do you keep track of the mesh when you flip an edge? Flipping an edge most likely makes the next edge (sharing one of the previous edge’s primitives) invalid, or a possible candidate. You could solve this by recursively updating the mesh as you flip an edge, keeping track of the deforming state and marking areas that have already been computed. That would be rather slow and tedious to incorporate. I ended up marking the edge that should be flipped; any new edge being sampled is checked against the points of already-flipped edges, and if one of its points matches a flipped edge, it isn’t considered. Running the operator a couple of times will ensure most edges align, most likely after two computational rounds. I also added a “dirty” mode that doesn’t take previously flipped edges into account (works surprisingly well).

The actual operator doesn’t flip the edges; Houdini’s Edge Flip SOP does that for me. I simply count the total number of edges to be flipped and save the edge points as detail string attributes.
The count is used to specify the number of iterations a loop should make, and inside the loop I simply sample each saved edge. It is a limitation I had to work with in Houdini.
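As an illustration of the downstream side, here is a small hypothetical Python snippet that gathers those detail attributes back into a single edge pattern (the node path and attribute names are placeholders, not something from the actual setup):

# hypothetical downstream helper: gather the exported edges into one string
node = hou.node("/obj/geo1/detect_edge_flips")      # placeholder path to the detector SOP
geo = node.geometry()

count = geo.attribValue("flip_count")               # assumed counter detail attribute
edges = [geo.attribValue("flip_edge_%d" % i) for i in range(count)]

# e.g. "p12-14 p7-9 ..." - usable as an edge pattern
print " ".join(edges)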

A handy tip on speed and flexibility: pre-sample your data set into clusters. Clusters are formed based on any Vector3 attribute and are re-weighted over X iterations, converging on a near-optimal distribution as they are computed. Creating clusters (based on position for example) speeds up computing the edges to be flipped, because the scope is limited to a small island of primitives instead of the entire mesh. Similar to using a kd-tree, but simpler. Read up on the algorithm here and here.
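That clustering step is essentially a k-means pass over a Vector3 attribute. Here is a minimal sketch of the idea in plain Python (made-up random data, no Houdini calls), not the actual clustering code I use:

import random

def kmeans(points, k, iterations=10):
    """Cluster (x, y, z) tuples into k groups by iteratively re-weighting centroids."""
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to the nearest centroid
            dists = [sum((p[i] - c[i]) ** 2 for i in range(3)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        for ci, members in enumerate(clusters):
            if members:
                centroids[ci] = tuple(sum(m[i] for m in members) / float(len(members)) for i in range(3))
    return centroids, clusters

points = [(random.random(), random.random(), random.random()) for _ in range(200)]
centroids, clusters = kmeans(points, 4)
print [len(c) for c in clusters]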

Now for some code. The Edge class itself should be easy to recreate based on what was described above; the operator can be seen in the following snippet:

from Triangulate.hedge import Edge


'''
Globals---------------------------------------------------------------------------------------------------
'''


# Helps me debug my code
debug = hou.pwd().evalParm("debug_mode")
export_name = hou.pwd().evalParm("edge_name")
flip_count_name = hou.pwd().evalParm("count_name")
export_groups = hou.pwd().evalParm("export_groups")
compare_angle = hou.pwd().evalParm("compare_angle")
angle_attrib_name = hou.pwd().evalParm("angle_attribute")
constrain_attrib_name = hou.pwd().evalParm("constrain_attribute")
constrain_angle = hou.pwd().evalParm("constrain_angle")
min_angle = hou.pwd().evalParm("min_angle")
max_angle = hou.pwd().evalParm("max_angle")
constrain_cut = hou.pwd().evalParm("constrain_cutoff")
safe_mode = hou.pwd().evalParm("safe_mode")

# This code is called when instances of this SOP cook.
node = hou.pwd()
geo = node.geometry()
prims = geo.prims()
points = geo.points()
edge_list = []
shared_edges = []
flipped_edges = []
flipped_points = []


'''
Necessary Attributes---------------------------------------------------------------------------------------------------
'''


# Find the count attribute or create it otherwise
count_attrib = geo.findGlobalAttrib(flip_count_name)
if count_attrib is None:
    count_attrib = geo.addAttrib(hou.attribType.Global, flip_count_name, 0)
else:
    geo.setGlobalAttribValue(count_attrib, 0)

angle_attrib = geo.findPointAttrib(angle_attrib_name)
if angle_attrib is None and compare_angle:
    raise hou.NodeError("Can't find angle comparison point attribute: %s" % angle_attrib_name)

constrain_attrib = geo.findPointAttrib(constrain_attrib_name)
if constrain_attrib is None and constrain_angle and compare_angle:
    raise hou.NodeError("Can't find constrain attribute: %s" % constrain_attrib_name)


'''
Methods---------------------------------------------------------------------------------------------------
'''


def Console(inString):
    """
    Helps me to debug my code
    """

    if debug:
        print inString


def GetPrimPoints(inPrim):
    """
    Gets the points out of a primitive
    """

    point_list = []
    for vert in inPrim.vertices():
        point_list.append(vert.point())
    return point_list


def GetHighestAngle(inNormals):
    """
    Compares vectors to get the highest angle
    """

    max_angle = 0
    for n in xrange(len(inNormals)):
        n1 = inNormals[n]
        for ni in range(n+1, len(inNormals)):
            n2 = inNormals[ni]
            angle = n1.angleTo(n2)
            if angle > max_angle:
                max_angle = angle

    return max_angle


def GetHighestPrimAngle(inNormals, inPrimNormal):
    """
    Compares the normals against the primitive
    """

   
    max_angle = 0
    for normal in inNormals:
        angle = normal.angleTo(inPrimNormal)
        if angle > max_angle:
            max_angle = angle

    return max_angle
   

def FindUnsharedPoints(inPrimOne, inPrimTwo, inEdge):
    """
    Returns a list of the points that aren't shared between the primitives
    """


    unshared_points = []
    prim_one_points = GetPrimPoints(inPrimOne)
    prim_two_points = GetPrimPoints(inPrimTwo)

    # compare first primitive
    for point in prim_one_points:
        if point in inEdge.points:
            continue
        else:
            unshared_points.append(point)

    # compare second primitive
    for point in prim_two_points:
        if point in inEdge.points:
            continue
        else:
            unshared_points.append(point)

    return unshared_points


def SampleEdges(inPrim):
    """
    Computes the primitive edges
    New edges are appended to the edge_list
    """


    Console("sampling prim: %d" % inPrim.number())

    # Fetch the points of the primitive
    point_list = GetPrimPoints(inPrim)

    # Now calculate the edge
    amp = len(point_list)
    for i in range(amp):
        if i < (amp-1):
            point_one = point_list[i]
            point_two = point_list[i+1]
        else:
            point_one = point_list[-1:][0]
            point_two = point_list[0]

        edge = Edge(point_one, point_two)

        # if the edge is already in the edge list, it shares a primitive
        # i know this is dirty but i'd rather have this than do 2 lookups :(
        try:
            existing_edge = edge_list[edge_list.index(edge)]
            Console("shared edge: %s" % edge)
            existing_edge.addPrim(inPrim.number())
            shared_edges.append(existing_edge)
        except ValueError:
            Console("new edge: %s" % edge)
            edge.addPrim(inPrim.number())
            edge_list.append(edge)


def CompareAngle(inPrimOne, inPrimTwo, inEdge, inUnsharedPoints):
    """
    Compares the angle of the point normal angle against primitive normal
    Returns true if the edge needs to be inverted...
    The comparison is based on the maximum difference between the prim point normals,
    and the prim point normals compared with the prim normal. If the difference is rather
    large on both sides, consider the edge to be flipped

    An extra constrain attribute can be specified. Works best with curvature.
    If the curvature of the unshared points is higher then the combined edge points,
    It is allowed to flip, together with the angle constrain
    """

   
    prim_one_points = GetPrimPoints(inPrimOne)
    prim_two_points = GetPrimPoints(inPrimTwo)

    # get the point normals
    prim_one_normals = []
    prim_two_normals = []
    for point in prim_one_points:
        prim_one_normals.append(hou.Vector3(point.attribValue(angle_attrib)))
    for point in prim_two_points:
        prim_two_normals.append(hou.Vector3(point.attribValue(angle_attrib)))    

    # figure out the angles
    max_angle_one = GetHighestAngle(prim_one_normals)                                  # the highest measured angle between the point normals
    max_angle_two = GetHighestAngle(prim_two_normals)                                
    max_prim_angle_one = GetHighestPrimAngle(prim_one_normals, inPrimOne.normal())     # the highest measured angle between points and prim normal
    max_prim_angle_two = GetHighestPrimAngle(prim_two_normals, inPrimTwo.normal())

    angle_diff_one = abs(max_angle_one - max_prim_angle_one)                           # difference between prim 1 normals      
    angle_diff_two = abs(max_angle_two - max_prim_angle_two)                           # difference between prim 2 normals

    max_value = max(angle_diff_one, angle_diff_two)                                    # get the highest of the two
    min_value = min(angle_diff_one, angle_diff_two)                                    # get the lowest of the two
   
    Console("normal angle one: %s" % max_angle_one)                                    # some debug stuff
    Console("normal angle two: %s" % max_angle_two)
    Console("prim   angle one: %s" % max_prim_angle_one)
    Console("prim   angle two: %s" % max_prim_angle_two)
    Console("angle diff one: %s" % angle_diff_one)
    Console("angle diff two: %s" % angle_diff_two)

    # figure out the additional attribute
    if constrain_angle:
        com_original = inEdge.pointOne.attribValue(constrain_attrib) + inEdge.pointTwo.attribValue(constrain_attrib)            # original edge point attribute values
        com_diff = inUnsharedPoints[0].attribValue(constrain_attrib) + inUnsharedPoints[1].attribValue(constrain_attrib)        # possible new edge attribute values
        constrain = com_diff / 2.0                                                                                              # average to be compared against attribute cutoff

        if com_diff > com_original and constrain > constrain_cut and max_value >= max_angle and min_value >= min_angle:
            return True
    else:
        if max_value >= max_angle and min_value >= min_angle:
            return True

    return False


def FlipEdge(inEdge):
    """
    Checks if the edge needs to be flipped or not
    """


    if (safe_mode) and (inEdge.pointOne in flipped_points or inEdge.pointTwo in flipped_points):
        Console("ALREADY IN FLIPPED EDGES: %s" % inEdge)
        return -1

    prim_one = prims[inEdge.edgePrims[0]]
    prim_two = prims[inEdge.edgePrims[1]]
   
    # for both primitives find the point that isn't shared and figure out if it should be flipped
    unshared_points = FindUnsharedPoints(prim_one, prim_two, inEdge)
    if len(unshared_points) != 2:
        Console("%s has more or less than 2 unshared points" % inEdge)
        return -1

    Console("unshared points: %s, %s" % (unshared_points[0].number(), unshared_points[1].number()))
   
    # check edge length
    edge_two = Edge(unshared_points[0], unshared_points[1])
    edgelength_one = inEdge.edgeLength
    edgelength_two = edge_two.edgeLength
   
    if edgelength_two < edgelength_one:
        Console("FLIPPING EDGE!")
        flipped_edges.append(inEdge)
        if safe_mode:
            flipped_points.append(inEdge.pointOne)
            flipped_points.append(inEdge.pointTwo)
        return 1

    # check angle
    elif (compare_angle) and (CompareAngle(prim_one, prim_two, inEdge, unshared_points)):
        Console("FLIPPING EDGE!")
        flipped_edges.append(inEdge)
        if safe_mode:
            flipped_points.append(inEdge.pointOne)
            flipped_points.append(inEdge.pointTwo)
        return 1
           

def Finalize():
    """
    Creates the detail attributes and groups (if chosen)
    The counter specifies the number of edges to be flipped
    """

    count = 0
    for edge in flipped_edges:

        # set or create the detail attribute
        edge_attrib_name = "%s_%d" % (export_name, count)
        edge_attrib = geo.findGlobalAttrib(edge_attrib_name)
        if edge_attrib is None:
            edge_attrib = geo.addAttrib(hou.attribType.Global, edge_attrib_name, "none")
        geo.setGlobalAttribValue(edge_attrib,"p%d-%d" % (edge.pointNumberOne, edge.pointNumberTwo))
       
        # see if we export the group
        if export_groups:
            group = geo.createPointGroup("%s%d_%d" % (export_name, edge.pointNumberOne, edge.pointNumberTwo))
            for point in edge.points:
                group.add(point)

        count += 1

    # set the global attribute value
    geo.setGlobalAttribValue(count_attrib, count)
   

'''
Compute---------------------------------------------------------------------------------------------------
'''


# For every prim construct 3 edges
# See if the edges have multiple primitives shared
# If multiple primitives are shared, do the comparison

# Calculate shared edges
for prim in prims:
    SampleEdges(prim)
    Console("\n")

# See if they need to be flipped
for edge in shared_edges:
    Console("sampling edge: %s" % edge)
    FlipEdge(edge)
    Console("\n")

# Set up the edges to be flipped
Finalize()

Primitive Normal Orientation

Ever wondered what “Orient Polygons” does? It tells me that it winds all the polygons in the same ‘direction’. What direction is that?
And how does it know (determine) which polygons to reverse or flip? I find that description rather useless, and I needed more control over my prim normals in Houdini (defining direction using a set of shared points and their surface orientation) for a project I’m working on.

Based on those requirements I made a simple orient-prim-normal operator that uses a vector point attribute to test the primitive orientation. If the op determines the prim normal needs to be flipped, it is added to a primitive group that can be reversed using the Reverse SOP.

import math

# This code is called when instances of this SOP cook.
node = hou.pwd()
geo = node.geometry()


# Evaluate the parameters
prim_group = hou.pwd().evalParm("primitive_group")          #primitive group to sample from
normal_name = hou.pwd().evalParm("sample_attribute")        #vector attribute used for averaging and comparison
export_group = hou.pwd().evalParm("export_group")           # primitive group that wrong-facing prims get exported to
angle_threshold = hou.pwd().evalParm("angle_offset")     #maximum allowed angle

# Finds the primitive group and returns its prims
def findPrimGroup(inName):

    for group in geo.primGroups():
        if group.name() == inName:
            return group.prims()

    return None


'''
Sample Information------------------------------------------------------------------------------------------
'''



# Fetch the primitives and sample input information
if prim_group:
    prims = findPrimGroup(prim_group)
    if prims is None:
        raise hou.NodeError("Can't find primitive group: %s" % prim_group)

else:
    prims = geo.prims()

if len(prims) < 1:
    raise hou.NodeError("Not enough primitives provided! Need at least one..")

if len(export_group) < 1:
    raise hou.NodeError("No export group found")

if geo.findPrimGroup(export_group) is not None:
    raise hou.NodeError("Primitive group '%s' already exists" % export_group)

export_group = geo.createPrimGroup(export_group)

normal_attrib = geo.findPointAttrib(normal_name)
if normal_attrib is None:
    raise hou.NodeError("Lookup attribute '%s' does not exist" % normal_name)


'''
Compute---------------------------------------------------------------------------------------------------
'''


# Now sample the normals
for prim in prims:
   
    # get the prim verts
    verts = prim.vertices()

    # from every vert get the associated point normals
    # note that every point normal is normalized
    prim_point_normals = []
    for v in verts:
        vector = hou.Vector3(v.point().attribValue(normal_attrib))
        vector = vector.normalized()        # normalized() returns a copy, it doesn't modify in place
        prim_point_normals.append(vector)

    # accumulate point normals
    accum = hou.Vector3(0,0,0)
    for n in prim_point_normals:
        for i in range(3):
            accum[i]+=n[i]

    # get the average
    count = len(prim_point_normals)
    for i in range(3):
        accum[i] /= count

    # get primitive normal
    prim_normal = hou.Vector3(prim.normal())
    inv_prim_normal = -1*prim_normal

    # get the angle
    current = accum.angleTo(prim_normal)
    inversed = accum.angleTo(inv_prim_normal)

    if inversed < (current+angle_threshold):
        export_group.add(prim)

Houdini Pyro2 Dev Tests

While working on Pyro2 at Side Effects, some time was spend on simming and rendering dev tests. After incorporating a new feature (usability or tech related), a test drive was necessary.

Houdini 12’s demo video already shows what is possible using the new set of tools, but I thought it might be interesting to show what was rendered while trying to improve them.

All small tests were rendered using Mantra and simmed using the tools being worked on at that time. The Pyro 1 shader was used, as version 2 wasn’t available yet.

Note that with Velocity Blur left off, some flames might appear a bit ‘blobsy’. The streaking effect works better with velocity blur turned on.

Render settings:

  • Micro Polygon Rendering
  • Volume Filter Width, 1 or 0.75
  • 1 or 2k Transparent Depth Map shadows
  • Max of 3 spot lights, classic 3-way lighting set-up
  • Max 15 minutes a frame, 720×480
  • In case of Clustering, Delayed Load archives were used (more memory efficient)
  • 3-6 pixel samples
  • Gaussian 2×2 Pixel Filter
  • Temperature was used as a transparency mask for Heat
  • No Noise
  • No Velocity Blur

Burning Sphere

Burning Grid

Heavy Fire

Moving Fire Ball

Clustered Tables

Small Smoke Trail

Explosion Playblast (viewport)