Integrated Slide Presentation for UDK

One of the greatest things about pursuing game-centric research in an academic environment is getting to present rich interactive audio-visual work to a conference audience that’s been beaten down by a long succession of static PowerPoint, Keynote and web n.0 slide shows. One of the worst things is bouncing in and out of your game engine of choice to display the text and graphic elements that you need to get your points across.

The other day I had resigned myself to preparing yet another set of slides when something inside me drew a red line in the sand and said no more. It’s time.

As most of my current research takes place within the UDK, I blocked off an afternoon to see if I could hack a UDK-native slide presentation system into place. After begrudgingly giving up the dream of an Awesomium-like in-engine browser solution, I went lowbrow and decided to hack the UDK HUD instead. Lo and behold, it actually worked.

What I ended up with is a simple but fully functional slide-presentation system that pulls slide formatting and text from a custom .ini, and images and custom fonts from UDK packages. It features per-slide background colors, alphas, text locations, image sizing, and LERP’d X/Y-coordinate and width/height transitions, and it can all be controlled from an Xbox controller.
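The LERP’d transitions are just linear interpolation between the outgoing and incoming layout’s values over the transition time. Here’s a minimal Python sketch of the idea (the field names mirror the Layout struct, but none of this is the actual OSCHud code):

```python
def lerp(a, b, t):
    """Linearly interpolate from a to b; t runs 0.0 -> 1.0."""
    return a + (b - a) * t

def lerp_layout(prev, next_, t):
    """Interpolate every animatable layout field at once.
    Layouts here are dicts of normalized (0.0-1.0) screen values."""
    return {key: lerp(prev[key], next_[key], t) for key in prev}

# Halfway through a transition from a left-side block to full screen:
a = {"x": 0.0, "y": 0.01, "width": 0.4, "height": 1.0}
b = {"x": 0.0, "y": 0.0,  "width": 1.1, "height": 1.1}
print(lerp_layout(a, b, 0.5))
```

Step `t` from 0.0 to 1.0 over the layout’s transitionTime each frame and the slide glides into place.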

Here’s a quick video example:

It’s all checked into the dev branch of UDKOSC up on GitHub. All the good bits can be found in the new OSCHud.uc class and its corresponding DefaultOSCPresentation.ini.

Without getting too deep into the description (the code is pretty easy to read), the main pieces involve setting up arrays of structs for Slides and Layouts that read from the .ini in the custom HUD class, defining data for each “Slide” and “Layout” in the .ini, then letting it all flow.

Here are the structs in the HUD Class:

struct Layout
{
    var float x;
    var float y;
    var float width;
    var float height; // 0.0 - 1.0 * current resolution
    var int colorR;
    var int colorG;
    var int colorB;
    var int colorA;
    var int textColorR;
    var int textColorG;
    var int textColorB;
    var int textColorA;
    var float transitionTime;
    var int footer; // default -1 means no footer
};

struct Slide
{
    var int layout;
    var String title;
    var float titleScale;
    var String text;
    var float textX;
    var float textY;
    var float textScale;
    var int imgId; // default -1 means no img
    var float imgX;
    var float imgY;
    var float imgScale;
};

And the Arrays for each:

var config array<Slide> Slides;
var config array<Layout> Layouts;

Then define data for each Slide and Layout in the .ini:


Slides=(layout=4,title="Musical Sonification of Avatar Physiologies, Virtual Flight and Gesture",titleScale=1.0,text="\n \n \nRobert Hamilton \n \nCenter for Computer Research in Music and Acoustics \n \nStanford University \n",textScale=0.8,textX=0.05,textY=0.2, imgId=-1)
Slides=(layout=0,title="Slide 1 \n",titleScale=1.0, text="This is the 2nd slide", textScale=1.0,textX=0.11,textY=0.11, imgId=-1)
Slides=(layout=2,title="Slide 2 \n",titleScale=1.0, text="This is the 3rd slide", textScale=0.2,textX=0.1,textY=0.8,imgId=0,imgX=0.01,imgY=0.1,imgScale=0.4)
Slides=(layout=4,title="Slide 3 \n",titleScale=1.0, text="This is the 4th slide", textScale=0.3,textX=0.01,textY=0.01, imgId=-1)
Slides=(layout=5,title="Slide 4 \n",titleScale=1.0, text="This is the 5th slide \n Chr(8226) • \feff0040 test second line",textScale=1.0,textX=0.1,textY=0.5,imgId=-1)


; 0
; Left side, transparent white block
Layouts=(x=0.0, y=0.01, width=0.4, height=1.0,textColorR=0,textColorG=0,textColorB=0,textColorA=255, colorR=255, colorG=255,colorB=255,colorA=80, transitionTime=0.03, footer=-1)

; 1
; Full Screen, White bg, Black text, less transparent
Layouts=(x=0.0, y=0.0, width=1.1, height=1.1,textColorR=0,textColorG=0,textColorB=0,textColorA=255,colorR=255,colorG=255,colorB=255,colorA=190, transitionTime=0.1, footer=0)

; 2
; Full Screen, Black bg, White text
Layouts=(x=0.0, y=0.0, width=1.1, height=1.1,textColorR=255,textColorG=255,textColorB=255,textColorA=255,colorR=0,colorG=0,colorB=0,colorA=255, transitionTime=0.01, footer=0)

; 3
; Small black block
Layouts=(x=0.4, y=0.4, width=0.3, height=0.7,textColorR=255,textColorG=255,textColorB=255,textColorA=255,colorR=0,colorG=0,colorB=0,colorA=255, transitionTime=0.01, footer=0)

; 4
; Full Screen, White bg, Black text, Opaque
Layouts=(x=0.0, y=0.0, width=1.1, height=1.1,textColorR=91,textColorG=91,textColorB=91,textColorA=255,colorR=255,colorG=255,colorB=255,colorA=255, transitionTime=0.1, footer=0)

; 5
; Full Screen, Black bg, White text
Layouts=(x=0.0, y=0.0, width=1.1, height=1.1,textColorR=255,textColorG=255,textColorB=255,textColorA=255,colorR=0,colorG=0,colorB=0,colorA=200, transitionTime=1.0, footer=0)
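As the comment in the Layout struct notes, the x/y/width/height values are fractions of the current resolution (the 1.1 widths and heights presumably overscan a bit to guarantee full-bleed coverage). A quick Python sketch of how such normalized values resolve to pixels, purely illustrative:

```python
def layout_to_pixels(layout, res_x, res_y):
    """Convert a layout's normalized (0.0-1.0) values to pixel
    coordinates for the current resolution, roughly the way a
    HUD draw call would consume them. Illustrative only."""
    return (int(layout["x"] * res_x),
            int(layout["y"] * res_y),
            int(layout["width"] * res_x),
            int(layout["height"] * res_y))

# Layout 0 (left-side transparent block) at 1280x720:
print(layout_to_pixels({"x": 0.0, "y": 0.01, "width": 0.4, "height": 1.0},
                       1280, 720))
# -> (0, 7, 512, 720)
```

Keeping everything normalized in the .ini means the same slide deck works at any projector resolution.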

Dig deeper into the class to see exactly how it works, but it’s pretty fun and should make presenting our UDK work at conferences or lectures much more enjoyable.


Sonifying UDK Skeletal Mesh Components with UDKOSC and SuperCollider

As part of our ongoing work on ECHO::Canyon, I started exploring methods to use a UDK actor’s Skeletal Mesh, or more specifically the bones that make up that Skeletal Mesh, to drive sound and music. Think of it along the lines of digital puppetry, but with each movement or gesture mapped to a sound-generating process using UDKOSC. Using an older-school wired Xbox controller, UDKOSC and our custom ‘Valkordia’ Skeletal Mesh from ECHO::Canyon, it wasn’t too difficult to rig up a pretty straightforward example of sonified musical bone data.

Here’s a demo capture of two simple sonifications of the right and left wing-tip bones of the Valkordia’s wings. The first shows simple sine waves mapped to each wing’s Z coordinate, with a slight offset between their base frequencies to make the sines beat just a bit. The second shows the Valkordia’s “call” sound modulated slightly by the distance measured between the two wing tips.
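Both mappings boil down to a couple of lines of arithmetic on the incoming bone coordinates. A Python sketch of the idea (the Z range, frequency range and detune amount here are made-up stand-ins, not the actual UDKOSC scaling):

```python
import math

def z_to_freq(z, z_min=0.0, z_max=4096.0, f_lo=220.0, f_hi=880.0, detune=0.0):
    """Map a bone's world Z coordinate to a sine frequency,
    clamped to the [z_min, z_max] range. Values are illustrative."""
    t = max(0.0, min(1.0, (z - z_min) / (z_max - z_min)))
    return f_lo + t * (f_hi - f_lo) + detune

def wingtip_distance(left, right):
    """Euclidean distance between the two wing-tip bone positions."""
    return math.dist(left, right)

# Right wing a few Hz above the left so the two sines beat slightly:
left_f  = z_to_freq(2048.0)              # 550.0
right_f = z_to_freq(2048.0, detune=3.0)  # 553.0
print(left_f, right_f)
print(wingtip_distance((0, -100, 2048), (0, 100, 2048)))  # 200.0
```

The distance value can then scale whatever modulation parameter you like on the “call” sample.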

Fun to play with and portends many things to come.

To explain what’s going on, this tutorial will step through the process of configuring the Skeletal Mesh controllers, mapping 2-dimensional joystick data to drive our Inverse Kinematics controller, identifying individual bones in the Skeletal Mesh in UnrealScript, and finally the sonification steps that use UDKOSC and SuperCollider to make some sweet, sweet music.

While most of the concepts and examples I’ll cover here aren’t anything too crazy for experienced UnrealScript jockeys, they’ll require a pretty comprehensive understanding of how UnrealScript classes, custom animations, SkeletalMeshes and control systems all fit together. For those reasons, I’d say this tutorial is targeted at medium- to high-level UDK users. In that light, I’ll refer to a number of processes and objects without actually describing each and every one in detail, as that would expand the scope of this post well beyond its already large size. I will, however, try to hyperlink UDK forum posts and UDN example pages into the text where appropriate.

Also, this may get long, so to aid digestion I’ll break it down into the following chunks:

  1. Understanding the Skeletal Mesh
  2. Our Friend: the SkelControl_CCD_IK
  3. Conditioning our Input data
  4. Bone tracking in UDKOSC
  5. Sonification with SuperCollider
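As a small taste of step 3, conditioning joystick input typically amounts to a deadzone plus rescaling before the stick values ever reach the IK controller. A minimal sketch, with an illustrative threshold rather than whatever the actual code uses:

```python
def condition_axis(raw, deadzone=0.2):
    """Apply a deadzone to one joystick axis (-1.0..1.0) and rescale
    the remainder so the output still spans the full -1.0..1.0 range,
    avoiding a jump at the deadzone edge."""
    if abs(raw) < deadzone:
        return 0.0
    sign = 1.0 if raw > 0 else -1.0
    return sign * (abs(raw) - deadzone) / (1.0 - deadzone)

print(condition_axis(0.1))   # inside the deadzone -> 0.0
print(condition_axis(0.6))   # roughly 0.5
print(condition_axis(-1.0))  # -1.0
```

Without the rescale, small drift on a worn analog stick would constantly nudge the IK target.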

After the Break: Understanding the Skeletal Mesh…

Posted in UDKOSC


ECHO::Canyon v1.0 was premiered last week at the 4/25/2013 “Music & Games” concert @ccrma.  With amazing custom visuals, modeling and rigging by Chris Platz, this environment led us in a whole new direction for interactive musical sonification and procedural music with UDKOSC.

The piece and entire concert are archived up on UStream thanks to Dave Kerr:  (ECHO::Canyon is the first piece of the stream).

Here’s a screen-grab of the player “Valkordia” character, floating out by the edge of the environment.

ECHO::Canyon - From the edge

The basic mappings used in this piece focused on flight and the motion of our main player character in relation to constructs in the environment.

  • The blue crystals visible above were represented by dynamically shifting synths, with each synth’s amplitude keyed to the player’s distance from its crystal.
  • Directional ray traces, fired directly down, left and right from the player, produced a “screeching” feedback wail driven by the player’s height and peripheral distance to solid-body objects in the environment.
  • Flocks of OSC-controlled pawn and AI Valkordia characters were represented with banks of oscillators, with pitch mapped to their absolute Z/height in the environment.
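The first mapping above is essentially an inverse distance-to-amplitude curve with clamping at both ends. A Python sketch with made-up ranges (not ECHO::Canyon’s actual values):

```python
def distance_to_amp(dist, near=200.0, far=5000.0):
    """Map player->crystal distance to a 0.0-1.0 amplitude:
    full volume inside `near`, silent beyond `far`, and a linear
    falloff in between. The ranges here are illustrative."""
    if dist <= near:
        return 1.0
    if dist >= far:
        return 0.0
    return 1.0 - (dist - near) / (far - near)

print(distance_to_amp(100.0))   # right next to the crystal -> 1.0
print(distance_to_amp(2600.0))  # halfway out -> 0.5
print(distance_to_amp(5000.0))  # out of earshot -> 0.0
```

Swapping the linear segment for an exponential curve gives a more natural-sounding falloff, at the cost of one extra parameter.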

One of the most exciting parts of this piece has been the introduction of OSC control over the player, camera and pawn entities, allowing us to “compose” gestures and sequences of actions in a dynamic fashion. Using our new OSCControl scripting app, repeatable gestures (timed sequences of player, camera and engine commands) can be created. And when control is handed over to a dynamic system, like our custom ChucK or SuperCollider control scripts, things really start to get interesting.
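A “gesture” in this sense is just a timed list of OSC messages. A Python sketch of how such a sequence might be expanded into absolute-time events for a scheduler to fire; the OSC addresses here are hypothetical, not UDKOSC’s actual namespace:

```python
def expand_gesture(steps, start_time=0.0):
    """Expand a gesture -- a list of (delay, osc_address, args)
    steps with delays relative to the previous step -- into
    absolute-time OSC events ready for a scheduler."""
    events, now = [], start_time
    for delay, address, args in steps:
        now += delay
        events.append((now, address, args))
    return events

# A tiny camera sweep "composed" as a gesture:
gesture = [
    (0.0, "/camera/yaw",  [0.0]),
    (0.5, "/camera/yaw",  [45.0]),
    (0.5, "/player/jump", []),
]
for event in expand_gesture(gesture):
    print(event)
```

Because the gesture is plain data, a ChucK or SuperCollider process can just as easily generate the step list algorithmically as play back a hand-authored one.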

All in all, the first launch of ECHO::Canyon was great, and with many iterations to come, we’re really looking forward to the next presentation.

Posted in UDKOSC