Naoto Hieda

Piecemaker2 and Piecemeta 1

The project “Generative Pathways” is a collaboration between Naoto Hieda and Lisa Parra that started in September 2017. Extending the concept of EEG Experiments, the initial aim is to research the intersection of geometric choreography and neurotechnology. There are two elements. First, we need a computer program to inspire or to choreograph a dancer. Fortunately, we have access to the source code of the Pathfinder project, which I later rewrote as Pathrefinder. Second, the movements are archived in the form of video, movement data and bio/neurosignals to be reviewed and analyzed later on. We had discussions with Motion Bank through the Choreographic Coding Labs and a meeting at its headquarters, and we decided to use the Piecemaker2 and Piecemeta platforms. Piecemaker2 is a web service to archive videos and annotations in a single timeline. Piecemeta is also a web platform, but for storing numeric data with a constant frame rate (e.g., motion tracking). To create an account and to receive a tutorial guide, please contact Motion Bank directly. In this article, I would like to describe my initial attempt to use Piecemaker2/Piecemeta and the current issues.

To begin with, I danced with Pathfinder projected on a wall and recorded it with a Yi Action Camera and a Kinect 2. Therefore, the available data are:

  • Video (action camera)
  • Depth video (Kinect)
  • Skeletal tracking (Kinect)
  • Pathfinder score

The following video shows how the skeleton (left) and video from the action camera (right) appear:

[Instagram video: a post shared by Naoto Hiéda (@micuat)]

The data seem to be synchronized in the video, but only because I manually started playing back the Kinect recording and the action camera video at the right moment. They are also two separate programs (skeletal tracking and a video player), whereas everything should be shown in a single program. The goal is to use Piecemaker2/Piecemeta so that one can review all the data streams in sync by simply hitting a play button (or even navigate through a timeline). The challenges are:

  • Uploading videos and aligning the timestamps
  • Outputting the Pathfinder score and uploading it to Piecemaker2/Piecemeta
  • Extracting skeletal tracking and uploading it to Piecemeta

Uploading videos and aligning the timestamps

The easiest way is to upload the videos to YouTube (if you wish, you can select “unlisted” so that only those who have the URL can watch the video) and embed them in Piecemaker2. The problem is aligning two or more videos. One solution is to use QR codes to visually encode timestamps, which should look like this:

This is a good solution for accurate timing, but you need to find a video frame with a visible QR code, decode the timestamp and calculate the timestamp at the first frame. So far I have not found an automation script to do this, and hopefully I will have time to write one soon (TODO1). Alternatively, if your camera has an audio input jack, which my action camera does not have, you can use linear timecode to make your life easier. This time I decided to align the timestamps manually by marking the frame in the video where I clapped my hands.
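As a starting point for TODO1, here is a minimal Python sketch of what such a script could look like, assuming the QR code payload is a Unix timestamp in seconds and using OpenCV’s QRCodeDetector; the file name is a placeholder, and real footage would need some frame skipping and error handling.

# Sketch: scan a video for a QR-coded timestamp and estimate the timestamp of frame 0.
# Assumption: the QR payload is a Unix timestamp in seconds.
import cv2

cap = cv2.VideoCapture("action_camera.mp4")  # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS)
detector = cv2.QRCodeDetector()

frame_index = 0
first_frame_timestamp = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    payload, points, _ = detector.detectAndDecode(frame)
    if payload:
        # timestamp visible in this frame, projected back to the first frame
        first_frame_timestamp = float(payload) - frame_index / fps
        break
    frame_index += 1

cap.release()
print("estimated timestamp of frame 0:", first_frame_timestamp)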

Outputting the Pathfinder score and uploading it to Piecemaker2/Piecemeta

I modified the Pathfinder code so that it dumps the score as text:

26.124 r: +00.000 -> +01.571 tx: -06.000 -> +00.000 ... tri: +00.000 -> +01.000

The first field is the timestamp in seconds since Pathfinder was launched. Then the labels “r”, “tx”, “ty”, “sx”, “sy” and “tri” follow, which are the rotation, translation (x/y), scaling (x/y) and triangle deformation parameters, respectively. Every parameter is followed by [origin] -> [destination] so that, from this score, one can recover the Pathfinder geometry, which is not implemented yet (TODO2). To align the timestamps with the video, I extracted the timestamp of the video from Piecemaker2 (something like 1503625428.717). Again, I manually marked when Pathfinder starts in the video. So, for example, if the video’s timestamp is 1503625428.717 and Pathfinder starts after 3.5 seconds, the score should start at 1503625432.217. I assume that this can be simplified and automated if the clocks of the camera and the computer that runs Pathfinder are synchronized; then, with no effort, the absolute timestamps should be aligned (TODO3).
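To make the offset arithmetic concrete, here is a small sketch that parses score lines of the form shown above (assuming the full field list, without the “...” elision) and shifts them to absolute time; the file name and the constant values are just examples.

# Sketch: shift relative score timestamps to absolute (Unix) time.
VIDEO_TIMESTAMP = 1503625428.717   # video timestamp from Piecemaker2
PATHFINDER_OFFSET = 3.5            # seconds into the video when Pathfinder starts

def parse_score_line(line):
    # e.g. "26.124 r: +00.000 -> +01.571 tx: -06.000 -> +00.000 ..."
    tokens = line.split()
    t = float(tokens[0])
    params = {}
    i = 1
    while i + 3 < len(tokens):
        label = tokens[i].rstrip(":")
        params[label] = (float(tokens[i + 1]), float(tokens[i + 3]))  # (origin, destination)
        i += 4
    return t, params

with open("pathfinder_score.txt") as f:  # placeholder file name
    for line in f:
        t, params = parse_score_line(line)
        print(VIDEO_TIMESTAMP + PATHFINDER_OFFSET + t, params)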

I found uploading the score data to Piecemaker2/Piecemeta challenging. I manually made “scene” markers on Piecemaker2 with the corrected timestamps. The advantage of Piecemaker2 is that you can jump from one scene of the video to another just by clicking on the scenes on the web platform. However, there seems to be no API to upload markers, so if you have tens or hundreds of scenes, this is not practical. Next time I will use Piecemeta instead. The advantage is that data upload can be simplified by outputting csv or trac data (explained later). Nevertheless, you need to develop your own app to review the data together with the video, since the interface on Piecemeta is quite basic. Also, you need to reformat the score into data with a constant framerate, say 30 frames per second, whereas the intervals between score points vary (usually 6-12 seconds).
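As a rough illustration of that resampling step, here is a sketch that expands one score parameter (rotation) into 30 fps samples, assuming linear interpolation within each segment; the actual Pathfinder easing may differ, and the segment values below are made up.

# Sketch: resample one score parameter to a constant 30 fps track.
# Assumption: the parameter moves linearly from origin to destination
# between one score point and the next.
import numpy as np

FPS = 30

# (start_time, origin, destination) per segment, as parsed from the score dump
segments = [(26.124, 0.000, 1.571), (33.802, 1.571, 0.785), (40.010, 0.785, 0.785)]

rows = []
for (t0, origin, dest), (t1, _, _) in zip(segments, segments[1:]):
    for t in np.arange(t0, t1, 1.0 / FPS):
        value = np.interp(t, [t0, t1], [origin, dest])  # linear interpolation
        rows.append((t, value))

for t, value in rows:
    print(f"{t:.3f},{value:.3f}")  # csv-like output, one sample per frame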

Extracting skeletal tracking and uploading it to Piecemeta

The Kinect SDK comes with Kinect Studio, which allows you to record color and depth streams in an xef file and play them back from apps that use the Kinect SDK. It is a powerful tool, and I used it for recording. As a post process, the recorded data is streamed to OSCeleton and output to a csv file using its logging feature (you need to enable logging with a command line option or use my branch). This branch also includes a script to convert the output csv file into the trac format supported by Piecemeta. Once converted into trac, the joints are automatically labeled (e.g., SpineShoulder / X). You can find some data here.

The only issue is timestamping. Since the moment when you hit the play button on Kinect Studio to stream data to OSCeleton is arbitrary, it is almost impossible to programmatically align the timestamps of the skeletal tracking and the original video. There seems to be a Kinect Studio API, so you may be able to extract the timestamps at which the frames were recorded, but this also means that the OSCeleton code needs to be modified (TODO4). Another problem with timestamping is that OSCeleton does not run at a fixed framerate, so you need to resample the output skeletal tracking data to your desired framerate (TODO5).
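The resampling part of TODO5 could look like the following sketch, which interpolates every joint column of the OSCeleton log onto a uniform 30 fps time base; the csv layout (a timestamp in seconds in the first column, joint coordinates in the rest) and the file names are assumptions.

# Sketch: resample variable-rate skeletal tracking onto a uniform 30 fps time base.
# Assumption: column 0 is a timestamp in seconds, the remaining columns are joint coordinates.
import numpy as np

FPS = 30
data = np.genfromtxt("osceleton_log.csv", delimiter=",")  # placeholder file name

t = data[:, 0]
uniform_t = np.arange(t[0], t[-1], 1.0 / FPS)

# interpolate every joint column independently onto the uniform time base
columns = [uniform_t] + [np.interp(uniform_t, t, data[:, c]) for c in range(1, data.shape[1])]
resampled = np.column_stack(columns)

np.savetxt("osceleton_30fps.csv", resampled, delimiter=",", fmt="%.6f")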

Currently I have the problems described above but, at least, I succeeded in retrieving the uploaded data from Piecemaker2/Piecemeta and playing back the video with skeletal tracking by modifying the Processing sketch by Florian Jenett:

[Instagram video: a post shared by Naoto Hiéda (@micuat)]

Obviously the data are not yet in sync, but I wanted to write this report in order to clarify where I am and what has to be done. Hopefully, in the next post, I will have a better picture of the workflow for recording, uploading and reviewing multiple data streams.
