Festival viña 2012
The above photo was posted on our Facebook page a few days ago. One of the comments accompanying the post expressed disbelief that MadMapper could be involved in such a large-scale project. What this person did not know was that I had an article scheduled for this week containing text from the people involved that explains the project in detail. The following was written by Rafael Pereyra of Visual Lab and Luis Barrera of Anatomico, with some slight corrections and editing by me.

<< begin >>

MadMapper, Who said projection mapping only?

One of the most common issues with vjing in a large video workspace is the arrangement and optimization of the video signal outputs for LED processors.

If you use a Mac Pro you may be able to manage the output video signals, thanks to its multi-output capability and Modul8, but you must map each output to fit the desired pixel space in your composition. Large stages require more than a single 4:3 or 16:9 composition; sometimes they require 9:1 or some other custom pixel space, mixing more than 2 layers of video along with some cameras. That increases CPU and GPU usage, compromising your performance.

We experienced one great solution two weeks ago at the Viña del Mar Festival, one of the most important music shows in Latin America, where we used MadMapper. The show was in the hands of MADIS (www.madis.cl), the most experienced stage design company in Chile, working with visualists Anatomico and Ju Quezada (www.anatomico.cl).

The festival held more than 700 m² of LED panels, 8 LED processors, 2 high-end Catalyst systems, a Mac Pro running Modul8 and MadMapper, and an incredible render farm of 6 machines rendering a 9000 × 1000 pixel space in real time at 60 fps, with algorithmic visuals programmed in MAMBO, a piece of software made by Proyecto LED from Chile. The experience with MAMBO, managing a huge pixel space in real time, was awesome. Everything was controlled with 2 Lemurs and 1 iPad running the Lemur software.

All of this video converged onto a Vista Spider system with 8 SDI inputs and 8 SDI outputs. One of the big challenges of the show was converting the huge LED pixel space into a single SDI video signal for all the LED processors, reducing the number of simultaneous outputs the processors required. It was a huge remap, done entirely in software.

So, what does all this mean? Perhaps these images will provide a better illustration of what was achieved.

And now here is the schematic view of the LED arrangement for the backdrop.
It is very important to respect the space between the LED modules: the gap.

The following graphic illustrates how the gap works in relation to the Modul8 output:

This is what is displayed without the gap:

With the gap you can tell the difference: the separation between modules is respected by the software arrangement, so the image looks natural across the physical spaces and gaps.
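To make the gap idea concrete, here is a minimal sketch in Python (the module and gap sizes are made-up numbers, not the festival's real dimensions) of how each LED module's source rectangle can be computed so that content falling over a physical gap is simply never sampled:

```python
def module_source_rects(n_modules, module_px, module_mm, gap_mm):
    """Source x-ranges (in canvas pixels) for each LED module.

    The virtual canvas spans the whole wall *including* the physical
    gaps, at a uniform pixels-per-millimetre scale, so any content
    that falls over a gap is never sampled by a module.
    """
    px_per_mm = module_px / module_mm      # scale of one module
    gap_px = gap_mm * px_per_mm            # gap expressed in canvas pixels
    rects = []
    x = 0.0
    for _ in range(n_modules):
        rects.append((round(x), round(x + module_px)))
        x += module_px + gap_px            # skip over the gap
    return rects

# Example: 4 modules, each 192 px wide over 500 mm, with 100 mm gaps
rects = module_source_rects(4, 192, 500, 100)
```

Because the canvas is laid out at a uniform pixels-per-millimetre scale, anything that would land in a gap is discarded, which is why the image looks natural across the modules.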

We had to change how we used the software for Morrissey's show: we built a virtual pixel space in Modul8 using a 9000 × 1000 pixel Syphon output and remapped it to a single Full HD SDI output with MadMapper. The technical crew was amazed at how easy and quick the setup was for us. We used more than 400 quads.
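As a rough illustration of that kind of remap (the strip layout and counts here are my own assumption, not the crew's actual 400-quad arrangement), a wide virtual canvas can be cut into vertical strips that are stacked as rows inside a single Full HD frame, each strip becoming one source/destination quad pair:

```python
def slice_canvas(src_w, src_h, out_w, out_h, n_strips):
    """Cut a wide virtual canvas into n vertical strips and stack them,
    scaled, as rows inside a single output frame.

    Returns a list of (source_rect, dest_rect) pairs, each of which
    would correspond to one quad: source_rect is (x, y, w, h) in canvas
    pixels, dest_rect is (x, y, w, h) in output-frame pixels.
    """
    strip_w = src_w // n_strips
    row_h = out_h // n_strips
    quads = []
    for i in range(n_strips):
        src = (i * strip_w, 0, strip_w, src_h)   # strip i of the canvas
        dst = (0, i * row_h, out_w, row_h)       # row i of the frame
        quads.append((src, dst))
    return quads

# A 9000 x 1000 canvas packed as 5 rows inside a 1920 x 1080 frame
quads = slice_canvas(9000, 1000, 1920, 1080, 5)
```

The LED processors would then cut their own regions back out of the HD frame; any aspect distortion inside the frame is harmless because each processor resamples its region to the panels' native resolution.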

Here is a schematic view:

And another screenshot with the MadMapper interface with both the input and preview windows:

<< end >>

This is not the only instance where our software has been part of a much larger workflow. Modul8 has been known to partake in very large festival setups and live broadcast environments. When you have such highly complex setups, it’s nice to have one or two things be simple.


5 Responses to Who said projection mapping only?

  1. Timothy says:

    I have been using this technique for the past 3 months, although not on a HUGE scale. I was getting BORED of playing multiple layers of video over each other, so I decided to PANEL and CHECKERBOARD my screen. I created a map of panels and a checkerboard map. On each map I put a small space in between each panel and square. I have those maps saved, and now whenever I get any new clips, I run them through the map and screen capture with ScreenFlow. I also made a mask to use in Resolume.

    PANEL MECHANICS:

    So I have 8 panels with space in between each (no video runs in the space between the panels). I run a layer of video from Resolume, through Syphon, to MadMapper. In Resolume I use one layer at the FULL composition screen size. I have found that using the full screen produces better video; when you downsize the video to use multiple layers in Resolume, the result comes out with less quality than a DVD. I set the layer to auto-run with NO, I repeat NO, transition. So I have the video running through all 8 panels. With everything running I do a screen capture and save. Next, I turn off every other panel (4 of them) and do a screen capture. The last step is to turn on the panels I turned off, turn off the panels I turned on, and do another screen capture.

    After the captures, all you have to do is cut the recordings into separate clips.

    Here comes the fun part.

    After I have cut the video into clips, I drop the full-panel clip (all 8 panels) into one layer, the left 4 panels into a second layer, and the right 4 panels into a third layer. If you run the full video, it looks like the screen is broken up into 8 screens. Then I turn that layer off, start layers 2 and 3, and set one to forward auto-run and the other to random auto-run. This gives you 2 videos running, one on every other panel and the other video on the remaining panels.

    Then, if you set up a 4th and 5th layer, you can set up Resolume to play a 3rd layer in the space between the panels.

    A LITTLE INSIGHT:

    Through trial and error, I came up with these conclusions. It boils down to workflow and stability. First, this is for one screen; I was looking to do something different for the WMC '12 VJ Challenge. I chose to screen capture my videos because I have a dual-core MacBook Pro, and there is less strain on my CPU this way.

    You might be asking why I screen captured the full panel when I have screen captures of the alternate panels. Well, through trial and error, I found that when I tried to match up the clips in Resolume, they never lined up perfectly to recreate the full-panel video. This is because of my dual-core.
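Timothy's alternating-panel setup above boils down to splitting the panel indices between layers, which can be sketched in a few lines of Python (the panel count and layer names here are just an illustration of his scheme):

```python
def panel_layers(n_panels=8):
    """Split panel indices into the three layers Timothy describes:
    layer 1 covers every panel, layer 2 the even-numbered panels, and
    layer 3 the odd-numbered ones, so layers 2 and 3 together tile the
    full wall with two different clips on alternating panels.
    """
    all_panels = list(range(n_panels))
    return {
        "layer1_full": all_panels,
        "layer2_even": all_panels[0::2],   # panels 0, 2, 4, 6
        "layer3_odd":  all_panels[1::2],   # panels 1, 3, 5, 7
    }

layers = panel_layers(8)
```

Running layers 2 and 3 with different auto-run modes, as he describes, puts two independent videos on interleaved sets of panels while every panel stays lit.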

  2. Holy!! That is pretty impressive! WOW :D
    It's so freaking awesome!
    Congratulations to the team who made that possible, and to the creators of MadMapper and Modul8. It is a great thing to know what you can achieve through good teamwork and two great pieces of software.
    I mean, the mapping I do looks so poor compared to this. I want more of a challenge! :D
    Thanks for sharing this valuable information!

  3. Yes, thanks a lot for the replies. There's no doubt MadMapper is a powerful tool; I guess its capabilities even allow you to manage more screens with one single LED processor.

    For example, at our last venue we had 4 columns dressed with 4 LED faces around them. All 4 columns were mapped with different images, and each face of each column was also mapped with a different image.

    Same processor, and we only needed 1 video signal from a MacBook Air: a wide video arrangement of several video sources, remapping each column in the MadMapper output with pixel precision, something that previously couldn't be done in Modul8 alone.

    • No, we were doing the video triggering in Modul8 and then remapped it into a special layout in MadMapper to avoid wasting pixel space. The original base software is MAMBO from Proyecto LED Chile, running on a 6-computer server.
      For the video ingest they used a Vista Systems Spider in SDI to take signals from MAMBO, MadMapper, or SDI switching.

      cheers

  4. Nicholas Rivero says:

    Were you using a Mac Pro with a capture card to ingest the video and then pixel-map it to your LED panels? Or was the content being played back from Modul8 on the Mac Pro that you were using?