Video resolutions & codecs: how centralised storage can enhance your workflow

 

BY ELVIN JASAREVIC

I remember the days of Apple’s Blue and White G3, the first version of Final Cut Pro and Sony DSR decks. It must have been some time in 1999, when the DV (digital video) revolution started. It was a time when some even believed that, with the help of Digieffects CineLook or the Magic Bullet Suite and its “film look”, real film would be a thing of the past.
With 4K cameras widely available today, I think we are finally there, well ... almost. Most of the promos and even movies today are shot on these 4K cameras. With capture cards available from multiple vendors and support from almost all NLE (non-linear editing) applications, you could build your own fully functional 4K NLE system for only US$10,000. I say “only” because in 1999, Avid Symphony based on Avid Meridian hardware was priced around $150,000 and could edit only in standard definition (SD)!

 

If we need to stabilise the image, this can be done easily because, when delivering to today’s TV sets, around 65% of the 4K frame goes unused — depending on the variation of 4K being used.
Changing the currently widely adopted HD standard to 4K will be a huge technical challenge for TV companies and, unfortunately, I cannot realistically see 4K hitting home TV screens any time soon.
But there are many other advantages to using 4K. In some Sony theatres you can already see 4K movies, while companies such as Amazon, Apple, Google, Hulu, Microsoft and Netflix can create 4K productions and bypass the theatres and traditional channels by using the Internet. This is a far superior route and gives these companies the ability to use the technology to reach a much wider audience.

The DDP booth at Broadcast India 2014 looked impressive, with cameras from Sony, Blackmagic, Panasonic, RED and Arri

 

 

As with every technology revolution, there are also new challenges.
At the recent Broadcast India expo in Mumbai I met many editors and engineers, many of whom are still working with SD resolution!
It is not only about buying a 4K camera or a 4K-capable editing system. There are many considerations: workflow, the various 4K flavours, codecs, bit rates, frame rates and so on. And do not forget that the higher the resolution, frame rate and bit rate, the faster the storage needs to be.
As 4K carries approximately four times the pixels of 1080 HD (high definition), it provides incredibly detailed images. HD has approximately two million pixels and 4K increases this to around eight million. Because the picture is so big, we can easily zoom in to see part of the image.
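The pixel arithmetic can be checked directly. A quick sketch — frame sizes are the standard ones, and “DCI 4K” here means the 4096 x 2160 cinema variant:

```python
# Pixel counts for HD and two common 4K variants.
def megapixels(width, height):
    return width * height / 1e6

hd  = megapixels(1920, 1080)   # ~2.07 MP
uhd = megapixels(3840, 2160)   # ~8.29 MP, exactly 4 x HD
dci = megapixels(4096, 2160)   # ~8.85 MP (DCI 4K)

print(f"HD {hd:.2f} MP, UHD {uhd:.2f} MP ({uhd / hd:.1f}x HD)")
```

Note that the neat “4x” figure applies to UHD (3840 x 2160); the DCI variant is slightly wider still.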

 

Video resolutions, codecs and storage requirements

It has been over 15 years since I started working professionally with video cameras and editing systems.
I have been at almost every major broadcast exhibition and am honoured to have worked as a demonstrator for companies such as Adobe, Apple, Avid and Blackmagic Design and, for the past eight years, for the award-winning DDP – Dynamic Drive Pool, a shared-storage product by Ardis Technologies. During this time I have seen countless changes in post-production workflows and there are some questions I am regularly asked about shared storage:

 

I have 15 FCP (Final Cut Pro) editing systems and need 20TB of storage. What is the cost? 
Or: can this storage be used for 4K capture or grading, and so on ... 
My replies would typically be:

What video resolution and codec are used? How many editing systems will be connected? What is the total number of video streams required? 

Why do I ask these questions? Because they are must-ask questions if the storage is to be specified to deliver the required bandwidth and allow material to be edited in real time.
With the right information, the required bandwidth for the storage can be calculated. The general rule is that more hard drives and more RAID controllers will provide more bandwidth, but in order to get all of this bandwidth out of the storage, you also require the correct cards for your throughput (I/O). These are the 1GbE, 10GbE or even 40GbE Ethernet or Fibre Channel cards at the back of the storage.
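As a rough sizing sketch, the aggregate bandwidth the storage and its network ports must sustain is simply streams multiplied by the per-stream rate. The per-stream figures below are illustrative values drawn from this article, not vendor specifications:

```python
# Aggregate bandwidth the shared storage must deliver concurrently.
# Per-stream rates (MB/s) are illustrative values, not vendor specs.
STREAM_MB_S = {"DV": 3.7, "2K DPX": 350.0}

def required_bandwidth(codec, editors, streams_per_editor):
    return STREAM_MB_S[codec] * editors * streams_per_editor

total = required_bandwidth("2K DPX", editors=2, streams_per_editor=1)
print(f"{total:.0f} MB/s required")  # 700 MB/s
# A single 10GbE port sustains roughly 1,000-1,200 MB/s in practice,
# so 700 MB/s already fills most of one link.
```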
However, it is not as simple as it sounds, because different resolutions and bit rates result in very different seek-time behaviour. For example, if one RAID system with 16 drives can play two streams of 2K files at 350MB/s per stream, which is around 700MB/s in total, it does not mean that it can play 190 DV streams (each DV stream is 3.7MB/s, so 700/3.7 = 189).
This is because of seek time. A handy general rule: the smaller the file size and the bit rate, the greater the proportion of time spent seeking. Also, with more drives in shared storage, video files will play only when the heads of all 16 drives are in the right position. If you use SSDs, however, seek time is negligible and the calculation above becomes reasonable.
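The arithmetic trap can be made concrete. The 30% efficiency figure below is purely an illustrative assumption; the real usable fraction for many small, low-bitrate streams depends on the drives and the RAID controller:

```python
# Naive vs seek-time-derated stream counts on spinning disks.
def naive_streams(array_mb_s, stream_mb_s):
    # Pure bandwidth division, ignoring seek time (the misleading 700/3.7).
    return int(array_mb_s // stream_mb_s)

def derated_streams(array_mb_s, stream_mb_s, efficiency=0.3):
    # Small files and low bit rates mean the heads spend proportionally
    # more time seeking than transferring; efficiency=0.3 is a stand-in.
    # SSDs have no mechanical seek, so their efficiency approaches 1.0.
    return int(array_mb_s * efficiency // stream_mb_s)

print(naive_streams(700, 3.7))    # 189
print(derated_streams(700, 3.7))  # 56, with the assumed 30% efficiency
```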
Now, let us compare some of the popular HD and digital cinema resolutions and codecs. HD can be 720p or 1080i/p, compressed or uncompressed, 8- or 10-bit, and can use different codecs, such as ProRes, DNxHD and AVC-Intra, with required bandwidth ranging from 3MB/s up to 180MB/s for full-HD RGB 10-bit 4:4:4.
Video formats are referred to by their vertical (Y-axis) resolution, whereas digital cinema formats are referred to by their horizontal resolution. 2K, 3K, 4K, 5K or even 6K can have resolutions from 2048 x 1152 up to 6144 x 3160, and can use codecs such as XAVC, R3D and ARRIRAW, with per-stream bandwidth ranging from 30MB/s for 4K Sony XAVC 4:2:2 10-bit, or 40MB/s for R3D, up to 1.27GB/s for full 4K DPX uncompressed 10-bit at 24fps!
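The uncompressed figure can be sanity-checked from first principles. This sketch assumes DPX-style packing of 10-bit RGB into 32 bits (4 bytes) per pixel and a full-aperture 4096 x 3112 frame; file-header overhead is ignored, which is why it lands a little under the quoted 1.27GB/s:

```python
# Uncompressed video bandwidth from first principles.
# Assumes DPX-style packing: 10-bit RGB in 32 bits (4 bytes) per pixel.
def uncompressed_mb_s(width, height, bytes_per_pixel, fps):
    return width * height * bytes_per_pixel * fps / 1e6

rate = uncompressed_mb_s(4096, 3112, 4, 24)  # full-aperture 4K, 24fps
print(f"{rate / 1000:.2f} GB/s")  # ~1.22 GB/s before header overhead
```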

 

Elvin Jasarevic with the RED Dragon 6K camera

Or simply:
2K = 2048 wide (2 x 1024, where 1024 is 1K), giving 2048 x 1152
3K = 3072 wide (3 x 1024), giving 3072 x 1728
4K = 4096 wide (4 x 1024), giving 4096 x 2304

What about workflow?

With cameras like the Sony F55, you can capture both HD and 4K images simultaneously, and do rough editing of material using the HD proxies while the 4K RAW files are archived and colour corrected.
Working with a lower resolution makes editing much easier and the NLE application’s response faster. There are a few workflows available but, in relation to editorial, there are essentially two approaches: native editing and transcoded editing.
With native editing you edit the files exactly as you captured them, which is more demanding. Most editors try to avoid this, as it generally means having the fastest PC and storage.
If editors require lower-resolution files for dailies, they will usually create ProRes or DNxHD files for offline editing. After editing is done, they can simply relink the final edit (AAF/XML file) to the original camera material for final grading, which is the last step in this process. BMD Resolve would be a good and inexpensive starting point for grading.
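One concrete reason the proxy workflow pays off is storage (and therefore bandwidth) per hour of footage. Both data rates below are illustrative assumptions — roughly a ProRes Proxy-class HD file versus a lightly compressed 4K RAW stream — not vendor specifications:

```python
# Storage per hour of footage: HD offline proxy vs 4K RAW.
# Both MB/s rates are illustrative assumptions, not vendor specs.
def gb_per_hour(mb_per_s):
    return mb_per_s * 3600 / 1000

proxy = gb_per_hour(5.6)    # HD proxy-class codec
raw4k = gb_per_hour(300.0)  # lightly compressed 4K RAW
print(f"proxy: {proxy:.0f} GB/h, 4K RAW: {raw4k:.0f} GB/h")
```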
Of course, in any modern workflow consisting of thousands of files or projects, a MAM (media asset management) or PAM (production asset management) system will be of great help, and it is worth looking at the Focal Point Server.
One of the most important components in workflow for any company is shared storage. It needs to be easy to use, with all the right tools and, obviously, it needs to be very fast.
My recommendation is, of course, the DDP, an Ethernet-based SAN system, which recently received the IABM Design & Innovation Award in the storage category.
But do not take just my word for it, nor that of the 22 independent judges: check out DDP at your dealer and see for yourself.
