The technology is getting better, the prices continue to come down and some post-production houses are becoming masters of the video-to-film process, but shooting a film on video and blowing it up to 35mm film can still seem something of a dark art.
Part of the reason for this is that cameras and technology are changing so quickly. What seemed the best balance between cost and creative values a year or two ago may no longer hold true. But also, each project is unique and may require a different solution and aesthetic.
Vancouver-based filmmaker Kenneth Sherman has worked on two projects shot on video and blown up to film. He was editor on one of the first High Definition 24P (HD24P) video-to-film features in Canada, The Rhino Brothers (2001), a small-town ice hockey drama, and more recently teamed up with many of the same players from that production to direct a video-to-film short, Go-Go Boy - Prelude (2002), shot on DVCAM. The film, which was feted by the local industry for its artistic merit in 2003, was intended both as a visual treatment for a feature and as a showcase for transfer technologies.
"We were really trying to get the best possible pixel rate that we could," says Sherman, who collaborated with local outfits Matrix Professional Systems, William F. White and Airwaves, and post-production house Rainmaker.
Go-Go Boy (Prelude) was shot with two cameras: a Sony DSR570, which has a 16:9 chip, and the Sony PD150, the 3-chip prosumer camcorder that shoots both MiniDV and DVCAM. They used an anamorphic lens on the PD150 because it doesn't have a 16:9 chip and they wanted to use the whole chip (the PD150's 16:9 setting simply crops the image as it acquires it). Sherman edited the film at home on an Avid and output to a DVCAM tape. That tape was then "uprezed" by his post house Rainmaker in Vancouver ("they doubled the lines of resolution") to create an HD master which was used to do all the colour timing before going to film.
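The "doubling the lines" step Sherman describes is, at its simplest, a vertical upscale. A minimal sketch of naive line-doubling with NumPy follows; a real uprez pipeline such as Rainmaker's would interpolate new detail rather than just repeating scan lines, so treat this purely as an illustration of the resolution arithmetic:

```python
import numpy as np

def line_double(frame: np.ndarray) -> np.ndarray:
    """Naively double vertical resolution by repeating each scan line.

    A professional uprez would synthesise new lines (and usually new
    horizontal samples too) instead of simple repetition.
    """
    return np.repeat(frame, 2, axis=0)

# A standard-definition NTSC frame has 480 visible lines; doubling
# them yields 960 lines, in the neighbourhood of HD's 1080.
sd_frame = np.zeros((480, 720, 3), dtype=np.uint8)
hd_frame = line_double(sd_frame)
print(hd_frame.shape)  # (960, 720, 3)
```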
Sherman sees two big advantages to going with DVCAM rather than film or High Definition: financial ("you are saving a ton of money") and the creative flexibility of being digital at the post-production stage.
"You have a lot more precision when you are timing video than when you are timing film. There was one shot in particular where a strip along the edge of a door was white and everything else in the frame was fairly dark, and we were able to go in, isolate that one stripe down the centre of the frame and bring it right down so it didn't stand out so much."
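The kind of selective correction Sherman describes, pulling one bright strip down while leaving the rest of the frame alone, can be sketched as a masked gain adjustment. This is a toy NumPy example, not the professional colour-timing system actually used; the coordinates and gain are invented for illustration:

```python
import numpy as np

def darken_strip(frame: np.ndarray, x0: int, x1: int, gain: float = 0.4) -> np.ndarray:
    """Reduce brightness inside a vertical strip [x0, x1), leaving the
    rest of the frame untouched -- a crude 'power window'."""
    out = frame.astype(np.float32)
    out[:, x0:x1] *= gain
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.full((480, 720, 3), 40, dtype=np.uint8)  # mostly dark frame
frame[:, 350:360] = 230                             # bright door edge
timed = darken_strip(frame, 350, 360, gain=0.4)
print(timed[0, 355, 0], timed[0, 0, 0])  # 92 40 -- strip pulled down, rest untouched
```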
James Tocher, who heads Vancouver-based Digital Film Group, recommends that anyone considering a blow-up should begin by getting advice from DPs and directors who have gone through the process. It is also generally recommended that you speak with post-production houses early in the project and shoot test footage.
Digital Film Group burst onto the international digital film scene with its transfer of Atanarjuat the Fast Runner in 2001, which showed that footage shot on the North American NTSC video system was a viable route for video-to-film transfers. Prior to that, many filmmakers in North America were doing blow-ups from PAL video rather than NTSC, because the PAL route had been successfully pioneered in Europe.
"What we try to do on all levels is... help them make the transition in their minds from what they see on the monitor versus how it's going to look on the big screen. So that takes in lots of factors - what format and resolution they originated on, how the detail settings were set up, how the black level settings were set up... all these factors come into play and they are specific to the cameras, they are specific to the formats, the pixel resolutions, the frame rates. All these things have a bearing on how the thing is finally going to look on the big screen."
Tocher says rates for a feature length transfer range from C$25,000-C$40,000 (10,200-16,300 pounds), depending on the sophistication of the technology and people making the transfer.
DFG itself transfers 20 to 25 digitally shot features a year, including On The Corner, featured in this column last week. Those features are predominantly shot on NTSC video, with a quarter to a third of those features shot on High Definition video. "HD has been increasing exponentially in the last couple of years," says Tocher.
When considering blowing up to film, one of the most important decisions is choice of camera. "That is a huge factor," says Tocher. He points to the fact that now many filmmakers can afford to edit right up to High Definition on programs like Adobe Premiere Pro and Apple Final Cut Pro on well-speced PCs or Macs.
"I think the better-educated digital filmmaking DPs and directors will often go for higher-end formats like DigiBeta and DVCPro50 because they realise that there is a significant difference in cameras, but not a significant difference in post-production price. Posting standard-definition MiniDV and posting Digital Betacam is really not that different any more."
The quality of a camera's lenses, the size of its chip and the electronics inside the camera are all going to affect the final image, even when shooting on the same format. "The difference between the 570 and PD150 was pretty remarkable and that is just based on the camera chip size," says Sherman, even though both cameras used DVCAM tape, at the same speed and under the same conditions.
Sherman is not against using lower end formats like MiniDV and has even worked with Hi-8 to get a "really grainy, rough videoey" effect, but he says that the aesthetic requirements of the project should dictate the technology and not the other way round. He adds, "If people are expecting it (MiniDV) to look like Lord of the Rings they're up for a challenge."
Tocher points out that each camera has its own quirks that you should be aware of when shooting for blow-up, particularly if you opt to shoot on a prosumer DV camera. He mentions the Panasonic DVX100, a 3-chip 24P MiniDV camera that he owns himself and that he considers "an exceptional little camera" in its price range.
One of the DVX100's strengths is that it can shoot 24 progressive frames per second, emulating the 24 frames per second rate of film. Progressive video shows each frame in one pass, as opposed to the two-pass approach of standard interlaced video, which divides a video frame into two fields, one made up of the odd-numbered horizontal lines, the other of the even-numbered lines (hence NTSC interlaced video comprises 30 frames and 60 fields a second, and PAL interlaced video comprises 25 frames and 50 fields a second).
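The odd/even split can be illustrated directly: an interlaced frame is just two half-height fields interleaved line by line, and the field rate is simply twice the frame rate. A small sketch, using nominal rather than exact (59.94/29.97) NTSC figures:

```python
import numpy as np

def split_fields(frame: np.ndarray):
    """Split an interlaced frame into its two fields: the lines at
    even row indices and the lines at odd row indices."""
    return frame[0::2], frame[1::2]

# A 480-line frame where each pixel stores its own line number
frame = np.arange(480)[:, None] * np.ones((1, 720))
upper, lower = split_fields(frame)
print(upper.shape, lower.shape)  # (240, 720) (240, 720)

# Frame rate vs field rate (nominal figures)
print("NTSC:", 30, "frames ->", 30 * 2, "fields per second")
print("PAL: ", 25, "frames ->", 25 * 2, "fields per second")
```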
Although the DVX100's 24P setting gives it an edge as far as film transfers are concerned, some of its settings can play havoc with footage destined for the big screen.
"There is a setting called vertical detail level, and it's got two settings, thick or thin. If you set it to the wrong one you end up with a lot more noise. The picture on a monitor looks sharper, but what they are doing in order to achieve that sharpness is spiking the vertical detail of the signal so that it causes all sorts of problems. You may not necessarily see it on a monitor, but you will see it when you blow it up to film," he says.
Tocher says a common mistake made by filmmakers new to the blow-up process is to use settings like CineMatrix and CineGamma which make video look more cinematic. "What camera manufacturers are trying to do is fake you out by making a video signal on a video monitor look more 'filmlike'," he says, but it could be degrading the image as far as a blow-up is concerned.
The film setting, for example, may simply change an NTSC camera's capture from the standard 30 frames/60 interlaced fields a second to 30 progressive frames a second.
"60i transfers to 24P better than 30P," says Tocher. "And the reason for that is there is less information to interpolate to get to 24, because you actually have to interpolate new frames to get to 24 if you are not at 24P already."
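Tocher's point comes down to temporal sampling: 60 fields a second give the conversion more distinct moments in time to draw on when building 24 progressive frames than 30 frames a second do. A rough sketch of the arithmetic (a simplified proxy using nominal rates, not an actual standards-conversion algorithm):

```python
def max_temporal_error_ms(source_rate: float, target_rate: float = 24.0) -> float:
    """For each target frame time over one second, measure the distance
    (in milliseconds) to the nearest source sample -- a crude proxy for
    how much temporal interpolation a conversion must do."""
    step = 1.0 / source_rate
    worst = 0.0
    for i in range(int(target_rate)):
        t = i / target_rate
        nearest = round(t / step) * step
        worst = max(worst, abs(t - nearest) * 1000.0)
    return worst

# 60 samples/s land closer to every 24P frame time than 30 samples/s do
print(round(max_temporal_error_ms(60), 2))  # 8.33
print(round(max_temporal_error_ms(30), 2))  # 16.67
```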
Other settings designed to create a film look could add video artefacting to the final blow-up.
"The camera could be doing things like compressing a certain range of the signal or boosting up a certain area of the gamma in order to make it look more filmlike, but maybe what it is doing is not giving you the full range of your black levels... or it could actually be adding noise to your video in order to add the kind of feeling of a film grain. It's a bunch of video engineers sitting around going, 'Oh that looks more like film. Great let's keep that.' But then when you blow up to film - and film has a much wider response area - it sees everything in the signal that you may not have seen in the monitor. Monitors tend to compress the lower end of the signal."
Softening filters should also be used with caution, or not at all, if you are shooting standard definition. "The process there of interpolating the pixels tends to soften the picture a little bit, so if you use the softening filters then you are compounding the softening effect when it gets to film," says Tocher.
Other things to consider are good monitors on set and good charts (Tocher suggests the Macbeth chart) to ensure more control over colour and lighting.
Video requires less lighting than film but does not have the same latitude, which can lead to blown-out images in high-contrast lighting situations.
Anyone who watched the recent European Cup matches saw a good example of this: half the pitch lay in the shadow of the stadium, with the result that half the picture was either too dark or totally washed out because the camera couldn't handle the contrast.
Sherman says he had a similar day filming The Rhino Brothers, when the sky was blown out. "There was no information, so we couldn't do anything in the timing to try and regain some sort of colour."
Even experienced camera operators struggle in some situations. No wonder filmmakers become control freaks.