Line scratches are a common problem in archived film, and they are carried over to video during telecine transfer. The artefact is easily visible as a line of bright or dark intensity, oriented more or less vertically over much of the image. It may be caused when material from some particle is smeared vertically over the film in the projector, or by abrasion of the film as it passes over a particle caught in the mechanism. The task is to propose a technique for the automatic detection and removal of the artefact. Detection is complicated by the fact that lines occur as natural phenomena in interesting scenes. Furthermore, the defect can occur in the same, or nearly the same, location in consecutive frames. Thus detection of line artefacts cannot rely on temporal discontinuity in image brightness, and the chapter concentrates on purely spatial line detection. The pictures below, from KNIGHT, SITDOWN and STAR, show examples of line scratches and also illustrate the persistence of the defect over frames.
It is difficult to propose a mathematical model for the effect of the abrasion or occlusion of the film (which causes the scratch) on the intensity of the projected light, although such a model would be the natural first step in the design of a detection algorithm. Instead, it is possible to make some observations regarding the luminance cross-section of the line defect, from which a model useful for detection can be proposed. The use of this feature is explored in the first part of the chapter, which concludes by proposing a validation step for separating genuine line defects from false alarms.
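The cross-section observation can be caricatured in a few lines of code: a scratch shows up as a narrow bright or dark spike in the vertically averaged luminance profile. The sketch below is an illustrative assumption, not the chapter's detector; the smoothing window and threshold are ad hoc choices.

```python
import numpy as np

def candidate_scratch_columns(frame, threshold=10.0, half_window=7):
    """Flag columns whose vertically averaged luminance departs sharply
    from a local horizontal baseline.  A scratch appears in the
    cross-section as a narrow spike; window and threshold are ad hoc."""
    profile = frame.mean(axis=0)                        # luminance cross-section
    kernel = np.ones(2 * half_window + 1) / (2 * half_window + 1)
    padded = np.pad(profile, half_window, mode="edge")  # avoid edge artefacts
    baseline = np.convolve(padded, kernel, mode="valid")
    residual = profile - baseline                       # spike survives smoothing
    return np.flatnonzero(np.abs(residual) > threshold)

# synthetic frame: flat grey with a dark scratch at column 40
frame = np.full((64, 128), 128.0)
frame[:, 40] -= 60.0
print(candidate_scratch_columns(frame))   # -> [40]
```

Note that a real detector must also cope with scene edges that happen to be vertical, which is why the chapter follows this stage with a validation step.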
Since no reliable model of the degradation process is proposed, the removal of the line defect is treated as a missing data problem, and a few varieties of reconstruction are explored. The reconstructed region persists across several frames and also across much of the vertical extent of the picture. It is interesting to note that errors in the chosen reconstruction algorithm are therefore more visible than in the case of blotches.
The pictures shown below are associated directly with particular sections in Chapter 8 of the book. The raw image data (for stills only) is also contained in .PGM files with the names indicated in the captions of each image. As usual, the reader is asked to disregard spurious behaviour at the edges of the processed frames, since no great care was taken to give the algorithms useful behaviour at the extremities of each frame. This is merely an implementation detail.
See Figure 9.6 in the book. The pictures below show the importance of reducing the visibility of the interpolated line patch. The entire original sequence of 256 x 256 pixels per frame (64 frames) is stored as KNIGHT.SEQ in the usual raw format. The original and restored versions of frame 33 below show that the JPEG compression process actually reduces the visibility of the interpolated line areas: the difference between the Least Squares (LS) and Sampled interpolations is almost invisible at this scale and in the JPEG format used for images viewed in the HTML browser. The differences are much more apparent in the original, raw image data, which the reader is invited to examine. Some attempt to better illustrate the difference is made in the zooms on frame 33. At that scale, the LS interpolant shows up as a visibly different flat area, whereas the Sampled interpolant is more difficult to place. In those examples the width of the interpolated line was 8 pixels, in order to emphasise the difference. One image is shown where the width was taken as 5 (taken from the result of the Bayesian refinement step); there the interpolated line area is much less visible.
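The perceptual difference between the two interpolants can be illustrated with a one-dimensional caricature (not the book's 2D AR scheme; all numbers below are illustrative assumptions). The LS interpolant is a smooth conditional-mean fill, which looks flat against textured surroundings, whereas the sampled interpolant adds simulated driving noise, restoring the local texture variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic textured scan line: AR(1) fluctuation about a grey level
a, sigma, n = 0.9, 5.0, 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + sigma * rng.standard_normal()
row = 128.0 + x

# "scratch": columns 90..97 treated as missing (width 8, as above)
gap = np.arange(90, 98)
left, right = row[gap[0] - 1], row[gap[-1] + 1]

ls = row.copy()       # smooth mean fill (a linear blend stands in for
sampled = row.copy()  # the AR bridge mean in this sketch)
for k, j in enumerate(gap, start=1):
    w = k / (len(gap) + 1)
    ls[j] = (1 - w) * left + w * right
    sampled[j] = ls[j] + sigma * rng.standard_normal()

# local variation inside the patch: near zero for LS, texture-like for sampled
print(np.std(np.diff(ls[gap])), np.std(np.diff(sampled[gap])))
```

The LS patch has almost no pixel-to-pixel variation, which is exactly why it reads as a visibly flat strip at close zoom, while the sampled patch blends in with the surrounding texture.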
|Original KN33ORG||LS Restored Line width = 8; KN33LS|
|Sampled restoration Line width = 5; KN33FINE||Sampled restoration Line width = 8; KN33SAMP|
|Original K33ZORG||LS Restored, Line width = 8, K33ZLS||Sampled, Line width = 8, K33ZSAMP||Sampled, Line width = 5, K33ZFINE|
Also on the CD are a number of sequences which show restoration of the entire KNIGHT sequence. As usual, the deblotched sequence is shorter by two frames, because three frames are needed to restore one. The sequences are described below.
The reader is directed to observe how the interpolated line regions become less visible after subsequent processing to remove blotches (KNDLDS.SEQ). They become even less visible after noise reduction.
The images in the book (Figure 9.9) do reflect the perceived visibility of the interpolated line areas. Line removal on frame 8 is reproduced here, and the detection mask is also shown. All the image data for this example is in the subdirectory SITDOWN. The original sequence is 10 frames of 512 x 512 pixels and is stored in SITDOWN.SEQ in the usual raw format. The images below show good interpolation of the line areas. A length threshold of 100 was used for detection, and 7 dark lines were assumed. The Bayesian refinement step rejected none of the proposals, and a line width of 8 pixels was used for interpolation of all the line features. A causal 2D AR model with 3 x 3 pixel support was used with sampled interpolation and a block size of 16 x 16 pixels.
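The length-threshold idea used above can be sketched as follows: a candidate column is retained only if its flagged pixels form a sufficiently long vertical run, since scratches persist down the frame while scene edges rarely do. This is a simplified illustration of the idea, not the chapter's implementation.

```python
import numpy as np

def filter_by_length(mask, min_length=100):
    """Keep candidate columns whose flagged pixels contain a vertical
    run of at least min_length; short fragments are rejected."""
    keep = np.zeros_like(mask)
    for col in range(mask.shape[1]):
        flags = mask[:, col]
        run = best = 0
        for v in flags:
            run = run + 1 if v else 0   # count consecutive flagged pixels
            best = max(best, run)
        if best >= min_length:
            keep[:, col] = flags
    return keep

mask = np.zeros((512, 512), dtype=bool)
mask[:, 100] = True          # full-height scratch: kept
mask[200:240, 300] = True    # short edge fragment: rejected
out = filter_by_length(mask, min_length=100)
print(out[:, 100].sum(), out[:, 300].sum())   # 512 0
```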
|Detection mask SITDETL7|
|Sampled restoration SITDL7|
The SITDOWN sequence is excellent for illustrating both the successes and the failures of the current algorithm. Some additional sequences are described below.
|Original, CARGO||Sampled restoration, line width = 8, CARGODL|
The STAR sequence is stored in the STAR subdirectory as STAR.SEQ in the usual raw format, but with 576 x 720 pixel frames (CCIR rec. 601). Also in that directory are a few other sequences showing line removal after deblotching the sequence. The sequences are as follows:
|Original STARFR1||Descratched with JOMBADI Line width = 8; STDSJF1|
|Descratched and Line Removed with SDIa+ML3Dex STDSDIF1||Line Removal on JOMBADI reconstruction STDSJLF1|
This chapter has again outlined an algorithm which combines deterministic and stochastic methods. The overall concept is that a deterministic pre-processing algorithm can yield a very good starting point for a stochastic process, allowing the power of MCMC methods (for example) to be exploited in a practical, low-cost solution by improving the convergence of the stochastic optimisation stage.
The work has shown that the automatic detection of line scratches is complicated by the fact that they persist in nearly the same location in each frame. In some sense the deterministic process which was introduced can stand on its own as an effective detection system if the user is willing to identify the number of lines in the image. This kind of user interaction may be reasonably viable in the film post-processing industry, although much less so for real-time television pre-processing. Nevertheless, the deterministic stage is so computationally simple that it is conceivable that the selection of a `suitable' threshold for line detection could be handled as part of a real-time system.
The Bayesian refinement step was introduced solely in an effort to improve the hands-off operation of the algorithm. It suffers from one major disadvantage in that it assumes that the line traverses the entire image, which is usually the case but not always. It is possible to design a scheme which uses binary indicator variables to switch on and off the introduction of a line profile at different points vertically along the line. The estimation of these variables could be incorporated as part of the refinement strategy.
The idea of treating the line as an area of missing data was necessary only because a good model of the degradation could not be found. An alternative degradation model is that of the line being formed by a pulse passed through a second-order system. This model can be adopted in the refinement stage, so that the coefficients of the system, as well as the height and width of the pulse, become model parameters. Estimating these parameters may allow a more general shape to be fitted, so that the vertical evolution of the line can be better tracked and hence better removed. This is one focus of current work.
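A minimal sketch of this alternative model, assuming a rectangular pulse driven through a second-order resonator (the pole radius and angle `r`, `omega` standing in for the system coefficients, and all values here being illustrative assumptions), shows how such a system reproduces the characteristic dark dip flanked by faint bright lobes:

```python
import numpy as np

def second_order_profile(width, depth, r=0.7, omega=0.6, n=33):
    """Hypothetical degradation model: a rectangular pulse of the given
    width and depth driven through the second-order resonator
    y[k] = 2 r cos(omega) y[k-1] - r^2 y[k-2] + x[k].
    In the refinement stage r, omega and the pulse height/width would
    become the model parameters to be estimated."""
    x = np.zeros(n)
    centre = n // 2
    x[centre - width // 2 : centre - width // 2 + width] = -depth
    b1, b2 = 2 * r * np.cos(omega), -r * r
    y = np.zeros(n)
    for k in range(n):                      # direct-form IIR recursion
        y[k] = x[k]
        if k >= 1:
            y[k] += b1 * y[k - 1]
        if k >= 2:
            y[k] += b2 * y[k - 2]
    return y

profile = second_order_profile(width=3, depth=40.0)
print(profile.min() < -40.0, profile.max() > 0.0)   # dark dip, bright lobes
```

Fitting such a profile column by column would let the estimated shape evolve smoothly down the frame, rather than fixing a single interpolated width.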
As a final note, it must be recognised that the techniques presented by Hirani et al. and Strohmer are viable alternatives to the spatial AR interpolation process presented here. Hirani et al. employ a POCS-based method in the frequency domain for reconstructing missing patches by manually locating regions of similar texture in the image; this could be adapted for use here. Strohmer's technique interpolates regions using trigonometric polynomials, in a kind of weighted FFT formulation. This is a much more suitable approach than that of Hirani et al., although the computational complexity is higher. In both cases, however, it would be harder to deal with the visibility of the relatively `smooth' interpolant.