Abstract: Existing defogging algorithms suffer from color distortion, detail loss, and visible haze residue. To address these flaws, a defogging algorithm based on an improved conditional generative adversarial network (CGAN) is proposed. First, a generator with a symmetric skip-connection structure is designed to better preserve the underlying texture and structural information of the image and to share features between shallow and deep layers. Second, to preserve image detail and reduce artifacts, the loss function is redesigned: an L1 loss and a perceptual loss are introduced on top of the original network loss. On the HSTS dataset, the proposed algorithm achieves a peak signal-to-noise ratio of 27.3064 dB and a structural similarity of 0.9633, which are 5.728 dB and 0.0581 higher, respectively, than the best values of the other algorithms. After defogging, target detection mAP improves by 2.51% and recall by 4.31%. The experimental results show that the proposed algorithm reduces color shift, removes residual haze, and largely eliminates blocking artifacts, demonstrating clear advantages in both subjective and objective evaluation.
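The redesigned loss described above combines an adversarial term with L1 and perceptual terms. The sketch below illustrates that combination in plain NumPy under stated assumptions: the weights `lam_l1` and `lam_per` are illustrative, and the single fixed convolution standing in for a pretrained feature extractor (e.g. VGG activations) is a placeholder, not the authors' actual network.

```python
import numpy as np

def l1_loss(pred, target):
    """Pixel-wise L1 distance; preserves detail and is less blur-prone than L2."""
    return float(np.mean(np.abs(pred - target)))

def adversarial_loss(d_fake):
    """Generator-side adversarial term -log D(G(x)), for scores d_fake in (0, 1)."""
    eps = 1e-8  # avoid log(0)
    return float(-np.mean(np.log(d_fake + eps)))

def toy_features(img, kernel):
    """Placeholder feature map: one valid 2-D convolution.
    A real perceptual loss would use activations of a pretrained network
    (e.g. VGG); this fixed kernel is only an assumption for illustration."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def perceptual_loss(pred, target, kernel):
    """Mean squared distance between feature maps of prediction and target."""
    fp = toy_features(pred, kernel)
    ft = toy_features(target, kernel)
    return float(np.mean((fp - ft) ** 2))

def total_loss(pred, target, d_fake, kernel, lam_l1=100.0, lam_per=10.0):
    """Weighted sum: adversarial + lam_l1 * L1 + lam_per * perceptual."""
    return (adversarial_loss(d_fake)
            + lam_l1 * l1_loss(pred, target)
            + lam_per * perceptual_loss(pred, target, kernel))
```

When the generator output matches the target and the discriminator is fully fooled, both auxiliary terms vanish and only a negligible adversarial residue remains, which is the behavior the combined objective is meant to enforce.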