Show simple item record

dc.contributor.author	Galdran A.	en_US
dc.contributor.author	Vazquez-Corral J.	en_US
dc.contributor.author	Pardo D.	en_US
dc.contributor.author	Bertalmio M.	en_US
dc.description.abstract	We propose a novel image-dehazing technique based on the minimization of two energy functionals and a fusion scheme that combines the outputs of both optimizations. The proposed fusion-based variational image-dehazing (FVID) method is a spatially varying image enhancement process. It first minimizes a previously proposed variational formulation that maximizes contrast and saturation on the hazy input, and the iterates produced by this minimization are kept. A second energy, which shrinks the intensity values of well-contrasted regions faster, is then minimized; observing the shrinking rate yields a set of difference-of-saturation (DiffSat) maps. The iterates from the first minimization are fused with these DiffSat maps to produce a haze-free version of the degraded input. The FVID method neither relies on a physical model from which to estimate a depth map nor requires a training stage on a database of human-labeled examples. Experimental results on a wide set of hazy images demonstrate that FVID better preserves image structure in nearby regions that are less affected by fog, and that it compares favorably with current methods at removing haze degradation from faraway regions.	en_US
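The fusion step described in the abstract (combining the enhancement iterates using the DiffSat maps as per-pixel weights) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the actual energy functionals and the precise weight definitions are specified in the paper, and the function name, the normalization of the weight maps, and the `eps` guard are assumptions made here for the example.

```python
import numpy as np

def fuse_iterates(iterates, diffsat_maps, eps=1e-8):
    """Hypothetical fusion step: per-pixel weighted average of the
    enhancement iterates, weighted by the corresponding DiffSat maps.

    iterates:     list of K arrays of shape (H, W, 3), the iterates kept
                  from the first (contrast/saturation) minimization.
    diffsat_maps: list of K arrays of shape (H, W), non-negative weight
                  maps derived from the shrinking rate of the second
                  minimization.
    """
    stack = np.stack(iterates)            # (K, H, W, 3)
    weights = np.stack(diffsat_maps)      # (K, H, W)
    # Normalize the weights per pixel; eps avoids division by zero.
    weights = weights / (weights.sum(axis=0, keepdims=True) + eps)
    # Broadcast the weights over the color channels and fuse.
    return (weights[..., None] * stack).sum(axis=0)
```

With uniform weight maps this reduces to a plain average of the iterates; spatially varying DiffSat maps instead let well-dehazed iterates dominate in the regions where they performed best.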
dc.publisher	IEEE Signal Processing Letters	en_US
dc.subject	Color correction	en_US
dc.subject	contrast enhancement	en_US
dc.subject	image dehazing	en_US
dc.subject	image fusion	en_US
dc.subject	variational image processing	en_US
dc.title	Fusion-based variational image dehazing	en_US


Except where otherwise noted, this item's license is described as info:eu-repo/semantics/openAccess