An image with fine details is key to image analysis and feature extraction. Haze, fog, and suspended particles in the atmosphere obstruct image visibility because those particles scatter light in the medium [1, 2]. Visibility improvement has been one of the most sought-after research topics of the last decade, and thousands of papers address the problem [3]. Several techniques have been proposed so far; because the problem is ill-posed and prediction-based, no single method solves it completely. Dehazing, defogging, and deraining techniques fall into three broad categories: (i) image enhancement based, (ii) image fusion based, and (iii) image restoration based [3].
Image enhancement methods improve the contrast and visual appearance of the image without modelling the image degradation. Image fusion based methods maximize the information drawn from multiple source images. Image restoration based methods invert an optics-based physics model of the degraded image and compensate for the distortion with some statistical prior. Among the three categories, image restoration methods have so far addressed the dehazing problem most precisely. The image formation optical model was first proposed by Koschmieder [10] and improved by McCartney [11].
The pioneering work of Oakley et al. [9] first proposed an image formation scattering model to address visibility improvement. They solved the inverse model by estimating the scattered and attenuated relative pixel flux; the estimated attenuation map was then subtracted from the hazy image to produce a clear image. A temporal filter was also presented to solve the problem.
Tan [4] in 2008: This work improved on the contrast obtained by Oakley [9]. Further, in [4] the transformation of a grey image into a colour image was performed under two conditions: (i) the contrast of the clear image should be higher than that of the hazed image, and (ii) the attenuation of field spots is a continuous function of distance and becomes gradually smooth.
Fattal [5] in 2008: In [5], a novel prior assumed no correlation between object surface shading and the transmission map. Independent component analysis (ICA) and a Markov random field (MRF) model were applied to estimate the surface albedo, thereby quantifying the medium transmission of the scene and recovering the clear image from the hazy one.
J Kopf et al. [29] in 2008: established an integrated system for browsing, enhancing, and manipulating outdoor photographs in association with existing GIS digital terrain and urban models. The generated images are of high quality and clear, but the approach requires expensive infrastructure and offline processing.
He et al. [6]: Dark channel prior (DCP) is undoubtedly a milestone in dehazing and overcomes the drawbacks of the algorithms mentioned above. Its principle is that, in a clear image, some colour channel has minimum intensity within each patch. This prior, applied to the atmospheric scattering model, produced remarkable results. It is combined with soft matting to refine the restored image, which is responsible for its high computational complexity.
Tarel et al. [7]: developed a fast contrast-based enhancement to remove haze with linear complexity. The atmospheric veil function was assumed to vary slowly locally, and the extinction coefficient of the medium was estimated accordingly. The transmission coefficient of the medium was estimated by pretreatment and median filtering, and white balancing was applied to smooth the heterogeneous medium.
Berman et al. [8]: a non-local prior handling nonuniform degradation. The method observes that the colours of a haze-free image cluster tightly and spread over the entire RGB space; in a hazy image, depending on their different transmission coefficients, the pixels of each cluster form a line in colour space, called a haze line. Distance maps and haze-free images are then recovered from the haze lines. The algorithm is linear, fast, deterministic, and requires no training.
K Zhang [30]: In [30] (2017), a novel idea was introduced with the DnCNN model, which utilizes batch normalization and residual connections for blind Gaussian denoising. DnCNN can be regarded as a generalization of the Trainable Nonlinear Reaction-Diffusion (TNRD) model for fast and effective image restoration.
Y Chen [31] in 2015: described TNRD (Trainable Nonlinear Reaction-Diffusion) based image restoration, with highly parameterized linear filters followed by highly parameterized influence functions, trained with a loss-based approach. It is equally applicable to Gaussian image denoising, super-resolution, and deblocking.
Kim et al. [35]: The contrast of hazy images is enhanced with minimum information loss as a cost-function compensation. Static images and video are processed in real time; flickering artifacts in video and ringing artifacts in still images are removed [35].
Kolar: Non-homogeneous illumination is corrected by optimizing the parameters of a B-spline shading model against Shannon's entropy with Parzen windowing. Gradient-based optimization algorithms efficiently use the derivatives of the entropy. The work investigates extensively large retinal images to correct inhomogeneous illumination [36].
PSAC (Photoshop Auto Contrast algorithm): widely used in Adobe Photoshop for image contrast improvement [37].
Tang
studied in
depth different haze-relevant features, especially DCP, in a learning framework
to extract the best dehazing feature combination. The synthetic hazy dataset,
as a training set, was found effective for dehazing real-world data [38].
Xiao [39]: A real-time, single-image, retinex-based colour-preserving defogging method is presented. The method restores clear images from foggy images with true colour in real time.
Contribution: Efficient and effective results are found in [12-15, 32]. The blind Gaussian denoiser (DnCNN) effectively improves depth-map quality; thus, refined transmission maps are extracted. Finally, good-quality reconstruction is achieved through the linear optics model. Both the DnCNN and the optics model are linear.
Fig. 1. Example of (a) sample hazy image, (b) dehazed image
The rest of this paper is arranged as follows. The image formation model is discussed with mathematical details and related works in Section 2. In Section 3, the experiment is examined in detail with qualitative and quantitative analysis. Section 4 summarizes the work and discusses shortcomings and future research directions.
In prior-knowledge-based dehazing, the original scene radiance is recovered through the degradation model and the physics-based optical scattering model, shown respectively in figure 2 and figure 3.
Figure 2. a. Image Degradation Model (left); b. Image Formation Optical Model (right)
$$I(x) = J(x)\,e^{-\beta d(x)} + A\bigl(1 - e^{-\beta d(x)}\bigr) \qquad (1)$$
This model relies on Mie scattering [10,11]. Oakley et al. [9] experimented with this model for the first time to improve image quality under poor visibility conditions. Since then, the problem has been a research hotspot.
The image captured by the camera is divided into two parts: one is the direct attenuation of light travelling from the original scene to the camera or observer, and the other is atmospheric light scattered toward the camera. The final image at the observation point is therefore blurry, low-contrast, poorly visible, and noisy. This mechanism is expressed in figure 2b and represented by equation (2)
$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr) \qquad (2)$$
where I(x) is the degraded image, J(x) represents the original scene radiance, t(x) is the transmission map, and A is the atmospheric light. The three variables J(x), t(x), and A are unknown, so single image dehazing is an under-constrained problem. Efficient estimation of t(x) and A is the key to effective haze removal, and their optimum estimation is the key to restoring J(x). t(x) is estimated from depth estimation, from multiple images, or from some prior with a single image. Estimating the unknown parameters leads to an ill-posed inverse problem, or a constrained/intractable optimization problem. The term J(x)t(x) is known as direct attenuation, since the original scene radiance decays exponentially with distance. A(1 - t(x)) is called the atmospheric veil, or airlight: atmospheric light scattered toward the camera, which shifts colour and degrades the scene.
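As a quick illustration of this forward model, the minimal Python sketch below synthesizes a hazy image from equation (2), with t = e^(-βd) as in equation (1); the values of A, β, and the toy scene are illustrative only.

```python
import numpy as np

def synthesize_haze(J, d, A=0.9, beta=1.0):
    """Forward model of Eq. (2): I = J*t + A*(1 - t), with t = exp(-beta*d)."""
    t = np.exp(-beta * d)            # transmission map
    t3 = t[..., np.newaxis]          # broadcast over the colour channels
    return J * t3 + A * (1.0 - t3)   # direct attenuation + airlight

# toy example: a 4x4 RGB scene whose depth grows from left to right
J = np.random.rand(4, 4, 3)                     # stand-in for scene radiance
d = np.tile(np.linspace(0.0, 5.0, 4), (4, 1))   # depth map
I = synthesize_haze(J, d)                       # far (right) pixels approach A
```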
Figure 3. a. Flow chart [30] (left); b. DnCNN network architecture [30] (right)
DnCNN is a feedforward, very deep convolutional neural network for denoising based on discriminative learning, a current research hotspot. The architecture incorporates a learning algorithm with regularisation: residual learning and batch normalization are incorporated to speed up training and boost denoising performance. The model in figure 3 efficiently removes blind Gaussian noise; it can clean the noise within its hidden layers. These features attract several applications, such as JPEG artifact removal, single image super-resolution, image deblocking, and GPU computing [30]. Any deep CNN involves two steps: (i) network architecture design (here, VGG-style) and (ii) model learning from training data (residual learning for speed-up and better denoising performance with batch normalization) [30].
a. Network Depth
Convolution kernels are of size 3x3, without any pooling layers. The receptive field at depth d is (2d+1) x (2d+1). A larger receptive field grasps a larger image area, so the trade-off between performance and efficiency is an important issue in choosing a proper depth d for DnCNN. The receptive field size relates to the patch size of the denoising model used: highly noisy zones require larger patch sizes for effective reconstruction. DnCNN uses a noise level of σ = 25.
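The receptive-field arithmetic is simple enough to state as a one-line helper; the depth-17 example below follows the (2d+1) x (2d+1) rule given above.

```python
def receptive_field(depth):
    """Receptive field of `depth` stacked 3x3 convolutions with no pooling."""
    return 2 * depth + 1

print(receptive_field(17))  # 35, i.e. a 35x35 receptive field at depth 17
```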
b. Network Architecture
The input to DnCNN is a noisy observation y = x + v. Discriminative learning methods such as MLP (multilayer perceptrons) and CSF (cascade of shrinkage fields) learn a mapping function F(y) = x to estimate the clean image. In the DnCNN model, the residual mapping R(y) ≈ v is learned instead, so that x = y - R(y).
The averaged mean squared error between the estimated residuals and the true residuals of the noisy inputs is
$$\ell(\Theta) = \frac{1}{2N}\sum_{i=1}^{N}\bigl\lVert \mathcal{R}(y_i;\Theta) - (y_i - x_i)\bigr\rVert_F^2 \qquad (3)$$
This serves as the loss function for learning the trainable parameters Θ; {(x_i, y_i)}_{i=1}^{N} denotes the N noisy-clean image pairs. Figure 3 shows the model used to learn the residual images.
Deep Architecture
There are three types of layers: (i) Conv+ReLU: the first layer uses 64 filters of size 3x3xc (c is the number of image channels) to produce 64 feature maps, followed by a ReLU unit (max(0, .)) for nonlinearity; (ii) Conv+BN+ReLU: batch normalization is inserted between convolution and ReLU, with 64 filters of size 3x3x64 in the hidden layers at depths 2 to (d-1); (iii) Conv: a single convolution reconstructs the output in the last layer.
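A minimal PyTorch sketch of these three layer types and of the residual objective of equation (3) is given below; the paper's experiments use the pretrained MATLAB DnCNN, so this version is purely illustrative and its sizes are assumptions.

```python
import torch
import torch.nn as nn

def make_dncnn(depth=17, channels=1, features=64):
    # (i) Conv+ReLU, (ii) (depth-2) x Conv+BN+ReLU, (iii) a final Conv
    layers = [nn.Conv2d(channels, features, 3, padding=1),  # zero padding keeps size
              nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                   nn.BatchNorm2d(features),
                   nn.ReLU(inplace=True)]
    layers.append(nn.Conv2d(features, channels, 3, padding=1))
    return nn.Sequential(*layers)

model = make_dncnn()
x = torch.randn(1, 1, 40, 40)            # stand-in clean patch
y = x + 0.1 * torch.randn_like(x)        # noisy observation y = x + v
r = model(y)                             # residual estimate R(y) ~ v
x_hat = y - r                            # residual learning: x = y - R(y)
loss = nn.functional.mse_loss(r, y - x)  # Eq. (3) up to a constant factor
```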
Removal of Boundary Artifacts
To keep the output image the same size as the input, zero padding is applied before each convolution. In this way, boundary artifacts are removed.
c. Unification of Residual Learning and Batch Normalization for Image Denoising:
The model in figure 3 can equally produce x via F(y) or predict the noise v via R(y). ReLU and batch normalization are used not only to speed up training but also to make the estimate F(y) as close as possible to the clean image x through estimation of the residual image v. Residual learning is integrated with batch normalization to speed up training and to counteract the internal covariate shift during parameter training. Together, batch normalization and residual learning boost denoising performance the most.
d. Association with TNRD
The proposed DnCNN model can be regarded as a generalization of the one-stage TNRD (Trainable Nonlinear Reaction-Diffusion) model. Initially, TNRD was developed to address the problem below
$$\min_{x}\;\Psi(y - x) + \lambda\sum_{k=1}^{K}\sum_{p=1}^{N}\rho_k\bigl((f_k * x)_p\bigr) \qquad (4)$$
It is trained from a large set of noisy-clean image pairs; N is the image size (number of pixels) and λ is the regularisation parameter. f_k * x represents the convolution of the image x with the k-th kernel f_k, and ρ_k(.) indicates the tunable k-th penalty function of the parameterized TNRD model. In Gaussian denoising, Ψ(z) = ½||z||².
The first-stage diffusion iteration can be represented as one gradient descent inference step starting at point y
$$x_1 = y - \alpha\lambda\sum_{k=1}^{K}\bigl(\bar{f}_k * \phi_k(f_k * y)\bigr) - \alpha\left.\frac{\partial \Psi(z)}{\partial z}\right|_{z=0} \qquad (5)$$
where $\bar{f}_k$ is obtained by a 180° rotation of the filter f_k, also known as the adjoint of f_k, α is the step size, and ρ'_k(.) = φ_k(.). For Gaussian denoising, ∂Ψ(z)/∂z|_{z=0} = 0, so equation (5) turns into
$$v_1 = y - x_1 = \alpha\lambda\sum_{k=1}^{K}\bigl(\bar{f}_k * \phi_k(f_k * y)\bigr) \qquad (6)$$
v_1 is the estimated residual of x with respect to y. The effect of the influence function φ_k(.) can be considered a pointwise nonlinearity applied to convolutional feature maps.
Equation (6) thus represents a two-layer feed-forward CNN. The DnCNN of figure 3 can be regarded as a generalized TNRD in which: (i) replacing the influence function with ReLU simplifies CNN training; (ii) the image-modelling capacity increases as the depth of the CNN increases; and (iii) batch normalization boosts CNN performance. Most of the DnCNN parameters represent image priors.
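To make the correspondence concrete, the sketch below implements the single gradient-descent step of equations (5)-(6) with NumPy/SciPy; the random filters and the tanh influence function are stand-ins for the trained f_k and φ_k.

```python
import numpy as np
from scipy.signal import convolve2d

def tnrd_residual(y, filters, phi, alpha=0.1, lam=1.0):
    """Eq. (6): v1 = alpha*lambda * sum_k conv(phi(conv(y, f_k)), adjoint(f_k))."""
    v1 = np.zeros_like(y)
    for f in filters:
        fbar = np.rot90(f, 2)  # adjoint filter: 180-degree rotation of f_k
        v1 += convolve2d(phi(convolve2d(y, f, mode="same")), fbar, mode="same")
    return alpha * lam * v1

y = np.random.rand(32, 32)
filters = [np.random.randn(3, 3) for _ in range(8)]  # stand-ins for learned f_k
x1 = y - tnrd_residual(y, filters, np.tanh)          # Eq. (5) with dPsi/dz|_0 = 0
```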
Though the DnCNN model is designed for Gaussian noise, it is equally applicable to any type of noise: v_1 can be obtained from equation (6) provided that
$$\left.\frac{\partial \Psi(z)}{\partial z}\right|_{z=0} = 0 \qquad (7)$$
Under condition (7), equation (6) is applicable for any kind of noise. Thus, the DnCNN model can be used efficiently to remove SISR and JPEG artifacts and to clean hidden layers.
e. Tending to General Image Denoising:
A Gaussian denoising model is best suited to a fixed noise level. Thus, before cleaning, the noise level is estimated and scaled down to a particular level, and the model is then applied for efficient results.
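One plausible reading of this scaling step is sketched below: the input is rescaled so that its estimated noise level matches the fixed level the model was trained for, denoised, and rescaled back. Both `denoiser` and `sigma_est` are caller-supplied assumptions, not part of the original method description.

```python
def denoise_at_fixed_sigma(y, denoiser, sigma_est, sigma_model=25.0):
    """Rescale so the noise std matches sigma_model, denoise, then undo.
    Scaling intensities by s also scales the additive noise std by s."""
    s = sigma_model / sigma_est
    return denoiser(y * s) / s
```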
Training images carry AWGN over a wide range of noise levels, down-sampled images with multiple upscaling factors, and JPEG images with different quality factors. The method performs excellently not only on blind Gaussian image denoising but also on image deblocking, SISR (single image super-resolution), and blind image denoising.
Refined transmission is obtained from the turbid transmission via an approximate depth map (the minimum intensity channel in the proposed model). The finer depth map is denoised with the blind Gaussian DnCNN in a learned framework through hidden layers. Equation (2) is the optical physics-based image degradation model [1,2]: I(x), J(x), t(x), A, and d are the degraded image, the original image, the transmission, the atmospheric light, and the distance, respectively. β is the extinction coefficient of the atmosphere, and the transmission is represented as
$$t(x) = e^{-\beta d(x)} \qquad (8)$$
The term J(x)t(x) is responsible for direct attenuation and the term A(1 - t(x)) represents airlight; these two terms constitute the hazy model. Direct attenuation deteriorates the brightness of pixels as the light travels away from the source, whereas the airlight term drives pixel intensity toward white or grey as the transmission decreases. It is clear from equation (2) that the airlight term is additive: as transmission decreases, brightness increases while colour fades and saturation drops. During transmission from the original scene point to the acquisition point, each pixel is corrupted by additive as well as multiplicative noise. This noise shifts the colour, contrast, brightness, and sharpness of the pixel, making the resulting image whitish and almost invisible. Mathematically, as d tends to infinity, t(x) tends to zero and, consequently, I(x) tends to A. This is why far objects look whitish and gradually vanish [27], and why single image haze removal is difficult to solve.
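Written out, this limiting behaviour follows directly from equations (2) and (8):

$$\lim_{d(x)\to\infty} t(x) = \lim_{d(x)\to\infty} e^{-\beta d(x)} = 0 \quad\Longrightarrow\quad I(x) \to J(x)\cdot 0 + A(1 - 0) = A$$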
The minimum over the three RGB channels is chosen as a depth map [12,13], and refinement is done using DnCNN on the depth map to obtain a noiseless output, which produces a clear transmission estimate. This is shown in equations (9), (10), and (11).
$$I_{c_{\min}}(x) = \min_{c\in\{r,g,b\}} I_c(x) \qquad (9)$$
I_c denotes each channel of an RGB or multi-channel (at least three channels) image, and I_{c_min} the minimum intensity channel. The minimum intensity channel I_{c_min} can now be considered a raw depth map for recovering the haze-free image, and it is easily made noise-free, or smoothed, with the DnCNN technique, as shown in equation (10).
$$\tilde{I}_{c_{\min}}(x) = \mathrm{DnCNN}\bigl(I_{c_{\min}}(x)\bigr) \qquad (10)$$
Equation (10) gives the noise-free minimum intensity channel, i.e., the refined depth map; this channel is normalized. The complement of this channel yields the transmission estimate t(x), reconstructing prominent image structure with reduced computational complexity and an easy implementation. With DnCNN, good-quality haze-free images are generated without compromising the important structure of the original image. Generating a depth map by minimum patch estimation is more accurate, but computationally expensive [7].
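The cost difference mentioned here is easy to see in code: the per-pixel minimum of equation (9) is a single O(N) reduction, while a dark-channel-style patch minimum needs a sliding window. A small NumPy/SciPy sketch follows; the window radius is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def min_channel(I):
    """Eq. (9): per-pixel minimum over the colour channels (raw depth map)."""
    return I.min(axis=2)

def min_patch_channel(I, r=7):
    """Patch-wise minimum (dark-channel style): more accurate but costlier."""
    return minimum_filter(I.min(axis=2), size=2 * r + 1)
```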
The
final
refined transmission
is represented by equation (11).
$$t_{\mathrm{new}}(x) = 1 - k\,\tilde{I}_{c_{\min}}(x) \qquad (11)$$
t_new and k are the refined transmission and a proportionality constant for aerial perspective, respectively [33,34]. The value of k lies between 0 and 1, from clear visibility to no visibility. The concept of k, the haziness factor, is discussed in detail in [7,12-15,32,43]; it is chosen dynamically for flexible, visually pleasing images. Atmospheric light is estimated as the average of the top 1% of pixel intensities of each channel. The estimated transmission and atmospheric light recover the original scene radiance J(x) from equation (2), which can be rewritten as Eq. (12). The method is shown in figure 4, and the process is shown pictorially and in detail in figures 5 and 6.
$$J(x) = \frac{I(x) - A}{t_{\mathrm{new}}(x)} + A \qquad (12)$$
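Gathering equations (9)-(12), a minimal Python sketch of the SImDnCNNVI pipeline is given below. The `denoiser` argument stands in for the pretrained blind-Gaussian DnCNN, and the lower bound `t0` on the transmission is an assumption added here only to keep the division stable.

```python
import numpy as np

def dehaze(I, denoiser, k=0.9, t0=0.1):
    """I: float RGB image in [0, 1]; returns the recovered radiance J."""
    raw = I.min(axis=2)                         # Eq. (9): minimum intensity channel
    refined = np.clip(denoiser(raw), 0.0, 1.0)  # Eq. (10): DnCNN-refined depth map
    t = 1.0 - k * refined                       # Eq. (11): refined transmission
    # atmospheric light: average of the brightest 1% pixels of each channel
    n = max(1, I[..., 0].size // 100)
    A = np.array([np.sort(I[..., c].ravel())[-n:].mean() for c in range(3)])
    t3 = np.maximum(t, t0)[..., np.newaxis]
    return (I - A) / t3 + A                     # Eq. (12): recovered J(x)
```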
Fig. 4. Block diagram of the SImDnCNNVI model
Figure 5. Analysis of the SImDnCNNVI model: input, depth map, transmission, output, improved depth map, improved transmission map, improved output, depth map of the final output, transmission map of the final output (L-R)
Figure 6. Top: L-input, R-depth map; bottom: L-output, R-depth map
Figure 7. L-R: top-hue, middle-saturation, bottom-value of hazy input; top-hue, middle-saturation, bottom-value of improved output
In figure 7, the hue, saturation, and value channels of the input (left) and output (right) are shown. The saturation and value histograms of the output image are more spread out than those of its hazed version, indicating that image quality has improved, while the hue channel is almost unchanged. Thus colour attenuation is prevented: colours tend toward saturation, and brightness increases with contrast. Overall, the direct attenuation term is little affected by the rectification, while the airlight term decreases.
To verify the usefulness of the SImDnCNNVI model, we performed extensive experiments on both synthetic and natural hazy image datasets [Frida, He, O-Haze] and compared the results with seven state-of-the-art methods. The experiments are run in MATLAB 2018a with the SImDnCNNVI model. For the synthetic datasets, we evaluate our results quantitatively and qualitatively. For the natural hazy image dataset, we provide qualitative results to illustrate our superior performance in generating perceptually pleasing, haze-free images.
Four images are selected from the Frida dataset for the comparison experiment. These images are natural and have large sky areas, which create halo effects and blocking artifacts during dehazing. Figure 8 contains the four hazy images with their depth maps, transmission estimates, and haze-free images. The output images are free of halo effects, preserve natural colour, and are visibly clear. Figure 9 presents one sample image used for comparative analysis against five state-of-the-art methods. It is evident from figure 9 that the proposed method gives greater visibility and a more visually pleasing image; in particular, the sky region shows none of the reflections that are a very common problem for dehazing methods. The synthetic Frida dataset is also tested with four images. Figure 10 shows these four synthetic low-visibility images in the top row and the SImDnCNNVI dehazed output in the bottom row. The method is as efficient at removing haze from synthetic hazy images as from natural ones: trees and buildings are invisible in the synthetic hazy images, whereas the dehazed results are visibly clear. Ten images from the O-Haze dataset are processed with the proposed algorithm and compared to the ground truth (GT) and seven benchmark algorithms; the results of the proposed work, shown in table V, are very satisfactory.
Figure 8. Four images: a. hazy, b. depth information, c. transmission estimation, and d. dehazed output
Figure 9. y16_photo.png with different state-of-the-art techniques (Fattal, He, Kopf, Tan, Tarel) and the proposed work
Subjective tests are biased [41], so quantitative assessments are investigated for the experiments. PSNR, SSIM, entropy, and compression ratio are a few criteria of effectiveness. Table I reports the SSIM, PSNR, and entropy (hazy and haze-free) of the proposed method on the four images of figure 8. The experimental results show that the SImDnCNNVI model is efficient and effective: SSIM is appreciable with an average of 0.7232, PSNR is also high (average 12.6150), and the entropy of the haze-free images is higher than that of the hazy images. For figure 9, an objective analysis of one sample image compares the state-of-the-art work of Fattal, He, Kopf, Tan, and Tarel with the proposed work. As reflected in table II, the proposed approach outperforms the others in compression ratio and entropy, and the remaining parameters are respectable. The testing therefore reports good results.
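The sketch below shows how these criteria can be computed for a reference/restored pair with scikit-image (a recent version with `channel_axis` is assumed); the compression-ratio criterion depends on the codec used and is omitted.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.measure import shannon_entropy

def evaluate(reference, restored):
    """PSNR, SSIM, and entropy for float RGB images with values in [0, 1]."""
    return {
        "PSNR": peak_signal_noise_ratio(reference, restored, data_range=1.0),
        "SSIM": structural_similarity(reference, restored,
                                      channel_axis=-1, data_range=1.0),
        "Entropy": shannon_entropy(restored),
    }
```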
Table I. Objective assessment of the proposed method with SSIM, PSNR, and entropy for figure 8
Table II. Objective analysis of one sample image with the state-of-the-art work of He, Tan, Tarel, Kopf, Fattal, and the proposed work in figure 9
Figure 10. Synthetic images from the Frida2 dataset: a. hazy image, b. dehazed image
Figure 11. More visual comparative analysis
Figure 11 shows an additional visual comparison of different techniques with the proposed work; the proposed method yields greater visibility with fine details. The corresponding objective analysis in table III finds that the proposed results perform well against the others. Because the DnCNN is already trained in MATLAB 2018a, our program takes very little time to run, whereas all the other mentioned methods are offline with high computational complexity. As already explained, the DnCNN model removes blocking artifacts and reduces blind noise, so the reconstructed output produces clear images through its hidden layers.
Table III. Objective evaluation of figure 11
Images with different forms of degradation, such as underwater, rain, close objects, and nighttime, were also studied, and remarkable results were obtained, as shown in figure 12 for the frida2 dataset. It can therefore be concluded that the proposed approach is equally applicable to any kind of degraded image.
Figure 12. Application on nine extremely degraded images of the frida2 dataset: a. input (top row), b. SImDnCNNVI dehazing output (bottom row)
Table V. O-Haze dataset, 10 images
In table V, ten images from the O-Haze GT dataset [42] are presented with the results of seven benchmark algorithms and compared with ours. Our results visually outperform almost all the others; [42] performs better than ours on some images.
Run time evaluates the effectiveness of an algorithm in both time and space, and computational power (particularly the GPU) plays a significant role in an algorithm's effectiveness. We perform our work on an Intel Core i3-3110M CPU @ 2.40 GHz with a 64-bit operating system and MATLAB 2018a; even so, our algorithm is efficient and has low complexity compared to state-of-the-art techniques [He, Fattal, Meng, Tarel]. The proposed method is linear in N = n x m, the number of pixels in the image; restoring the dehazed image from the transmission map is O(N). All the methods are implemented in MATLAB 2018a and evaluated on the same machine. The average run times for two image resolutions are shown in Table IV. In [24], it was shown that state-of-the-art methods take a few seconds on the same machine (Intel CPU 3.40 GHz, 16 GB memory, and an NVIDIA GeForce GTX 285 (1 GB) graphics card). Our method compares very well with the other state-of-the-art works.
Table IV. Run time
This paper addresses the classical constrained, ill-posed, inverse problem of single image dehazing, broadly visibility improvement: haze scales down the visual quality and visibility of a single image, and its removal has immense applications. The efficient DnCNN model is a popular blind-denoising, feed-forward, very deep learning architecture. The depth of the image is recovered through DnCNN blind denoising, which cleans the transmission map in the hidden layers without losing original image information. An added effect of DnCNN denoising is the removal of blurring, blocking, and resolution problems. Atmospheric light is estimated through the average of the 1% maximum intensity pixels of each channel. The adaptable haziness factor makes the algorithm effective and rich [32], and the algorithm is linear. State-of-the-art dehazing techniques recover the haze-free image at the cost of either time and memory complexity or visual quality [4-7, 29]. Our approach is fast, has low computational complexity, and is efficient, with no halo effect, no colour shifting, and improved visibility. Finally, the method is adaptable to a wide range of degraded images, in day, night, rainy, and underwater conditions, on natural and synthetic image datasets, and it is compared to seven benchmark algorithms on the GT O-Haze dataset with statistical parameters such as PSNR, SSIM, entropy, and compression ratio.
[1] Sung Cheol Park, Min Kyu Park, and Moon Gi Kang, Super-Resolution Image Reconstruction: A Technical Overview, IEEE Signal Processing Magazine, May 2003.
[2] Yoav Y. Schechner and Yuval Averbuch, Regularized Image Recovery in Scattering Media, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 9, September 2007.
[3] W. Wang, X. Yuan, Recent Advances in Image Dehazing, IEEE/CAA Journal of Automatica Sinica, Vol. 4, No. 3, July 2017.
[4] R. Tan, Visibility in Bad Weather from a Single Image, CVPR 2008, IEEE Xplore, DOI: 10.1109/CVPR.2008.4587643, ISSN: 1063-6919.
[5] R. Fattal, Single Image Dehazing, ACM Transactions on Graphics (TOG), Vol. 27, Issue 3, August 2008.
[6] K. He, J. Sun, and X. Tang, Single image haze removal using dark channel prior, IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 2009, pp. 1956-1963.
[7] J. P. Tarel and N. Hautiere, Fast visibility restoration from a single color or gray level image, IEEE 12th International Conference on Computer Vision (2009), pp. 2201-2208.
[8] D Berman, T Treibitz, S Avidan,
Non-local Image Dehazing, CVPR2016.
[9] J. P. Oakley and B. L. Satherley, "Improving image quality in poor visibility conditions using a physical model for contrast degradation," IEEE Trans. Image Process., Vol. 7, No. 2, pp. 167-179, Feb. 1998.
[10] H. Koschmieder, Theorie der
horizontalen sichtweite, Beitr.Phys. Freien Atm., vol. 12, 1924, pp. 171–181.
[11] E J McCartney , Optics of the
Atmosphere: Scattering by Molecules and Particles, New York, NY, USA:Wiley,
1976.
[12] D. Das, S. Roy, S. S. Chaudhuri, Dehazing Technique based on Dark Channel Prior model with Sky Masking and its quantitative analysis, CIEC16, IEEE Xplore, IEEE Conference ID: 36757.
[13] S. Roy, S. S. Chaudhuri, Modelling and control of sky pixels in visibility improvement through CSA, IC2C2SE 2016.
[14] S. Roy, S. S. Chaudhuri, Modeling of Ill-Posed Inverse Problem, IJMECS, 2016, 12, pp. 46-55.
[15] S. Roy, S. S. Chaudhuri, Low Complexity Single Colour Image Dehazing Technique, Intelligent Multidimensional Data and Image Processing, June 2018, IGI Global.
[16] Raghuram Rangarajan, Ramji Venkataramanan, Siddharth Shah, Image Denoising Using Wavelets, December 16, 2002.
[17] Siraj Sidhik, Comparative study of Birge-Massart strategy and unimodal thresholding for image compression using wavelet transform, Optik, Elsevier, 126 (2015), pp. 5952-5955.
[18] E. Hostalkova, A. Prochazka, Wavelet Signal and Image Denoising, Institute of Chemical Technology, Department of Computing and Control Engineering.
[19] Anutam, Rajni, Comparative Analysis of Filters and Wavelet Based Thresholding Methods for Image Denoising, SBSSTC, Ferozepur, Punjab.
[20] Lei Lei, Chao Wang, X. Liu, Discrete Wavelet Transform Decomposition Level Determination Exploiting Sparseness Measurement, World Academy of Science, Engineering and Technology, International Journal of Electrical and Computer Engineering, Vol. 7, No. 9, 2013.
[21] D. L. Donoho and I. M. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," Journal of the American Statistical Association, Vol. 90, No. 432, pp. 1200-1224, December 1995.
[22] A Dixit, P Sharma, A Comparative
Study of Wavelet Thresholding for Image Denoising, I.J. Image, Graphics and
Signal Processing, 2014, 12, 39-46 , DOI: 10.5815/ijigsp.2014.12.06
[23] H Guo, C S Burrus, Fast Approximate
Fourier transform via Wavelets Transform , Proceedings of the SPIE, Volume
2825, p. 250-259 (1996).
[24] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, M.-H. Yang, Single image dehazing via multi-scale convolutional neural networks, European Conference on Computer Vision, Springer, Cham, October 2016, pp. 154-169.
[25] L. Kratz and K. Nishino, “Factorizing
scene albedo and depth from a single foggy image,” in Proc. IEEE 12th Int.
Conf. Comput. Vis. (ICCV), Sep./Oct. 2009, pp. 1701–1708.
[26] Gaofeng MENG, Ying WANG, Jiangyong
DUAN, Shiming XIANG, Chunhong PAN , Efficient Image Dehazing with Boundary
Constraint and Contextual Regularization , IEEE International Conference on
Computer Vision , 2013 IEEE, pp.617-624.
[27] Q Zhu, J Mai, L Shao , A Fast Single
Image Haze Removal Algorithm Using Color Attenuation Prior , IEEE Transactions
on Image Processing, Vol. 24, No. 11, November 2015, pp.3522-3533
[28] Dana Berman, Tali Treibitz, Shai Avidan, Non-Local Image Dehazing, IEEE CVPR 2016, pp. 1674-1682.
[29] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, D. Lischinski, Deep photo: Model-based photograph enhancement and viewing, ACM Transactions on Graphics (TOG), 2008.
[30] K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising, IEEE TIP, 2017.
[31] Y. Chen, T. Pock, Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration, IEEE TPAMI, 2016.
[32] S Roy, S S Chaudhuri,
Fast Single Image Haze Removal Scheme
Using Self-Adjusting: Haziness Factor Evaluation,
IGI Global, Jan 2019.
[33] E. B. Goldstein. Sensation and
perception. 1980.
[34] A. J. Preetham, P. Shirley, and B. Smits, A practical analytic model for daylight, SIGGRAPH, pp. 91-100, 1999.
[35] Kim, J.-H., Jang, W.-D.,
Sim, J.-Y., and Kim, C.-S. (2013). Optimized contrast enhancement for real-time
image and video dehazing. Journal of Visual Communication and Image
Representation, 24(3):410–425.
[36] L. Kubecka, J. Jan, and R. Kolar,
“Retrospective illumination correction of retinal images,” Journal of
Biomedical Imaging, vol. 2010, p. 11, 2010.
[37] T. Knoll, J. Knoll, PSAC, Adobe Photoshop, 1990.
[38] K Tang, J Yang, J Wang, Investigating Haze-Relevant Features in
a Learning Framework for Image Dehazing, IEEE Conference on Computer Vision and
Pattern Recognition, 2014.
[39] D Xu, C Xiao, J Yu, Color Preserving
Defog method for foggy and Haze Scenes, VISAPP 2009.
[40] C Xiao, J Gan, Fast
Dehazing using Guided Joint Bilateral Filter, Springer Verlag 2012.
[41] P. Mahamadi, A. E. Moghadam, S. Shirani, Subjective and Objective Quality Assessment of Image: A Survey, Majlesi Journal of Electrical Engineering, Vol. 9(1), 2015.
[42] C. O. Ancuti, R. Timofte, C. De Vleeschouwer, O-Haze: A Dehazing Benchmark with Real Hazy and Haze-Free Outdoor Images, CVPRW 2018.
[43] S Roy, S S Chaudhuri,
WLMS-based Transmission Refined Self-Adjusted No Reference Weather Independent
Image Visibility Improvement, IETE Journal of Research, September, 2019.