Saturday, May 18, 2013

Image processing questions


INTERVIEW AND SHORT ANSWER QUESTIONS ON IMAGE PROCESSING - BIOMEDICAL JOBS & NOTES - QUESTIONS 1-20
1. Define Image?
An image may be defined as a two-dimensional light-intensity function f(x, y), where x and y denote spatial coordinates and the amplitude (value) of f at any point (x, y) is called the intensity, gray level, or brightness of the image at that point.
2. What is Dynamic Range?
The range of values spanned by the gray scale is called the dynamic range of an image. An image has high contrast if its dynamic range is high, and a dull, washed-out gray look if its dynamic range is low.
3. Define Brightness?
Brightness of an object is the perceived luminance of its surround. Two objects with different surroundings can have identical luminance but different brightness.
4. Define Tapered Quantization?
If gray levels in a certain range occur frequently while others occur rarely, the quantization levels are finely spaced in this range and coarsely spaced outside it. This method is sometimes called tapered quantization.
5. What do you mean by Gray level?
Gray level refers to a scalar measure of intensity that ranges from black through grays to white.
6. What do you mean by Color model?
A color model is a specification of a 3D coordinate system and a subspace within that system where each color is represented by a single point.
7. List the hardware-oriented color models?
1. RGB model
2. CMY model
3. YIQ model
4. HSI model
8. What are Hue and Saturation?
Hue is a color attribute that describes a pure color, whereas saturation gives a measure of the degree to which a pure color is diluted by white light.
9. List the applications of color models?
1. RGB model - used for color monitors and color video cameras
2. CMY model - used for color printing
3. HSI model - used for color image processing
4. YIQ model - used for color picture transmission
10. What is Chromatic Adaptation?
The hue of a perceived color depends on the adaptation of the viewer. For example, the American flag will not immediately appear red, white, and blue if the viewer has been subjected to high-intensity red light before viewing the flag. The color of the flag will appear to shift in hue toward cyan, the complement of red.
11. Define Resolution?
Resolution is a measure of the smallest discernible detail in an image. Spatial resolution is the smallest discernible detail in an image, and gray-level resolution refers to the smallest discernible change in gray level.
12. What is meant by pixel?
A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as pixels, image elements, picture elements, or pels.
13. Define Digital image?
When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.

14. What are the steps involved in DIP? 
1. Image Acquisition
2. Preprocessing
3. Segmentation
4. Representation and Description
5. Recognition and Interpretation
15. What is Recognition and Interpretation?
Recognition is the process of assigning a label to an object based on the information provided by its descriptors.
Interpretation means assigning meaning to a recognized object.
16. Specify the elements of DIP system? 
1. Image Acquisition
2. Storage
3. Processing
4. Display
17. Explain the categories of digital storage?
1. Short-term storage for use during processing.
2. Online storage for relatively fast recall.
3. Archival storage for infrequent access.

18. What are the types of light receptors?
The two types of light receptors are
1. Cones and
2. Rods
19. Differentiate photopic and scotopic vision?
Photopic vision:
1. The eye can resolve fine details with the cones, because each cone is connected to its own nerve end.
2. It is also known as bright-light vision.
Scotopic vision:
1. Several rods are connected to one nerve end, so only an overall picture of the image is obtained.
2. It is also known as dim-light vision.
20. How are cones and rods distributed in the retina?
In each eye, cones number about 6-7 million and rods about 75-150 million.

DIGITAL IMAGE FUNDAMENTALS AND TRANSFORMS

1. Define Image?
An image may be defined as a two-dimensional function f(x,y), where x and y are spatial (plane) coordinates and the amplitude of f at any pair of coordinates (x,y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.

2. Define Image Sampling?
Digitization of the spatial coordinates (x,y) is called image sampling. To be suitable for computer processing, an image function f(x,y) must be digitized both spatially and in amplitude.

3. Define Quantization?
Digitizing the amplitude values is called quantization. The quality of a digital image is determined to a large degree by the number of samples and discrete gray levels used in sampling and quantization.

4. What is Dynamic Range?
The range of values spanned by the gray scale is called the dynamic range of an image. An image has high contrast if its dynamic range is high, and a dull, washed-out gray look if its dynamic range is low.

5. Define Mach band effect?
The spatial interaction of luminance from an object and its surround creates a phenomenon called the Mach band effect.

6. Define Brightness?
Brightness of an object is the perceived luminance of its surround. Two objects with different surroundings can have identical luminance but different brightness.

7. Define Tapered Quantization?
If gray levels in a certain range occur frequently while others occur rarely, the quantization levels are finely spaced in this range and coarsely spaced outside it. This method is sometimes called tapered quantization.

8. What do you mean by Gray level?
Gray level refers to a scalar measure of intensity that ranges from black through grays to white.



9. Define Resolution?
Resolution is a measure of the smallest discernible detail in an image. Spatial resolution is the smallest discernible detail in an image, and gray-level resolution refers to the smallest discernible change in gray level.

10. Write the M x N digital image in compact matrix form?

f(x,y) = [ f(0,0)     f(0,1)     ...   f(0,N-1)
           f(1,0)     f(1,1)     ...   f(1,N-1)
           ...        ...              ...
           f(M-1,0)   f(M-1,1)   ...   f(M-1,N-1) ]

11. Write the expression to find the number of bits needed to store a digital image?
The number of bits required to store an M x N digital image with 2^k gray levels is
b = M x N x k
When M = N, this equation becomes
b = N^2 k

12. What do you mean by zooming of digital images?
Zooming may be viewed as oversampling. It involves the creation of new pixel locations and the assignment of gray levels to those new locations.

13. What do you mean by shrinking of digital images?
Shrinking may be viewed as undersampling. To shrink an image by one half, we delete every other row and column. To reduce possible aliasing effects, it is a good idea to blur an image slightly before shrinking it. A sketch of both operations follows.
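
A minimal sketch of both operations (assuming NumPy; pixel replication stands in for the gray-level assignment, and real implementations would prefer interpolation):

```python
import numpy as np

def zoom_nn(img, factor):
    """Oversample: create new pixel locations and assign gray levels by replication."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def shrink_by_half(img):
    """Undersample: keep every other row and column (blur first to limit aliasing)."""
    return img[::2, ::2]
```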

14. Define the term Radiance?
Radiance is the total amount of energy that flows from the light source, and it is usually measured in watts (W).

15. Define the term Luminance?
Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.

16. What is an Image Transform?
An image can be expanded in terms of a discrete set of basis arrays called basis images. Unitary matrices can generate these basis images. Alternatively, a given N x N image can be viewed as an N^2 x 1 vector. An image transform provides a set of coordinates or basis vectors for the vector space.

17. What are the applications of transforms?
1) To reduce bandwidth
2) To reduce redundancy
3) To extract features

18. Give the condition for a perfect transform?
The transpose of the matrix must equal its inverse, i.e., the matrix must be orthogonal (orthogonality).

19. What are the properties of a unitary transform?
1) The determinant and the eigenvalues of a unitary matrix have unity magnitude.
2) The entropy of a random vector is preserved under a unitary transformation.
3) Since entropy is a measure of average information, this means information is preserved under a unitary transformation.

20. Write the expressions for the one-dimensional discrete Fourier transform?
For a sequence x(n) = { x0, x1, x2, ..., xN-1 }:
Forward transform:
X(k) = sum_{n=0}^{N-1} x(n) exp(-j 2 pi n k / N),   k = 0, 1, 2, ..., N-1
Inverse transform:
x(n) = (1/N) sum_{k=0}^{N-1} X(k) exp(+j 2 pi n k / N),   n = 0, 1, 2, ..., N-1
(Note the sign change of the exponent between the forward and inverse transforms.)
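
A minimal sketch of the forward transform as written above (assuming NumPy; the direct sum is O(N^2), unlike an FFT):

```python
import numpy as np

def dft(x):
    """Directly evaluate X(k) = sum_n x(n) * exp(-j*2*pi*n*k/N)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)                      # one row of the sum per output index k
    return np.sum(x * np.exp(-2j * np.pi * n * k / N), axis=1)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(dft(x), np.fft.fft(x)))     # True: agrees with the library FFT
```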

21. Properties of the twiddle factor?
1. Periodicity: WN^(k+N) = WN^k
2. Symmetry: WN^(k+N/2) = -WN^k

22. Give the Properties of one-dimensional DFT
1. The DFT and unitary DFT matrices are symmetric.
2. The extensions of the DFT and unitary DFT of a sequence and their
inverse transforms are periodic with period N.
3. The DFT or unitary DFT of a real sequence is conjugate symmetric
about N/2.

23. Give the Properties of two-dimensional DFT
1. Symmetric
2. Periodic extensions
3. Sampled Fourier transform
4. Conjugate symmetry.

24. What is the cosine transform?
The N x N cosine transform matrix C = {c(k,n)}, also called the discrete cosine transform (DCT), is defined as
c(k,n) = 1/sqrt(N),                            k = 0,          0 <= n <= N-1
       = sqrt(2/N) cos[ (2n+1) k pi / (2N) ],  1 <= k <= N-1,  0 <= n <= N-1

25. What is the sine transform?
The N x N sine transform matrix S = {s(k,n)}, also called the discrete sine transform (DST), is defined as
s(k,n) = sqrt(2/(N+1)) sin[ (k+1)(n+1) pi / (N+1) ],   0 <= k, n <= N-1

26. Write the properties of the cosine transform?
1) Real and orthogonal.
2) A fast transform.
3) Excellent energy compaction for highly correlated data.

27. Write the properties of the sine transform?
1) Real, symmetric, and orthogonal.
2) Not simply the imaginary part of the unitary DFT.
3) A fast transform.

28. Write the properties of the Hadamard transform?
1) The Hadamard transform matrix contains only the values +1 and -1.
2) No multiplications are required in the transform calculations.
3) The number of additions or subtractions required can be reduced from N^2 to N log2 N.
4) Very good energy compaction for highly correlated images.

29. Define the Haar transform?
The Haar functions are defined on a continuous interval x in [0,1] and for k = 0, 1, ..., N-1, where N = 2^n. The integer k can be uniquely decomposed as k = 2^p + q - 1.

30. Write the expression for the Hadamard transform?
Hadamard transform matrices Hn are N x N matrices, where N = 2^n, n = 1, 2, 3, ..., defined by the Kronecker-product recursion
Hn = Hn-1 (x) H1 = H1 (x) Hn-1
with the core matrix
H1 = (1/sqrt(2)) [ 1   1
                   1  -1 ]

31. What are the properties of the Haar transform?
1. The Haar transform is real and orthogonal.
2. The Haar transform is a very fast transform.
3. The Haar transform has very poor energy compaction for images.
4. The basis vectors of the Haar matrix are sequency ordered.

32. What are the properties of the Slant transform?
1. The Slant transform is real and orthogonal.
2. The Slant transform is a fast transform.
3. The Slant transform has very good energy compaction for images.
4. The basis vectors of the Slant matrix are not sequency ordered.

33. Define the KL Transform?
The KL transform is optimal in the sense that it minimizes the mean square error between the vectors X and their approximations X^. Because it uses the eigenvectors corresponding to the largest eigenvalues, it is also known as the principal component transform.

34. Justify that the KLT is an optimal transform.
The mean square error between the reconstructed image and the original image is minimum, and the mean value of the transformed image is zero, so the transform coefficients are uncorrelated.

35. Explain the term digital image.
A digital image is an array of real or complex numbers that is represented by a finite number of bits.

36. Write any four applications of DIP.
(i) Remote sensing
(ii) Image transmission and storage for business applications
(iii) Medical imaging
(iv) Astronomy

37. What is the effect of the Mach band pattern?
The intensity or brightness pattern is perceived as a darker stripe in region D and a brighter stripe in region B. This effect is called the Mach band pattern or effect.

38. Write down the properties of the 2D Fourier transform.
  • Separability
  • Translation
  • Periodicity and Conjugate property
  • Rotation
  • Distributivity and scaling
  • Average value
  • Convolution and Correlation
  • Laplacian

39. Obtain the Hadamard transform matrix for N = 4.
N = 4 = 2^n  =>  n = 2
Applying the recursion Hn = H1 (x) Hn-1 (ignoring the 1/sqrt(N) scale factor):

H4 = [ 1  1  1  1
       1 -1  1 -1
       1  1 -1 -1
       1 -1 -1  1 ]

A sketch of the recursive construction follows.
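
A small sketch of this construction (assuming NumPy; the Kronecker-product recursion from question 30, with the normalization left out):

```python
import numpy as np

def hadamard(n):
    """Build the (unnormalized) Hadamard matrix of order N = 2^n recursively."""
    core = np.array([[1, 1], [1, -1]])
    H = core
    for _ in range(n - 1):
        H = np.kron(core, H)                  # Hn = H1 (x) Hn-1
    return H

print(hadamard(2))                            # the 4x4 matrix shown above
```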

40. Write down the properties of the Haar transform.
  • Real and orthogonal
  • Very fast transform
  • Basis vectors are sequency ordered
  • Fair energy compaction for images
  • Useful in feature extraction, image coding, and image analysis problems





UNIT II

IMAGE ENHANCEMENT TECHNIQUES

1. What is Image Enhancement?
Image enhancement is the process of manipulating an image so that the result is more suitable than the original for a specific application.

2. Name the categories of Image Enhancement and explain?
The categories of image enhancement are
1. Spatial domain
2. Frequency domain
Spatial domain: refers to the image plane itself; these techniques are based on direct manipulation of the pixels of an image.
Frequency domain: these techniques are based on modifying the Fourier transform of an image.

3. What do you mean by Point processing?
When enhancement at any point in an image depends only on the gray level at that point, the technique is referred to as point processing.

4. Explain Masks or Kernels?
A mask is a small two-dimensional array in which the values of the mask coefficients determine the nature of the process, such as image sharpening.

5. What is an Image Negative?
The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation, which is given by the expression
s = L - 1 - r
where s is the output pixel value and r is the input pixel value.
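
A one-line sketch of the transformation (assuming NumPy and an 8-bit image, so L = 256):

```python
def negative(img, L=256):
    """Apply s = (L-1) - r to every pixel; the result stays in [0, L-1]."""
    return (L - 1) - img
```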

6. Define Histogram?
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in the image having gray level rk.
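
A short sketch of computing h(rk) = nk (assuming NumPy and an 8-bit image):

```python
import numpy as np

def histogram(img, L=256):
    """Count n_k, the number of pixels at each gray level r_k = 0 .. L-1."""
    return np.bincount(img.ravel(), minlength=L)
```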

7. Define Derivative filter?
For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the vector

grad f = [ df/dx
           df/dy ]

and its magnitude is
|grad f| = mag(grad f) = [ (df/dx)^2 + (df/dy)^2 ]^(1/2)

8. Explain spatial filtering?
Spatial filtering is the process of moving a filter mask from point to point in an image. For a linear spatial filter, the response is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter mask.

9. Define averaging filters?
The output of a smoothing linear spatial filter is the average of the pixels contained in the neighborhood of the filter mask. These filters are called averaging filters.

10. What is a Median filter?
The median filter replaces the value of a pixel by the median of the gray
levels in the neighborhood of that pixel.
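
A minimal sketch of the median filter (assuming NumPy; interior pixels only, with a size x size neighborhood; SciPy's ndimage.median_filter is the usual production route):

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each interior pixel by the median gray level of its neighborhood."""
    out = img.copy()
    r = size // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out
```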

11. What are maximum and minimum filters?
The 100th percentile filter is the maximum filter, used for finding the brightest points in an image. The 0th percentile filter is the minimum filter, used for finding the darkest points in an image.

12. Define the high boost filter?
The high-boost-filtered image is defined as
HBF = A (original image) - LPF
    = (A - 1) (original image) + original image - LPF
    = (A - 1) (original image) + HPF
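
A small sketch of high-boost filtering (assuming NumPy/SciPy, with a 3x3 box filter standing in for the low-pass filter LPF, an assumption of this sketch):

```python
from scipy.ndimage import uniform_filter

def high_boost(img, A=1.2):
    """HBF = A * (original) - LPF(original); A > 1 emphasizes the original image."""
    lpf = uniform_filter(img.astype(float), size=3)
    return A * img.astype(float) - lpf
```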

13. State the conditions on the transformation function s = T(r)
1. T(r) is single-valued and monotonically increasing in the interval 0 <= r <= 1.
2. 0 <= T(r) <= 1 for 0 <= r <= 1.

14. Write the applications of sharpening filters?
1. Electronic printing, medical imaging, and industrial applications.
2. Autonomous target detection in smart weapons.

15. Name the different types of derivative filters?
1. Prewitt operators
2. Roberts cross-gradient operators
3. Sobel operators

16. What is enhancement?
Image enhancement is a technique of processing an image so that the result is more suitable than the original image for specific applications.

17. What is point processing?
When enhancement at any point in an image depends only on the gray level at that point, it is referred to as point processing.

18. What is gray level slicing?
Highlighting a specific range of gray levels in an image is referred to as gray-level slicing. It is used in satellite imagery and X-ray images.

19. What is histogram equalization?
It is a technique used to obtain a uniform histogram; it is also known as histogram linearization. The condition for a uniform histogram is ps(s) = 1. A sketch follows.
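
As sketched below (assuming NumPy and an 8-bit image), equalization maps each gray level through the scaled cumulative distribution of levels:

```python
import numpy as np

def equalize(img, L=256):
    """Histogram equalization: s = T(r) = (L-1) * CDF(r), applied per pixel."""
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = hist.cumsum() / img.size            # cumulative distribution of gray levels
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[img]                           # look up the new level for every pixel
```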

20. What is contrast stretching?
Contrast stretching produces an image of higher contrast than the original by darkening the levels below some value m and brightening the levels above m in the image.

21. Define image subtraction.
The difference between two images f(x,y) and h(x,y), expressed as
g(x,y) = f(x,y) - h(x,y),
is obtained by computing the difference between all pairs of corresponding pixels from f and h.

22. What is the purpose of image averaging?
       An important application of image averaging is in the field of astronomy, where imaging with very low light levels is routine, causing sensor noise frequently to render single images virtually useless for analysis.

23. What is meant by masking?
A mask is a small 2D array in which the values of the mask coefficients determine the nature of the process. Enhancement techniques based on this type of approach are referred to as mask processing.

24. Give the formula for the log transformation
s = c log(1 + r)
where c is a constant and r >= 0.
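
A short sketch (assuming NumPy and an 8-bit range; choosing c so that r = L-1 maps to s = L-1 is an assumption of this sketch):

```python
import numpy as np

def log_transform(img, L=256):
    """s = c * log(1 + r); compresses the dynamic range of large pixel values."""
    c = (L - 1) / np.log(L)                   # maps r = L-1 to s = L-1
    return c * np.log1p(img.astype(float))
```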

25. What is meant by bit plane slicing?
       Instead of highlighting gray level ranges, highlighting the contribution made to total image appearance by specific bits might be desired. Suppose that each pixel in an image is represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit plane 0 for LSB to bit plane 7 for MSB.
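
A minimal sketch of extracting one such plane (assuming NumPy and an 8-bit image):

```python
def bit_plane(img, plane):
    """Return bit plane `plane` (0 = LSB .. 7 = MSB) of an 8-bit image as 0/1 values."""
    return (img >> plane) & 1

# e.g. bit_plane(img, 7) keeps only the visually dominant MSB information
```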


UNIT III

IMAGE RESTORATION

1. Define Restoration?
Restoration is a process of reconstructing or recovering an image that has
been degraded by using a priori knowledge of the degradation phenomenon.
Thus restoration techniques are oriented towards modeling the degradation and
applying the inverse process in order to recover the original image.

2. How is a degradation process modeled?
A system operator H, together with an additive noise term n(x,y), operates on an input image f(x,y) to produce a degraded image g(x,y):
g(x,y) = H[f(x,y)] + n(x,y)

3. What is the homogeneity property and what is its significance?
H[k1 f1(x,y)] = k1 H[f1(x,y)]
where H is the system operator, k1 is a constant, and f1(x,y) is an input image.
It says that the response to a constant multiple of any input is equal to the response to that input multiplied by the same constant.


4. Define circulant matrix?
A square matrix in which each row is a circular shift of the preceding row, and the first row is a circular shift of the last row, is called a circulant matrix.
Example:

He = [ he(0)    he(M-1)  he(M-2)  ...  he(1)
       he(1)    he(0)    he(M-1)  ...  he(2)
       ...
       he(M-1)  he(M-2)  he(M-3)  ...  he(0) ]

5. What is the concept behind algebraic approach to restoration?
Algebraic approach is the concept of seeking an estimate of f, denoted f^,
that minimizes a predefined criterion of performance where f is the image.

6. Why is the image subjected to Wiener filtering?
This method of filtering considers images and noise as random processes, and the objective is to find an estimate f^ of the uncorrupted image f such that the mean square error between them is minimized. The image is therefore subjected to Wiener filtering to minimize this error.
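
A frequency-domain sketch of Wiener deconvolution (assuming NumPy; g is the degraded image, h the blur kernel, and the constant K, standing in for the noise-to-signal power ratio, is an assumption of this sketch):

```python
import numpy as np

def wiener_restore(g, h, K=0.01):
    """F^ = conj(H) / (|H|^2 + K) * G, then invert back to the spatial domain."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)             # pad the kernel to the image size
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))
```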

7. Define spatial transformation?
Spatial transformation is defined as the rearrangement of pixels on an
image plane.
8. Define Gray-level interpolation?
Gray-level interpolation deals with the assignment of gray levels to pixels in
the spatially transformed image.

9. Give one example of a principal source of noise?
The principal sources of noise in digital images arise during image acquisition (digitization) and/or transmission. The performance of imaging sensors is affected by a variety of factors, such as environmental conditions during image acquisition and the quality of the sensing elements; typical factors are light levels and sensor temperature.

10. When does the degradation model satisfy the position-invariance property?
An operator having the input-output relationship g(x,y) = H[f(x,y)] is said to be position invariant if
H[f(x - a, y - b)] = g(x - a, y - b)
for any f(x,y) and any a and b. This definition indicates that the response at any point in the image depends only on the value of the input at that point, not on its position.

11. Why is the restoration called unconstrained restoration?
In the absence of any knowledge about the noise n, a meaningful criterion is to seek an f^ such that H f^ approximates g in a least-squares sense, by assuming the noise term is as small as possible,
where H = system operator,
f^ = estimated input image,
g = degraded image.

12. Which is the most frequently used method to overcome the difficulty of formulating the spatial relocation of pixels?
Tiepoints are the most frequently used method; they are subsets of pixels whose locations in the input (distorted) and output (corrected) images are known precisely.

13. What are the three methods of estimating the degradation function?
1. Observation
2. Experimentation
3. Mathematical modeling.

14. How is the blur caused by uniform linear motion removed?
An image f(x,y) undergoes planar motion in the x- and y-directions, and x0(t) and y0(t) are the time-varying components of motion. The total exposure at any point of the recording medium (digital memory) is obtained by integrating the instantaneous exposure over the time interval during which the imaging system shutter is open.

15. What is inverse filtering?
The simplest approach to restoration is direct inverse filtering: an estimate F^(u,v) of the transform of the original image is obtained simply by dividing the transform of the degraded image, G(u,v), by the degradation function:
F^(u,v) = G(u,v) / H(u,v)
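
A minimal sketch of this division (assuming NumPy; clipping near-zero values of H is an assumption of this sketch, added because dividing by them amplifies noise without bound):

```python
import numpy as np

def inverse_filter(g, h, eps=1e-3):
    """F^(u,v) = G(u,v) / H(u,v), computed with a guard against tiny |H|."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    H_safe = np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(G / H_safe))
```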

16. Give the difference between Enhancement and Restoration?
Enhancement techniques are based primarily on the pleasing aspects they might present to the viewer (for example, contrast stretching), whereas removal of image blur by applying a deblurring function is considered a restoration technique.

17. Define the degradation phenomena?
Image restoration is a process that attempts to reconstruct or recover an image that has been degraded, using some prior knowledge of the degradation phenomenon. Degradation may take the form of
  • Sensor noise
  • Blur due to camera misfocus
  • Relative motion between object and camera

18. What is unconstrained restoration?
It is also known as the least-squares error approach: n = g - Hf.
To estimate the original image f^, the noise n has to be minimized, giving f^ = H^(-1) g.

19. What is blind image restoration?
Degradation may be difficult to measure or may vary with time in an unpredictable manner. In such cases, information about the degradation must be extracted from the observed image, either explicitly or implicitly. This task is called blind image restoration.

20. What are the 2 properties of a linear operator?
       * Additivity
       * Homogeneity

21. Explain the additivity property of a linear operator?
    H[f1(x,y) + f2(x,y)] = H[f1(x,y)] + H[f2(x,y)]

22. What are the 2 methods of the algebraic approach?
        * Unconstrained restoration approach
        * Constrained restoration approach

23. What is meant by Noise probability density function?
       The spatial noise descriptor is the statistical behavior of gray level values in the noise component of the model.

24. What are the types of noise models?
  • Gaussian noise
  • Rayleigh noise
  • Erlang noise

25. What is meant by the least mean square filter?
The limitation of inverse and pseudo-inverse filters is that they are very sensitive to noise. Wiener filtering is a method of restoring images in the presence of blur as well as noise.

26. What are the 2 approaches for blind image restoration?
  • Direct measurement
  • Indirect estimation


UNIT IV

IMAGE COMPRESSION

1. What is Data Compression?
Data compression requires the identification and extraction of source
redundancy. In other words, data compression seeks to reduce the number of
bits used to store or transmit information.

2. What are the two main types of Data compression?
Lossless compression can recover the exact original data after compression. It is used mainly for compressing database records, spreadsheets, or word processing files, where exact replication of the original is essential.
Lossy compression results in a certain loss of accuracy in exchange for a substantial increase in compression. Lossy compression is more effective when used to compress graphic images and digitised voice, where losses outside visual or aural perception can be tolerated.

3. What is the need for Compression?
In terms of storage, the capacity of a storage device can be effectively increased with methods that compress a body of data on its way to the storage device and decompress it when it is retrieved. In terms of communications, the bandwidth of a digital communication link can be effectively increased by compressing data at the sending end and decompressing it at the receiving end. At any given time, the ability of the Internet to transfer data is fixed; thus, if data can be compressed wherever possible, significant improvements in data throughput can be achieved. Many files can also be combined into one compressed document, making sending easier.

4. What are different Compression Methods?
(1) Run Length Encoding (RLE)
(2) Arithmetic coding
(3) Huffman coding
(4) Transform coding

5. What is run length coding?
Run-length encoding (RLE) is a technique used to reduce the size of a repeating string of characters. This repeating string is called a run; typically, RLE encodes a run of symbols into two bytes: a count and a symbol. RLE can compress any type of data regardless of its information content, but the content of the data affects the compression ratio that is achieved.

6. Define compression ratio.
Compression Ratio = original size / compressed size: 1

7. Give an example of Run Length Encoding.
Consider a run of 15 'A' characters, which normally would require 15 bytes to store:
AAAAAAAAAAAAAAA is coded as 15A
With RLE, this requires only two bytes: the count (15) is stored as the first byte and the symbol (A) as the second byte.
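
A small sketch of the scheme (plain Python; (count, symbol) pairs as described above):

```python
def rle_encode(data):
    """Collapse each run into a (count, symbol) pair."""
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs

def rle_decode(runs):
    return ''.join(symbol * count for count, symbol in runs)

print(rle_encode('AAAAAAAAAAAAAAA'))          # [(15, 'A')]
```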

8. What is Huffman Coding?
Huffman compression reduces the average code length used to represent
the symbols of an alphabet. Symbols of the source alphabet, which occur
frequently, are assigned with short length codes. The general strategy is to allow
the code length to vary from character to character and to ensure that the
frequently occurring characters have shorter codes.
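
A compact sketch of code construction (plain Python, using the well-known heap-based recipe; the exact codewords depend on how frequency ties are broken):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Merge the two least-frequent subtrees repeatedly; frequent symbols
    end up with short codewords, infrequent symbols with long ones."""
    heap = [[freq, [sym, '']] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = '0' + pair[1]           # left branch: prefix a 0
        for pair in hi[1:]:
            pair[1] = '1' + pair[1]           # right branch: prefix a 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

print(huffman_codes('AAAABBBCCD'))            # e.g. {'A': '0', 'B': '10', 'D': '110', 'C': '111'}
```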

9. What is Arithmetic Coding?
Arithmetic compression is an alternative to Huffman compression; it
enables characters to be represented as fractional bit lengths. Arithmetic coding
works by representing a number by an interval of real numbers greater or equal
to zero, but less than one. As a message becomes longer, the interval needed to
represent it becomes smaller and smaller, and the number of bits needed to
specify it increases.

10. What is JPEG?
The acronym stands for "Joint Photographic Experts Group". JPEG became an international standard in 1992. It works with colour and greyscale images and is used in many applications, e.g., satellite and medical imaging.

11. What are the basic steps in JPEG?
The Major Steps in JPEG Coding involves
DCT (Discrete Cosine Transformation)
Quantization
Zigzag Scan
DPCM on DC component
RLE on AC Components
Entropy Coding

12. What is MPEG?
The acronym stands for "Moving Picture Experts Group". MPEG became an international standard in 1992. It works with video and is also used in teleconferencing.

13. What is transform coding?
Transform coding is used to convert spatial image pixel values to transform
coefficient values. Since this is a linear process and no information is lost, the
number of coefficients produced is equal to the number of pixels transformed.
The desired effect is that most of the energy in the image will be contained in a
few large transform coefficients. If it is generally the same few coefficients that
contain most of the energy in most pictures, then the coefficients may be further
coded by lossless entropy coding. In addition, it is likely that the smaller
coefficients can be coarsely quantized or deleted (lossy coding) without doing
visible damage to the reproduced image.

14. What are the different transforms used in transform coding, and how do they differ?
Many types of transforms are used for picture coding: Fourier, Karhunen-Loeve, Walsh-Hadamard, lapped orthogonal, discrete cosine (DCT), and, more recently, wavelets. The various transforms differ among themselves in three basic ways that are of interest in picture coding:
1) the degree of concentration of energy in a few coefficients;
2) the region of influence of each coefficient in the reconstructed picture;
3) the appearance and visibility of coding noise due to coarse quantization of the coefficients.

15. Find the number of bits needed to store a 128 x 128 image with 64 gray levels.
Given: M = N = 128
L = 64 = 2^k => k = 6
Number of bits b = N^2 k
= 128^2 x 6
= 98,304 bits

16. What is image compression?
Image compression refers to the process of reducing the amount of data required to represent a given quantity of information in a digital image. The basis of the reduction process is the removal of redundant data.

17. Define coding redundancy.
      If the gray level of an image is coded in a way that uses more code words than necessary to represent each gray level, then the resulting image is said to contain coding redundancy.

18. Define interpixel redundancy.
The value of any given pixel can be predicted from the values of its neighbors, so the information carried by an individual pixel is relatively small; the visual contribution of a single pixel to an image is therefore largely redundant. This is also called spatial redundancy, geometric redundancy, or interpixel redundancy.
       Example: run length coding

19. What is psycho visual redundancy?
       In normal visual processing certain information has less importance than other information. So this information is said to be psycho visual redundant.


20. Define encoder.
        The source encoder is responsible for removing coding redundancy, interpixel redundancy, and psychovisual redundancy. The overall encoder has 2 components:
a)      Source encoder
b)      Channel encoder

21. Define Source encoder.
            The source encoder performs 3 operations:
1)      Mapper - transforms the input data into a (generally non-visual) format; it reduces the interpixel redundancy.
2)      Quantizer - reduces the psychovisual redundancy of the input images. This step is omitted if the system is error free.
3)      Symbol encoder - reduces the coding redundancy. This is the final stage of the encoding process.

22. Define channel encoder
       The channel encoder reduces the impact of the channel noise by inserting redundant bits into the source encoded data.
       Example : Hamming code

23. What are the types of decoder?
            Source decoder has 2 components.
a)      Symbol decoder – This performs inverse operation of symbol encoder.
b)      Inverse Mapping – This performs inverse operation of mapper.

24.What is Variable Length coding?
            Variable Length Coding is the simplest approach to error free compression. It reduces only the coding redundancy. It assigns the shortest possible codeword to the most probable gray levels.

25. What are the operations performed by error-free compression?
            1) Devising an alternative representation of the image in which its interpixel redundancies are reduced.
            2) Coding the representation to eliminate coding redundancy.


UNIT V
IMAGE SEGMENTATION AND REPRESENTATION

1. What is segmentation?
The first step in image analysis is to segment the image. Segmentation
subdivides an image into its constituent parts or objects.

2. Write the applications of segmentation.
(i) Detection of isolated points.
(ii) Detection of lines and edges in an image.

3. What are the three types of discontinuity in digital image?
Points, lines and edges.

4. How is discontinuity detected in an image using segmentation?
(i) Compute the sum of the products of the coefficients with the gray levels contained in the region encompassed by the mask.
(ii) The response of the mask at any point in the image is
R = w1z1 + w2z2 + w3z3 + ... + w9z9
where zi is the gray level of the pixel associated with mask coefficient wi.
(iii) The response of the mask is defined with respect to its center location.

5. Why is edge detection the most common approach for detecting discontinuities?
Isolated points and thin lines are not frequent occurrences in most practical applications, so edge detection is the preferred way of detecting discontinuities.

6. How are the derivatives obtained in edge detection during formulation?
The first derivative at any point in an image is obtained by using the magnitude of the gradient at that point. Similarly, the second derivatives are obtained by using the Laplacian.

7. Write about linking edge points.
The approach for linking edge points is to analyse the characteristics of
pixels in a small neighborhood (3x3 or 5x5) about every point (x,y)in an image
that has undergone edge detection. All points that are similar are linked, forming
a boundary of pixels that share some common properties.

8. What are the two properties used for establishing similarity of edge pixels?
(1) The strength of the response of the gradient operator used to produce the edge pixel.
(2) The direction of the gradient.
(The 3x3 mask coefficients referred to in question 4 are laid out as:
w1 w2 w3
w4 w5 w6
w7 w8 w9)

9. Explain the gradient operator.
The gradient of an image f(x,y) at location (x,y) is the vector

grad f = [ Gx ] = [ df/dx ]
         [ Gy ]   [ df/dy ]

- The gradient vector points in the direction of the maximum rate of change of f at (x,y).
- In edge detection an important quantity is the magnitude of this vector (the gradient), denoted |grad f|:
|grad f| = mag(grad f) = [ Gx^2 + Gy^2 ]^(1/2)
- The direction of the gradient vector is also an important quantity:
a(x,y) = tan^-1(Gy/Gx)
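
A short sketch combining Sobel masks with the magnitude and direction formulas above (assuming NumPy/SciPy):

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_gradient(img):
    """Return |grad f| = sqrt(Gx^2 + Gy^2) and the direction atan2(Gy, Gx)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal differencing, vertical smoothing
    gx = convolve(img.astype(float), kx)
    gy = convolve(img.astype(float), kx.T)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```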

10. What is the advantage of using the Sobel operator?
Sobel operators have the advantage of providing both a differencing and a smoothing effect. Because derivatives enhance noise, the smoothing effect is a particularly attractive feature of the Sobel operators.

11. What is pattern?
Pattern is a quantitative or structural description of an object or some other
entity of interest in an image. It is formed by one or more descriptors.

12. What is a pattern class?
It is a family of patterns that share some common properties. Pattern classes are denoted w1, w2, ..., wM, where M is the number of classes.

13. What is pattern recognition?
It involves techniques for assigning patterns to their respective classes automatically, with as little human intervention as possible.

14. What are the three principal pattern arrangements?
The three principal pattern arrangements are vectors, strings, and trees. Pattern vectors are represented by bold lowercase letters such as x, y, and z, in the form x = [x1, x2, ..., xn], where each component xi represents the i-th descriptor and n is the number of such descriptors.

15. Name the types of connectivity and explain.
(a) 4-connectivity:
Two pixels p and q with values from V are 4-connected if q is in the set N4(p).
(b) 8-connectivity:
Two pixels p and q with values from V are 8-connected if q is in the set N8(p).
(c) m-connectivity:
Two pixels p and q with values from V are m-connected if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) contains no pixels whose values are from V.

16. Define the chessboard distance.
It is also known as the D8 distance, given by
D8(p,q) = max(|x - s|, |y - t|)
The pixels with D8 distance from (x,y) less than or equal to some value r form a square centered at (x,y).

17. What is an edge?
       An edge is a set of connected pixels that lie on the boundary between two regions. In practice, edges are more closely modeled as having a ramp-like profile; the slope of the ramp is inversely proportional to the degree of blurring in the edge.

18. Give the properties of the second derivative around an edge?
   * The sign of the second derivative can be used to determine whether an edge pixel lies on the dark or light side of an edge.
   * It produces two values for every edge in an image.
   * An imaginary straight line joining the extreme positive and negative values of the second derivative would cross zero near the midpoint of the edge.

19. What is meant by object point and background point?
             To extract the objects from the background, select a threshold T that separates the object and background modes of the histogram. Any point (x,y) for which f(x,y) > T is called an object point; otherwise, the point is called a background point.
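
A one-line sketch of the labeling rule (assuming NumPy; choosing T from the histogram is left out of this sketch):

```python
import numpy as np

def threshold(img, T):
    """Object points (f(x,y) > T) map to 1, background points to 0."""
    return (img > T).astype(np.uint8)
```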

20. What are global, local, and dynamic or adaptive thresholds?
       When the threshold T depends only on f(x,y), the threshold is called global. If T depends on both f(x,y) and a local property p(x,y), it is called local. If, in addition, T depends on the spatial coordinates x and y, the threshold is called dynamic or adaptive, where f(x,y) is the original image.

21. Define region growing?
            Region growing is a procedure that groups pixels or subregions into larger regions based on predefined criteria. The basic approach is to start with a set of seed points and from these grow regions by appending to each seed those neighbouring pixels that have properties similar to the seed. A sketch follows.
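
A minimal sketch of seed-based growing (assuming NumPy; 4-connectivity and a gray-level difference tolerance as the similarity criterion are assumptions of this sketch):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` by appending 4-neighbours whose gray level
    lies within `tol` of the seed's gray level."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = int(img[seed])
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region
```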

22. Specify the steps involved in splitting and merging?
     Split into 4 disjoint quadrants any region Ri for which P(Ri) = FALSE.
     Merge any adjacent regions Rj and Rk for which P(Rj U Rk) = TRUE.
     Stop when no further merging or splitting is possible.

23. What is meant by markers?
            An approach used to control oversegmentation is based on markers. A marker is a connected component belonging to an image. We have internal markers, associated with objects of interest, and external markers, associated with the background.

24. What are the 2 principal steps involved in marker selection?
            The 2 steps are
1.      Preprocessing
2.      Definition of a set of criteria that markers must satisfy.

25. Define Chain codes?
            Chain codes are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. Typically this representation is based on 4- or 8-connectivity of the segments, and the direction of each segment is coded using a numbering scheme. A sketch follows.
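
A small sketch of coding an already-traced boundary (plain Python; the 8-direction numbering below, with 0 = east, counting counter-clockwise, and rows growing downward, is an assumption of this sketch):

```python
def chain_code(boundary):
    """Convert an ordered list of (row, col) boundary pixels into 8-direction codes."""
    dirs = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
            (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}
    return [dirs[(q[0] - p[0], q[1] - p[1])]
            for p, q in zip(boundary, boundary[1:])]

print(chain_code([(0, 0), (0, 1), (1, 1), (1, 0)]))   # [0, 6, 4]
```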

26. Specify the various polygonal approximation methods.
  • Minimum perimeter polygons
  • Merging techniques
  • Splitting techniques

27. Name few boundary descriptors
  • Simple descriptors
  • Shape numbers
  • Fourier descriptors