Image Processing Notes BTech 6th semester

Introduction (Module 1)

Introduction

  1. Background: Digital Image Processing (DIP) involves the manipulation of digital images using a digital computer. It has a wide range of applications, including medical imaging, remote sensing, image compression, and computer vision.
    Historical Context: Originated from analog image processing, with early developments in the 1960s and significant advancements due to the advent of digital computers and image sensors.
    Digital Image Representation
    Pixel: The smallest unit of a digital image, representing a single point in the image.
    Image: A two-dimensional array of pixels, where each pixel has an intensity value.
    Grayscale Image: Each pixel value represents the intensity of light, typically ranging from 0 (black) to 255 (white).
    Color Image: Each pixel is represented by multiple values, typically three for RGB (Red, Green, Blue) color space.
  2. Fundamental Steps in Image Processing:
    • Image Acquisition: Capturing a digital image from a physical scene using a sensor (camera, scanner, etc.).
    • Image Enhancement: Improving the visual appearance of an image or converting the image to a form better suited for analysis. Examples include contrast adjustment and noise reduction.
    • Image Restoration: Reconstructing or recovering an image that has been degraded by known processes. Examples include deblurring and denoising.
    • Image Compression: Reducing the amount of data required to represent an image. Examples include JPEG and PNG formats.
    • Image Segmentation: Partitioning an image into meaningful regions for easier analysis. Examples include edge detection and thresholding.
    • Image Representation and Description: Representing the image in a way that is useful for further analysis, often involving feature extraction.
    • Image Recognition: Identifying objects or patterns within an image, often used in applications like facial recognition and license plate detection.
  3. Elements of Digital Image Processing:
    • Image Acquisition:
      Sensors: Devices like CCD (Charge-Coupled Device) cameras and CMOS (Complementary Metal-Oxide-Semiconductor) sensors.
      Sampling: Converting a continuous image signal into a discrete signal by sampling the intensity values at regular intervals.
      Quantization: Converting the sampled intensity values into discrete levels.
    • Image Storage:
      File Formats: Various formats for storing digital images, such as JPEG, PNG, TIFF, and BMP.
      Storage Requirements: Depends on the image resolution, color depth, and compression methods used.
    • Image Processing:
      Spatial Domain Techniques: Direct manipulation of pixel values, such as filtering and convolution.
      Frequency Domain Techniques: Manipulation of the image based on its frequency content, using transforms like the Fourier Transform.
    • Image Communication:
      Transmission: Sending digital images over networks, requiring methods to ensure data integrity and reduce transmission errors.
      Protocols: Compression standards such as JPEG (still images) and MPEG (video) that enable efficient image and video transmission.
    • Image Display
      Monitors: Devices that convert digital image data back into visual form, including CRT, LCD, and OLED screens.
      Resolution: The number of pixels displayed on the screen, affecting the detail and clarity of the image.
      Color Representation: Methods like RGB and CMYK to represent color images on different display devices.
  4. Conclusion:
    Digital Image Processing is a crucial field with a wide array of applications, encompassing various fundamental steps and elements to capture, store, process, transmit, and display images. Understanding these concepts provides a solid foundation for further study and application in diverse areas such as medical imaging, satellite image analysis, and multimedia.

Digital Image Formation (Module 2)

Introduction

Digital image formation involves several steps from capturing the scene to representing it as a digital image. This process includes modeling the image, transforming the geometric properties, projecting the 3D scene onto a 2D plane, and finally sampling and quantizing the image.

Simple Image Model

A simple image model describes how an image is represented mathematically and visually:

  • Intensity Function: An image can be represented as a two-dimensional function f(x, y), where x and y are spatial coordinates and the value of f at any point (x, y) is the intensity (brightness) of the image at that point.
  • Grayscale Images: The intensity function f(x, y) ranges from 0 to 255 in an 8-bit image, where 0 represents black and 255 represents white.
  • Color Images: Typically represented using three intensity functions corresponding to the RGB color channels: f_R(x, y), f_G(x, y), and f_B(x, y).

Geometric Model

Geometric transformations alter the spatial relationships between points in an image. Basic transformations include:

Translation

  • Definition: Moving an image from one location to another.
  • Transformation Matrix (homogeneous coordinates):

        [x']   [1  0  tx] [x]
        [y'] = [0  1  ty] [y]
        [1 ]   [0  0   1] [1]

    • where tx and ty are the translation distances in the x and y directions, respectively.

Scaling

  • Definition: Changing the size of an image.
  • Transformation Matrix (homogeneous coordinates):

        [x']   [sx  0  0] [x]
        [y'] = [ 0 sy  0] [y]
        [1 ]   [ 0  0  1] [1]

    • where sx and sy are the scaling factors in the x and y directions, respectively.

Rotation

  • Definition: Rotating an image around a point.
  • Transformation Matrix (homogeneous coordinates, counter-clockwise rotation about the origin):

        [x']   [cos θ  -sin θ  0] [x]
        [y'] = [sin θ   cos θ  0] [y]
        [1 ]   [  0       0    1] [1]

  • where θ is the angle of rotation.
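
The three transformations above can be combined by multiplying their homogeneous matrices. A short NumPy sketch (the point coordinates, angle, and factors below are only illustrative values):

import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0,  1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0,  0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]], dtype=float)

# Apply a combined transform to the point (x, y) = (10, 20):
# rotate by 30 degrees, then scale by 2, then translate by (5, -3).
M = translate(5, -3) @ scale(2, 2) @ rotate(np.radians(30))
x, y, _ = M @ np.array([10.0, 20.0, 1.0])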

Perspective Projection

  • Definition: A technique to represent a 3D object on a 2D plane, mimicking the way human eyes perceive the world.
  • Mathematical Model:

        x' = (d · X) / Z,   y' = (d · Y) / Z

    • where (X, Y, Z) are the coordinates of a point in 3D space, (x', y') are the coordinates in the 2D image plane, and d is the distance from the centre of projection to the image plane (the focal length).
  • Homogeneous Coordinates: Often used to simplify the mathematics of perspective projection.
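
A minimal sketch of perspective projection under the simple pinhole model assumed above, with the camera at the origin looking along the Z axis and d the distance to the image plane (the example points are arbitrary):

import numpy as np

def project(points_3d, d=1.0):
    """Project Nx3 camera-frame points onto the image plane at distance d."""
    P = np.asarray(points_3d, dtype=float)
    X, Y, Z = P[:, 0], P[:, 1], P[:, 2]
    # Perspective division: farther points (larger Z) map closer to the centre.
    return np.stack([d * X / Z, d * Y / Z], axis=1)

# Two points at the same (X, Y) but different depths project differently.
uv = project([[1.0, 2.0, 4.0],
              [1.0, 2.0, 8.0]], d=1.0)   # -> [[0.25, 0.5], [0.125, 0.25]]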

Sampling and Quantization

Sampling

  • Definition: Converting a continuous image function f(x, y) into a discrete form.
  • Uniform Sampling: Sampling at regular intervals. This is the most common method and involves creating a grid of equally spaced points.
  • Non-uniform Sampling: Sampling at irregular intervals, which may be used in adaptive methods where more samples are taken in regions with high detail.

Quantization

  • Definition: Converting the continuous intensity values into discrete levels.
  • Uniform Quantization: Dividing the intensity range into equal-sized intervals. Each pixel intensity is then mapped to the nearest interval.
  • Non-uniform Quantization: Using intervals of varying sizes, often based on the perceptual importance of different intensity ranges (e.g., finer intervals in darker regions where human eyes are more sensitive to changes).
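
Putting the two steps together, a minimal NumPy sketch of uniform sampling followed by uniform quantization (illustrative only: the continuous scene is a made-up lambda, and the grid size and number of gray levels are arbitrary):

import numpy as np

def sample_and_quantize(f, width, height, levels=256):
    """Uniformly sample a continuous image function f(x, y) on a
    width x height grid, then uniformly quantize to `levels` gray levels."""
    # Uniform sampling: evaluate f at equally spaced grid points in [0, 1) x [0, 1).
    xs = np.linspace(0.0, 1.0, width, endpoint=False)
    ys = np.linspace(0.0, 1.0, height, endpoint=False)
    X, Y = np.meshgrid(xs, ys)
    samples = f(X, Y)                            # continuous-valued samples

    # Uniform quantization: map each sample to the nearest of `levels` discrete levels.
    lo, hi = samples.min(), samples.max()
    norm = (samples - lo) / (hi - lo + 1e-12)    # normalize to [0, 1]
    q = np.floor(norm * (levels - 1) + 0.5)      # round to the nearest level
    return q.astype(np.uint8)

# Example: a synthetic continuous "scene" (smooth gradient plus a ripple).
img = sample_and_quantize(lambda x, y: x + 0.2 * np.sin(20 * y), 256, 256, levels=8)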

Image Segmentation (Module 6)

Introduction

Image segmentation is a crucial step in digital image processing where an image is partitioned into its constituent regions or objects. The goal is to simplify and/or change the representation of an image into something more meaningful and easier to analyze.

Point Detection

  • Point Detection: Identifies isolated points in an image whose intensity differs markedly from their surroundings, such as noise spikes or small bright or dark spots.
  • Method: Use of a convolution mask (like a Laplacian filter) to identify points with a significant change in intensity compared to their neighbors.
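
A small sketch of point detection with a Laplacian-style mask, using SciPy for the convolution; the threshold value is application-dependent and left as a parameter:

import numpy as np
from scipy.ndimage import convolve

def detect_points(image, threshold):
    """Flag isolated points using a Laplacian-style mask: pixels whose
    response magnitude exceeds `threshold` differ sharply from their neighbours."""
    mask = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
    response = convolve(image.astype(float), mask, mode='reflect')
    return np.abs(response) > threshold          # boolean map of detected points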

Line Detection

  • Line Detection: Identifies lines or linear structures within an image.
  • Method: Use of 3×3 convolution masks designed to respond maximally to lines of a specific orientation (horizontal, vertical, +45°, or −45°); the mask with the strongest response indicates the line direction at that pixel. Gradient operators such as Roberts, Prewitt, and Sobel are related but are used mainly for edge detection.
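
An illustrative sketch of the four standard directional line masks applied with SciPy (the dictionary keys and helper name are just for this example):

import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 line-detection masks for four orientations.
LINE_MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),
    "+45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),
    "-45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),
}

def line_responses(image):
    """Return the absolute response of each directional mask; the largest
    response at a pixel suggests the dominant line orientation there."""
    img = image.astype(float)
    return {name: np.abs(convolve(img, mask, mode='reflect'))
            for name, mask in LINE_MASKS.items()}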

Edge Detection

  • Edge Detection: Identifies the boundaries between different regions in an image where there is a significant change in intensity.
  • Common Methods:
    • Sobel Operator: Computes the gradient magnitude and direction at each pixel.
    • Canny Edge Detector: A multi-step process that includes noise reduction, gradient calculation, non-maximum suppression, and edge tracking by hysteresis.
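
A compact Sobel sketch (gradient magnitude plus a simple threshold) using SciPy for the convolutions; a full Canny implementation would add smoothing, non-maximum suppression, and hysteresis on top of this:

import numpy as np
from scipy.ndimage import convolve

def sobel_edges(image, threshold):
    """Estimate gradient magnitude with the Sobel masks and threshold it."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # derivative along x
    ky = kx.T                                                     # derivative along y
    img = image.astype(float)
    gx = convolve(img, kx, mode='reflect')
    gy = convolve(img, ky, mode='reflect')
    magnitude = np.hypot(gx, gy)               # sqrt(gx**2 + gy**2)
    direction = np.arctan2(gy, gx)             # gradient direction (radians)
    return magnitude > threshold, direction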

Combined Detection

  • Combined Detection: Integrates point, line, and edge detection techniques to extract more complex features from an image.
  • Approach: Utilize a combination of different convolution masks and thresholding techniques to identify various types of features in a single pass.

Edge Linking & Boundary Detection

Local Processing

  • Local Processing: Involves linking edge points to form continuous edges or boundaries within a localized region.
  • Method: Analyze the connectivity of edge points within a defined neighborhood (e.g., 8-connected or 4-connected neighbors).

Global Processing via The Hough Transform

  • Hough Transform: A global technique for detecting shapes (such as lines or circles) in an image.
  • Method: Transforms edge points from the image space to the parameter space and identifies accumulations of points in this space to detect the desired shapes.
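
A minimal accumulator-based sketch of the Hough transform for straight lines, using the normal parameterization ρ = x·cos θ + y·sin θ; the resolution of the θ axis is an arbitrary choice here:

import numpy as np

def hough_lines(edge_map, n_theta=180):
    """Accumulate votes in (rho, theta) space for each edge pixel.
    Peaks in the accumulator correspond to lines rho = x*cos(theta) + y*sin(theta)."""
    h, w = edge_map.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    accumulator = np.zeros((2 * diag, n_theta), dtype=np.int64)

    ys, xs = np.nonzero(edge_map)              # coordinates of edge pixels
    for theta_idx, theta in enumerate(thetas):
        # rho can be negative, so offset by `diag` to index the accumulator.
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(accumulator, (rhos, theta_idx), 1)
    return accumulator, thetas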

Thresholding

Foundation

  • Thresholding: Converts a grayscale image into a binary image by assigning pixels to either foreground or background based on their intensity values.

Simple Global Thresholding

  • Simple Global Thresholding: Applies a single threshold value to the entire image.
  • Method: Choose a threshold T and classify all pixels with intensity greater than T as foreground and others as background.

Optimal Thresholding

  • Optimal Thresholding: Determines the best threshold value according to a statistical criterion, such as minimizing the within-class variance, which is equivalent to maximizing the between-class variance (Otsu’s method).
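
A compact sketch of Otsu-style optimal thresholding over a 256-bin histogram, choosing the threshold that maximizes the between-class variance (the function name and the 8-bit intensity range are assumptions of this example):

import numpy as np

def otsu_threshold(image):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()                      # normalized histogram
    omega = np.cumsum(p)                       # background class probability
    mu = np.cumsum(p * np.arange(256))         # cumulative mean
    mu_total = mu[-1]

    # Between-class variance for every candidate threshold (empty classes give 0).
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Binary segmentation with the chosen threshold:
# binary = image > otsu_threshold(image)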

Region-Oriented Segmentation

Basic Formulation

  • Region-Oriented Segmentation: Divides an image into regions based on predefined criteria, such as intensity homogeneity.

Region Growing by Pixel Aggregation

  • Region Growing: Starts with seed points and grows regions by appending neighboring pixels that have similar properties (e.g., intensity).
  • Method: Iteratively add pixels to the region based on a similarity measure until no more pixels satisfy the criteria.
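
A small breadth-first sketch of region growing from a single seed, using 4-connected neighbours and a fixed intensity tolerance (both choices, and the tolerance value, are illustrative):

import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow a region from `seed` (row, col), adding 4-connected neighbours
    whose intensity is within `tol` of the seed intensity."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    seed_value = float(image[seed])
    queue = deque([seed])
    region[seed] = True

    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc]:
                if abs(float(image[nr, nc]) - seed_value) <= tol:
                    region[nr, nc] = True
                    queue.append((nr, nc))
    return region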

Region Splitting & Merging

  • Region Splitting: Divides an image into a set of disjoint regions based on a homogeneity criterion.
  • Method: Use a top-down approach, starting with the whole image and recursively splitting it into smaller regions until each region meets the criterion.
  • Region Merging: Combines adjacent regions that have similar properties.
  • Method: Use a bottom-up approach, starting with small regions and merging them based on a similarity measure until no further merging is possible.

Image Restoration (Module 5)

Introduction

Image restoration aims to reconstruct or recover an image that has been degraded by known or unknown factors. It involves reversing the effects of blurring, noise, and other distortions to retrieve the original image.

Degradation Model

  • Degradation Process: An observed image g(x,y) can be modeled as the original image f(x,y) degraded by a function h(x,y) plus additive noise η(x,y):

        g(x,y) = h(x,y) ∗ f(x,y) + η(x,y)

    • where ∗ denotes convolution.
  • Common Degradations: Includes blur due to motion or defocus, noise (Gaussian, salt-and-pepper), and geometric distortions.

Discrete Formulation

  • Discrete Model: For digital images, the degradation model can be expressed in discrete form as a convolution sum:

        g[i,j] = Σ_m Σ_n h[m,n] · f[i−m, j−n] + η[i,j]

    • where g[i,j] is the observed image, h[m,n] is the degradation function, f[i,j] is the original image, and η[i,j] is the noise.
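
To make the model concrete, a short sketch that simulates the discrete degradation g = h ∗ f + η with a uniform blur kernel and Gaussian noise (the kernel size and noise level are arbitrary example values):

import numpy as np
from scipy.ndimage import convolve

def degrade(f, h, noise_sigma=5.0, rng=None):
    """Simulate g = h * f + eta: blur the original image f with the
    degradation kernel h and add zero-mean Gaussian noise eta."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve(f.astype(float), h, mode='reflect')
    eta = rng.normal(0.0, noise_sigma, size=f.shape)
    return blurred + eta

# Example degradation function: a 5x5 uniform (defocus-like) blur kernel.
h = np.full((5, 5), 1.0 / 25.0)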

Algebraic Approach to Restoration

Unconstrained Restoration

  • Objective: Find an estimate f̂ of the original image f that minimizes the difference between the observed image g and the degraded model.
  • Inverse Filtering: The simplest approach, in which the estimate is obtained in the frequency domain as:

        F̂(u,v) = G(u,v) / H(u,v)

    • where F(u,v), G(u,v), and H(u,v) are the Fourier transforms of f(x,y), g(x,y), and h(x,y), respectively. This method is very sensitive to noise, especially at frequencies where H(u,v) is close to zero.
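
A sketch of frequency-domain inverse filtering with NumPy's FFT; the eps guard that skips near-zero values of H(u,v) is an ad-hoc safeguard added for this example, not part of the basic method:

import numpy as np

def inverse_filter(g, h, eps=1e-3):
    """Frequency-domain inverse filtering: F_hat = G / H.
    Frequencies where |H| is tiny are left untouched to limit noise amplification."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)              # zero-pad the kernel to image size
    H_safe = np.where(np.abs(H) < eps, 1.0, H)
    F_hat = np.where(np.abs(H) < eps, G, G / H_safe)
    return np.real(np.fft.ifft2(F_hat))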

Constrained Restoration

  • Objective: Incorporate constraints to improve the stability and robustness of the restoration process.
  • Regularization: Introduce a regularization term to control the smoothness of the solution.

Constrained Least Squares Restoration

  • Objective Function: Minimize a cost function that balances fidelity to the observed image against a smoothness (regularization) term:

        J(f̂) = ‖ g − h ∗ f̂ ‖² + λ ‖ ∇f̂ ‖²

    • where λ is a regularization parameter and ∇f̂ is the gradient of the estimated image.
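
A sketch of the closed-form frequency-domain solution, assuming the smoothness operator is a Laplacian (the common textbook choice) rather than the plain gradient, and an illustrative hand-picked λ:

import numpy as np

def constrained_least_squares(g, h, lam=0.01):
    """Closed-form frequency-domain restoration:
    F_hat = conj(H) * G / (|H|^2 + lam * |P|^2),
    where P is the transform of a Laplacian smoothness operator."""
    laplacian = np.array([[ 0, -1,  0],
                          [-1,  4, -1],
                          [ 0, -1,  0]], dtype=float)
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    P = np.fft.fft2(laplacian, s=g.shape)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + lam * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F_hat))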

Restoration by Homomorphic Filtering

  • Homomorphic Filtering: Used to correct multiplicative noise and enhance image features.
  • Log Transformation: Model the image as the product of an illumination component and a reflectance component, f(x,y) = i(x,y) · r(x,y), and convert this multiplicative relationship into an additive one by taking the logarithm of the image intensities:

        ln f(x,y) = ln i(x,y) + ln r(x,y)
  • Filtering: Apply linear filtering in the logarithmic domain to separate and enhance different components (illumination and reflectance).
  • Inverse Log Transformation: Transform back to the original intensity domain by taking the exponential.
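
Putting the three steps together, a sketch of homomorphic filtering with a Gaussian-shaped high-emphasis transfer function; the gain and cutoff parameters are illustrative, not prescribed values:

import numpy as np

def homomorphic_filter(image, gamma_low=0.5, gamma_high=2.0, cutoff=30.0):
    """Log -> frequency-domain high-emphasis filter -> exp.
    Low frequencies (illumination) are attenuated toward gamma_low,
    high frequencies (reflectance detail) are boosted toward gamma_high."""
    img = image.astype(float) + 1.0            # avoid log(0)
    log_img = np.log(img)

    Z = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2     # squared distance from the centre
    # Gaussian-shaped high-emphasis transfer function.
    H = (gamma_high - gamma_low) * (1 - np.exp(-D2 / (2 * cutoff ** 2))) + gamma_low

    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.exp(filtered) - 1.0              # inverse log transformation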

Geometric Transformation

Spatial Transformation

  • Objective: Correct geometric distortions by mapping pixels from the degraded image to the original image coordinates.
  • Common Transformations:
    • Translation: Shifting the image by a certain offset.
    • Rotation: Rotating the image around a point.
    • Scaling: Resizing the image.
    • Affine Transformations: Combining linear transformations like rotation, scaling, and translation.

Gray Level Interpolation

  • Interpolation Methods: Used to estimate the intensity values at non-integer coordinates after spatial transformation.
    • Nearest Neighbor Interpolation: Assigns the value of the nearest pixel.
    • Bilinear Interpolation: Uses a weighted average of the four nearest pixel values.
    • Bicubic Interpolation: Uses a weighted average of the sixteen nearest pixel values, providing smoother results.
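
A minimal sketch of bilinear interpolation at a single non-integer coordinate; nearest-neighbor and bicubic interpolation follow the same pattern with 1 and 16 neighboring pixels, respectively:

import numpy as np

def bilinear_sample(image, x, y):
    """Estimate intensity at non-integer coordinates (x, y) as a weighted
    average of the four surrounding pixels (x along columns, y along rows)."""
    h, w = image.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0

    top = (1 - dx) * image[y0, x0] + dx * image[y0, x1]
    bottom = (1 - dx) * image[y1, x0] + dx * image[y1, x1]
    return (1 - dy) * top + dy * bottom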

