OpenCV Image Processing Techniques
import cv2
Blending is a technique used to combine two images into one. This can be useful
in various applications like creating effects, overlays, or transitions.
To blend two images using OpenCV in Python, you can use the cv2.addWeighted()
function. This function allows you to specify the weights for each image, which
determines how much of each image will be included in the blended result.
The function cv2.addWeighted() takes the two images and their respective weights,
along with the gamma value, and computes the blended result as
dst = α·image1 + β·image2 + γ.
The values of α and β should be chosen such that their sum is typically equal to 1, so
that the overall brightness of the blended image stays consistent.
# Load the two images to be blended
image1 = cv2.imread('path_to_image1.jpg') # Replace with your image paths
image2 = cv2.imread('path_to_image2.jpg')
# Resize images to the same shape if they are not already the same size
image1 = cv2.resize(image1, (640, 480))
image2 = cv2.resize(image2, (640, 480))
# Set the blending weights
alpha = 0.5 # Weight for the first image
beta = 0.5 # Weight for the second image
gamma = 0.0 # Scalar added to each sum
# Blend the images: dst = alpha*image1 + beta*image2 + gamma
blended_image = cv2.addWeighted(image1, alpha, image2, beta, gamma)
cv2.imwrite('blended_image.tif', blended_image)
This saves the resulting image as 'blended_image.tif' in the current working
directory.
Changing Contrast and Brightness of the given image using OpenCV
import cv2
import numpy as np
def adjust_brightness_contrast(image, brightness=0, contrast=0):
    # Scale pixel values for contrast and shift them for brightness;
    # cv2.convertScaleAbs clips the result to the valid [0, 255] range
    adjusted_image = cv2.convertScaleAbs(image, alpha=1 + contrast / 100.0, beta=brightness)
    return adjusted_image
# Load the image
image_path = r"C:\Users\saimo\Pictures\standard_test_images\[Link]"
image = cv2.imread(image_path)
if image is None:
    print("Error: Unable to load image.")
else:
    # Adjust brightness and contrast
    brightness = 50 # Change this value as needed
    contrast = 30 # Change this value as needed
    adjusted_image = adjust_brightness_contrast(image, brightness, contrast)
    cv2.imshow('Adjusted Image', adjusted_image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
It converts the result to an absolute value (ensuring pixel values remain within the
valid range) and handles overflow by clipping values at 255 (the maximum for an 8-bit
image).
#Adding a text to an image
import cv2
# Load the image
image_path = r"C:\Users\saimo\Pictures\standard_test_images\[Link]"
image = cv2.imread(image_path)
if image is None:
    print("Error: Unable to load image.")
else:
    text = "SVR ENGINEERING COLLEGE" # Define the text to be added
    # Set the font, scale, color, and thickness
    font = cv2.FONT_HERSHEY_SIMPLEX # Font type
    scale = 1 # Font scale (size)
    color = (255, 255, 255) # White color in BGR (Blue, Green, Red)
    thickness = 2 # Thickness of the text
    position = (50, 50) # Bottom-left corner of the text (example value; adjust as needed)
    # Draw the text onto the image
    cv2.putText(image, text, position, font, scale, color, thickness)
    cv2.imwrite(r"C:\Users\saimo\Pictures\standard_test_images\image_with_text.tif",
    image)
Image smoothing is a fundamental process in digital image processing aimed at reducing noise
and improving the quality of an image. It is often used as a preprocessing step in computer vision
tasks to enhance the performance of algorithms by minimizing noise that could interfere with
feature extraction or recognition processes.
There are several types of filters used for smoothing images, each with its characteristics and
applications:
1. *Mean Filter*: A simple linear filter that replaces each pixel value with the average of its
neighbors. It effectively reduces noise, but can also blur edges.
2. *Gaussian Filter*: A more sophisticated linear filter that applies a Gaussian function to weight
pixels differently based on their distance from the center. It reduces noise while preserving edges
better than the mean filter.
3. *Median Filter*: A nonlinear filter that replaces each pixel with the median of the pixel
values in its neighborhood. It is particularly effective at removing salt-and-pepper noise while
preserving edges.
4. *Bilateral Filter*: Combines domain and range filtering to preserve edges while smoothing
images. It uses both spatial distance and intensity difference to weight pixels.
The median filter is a widely used technique for noise reduction in images. Its primary
advantage is its ability to remove noise without blurring edges, making it ideal for applications
where edge preservation is crucial.
How the Median Filter Works
1. *Kernel Definition*:
- The filter uses a small, square-shaped kernel (e.g., 3x3, 5x5) that slides over the image.
- The kernel size is typically an odd number to ensure there's a central pixel.
2. *Sorting*:
- At each kernel position, the pixel values covered by the kernel are collected and sorted.
3. *Replacement*:
- The central pixel is replaced with the median of the sorted neighborhood values.
4. *Border Handling*:
- Special strategies are employed at the image borders, such as padding with zeros, mirroring
the image, or using a smaller kernel.
Advantages of the Median Filter
- Edge Preservation: Unlike linear filters, the median filter preserves sharp edges, making it
suitable for images with significant detail.
- Non-linear Nature: The non-linear operation helps in maintaining the integrity of significant
features while altering less critical data points.
Practical Applications
1. Medical Imaging: Enhancing images from medical scans (e.g., MRI, CT) by reducing noise
while preserving anatomical structures.
2. Astronomy: Cleaning astronomical images to remove noise from sensors while maintaining
star and galaxy edges.
3. Surveillance: Improving the clarity of video frames in security footage, which often suffer
from various types of noise.
Limitations of the Median Filter
- Ineffective for Gaussian Noise: While excellent for impulsive noise, the median
filter may not be as effective for Gaussian noise, where Gaussian filters might be
preferred.
- Image Border Handling: Managing the edges of the image can be challenging,
requiring specific strategies to avoid artifacts.
Smoothing images using Median Filter:
You can smooth images using a median filter in Python with the help of the OpenCV
library. The median filter is effective in reducing noise while preserving edges in an
image.
Smoothing images using Gaussian Filter:
import cv2
# Load the image
image_path = 'path_to_your_image.jpg' # Replace with your image path
image = cv2.imread(image_path)
if image is None:
    print("Error: Unable to load image.")
else:
    # Apply Gaussian filter
    # The first parameter is the source image,
    # the second parameter is the kernel size (must be odd) and
    # the third is the standard deviation in the X direction (also used for Y
    # when sigmaY is not given).
    kernel_size = (5, 5) # Must be odd numbers
    sigma = 1.5 # Standard deviation for the Gaussian kernel
    smoothed_image = cv2.GaussianBlur(image, kernel_size, sigma)
    # Display the original and smoothed images
    cv2.imshow('Original Image', image)
    cv2.imshow('Smoothed Image', smoothed_image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
1. Weight Calculation:
- The weights are determined by the Gaussian function:
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
Where:
- (x, y) are the distances from the center of the kernel.
- σ is the standard deviation.
2. Convolution:
- Each pixel value in the output image is calculated as a weighted sum of the neighboring pixels,
where the weights are derived from the Gaussian function.
- This results in a smoothing effect, where pixel values are blended with their neighbors
according to the Gaussian weights.
# Image Smoothing using Bilateral Filter
import cv2
# Load the image
image_path = 'path_to_your_image.jpg' # Replace with your image path
image = cv2.imread(image_path)
if image is None:
    print("Error: Unable to load image.")
else:
    # Apply Bilateral filter
    # The first parameter is the source image.
    # The second parameter is the diameter of the pixel neighborhood.
    # The third parameter is the sigma in the color space.
    # The fourth parameter is the sigma in the coordinate space.
    diameter = 15 # Diameter of the pixel neighborhood
    sigma_color = 75 # Filter sigma in color space
    sigma_space = 75 # Filter sigma in coordinate space
    smoothed_image = cv2.bilateralFilter(image, diameter, sigma_color, sigma_space)
    # Display the original and smoothed images
    cv2.imshow('Original Image', image)
    cv2.imshow('Smoothed Image', smoothed_image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
Unlike linear filters (like Gaussian or mean filters), which apply a uniform smoothing effect
across the entire image, the bilateral filter considers both the spatial distance of pixels and their
color similarities.
This allows it to smooth areas of the image while preserving sharp edges, making it effective for
applications where detail is important.
Function Signature
dst = cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace, borderType=cv2.BORDER_DEFAULT)
### Parameters
1. *src*:
- This is the source image that you want to process. It can be a grayscale or a color (BGR)
image.
2. *d*:
- This parameter specifies the diameter of the pixel neighborhood that is used during
filtering. It determines how many neighboring pixels will influence the output pixel.
- A larger value means more pixels are considered for the filter, leading to greater
smoothing.
- If set to 0, it will be calculated based on sigmaSpace.
3. *sigmaColor*:
- This parameter defines the filter sigma in the color space. It controls how much the colors of
neighboring pixels influence each other during filtering.
- A larger value allows for a wider range of colors to blend together, leading to more significant
smoothing.
- If two pixels are close in color (within sigmaColor), their values will be averaged.
4. *sigmaSpace*:
- This parameter defines the filter sigma in the coordinate space. It controls how much the spatial
distance influences the filtering.
- A larger value means that pixels further away from the center pixel will still have a significant effect
on the output.
- This parameter helps maintain edges by reducing the effect of distant pixels in areas with
significant color differences.
5. *borderType* (optional):
- This parameter defines how the borders of the image are handled. The default is
cv2.BORDER_DEFAULT, but other options like cv2.BORDER_REFLECT, cv2.BORDER_CONSTANT, etc., can
be used to manage the image boundaries.
How Bilateral Filtering Works
1. Weight Calculation:
- For each pixel in the image, the bilateral filter calculates weights based on both spatial and
color differences from the center pixel.
- The weights are calculated using Gaussian functions for both the color differences and spatial
distances as follows:
w_i = exp(−d_i² / (2·sigmaSpace²)) · exp(−(I_i − I)² / (2·sigmaColor²))
where d_i is the spatial distance of neighbor i from the center pixel, and I_i and I are the
intensities of the neighbor and the center pixel, respectively.
2. *Normalization and Output Calculation*: The output pixel value is calculated using the
following equation:
- The output pixel value = (Summation of weighted neighboring pixel values) / (the sum of the
weights of all neighboring pixels), i.e.,
I' = [sum (w_i x I_i)] / [sum (w_i)]
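The weight calculation can be sketched in NumPy for a single pair of neighbors (the positions and intensities below are made-up illustration values, and sigmaColor = sigmaSpace = 75 matches the earlier example):

```python
import numpy as np

sigma_color, sigma_space = 75.0, 75.0
center = np.array([2, 2])
center_intensity = 100.0

# Two neighbors at the same spatial distance: one similar in color, one across an "edge"
neighbors = [((2, 3), 105.0), ((3, 2), 240.0)]

weights = []
for pos, intensity in neighbors:
    spatial_sq = float(np.sum((np.array(pos) - center) ** 2))
    w = np.exp(-spatial_sq / (2 * sigma_space ** 2)) * \
        np.exp(-(intensity - center_intensity) ** 2 / (2 * sigma_color ** 2))
    weights.append(w)

# The color-similar neighbor keeps a weight near 1, while the pixel across the
# edge is almost ignored -- this is how the bilateral filter preserves edges.
print(weights)
```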
Changing the shape of Images
Image Shape Transformation
1. Resizing:
- Changing the dimensions of the image (width and height).
- Can be done to fit images into a specific format or to scale them for processing.
- Maintains the aspect ratio or changes it based on the requirement.
2. Cropping:
- Removing a portion of the image to focus on a specific area.
- Useful for highlighting subjects or removing unwanted parts of an image.
3. Rotating:
- Rotating the image around its center by a specified angle.
- Commonly used to correct orientation or create artistic effects.
4. Flipping:
- Mirroring the image along a specified axis (horizontal or vertical).
- Useful for creating symmetrical designs or augmenting datasets.
5. Warping:
- Applying a non-linear transformation to alter the shape of the image.
- Can be used for artistic effects or to correct perspective distortions.
### Python Code for Changing the Shape of Images
Below is a Python script that demonstrates how to perform several common shape
transformations using OpenCV.
#Changing the shape of the image
import cv2
# Load the image
image = cv2.imread('path_to_your_image.jpg') # Replace with your image path
if image is None:
    print("Error: Unable to load image.")
else:
    # 1. Resizing
    new_size = (300, 300) # Width, Height
    resized_image = cv2.resize(image, new_size)
    # 2. Cropping
    # Define the coordinates for cropping (y_start:y_end, x_start:x_end)
    cropped_image = image[50:250, 100:300] # Adjust these values as needed
    # 3. Rotating
    angle = 45 # Angle of rotation
    (h, w) = image.shape[:2]
    center = (w // 2, h // 2) # Center of the image
    matrix = cv2.getRotationMatrix2D(center, angle, 1.0) # Rotation matrix
    rotated_image = cv2.warpAffine(image, matrix, (w, h))
    # 4. Flipping
    flipped_image_horizontal = cv2.flip(image, 1) # 1 for horizontal flip
    flipped_image_vertical = cv2.flip(image, 0) # 0 for vertical flip
- This extracts a rectangular region from the original image using slicing.
- The image is rotated by the specified angle around its center using the rotation matrix.
- The cv2.flip() function flips the image horizontally or vertically based on the specified
parameter.
Effecting Image Thresholding
“Effecting image thresholding” refers to the process of applying a threshold to an image to
separate the given image into different regions based on pixel intensity values.
This technique is commonly used in image processing to create binary images from grayscale
images.
Thresholding Operation: Each pixel in a grayscale image (for example) has a value representing its
intensity, typically ranging from 0 (black) to 255 (white). A specific intensity value is chosen as the
cutoff. Pixels with intensity values above the threshold are set to one value (usually white), and
those below are set to another (usually black).
This method is useful for tasks like object detection, segmentation, and simplifying images for
further analysis.
Different types of thresholding methods (e.g., global, adaptive) can be applied depending on the
image and the desired outcome.
*Segmentation*: By converting a grayscale image into a binary image, thresholding helps to
segment the image into distinct regions. This makes it easier to isolate objects of interest
from the background.
*Noise Reduction*: Applying a threshold can help eliminate noise and minor variations
within the image, making object boundaries clearer and more defined.
*Feature Extraction*: After thresholding, it becomes easier to extract features like
contours, edges, and shapes, which are essential for identifying and classifying
objects.
*Adaptive Methods*: Techniques like adaptive thresholding can adjust the threshold
dynamically based on local image characteristics, making them effective in varying
lighting conditions or complex scenes.
Classification of Thresholding Methods:
Thresholding methods can be classified into several categories based on their
approach and application. Here are the main types:
1. *Global Thresholding*: In global thresholding, a single intensity threshold value is
applied to the entire image.
- *Method*: A fixed value (e.g., determined through experimentation or statistical
analysis) is chosen, and all pixel values above this threshold are set to one value
(usually white), while those below are set to another (usually black).
- *Use Case*: Effective for images with uniform lighting and contrast.
*Example*: Otsu's method, which calculates an optimal threshold to minimize intra-
class variance.
2. *Local (Adaptive) Thresholding*: Local thresholding, or adaptive thresholding,
calculates the threshold for smaller regions of the image, allowing for different
thresholds in different areas.
- *Method*: The threshold is determined based on the local neighborhood of each
pixel. This is particularly useful for images with varying illumination.
- *Use Case*: Effective for images where lighting conditions change across the
image.
*Example*: Mean or Gaussian adaptive thresholding, where the threshold is
computed as the mean or weighted mean of pixel values in the surrounding region.
3. *Binarization Methods*: These methods are specifically designed for converting
images into binary format (black and white).
- *Method*: Similar to global thresholding but often incorporates techniques to
handle noise or specific object characteristics.
- *Use Case*: Commonly used in document image analysis (e.g., OCR) and medical
imaging.
*Example*: Sauvola's method, which adjusts the threshold based on local standard
deviation.
4. *Multi-level Thresholding*: In multi-level thresholding, multiple thresholds are
used to segment the image into several regions instead of just two.
- *Method*: More than one threshold value divides the image into multiple classes
or segments.
- *Use Case*: Useful for images with multiple objects, varying intensity levels, or
multimodal intensity distributions.
5. *Fuzzy Thresholding*: Fuzzy thresholding incorporates fuzzy logic principles to
determine thresholds based on the degree of membership of pixel values in various
classes.
- *Method*: Instead of a hard threshold, it assigns a degree of belonging to each
class based on fuzzy rules and membership functions.
- *Use Case*: Beneficial for images with overlapping intensity distributions and
ambiguous boundaries.
6. *Dynamic Thresholding*: Dynamic thresholding adjusts the threshold in real-time
based on specific criteria or feedback.
- *Method*: The threshold can change during processing based on user interaction
or algorithmic feedback.
- *Use Case*: Useful in video processing where lighting conditions vary rapidly.
# Different Thresholding Techniques
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Load the image in grayscale
image = cv2.imread('path_to_your_image.jpg', cv2.IMREAD_GRAYSCALE) # Replace with your image path
# Global Thresholding
_, global_thresh = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)
# Otsu's Thresholding
otsu_thresh, otsu_result = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Adaptive Thresholding
adaptive_thresh_mean = cv2.adaptiveThreshold(image, 255,
                                             cv2.ADAPTIVE_THRESH_MEAN_C,
                                             cv2.THRESH_BINARY,
                                             11, 2)
adaptive_thresh_gaussian = cv2.adaptiveThreshold(image, 255,
                                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                                 cv2.THRESH_BINARY,
                                                 11, 2)
# Plot the original image and the thresholded results
plt.figure(figsize=(12, 8))
plt.subplot(2, 3, 1)
plt.title('Original Image')
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.subplot(2, 3, 2)
plt.title('Global Thresholding')
plt.imshow(global_thresh, cmap='gray')
plt.axis('off')
plt.subplot(2, 3, 3)
plt.title("Otsu's Thresholding")
plt.imshow(otsu_result, cmap='gray')
plt.axis('off')
plt.subplot(2, 3, 4)
plt.title('Adaptive Thresholding (Mean)')
plt.imshow(adaptive_thresh_mean, cmap='gray')
plt.axis('off')
plt.subplot(2, 3, 5)
plt.title('Adaptive Thresholding (Gaussian)')
plt.imshow(adaptive_thresh_gaussian, cmap='gray')
plt.axis('off')
plt.tight_layout()
plt.show()
Explanation:
- *Loading the Image*: The code starts by loading a grayscale image using OpenCV.
- *Global Thresholding*: A fixed threshold of 127 is applied to create a binary image.
- *Otsu's Thresholding*: Automatically calculates an optimal threshold using Otsu’s
method.
- *Adaptive Thresholding*: Two types of adaptive thresholding are demonstrated:
- *Mean*: Uses the mean of the neighborhood area.
- *Gaussian*: Uses a weighted sum of the neighborhood area, giving more weight
to closer pixels.
- *Plotting Results*: Finally, the original and thresholded images are displayed using
Matplotlib.
The statement
otsu_thresh, otsu_result = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
is used in OpenCV to apply Otsu's thresholding method to a grayscale image. Here's a breakdown
of each component of the statement:
1. *cv2.threshold*: This is a function from the OpenCV library used to apply a thresholding
operation to an image.
2. *image*: This is the input image to which the thresholding operation is applied. It should be a
grayscale image, where each pixel has a value representing its intensity (typically from 0 to 255).
3. *0*: This parameter is intended to be the threshold value. However, when using Otsu's method,
this value is ignored because Otsu's algorithm automatically calculates the optimal threshold value
based on the image histogram.
4. *255*: This is the maximum value that will be assigned to pixels exceeding the threshold. In
binary images, pixel values greater than the threshold will be set to this value (white).
5. *cv2.THRESH_BINARY*: This flag selects the binary thresholding type: pixels above the
threshold are set to the maximum value (255), and all others are set to 0.
6. *cv2.THRESH_OTSU*: This flag tells the cv2.threshold function to use Otsu's method to
automatically determine the optimal threshold value from the image histogram. Otsu's method
maximizes the variance between the two classes of pixels (foreground and background), making
it effective for images with bimodal intensity distributions.
Return Values
- *otsu_thresh*: This variable will hold the calculated threshold value determined by Otsu's
method. Although you can ignore it in further processing, it can be useful for analysis or
debugging.
- *otsu_result*: This variable contains the resulting binary image after applying the thresholding
operation. Pixels in this image will either be set to 255 (white) or 0 (black).
Summary
In summary, this statement applies Otsu's thresholding to the input grayscale image,
automatically determining the best threshold to separate the foreground from the background.
The result is a binary image where the pixels are classified based on the optimal threshold,
making it a popular choice for image segmentation tasks.
The statement adaptive_thresh_mean = cv2.adaptiveThreshold(image, 255,
cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2) applies adaptive thresholding to
a grayscale image using OpenCV. Here's a breakdown of each component:
1. *cv2.adaptiveThreshold*: This is a function from the OpenCV library used to apply adaptive
thresholding to an image. Unlike global thresholding, adaptive thresholding calculates the
threshold value for smaller regions of the image, allowing for more flexibility in areas with
varying lighting conditions.
2. *image*: This is the input grayscale image that you want to threshold. It should be a single-
channel image where pixel values range from 0 to 255.
3. *255*: This parameter specifies the maximum value to be assigned to pixels that exceed the
calculated threshold for their respective neighborhoods. In a binary image, this value typically
represents white.
4. *cv2.ADAPTIVE_THRESH_MEAN_C*: This flag indicates the adaptive thresholding method to be
used. ADAPTIVE_THRESH_MEAN_C computes the threshold value for a pixel as the mean of the
pixel values in a neighborhood around it, minus a constant (which is specified next). This is
effective for images with varying illumination.
5. *cv2.THRESH_BINARY*: This flag specifies the thresholding type: pixels above the locally
computed threshold are set to the maximum value, and all others are set to 0.
6. *11*: This parameter defines the size of the neighborhood (block size) used to calculate the
mean for adaptive thresholding. It must be an odd number (e.g., 11) because the neighborhood is
centered around each pixel.
7. *2*: This constant is subtracted from the mean calculated in the neighborhood. It helps to fine-
tune the thresholding process. The value can be adjusted based on the particular characteristics
of the image to enhance the results.
The statement adaptive_thresh_gaussian = cv2.adaptiveThreshold(image, 255,
cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2) applies adaptive thresholding to
a grayscale image using the Gaussian method in OpenCV. Here's a detailed breakdown of each
component:
### Components Explained
1. *cv2.adaptiveThreshold*: This function from the OpenCV library is used for adaptive
thresholding, which calculates the threshold for smaller regions of the image rather than using a
single global threshold. This is particularly useful in images with varying lighting conditions.
2. *image*: This is the input grayscale image that you want to process. It should be a single-
channel image, where pixel values range from 0 (black) to 255 (white).
3. *255*: This parameter specifies the maximum value that will be assigned to pixels that exceed
the calculated threshold in their respective neighborhoods. In binary images, this value typically
represents white.
4. *cv2.ADAPTIVE_THRESH_GAUSSIAN_C*: This flag indicates the adaptive thresholding method to
use. ADAPTIVE_THRESH_GAUSSIAN_C calculates the threshold for each pixel based on the
weighted sum of the neighborhood pixel values, where weights are assigned according to a
Gaussian window. This method tends to give more influence to pixels closer to the center of the
neighborhood.
5. *cv2.THRESH_BINARY*: This flag specifies the thresholding type, producing a binary output
image.
6. *11*: This parameter defines the size of the neighborhood (block size) used to calculate the
Gaussian-weighted mean for adaptive thresholding. It must be an odd number (e.g., 11), as the
neighborhood is centered around each pixel.
7. *2*: This constant is subtracted from the calculated Gaussian-weighted mean. It acts as a
tuning parameter to adjust the thresholding result. By modifying this value, you can enhance or
reduce the sensitivity of the thresholding based on the characteristics of the image.
Calculating Gradients
Gradient: -
The ‘steepness of intensity values’ in an image refers to the rate of change of pixel intensity
values across the image.
In the context of image processing and computer vision, a gradient represents the
steepness of intensity values in an image. That is, a gradient represents the rate of change
of pixel intensity values across the image.
Gradients are fundamental in understanding how the pixel values change in the spatial
domain, and they are crucial for various tasks, such as edge detection, image segmentation,
and feature extraction.
Let us consider a digital image in which z denotes the pixel intensity values.
*Magnitude of the Gradient*:
- The magnitude of the gradient is computed using the formula:
magnitude = sqrt(Gx² + Gy²)
Where Gx and Gy are the gradients in the x (horizontal) and y (vertical) directions,
respectively.
- This magnitude provides a single value representing how steep the change in intensity is at
a given pixel location.
1. *Edge Detection*:
- Areas where the steepness of intensity values is high (i.e., where the gradient magnitude is
large) correspond to edges in the image. These edges are crucial for identifying shapes,
objects, and boundaries.
2. *Texture Analysis*:
- The steepness of intensity changes can also indicate texture. For instance, a surface with
abrupt changes in intensity might appear rough, while a smooth surface would have gentle
gradients.
3. *Image Segmentation*:
- Steepness helps in segmenting an image into different regions based on intensity
variations, allowing for better identification of objects and features.
### Example
- *Sharp Edge*: In an area where there is a sudden change in intensity (e.g., from
black to white), the steepness is high, resulting in larger gradient values.
### Visual Representation
Here’s a simple example of how steepness can be visualized:
- *Grayscale Image*:
- A smooth gradient from dark gray (intensity value 50) to light gray (intensity value 200)
will have a low gradient.
- An area with a sharp transition from dark gray (50) to white (255) will produce a high
gradient, representing a steep change.
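The same idea in numbers: taking simple neighbor differences along one row (a crude stand-in for Gx) yields small values over the smooth ramp and a large spike at the sharp transition.

```python
import numpy as np

# One image row: a gentle ramp followed by a sharp jump to white
row = np.array([50, 52, 54, 56, 255, 255], dtype=np.float64)

# Approximate the horizontal gradient Gx by the difference of adjacent pixels
gx = np.diff(row)
print(gx)  # [  2.   2.   2. 199.   0.]: the spike of 199 marks the edge
```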
### Conclusion
The gradient (steepness of intensity values) in an image is a fundamental concept that
helps in understanding how brightness and color vary across the image. By analyzing
gradients, one can detect edges, identify regions, and extract meaningful features from
images, which are critical tasks in image processing and computer vision.
#### Calculating Gradients Using Convolution
import cv2
import numpy as np
# Load the image in grayscale
image = cv2.imread('path_to_your_image.jpg', cv2.IMREAD_GRAYSCALE) # Replace with your image path
if image is None:
    print("Error: Unable to load image.")
else:
    # Calculate gradients using the Sobel operator
    sobel_x = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3) # Gradient in x direction
    sobel_y = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3) # Gradient in y direction
    # Gradient magnitude and direction
    magnitude = cv2.magnitude(sobel_x, sobel_y)
    direction = cv2.phase(sobel_x, sobel_y, angleInDegrees=True)
- The magnitude of the gradient is calculated using cv2.magnitude(), which computes the
Euclidean distance of the gradient vector:
magnitude = sqrt{(Gx^2 + Gy^2)}
- The direction of the gradient is calculated using cv2.phase(), which provides the angle of
the gradient vector in degrees.
### Applications of Gradients
- Edge Detection: Gradients are commonly used in edge detection algorithms (e.g., Canny
edge detector) to identify areas of significant intensity change.
- Feature Extraction: Gradients provide important features that can be used in various
computer vision applications, such as texture analysis and shape recognition.
Performing Histogram Equalization
Histogram equalization is a technique in image processing used to improve the contrast of an
image.
It works by redistributing the intensity values of the image to cover the full range of possible
values, resulting in a more balanced histogram.
This process can enhance the visibility of features in an image that may be obscured (out of sight)
due to poor contrast.
Contrast refers to the difference in luminance or color that makes an object distinguishable from
other objects and the background in an image. It is a critical aspect of image quality that affects
how well details can be perceived. In simpler terms, contrast is the degree to which the lightest
and darkest parts of an image differ from each other.
Histogram: - A histogram is a graphical representation of the distribution of pixel intensity values
in an image. It shows how many pixels fall within each intensity level (from 0 to 255 for an 8-bit
grayscale image). - For example, a histogram may indicate that most pixels in an image are dark
(low intensity values), which can lead to a loss of detail in darker regions.
Purpose of Histogram Equalization: - The primary goal is to enhance the contrast of an image by
spreading out the most frequent intensity values. This makes the image appear more defined and
improves the visibility of details.
Cumulative Distribution Function (CDF): - Histogram equalization uses the cumulative distribution
function of the histogram to transform the pixel values. The CDF represents the cumulative sum of
the probabilities of pixel intensities.
How Histogram Equalization Works
The process of histogram equalization can be summarized in the following steps:
1. Compute the histogram of the input image.
2. Compute the cumulative distribution function (CDF) of the histogram.
3. Normalize the CDF so that its values span the full intensity range [0, 255].
4. Map each pixel's original intensity to its new value using the normalized CDF.
# Performing Histogram Equalization
import cv2
import matplotlib.pyplot as plt
# Load the image in grayscale and equalize it
image = cv2.imread('path_to_your_image.jpg', cv2.IMREAD_GRAYSCALE) # Replace with your image path
equalized_image = cv2.equalizeHist(image)
# Display the original and equalized images along with their histograms
cv2.imshow('Original Image', image)
cv2.imshow('Equalized Image', equalized_image)
# Calculate histograms
hist_original = cv2.calcHist([image], [0], None, [256], [0, 256])
hist_equalized = cv2.calcHist([equalized_image], [0], None, [256], [0, 256])
# Plot histograms
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.title('Original Histogram')
plt.plot(hist_original)
plt.xlim([0, 256])
plt.subplot(1, 2, 2)
plt.title('Equalized Histogram')
plt.plot(hist_equalized)
plt.xlim([0, 256])
plt.show()
- [image]:
- The input image is passed inside a list, since cv2.calcHist can accept multiple images.
- [0]:
- This specifies the index of the channel for which to compute the histogram; 0 is the only
channel in a grayscale image.
- None:
- This parameter is used to specify a mask. If you want to calculate the histogram for a specific
region of the image, you would provide a mask that defines that region. By passing None, we are
indicating that we want to include the entire image without any masking.
- [256]:
- This parameter specifies the number of bins to use for the histogram. In this case, 256 bins are
used, which is standard for an 8-bit grayscale image where intensity values range from 0 to 255.
Each bin will represent the count of pixels that fall within specific ranges of intensity values.
- [0, 256]:
- This argument defines the range of intensity values. Here, it specifies that we want to consider
the range from 0 to 256.
- Although pixel values in an 8-bit image range from 0 to 255, the upper limit of 256 ensures
that all pixel values are included in the histogram calculation. The last bin typically captures the
count of pixels with an intensity of 255.