Python Basics for Image Processing

The document outlines practical exercises in Python focused on data types, conditional statements, and loops, particularly in the context of computer vision. It covers the use of libraries like NumPy and OpenCV for image processing tasks such as reading, displaying, and manipulating images, as well as implementing techniques like histogram equalization and filtering. Each practical includes code examples and explanations of various image processing operations.


LY-IT-2 Batch-D 220410116043

Practical 1
Aim: Write a python program to study different data types, conditional statements, and
looping.

Understanding Data Types in Computer Vision


An image in programming is typically represented as a matrix (or array) of pixel values. Each pixel may have
one or more channels depending on the type of image. Grayscale images have a single channel representing
intensity, usually with values ranging from 0 to 255. Color images generally have three channels—red, green,
and blue—with similar ranges for each.

Pixels are stored as integers or floating-point numbers depending on the format and the precision required.
Common data types in computer vision include integers (for pixel intensity values), floats (for normalized or
precise values), booleans (used in binary masks or segmentation maps), lists and arrays (to hold sequences of
pixels or multiple images), tuples (to represent coordinates or color values), and dictionaries (to store
metadata such as bounding box coordinates or object properties).

Common Data Types Used


● Integer: Used for pixel intensity values (0-255).
● Float: Used when precision is needed, e.g., normalized pixel values (0.0 - 1.0).
● Boolean: Used for binary masks or thresholds (True for object, False for background).
● Lists/Arrays: Store sequences of pixels or collections of images.
● Tuples: Represent coordinates (x, y) or color values (R, G, B).
● Dictionaries: Store metadata or properties (e.g., bounding box with keys like "x", "y", "width",
"height").
Conditional Statements in Computer Vision
Conditional logic allows decision-making based on pixel values or image properties.

1. Thresholding
● A basic technique to segment images.
● Example: If pixel intensity > threshold → pixel belongs to the object (foreground).
● Else → pixel belongs to background.
2. Filtering Based on Conditions
● Filter out noise by applying conditions (e.g., remove pixels below a certain brightness).
● Detect features like edges by checking gradient magnitude against a threshold.
3. Application of Conditionals
● Determine if a pixel is part of an object.

● Classify regions of an image.


● Decide when to stop iterative processes (e.g., when image quality improvement is below a threshold).
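The thresholding rule above can be written directly as a conditional over an array (a small sketch assuming NumPy; the threshold of 127 is arbitrary):

```python
import numpy as np

img = np.array([[30, 200], [150, 90]], dtype=np.uint8)
threshold = 127

# conditional per pixel: foreground (255) if intensity > threshold, else background (0)
binary = np.where(img > threshold, 255, 0).astype(np.uint8)
print(binary)
# [[  0 255]
#  [255   0]]
```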
Loops in Computer Vision
Loops are essential to process every pixel or multiple images.

1. For Loops
● Iterate over rows and columns of an image matrix.
● Apply operations like pixel value modification, counting features, or collecting data.
2. While Loops
● Useful for iterative algorithms (e.g., iterative image smoothing).
● Run until a certain condition is met, like minimal change between iterations.
3. Practical Examples
● Loop through all pixels to convert a color image to grayscale.
● Iterate through detected contours to draw bounding boxes.
● Process video frames one by one for real-time analysis.
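The first example, converting a color image to grayscale pixel by pixel, might look like this (a sketch assuming NumPy and OpenCV's BGR channel order; the 1x2 "image" is a toy stand-in):

```python
import numpy as np

# a tiny 1x2 BGR "color image": one pure-blue and one pure-red pixel
img = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
gray = np.zeros(img.shape[:2], dtype=np.uint8)

# loop over rows and columns, applying the luminance weighted sum
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        b, g, r = img[i, j]
        gray[i, j] = int(0.114 * b + 0.587 * g + 0.299 * r)

print(gray)  # blue pixel -> 29, red pixel -> 76
```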
Basic Image Processing Techniques Using Conditionals and Loops
1. Reading and Displaying Images
● Images are loaded as arrays.
● Visualize to understand pixel intensity and color distribution.
2. Color Space Conversion
● Convert RGB images to grayscale by calculating weighted sums of channels.
● Loop through pixels and apply the conversion formula.
3. Thresholding for Segmentation
● Apply conditional checks on pixel values.
● Generate binary images highlighting areas of interest.
4. Edge Detection and Contour Extraction
● Detect boundaries using gradients.
● Use loops and conditionals to identify connected regions.
5. Object Detection and Annotation
● Use conditionals to filter detected objects by size or shape.
● Loop through each object to annotate and display results.

Input:

import random as ran

Age = int(input("Enter Your Age:"))
num = ran.randint(1, 60)

if Age == num:
    print("You Are Lucky :)")
else:
    print("You Are Unlucky :(")

for i in range(1, num):
    # check the combined condition first, or the Fizz-Buzz branch is never reached
    if i % 3 == 0 and i % 5 == 0:
        print(i, ": Fizz-Buzz")
    elif i % 3 == 0:
        print(i, ": Fizz")
    elif i % 5 == 0:
        print(i, ": Buzz")
    else:
        print(i)

Output :

Practical 2
Aim: Write a python program to study user-defined function, NumPy array and
OpenCV library.

Numpy:
NumPy is the fundamental package for scientific computing in Python. It is a Python library that
provides a multidimensional array object, various derived objects (such as masked arrays and matrices),
and an assortment of routines for fast operations on arrays, including mathematical, logical, shape
manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical
operations, random simulation and much more.

Cv2:
OpenCV is a huge open-source library for computer vision, machine learning, and image processing, and it plays a major role in the real-time operation that is so important in today's systems. Using it, one can process images and videos to identify objects, faces, or even human handwriting. When integrated with libraries such as NumPy, Python can process the OpenCV array structure for analysis. To identify image patterns and their various features, we use vector spaces and perform mathematical operations on these features.

imread:
The cv2.imread() method loads an image from the specified file. If the image cannot be read (because of a missing file, improper permissions, or an unsupported or invalid format), the method returns None.

imshow:
The cv2.imshow() method is used to display an image in a window. The window is automatically sized to fit the image.

waitKey:
waitKey() is OpenCV's keyboard binding function: it waits the specified number of milliseconds for any keyboard event to happen. Passing 0 (or any negative value) is a special case that causes the current thread to wait indefinitely for a key press, which is very convenient when displaying images. Otherwise, if no key is pressed within the given time, waitKey() returns -1.

destroyAllWindows:
The OpenCV destroyAllWindows() function destroys or closes all open windows at any time after the script exits. Use it when you have multiple windows open at the same time and want to close them all. It takes no parameters and returns nothing. The related destroyWindow() function, by contrast, destroys only a specific named window.

Input:

# user defined function
def add(x, y):
    sum = x + y
    return sum

a = 10
b = 90
ans = add(a, b)
print("THE SUM OF " + str(a) + " AND " + str(b) + " IS = " + str(ans))

Output:
THE SUM OF 10 AND 90 IS = 100

Input:

import numpy as np
a = np.array([1, 2, 3])
print(a)

Output:
[1 2 3]

Input:
import numpy as np

a = np.array([(8, 9, 10), (11, 12, 13)])
print(a)
a = a.reshape(3, 2)
print(a)

Output:

[[ 8  9 10]
 [11 12 13]]
[[ 8  9]
 [10 11]
 [12 13]]

Input:

import cv2

img = cv2.imread('image.jpg')
cv2.imshow('Image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:

Practical 3
Aim: Implementing various basic image processing operations in python:
Reading image, writing image, conversion of images, and complement of an
image.

imwrite:
The cv2.imwrite() method saves an image to a storage device, in the format implied by the given filename, in the current working directory.

bitwise-not:
Bitwise operations are used in image manipulation, for example to extract essential parts of an image. cv2.bitwise_not() inverts the elements of the input array.
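For 8-bit images the same complement can be computed without OpenCV, since inverting a uint8 value is just 255 minus the value (a small NumPy sketch):

```python
import numpy as np

img = np.array([[0, 100, 255]], dtype=np.uint8)
complement = 255 - img   # equivalent to cv2.bitwise_not for uint8 images
print(complement)  # [[255 155   0]]
```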

Reading Image:

import cv2

img = cv2.imread('image.jpg')
cv2.imshow('Image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Writing Image:

import cv2

img = cv2.imread('image.jpg')
status = cv2.imwrite('copy.jpg', img)
print("Image written to file-system :", status)
cv2.waitKey(0)
cv2.destroyAllWindows()

Conversion of Image :

import numpy as np
import cv2

img1 = cv2.imread('image.jpg')
img2 = cv2.imread('image.jpg', 0)

print(img1.shape)
print(img1)
print(img2.shape)
print(img2)
cv2.imshow('original image', img1)

for i in range(img2.shape[0]):
    for j in range(img2.shape[1]):
        if img2[i, j] < 127:
            img2[i, j] = 0
        else:
            img2[i, j] = 255

cv2.imshow('blacknwhite', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()

Complement of Image:

import cv2

img = cv2.imread('image.jpg')
cv2.imshow('Original', img)
compImg = cv2.bitwise_not(img)
cv2.imshow('Complement', compImg)
cv2.waitKey(0)
cv2.destroyAllWindows()

Output :

READING IMAGE: WRITING IMAGE:



CONVERSION OF IMAGE: COMPLEMENT OF IMAGE:



Practical 4
Aim: Implement contrast adjustment of an image. Implement Histogram processing
and equalization.

hstack:
The np.hstack() function is used to stack a sequence of input arrays horizontally (i.e. column-wise) to make a single array.
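For example (assuming NumPy):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

# stack column-wise: shapes must agree on every axis except the second
side_by_side = np.hstack((a, b))
print(side_by_side)
# [[1 2 5 6]
#  [3 4 7 8]]
```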

matplotlib:
Matplotlib is a low level graph plotting library in python that serves as a visualization utility.

plt.show():
plt.show() starts an event loop, looks for all currently active figure objects, and opens one or more interactive windows that display your figure or figures.

plt.hist():
The hist() function in pyplot module of matplotlib library is used to plot a histogram.

np.round_():
The numpy.round_() is a mathematical function that rounds an array to the given number of decimals.

plt.stem():
stem() plots vertical lines from a baseline to the y-coordinate and places a marker at the tip.

cv2.equalizeHist():
Histogram equalization is a method in image processing of contrast adjustment using the image’s
histogram.
This method usually increases the global contrast of many images, especially when the usable data of
the image is represented by close contrast values. Through this adjustment, the intensities can be better
distributed on the histogram. This allows for areas of lower local contrast to gain a higher contrast.
Histogram equalization accomplishes this by effectively spreading out the most frequent intensity
values. The method is useful in images with backgrounds and foregrounds that are both bright or both
dark.
OpenCV has a function to do this, cv2.equalizeHist(). Its input is a grayscale image and its output is the histogram-equalized image.
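The spreading-out that equalizeHist() performs can be sketched in plain NumPy using the histogram's cumulative distribution (a rough sketch of the standard CDF mapping, not OpenCV's exact implementation):

```python
import numpy as np

img = np.array([[50, 50, 52], [52, 54, 54]], dtype=np.uint8)

hist = np.bincount(img.ravel(), minlength=256)    # intensity histogram
cdf = hist.cumsum()                               # cumulative distribution
cdf_min = cdf[cdf > 0].min()
n = img.size

# map each gray level so the output intensities spread over the full 0-255 range
lut = np.round((cdf - cdf_min) / (n - cdf_min) * 255).astype(np.uint8)
equalized = lut[img]
print(equalized)
```

Three close-together gray levels (50, 52, 54) are stretched to 0, 128, and 255, which is exactly the global-contrast increase described above.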

Contrast Adjustment of an Image:


import cv2
import numpy as np

img = cv2.imread('image.jpg')
img1 = img + 100
img2 = img - 100
img3 = img * 100
res = np.hstack((img, img1, img2, img3))
cv2.imshow('Contrast', res)
cv2.waitKey(0)
cv2.destroyAllWindows()

Histogram Processing:

import cv2
from matplotlib import pyplot as plt

img = cv2.imread('image.jpg')
plt.hist(img.ravel(), 256, [0, 256])
plt.show()
cv2.waitKey(0)
cv2.destroyAllWindows()

Histogram Equalization:

Simple Histogram Equalization:

# import the necessary packages
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str, required=True, help="path to the input image")
args = vars(ap.parse_args())

# load the input image from disk and convert it to grayscale
print("[INFO] loading input image...")
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# apply histogram equalization
print("[INFO] performing histogram equalization...")
equalized = cv2.equalizeHist(gray)

# show the original grayscale image and equalized image
cv2.imshow("Input", gray)
cv2.imshow("Histogram Equalization", equalized)
cv2.waitKey(0)

Adaptive Histogram Equalization:

# import the necessary packages
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str, required=True, help="path to the input image")
ap.add_argument("-c", "--clip", type=float, default=2.0, help="2-5")
ap.add_argument("-t", "--tile", type=int, default=8, help="tile x tile")
args = vars(ap.parse_args())

# load the input image from disk and convert it to grayscale
print("[INFO] loading input image...")
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# apply CLAHE (Contrast Limited Adaptive Histogram Equalization)
print("[INFO] applying CLAHE...")
clahe = cv2.createCLAHE(clipLimit=args["clip"], tileGridSize=(args["tile"], args["tile"]))
equalized = clahe.apply(gray)

# show the original grayscale image and CLAHE output image
cv2.imshow("Input", gray)
cv2.imshow("CLAHE", equalized)
cv2.waitKey(0)

Output:
Histogram Equalization: (Simple Histogram Equalization):

(Adaptive Histogram Equalization):



Practical 5
Aim: Implement the various low pass and high pass filtering mechanisms.

medianBlur:
The median blur operation is similar to the other averaging methods. Here, the central element of the kernel area is replaced by the median of all the pixels in that area. This operation preserves edges while removing noise.
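The median operation itself is easy to see on a single 3x3 window (a NumPy sketch; the value 250 plays the role of impulse noise):

```python
import numpy as np

# one 3x3 neighbourhood with a noisy center pixel
window = np.array([[10, 12, 11],
                   [13, 250, 12],
                   [11, 10, 13]])

center = int(np.median(window))  # the noisy 250 is replaced by the window median
print(center)  # 12
```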

laplacian:
The Laplacian is a 2-D isotropic measure of the second spatial derivative of an image. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection. A commonly used small kernel is [[0, -1, 0], [-1, 4, -1], [0, -1, 0]].
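Applied to a single 3x3 window, this kernel computes a center-minus-neighbours difference (a minimal NumPy sketch):

```python
import numpy as np

kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])

# a 3x3 patch with a bright center pixel on a dark background
patch = np.array([[10, 10, 10], [10, 200, 10], [10, 10, 10]])

response = int((kernel * patch).sum())   # correlation at the patch center
print(response)  # 4*200 - 4*10 = 760
```

A large positive response marks a rapid intensity change; on a flat patch the weights cancel and the response is 0.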

Input:

Box Filter :
import numpy as np
import cv2
from matplotlib import pyplot as plt

img = cv2.imread("image.jpg", 0)
kernel = np.ones((5, 5), np.float32) / 25
dst = cv2.filter2D(img, -1, kernel)
res = np.hstack((img, dst))
cv2.imshow("BOX FILTER", res)
cv2.waitKey(0)
cv2.destroyAllWindows()

Median Filter:

import numpy as np
import cv2

img = cv2.imread("image.jpg", 0)
medianFilter = cv2.medianBlur(img, 5)
cv2.imshow("MEDIAN FILTER", np.hstack((img, medianFilter)))
cv2.waitKey(0)
cv2.destroyAllWindows()
High Pass Filter:
import numpy as np
import cv2

img = cv2.imread("image.jpg", 0)
kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
sharp_img = cv2.filter2D(img, -1, kernel=kernel)
res = np.hstack((img, sharp_img))
cv2.imshow("LAPLACIAN", res)
cv2.waitKey(0)
cv2.destroyAllWindows()

Output:

Box Filter:

Median Filter :

High Pass Filter :



Practical 6
Aim: Use of Fourier transform for filtering the image
sqrt():
math.sqrt() is a built-in function in Python's math module that returns the square root of a number.

exp():
The math.exp() method returns e raised to the power of x (e^x), where e is the base of the natural logarithm (approximately 2.718282) and x is the number passed to it.

subplot():
subplot() adds a subplot to the current figure at the specified grid position. It is similar to subplots(), but unlike subplots() it adds one subplot at a time, so creating multiple plots requires several calls to subplot(). Another drawback of subplot() is that it deletes any preexisting plot at that grid position on your figure.

fft.fft2():
np.fft.fft2() computes the 2-D discrete Fourier transform of an array.

fft.fftshift():
np.fft.fftshift() shifts the zero-frequency component to the center of the spectrum by swapping the lower and upper halves of the array, and returns the shifted array.

fft.ifftshift():
The inverse of fftshift. Although identical for even-length x, the functions differ by one sample for odd-
length x.

fft.ifft2() :
Compute the 2-dimensional inverse discrete Fourier Transform.
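A quick round trip through these routines (assuming NumPy) confirms that ifftshift undoes fftshift and ifft2 undoes fft2:

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)

F = np.fft.fft2(img)                    # 2-D forward transform
F_centered = np.fft.fftshift(F)         # move zero frequency to the center
F_back = np.fft.ifftshift(F_centered)   # undo the shift
recovered = np.fft.ifft2(F_back).real   # inverse transform

print(np.allclose(recovered, img))  # True
```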

subplots_adjust():
The subplots_adjust() function in the pyplot module of matplotlib is used to tune the subplot layout.
Input:

import cv2
import numpy as np
from math import sqrt
from math import exp
import matplotlib.pyplot as plt

img = cv2.imread("original-image.jpg", 0)

def distance(point1, point2):
    return sqrt((point1[0] - point2[0]) ** 2 + (point1[1] - point2[1]) ** 2)

def gaussianHP(D0, imgShape):
    base = np.zeros(imgShape[:2])
    rows, cols = imgShape[:2]
    center = (rows / 2, cols / 2)
    for x in range(cols):
        for y in range(rows):
            base[y, x] = 1 - exp(((-distance((y, x), center) ** 2) / (2 * (D0 ** 2))))
    return base

def try_d0s_hp(d0):
    plt.figure(figsize=(25, 5), constrained_layout=False)
    plt.subplot(161), plt.imshow(img, "gray"), plt.title("ORIGINAL IMAGE")
    original = np.fft.fft2(img)
    plt.subplot(162), plt.imshow(np.log(1 + np.abs(original)), "gray"), plt.title("Spectrum")
    center = np.fft.fftshift(original)
    plt.subplot(163), plt.imshow(np.log(1 + np.abs(center)), "gray"), plt.title("Centered Spectrum")
    HighPassCenter = center * gaussianHP(d0, img.shape)
    plt.subplot(164), plt.imshow(np.log(1 + np.abs(HighPassCenter)), "gray"), plt.title("Centered Spectrum Multiply High Pass Filter")
    HighPass = np.fft.ifftshift(HighPassCenter)
    plt.subplot(165), plt.imshow(np.log(1 + np.abs(HighPass)), "gray"), plt.title("Decentralized")
    inverse_HighPass = np.fft.ifft2(HighPass)
    plt.subplot(166), plt.imshow(np.log(1 + np.abs(inverse_HighPass)), "gray"), plt.title("Processed Image")
    plt.suptitle("D0 : " + str(d0), fontweight="bold")
    plt.subplots_adjust(top=1.1)
    plt.show()

for i in [100, 50, 30, 20, 10]:
    try_d0s_hp(i)

cv2.waitKey(0)
cv2.destroyAllWindows()

Output:

Practical 7
Aim: Utilization of SIFT and HOG features for image analysis.

SIFT :
import cv2

# Loading the image
img = cv2.imread('image.jpg')
cv2.imshow("Original", img)

# Converting image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Applying SIFT detector
sift = cv2.SIFT_create()
kp = sift.detect(gray, None)

# Marking the keypoints on the image using circles
img = cv2.drawKeypoints(gray, kp, img,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

cv2.imshow('image-with-keypoints', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

HOG:

# importing required libraries
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog
import matplotlib.pyplot as plt

# reading the image
img = imread('image.jpg')
plt.axis("off")
plt.imshow(img)
print(img.shape)

# resizing image
resized_img = resize(img, (128 * 4, 64 * 4))
plt.axis("off")
plt.imshow(resized_img)
plt.show()
print(resized_img.shape)

# creating hog features
fd, hog_image = hog(resized_img, orientations=9, pixels_per_cell=(8, 8),
                    cells_per_block=(2, 2), visualize=True, multichannel=True)
print(fd.shape)
print(hog_image.shape)
plt.axis("off")
plt.imshow(hog_image, cmap="gray")
plt.show()

# save the images
plt.imsave("resized_img.jpg", resized_img)
plt.imsave("hog_image.jpg", hog_image, cmap="gray")

Output:
SIFT:

HOG :

Practical 8
Aim: Performing image segmentation: Implement Active Contour Detection.

import cv2
from matplotlib import pyplot as plt
import numpy as np

# loading img
img = cv2.imread("image.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
cv2.imshow("Original Image", img)

# conversion
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 21, 5)

# Func to find contours
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
detected_contours = img.copy()
cv2.drawContours(detected_contours, contours, -1, (0, 255, 0), -1)
cv2.imshow("Simple Retr_Tree", detected_contours)

# RETR EXTERNAL
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
highlight = img.copy()
cv2.drawContours(highlight, contours, -1, (0, 255, 0), -1)
cv2.imshow("Simple RETR External", highlight)

# RETR LIST
contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
highlight = img.copy()
cv2.drawContours(highlight, contours, -1, (0, 255, 0), -1)
cv2.imshow("Simple RETR List", highlight)

# Func to find contours
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
detected_contours = img.copy()
cv2.drawContours(detected_contours, contours, -1, (0, 255, 0), -1)
cv2.imshow("NONE Retr_Tree", detected_contours)

# RETR EXTERNAL
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
highlight = img.copy()
cv2.drawContours(highlight, contours, -1, (0, 255, 0), -1)
cv2.imshow("NONE RETR External", highlight)

# RETR LIST
contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
highlight = img.copy()
cv2.drawContours(highlight, contours, -1, (0, 255, 0), -1)
cv2.imshow("NONE RETR List", highlight)

# RETR CCOMP
contours, _ = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
highlight = img.copy()
cv2.drawContours(highlight, contours, -1, (0, 255, 0), -1)
cv2.imshow("Simple RETR CCOMP", highlight)

# RETR CCOMP
contours, _ = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
highlight = img.copy()
cv2.drawContours(highlight, contours, -1, (0, 255, 0), -1)
cv2.imshow("NONE RETR CCOMP", highlight)

cv2.waitKey(0)
cv2.destroyAllWindows()

Output:

Practical 9
Aim: Demonstrate the use of optical flow in any image processing application.

Input:

import numpy as np
import cv2 as cv
import argparse

parser = argparse.ArgumentParser(description='This sample demonstrates Lucas-Kanade Optical Flow calculation. '
                                             'The example file slow_traffic_small.mp4 can be downloaded from the OpenCV samples.')
parser.add_argument('image', type=str, help='path to image file')
args = parser.parse_args()
cap = cv.VideoCapture(args.image)

# params for ShiTomasi corner detection
feature_params = dict(maxCorners=100,
                      qualityLevel=0.3,
                      minDistance=7,
                      blockSize=7)

# Parameters for lucas kanade optical flow
lk_params = dict(winSize=(15, 15),
                 maxLevel=2,
                 criteria=(cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 0.03))

# Create some random colors
color = np.random.randint(0, 255, (100, 3))

# Take first frame and find corners in it
ret, old_frame = cap.read()
old_gray = cv.cvtColor(old_frame, cv.COLOR_BGR2GRAY)
p0 = cv.goodFeaturesToTrack(old_gray, mask=None, **feature_params)

# Create a mask image for drawing purposes
mask = np.zeros_like(old_frame)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

    # calculate optical flow
    p1, st, err = cv.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)

    # Select good points
    if p1 is not None:
        good_new = p1[st == 1]
        good_old = p0[st == 1]

    # draw the tracks
    for i, (new, old) in enumerate(zip(good_new, good_old)):
        a, b = new.ravel()
        c, d = old.ravel()
        mask = cv.line(mask, (int(a), int(b)), (int(c), int(d)), color[i].tolist(), 2)
        frame = cv.circle(frame, (int(a), int(b)), 5, color[i].tolist(), -1)

    img = cv.add(frame, mask)

    cv.imshow('frame', img)
    k = cv.waitKey(30) & 0xff
    if k == 27:
        break

    # Now update the previous frame and previous points
    old_gray = frame_gray.copy()
    p0 = good_new.reshape(-1, 1, 2)

Output:

Practical 10
Aim: Object detection and Recognition on available online image datasets

Input:
import cv2
import requests
import matplotlib.pyplot as plt
from ultralytics import YOLO
import numpy as np

url = "[Link]"  # replace with an image URL
image_path = "input.jpg"
with open(image_path, "wb") as f:
    f.write(requests.get(url).content)

model = YOLO("yolov8n.pt")  # nano model, fast and small

results = model(image_path)

result_img = results[0].plot()  # draw boxes on image

plt.figure(figsize=(10, 6))
plt.imshow(cv2.cvtColor(result_img, cv2.COLOR_BGR2RGB))
plt.axis("off")
plt.show()

Output :

Beyond Syllabus:

Practical 11
Aim: Write a python program to translate and rotate the image without help of OpenCV
functions.

Input:[Translate]

import cv2
import numpy as np

img1 = cv2.imread("image.jpg", 0)
img2 = np.zeros(img1.shape, dtype='u1')
tx = 100
ty = 100
t = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]])
print(img1.shape)
print(img2.shape)
for i in range(img1.shape[0]):
    for j in range(img1.shape[1]):
        c1 = np.array([j, i, 1])
        n1 = np.dot(t, c1)
        if 0 < n1[0] < img2.shape[1] and 0 < n1[1] < img2.shape[0]:
            img2[n1[1], n1[0]] = img1[i, j]

print(img2)
res = np.hstack((img1, img2))
cv2.imshow("TRANSLATED IMAGE", res)
cv2.waitKey(0)
cv2.destroyAllWindows()

Output:

Input: [Rotate] (MATLAB)

image = imread('image.jpg');
figure(1), clf, hold on
subplot(1,2,1)
imshow(image);
title("ORIGINAL IMAGE");
degree = 100;
switch mod(degree, 360)
    case 0
        imagerot = image;
    case 90
        imagerot = rot90(image);
    case 180
        imagerot = image(end:-1:1, end:-1:1);
    case 270
        imagerot = rot90(image(end:-1:1, end:-1:1));
    otherwise
        a = degree*pi/180;
        R = [+cos(a) +sin(a); -sin(a) +cos(a)];
        [m,n,p] = size(image);
        dest = round([1 1; 1 n; m 1; m n]*R);
        dest = bsxfun(@minus, dest, min(dest)) + 1;
        imagerot = zeros([max(dest) p], class(image));
        for ii = 1:size(imagerot,1)
            for jj = 1:size(imagerot,2)
                source = ([ii jj] - dest(1,:))*R.';
                if all(source >= 1) && all(source <= [m n])
                    C = ceil(source);
                    F = floor(source);
                    A = [...
                        ((C(2)-source(2))*(C(1)-source(1))), ...
                        ((source(2)-F(2))*(source(1)-F(1))); ...
                        ((C(2)-source(2))*(source(1)-F(1))), ...
                        ((source(2)-F(2))*(C(1)-source(1)))];
                    cols = bsxfun(@times, A, double(image(F(1):C(1), F(2):C(2), :)));
                    imagerot(ii,jj,:) = sum(sum(cols), 2);
                end
            end
        end
end

subplot(1,2,2)
imshow(imagerot);
title("ROTATED IMAGE");

Output:

Beyond Syllabus:

Practical 12
Aim: Write a python program to perform Motion Detection using frame difference in
Video

import numpy as np
import cv2

cap = cv2.VideoCapture("219299_small.mp4")
ret, curr = cap.read()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff2 = cv2.absdiff(curr_gray, frame_gray)
    cv2.imshow('Frame Subtraction cv2', diff2)
    cv2.imshow('Original', frame)
    curr = frame
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()

Output :

Beyond Syllabus:

Practical 13
Aim: Write a python program to perform Background Subtraction from Video.

Input:
import numpy as np
import cv2

cap = cv2.VideoCapture("219299_small.mp4")

fgbg = cv2.createBackgroundSubtractorMOG2()
fgbg2 = cv2.createBackgroundSubtractorKNN()

while True:
    ret, frame = cap.read()
    if not ret:
        break

    fgmask = fgbg.apply(frame)
    fgmask2 = fgbg2.apply(frame)

    cv2.imshow('Background subtracted frame', fgmask)
    cv2.imshow('Background subtracted frame KNN', fgmask2)
    cv2.imshow('Original', frame)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()

Output: