Micrometer Scale, Interval, and Range
MODULE 1: INTRODUCTION
____________________________________________________________________________
● Preliminary Idea of Physical Measurements: Principle of Measurement, Error of Measurement, Correction, Correctness of Measurement, Reliability of Measurements, Verification, Calibration.

Measurement:
Measurement is a complex of operations carried out by means of measuring instruments to determine the numerical value of the size that describes the object of measurement.

Physical Measurement:
It is defined as the act to derive quantitative information about a physical object
or action by comparison with reference.

Important Elements:
1. Measurand: Physical quantity or property like length, angle, etc., that is being measured.
2. Comparison/Comparator: The means of comparing the measurand with some reference to render a judgement.
3. Reference: The physical quantity or property to which quantitative comparisons are made.

Explanation of Measurand, Comparison/Comparator, and Reference

1. Measurand
○ Definition: The physical quantity or property that is being measured in a measurement process.
○ Examples: Length, angle, weight, temperature, pressure, voltage, etc.
○ Explanation: The measurand is the primary focus of the measurement process. For instance, if you are measuring the length of a rod, the "length" is the measurand.
Example 1:
○ Measuring the length of a table using a measuring tape.
○ Measurand: Length of the table.
Example 2:
○ Measuring the temperature of a room with a thermometer.
____________________________________________________________________________
PHYSICS OF MEASUREMENTS AND SENSORS (BSC2023)​ ​ ​​ 1
○​ Measurand: Temperature of the air in the room.

2. Comparison/Comparator
○ Definition: The process or device used to compare the measurand with a standard or reference.
○ Examples: Vernier caliper, micrometer, optical comparator, Wheatstone bridge.
○ Explanation: The comparator facilitates the measurement process by providing a means to determine the difference between the measurand and the standard. For example, a Vernier caliper compares the physical dimension of an object with the scale marked on the caliper.
Example 1:
○ Using a Vernier caliper to measure the diameter of a cylindrical rod.
○ Comparator: Vernier caliper.
Example 2:
○ Using an optical comparator to inspect the dimensions of a machine part.
○ Comparator: Optical comparator, which magnifies and compares the part's outline to a predefined standard.
3. Reference
○ Definition: The known standard or baseline with which the measurand is compared. This could be a physical standard, a calibration source, or a universally accepted value.
○ Examples: Meter stick (length), standard kilogram (mass), international volt standard (voltage).
○ Explanation: The reference provides the standard against which the measurand's value is judged. For example, the meter stick serves as the reference for measuring length.
Example 1:
○ Measuring the mass of an object using a balance.
○ Reference: Standard weight used for calibration.
Example 2:
○ Measuring the pressure in a system using a pressure gauge.

○ Reference: The calibrated standard pressure values marked on the gauge.

Bringing It Together with an Example:

Scenario: Measuring the length of a metal rod using a meter stick.

● Measurand: The length of the metal rod.
● Comparator: The process of aligning and reading the rod's length on the meter stick.
● Reference: The meter stick, which is a standard physical representation of length.

Principle of Measurement: It is the physical phenomenon utilised in the measurement.
Method of Measurement: It is the way the measuring principles and measuring means are used.
Nominal Size (Basic Size): It is the size on which the limits of size are based and which is assigned to a part in accordance with its function.
True size: It is the theoretical size of a dimension, which is free from any errors of measurement.
Actual size: It is the value of size obtained through measurement within the permissible measurement error.
Exact size: It is the value of the size obtained with the highest metrological accuracy attainable in practice.
Approximate size: It is the value of the size obtained with an error exceeding the permissible error of measurement and requiring refinement.

1. Principle of Measurement
○ Definition: The physical phenomenon utilized in the measurement process.
○ Examples:
■ Thermal Expansion: Used in a thermometer where the liquid expands or contracts with temperature changes.
■ Electromagnetic Induction: Used in an ammeter to measure current.
■ Refraction of Light: Used in refractometers to measure the refractive index of liquids.
○ Example: Measuring temperature with a mercury thermometer relies on the principle of thermal expansion, where mercury expands proportionally to the rise in temperature.

2. Method of Measurement
○ Definition: The way the measuring principles and measuring instruments are applied to achieve the measurement.
○ Types:
■ Direct Method: The measurand is compared directly with a standard (e.g., using a ruler to measure length).
■ Indirect Method: Measurements are derived using mathematical calculations (e.g., using a multimeter to measure resistance indirectly through Ohm's law).
○ Example: Measuring the weight of an object directly using a weighing scale (direct method) or calculating weight using force and acceleration due to gravity (indirect method).
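The direct/indirect distinction can be sketched in a few lines of Python. This is an illustrative example only; the voltage and current values are made-up numbers, not from the text:

```python
def resistance_ohms_law(voltage_v, current_a):
    """Indirect measurement: derive resistance from measured V and I (Ohm's law, R = V / I)."""
    return voltage_v / current_a

# Direct method: the value is read straight off the instrument's scale.
length_cm = 23.5  # e.g., a ruler reading

# Indirect method: resistance is computed from two direct measurements.
r = resistance_ohms_law(voltage_v=12.0, current_a=0.5)
print(r)  # 24.0 ohms
```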
3. Nominal Size (Basic Size)
○ Definition: The size on which the limits of size are based and which is assigned to a part in accordance with its function.
○ Example:
■ A shaft designed for a nominal size of 50 mm is labeled as such regardless of tolerances or deviations in manufacturing.

4. True Size
○ Definition: The theoretical size of a dimension that is free from any measurement errors.
○ Example:
■ A bolt with a true diameter of exactly 10 mm as per the design. This value exists in theory and is the ideal value for the dimension.

5. Actual Size
○ Definition: The value of size obtained through measurement, accounting for permissible measurement errors.
○ Example:
■ Measuring a bolt with a micrometer yields an actual size of 10.02 mm. This is the size obtained within permissible limits of the measurement system.

6. Exact Size
○ Definition: The value of size obtained with the highest metrological accuracy attainable in practice.
○ Example:
■ Using a high-precision coordinate measuring machine (CMM) to determine the diameter of a part as 10.0001 mm.

7. Approximate Size
○ Definition: The value of size obtained with an error exceeding the permissible error of measurement and requiring refinement.
○ Example:
■ Measuring the length of a rod with a rough tape measure and obtaining 100.5 cm, when the true length is 100.00 cm. This requires further refinement for accuracy.

Example Bringing All Concepts Together:

Scenario: Measuring the diameter of a cylindrical rod for an engineering application.

1. Nominal Size (Basic Size): The design specifies the rod's diameter as 50 mm.
2. True Size: The ideal diameter as per design is 50.000 mm, free of any errors.
3. Actual Size: Measurement with a micrometer gives a value of 50.020 mm, which is within acceptable tolerance limits.
4. Exact Size: Using a high-precision laser micrometer gives a reading of 50.0001 mm.
5. Approximate Size: A rough measurement using a ruler gives 49 mm, exceeding the permissible error and requiring refinement.

In this example, the principle of measurement could involve mechanical contact for a micrometer or laser reflection for a laser micrometer. The method of measurement involves using direct comparison with these instruments to achieve the desired precision.

Error of Measurement: It is the difference between the true value of the size being measured and the value found by measurement. Error pertains to a measurement and not to an instrument.
Correction: It is the amount which should be algebraically added to the indicated value to obtain the actual value of the size being measured. The correction is numerically equal to the error, but opposite in sign.
Correctness of Measurement: It is a quantitative characteristic showing how close to zero the systematic errors of the measurement results are.
Reliability of Measurements: It is a quantitative characteristic which implies confidence in the measured results, depending on whether or not the frequency distribution characteristics of the deviations from the true values of the corresponding quantities are known.
Verification: It is the process of testing an instrument for the purpose of assessing the indication errors and determining whether setting standards or measuring instruments meet the prescribed specifications.
Calibration: It is the process of determining the values of the quantity being measured, corresponding to a pre-established arbitrary scale.

Detailed Explanation with Examples

1. Error of Measurement
○ Definition: The difference between the true value of the size being measured and the value obtained through measurement.
○ Example:
■ True value of a rod's length = 100.00 cm.
■ Measured value using a ruler = 99.80 cm.
■ Error: Measured Value − True Value = 99.80 − 100.00 = −0.20 cm.
■ The error here is −0.20 cm (negative, indicating under-measurement).

2. Correction
○ Definition: The value that should be algebraically added to the measured value to get the actual value. It is numerically equal to the error but opposite in sign.
○ Example:
■ Measured value of a rod's length = 99.80 cm.
■ Error = −0.20 cm.
■ Correction: −(−0.20) = +0.20 cm.
■ Actual length = Measured Value + Correction = 99.80 + 0.20 = 100.00 cm.
3. Correctness of Measurement
○ Definition: A quantitative measure of how close the systematic errors are to zero.
○ Example:
■ Using a well-calibrated scale to measure the mass of an object. If the object has a mass of 10.00 kg and the scale consistently shows 10.01 kg, the correctness of measurement is high as the systematic error is minimal (+0.01 kg).
■ A poorly calibrated scale showing 9.50 kg would have low correctness due to a larger systematic error (−0.50 kg).

4. Reliability of Measurements
○ Definition: A measure of confidence in the measured results, depending on the frequency distribution of deviations from the true value.
○ Example:
■ Measuring the voltage of a circuit 10 times with a voltmeter and recording small deviations: 9.9, 10.1, 10.0, 10.0, 10.0, 10.1 V, and so on.
■ The distribution shows minimal variation around the true value of 10.0 V, making the measurements reliable.
■ If the voltmeter readings were widely scattered (e.g., 9.5, 10.5, 9.0, 11.0, 10.0 V), the reliability would be low.
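A minimal sketch of how correctness and reliability could be quantified from repeated readings: the mean deviation from the true value estimates the systematic error (correctness), while the standard deviation captures the scatter (reliability). The reading lists are the voltmeter examples above:

```python
import statistics

def reliability_summary(readings, true_value):
    """Return (systematic error, spread): mean deviation from the true value,
    and the sample standard deviation of the readings."""
    mean = statistics.mean(readings)
    return mean - true_value, statistics.stdev(readings)

stable = [9.9, 10.1, 10.0, 10.0, 10.0, 10.1]
scattered = [9.5, 10.5, 9.0, 11.0, 10.0]
bias_s, spread_s = reliability_summary(stable, 10.0)
bias_w, spread_w = reliability_summary(scattered, 10.0)
print(spread_s < spread_w)  # True: the stable set is the more reliable one
```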

5. Verification
○ Definition: Testing an instrument to assess its indication errors and ensure it meets specified standards.
○ Example:
■ Verifying a digital thermometer involves testing it against a standard temperature source (e.g., a water bath maintained at 50°C).
■ If the thermometer reads 50.2°C, it has an error of +0.2°C. Based on the acceptable error range (e.g., ±0.5°C), the thermometer would pass verification.
■ A reading of 51.0°C would fail, as it exceeds the permissible error.
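The verification decision is a simple tolerance check; as a sketch, using the thermometer figures from the example:

```python
def passes_verification(indicated, standard, permissible_error):
    """An instrument passes verification if its indication error stays
    within the permissible error band around the standard."""
    return abs(indicated - standard) <= permissible_error

print(passes_verification(50.2, 50.0, 0.5))  # True:  error +0.2 degC is within +/-0.5 degC
print(passes_verification(51.0, 50.0, 0.5))  # False: error +1.0 degC exceeds the limit
```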

6. Calibration
○ Definition: Determining the values of a measured quantity corresponding to a pre-established arbitrary scale.
○ Example:
■ Weighing Scale Calibration:
■ Place a standard weight of 1 kg on the scale.
■ If the scale reads 0.98 kg, adjust the scale to display 1.00 kg for the same weight.
■ Thermometer Calibration:
■ Test a thermometer in a boiling water bath at 100°C (standard boiling point of water at sea level).
■ If it reads 99.5°C, adjust the scale so it shows 100°C.
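One common way to realize such an adjustment in software is a two-point linear calibration. This is a sketch under assumed reference points (the raw readings 0.3 and 99.5 at the ice and boiling points are hypothetical illustration values):

```python
def two_point_calibration(raw_lo, raw_hi, true_lo, true_hi):
    """Return a function mapping raw instrument readings to calibrated values,
    using a straight line through two known reference points."""
    gain = (true_hi - true_lo) / (raw_hi - raw_lo)
    return lambda raw: true_lo + gain * (raw - raw_lo)

# Hypothetical thermometer reading 0.3 at 0 degC and 99.5 at 100 degC:
calibrate = two_point_calibration(0.3, 99.5, 0.0, 100.0)
print(round(calibrate(99.5), 2))  # 100.0
```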

Combined Example:

Scenario: Measuring the diameter of a cylindrical rod using a vernier caliper.

1. True Value: The rod's theoretical diameter is 50.00 mm.
2. Measured Value: The caliper shows 49.95 mm.
3. Error: 49.95 − 50.00 = −0.05 mm.
4. Correction: Add +0.05 mm to the measured value to get the true value.
5. Verification: The caliper is verified by comparing its reading against a gauge block of known size (e.g., 50.00 mm). If it consistently reads 50.00 mm for the block, it passes verification.
6. Calibration: Adjust the caliper so its readings match the standard gauge block sizes to eliminate systematic errors.

________________________________________________________________

● Measuring Instruments: Measuring range, Sensitivity, Scale Intervals, Response time, Repeatability, Inaccuracy, Precision, Accuracy.
Measuring Instruments
● Measuring instruments are measuring devices that transform the measured quantity or a related quantity into an indication or information.
● Measuring instruments can either indicate directly the value of the measured quantity or only indicate its equality to a known measure of the same quantity (e.g. equal arm balance, or null detecting galvanometer). They may also indicate the value of the small difference between the measured quantity and a measure having a value very near to it (comparator).
● Measuring instruments usually utilise a measuring sequence in which the measured quantity is transformed into a quantity perceptible to the observer (length, angle, sound, luminous contrast).
● Measuring instruments may be used in conjunction with separate material measures (e.g. balances using standard masses to compare an unknown mass), or they may contain internal parts to reproduce the unit (like graduated rules, a precision thread, etc.).

Measuring instruments transform a measured quantity into a form perceptible to the observer, such as a visual, auditory, or tactile indication. They serve various purposes: direct indication, comparison, or detecting small differences between quantities.

1. Direct Indication of Measured Quantity

● Definition: Instruments that directly show the value of the measured quantity without comparison to a reference.
● Examples:
○ Thermometer: Directly indicates temperature on a scale.
○ Digital Multimeter: Displays voltage, current, or resistance values digitally.
○ Weighing Scale: Shows the weight of an object in kilograms or pounds.

Example:

● A digital thermometer measuring room temperature shows 25°C directly on its screen.

2. Indication of Equality with a Known Measure

● Definition: Instruments that indicate when the measured quantity is equal to a known standard or reference.
● Examples:
○ Equal-Arm Balance: Compares an unknown mass with standard masses.
○ Null-Detecting Galvanometer: Indicates when the current or voltage in a bridge circuit is balanced (zero).

Example:

● Using an equal-arm balance, place a standard 500 g mass on one pan and add objects on the other until the balance is level, indicating equality.

3. Indication of Small Differences (Comparator)

● Definition: Instruments that measure the small difference between the measured quantity and a nearby known value.
● Examples:
○ Dial Indicator: Measures small deviations from a reference plane.
○ Optical Comparator: Compares the measured part's profile against a magnified reference.

Example:

● A dial indicator is used to measure the flatness of a machine table. The instrument shows the deviation from the reference plane as a small needle movement.
4. Transformation of Measured Quantity into Perceptible Quantities

● Definition: Measuring instruments transform the measured physical quantity into a perceptible form such as length, angle, sound, or light.
● Examples:
○ Pressure Gauge: Converts pressure into a readable needle position.
○ Microphone: Converts sound pressure into an electrical signal.
○ Photometer: Transforms light intensity into a numerical value.

Example:

● A pressure gauge on a gas cylinder shows the internal pressure in bars through needle movement on a calibrated scale.

5. Use with Separate Material Measures

● Definition: Instruments that rely on external standards or material measures for comparison.
● Examples:
○ Balance Scale with Standard Masses: Compares unknown weight with known weights.
○ Length Measurement with Vernier Caliper: Uses the caliper's internal graduated scale as a material measure.

Example:

● Using a balance, a 1 kg standard mass is placed on one side, and an unknown object is balanced against it, determining its mass.

6. Instruments with Internal Measures

● Definition: Instruments that include internal parts reproducing a measurement standard, like graduated rules or precision threads.
● Examples:
○ Vernier Caliper: Uses its internal graduated scale for length measurement.
○ Screw Gauge (Micrometer): Employs a precision-threaded spindle to measure small dimensions.

Example:

● A micrometer measures the thickness of a sheet of metal. The internal screw mechanism provides a precise reading of the thickness.

Combining the Concepts with a Scenario

Measuring a Metal Cylinder's Diameter and Weight

1. Diameter Measurement:
○ Instrument: Vernier Caliper.
○ Type: Direct indication; internal graduated scale is used.
○ Process: The cylinder is measured, and the caliper directly indicates a value of 50.2 mm.
2. Weight Measurement:
○ Instrument: Equal-arm balance.
○ Type: Comparison; standard weights are used to determine equality.
○ Process: The cylinder is balanced against standard masses, indicating a weight of 500 g.
3. Flatness Check:
○ Instrument: Dial Indicator (Comparator).
○ Type: Measures small differences.
○ Process: The cylinder is placed on a reference surface, and deviations are noted on the dial.

This shows how various measuring instruments operate based on their unique principles and application scenarios.

Measuring Range
● It is the range of values of the measured quantity for which the error obtained from a single measurement under normal conditions of use does not exceed the maximum permissible error.
● The measuring range is limited by the maximum capacity and the minimum capacity.
● Maximum capacity is the upper limit of the measuring range and is dictated by design considerations or by safety requirements or both.
● Minimum capacity is the lower limit of the measuring range. It is usually dictated by accuracy requirements. For small values of the measured quantity in the vicinity of zero, the relative error can be considerable even if the absolute error is small.
● The measuring range may or may not coincide with the range of scale indication.

Explanation of Measuring Range with Examples

Definition:
The measuring range refers to the interval within which the measured quantity's value can be accurately determined by a measuring instrument without exceeding the maximum permissible error. This range is defined by:

1. Maximum Capacity: The upper limit of the measuring range.
2. Minimum Capacity: The lower limit of the measuring range.


Key Considerations

1. Maximum Capacity:
○ Determined by design considerations, safety requirements, or both.
○ Exceeding this capacity may lead to inaccurate measurements, damage to the instrument, or safety risks.
2. Minimum Capacity:
○ Determined by accuracy requirements.
○ Near zero, even small absolute errors can lead to significant relative errors, limiting usability for very small values.
3. Range of Scale Indication:
○ The scale of the instrument may display values beyond the measuring range, but measurements outside the defined range might not meet accuracy requirements.
Examples

1. Weighing Scale:
○ Maximum Capacity: 50 kg (upper limit).
○ Minimum Capacity: 500 g (lower limit).
○ If the scale is used to measure a 60 kg object, it may give no reading or an inaccurate result, exceeding the measuring range. Similarly, if an object weighs 200 g, the error percentage may be significant, making the result unreliable.
2. Pressure Gauge:
○ Maximum Capacity: 300 bar.
○ Minimum Capacity: 10 bar.
○ A gauge designed for this range cannot reliably measure pressures below 10 bar, as the relative error for small pressures becomes significant. Measuring pressures above 300 bar may damage the gauge.
3. Digital Multimeter:
○ Voltage measuring range: 0−1000 V.
○ If measuring a 2 mV signal (close to zero), the absolute error might still be small (e.g., ±1 mV), but the relative error becomes large:
○ Relative error = 1 mV / 2 mV = 50%.
4.​ Micrometer:
○​ Measuring range: 0 mm−25 mm.
○​ Trying to measure an object smaller than 0.01 mm may lead to
significant relative errors. Similarly, attempting to measure beyond
25 mm exceeds the instrument's capacity.
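The two checks discussed above, relative error near zero and staying within the measuring range, can be sketched as follows (the limits are the example figures from this section):

```python
def relative_error(absolute_error, value):
    """Relative error grows without bound as the measured value approaches zero."""
    return abs(absolute_error) / abs(value)

def in_measuring_range(value, min_capacity, max_capacity):
    """A reading is trustworthy only between the minimum and maximum capacities."""
    return min_capacity <= value <= max_capacity

# +/-1 mV absolute error on a 2 mV signal gives a 50% relative error:
print(relative_error(0.001, 0.002))     # 0.5
# A 60 kg object exceeds a 0.5-50 kg weighing scale's range:
print(in_measuring_range(60, 0.5, 50))  # False
```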

Measuring Range vs. Scale Indication

Sometimes, the scale range of an instrument extends beyond its measuring range.

● Example: A pressure gauge may have a scale range of 0−400 bar, but its measuring range is limited to 10−300 bar. Measuring pressures below 10 bar or above 300 bar could either lead to inaccurate results or damage the instrument.

Practical Implications

1. Safety: Measuring beyond the range may harm the instrument or compromise safety. For example, overloading a weighing scale can damage its load cell.
2. Accuracy: Operating within the measuring range ensures that the maximum permissible error is not exceeded.
3. Instrument Selection: Choose instruments with a measuring range suitable for the application to avoid errors or damage.


Summary Table

Instrument          Measuring Range     Example Use Case
Weighing Scale      500 g to 50 kg      Measuring luggage weight.
Pressure Gauge      10 bar to 300 bar   Monitoring pressure in a gas cylinder.
Digital Multimeter  0 to 1000 V         Measuring voltage in an electrical circuit.
Micrometer          0 mm to 25 mm       Measuring the diameter of a small metal rod.

Understanding the measuring range and its limits ensures accurate and reliable measurements in various applications.

Scale Interval
It is the difference between two successive scale marks in units of the measured quantity. (In the case of numerical indication, it is the difference between two consecutive numbers.)
The scale interval is an important parameter that determines the ability of the instrument to give an accurate indication of the value of the measured quantity.
The scale spacing, or the length of the scale interval, should be convenient for estimation of fractions.

Explanation of Scale Interval with Examples

Definition:
The scale interval is the difference between two successive scale marks or numbers in units of the measured quantity. For digital instruments, it corresponds to the difference between two consecutive displayed values.

Importance of Scale Interval

1. Accuracy and Readability:
○ Smaller scale intervals allow for more precise readings but may be harder to interpret visually.
○ Larger intervals make readings quicker and easier but reduce precision.
2. Estimation of Fractions:
○ The scale spacing (distance between markings) should allow the user to visually estimate fractions of the interval conveniently.
3. Suitability for Application:
○ The chosen scale interval must balance precision and usability based on the specific measurement application.
Examples

1. Analog Thermometer
● Scale Interval: 1°C (difference between two consecutive scale marks).
● Use: Measures temperature with a precision of ±0.5°C if users estimate between markings.
● Estimation: If the thermometer reads slightly above the 25°C mark, the user might estimate the temperature as 25.3°C.

2. Digital Multimeter
● Scale Interval: 0.1 V (for a specific range).
● Use: Measures voltage with a display increment of 0.1 V.
● Limitation: The smallest change it can display is 0.1 V, so smaller changes are not detected.

3. Vernier Caliper
● Main Scale Interval: 1 mm (difference between two marks on the main scale).
● Vernier Scale Interval: 0.1 mm (smallest increment read using the Vernier scale).
● Precision: By combining the scales, the caliper achieves higher precision than the main scale alone.
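Combining the two scales can be sketched in code: the total reading is the main-scale value plus the coinciding vernier division times the least count (0.1 mm here):

```python
def vernier_reading(main_scale_mm, coinciding_division, least_count_mm=0.1):
    """Total reading = main scale value + coinciding vernier division x least count."""
    return main_scale_mm + coinciding_division * least_count_mm

# Main scale shows 25 mm and the 4th vernier division coincides:
print(vernier_reading(25, 4))  # 25.4 mm
```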

4. Kitchen Weighing Scale
● Scale Interval: 1 g.
● Use: Measures small objects like spices or ingredients.
● Estimation: A small deviation within the interval (e.g., 0.5 g) may be noticeable but cannot be directly read.

5. Ruler
● Scale Interval:
○ For a standard ruler: 1 mm (smallest marking on the scale).
○ For some rulers: 0.5 cm (smallest division marked prominently).
● Use: Measuring length with an error margin of half the smallest division (e.g., ±0.5 mm).

Scale Interval vs. Scale Spacing

Scale Interval
● Refers to the numeric difference between consecutive markings (e.g., 1 mm, 0.1°C, etc.).

Scale Spacing
● Refers to the physical distance between the markings (e.g., 1 mm apart on a ruler or 2 mm apart on a thermometer).
● Importance: Adequate spacing ensures readability and the ability to estimate fractions effectively.

Factors Affecting Scale Interval

1. Instrument Precision:
○ Higher precision requires smaller scale intervals.
2. Application Needs:
○ A micrometer has a small scale interval for high-precision tasks, whereas a fuel gauge in a car may have larger intervals for readability.
3. Human Factors:
○ Scale spacing should allow users to easily distinguish and estimate values.

Summary Table

Instrument              Scale Interval  Example Reading      Comments
Analog Thermometer      1°C             25.3°C (estimated)   User estimates between markings.
Digital Multimeter      0.1 V           12.4 V               Smallest displayed change: 0.1 V.
Vernier Caliper         0.1 mm          25.4 mm              Combines main and Vernier scales.
Kitchen Weighing Scale  1 g             250 g                Accurate for small objects.
Ruler                   1 mm            23.5 mm (estimated)  Limited by smallest marking.

Sensitivity
It is the quotient of the increase in the observed variable (indicated by pointer and scale) and the corresponding increase in the measured quantity.
It is also equal to the length of any scale division divided by the value of that division expressed in terms of the measured quantity.
The sensitivity may be constant or variable along the scale. In the first case we get linear transmission, and in the second case we get non-linear transmission.
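The quotient definition above can be sketched directly; the pointer-deflection and temperature-change numbers are illustrative assumptions, not from the text:

```python
def sensitivity(delta_indication, delta_measured):
    """Sensitivity = increase in observed variable / increase in measured quantity."""
    return delta_indication / delta_measured

# Pointer moves 5 mm along the scale for a 2 degC change in temperature:
print(sensitivity(5.0, 2.0))  # 2.5 mm per degC
```

For a linear transmission this quotient is the same everywhere along the scale; for a non-linear transmission it varies with the operating point.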

Response-time
It is the time which elapses after a sudden change in the measured quantity until the instrument gives an indication differing from the true value by an amount less than a given permissible error.
The curve showing the change of indication of an instrument due to a sudden change of the measured quantity can take different forms according to the relation between the capacitances that have to be filled, the inertia elements and the damping elements. When the inertia elements are small enough to be negligible, we get a first-order response, which is due to filling the capacitances in the system through finite channels. The curve of change of indication with time in that case is an exponential curve. (Refer Fig. 1.1)

AH
H
.S
.K

Response Time of Instruments

Definition:​
Response time is the time taken by an instrument to reach an indication that differs from the true value by less than a given permissible error, following a sudden change in the measured quantity.

Explanation

When a sudden change occurs in the measured quantity, an instrument does not display the updated value instantaneously due to various factors like inertia, capacitance, or damping. The time it takes for the instrument to stabilize and provide an accurate reading defines its response time.

1.​ Factors Affecting Response Time:
○​ Capacitance: Systems with storage elements (e.g., electrical, thermal, or mechanical capacitance) take time to "fill up" to the new state.
○​ Inertia: Mass or system inertia resists sudden changes, delaying the response.
○​ Damping Elements: Damping is used to reduce oscillations and stabilize readings but can also increase response time.
2.​ Types of System Response:
○​ First-Order Response:
■​ Occurs when inertia elements are negligible, and the delay is primarily due to capacitance.
■​ The response follows an exponential curve, indicating the system gradually approaches the final value.
○​ Higher-Order Response:
■​ Occurs when inertia and damping are significant.
■​ The response may involve oscillations before stabilizing, depending on the system dynamics.
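The first-order case can be sketched with the standard exponential step response. The time constant and permissible error below are illustrative assumptions, not values from the text:

```python
import math

def first_order_indication(t, x_final, tau, x_initial=0.0):
    """Indication at time t after a step change, for a first-order instrument."""
    return x_final + (x_initial - x_final) * math.exp(-t / tau)

def response_time(tau, permissible_fraction):
    """Time until the indication differs from the final value by less than the
    permissible fraction of the step: exp(-t/tau) = e  =>  t = -tau * ln(e)."""
    return -tau * math.log(permissible_fraction)

# Illustrative: time constant tau = 2 s, permissible error = 1 % of the step.
print(round(first_order_indication(2.0, 10.0, 2.0), 2))  # 6.32 after one time constant
print(round(response_time(2.0, 0.01), 2))                # 9.21 (about 4.6 time constants)
```

Note that the response time depends on the permissible error: a tighter tolerance requires more time constants to elapse.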

Repeatability:
It is the ability of the measuring instrument to give the same value every time the measurement of a given quantity is repeated.
Any measurement process carried out using a given instrument and method of measurement is subject to a large number of sources of variation, such as environmental changes, variability in operator performance and in instrument parameters. Repeatability is characterized by the dispersion of indications when the same quantity is repeatedly measured. The dispersion is described by two limiting values or by the standard deviation.
The conditions under which repeatability is tested have to be specified.

1. Repeatability

Definition:​
Repeatability is the ability of a measuring instrument or system to provide consistent results when the same measurement is performed multiple times under the same conditions.

●​ Key Characteristics:
○​ Involves repeated measurements of the same quantity.
○​ Conditions remain constant (e.g., same operator, instrument, environment, etc.).
○​ Expressed as the variation in repeated readings (often quantified by standard deviation).
●​ Example:​
A thermometer is used to measure the temperature of a water sample at 40°C multiple times:
○​ Results: 40.1°C, 40.0°C, 39.9°C, 40.2°C, 40.0°C.
○​ Since the readings are very close to each other, the thermometer has good repeatability.
●​ Importance:​
High repeatability indicates that the instrument is reliable for producing consistent measurements under the same conditions.
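The thermometer example can be quantified directly: the sample standard deviation of the repeated readings characterizes the dispersion.

```python
import statistics

# The repeated readings from the thermometer example above.
readings = [40.1, 40.0, 39.9, 40.2, 40.0]

mean = statistics.mean(readings)
s = statistics.stdev(readings)  # sample standard deviation = measure of repeatability

print(round(mean, 2), round(s, 2))  # 40.04 0.11
```

A small standard deviation (here about 0.11 °C) relative to the quantity being measured indicates good repeatability.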

Inaccuracy. It is the total error of a measure or measuring instrument under specified conditions of use, including bias and repeatability errors.
Inaccuracy is specified by two limiting values obtained by adding to and subtracting from the bias error the limiting value of the repeatability error.
If the known systematic errors are corrected, the remaining inaccuracy is due to the random errors and the residual systematic errors that also have a random character. This inaccuracy is called the "uncertainty of measurement".


2. Inaccuracy

Definition:​
Inaccuracy is the deviation of the measured value from the true or reference value of the measured quantity. It reflects the overall error in the measurement process.

●​ Key Characteristics:
○​ Includes both systematic errors (bias) and random errors.
○​ Expressed as the difference between the true value and the average measured value.
●​ Example:​
If a micrometer is measuring a rod with a true diameter of 10.00 mm but consistently gives readings of 10.05 mm, the inaccuracy is +0.05 mm.
●​ Formula:​
Inaccuracy = Measured Value − True Value
●​ Importance:​
Inaccuracy gives an overall sense of how far measurements deviate from reality, helping identify and correct systematic errors.
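A one-line check of the formula, using the micrometer figures from the example above:

```python
true_value = 10.00       # mm, true diameter of the rod
measured_value = 10.05   # mm, the micrometer's consistent reading

# Inaccuracy = Measured Value - True Value
inaccuracy = measured_value - true_value
print(f"{inaccuracy:+.2f} mm")  # +0.05 mm
```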
Precision & accuracy: Both these terms are associated with the measuring process. Precision is defined as the repeatability of a measuring process, while accuracy is the agreement of the result of a measurement with the true value of the measured quantity. In most measurements it is the precision which is of greater importance: the chief concern is with comparing the dimensions of measurements relative to each other, it being assumed that the scale used for measurement is a standard and accepted one. This will be clear from the example given below.


3. Precision

Definition:​
Precision refers to the closeness of agreement between repeated measurements of the same quantity, regardless of their proximity to the true value. It reflects the consistency or scatter of the readings.

●​ Key Characteristics:
○​ Does not account for how close the measurements are to the true value.
○​ High precision means measurements are tightly grouped.
○​ Often confused with accuracy, but they are different.
●​ Example:​
Suppose you measure the length of a rod five times:
○​ Results: 10.01 cm, 10.02 cm, 10.00 cm, 10.01 cm, 10.03 cm.
○​ The measurements are close to each other (small scatter), so the instrument is precise.
○​ However, if the true length of the rod is 9.95 cm, the instrument is not accurate, but it is still precise.
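The distinction can be checked numerically with the rod readings from this example (true length 9.95 cm): the scatter is small (precise) while the offset from the true value is comparatively large (not accurate).

```python
import statistics

true_length = 9.95  # cm, reference value from the example
readings = [10.01, 10.02, 10.00, 10.01, 10.03]

scatter = statistics.stdev(readings)            # precision: spread of the readings
bias = statistics.mean(readings) - true_length  # accuracy: offset from the true value

print(round(scatter, 3), round(bias, 3))  # 0.011 0.064
```

The bias (0.064 cm) is several times the scatter (0.011 cm): a precise but inaccurate instrument.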

If a carpenter had to cut a board to fit a shelf between two projections in the wall, it does not matter whether his scale is accurate or not, provided he uses the same scale for the measurement of the board and of the distance between the projections in the wall. Here the precision with which he measures the two is what matters.
Now suppose he had to order the board from the market; then it is necessary that the scale used by him and the one in the market are in agreement with each other. One way to achieve this is that both use accurate scales in accordance with the standard scales. In this case, the accuracy of the scale is important, and it should be manufactured such that its units are in accordance with the standard units set.
The distinction between precision and accuracy will become clear from the following example (shown in Fig. 1.3), in which several measurements are made on a component by different types of instruments and the results plotted.
From Fig. 1.3, it will be obvious that precision is concerned with a process or a set of measurements, and not a single measurement. In any set of measurements, the individual measurements are scattered about the mean, and the precision tells us how well the various measurements performed by the same instrument on the same component agree with each other. It will be appreciated that poor repeatability is a sure sign of poor accuracy. Good repeatability of the instrument is a necessary but not a sufficient condition of good accuracy. Accuracy can be found by taking the root mean square of the repeatability and systematic errors.

Comparison of Repeatability, Inaccuracy, and Precision

Property       Definition                                   Key Focus          Example
Repeatability  Consistency of measurements under the same   Stability across   A digital scale gives consistent readings of 2.50 kg
               conditions.                                  repeated trials.   every time the same object is placed on it.
Inaccuracy     Deviation of the measured value from the     Closeness to the   A thermometer measures 100°C as 102°C, meaning
               true value.                                  true value.        it is inaccurate by +2°C.
Precision      Closeness of repeated measurements to each   Scatter of         A ruler consistently measures the same length as
               other, regardless of the true value.         measurements.      10.02 cm, 10.03 cm, and 10.02 cm, even if the true
                                                                               length is 10.00 cm.
Relationship Between Precision, Repeatability, and Inaccuracy

●​ Precision and repeatability are related to the consistency of measurements.
●​ Inaccuracy reflects how far the measurement is from the true value.
●​ An instrument can be:
○​ Precise but not accurate: Measurements are consistent but far from the true value (e.g., consistent error due to calibration issues).
○​ Accurate but not precise: Measurements are scattered but their average is close to the true value.
○​ Accurate and precise: Measurements are both consistent and close to the true value.

Error is the difference between the mean of a set of readings on the same component and the true value. The smaller the error, the more accurate the instrument. Since the true value is never known, uncertainty creeps in, and the magnitude of error must be estimated by other means. The estimate of uncertainty of a measuring process can be made by taking care of systematic and constant errors, together with the other contributions to the uncertainty due to scatter of the results about the mean.
So wherever great precision is required in the manufacture of mating components, they are manufactured in a single plant, where measurements are taken with the same standards and internal measuring precision can achieve the desired results. If they are to be manufactured in different plants and subsequently assembled in another, the accuracy of the measurements of the two plants with respect to the true standard value is important.

An accurate measuring instrument should fulfil the following requirements:

(i) It should possess the requisite and constant accuracy.
(ii) As far as possible, the errors should be capable of elimination by adjustment contained within the instrument itself.
(iii) Every important source of inaccuracy should be known.
(iv) When an error cannot be eliminated, it should be made as small as possible.
(v) When an error cannot be eliminated, it should be capable of measurement by the instrument itself and the instrument calibrated accordingly.
Sources of error: Static error, Environmental error, Characteristic error, Dynamic error

Sources of Errors

In any measurement there is always a degree of uncertainty resulting from measurement error, i.e. all measurements are inaccurate to some extent. Measurement error is the difference between the indicated and actual values of the measurand. The error could be expressed either as an absolute error or on a relative scale, most commonly as a percentage of full scale. It is important to examine fully the errors in measurement systems that cause these uncertainties, the meaning and interpretation of these errors, and methods of reducing or circumventing errors. Each component of the measuring system has sources of errors that can contribute to measurement error.
Instrument or indication errors may be caused by defects in manufacture or adjustment of an instrument, imperfections in design, etc.
The error of measurement is the combined effect of component errors due to various causes. There may be errors due to the method of location, environmental errors, errors due to the properties of the object of measurement (viz. form deviation, surface roughness, rigidity, change in size due to ageing, etc.), and observation errors. The total error of measurement includes indication errors, errors of gauge blocks or setting standards, temperature change errors, and errors caused by the measuring force of the instrument.

During measurement several types of errors may arise such as static errors,
instrument loading errors or dynamic errors, and these errors can be broadly
classified into two categories viz. controllable errors and random errors.

Static Errors. These result from the physical nature of the various components of the measuring system as that system responds to a fixed measurand input. Static errors result from the intrinsic imperfections or limitations in the hardware and apparatus compared to ideal instruments. The environmental effects and other external influences on the properties of the apparatus also contribute to static errors. Other sources of static errors could be inexactness in the calibration of the system, or displaying the output of the measuring system in a way that requires subjective interpretation by an observer. From the above it can be concluded that static errors stem from three basic sources: reading error, characteristic error and environmental error. In the measurement of the length of a surface table with a rule, these errors will be encountered when aligning the ends of the rule and surface table, and when estimating the length of the table. The static error divided by the measurement range (difference between the upper and lower limits of measurement) gives the measurement precision. Reading error describes such factors as parallax, interpolation and optical resolution (readability or output resolution). Reading errors apply exclusively to the readout device and have no direct relationship with other types of errors within the measuring system.
Attempts have been made to reduce or eliminate reading errors by relatively simple techniques. Where there is a possibility of error due to parallax, the use of a mirror behind the readout pointer or indicator virtually eliminates this type of error. Interpolation error can be tackled by increasing the optical resolution, using a magnifier over the scale in the vicinity of the pointer.
The use of digital readout devices is increasing tremendously for display purposes, as it eliminates most of the subjective reading errors usually made by the observer. However, there exists a possibility of a plus or minus one count error in digital readout devices also, and its value can be effectively reduced by arranging the full range to correspond to a large number of pulses so that one pulse has a very negligible value. Digital counting devices are capable of counting each and every pulse, however short its duration, but it is only at start and at stop that one pulse is likely to be missed, which can lead to error.


Environmental errors result from the effect of the surrounding temperature, pressure and humidity on the measuring system. They can be reduced by controlling the atmosphere according to stipulated requirements. External influences like magnetic or electric fields, nuclear radiation, vibration or shock, and periodic or random motion also lead to errors. It is important to note that these factors affect both the measuring system and the measurand, and usually the effects of these factors on each component are independent. Thus the environmental errors of each component of the measuring system make a separate contribution to the static errors. For this reason, the number of environmental variables and external influences that could affect the measurement should be minimised, and where it is not possible to do so, their effect should be computed and taken into account.

Characteristic error is defined as the deviation of the output of the measuring system, under constant environmental conditions, from the theoretically predicted performance, or from nominal performance specifications. If the theoretical output is a straight line, then linearity, hysteresis, repeatability, and resolution errors are part of the characteristic error. Linearity, hysteresis and repeatability errors are present to some degree in each component of a measuring system. Other characteristic errors include gain errors and zero offset, often collectively called calibration errors.
Similar characteristic errors in each component of the measuring system tend to be additive. Thus, system linearity is usually the sum of the errors in the individual components; as such, the study of the combination and accumulation of errors is very important and will be discussed later.
It has been found that the static errors introduced by the components of the measuring system are the cause of major concern. However, the loading errors and dynamic errors, which are generally encountered in process measurements and not in the field of Metrology, will also be discussed in brief here to complete the subject.
Dynamic error is caused by time variations in the measurand and results from the inability of a measuring system to respond faithfully to a time-varying measurand. Usually the dynamic response is limited by inertia, damping, friction or other physical constraints in the sensing, readout or display system. Dynamic error is characterised by the frequency and phase response (Bode criterion) of the system for cyclic or periodic variations in the measurand


input. For random or transient inputs, the dynamic error is described by the time constant or response time.
In both cases, it is essential that the dynamic characteristics of the measuring system be known before putting the system to measure time-varying inputs.
It is thus seen that the different errors entering into any observation arise due to a variety of reasons. Many times it may not be possible to identify the source of the errors. Therefore it is more fruitful to classify errors according to the effects they produce rather than on the basis of the sources which produce them.
For statistical study and the study of accumulation of errors, errors are categorised as controllable errors and random errors.

Sources of error in measurements and systems can be broadly categorized based on their origin and nature. Here’s an explanation of the mentioned types of errors:

1. Static Errors

●​ Definition: Errors that occur when the measurement system is in a steady-state condition or when the input signal is constant over time.
●​ Causes:
○​ Instrument Bias: Systematic deviation from the true value due to calibration issues or inherent inaccuracies in the instrument.
○​ Hysteresis: The instrument’s inability to follow changes in the input signal precisely due to mechanical or material properties.
○​ Drift: Gradual changes in the measurement system’s response over time, often caused by aging or environmental factors.
●​ Example: A thermometer consistently reads 2°C higher than the actual temperature due to calibration errors.

2. Environmental Errors

●​ Definition: Errors caused by external conditions that influence the measurement system but are not part of the input signal.
●​ Causes:
○​ Temperature Variations: Expansion or contraction of components affects accuracy.
○​ Humidity: Can alter the properties of materials (e.g., resistance in electrical circuits).
○​ Pressure or Vibration: Mechanical disruptions or fluctuations impact the system’s stability.
○​ Electromagnetic Interference (EMI): Induced signals from external electrical sources affect the readings.
●​ Example: A sensitive electronic sensor gives fluctuating readings due to nearby electromagnetic waves from machinery.

3. Characteristic Errors

●​ Definition: Errors arising due to the inherent properties or design limitations of the measurement system or instrument.
●​ Causes:
○​ Nonlinearity: The relationship between the input and output deviates from the ideal linear behavior.
○​ Resolution: Limitations in the smallest measurable change, often due to digital system constraints.
○​ Response Lag: The time delay between the input signal and the instrument’s response.
●​ Example: A strain gauge has a nonlinear response, leading to inaccuracies at extreme input levels.
4. Dynamic Errors

●​ Definition: Errors that occur when the input signal changes over time, and the system cannot respond accurately or quickly enough.
●​ Causes:
○​ Inertia: Mechanical systems may not move fast enough to track rapid input changes.
○​ Damping: Overdamped systems may underreport changes, while underdamped systems may overshoot.
○​ Frequency Response: Inability to handle high-frequency variations due to limited bandwidth.
●​ Example: A pressure sensor fails to track rapid pulsations in a hydraulic system due to insufficient frequency response.
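For a cyclic measurand, the dynamic error of a simple first-order sensor shows up as amplitude attenuation. The sketch below uses the standard first-order frequency-response magnitude; the time constant and input frequencies are illustrative assumptions:

```python
import math

def amplitude_ratio(omega, tau):
    """|H(jw)| = 1 / sqrt(1 + (w*tau)^2) for a first-order sensor:
    slow variations are tracked faithfully, fast ones are attenuated."""
    return 1.0 / math.sqrt(1.0 + (omega * tau) ** 2)

tau = 0.1  # s, assumed sensor time constant
for f_hz in (0.1, 1.0, 10.0):
    ratio = amplitude_ratio(2 * math.pi * f_hz, tau)
    print(f"{f_hz:>5.1f} Hz: indicated/true amplitude = {ratio:.3f}")
```

At 0.1 Hz the indication is essentially faithful; at 10 Hz the sensor reports only a small fraction of the true amplitude, a purely dynamic error, since the same sensor may be perfectly accurate for a static input.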

Statistical Treatment of Errors: Sample Mean, Sample Standard Deviation, Population Mean, Population Standard Deviation, Principle of Least Squares

Statistical Treatment of Errors. Random and systematic errors are evaluated and studied by statistical procedures, which make it possible to state, from a limited group of data, the most probable value of a quantity, the probable uncertainty of a single observation, and the probable limits of uncertainty of the best value that can be derived from the data. It may be noted that the object of the statistical methods, based on laws of chance which operate only on random errors and not on systematic errors, is to achieve consistency (precision) of values and not their accuracy (approach to the truth).

It is also important to note that in quality control of a product we must consider variations in the repeat measurement of a single part as well as the variations in the single measurements of a large number of 'so-called' identical parts. The first is largely due to error in the instrument, whereas in the second there is also a contribution caused by variations as a result of the manufacturing process. The first is the study of errors (dealt with here) and the second is the subject of statistical quality control (dealt with in chapter 18).
Let us first understand some terms used in statistical analysis, as under:
Population of Measurement. An infinite number of independent measurements carried out for determination of a certain quantity constitute a population.
Sample of Measurements. In practice, only a finite number of measurements are carried out for determination of a certain quantity; these constitute a sample.
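The sample/population distinction matters in the denominator of the standard deviation: a sample divides the sum of squared deviations by (n − 1), a population by N. Python's statistics module implements both conventions; the data below are illustrative.

```python
import statistics

data = [12.1, 12.3, 11.9, 12.0, 12.2]  # illustrative readings

x_bar = statistics.mean(data)    # mean
s = statistics.stdev(data)       # sample standard deviation: divides by (n - 1)
sigma = statistics.pstdev(data)  # population standard deviation: divides by N

print(round(x_bar, 2), round(s, 3), round(sigma, 3))  # 12.1 0.158 0.141
```

The sample value is always slightly larger, compensating for the fact that the mean itself was estimated from the same data.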
1. Calculate Population Mean and Population Standard Deviation

Question:

The weights (in kilograms) of a population of five students are:​


45, 50, 55, 60, and 65.

Calculate:​
a) The Population Mean (μ).​
b) The Population Standard Deviation (σ).
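A worked check of this problem (divide by N, since the five weights form the entire population):

```python
import math

weights = [45, 50, 55, 60, 65]  # kg, the whole population

n = len(weights)
mu = sum(weights) / n                                       # population mean
sigma = math.sqrt(sum((x - mu) ** 2 for x in weights) / n)  # divide by N

print(mu, round(sigma, 2))  # 55.0 7.07
```

So μ = 55 kg and σ = √50 ≈ 7.07 kg.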


2. Calculate Sample Mean and Sample Standard Deviation

Question:

The scores obtained by a sample of five students in a test are:​


80, 85, 90, 95, and 100.

Calculate:​
a) The Sample Mean (x̄).​
b) The Sample Standard Deviation (S).
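A worked check of this problem (divide by n − 1, since the five scores are a sample):

```python
import math

scores = [80, 85, 90, 95, 100]  # a sample of student scores

n = len(scores)
x_bar = sum(scores) / n                                         # sample mean
s = math.sqrt(sum((x - x_bar) ** 2 for x in scores) / (n - 1))  # divide by n - 1

print(x_bar, round(s, 2))  # 90.0 7.91
```

So x̄ = 90 and S = √62.5 ≈ 7.91.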

Principle of Least Squares

The principle of least squares is a mathematical method used to determine the most probable value of a set of observed data by minimizing the sum of the squares of the residual errors. The residual errors are the differences between the observed values and the estimated values.
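As a sketch, the principle applied to fitting a straight line y = a + b·x: setting the derivatives of Σ(residual²) to zero yields the normal equations used below. The data points are illustrative, not from the text.

```python
def least_squares_line(xs, ys):
    """Return (a, b) of the line y = a + b*x minimizing sum((y - a - b*x)^2).

    The normal equations give b = S_xy / S_xx and a = y_bar - b * x_bar.
    """
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    s_xx = sum((x - x_bar) ** 2 for x in xs)
    s_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    b = s_xy / s_xx
    a = y_bar - b * x_bar
    return a, b

# Illustrative observations lying close to y = 1 + 2x:
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.9, 9.1]
a, b = least_squares_line(xs, ys)
print(round(a, 3), round(b, 3))  # 1.04 2.0
```

Any other line through these points would give a larger sum of squared residuals than the fitted one.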

Q. Find the Most Probable Value. You have the following repeated readings of the length of a component (in millimeters):
x1 = 102.4, x2 = 102.5, x3 = 102.3, x4 = 102.6, x5 = 102.4. Using the Principle of Least Squares, find the most probable value for the length.
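For n equally trustworthy direct readings, minimizing Σ(xᵢ − v)² with respect to v gives v = (Σxᵢ)/n, the arithmetic mean, so the most probable value here is:

```python
readings = [102.4, 102.5, 102.3, 102.6, 102.4]  # mm

# d/dv sum((x_i - v)^2) = 0  =>  v = mean of the readings
most_probable = sum(readings) / len(readings)
print(round(most_probable, 2), "mm")  # 102.44 mm
```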


Questions for practice:

1. Sample Mean

1.​ Given the sample data: {5, 7, 9, 11, 13}, find the sample mean.​
Answer: 9
2.​ A sample of five values: {20, 30, 40, 50, 60}. Find the sample mean.​
Answer: 40
3.​ A student records the weights of five apples as {150g, 160g, 155g, 165g, 170g}. Calculate the sample mean.​
Answer: 160g
4.​ The test scores of five students are {75, 80, 85, 90, 95}. Determine the sample mean.​
Answer: 85
5.​ A sample contains the numbers {2, 4, 6, 8, 10}. Compute the sample mean.​
Answer: 6

2. Sample Standard Deviation

1.​ Given the sample {4, 6, 8, 10, 12}, find the sample standard deviation.​
Answer: 3.16

AH
H
.S
.K
.V
R
M


2. A sample consists of {3, 7, 7, 19, 21}. Calculate the sample standard deviation.
Answer: 8.05
3. Heights of five students: {150 cm, 160 cm, 170 cm, 180 cm, 190 cm}. Compute the sample standard deviation.
Answer: 15.81 cm
4. A teacher records the marks {50, 55, 60, 65, 70}. Find the sample standard deviation.
Answer: 7.91
5. Given {2, 4, 6, 8, 10}, determine the sample standard deviation.
Answer: 3.16
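A cross-check in Python (illustrative sketch): `statistics.stdev` implements the sample formula, dividing the sum of squared deviations by n − 1. For question 2, the mean is 11.4 and the sum of squared deviations is 259.2, so s = √(259.2/4) ≈ 8.05.

```python
from statistics import stdev  # sample standard deviation: divides by n - 1

samples = [
    [4, 6, 8, 10, 12],          # Q1
    [3, 7, 7, 19, 21],          # Q2
    [150, 160, 170, 180, 190],  # Q3 (cm)
    [50, 55, 60, 65, 70],       # Q4
    [2, 4, 6, 8, 10],           # Q5
]
results = [round(stdev(xs), 2) for xs in samples]
print(results)  # [3.16, 8.05, 15.81, 7.91, 3.16]
```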

3. Population Mean

1. A population contains values {10, 20, 30, 40, 50}. Find the population mean.
Answer: 30
2. The weekly sales of a store in five weeks are {1000, 1500, 2000, 2500, 3000}. Determine the population mean.
Answer: 2000
3. A company has 6 employees with salaries {40000, 42000, 44000, 46000, 48000, 50000}. Compute the population mean salary.
Answer: 45000


4.​ The lifespans (in years) of five LED bulbs are {45, 50, 55, 60, 65}. Find the
population mean.​
Answer: 55
5.​ A factory produces items with weights {5.5 kg, 6.0 kg, 6.5 kg, 7.0 kg, 7.5
kg}. Find the population mean.​
Answer: 6.5 kg
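The population mean uses the same arithmetic as the sample mean; the distinction is that it is taken over all N members of the population rather than a subset. Sketch in Python (illustrative, using question 3's salary data, which covers the whole population of 6 employees):

```python
from statistics import mean

# Salaries of ALL 6 employees, i.e. the entire population (N = 6)
salaries = [40000, 42000, 44000, 46000, 48000, 50000]
mu = mean(salaries)  # population mean = (sum of all N values) / N
print(mu)  # 45000
```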

4. Population Standard Deviation

1.​ Given the population {4, 6, 8, 10, 12}, find the population standard
deviation.​
Answer: 2.83



2. The ages of five individuals are {20, 25, 30, 35, 40}. Compute the population standard deviation.
Answer: 7.07
3. A manufacturer tests the strength of five cables: {200, 220, 240, 260, 280} Newtons. Find the population standard deviation.
Answer: 28.28 N
4. A city records daily temperatures (°C) as {15, 18, 20, 22, 25}. Determine the population standard deviation.
Answer: 3.41 °C
5. The number of calls received by a helpline on five consecutive days: {50, 60, 70, 80, 90}. Find the population standard deviation.
Answer: 14.14
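In Python's `statistics` module, `pstdev` divides by N (population formula), whereas `stdev` divides by n − 1 (sample formula). A cross-check of the five questions (illustrative sketch):

```python
from statistics import pstdev  # population standard deviation: divides by N

populations = [
    [4, 6, 8, 10, 12],          # Q1
    [20, 25, 30, 35, 40],       # Q2 (years)
    [200, 220, 240, 260, 280],  # Q3 (newtons)
    [15, 18, 20, 22, 25],       # Q4 (degrees Celsius)
    [50, 60, 70, 80, 90],       # Q5
]
results = [round(pstdev(xs), 2) for xs in populations]
print(results)  # [2.83, 7.07, 28.28, 3.41, 14.14]
```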
.V

Problem 1:

You have taken five repeated measurements of the weight of a small object (in grams):

w1 = 50.2, w2 = 50.3, w3 = 50.1, w4 = 50.4, w5 = 50.2

Using the Principle of Least Squares, find the most probable value of the weight.

Problem 2:


A researcher measured the diameter of a metal rod five times (in millimeters):

d1 = 25.1, d2 = 25.0, d3 = 24.9, d4 = 25.2, d5 = 25.1

Determine the most probable value of the diameter of the rod.

Problem 3:

The boiling point of a liquid is measured five times (in °C):

T1 = 98.6, T2 = 98.7, T3 = 98.5, T4 = 98.8, T5 = 98.6

Find the most probable value of the boiling point.

Problem 4:
A surveyor records five measurements of the height of a hill (in meters):

h1 = 312.4, h2 = 312.5, h3 = 312.3, h4 = 312.6, h5 = 312.4

Using the least squares principle, compute the most probable height of the hill.

Problem 5:

An astronomer records the brightness of a star over five observations (in arbitrary
units):

b1 = 4.32, b2 = 4.35, b3 = 4.30, b4 = 4.36, b5 = 4.33

Find the most probable brightness of the star.

________________________________________________________________


Answers to above questions:

1.​ 50.24 g
2.​ 25.06 mm
3.​ 98.64°C
4.​ 312.44 m
5.​ 4.332
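Each of these answers is simply the arithmetic mean of its readings, since the least-squares most probable value of repeated measurements of one quantity is the mean. They can be reproduced in Python (illustrative sketch):

```python
from statistics import mean

# Readings from Problems 1-5; the least-squares estimate is the mean.
problems = {
    "weight (g)":        [50.2, 50.3, 50.1, 50.4, 50.2],
    "diameter (mm)":     [25.1, 25.0, 24.9, 25.2, 25.1],
    "boiling point (C)": [98.6, 98.7, 98.5, 98.8, 98.6],
    "height (m)":        [312.4, 312.5, 312.3, 312.6, 312.4],
    "brightness":        [4.32, 4.35, 4.30, 4.36, 4.33],
}
answers = {name: round(mean(xs), 3) for name, xs in problems.items()}
for name, value in answers.items():
    print(name, value)
# 50.24, 25.06, 98.64, 312.44, and 4.332 respectively
```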
