🖥️ BCA 1st Semester - Computer System Architecture
📚 UNIT – 1
🏗️ (a) Introduction to Computer Organization
🔧 Basic Structure of a Computer System
A computer system is an integrated collection of hardware and software
components that work together to process data and execute instructions.
The basic structure forms the foundation of all computing devices, from
simple calculators to supercomputers.
🧠 i) Von Neumann Architecture
The Von Neumann architecture, proposed by mathematician John von
Neumann in 1945, is the fundamental design principle behind most
modern computers. This revolutionary concept established the blueprint
for computer design that continues to influence today's systems.
🌟 Key Characteristics of Von Neumann Architecture:
Single Memory Space: Both program instructions and data are stored
in the same memory unit
Sequential Execution: Instructions are executed one after another in
sequence
Stored Program Concept: Programs are stored in memory just like
data
Binary Representation: All information is represented in binary form
Centralized Control: A single control unit manages the entire system
🔄 Von Neumann Bottleneck: The architecture faces a fundamental
limitation where the CPU must share a single bus for both instruction and
data access, creating a performance bottleneck in modern high-speed
processors.
⚙️ ii) Functional Units
Modern computers consist of five essential functional units that work in
harmony to process information:
📥 Input Unit
Accepts data and instructions from external sources
Converts human-readable information into machine-readable format
Examples include keyboards, mice, scanners, microphones
Performs data validation and error checking
Interfaces with various input devices through device drivers
📤 Output Unit
Presents processed results to users in human-readable form
Converts machine-readable data into user-friendly formats
Examples include monitors, printers, speakers, projectors
Handles different output formats (text, graphics, audio, video)
Manages output buffering and formatting
🧠 Memory Unit
Primary Memory (RAM): Temporary storage for active programs and
data
Secondary Memory: Permanent storage for programs and data
Cache Memory: High-speed memory for frequently accessed data
Provides storage hierarchy for optimal performance
Manages memory allocation and deallocation
🔢 Arithmetic Logic Unit (ALU)
Performs all arithmetic operations (addition, subtraction, multiplication,
division)
Executes logical operations (AND, OR, NOT, XOR)
Handles comparison operations (greater than, less than, equal to)
Contains registers for temporary data storage
Generates status flags for operation results
🎛️ Control Unit
Coordinates and controls all computer operations
Fetches instructions from memory in proper sequence
Decodes instructions and determines required actions
Manages data flow between different units
Handles interrupt processing and exception handling
💾 iii) Stored Program Concept
The stored program concept is a revolutionary idea that distinguishes
modern computers from early mechanical calculators.
🌟 Fundamental Principles:
Program Storage: Programs are stored in the same memory as data
Modifiability: Programs can be modified without hardware changes
Flexibility: Different programs can be loaded and executed
Self-Modification: Programs can modify themselves during execution
Universal Computing: Same hardware can solve different problems
📈 Advantages:
Eliminates need for rewiring hardware for different programs
Enables rapid program switching and multitasking
Allows for program debugging and modification
Supports complex program structures and subroutines
Facilitates software development and distribution
🔄 iv) Instruction Cycle and Data Path
The instruction cycle represents the fundamental operational sequence of
a computer processor.
🔁 Instruction Cycle Phases:
1️⃣ Fetch Phase
Program Counter (PC) contains address of next instruction
CPU sends address to memory via address bus
Memory returns instruction via data bus
Instruction is loaded into Instruction Register (IR)
Program Counter is incremented
2️⃣ Decode Phase
Control unit analyzes the instruction in IR
Determines operation type and required operands
Identifies source and destination addresses
Prepares control signals for execution
Activates appropriate functional units
3️⃣ Execute Phase
ALU performs specified arithmetic or logical operation
Data is transferred between registers and memory
Results are stored in appropriate locations
Status flags are updated based on operation results
Next instruction address is determined
🛤️ Data Path Components:
Registers: High-speed storage locations within CPU
Buses: Communication pathways for data transfer
Multiplexers: Select appropriate data sources
Control Lines: Carry control signals throughout system
Clock: Synchronizes all operations
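The fetch-decode-execute cycle described above can be sketched as a toy simulation. This is a minimal illustration only: the three-instruction machine, its opcodes, and the register names are invented for this example, not taken from any real processor.

```python
# Toy fetch-decode-execute loop for a hypothetical 3-instruction machine.
# The instruction set (LOAD, ADD, HALT) is invented for illustration.
memory = [
    ("LOAD", 5),   # load the immediate value 5 into the accumulator
    ("ADD", 3),    # add the immediate value 3 to the accumulator
    ("HALT", 0),   # stop execution
]

pc = 0             # Program Counter: address of the next instruction
acc = 0            # Accumulator: the ALU's working register
running = True

while running:
    # Fetch: read the instruction at PC, then increment PC
    opcode, operand = memory[pc]
    pc += 1
    # Decode + Execute: the control unit selects the required action
    if opcode == "LOAD":
        acc = operand
    elif opcode == "ADD":
        acc += operand      # the ALU performs the addition
    elif opcode == "HALT":
        running = False

print(acc)  # 8
```

A real processor overlaps these phases (pipelining) and routes data through buses and multiplexers, but the sequential loop above captures the basic control flow.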
🔢 (b) Number Systems and Codes
🌐 Number Systems
Number systems form the mathematical foundation of computer
operations and data representation.
2️⃣ Binary System (Base-2)
The binary system uses only two digits: 0 and 1, making it perfect for
digital computers.
🌟 Characteristics:
Base: 2
Digits: 0, 1
Position Values: Powers of 2 (1, 2, 4, 8, 16, 32, ...)
Natural Representation: Matches electronic switches (ON/OFF)
Fundamental Unit: Bit (Binary Digit)
💡 Applications:
Digital circuit design
Computer memory addressing
Data transmission protocols
Boolean logic operations
Cryptographic algorithms
8️⃣ Octal System (Base-8)
The octal system uses eight digits and provides a convenient shorthand for
binary representation.
🌟 Characteristics:
Base: 8
Digits: 0, 1, 2, 3, 4, 5, 6, 7
Position Values: Powers of 8 (1, 8, 64, 512, ...)
Binary Relationship: Each octal digit represents 3 binary digits
Compact Representation: Reduces binary string length
💡 Applications:
File permission systems in Unix/Linux
Memory dump representations
Assembly language programming
Network addressing schemes
🔟 Decimal System (Base-10)
The decimal system is the standard number system used in everyday
mathematics and human communication.
🌟 Characteristics:
Base: 10
Digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Position Values: Powers of 10 (1, 10, 100, 1000, ...)
Natural for Humans: Corresponds to counting on ten fingers
Universal Standard: Used in most mathematical operations
💡 Applications:
User interfaces and displays
Financial calculations
Scientific measurements
Input/output operations
Human-computer interaction
1️⃣6️⃣ Hexadecimal System (Base-16)
The hexadecimal system provides an efficient way to represent large binary
numbers in compact form.
🌟 Characteristics:
Base: 16
Digits: 0-9, A, B, C, D, E, F (where A=10, B=11, C=12, D=13, E=14,
F=15)
Position Values: Powers of 16 (1, 16, 256, 4096, ...)
Binary Relationship: Each hex digit represents 4 binary digits
Programmer Friendly: Easy conversion to/from binary
💡 Applications:
Memory addresses in debugging
Color codes in web design
Assembly language programming
Cryptographic key representation
Hardware register values
🔄 Conversions Between Number Systems
Number system conversions are essential skills for computer science
students and professionals.
➡️ Decimal to Binary Conversion
Method: Successive Division by 2
Divide the decimal number by 2
Record the remainder (0 or 1)
Continue until quotient becomes 0
Read remainders in reverse order
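The successive-division steps above can be written directly in Python (the function name `dec_to_bin` is our own; Python's built-in `bin()` does the same job):

```python
def dec_to_bin(n: int) -> str:
    """Convert a non-negative decimal integer to binary by successive division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # record the remainder (0 or 1)
        n //= 2                  # continue with the quotient
    return "".join(reversed(bits))  # read remainders in reverse order

print(dec_to_bin(13))  # 1101
```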
➡️ Binary to Decimal Conversion
Method: Positional Value Multiplication
Multiply each binary digit by its position value (power of 2)
Sum all the products
Result is the decimal equivalent
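The positional-value method above, sketched in Python (equivalent to the built-in `int(bits, 2)`):

```python
def bin_to_dec(bits: str) -> int:
    """Multiply each binary digit by its positional value (a power of 2) and sum."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * (2 ** position)  # rightmost digit is position 0
    return total

print(bin_to_dec("1101"))  # 13
```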
➡️ Decimal to Octal Conversion
Method: Successive Division by 8
Divide the decimal number by 8
Record the remainder (0-7)
Continue until quotient becomes 0
Read remainders in reverse order
➡️ Decimal to Hexadecimal Conversion
Method: Successive Division by 16
Divide the decimal number by 16
Record the remainder (0-15, use A-F for 10-15)
Continue until quotient becomes 0
Read remainders in reverse order
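The two division procedures above follow the same pattern, so one sketch covers both octal and hexadecimal (the helper name `dec_to_base` is our own):

```python
DIGITS = "0123456789ABCDEF"

def dec_to_base(n: int, base: int) -> str:
    """Successive division: works for octal (base 8) and hexadecimal (base 16)."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        out.append(DIGITS[n % base])  # the remainder becomes the next digit
        n //= base                    # continue with the quotient
    return "".join(reversed(out))     # read remainders in reverse order

print(dec_to_base(250, 8))   # 372
print(dec_to_base(250, 16))  # FA
```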
➡️ Binary-Octal-Hexadecimal Conversions
Binary to Octal: Group binary digits in sets of 3 from the right
Binary to Hexadecimal: Group binary digits in sets of 4 from the right
Octal to Binary: Replace each octal digit with 3 binary digits
Hexadecimal to Binary: Replace each hex digit with 4 binary digits
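The grouping rules above can be sketched in Python (helper names are our own; note the left-padding with zeros so the leftmost group is complete):

```python
def bin_to_hex(bits: str) -> str:
    """Group binary digits in sets of 4 from the right, then map each group."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)  # pad on the left to a multiple of 4
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join("0123456789ABCDEF"[int(g, 2)] for g in groups)

def bin_to_oct(bits: str) -> str:
    """Group binary digits in sets of 3 from the right, then map each group."""
    bits = bits.zfill((len(bits) + 2) // 3 * 3)  # pad on the left to a multiple of 3
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(g, 2)) for g in groups)

print(bin_to_hex("11111010"))  # FA
print(bin_to_oct("11111010"))  # 372
```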
➖➕ 1's and 2's Complement
Complement systems provide efficient methods for representing negative
numbers and performing subtraction in digital computers.
1️⃣ 1's Complement
The 1's complement is obtained by inverting all bits in a binary number.
🌟 Characteristics:
Formation: Flip every bit (0 becomes 1, 1 becomes 0)
Range: For n-bit system: -(2^(n-1) - 1) to +(2^(n-1) - 1)
Zero Representation: Two representations (+0 and -0)
Addition: Requires end-around carry
Simple Implementation: Easy to generate in hardware
💡 Applications:
Early computer systems (e.g., the CDC 6600 and UNIVAC 1100 series)
Internet checksum calculations (the IPv4/TCP/UDP checksum uses 1's
complement arithmetic)
Error detection systems
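The bit-flip formation rule and the end-around carry can be sketched as follows (function names are our own; bit strings are used so the width n is explicit):

```python
def ones_complement(bits: str) -> str:
    """Flip every bit: 0 becomes 1, 1 becomes 0."""
    return "".join("1" if b == "0" else "0" for b in bits)

def ones_comp_add(a: str, b: str) -> str:
    """Add two n-bit 1's-complement numbers with end-around carry."""
    n = len(a)
    s = int(a, 2) + int(b, 2)
    if s >= (1 << n):                     # a carry out of the top bit...
        s = (s & ((1 << n) - 1)) + 1      # ...is added back in (end-around carry)
    return format(s, f"0{n}b")

print(ones_complement("0011"))                        # 1100  (-3 in 4-bit 1's complement)
print(ones_comp_add("0101", ones_complement("0011"))) # 0010  (5 + (-3) = 2)
```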
2️⃣ 2's Complement
The 2's complement is the most widely used method for representing
signed integers in modern computers.
🌟 Characteristics:
Formation: 1's complement + 1
Range: For n-bit system: -2^(n-1) to +(2^(n-1) - 1)
Zero Representation: Single representation
Addition/Subtraction: Standard binary arithmetic works
Modern Standard: Used in virtually all computers
💡 Applications:
Integer arithmetic in processors
Memory addressing calculations
Digital signal processing
Control system implementations
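The formation rule (1's complement + 1) and the key property that ordinary binary addition performs subtraction can be sketched in Python (the helper name is our own):

```python
def twos_complement(bits: str) -> str:
    """2's complement = 1's complement + 1, taken modulo 2^n."""
    n = len(bits)
    inverted = int(bits, 2) ^ ((1 << n) - 1)          # flip every bit
    return format((inverted + 1) & ((1 << n) - 1), f"0{n}b")

print(twos_complement("0011"))  # 1101  (-3 in 4-bit 2's complement)

# Standard binary addition now performs subtraction directly (5 - 3 = 2):
result = (int("0101", 2) + int("1101", 2)) & 0b1111   # discard the carry out
print(format(result, "04b"))    # 0010
```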
🔤 Coding Systems
Coding systems enable computers to represent and process various types
of information beyond simple numbers.
📊 BCD (Binary Coded Decimal)
BCD represents each decimal digit using a 4-bit binary code.
🌟 Characteristics:
Encoding: Each decimal digit (0-9) uses 4 bits
Range: 0000 to 1001 (represents 0-9)
Unused Codes: 1010 to 1111 are invalid
Direct Conversion: Easy decimal-to-binary conversion
Precision: Maintains decimal accuracy
💡 Applications:
Digital clocks and calculators
Financial calculations
Display systems
Measurement instruments
Accounting software
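BCD encoding can be sketched in a few lines (the helper name is our own). Note the contrast with pure binary: 59 in binary is 111011, but in BCD each decimal digit gets its own 4-bit group:

```python
def to_bcd(n: int) -> str:
    """Encode each decimal digit of n separately as a 4-bit group."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(59))  # 0101 1001  (5 -> 0101, 9 -> 1001)
```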
🔤 ASCII (American Standard Code for Information Interchange)
ASCII provides a standard encoding for text characters in computer
systems.
🌟 Characteristics:
Size: 7-bit code (128 characters)
Extended: 8-bit extended ASCII (256 characters)
Coverage: Letters, digits, punctuation, control characters
Universal: Widely supported across platforms
Human Readable: Direct text representation
💡 Applications:
Text file storage
Data communication protocols
Programming languages
Document processing
Internet communications
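ASCII codes can be inspected directly in Python: `ord()` returns a character's code and `chr()` is its inverse (for ASCII characters these coincide with the 7-bit ASCII values):

```python
print(ord("A"))                  # 65   (ASCII code of uppercase A)
print(chr(97))                   # a    (character with code 97)
print(format(ord("A"), "07b"))   # 1000001  (the 7-bit ASCII pattern for A)
```

A useful pattern to remember: 'A'-'Z' occupy codes 65-90 and 'a'-'z' occupy 97-122, so corresponding upper- and lowercase letters differ by exactly one bit (the value 32).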
🎨 Gray Code (Reflected Binary Code)
Gray code ensures that consecutive values differ by only one bit.
🌟 Characteristics:
Single Bit Change: Adjacent codes differ by exactly one bit
Cyclic: The last code in the sequence differs from the first by only one bit, closing the cycle
Error Minimization: Reduces switching errors
Reflection Property: Generated by reflection method
Position Independent: No weighted positions
💡 Applications:
Rotary encoders
Analog-to-digital converters
Error correction systems
Mechanical position sensing
Communication systems
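The standard binary-to-Gray conversion is a one-line bit operation, sketched here (the function name is our own):

```python
def to_gray(n: int) -> int:
    """Binary-to-Gray conversion: XOR the number with itself shifted right by one."""
    return n ^ (n >> 1)

# Consecutive values differ in exactly one bit:
for i in range(4):
    print(format(to_gray(i), "02b"))  # 00, 01, 11, 10
```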
3️⃣ Excess-3 Code (XS-3)
Excess-3 code adds 3 to each decimal digit before converting to binary.
🌟 Characteristics:
Formation: BCD + 0011 (binary 3)
Range: 0011 to 1100 (represents 0-9)
Self-Complementing: 9's complement obtained by bit inversion
Arithmetic Advantage: Simplified addition operations
Unique Representation: Each digit has distinct code
💡 Applications:
Early computer arithmetic units
Digital calculators
Specialized arithmetic processors
Legacy system compatibility
Educational demonstrations
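The formation rule and the self-complementing property can be demonstrated in a short sketch (the helper name is our own):

```python
def to_excess3(digit: int) -> str:
    """Excess-3 code of a decimal digit = binary of (digit + 3)."""
    return format(digit + 3, "04b")

print(to_excess3(0))  # 0011
print(to_excess3(9))  # 1100

# Self-complementing: inverting the bits of XS-3(d) gives XS-3(9 - d)
print(to_excess3(2))                             # 0101
print(format(int(to_excess3(2), 2) ^ 0b1111, "04b"))  # 1010, which equals XS-3(7)
```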
🎯 Learning Objectives Summary
By completing Unit 1, students will understand:
🔹 The fundamental architecture and organization of computer systems
🔹 How different functional units work together in processing
🔹 The importance of stored program concept in modern computing
🔹 Various number systems and their practical applications
🔹 Conversion techniques between different number systems
🔹 Complement systems for representing negative numbers
🔹 Different coding schemes for data representation
🚀 Future Applications
This foundational knowledge enables students to:
✨ Design and analyze computer hardware systems
✨ Understand processor architecture and performance
✨ Develop efficient algorithms and data structures
✨ Work with embedded systems and microcontrollers
✨ Pursue advanced studies in computer engineering
✨ Contribute to next-generation computing technologies
🎓 This comprehensive unit provides the essential foundation for
understanding computer systems and prepares students for advanced topics
in computer architecture and organization.