The System Development Life Cycle (SDLC) serves as a conceptual model in project management, outlining the stages of an information system development project, from the initial feasibility study to the maintenance of the completed application. Various SDLC methodologies, such as the waterfall model (the original SDLC method), rapid application development (RAD), joint application development (JAD), the fountain model, and the spiral model, have been developed to guide the process.
Overview:
The SDLC has three primary objectives: ensuring the delivery of high-quality systems, establishing robust management controls over projects, and maximizing the productivity of systems staff. To fulfill these objectives, the SDLC must meet specific requirements, supporting projects and systems of various scopes and types, encompassing all technical and management activities, maintaining high usability, and offering guidance on installation.
Technical activities include system definition (analysis, design, coding), testing, system installation, and production support. Management activities involve setting priorities, defining objectives, project tracking and status reporting, change control, risk assessment, etc.
System Life Cycle:
The System Life Cycle is an organizational process for developing and maintaining systems. It aids in creating a system project plan by providing a comprehensive list of processes and sub-processes necessary for system development.
The phases of the System Life Cycle are:
I. Preliminary system study
II. Feasibility study
III. Detailed system study/Investigation study
IV. System analysis
V. System design
VI. Implementation
VII. Maintenance
VIII. Study review
I. Preliminary System Study
The primary phase in the system development life cycle involves conducting a concise examination of the system in question, providing a clear understanding of the physical system. In practical terms, this initial system study entails the creation of a ‘System Proposal,’ outlining the problem definition, study objectives, and terms of reference.
II. Feasibility Study
Upon management approval of the system proposal, the subsequent step involves scrutinizing the feasibility of the proposed system. The feasibility study essentially evaluates the proposed system’s practicality, its ability to meet user requirements, efficient resource utilization, and, of course, cost-effectiveness.
III. Detailed System Study/Investigation
This phase encompasses a thorough investigation into various operations performed by a system and their relationships within and outside the system. Data are gathered through interviews, questionnaires, and on-site observations.
IV. System Analysis
System analysis is the process of collecting factual data, understanding the involved processes, identifying problems, and proposing viable suggestions to enhance system functionality.
V. System Design
Derived from user requirements and a detailed analysis of the existing system, the new system must be meticulously designed.
VI. Implementation
Upon obtaining user acceptance of the newly developed system, the implementation phase commences. Implementation is the stage where theory is translated into practical application.
VII. Maintenance
Maintenance is crucial for rectifying errors during the system’s operational lifespan and fine-tuning it to adapt to variations in its working environment.
VIII. Study Review
Review activities occur at various points in this phase. Each review results in one of the following decisions: the system is operating as intended and meeting performance expectations; the system requires corrections and modifications because it is not operating as intended; or the system must be evaluated further against user satisfaction with its operation and performance.
A program is a systematically arranged set of instructions that, when executed, directs the computer to operate in a predetermined manner. Numerous programming languages exist, including C, C++, Pascal, BASIC, FORTRAN, COBOL, and LISP, all falling under the category of high-level programming languages. Additionally, there are low-level languages like assembly languages and machine language.
Ultimately, all programs must undergo translation into machine language for computer understanding. This translation process is facilitated by compilers, interpreters, and assemblers. When purchasing software, one typically acquires an executable version of the program, indicating that it has already been compiled, assembled, and is ready for execution.
Guidelines For Program Development
During the development of a program, certain precautions should be observed:
The development of a program involves several stages, as outlined below:
Step 1: Specification of the Program
Step 2: Designing the Program
Step 3: Writing the Program Code
Step 4: Testing the Program
Step 5: Documenting the Program
Step 6: Maintaining the Program
This phase formalizes the task, outlining inputs and outputs, processing requirements, system constraints (execution time, accuracy, response time), and error handling methods.
This step involves understanding the problem to be solved with a computer program, resolving programmer queries, and comprehending program requirements.
Once the overall problem is identified, the next software development stage is program design. The programmer envisions a step-by-step procedure, resulting in a flowchart diagram.
To test the logic of a program, desk checking involves executing algorithm statements manually on sample data.
This step transforms the program logic design into a computer language format, translating the design into computer instructions.
The compilation process involves lexical analysis, syntactic analysis, code generation (often assembly language), and code optimization.
This stage discovers and corrects programming errors. Debugging is crucial, as few programs run correctly on the first attempt. Testing validates the program’s performance.
This stage involves creating documentation to ensure that users and maintainers can understand and extend the program for further applications.
A compiler translates text from a high-level language (source language) into another computer language (target language). The process includes lexical analysis, syntactic analysis, code generation, and code optimization.
An interpreter directly executes a program, whereas a compiler translates it without executing it. The interpretation process involves the source code, a lexical analyzer, a syntax analyzer, a semantic analyzer, a code generator producing intermediate language code, the interpreter itself, and any external libraries.
Compiler Flow Diagram:
Source Language → Lexical Analysis → Syntactic Analysis → Code Generation → Code Optimization → Target Language
Interpreter Flow Diagram:
Source Code → Lexical Analyzer → Syntax Analyzer → Semantic Analyzer → Code Generator → Intermediate Language Code → Interpreter → Execution Result
Examples of Interpreted Languages: BASIC, LISP
Examples of Compiled Languages: C, ALGOL, COBOL, FORTRAN, etc.
To instruct a computer to perform tasks, one must compose a computer program, delineating the desired actions step by step. The essence of programming lies in formulating algorithms—sets of rules and sequential steps that prescribe the finite and ordered sequence for solving specific problems.
An algorithm implemented in a computer is commonly referred to as a program, signifying its pivotal role in programming. Essentially, an algorithm serves as the core of programming, elucidating the methodology by which a given task is executed.
A visual representation of the procedural steps and decisions involved in problem-solving is encapsulated in a flow chart, utilizing symbols to illustrate the various operations.
Typically, an algorithm starts with “BEGIN” and concludes with “END,” employing sequential steps denoted by “Step1,” “Step2,” and so forth.
An illustration of an algorithm to print ten odd numbers involves defining the starting number as input and outputting the first ten odd numbers through specified steps.
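As an illustrative sketch (not part of the original lesson), such an algorithm can be written directly in BASIC:
10 REM PRINT THE FIRST TEN ODD NUMBERS
20 N = 1
30 FOR I = 1 TO 10
40 PRINT N
50 N = N + 2
60 NEXT I
70 END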
Similarly, an algorithm to determine the average of a given set of numbers involves inputting the total number count, calculating the sum iteratively, and ultimately deriving the average.
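A minimal BASIC sketch of the averaging algorithm might look like this (the prompts and variable names are assumptions for illustration):
10 REM FIND THE AVERAGE OF A SET OF NUMBERS
20 INPUT "HOW MANY NUMBERS"; N
30 S = 0
40 FOR I = 1 TO N
50 INPUT "ENTER A NUMBER"; X
60 S = S + X
70 NEXT I
80 PRINT "THE AVERAGE IS "; S / N
90 END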
Examples of Compiled and Interpreted Programs:
BASIC has a number of built-in functions that greatly extend its capability. These functions perform such varied tasks as taking the square root of a number, counting the number of characters in a string, and capitalizing letters. A function associates one or more values, called inputs, with a single value called the output.
NUMERIC FUNCTIONS: SQR, INT
SQR calculates the square root of a number. The function INT finds the greatest integer less than or equal to a number; for positive numbers, INT therefore discards the decimal part.
Example:
SQR(9) is 3
INT(2.7) is 2
The terms in the parentheses can be numbers (as above), variables, or expressions.
ABS FUNCTION
Its full form is absolute. It is used to find the absolute value of a number. Absolute value of a number means the number without a sign.
Examples:
ABS(+3.4) = 3.4
ABS(- 3.4) = 3.4
RND FUNCTION
RND is a special function that gives us a random number between 0 and 1. This can be used in dice games to make it more interesting.
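As an illustrative sketch of the dice idea (assuming a classic BASIC dialect where RND returns a value between 0 and 1), a die throw can be simulated by combining RND with INT:
10 REM THROW A SIX-SIDED DIE
20 RANDOMIZE TIMER
30 D = INT(RND * 6) + 1
40 PRINT "YOU THREW A "; D
50 END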
COS, SIN, TAN, ATN FUNCTIONS
COS (Cosine)
SIN(Sine)
TAN(Tangent)
ATN(Arctangent, inverse of TAN)
Example:
CONST pi =3.14
PRINT COS(pi/4)
PRINT SIN(pi/4), etc.
MOD FUNCTION
It means remainder. This function returns the remainder after integer division.
Example:
16 MOD 5 = 1
30 MOD 5 = 0
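As a further illustration (not from the original examples), MOD makes it easy to test whether a number is even or odd:
10 REM TEST WHETHER A NUMBER IS EVEN OR ODD USING MOD
20 INPUT "ENTER A WHOLE NUMBER"; N
30 IF N MOD 2 = 0 THEN PRINT N; "IS EVEN" ELSE PRINT N; "IS ODD"
40 END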
EXP FUNCTION
The EXP function is used to calculate the exponential function, i.e., raise e to some power, where the value of e is approximately 2.71828. The general form is EXP(X). e.g.:
EXP(4) = 54.59815
EXP(-5) = 6.73794 E-03
LOG FUNCTION
LOG(X) gives the natural logarithm of X
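Example (illustrative values, since the lesson gives none; LOG expects a positive argument):
LOG(1) is 0
LOG(2.718282) is approximately 1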
MATHEMATICAL EXPRESSION → BASIC NOTATION
(x − y)/(x + y) → (X - Y)/(X + Y)
e^(x² + y) − sin(x + ny) → EXP(X^2 + Y) - SIN(X + N*Y)
b = 1/(4ac) → B = 1/(4*A*C)
BASIC PROGRAMS
FIND SQUARE ROOT OF NUMBERS
10 REM FIND SQUARE ROOT OF NUMBERS IN A RANGE
20 INPUT "ENTER FIRST NUMBER OF RANGE"; A
30 INPUT "ENTER LAST NUMBER OF RANGE"; B
40 FOR I = A TO B
50 PRINT "THE SQUARE ROOT OF "; I; " IS "; SQR(I)
60 NEXT I
70 END
FIND SQUARE ROOT OF A NUMBER ROUNDED DOWN TO AN INTEGER
10 REM FIND SQUARE ROOT ROUNDED DOWN TO AN INTEGER
20 INPUT "ENTER NUMBER"; A
30 S = INT(SQR(A))
40 PRINT "THE SQUARE ROOT OF "; A; " ROUNDED DOWN IS "; S
50 END
FIND THE TANGENT OF A GIVEN ANGLE
10 REM FIND TANGENT OF GIVEN ANGLE
20 INPUT "ENTER ANGLE IN RADIANS"; A
30 S = TAN(A)
40 PRINT "THE TANGENT OF "; A; " IS "; S
50 END
We can get the y position of the sine wave at any of these 360 points by using the SIN command. Unfortunately, the SIN command requires its input in radians, but we can convert degrees to radians by multiplying the degrees by 0.017453. So, to get the y position of the sine wave at 90 degrees, we would use this:
PRINT SIN(90*0.017453)
Of course, using a scale of -1 to +1 is a bit limited, and it takes a little multiplication and addition to map the result onto a range such as 0 to 100, with 50 as the midpoint: (SIN(dg% * 0.017453) * 50) + 50, where dg% holds the angle in degrees. The program below plots a full sine curve using this formula.
PLOT SINE CURVE
10 REM PLOT SINE CURVE
20 SCREEN 13
30 FOR X%= 0 TO 360
40 PSET (X%, (SIN(X%*0.017453)*50)+50),15
50 NEXT X%
60 END
PLOT COSINE CURVE
20 SCREEN 13
30 FOR x% = 0 to 360
40 PSET (x%, (COS(x%*0.017453)*50)+50),15
50 NEXT x%
60 END
The internet comprises a globally interconnected computer network that offers an extensive range of information on virtually any conceivable topic. The World Wide Web, commonly referred to as the Web, constitutes a subset of the Internet.
Definition of Internet
The Internet is a global network of computers designed for sharing information, with information sharing and communication as its primary purposes.
There are various Internet browsers available for navigating the web, with some of the most widely used ones being Internet Explorer, Netscape Navigator, Opera, and Firefox.
It is challenging to envision life without the Internet, given its profound impact on our lives.
Computer Hardware
The term “computer hardware” encompasses the tangible components that together form a computer system. It refers to the physical constituents, such as the monitor, mouse, keyboard, computer data storage, hard disk drive (HDD), system unit (inclusive of graphic cards, sound cards, memory, motherboard, and chips), among others. These are physical objects that can be physically touched. In contrast, software comprises instructions that can be stored and executed by hardware.
Input and Output Devices
An input device refers to any piece of computer hardware used to transmit data into the computer’s storage for processing. Examples include the keyboard, card reader, mouse, scanner, microphone, joystick, and more.
A computer output device is a tool that releases processed data from the computer to either the user or other storage devices. Examples of such devices include monitors, printers, speakers, projectors, and the like.
The primary function of the CPU is to carry out mathematical computations on binary numbers, although it also controls and coordinates the other parts of the system. It is mounted on the motherboard and works together with the other circuits of the computer.
A central processing unit (CPU) represents the hardware within a computer responsible for executing a computer program’s instructions by conducting fundamental arithmetic, logical, control, and input/output operations within the system.
The CPU consists of the following:
Two common components of a CPU are the Arithmetic Logic Unit (ALU), which executes arithmetic and logical operations, and the Control Unit (CU), which retrieves instructions from memory, and decodes and executes them, involving the ALU when necessary.
The control unit is a constituent of a computer’s central processing unit (CPU) that guides the processor’s operations. It instructs the computer’s memory, arithmetic/logic unit, and input/output devices on how to respond to program instructions.
By furnishing timing and control signals, the control unit directs the operation of other units. The CU (Control Unit) manages all computer resources, dictating the data flow between the Central Processing Unit (CPU) and other devices. In contemporary computer designs, the control unit typically resides internally in the CPU, maintaining its overarching role and operation.
An ALU performs fundamental arithmetic and logic operations, including addition, subtraction, multiplication, division, and logical comparisons such as NOT, AND, and OR. Logical operations compare values such as numbers, letters, and special characters, and the relational operators (=, <, >) describe the comparison operations executed by the ALU.
Memory in a computer serves as the repository for data and instructions required for processing. It retains programs and data only for the duration of the operation of the corresponding program.
This is the type of memory directly accessible by the CPU, with which it interacts constantly. The CPU retrieves stored data, processes instructions, and executes them as required. Information, data, and applications are loaded into primary memory before processing. An example is RAM (Random Access Memory), a volatile (temporary) but fast form of memory. In addition to the main RAM, primary memory comprises two sub-layers:
(i) Processor registers within the processor, serving as one of the fastest forms of data storage containing a word of data (usually 32 or 64 bits).
(ii) Processor cache, designed to enhance computer performance by linking fast registers to slower main memory. Cache memory loads actively used duplicated information and is faster than main memory but has limited storage capacity.
Uses of Primary Memory
The primary storage unit is utilized for the following activities:
(i) Input and output operations.
(ii) Text manipulation and calculation operations.
(iii) Logical or comparison operations.
(iv) Storage and retrieval operations.
Examples of Primary Memory
(i) RAM chips provide volatile storage: they require a continuous power supply (and, in the case of dynamic RAM, special regenerator or refresh circuits) to retain stored data.
(ii) ROM chips retain stored data even when the power supply is cut. Unlike RAM chips, ROM chips are non-volatile. ROM may contain microprograms for specific operations, such as computer startup. ROM only reads and does not accept user instructions.
Types of ROM
(a) Programmable Read-Only Memory (PROM): Allows user programming for converting critical and lengthy operations into microprograms.
(b) Erasable Programmable Read-Only Memory (EPROM): Can be erased and reprogrammed; erasure requires exposure to ultraviolet light.
(c) Electrically Erasable and Programmable Read-Only Memory (EEPROM): Reprogrammable using special electrical pulses.
RAM vs. ROM: A Detailed Comparison
Accessibility: RAM can be both read from and written to by the processor; ROM is read-only during normal operation.
Working Type: RAM is volatile memory; ROM is non-volatile memory.
Storage: RAM temporarily holds the programs and data currently in use; ROM permanently stores startup instructions (firmware) such as the BIOS.
Speed: RAM is faster to access than ROM.
Data Preserving: RAM loses its contents when the power is switched off; ROM retains its contents without power.
Structure: RAM chips commonly come in dynamic (DRAM) and static (SRAM) forms; ROM chips are programmed once or infrequently and then only read.
Cost: RAM is generally more expensive per chip than ROM.
Chip Size: RAM chips are generally available in larger capacities than ROM chips.
Types: RAM includes SRAM and DRAM; ROM includes PROM, EPROM, and EEPROM.
Secondary Memory
Secondary memory or storage refers to non-volatile memory located externally from the computer. It is utilized for storing large amounts of data, offering permanent or long-term storage for data or programs, as well as serving as a backup storage solution.
Various criteria can be used to classify secondary storage media:
(i) Retrieval speed: The time it takes to locate and retrieve stored data.
(ii) Size/Storage capacity: The ability to store data, with a preference for larger storage capacities.
(iii) Cost per bit of capacity: Preference for lower costs.
Types of Secondary Memory
(i) Magnetic Disk: A Mylar or metallic platter storing electronic data as tiny magnetic spots on its iron oxide coating. The access time is determined by seek time and search time. Transfer rate depends on data density and rotational speed.
Magnetic Disks are categorized as:
(a) Fixed disk or Hard disk: Made from materials like aluminum, with options for permanent installation or as removable disks.
Forms of Fixed disks or hard disks include:
(b) Floppy Disk: Available in three sizes – the 8-inch floppy disk, the 5¼-inch floppy disk, and the compact 3½-inch floppy disk (less than 4 inches in diameter).
Optical Technology Storage
Involves the use of laser beams, highly concentrated beams of light, in the form of:
(a) Optical laser disks such as Compact Disk Recordable (CD-R), Compact Disk Rewritable (CD-RW), and Digital Versatile Disk (DVD). These can be read by CD-ROM drives, CD-RW drives, DVD-ROM drives, and DVD-RW drives.
(b) DVD Drive: Similar to CD but with a larger data capacity, made up of several layers of plastic totaling about 1.2 millimeters thick. Various types and formats are available, similar to CDs.
Flash Drives
These compact devices facilitate easy file transportation, with capacities ranging from 16 MB to 1 GB. Flash drives are faster and more reliable than floppy disks.
Comparison of Memory Devices
In the storage hierarchy, primary memory (registers, cache, and RAM) offers the fastest access but the smallest capacity and the highest cost per bit, while secondary storage (magnetic disks, optical disks, and flash drives) is slower but provides much larger, cheaper, and permanent storage.
Distinguishing Primary and Secondary Memory
Primary Storage Devices vs. Secondary Storage Devices
1. Primary storage is temporary; secondary storage is permanent.
2. Primary storage is typically more costly; secondary storage is typically more budget-friendly.
3. Primary storage is faster (and hence more expensive); secondary storage is connected via cables, slower, and more economical.
4. Primary storage has a lower storage capacity; secondary storage has a higher storage capacity.
5. Primary storage includes RAM, ROM, etc.; secondary storage includes FDD, HDD, etc.
(a) BIT (Binary Digit): The smallest unit of information, representing either 0 or 1.
(b) NIBBLE: A grouping of four bits forming one nibble.
(c) BYTE: A fixed number of bits constituting a unit of information; a combination of 8 bits.
(d) CHARACTER: Represented by one byte, it can be a letter, digit, punctuation mark, or special character.
(e) WORD: Formed by the combination of 2, 4, or 8 bytes.
Data Measurement:
8 bits = 1 byte
1024 bytes = 1 kilobyte (KB)
1024 KB = 1 Megabyte (MB)
1024 MB = 1 Gigabyte (GB)
1024 GB = 1 Terabyte (TB)
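A worked conversion (illustrative): 2 GB = 2 × 1024 MB = 2,048 MB, and 2,048 MB = 2,048 × 1024 KB = 2,097,152 KB.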
Logic Gates
A fundamental component of digital circuits, a logic gate is a basic building block. Typically, these gates possess two inputs and one output. At any given moment, each terminal is in either a low (0) or high (1) binary condition. The state of a terminal in a logic gate can frequently change as the circuit processes data. Functioning in a logical manner, the gate processes one or more input signals. Based on the input value or voltage, the logic gate will produce either a ‘1’ for ON or a ‘0’ for OFF.
The three primary logic gates are AND, OR, and NOT.
Binary Code
Logic gates operate within a digital framework and employ a binary numbering system, known as binary code. This language, consisting solely of the numbers 1 or 0, is the same as that used by computers.
Inputs And Outputs
Gates feature two or more inputs, with the exception of a NOT gate, which has only one input. Regardless of the number of inputs, all gates have a single output. Typically, inputs and outputs are labeled using letters such as A, B, C, etc.
Logic Symbol Of The “AND” Gate
AND Gate
How does the AND gate work?
An ‘AND’ gate operates akin to two or more switches in series. For the lamp (output) to turn on, all switches must be closed (ON or with a value of 1). If any of the inputs are not ‘ON,’ the output remains ‘OFF.’
TRUTH TABLE OF "AND" GATE
INPUT INPUT OUTPUT
A B C
0 0 0
0 1 0
1 0 0
1 1 1
For the output value of the AND gate to be ‘1,’ all input values must be ‘1.’ Any other combination of inputs results in a zero output.
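As a sketch in the same BASIC style as the earlier programs (illustrative, not part of the original lesson), the AND-gate truth table can be generated with two nested loops:
10 REM PRINT THE TRUTH TABLE OF THE AND GATE
20 PRINT "A B C"
30 FOR A = 0 TO 1
40 FOR B = 0 TO 1
50 IF A = 1 AND B = 1 THEN C = 1 ELSE C = 0
60 PRINT A; B; C
70 NEXT B
80 NEXT A
90 END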
An ‘OR’ gate is comparable to two or more switches in parallel. Only one switch needs to be closed (ON or with a value of 1) for the lamp (output C) to turn ON with a value of 1.
LOGIC SYMBOL FOR “OR” GATE
OR Gate
TRUTH TABLE OF “OR” GATE
INPUT INPUT OUTPUT
A B C
0 0 0
0 1 1
1 0 1
1 1 1
An input value of ‘1’ in either or both inputs of the OR gate results in an output value of ‘1.’ An input value of ‘0’ for both inputs yields an output of ‘0.’
The NOT gate has a singular input and output. It reverses the input signal value. If the input is ‘1,’ the output is ‘0,’ and vice versa.
LOGIC SYMBOL FOR “NOT” GATE
NOT Gate
TRUTH TABLE FOR “NOT” GATE
INPUT OUTPUT
A C
0 1
1 0
The “NOT” gate, also referred to as an inverter, produces an output that is always the opposite of the input signal.
Logic Equations
In addition to depicting the operation of a logic gate through a truth table and grammatical definition, logic equations serve the purpose of representing not only logic gates and circuits but also, employing certain theorems and equivalences, streamlining the equation by reducing the number of terms involved. In logic equations, each Boolean variable is assigned a letter or symbol, similar to the algebraic representation of unknown numerical values using letters. This approach is known as Boolean algebra.
Symbolic logic encompasses values, variables, and operations, where TRUE is denoted as 1, and FALSE as 0. Variables are expressed by letters and can have values of either 0 or 1. Operations are functions involving one or more variables.
AND gate equation
The operation of an AND gate can also be articulated through a Boolean algebraic equation. For a 2-input AND gate, the equation is:
X = A · B
The symbol for the AND gate operation is a center dot, representing conjunction. This equation signifies that the output of the gate is a logic 1 when both A and B inputs are in their logic 1 states.
OR gate equation
The Boolean algebraic expression for an OR gate is given as:
X = A + B
This equation conveys that the output of the gate is a logic 1 when either A or B inputs are in their logic 1 states.
NOT gate equation
The operation of a NOT gate can be expressed by a Boolean algebraic equation:
X = Ā
Here, a complement bar is placed over the assigned input letter. The expression is read as “X is equal to not A,” indicating that the output state is the opposite of the logic state applied to the input.
Logic gates are indeed the fundamental components of digital electronics, constructed by combining transistors to perform various digital operations such as logical OR, AND, and NOT. Every digital device, including computers, mobile phones, calculators, and digital watches, contains logic gates. The utility of logic gates can be elucidated through examples, such as the single-bit full adder in digital electronics, which is a logic circuit performing the logical addition of two single-bit binary numbers.
These are gates that are formed from a combination of two logic gates. There are two types of alternative logic gates:
A NAND gate is the combination of an AND gate and NOT gate. It operates the same as an AND gate but the output will be opposite. Remember, the NOT gate does not always have to be the output leg; it could be used to invert an input signal also.
Logic Symbol For The “NAND” Gate
NAND Gate
Notice the circle on output C.
TRUTH TABLE FOR THE “NAND” GATE
INPUT INPUT OUTPUT
A B C
0 0 1
0 1 1
1 0 1
1 1 0
NAND Gate Equation
The NAND gate operation can also be expressed by a Boolean algebra equation. For a 2 – input NAND gate, the equation is:
X = A · B, with a complement bar drawn over the entire product.
This equation is read "X equals A AND B, all NOT," which means that the output of the gate is not a logic 1 when both the A and B inputs are in their logic 1 states.
A NOR gate is the combination of both an OR gate and NOT gate. It operates the same as an OR gate, but the output will be the opposite.
NOR Gate
TRUTH TABLE FOR THE “NOR” GATE
INPUT INPUT OUTPUT
A B C
0 0 1
0 1 0
1 0 0
1 1 0
NOR GATE EQUATION
The NOR gate operation can also be expressed by a Boolean algebra equation. For a 2-input NOR gate, the equation is:
X = A + B, with a complement bar drawn over the entire sum.
The expression is the same as that of the OR gate, but with an over bar above the entire right-hand side. This equation is read "X equals A OR B, all NOT," which means that the output of the gate is not a logic 1 when either A or B is in its logic 1 state.
Logic gates are in fact the building blocks of digital electronics; they are formed by combining transistors (either BJTs or MOSFETs) to realise digital operations such as logical OR, NOT, and AND. Every digital product, including computers, mobile phones, calculators, and even digital watches, contains logic gates.
The XOR (exclusive-OR) gate acts in the same way as the logical "either/or": the output is "true" if either input, but not both, is "true". The output is "false" if both inputs are "false" or if both inputs are "true".
Logic Symbol For “XOR” Gate
XOR Gate
TRUTH TABLE FOR THE “XOR” GATE
INPUT INPUT OUTPUT
A B Y
0 0 0
0 1 1
1 0 1
1 1 0
A comparator is a combinational logic circuit that compares two binary quantities to determine their relationship, for example which one has the greater magnitude or whether they are equal. In other words, a comparator determines the relationship between two binary quantities. An XOR gate can be used as a basic (equality) comparator.
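A minimal one-bit comparator sketch in BASIC, assuming the two bits are entered as 0 or 1 (illustrative only):
10 REM ONE-BIT COMPARATOR BASED ON XOR LOGIC
20 INPUT "ENTER BIT A (0 OR 1)"; A
30 INPUT "ENTER BIT B (0 OR 1)"; B
40 IF A <> B THEN PRINT "XOR = 1: A AND B DIFFER" ELSE PRINT "XOR = 0: A AND B ARE EQUAL"
50 END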
The XNOR (exclusive-NOR) gate is the inverse of the XOR gate; the only difference between their logic symbols is that the XNOR symbol has a circle on its output to indicate that the output is inverted.
XOR Combination
One of the most common uses for XOR gates is to add two binary numbers. For this operation to work, the XOR gate must be used in combination with an AND gate.
To understand how the circuit works, review how binary addition works:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10
If you wanted, you could write the results of each of the preceding addition statements by using two binary digits, like this:
0 + 0 = 00
0 + 1 = 01
1 + 0 = 01
1 + 1 = 10
When results are written with two binary digits, as in this example, you can easily see how to use an XOR and an AND circuit in combination to perform binary addition.
If you consider just the first binary digit of each result, you’ll notice that it looks just like the truth table for an AND circuit and that the second digit of each result looks just like the truth table for an XOR gate.
The adder circuit has two outputs. The first is called the Sum, and the second is called the Carry. The Carry output is important when several adders are used together to add binary numbers that are longer than 1 bit.
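To make the Sum and Carry outputs concrete, here is an illustrative BASIC sketch of a one-bit half adder: the Sum column follows the XOR truth table and the Carry column follows the AND truth table.
10 REM ONE-BIT HALF ADDER: SUM = A XOR B, CARRY = A AND B
20 PRINT "A B CARRY SUM"
30 FOR A = 0 TO 1
40 FOR B = 0 TO 1
50 IF A <> B THEN S = 1 ELSE S = 0
60 IF A = 1 AND B = 1 THEN C = 1 ELSE C = 0
70 PRINT A; B; C; S
80 NEXT B
90 NEXT A
100 END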
A computer, often described as a remarkable marvel of modern technology, is a specialized multi-purpose electronic device, commonly referred to as a machine. This sophisticated apparatus possesses the extraordinary ability to receive instructions in the form of data, adeptly store and process this information, and subsequently deliver precise results at an astonishingly high speed.
Comprising an intricate system, a computer is fundamentally composed of two integral components:
Hardware encapsulates the tangible, physical units or components that collectively form the structural and functional framework of the computer system. These components encompass a diverse array of elements, including but not limited to central processing units (CPUs), memory modules, storage devices, input/output peripherals, and networking interfaces. The harmonious collaboration of these hardware components enables the seamless execution of various computational tasks.
Software constitutes the intangible, programmatic aspect of the computer system. These programs, comprising sequences of instructions, are meticulously processed by the hardware to solve specific problems or accomplish designated tasks. Software encompasses a wide spectrum, ranging from operating systems that facilitate communication between hardware and user applications to application software tailored for diverse functionalities.
At its core, a computer is designed for computation, effortlessly executing mathematical and logical operations with unparalleled efficiency.
One of the defining features of a computer is its programmability, allowing users to alter its behaviour by loading different sets of instructions, known as programs.
Computers wield substantial processing power, capable of handling complex calculations and data manipulations with remarkable efficacy.
Renowned for its rapid data processing capabilities, a computer operates at speeds inconceivable to human cognition, executing tasks swiftly and efficiently.
Computers excel in the storage and retrieval of vast amounts of information, leveraging various storage mediums to store data and retrieve it promptly upon request.
Exhibiting an exceptional degree of precision, computers consistently deliver accurate results in a diverse range of applications, minimizing errors in computations and tasks.
Computers are remarkably versatile and adaptable to a myriad of applications and tasks across diverse domains, from scientific research and business operations to creative endeavors.
Operating tirelessly and without fatigue, computers exhibit a high level of diligence, capable of executing repetitive tasks with consistency and reliability.
In essence, the computer, with its hardware-software synergy, stands as an indispensable tool that has revolutionized the way information is processed, tasks are accomplished, and problems are solved in the contemporary technological landscape.
Definition of Computer Hardware:
Computer hardware refers to the tangible physical units or components that constitute the configuration of a computer. This term encompasses the visible and touchable parts of a computer system.
Examples of Hardware Components: the monitor, keyboard, mouse, hard disk drive (HDD), data storage devices, and the system unit.
Computer hardware is categorized into two main divisions:
The system unit contains crucial components of the computer, with the central processing unit (CPU) performing the majority of processing tasks. It includes the motherboard, which houses the microprocessor chip or CPU, memory chips, expansion slots and cards, buses, and ports.
Peripheral:
Peripherals are devices attached to a computer to enhance its functionality by performing specific tasks. Examples include webcams, printers, mice, keyboards, and joysticks. Peripheral devices cannot function independently of the computer.
Peripheral devices are further classified into:
Input Peripheral Devices:
Output Peripheral Devices:
Software refers to programs processed by hardware. Programs are sequences of instructions executed by hardware to solve problems or perform tasks.
Software is categorized into two types:
System software is designed to operate a computer’s hardware and application programs. Examples include BIOS (Basic Input Output System) and the BOOT program.
Application software comprises instructions written by vendors for specific tasks, such as MS Word, Skype, and Explorer.
Development of Computing Devices Through History
Computing devices trace their roots back to the time when our ancestors utilized early calculation tools for counting purposes. One such method involved counting with fingers and toes, where calculations were performed manually before the advent of computers. As the need for more sophisticated counting methods arose, early humans introduced the use of pebbles or stones to facilitate counting their possessions, such as flocks of animals.
Before the introduction of alternative counting methods, fingers, toes, stones, and beads were commonly employed for simple arithmetic calculations, including additions and subtractions. However, this rudimentary counting system proved impractical for handling large numbers. This method persisted until the development of the Abacus device.
The Abacus, originating around 500 B.C., emerged as a revolutionary device designed to replace manual counting methods like finger and stone calculations. Employed in various regions, including China, Greece, and Rome, the Abacus featured a rectangular wooden frame with horizontal rods, each containing beads made of stones.
Counting with the Abacus involved shifting beads from one place to another, enabling manual addition and subtraction. The device comprised several columns, with each column representing different place values, such as ones, tens, and hundreds.
The primary uses of the Abacus were centered around addition and subtraction, marking a significant improvement over previous counting methods.
The slide rule, colloquially known as a “slip stick,” emerged as a mechanical analog computer primarily used for multiplication and division. Invented by William Oughtred, the slide rule utilized a cursor moving up and down various scales, employing logarithmic principles.
Featuring components similar to today’s calculators, the slide rule found applications in multiplication and division, as well as functions like roots, logarithms, and trigonometry.
Following the Abacus, a crucial development occurred with the introduction of John Napier’s invention – a device with rods made of bone for multiplication calculations. The rods featured printed numbers arranged to mimic a multiplication table.
Napier’s bone facilitated multiplication by arranging individual strips to represent the numbers involved. This device served as a type of calculator for multiplication.
Blaise Pascal, a French mathematician, introduced the Pascal Calculator (Pascaline) in 1642 at the age of 19. This mechanical digital calculator could perform addition and subtraction on whole numbers. Comprising interlocking rotating cog wheels with ten segments each, the Pascal Calculator marked a significant leap in computational capabilities.
The Pascal Calculator’s uses included addition and subtraction operations, making it a pioneering mechanical digital calculator of its time.
Invented by the German mathematician Gottfried Wilhelm von Leibniz, the Leibniz Multiplier featured a wheel mechanism and measured approximately 67 cm (26 inches) in length. Constructed from polished brass and steel, the machine was housed in an oak case.
The Leibniz Multiplier enabled efficient long multiplication and division through a process involving repeated addition, expanding the capabilities of mechanical calculation devices during its time.
Historical Development Of Computing Devices 2 (Pre-Computer Age To 19th Century)
Jacquard’s Loom (Features, Components, And Uses)
Joseph Marie Jacquard invented the Jacquard loom in 1801, a mechanical device simplifying the production of textiles with intricate patterns like brocade, damask, and matelassé. The loom was controlled by punched cards, with each row of holes corresponding to one row of the design; many such cards were laced together to form the complete textile pattern.
Born on December 26, 1791, into a London banking family, Charles Babbage proposed a machine capable of 60 calculations per second, laying the groundwork for modern computing. Celebrating his 200th birth anniversary on November 1, 1991, scientists and engineers constructed the Difference Engine No. 2 based on his concept, honoring the father of modern computers.
The computer includes the following features:
Analytical Engine (Components, Features, And Uses)
The analytical engine, conceived by Charles Babbage, represented a novel mechanical computer capable of complex calculations, including multiplication and division. Its fundamental components mirror those found in today’s computers, featuring a CPU, Mill, and memory known as the “store.” The device had a reader for inputting instructions and a printer, the precursor to modern inkjet and laser printers, for recording results on paper.
Herman Hollerith’s tabulator, an electrically operated system, processed census data by reading holes on paper punch cards. Key components included:
In 1884, William Burroughs constructed his first experimental adding machine with printed output. Noteworthy features included a high sloping keyboard, a beveled glass front, and a concealed printing mechanism at the machine’s rear. The machine exclusively performed additions, lacking direct subtraction or complement addition. It featured two large keys for totals and subtotals and three smaller keys for non-add operations.
Development of Computing Devices in the 20th Century
The computing devices of the 20th century underwent significant historical development, marked by the emergence of the first computer generation, which used vacuum tubes for circuitry and magnetic drums for memory. These early computers were massive, occupying entire rooms, and were characterized by high costs and substantial heat generation, which often led to damage.
ENIAC, the Electronic Numerical Integrator and Computer, stands out as the world’s first general-purpose electronic digital computer. Devised by John Mauchly and J. Presper Eckert, ENIAC was versatile, capable of solving a multitude of problems, and employed vacuum tubes for its electronic components.
ENIAC Features:
ENIAC Components included vacuum tubes, crystal diodes, relays, resistors, capacitors, and hand-soldered joints. Input was possible via a punched-card reader, and output was produced on a card punch.
ENIAC Uses:
ENIAC could perform about 5,000 simple addition and subtraction operations per second.
EDVAC, or Electronic Discrete Variable Automatic Computer, was an early electronic computer also developed by Mauchly and Eckert.
EDVAC Features:
EDVAC Components consisted of vacuum tubes and diodes, and the machine had a power consumption of 56 kW.
EDVAC Uses:
EDVAC was utilized for addition, subtraction, multiplication, programmed division, and automatic checking with an ultrasonic serial memory.
UNIVAC, the Universal Automatic Computer, marked the first commercial computer in the United States, invented by Mauchly and Eckert.
UNIVAC Features:
UNIVAC Components included tubes, crystal diodes, and relays.
UNIVAC Uses:
A desktop computer, referred to as a PC, is positioned in a fixed location, distinguishing it from mobile laptops or portables.
Features of a Personal Computer:
Desktop Computer Components:
These include a motherboard, CPU, RAM, power pack, monitor (screen), keyboard, mouse, casing, system software, and application software.
Desktop Computer Uses:
Word processing, spreadsheet applications, browsing, internet, digital messaging, multimedia playback, and computer games.
Laptops, also known as notebooks, are mobile personal computers powered by electricity or rechargeable batteries.
Features of Laptop & Notebook Computers:
Include a webcam, wireless connection, fingerprint scanner, Bluetooth, audio jack, and card reader.
Components and Uses:
Similar to desktop components.
Palmtop computers are handheld, portable devices with an operating system compatible with desktop computers.
Palmtop Features:
Color screen, expansion slot, wireless capability, and a voice recorder.
Palmtop Components and Uses:
Similar to laptop components, but palmtops are typically powered by off-the-shelf batteries such as AA cells.
An input device refers to any computer hardware equipment utilized to transmit or submit data to the primary storage of a computer for further processing.
Many input devices can be categorized based on two criteria:
Modern Input Devices:
Earliest Input Devices:
The keyboard serves as an input device for entering data into a computer system and remains a crucial interface between users and computers. It is an electronic device with groups of keys electronically linked to the processor when connected to a computer system.
There are two main types of keyboards:
Features of a Standard Keyboard:
Features of an Enhanced Keyboard:
Keyboard Sections:
The keyboard is segmented into five distinct sections:
I. Typing (Alphanumeric) Keys – letters, digits, and punctuation keys used for entering text.
II. Function Keys – F1 to F12, used for tasks defined by the software in use.
III. Control Keys – Del, Ctrl, Esc, and Alt: Used in conjunction with other keys to execute specific tasks.
IV. Navigation Keys – the arrow keys, Home, End, Page Up, and Page Down, used to move around within documents.
V. Numeric Keypad – a calculator-style group of number keys.
Keyboard Features:
Keyboard Uses:
It is employed to:
The mouse, like the keyboard, serves as an interface for communicating with the computer; it is used for pointing, selecting, and for drawing and plotting images on the screen. Mice are categorized into two types:
Basic Mouse Functions:
Mouse Functions:
Mouse Features:
Mouse Operation:
To use the mouse, only two fingers are required: the first (index) finger and the third (middle) finger. The first finger is placed on the left button, while the third finger is placed on the right button. These operations may vary between different mouse models.
Output Devices
An output device in computing refers to a tool that disseminates processed data from the computer to either the user or storage devices. Various devices generate data in diverse formats such as audio, visual, and hard copy. These output devices, categorized as peripheral devices, connect to computers through cables or wireless networks.
A monitor, also known as a screen, Visual Display Unit (VDU), Cathode Ray Tube (CRT), or console, functions as an output device. Resembling a TV screen, it allows users to view the results of ongoing operations. While data seems to appear directly on the screen, it is, in fact, sent to the system unit, with the displayed content termed as softcopy. Monitors come in various sizes, such as 12”, 13”, 16”, and 21”.
External Features of the Monitor
Uses of the Monitor
Types of Monitor
Types of Color Monitors:
Cathode Ray Tube and Flat Panel Monitors
The Cathode Ray Tube is the most common type, known for its affordability and precision. Flat Panel Monitors, including LCDs, are commonly used in laptops, notebooks, palmtops, and are less bulky.
There are three main types of computer printers:
Various printers exhibit different technologies:
Printers Categorization
Printers can be classified based on whether the printer head strikes the paper. If it does, the printer is considered an impact printer; if not, it falls under the category of a non-impact printer. Non-impact printers, being faster, minimize physical movement during the printing process.
Laser printers use advanced xerography and an electronically controlled laser beam to produce characters on the photoconductive surface of a rotating drum. These printers are known for their high-quality and precise output.
Thermal printers operate as non-impact printers, forming characters by selectively heating specially treated paper. While they may be slower in comparison, they are widely used for applications such as receipt printing due to their simplicity and reliability.
Electrostatic printers function by charging paper electrically and passing it through a toner solution. Ink particles adhere to the charged areas of the paper, resulting in the creation of characters. This technology is employed in certain specialized printing applications.
Daisy wheel printers utilize a rotating disk with spokes, each containing raised characters. When a character needs to be printed, the corresponding spoke rotates into position, striking an inked ribbon against the paper to form the character. While not as common today, they were once popular for producing high-quality text.
Dot matrix printers create characters by striking an inked ribbon against the paper through a matrix of tiny pins. Each pin produces a dot, and combinations of dots form characters. Despite being relatively slow, dot matrix printers are robust and can handle multipart forms, making them suitable for certain specific applications.
Inkjet Printers:
Inkjet printers are non-impact printers that operate by spraying a controlled jet of special ink onto the printing surface. These printers are commonly used for a variety of purposes, especially in homes and offices, offering a balance between print quality and speed.
In summary, non-impact printers encompass a range of technologies, each with its unique characteristics, while impact printers, exemplified by dot matrix and daisy wheel printers, create characters through physical contact between the print head and the paper.
Meaning of Data
Data refers to raw facts or entities to which no meaning has been assigned, as they have not undergone any processing. It encompasses information about individuals, locations, objects, and their respective activities.
When data undergoes manipulation to organize and derive meaning or achieve a specific outcome, it is considered processed. The outcome of processed data is termed INFORMATION.
Data can manifest in various forms:
Data can be collected through methods such as counting using counters, observing people, activities, transactions, or events, administering questionnaires, conducting face-to-face interviews, and form filling.
Information is defined as any factual piece of data or news discovered, heard, or communicated verbally, in writing, or through other means.
Difference Between Data And Information
For instance, in market research surveys, completed questionnaires from the public represent the data. Processing and analysis of this data yield a report on the survey, which is considered information and can be used for decision-making.