|Everything HP200LX: Knowledge, Products, Service|
A non-programmer's introduction to programming
Hopefully, this introduction will demystify some of the terms used to describe computer programming. A computer program is a series of instructions that tell the computer how to behave. A program may tell the computer to act like a word processor, a database, a game, etc. Programs can be simple, comprising a few lines of instructions, or quite complicated, running to thousands of lines.
Computers only understand 0's and 1's. They respond to a small set of instructions that do simple math and store the results in memory. For a computer to do anything, it has to be fed a series of instructions along with the memory locations of its data. This series of instructions is a program, and it too is made up of 0's and 1's.
Since it is difficult for people to make sense of long lists of 0's and 1's, programmers created symbols for each machine instruction. Then they wrote programs that would translate one human-readable symbol, like ADD, into one machine-readable instruction, like 0110. This was the first "programming language," an "assembler." It allowed programmers to write a program in a text editor using recognizable symbols like ADD, SUB, and MOV. The assembler translates that file into machine-executable instructions, which are saved into another file. That "executable" file can then be run on the computer. This saved a lot of time and allowed programmers to write large programs.
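The idea can be sketched in a few lines of Python. This toy "assembler" uses invented 4-bit opcodes (real opcodes differ from processor to processor):

```python
# A toy assembler: translate human-readable mnemonics into
# made-up 4-bit machine opcodes (real CPUs use different codes).
OPCODES = {"ADD": "0110", "SUB": "0111", "MOV": "0001"}

def assemble(source_lines):
    """Translate each mnemonic into its machine-code bit pattern."""
    return [OPCODES[line.strip()] for line in source_lines]

program = ["MOV", "ADD", "SUB"]      # the human-readable source file
machine_code = assemble(program)     # the "executable" output
print(machine_code)                  # ['0001', '0110', '0111']
```

A real assembler also handles operands, labels, and memory addresses, but the core job is exactly this one-symbol-to-one-instruction translation.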
Programmers then realized that there were many routines they were writing repeatedly, like printing or saving a file. Each of these routines might take 50 or more instructions. So they wrote a program in which one human-readable instruction would translate (or "compile") into the whole series of instructions needed to PRINT or SAVE. The instructions were still typed into a text file and then compiled into an executable file. Languages like COBOL, Fortran, and C work this way.
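The one-to-many expansion described above can be sketched like this; the low-level instruction sequences here are invented placeholders, not real printer or disk routines:

```python
# A toy compiler pass: each high-level statement expands into the
# whole series of low-level instructions it stands for.
EXPANSIONS = {
    "PRINT": ["MOV data, port", "WAIT ready", "OUT port"],
    "SAVE":  ["OPEN file", "MOV data, buffer", "WRITE buffer", "CLOSE file"],
}

def compile_program(statements):
    code = []
    for stmt in statements:
        code.extend(EXPANSIONS[stmt])  # one statement -> many instructions
    return code

print(compile_program(["PRINT", "SAVE"]))  # 7 low-level instructions
```

Two short statements become seven instructions; in a real compiler the expansion factor is far larger.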
When you made a mistake in a program, you had to re-compile it every time. Someone thought of combining the editor and the translator into one development program to make writing software easier. This is an "interpreted" programming language. You type a human-readable instruction at the terminal, and it is translated and run in one step. This allows for easy testing and prototyping at the keyboard. Languages like BASIC, AWK, FORTH, and TIPI are examples.
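A minimal sketch of an interpreter's read-and-run loop, with a two-command language invented for the demo: each line is translated and executed immediately, rather than being compiled into a separate executable file first.

```python
# Each call translates and runs one line in a single step.
def interpret(line, variables):
    parts = line.split()
    if parts[0] == "LET":          # e.g. "LET X 5" stores a value
        variables[parts[1]] = int(parts[2])
    elif parts[0] == "PRINT":      # e.g. "PRINT X" shows a value
        print(variables[parts[1]])

env = {}                           # the interpreter's memory
interpret("LET X 5", env)
interpret("PRINT X", env)          # prints 5
```

Typing a bad line only produces an error for that line; you correct it and continue, with no re-compile step.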
Additional programming help
Even with all the developments and improvements in programming languages, the process still has many problems and pitfalls. To help overcome the remaining limitations, programmers use many additional software "tools." Taken as a group, these support programs are referred to as a workbench. One of the most basic tools is a "debugger," a program that helps pinpoint the location of bugs as the program runs. Programmers also use collections of external routines, sets of pre-programmed instructions, which can be called and included in the program they are writing. These external routines are stored in files referred to as "libraries."
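Calling a library routine looks like this in Python, whose math module is a collection of pre-programmed routines:

```python
# The math library supplies routines we can call instead of
# writing them ourselves.
import math

print(math.sqrt(16))   # 4.0 -- a pre-programmed square-root routine
print(math.pi)         # a pre-computed constant, 3.14159...
```

One line of our program invokes many instructions that someone else already wrote and debugged.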
Internally, a programming language has to interpret the instructions it receives, store temporary results, and provide accurate output. To interpret an instruction, it reads a line of input and separates the instruction from the memory locations and the data to be processed. This is called "parsing." Some programming languages, such as BASIC, Fortran, and C, let you enter a mathematical expression as you would write it in an algebra class: (A + B). Others, like FORTH and TIPI, use Reverse Polish Notation (RPN) to write math expressions: (A B +). (The HP Palmtop's HP Calc lets you use either Algebraic or RPN. Refer to the HP Palmtop User's Guide, index under RPN.)
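Parsing can be sketched very simply; the ADD syntax here is invented for illustration:

```python
# A sketch of parsing: split a raw line of input into the
# instruction and the data it operates on.
def parse(line):
    tokens = line.split()        # break the line into words
    instruction = tokens[0]      # the first word names the operation
    operands = tokens[1:]        # the rest are data or memory locations
    return instruction, operands

print(parse("ADD A B"))          # ('ADD', ['A', 'B'])
```

Real parsers also have to handle parentheses, operator precedence, and quoted text, but separating operation from operands is the essential first step.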
A memory area is set aside to store a sequential series of intermediate results. This is called a "stack." Some languages manage the stack automatically, while others, like FORTH and TIPI, give the programmer direct access to it.
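A tiny RPN evaluator shows the stack at work, in the spirit of FORTH or TIPI: numbers are pushed onto the stack, and "+" pops the top two values and pushes their sum back on as an intermediate result.

```python
# A minimal RPN evaluator built on a stack.
def eval_rpn(tokens):
    stack = []
    for tok in tokens:
        if tok == "+":
            b = stack.pop()           # top of the stack
            a = stack.pop()           # next value down
            stack.append(a + b)       # intermediate result goes back on
        else:
            stack.append(float(tok))  # a number: push it
    return stack[0]                   # the final result

print(eval_rpn("2 3 +".split()))      # 5.0
```

Evaluating (1 2 + 4 +) the same way pushes 1 and 2, replaces them with 3, pushes 4, and replaces 3 and 4 with 7, so the order of operations comes entirely from the order of the tokens.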
Numbers used in calculations are mostly handled in one of two forms in the computer: integer and floating point. An integer is simply 1, 2, 3, 10, 5000, etc. A floating-point number is 2.45, 10.35665, etc. Floating-point math requires a specialized set of routines, which may be part of the programming language or built into the computer's processor or math coprocessor. The type of number used affects the accuracy of results and is a source of rounding errors.
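The rounding problem is easy to see for yourself. Integer arithmetic is exact, but binary floating point cannot represent most decimal fractions exactly:

```python
import math

print(1 + 2)                          # 3, exact integer arithmetic
print(0.1 + 0.2)                      # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)               # False!
print(math.isclose(0.1 + 0.2, 0.3))   # True: compare with a tolerance
```

This is why careful programs compare floating-point results within a tolerance rather than testing for exact equality.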
Technical Editor, The HP Palmtop Paper
Copyright © 2010 Thaddeus Computing Inc