Excerpts from the information sources cited in the references of the Chinese-language version of the Wikipedia article "LISP".
The programs to be hand-compiled were written in an informal notation called M-expressions intended to resemble FORTRAN as much as possible. Besides FORTRAN-like assignment statements and go tos, the language allowed conditional expressions and the basic functions of LISP. Allowing recursive function definitions required no new notation from the function definitions allowed in FORTRAN I - only the removal of the restriction - as I recall, unstated in the FORTRAN manual - forbidding recursive definitions. The M-notation also used brackets instead of parentheses to enclose the arguments of functions in order to reserve parentheses for list-structure constants. It was intended to compile from some approximation to the M-notation, but the M-notation was never fully defined, because representing LISP functions by LISP lists became the dominant programming language when the interpreter later became available. A machine readable M-notation would have required redefinition, because the pencil-and-paper M-notation used characters unavailable on the IBM 026 key punch.
The READ and PRINT programs induced a de facto standard external notation for symbolic information, e.g. representing x + 3y + z by (PLUS X (TIMES 3 Y) Z) and (∀x)(P(x) ∨ Q(x, y)) by (ALL (X) (OR (P X) (Q X Y))).
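As an illustrative aside (not part of the quoted source), the external notation induced by READ and PRINT can be sketched in Python, with nested lists standing in for S-expressions; the function name to_sexp is an assumption of this sketch:

```python
def to_sexp(x):
    """Render a nested list in the de facto LISP external notation:
    a list becomes a parenthesized, space-separated group."""
    if isinstance(x, (list, tuple)):
        return "(" + " ".join(to_sexp(e) for e in x) + ")"
    return str(x)

# x + 3y + z in the induced prefix notation:
print(to_sexp(["PLUS", "X", ["TIMES", 3, "Y"], "Z"]))
# (PLUS X (TIMES 3 Y) Z)

# (∀x)(P(x) ∨ Q(x, y)):
print(to_sexp(["ALL", ["X"], ["OR", ["P", "X"], ["Q", "X", "Y"]]]))
# (ALL (X) (OR (P X) (Q X Y)))
```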
……
Many people participated in the initial development of LISP, and I haven’t been able to remember all their contributions and must settle, at this writing, for a list of names. I can remember Paul Abrahams, Robert Brayton, Daniel Edwards, Patrick Fischer, Phyllis Fox, Saul Goldberg, Timothy Hart, Louis Hodes, Michael Levin, David Luckham, Klim Maling, Marvin Minsky, David Park, Nathaniel Rochester of IBM, and Steve Russell.
It was McCarthy's fortune that he found a financial basis for his work: The MIT Artificial Intelligence Project was founded on the first of September, 1958. McCarthy became an assistant professor in the department of Electrical Engineering (his branch was communication sciences); Minsky was already an assistant professor in the department of mathematics. They received support from the Research Laboratory for Electronics in the form of two programmers, a secretary, a typewriter, and six graduate students.
When Maling started his job on the first of September he met S. Russell, who seemed to have something to do already. Maling remembers: "We were housed in an office in the basement of one of MIT's buildings, near the computer center. Initially there was John, Steve Russell, Carol - a buxom Boston Irish secretary - and I. Steve was a brilliant programmer who had worked for John at Dartmouth (I believe). He was what we then called a 'Programming bum' ...".
As far as we know the first universal function was indeed apply - eval was not present at all or not seen to be important. When McCarthy was working on this function S. Russell saw it and suggested translating it by hand - as he had done so often - and adding it to the program system. McCarthy recalls: "… this EVAL was written and published in the paper and Steve Russell said, look, why don't I program this EVAL and you remember the interpreter, and I said to him, ho, ho, you're confusing theory with practice, this EVAL is intended for reading not for computing. But he went ahead and did it. That is, he compiled the EVAL in my paper into 704 machine code fixing bugs and then advertised this as a LISP interpreter which it certainly was, so at that point LISP had essentially the form that it has today, the S-expression form ...".
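The EVAL that Russell hand-compiled can be suggested by a toy sketch, in Python rather than 704 machine code: eval and apply over S-expressions represented as nested Python lists. The special-form names follow McCarthy's paper; the Python representation and the helper names (l_eval, l_apply) are assumptions of this sketch, not the historical code:

```python
def l_eval(e, env):
    """Evaluate an S-expression (a string atom or a nested list) in env."""
    if isinstance(e, str):                 # a variable: look up its value
        return env[e]
    op = e[0]
    if op == "QUOTE":
        return e[1]
    if op == "ATOM":
        return "T" if not isinstance(l_eval(e[1], env), list) else "NIL"
    if op == "EQ":
        return "T" if l_eval(e[1], env) == l_eval(e[2], env) else "NIL"
    if op == "CAR":
        return l_eval(e[1], env)[0]
    if op == "CDR":
        return l_eval(e[1], env)[1:]
    if op == "CONS":
        return [l_eval(e[1], env)] + l_eval(e[2], env)
    if op == "COND":
        for test, result in e[1:]:         # first true predicate wins
            if l_eval(test, env) != "NIL":
                return l_eval(result, env)
        return "NIL"
    # otherwise: a function application
    fn = l_eval(op, env) if isinstance(op, str) else op
    return l_apply(fn, [l_eval(a, env) for a in e[1:]], env)

def l_apply(fn, args, env):
    """Apply fn, of the form (LAMBDA (params...) body), to already-evaluated args."""
    params, body = fn[1], fn[2]
    return l_eval(body, {**env, **dict(zip(params, args))})

# (CAR (CONS (QUOTE A) (QUOTE (B C)))) evaluates to A:
print(l_eval(["CAR", ["CONS", ["QUOTE", "A"], ["QUOTE", ["B", "C"]]]], {}))
```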
In this history of the evolution of Lisp, we have seen that Lisp seems to have a more elaborate and complex history than languages with wider usage. It would seem that almost every little research group has its own version of Lisp and there would appear to be as many Lisps as variations on language concepts. It is natural to ask what is so special or different about Lisp that explains it.
There are six basic reasons: ……
Its theoretical foundations. Lisp was founded on the footing of recursive function theory and the theory of computability. The work on Scheme aligned it with Church's lambda calculus and denotational semantics. Its purest form is useful for mathematical reasoning and proof. ……
Its expressiveness. Lisp has proved itself concerned more with expressiveness than anything else. ……
Its malleability. It is easy with Lisp to experiment with new language features, because it is possible to extend Lisp in such a way that the extensions are indistinguishable to users from the base language. Primarily this is accomplished through the use of macros, which have been part of Lisp since 1963 [Hart 1963]. ……
Its interactive and incremental nature. It is easy to explore the solutions to programming problems in Lisp, because it is easy to implement part of a solution, test it, modify it, change design, and debug the changes. ……
Its operating system facilities. Many Lisp implementations provide facilities reminiscent of operating systems: a command processor, an automatic storage management facility, file management, display (windows, graphics, mouse) facilities, multitasking, a compiler, an incremental (re)linker/loader, a symbolic debugger, performance monitoring, and sometimes multiprocessing. ……
Its people. Of course, languages do not diversify themselves; people diversify languages. ……Lisp is the language of artificial intelligence, among other things. …… Another attraction is that Lisp is a language of experts, which for our purposes means that Lisp is not a language designed for inexpert programmers to code robust reliable software.
The STANFORD A.I. LISP 1.6 System was originally an adaptation of one developed by the Artificial Intelligence Project at M.I.T. Since 1966, that system has been largely rewritten by John Allen and the author.
At Stanford in the 1960s, an early version of MacLisp was adapted for their PDP-6; this Lisp was called Lisp 1.6 [Quam 1972]. The early adaptation was rewritten by John Allen and Lynn Quam; later compiler improvements were made by Whit Diffie. Lisp 1.6 disappeared during the mid-1970s, one of the last remnants of the Lisp 1.5 era.
By 1964 a version of Lisp 1.5 was running in the Electrical Engineering Department at MIT on an IBM 7094 computer, running the Compatible Time Sharing System (CTSS). This Lisp and Basic PDP-1 Lisp were the main influences on the PDP-6 Lisp [PDP-6 Lisp 1967] implemented by DEC and some members of MIT's Tech Model Railroad Club in the spring of 1964. This Lisp was the first program written on the PDP-6. Also, this Lisp was the ancestor of MacLisp, the Lisp written to run under the Incompatible Timesharing System (ITS) [Eastlake 1968, 1972] at MIT on the PDP-6 and later on the PDP-10. ……
In 1965, virtually all of the Lisps in existence were identical or differed only in trivial ways. After 1965 - or more precisely, after MacLisp and BBN-Lisp diverged from each other - there came a plethora of Lisp dialects. ……
MacLisp was the primary Lisp dialect at the MIT AI Lab from the late 1960s until the early 1980s. Other important Lisp work at the Lab during this period included Lisp-Machine Lisp (later named Zetalisp) and Scheme. MacLisp is usually identified with the PDP-10 computer, but MacLisp also ran on another machine, the Honeywell 6180, under the Multics operating system [Organick 1972].
The Artificial Intelligence Project at Stanford University has produced a version of LISP 1.5 to be distributed by SHARE. In the middle of February 1965 the system is complete and is available from Stanford. The system should be available from SHARE by the end of March 1965. …… Evalquote is available to the programmer as a LISP function - thus, one may now write "(EVALQUOTE APPEND ((A)(B C D)))", rather than "(EVAL (QUOTE (APPEND (A)(B C D))) NIL)", should one desire to do so.
A key difference between MacLisp and Interlisp was the approach to syntax. MacLisp favored the pure list style, using EVAL as the top level. Interlisp, along with Lisp 1.5, used EVALQUOTE.
To concatenate the lists (BOAT AIRPLANE SKATEBOARD) and (CAR TRUCK) in MacLisp, one would type this expression to EVAL:
(APPEND (QUOTE (BOAT AIRPLANE SKATEBOARD)) (QUOTE (CAR TRUCK)))
or, using the syntactic abbreviation 'x for (quote x),
(APPEND '(BOAT AIRPLANE SKATEBOARD) '(CAR TRUCK))
The result would of course be (BOAT AIRPLANE SKATEBOARD CAR TRUCK).
In Lisp 1.5, one would type an expression (actually two expressions) like this to EVALQUOTE:
APPEND((BOAT AIRPLANE SKATEBOARD) (CAR TRUCK))
The first expression denotes a function, and the second is a list of arguments. The "quote" in the name EVALQUOTE signifies the "implicit quoting of the arguments" to the function applied. MacLisp forked off and used EVAL exclusively as a top level interface. BBN-Lisp (and thus Interlisp) accommodated both: if the first input line contained a complete form and at least one character of a second form, then BBN-Lisp finished reading the second form and used the EVALQUOTE interface for that interaction; otherwise it read exactly one form and used the EVAL interface for that interaction.
The phrase "quoting arguments" actually is misleading and imprecise. It refers to the actions of a hypothetical preprocessor that transforms the input from a form like
APPEND((BOAT AIRPLANE SKATEBOARD) (CAR TRUCK))
to one like
(APPEND (QUOTE (BOAT AIRPLANE SKATEBOARD)) (QUOTE (CAR TRUCK)))
before evaluation is performed. A similar confusion carried over into the description of the so-called FEXPR or "special form." In some texts on Lisp one will find descriptions of special forms that speak of a special form "quoting its arguments," when in fact a special form has a special rule for determining its meaning and that rule involves not evaluating some forms [Pitman 1980].
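For illustration only, the hypothetical preprocessor described above can be sketched in Python, with nested lists standing in for S-expressions (the function name evalquote_to_eval is an assumption of the sketch):

```python
def evalquote_to_eval(fn, args):
    """Turn an EVALQUOTE-style pair -- a function name plus a literal list of
    arguments -- into a single EVAL-style form with each argument quoted."""
    return [fn] + [["QUOTE", a] for a in args]

form = evalquote_to_eval(
    "APPEND", [["BOAT", "AIRPLANE", "SKATEBOARD"], ["CAR", "TRUCK"]])
print(form)
# ['APPEND', ['QUOTE', ['BOAT', 'AIRPLANE', 'SKATEBOARD']],
#  ['QUOTE', ['CAR', 'TRUCK']]]
```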
McCarthy [McCarthy 1981] noted that the original Lisp interpreter was regarded as a universal Turing machine: It could perform any computation given a set of instructions (a function) and the initial input on its tape (arguments). Thus it was intended that
APPEND((BOAT AIRPLANE SKATEBOARD) (CAR TRUCK))
be regarded not as a syntactically mutated version of
(APPEND (QUOTE (BOAT AIRPLANE SKATEBOARD)) (QUOTE (CAR TRUCK)))
but as a function and (separately) a literal list of arguments. In hindsight we see that the EVALQUOTE top level might better have been called the APPLY top level, making it pleasantly symmetrical to the EVAL top level; the BBN-Lisp documentation brought out this symmetry explicitly. Indeed, EVALQUOTE would have been identical to the function APPLY in Lisp 1.5 if not for these two differences: (a) in Lisp 1.5, APPLY took a third argument, an environment (regarded nowadays as something of a mistake that resulted in dynamic binding rather than the lexical scoping needed for a faithful reflection of the lambda calculus); and (b) "EVALQUOTE is capable of handling special forms as a sort of exception" [McCarthy 1962]. Nowadays such an exception is referred to as a kluge [Raymond 1991]. (Note, however, that MacLisp's APPLY function supported this same kluge.)
In 1963, L. Peter Deutsch (at that time a high school student) implemented a Lisp similar to Lisp 1.5 on the PDP-1 at Bolt Beranek and Newman (BBN) [Deutsch 1964]. This Lisp was called Basic PDP-1 Lisp. ……
At BBN, a successor to Basic PDP-1 Lisp was implemented on the PDP-1 and an upward-compatible version, patterned after Lisp 1.5 on the MIT CTSS system, was implemented on the Scientific Data Systems 940 (SDS 940) by Daniel Bobrow and D. L. Murphy. A further upward-compatible version was written for the PDP-10 by Alice Hartley and Murphy, and this Lisp was called BBN-Lisp [Teitelman 1971]. In 1973, not long after the time that SDS was acquired by Xerox and renamed Xerox Data Systems, the maintenance of BBN-Lisp was shared by BBN and Xerox Palo Alto Research Center and the name of the Lisp was changed to Interlisp [Teitelman 1974]. ……
Interlisp (and BBN-Lisp before it) introduced many radical ideas into Lisp programming style and methodology. The most visible of these ideas are embodied in programming tools, such as the spelling corrector, the file package, DWIM, CLISP, the structure editor, and MASTERSCOPE.
The origin of these ideas can be found in Warren Teitelman's Ph.D. dissertation on man-computer symbiosis [Teitelman 1966]. In particular, it contains the roots of structure editing (as opposed to "text" or "tape" editing [Rudloe 1962]), breakpointing, advice, and CLISP. ……
CLISP (Conversational LISP) was a mixed ALGOL-like and English-like syntax embedded within normal Interlisp syntax. Here is a valid definition of FACTORIAL written in Interlisp CLISP syntax:
DEFINEQ((FACTORIAL
(LAMBDA (N) (IF N=0 THEN 1 ELSE N*(FACTORIAL N-1)))))
The use of SETF throughout Common Lisp - a later dialect of Lisp and the most popular - can be traced through Symbolics Zetalisp and MacLisp to the influence of MIT Lisp-Machine Lisp and then back through Greenblatt's proposal to Peter Deutsch and thence to Alan Kay.
The uniform treatment of access - reading and writing of state - has made Common Lisp more uniform than it might otherwise be. It is no longer necessary to remember both a reader function (such as CAR) and also a separate writer or update function (such as RPLACA), nor to remember the order of arguments. (For RPLACA, which comes first, the dotted pair or the new value for its car?) If the general form of a read operation is (f …), then the form of the write is (setf (f …) newvalue), and that is all the programmer needs to know about reading and writing data.
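The uniformity argument can be sketched in Python (an illustrative analogy, not Common Lisp's actual SETF machinery): one generic setf keeps a registry mapping each reader to its matching writer, so the programmer needs to remember only the reader's name. All names here are assumptions of the sketch:

```python
SETF_WRITERS = {}   # reader name -> writer function (illustrative registry)

def defsetf(reader, writer):
    """Register the writer that updates the place a reader accesses."""
    SETF_WRITERS[reader] = writer

def setf(reader, place_args, new_value):
    """Generic write: look up the writer paired with the reader's name."""
    SETF_WRITERS[reader](*place_args, new_value)

# CAR/RPLACA analog on Python lists: reading is pair[0], writing goes
# through the registered writer, so no separate name need be remembered.
defsetf("car", lambda pair, v: pair.__setitem__(0, v))

pair = ["old", "tail"]
setf("car", [pair], "new")   # like (SETF (CAR pair) 'NEW)
print(pair)                  # ['new', 'tail']
```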
The purpose of the LISP Compiler is to replace S-expression definitions of functions with efficient machine language subroutines. A subroutine can be expected to run about 40 times as fast as the interpreter can execute the same function from its S-expression definition. Subroutines typically take 70-80 per cent of the storage required by their corresponding S-expressions.
Clojure is a Lisp not constrained by backwards compatibility
Symbols are used as names of variables in Common Lisp programs. When a symbol is evaluated as a form, the value of the variable it names is produced. For example, after doing (setq items 3), which assigns the value 3 to the variable named items, then items ⇒ 3. Variables can be assigned to, as by setq, or bound, as by let. Any program construct that binds a variable effectively saves the old value of the variable and causes it to have a new value, and on exit from the construct the old value is reinstated.
There are actually two kinds of variables in Common Lisp, called lexical (or static) variables and special (or dynamic) variables. At any given time either or both kinds of variable with the same name may have a current value. Which of the two kinds of variable is referred to when a symbol is evaluated depends on the context of the evaluation. The general rule is that if the symbol occurs textually within a program construct that creates a binding for a variable of the same name, then the reference is to the variable specified by the binding; if no such program construct textually contains the reference, then it is taken to refer to the special variable of that name.
The distinction between the two kinds of variable is one of scope and extent. A lexically bound variable can be referred to only by forms occurring at any place textually within the program construct that binds the variable. A dynamically bound (special) variable can be referred to at any time from the time the binding is made until the time evaluation of the construct that binds the variable terminates. Therefore lexical binding of variables imposes a spatial limitation on occurrences of references (but no temporal limitation, for the binding continues to exist as long as the possibility of reference remains). Conversely, dynamic binding of variables imposes a temporal limitation on occurrences of references (but no spatial limitation). For more information on scope and extent, see chapter 3.
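The spatial/temporal contrast can be demonstrated with a small Python sketch. Python itself is lexically scoped, so the dynamic (special) case is simulated here with an explicit stack of bindings; all names in the sketch are assumptions for illustration:

```python
import contextlib

DYNAMIC = {}   # name -> stack of values; top of stack = current binding

@contextlib.contextmanager
def special_binding(name, value):
    """Establish a dynamic binding; on exit the old value is reinstated."""
    DYNAMIC.setdefault(name, []).append(value)
    try:
        yield
    finally:
        DYNAMIC[name].pop()

def dynamic_ref(name):
    return DYNAMIC[name][-1]

def show():
    # A "temporal" reference: this function is textually outside any
    # binding construct, yet it sees whichever binding is live when called.
    return dynamic_ref("*level*")

with special_binding("*level*", 1):
    assert show() == 1
    with special_binding("*level*", 2):
        assert show() == 2      # innermost binding is current
    assert show() == 1          # old value reinstated on exit
```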
The value a special variable has when there are currently no bindings of that variable is called the global value of the (special) variable. A global value can be given to a variable only by assignment, because a value given by binding is by definition not global.
It is possible for a special variable to have no value at all, in which case it is said to be unbound. By default, every global variable is unbound unless and until explicitly assigned a value, except for those global variables defined in this book or by the implementation already to have values when the Lisp system is first started. It is also possible to establish a binding of a special variable and then cause that binding to be valueless by using the function makunbound. In this situation the variable is also said to be "unbound," although this is a misnomer; precisely speaking, it is bound but valueless. It is an error to refer to a variable that is unbound.
It is customary to name dynamic variables with an asterisk at the beginning and end of the name, e.g., *foo* is a good name for a dynamic variable, but not for a lexical variable; foo is a good name for a lexical variable, but not for a dynamic variable. This naming convention is observed for all defined names in Common Lisp; however, neither conforming programs nor conforming implementations are obliged to adhere to this convention.
A conditional expression has the form (p1 → e1, …, pn → en) where the p's are propositional expressions and the e's are expressions of any kind. It may be read, "If p1 then e1 otherwise if p2 then e2, …, otherwise if pn then en," or "p1 yields e1, …, pn yields en."
I sent a proposal for conditional expressions to a CACM forum on what should be included in Algol 60. Because the item was short, the editor demoted it to a letter to the editor, for which CACM subsequently apologized. The notation given here was rejected for Algol 60, because it had been decided that no new mathematical notation should be allowed in Algol 60, and everything new had to be English. The if...then...else that Algol 60 adopted was suggested by John Backus.
The conditional expression [p1 → e1; …; pn → en] is translated into (COND (p1* e1*) … (pn* en*)).
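The meaning of a conditional expression (the value of the e paired with the first true p; undefined if no p is true) can be sketched in Python, with clauses as pairs of zero-argument callables; the name cond_value is an assumption of the sketch:

```python
def cond_value(clauses):
    """Value of (p1 -> e1, ..., pn -> en): e_i for the first true p_i."""
    for p, e in clauses:
        if p():
            return e()
    raise ValueError("undefined: no p_i is true")

# |x| written as the conditional expression (x < 0 -> -x, T -> x):
x = -4
print(cond_value([(lambda: x < 0, lambda: -x),
                  (lambda: True,  lambda: x)]))  # 4
```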
In this article, we first describe a formalism for defining functions recursively. …… Next, we describe S-expressions and S-functions, give some examples, and then describe the universal S-function apply which plays the theoretical role of a universal Turing machine and the practical role of an interpreter. Then we describe the representation of S-expressions in the memory of the IBM 704 by list structures similar to those used by Newell, Shaw and Simon, and the representation of S-functions by program. Then we mention the main features of the LISP programming system for the IBM 704. Next comes another way of describing computations with symbolic expressions, and finally we give a recursive function interpretation of flow charts. ……
We shall need a number of mathematical ideas and notations concerning functions in general. Most of the ideas are well known, but the notion of conditional expression is believed to be new, and the use of conditional expressions permits functions to be defined recursively in a new and convenient way. ……
We shall first define a class of symbolic expressions in terms of ordered pairs and lists. Then we shall define five elementary functions and predicates, and build from them by composition, conditional expressions, and recursive definitions an extensive class of functions of which we shall give a number of examples. We shall then show how these functions themselves can be expressed as symbolic expressions, and we shall define a universal function apply that allows us to compute from the expression for a given function its value for given arguments. Finally, we shall define some functions with functions as arguments and give some useful examples. ……
The LISP programming system is a system for using the IBM 704 computer to compute with symbolic information in the form of S-expressions. …… The basis of the system is a way of writing computer programs to evaluate S-functions. …… In addition to the facilities for describing S-functions, there are facilities for using S-functions in programs written as sequences of statements along the lines of FORTRAN or ALGOL. ……
There are a number of ways of defining functions of symbolic expressions which are quite similar to the system we have adopted. Each of them involves three basic functions, conditional expressions, and recursive function definitions, but the class of expressions corresponding to S-expressions is different, and so are the precise definitions of the functions. We shall describe one of these variants called linear LISP. ……
Since both the usual form of computer program and recursive function definitions are universal computationally, it is interesting to display the relation between them. The translation of recursive symbolic functions into computer programs was the subject of the rest of this report. In this section we show how to go the other way, at least in principle.
In 1981 the emerging Common Lisp community turned to Scheme for some of its motivation and inspiration [Steele 1984]. Adopting lexical scoping proved one of the most important decisions the Common Lisp group ever made.
One aspect of Scheme that was not adopted, however, was a single namespace for functions and values, along with uniform evaluation rules for expressions in function and argument positions within the language. ……
In this paper, we shall refer to two abstract dialects of Lisp called Lisp1 and Lisp2.
Lisp1 has a single namespace that serves a dual role as the function namespace and value namespace; that is, its function namespace and value namespace are not distinct. In Lisp1, the functional position of a form and the argument positions of forms are evaluated according to the same rules. Scheme [Rees 1986] and the language being designed by the EuLisp group [Padget 1986] are Lisp1 dialects.
Lisp2 has distinct function and value namespaces. In Lisp2, the rules for evaluation in the functional position of a form are distinct from those for evaluation in the argument positions of the form. Common Lisp is a Lisp2 dialect. ……
Most Lisp dialects adopted a two-namespace approach to the naming problem. To some extent this is because most dialects followed Lisp 1.5 [McCarthy 1965] or dialects derived from Lisp 1.5.
Lisp 1.5 broke symbols into values and functions; values were stored on an association list, and functions on the property lists of symbols. Compiled and interpreted code worked in different ways. In the interpreter, the association list was where all bindings were kept. When an identifier was encountered (an 'atomic symbol' in Lisp 1.5 terminology), it was taken as a variable to be evaluated for its value. First the APVAL part of the symbol was interrogated — an APVAL was "A Permanent, system-defined VALue" stored in a specific place in the symbol. Second, the association list was searched. Finally, if no binding was found, an error was signaled.
When a combination was encountered, the function position was evaluated differently from other positions. First, the symbol was interrogated to see whether there was a function definition associated with it; then the association list was searched.
Here we can see two namespaces at work, though nonsymbol variables were treated somewhat uniformly (Lisp 1.5 did not have lexical variables in interpreted code): when there were no function definitions associated with a symbol, there was one namespace, which was explicitly represented by the association list.
Compiled code worked a slightly different way, and from its internals we can see where the two namespaces came about in descendants of Lisp 1.5.
The Lisp 1.5 compiler supported 'common' and 'special' variables. A common variable enabled compiled and interpreted code to communicate with each other. A common variable was bound on an explicit association list; to evaluate such a variable, a call to EVAL was emitted to determine the value. A special variable was the compiler's modeling of free variables and closely matched what is now called 'shallow binding.' Ordinary variables compiled into what we have termed lexical variables.
Thus, we see all of the components of the two-namespace world in Lisp 1.5, along with some of the components of the one-namespace world.
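The two-namespace idea can be sketched in Python: each symbol carries a separate value cell and function cell, and an evaluator consults one or the other according to position in the form. The dictionaries and names below are illustrative assumptions, not any Lisp's actual representation:

```python
value_cell = {}      # symbol -> value   (the value namespace)
function_cell = {}   # symbol -> function (the function namespace)

# The same name, LIST, means different things in the two namespaces:
value_cell["LIST"] = ["A", "B"]                    # LIST as a variable
function_cell["LIST"] = lambda *args: list(args)   # LIST as a function

def eval_form(form):
    """Lisp2-style evaluation of (F X Y ...): the functional position is
    looked up in the function namespace, arguments in the value namespace."""
    fn = function_cell[form[0]]
    args = [value_cell[a] for a in form[1:]]
    return fn(*args)

print(eval_form(["LIST", "LIST"]))  # [['A', 'B']] -- one name, two meanings
```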
It seems that the designers of Lisp 1.5 thought of function names as being different from variable names — the Lisp 1.5 interpreter looked at the property list of the named atomic symbol first, and so it can be argued that a function was considered a property of a symbol. The designers used the terminology that a symbol "stands for" a function while a variable "refers" to a value.
Lisp for the PDP-6 at MIT adopted the style of the Lisp 1.5 special variables for dynamic binding in both compiled and interpreted code, thereby eliminating common variables. Compilers were still written that tried to treat special variables as lexical variables in as many places as possible. The value of a symbol was stored in the value cell of the symbol, and the function remained on the property list as it did in Lisp 1.5.
MacLisp [Pitman 1983] is a direct descendant of the PDP-6 Lisp and is a Lisp2 dialect. MacLisp uses a sophisticated form of link table, which is made possible by the separation of namespaces. In particular, function-defining functions have controlled access into the places where functions are stored so that the link tables can be correctly maintained. ……
Common Lisp was the result of a compromise between a number of dialects of Lisp, most of them descendants of MacLisp, all of them Lisp2s. ……
A free variable in Common Lisp is currently taken to be a dynamic rather than a lexical reference because there is no global lexical environment in Common Lisp. In this code
(DEFUN PLUS (X Y) (+ X Y))
(DEFUN SOMEWHAT-MORE-THAN (X) (PLUS X *SOMEWHAT*))
the reference to *SOMEWHAT* is special (dynamic). On the surface, in Lisp1 the reference to PLUS is free also, and thus it is special (dynamic).
When a variable is declared special, a free reference to it is to the most recent dynamically bound value for it; all bindings of it become special (dynamic) bindings. When a variable is declared constant, free references to it are to its permanent constant value, and compilers are free to fold that constant into the code they emit.
We introduce the concept of global variables; when a variable is global, free references to it are to the value cell of the symbol named by the variable, and all bindings of it are still lexical.
To avoid a compiler warning for free use of a variable like *SOMEWHAT*, the variable is declared SPECIAL. In order to have compilers for Lisp1 not warn about the free use of PLUS, it would be unfortunate to have to declare it SPECIAL.
As noted, in Common Lisp the default for free variable references is SPECIAL; that is, a free variable reference is taken to be a special variable reference. If the default in Lisp1 were that free variable references were global (lexical), then it would make sense for a compiler not to warn about such free variable references.
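A toy resolver can make the contrast concrete: given the variables a form binds and those declared SPECIAL, it classifies each free reference under either default. This sketch and its names are assumptions for illustration, not any compiler's actual logic:

```python
def classify(ref, bound, specials, default="special"):
    """Classify a variable reference the way the passage describes:
    bound -> lexical; declared special -> dynamic; otherwise the default
    ('special' as in Common Lisp, or 'global' as in the Lisp1 proposal)."""
    if ref in bound:
        return "lexical"
    if ref in specials:
        return "special"
    return default

# (DEFUN SOMEWHAT-MORE-THAN (X) (PLUS X *SOMEWHAT*)):
bound = {"X"}
specials = {"*SOMEWHAT*"}
print(classify("*SOMEWHAT*", bound, specials))               # special
print(classify("PLUS", bound, specials, default="global"))   # global: no warning
```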
The biggest hit for me while at SAIL in late '69 was to really understand LISP. Of course, every student knew about car, cdr, and cons, but Utah was impoverished in that no one there used LISP and hence, no one had penetrated the mysteries of eval and apply. I could hardly believe how beautiful and wonderful the idea of LISP was [McCarthy 1960]. I say it this way because LISP had not only been around enough to get some honest barnacles, but worse, there were deep flaws in its logical foundations. By this, I mean that the pure language was supposed to be based on functions, but its most important components — such as lambda expressions, quotes, and conds — were not functions at all, and instead were called special forms. Landin and others had been able to get quotes and conds in terms of lambda by tricks that were variously clever and useful, but the flaw remained in the jewel. In the practical language things were better. There were not just EXPRs (which evaluated their arguments), but FEXPRs (which did not). My next question was, why on earth call it a functional language? Why not just base everything on FEXPRs and force evaluation on the receiving side when needed? I could never get a good answer, but the question was very helpful when it came time to invent Smalltalk, because this started a line of thought that said “take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it”. That was the promise of LISP and the lure of lambda — needed was a better “hardest and most profound” thing. Objects should be it. ……
This Smalltalk language (today labeled -71) was very influenced by FLEX, PLANNER, LOGO, META II, and my own derivatives from them. ……
As I mentioned previously, it was annoying that the surface beauty of LISP was marred by some of its key parts having to be introduced as “special forms” rather than as its supposed universal building block of functions. ……
An elegant approach was suggested in a CMU thesis of Dave Fisher [Fisher 70] on the synthesis of control structures. ALGOL60 required a separate link for dynamic subroutine linking and for access to static global state. Fisher showed how a generalization of these links could be used to simulate a wide variety of control environments. One of the ways to solve the “funarg problem” of LISP is to associate the proper global state link with expressions and functions that are to be evaluated later so that the free variables referenced are the ones that were actually implied by the static form of the language. The notion of “lazy evaluation” is anticipated here as well.
dynamic - All variable names and their values live in one global table.
lexical - Each binding scope (function, let syntax, …) creates a new table of variable names and values, organised in a hierarchy called “the environment”. ……
EmacsLisp as of 24.1 has both dynamic binding and lexical binding. Lexical binding must be enabled explicitly for a file or buffer. Individual variables can be 'defvar'ed to make them “special”, like in Common Lisp.
My original experiments with this architecture were done using a model I adapted from van Wijngaarden's and Wirth's "Generalization of Algol" and Wirth's Euler. Both of these were rather LISP-like but with a more conventional readable syntax. I didn't understand the monster LISP idea of tangible metalanguage then, but got kind of close with ideas about extensible languages drawn from various sources, including Irons' IMP.
The second phase of this was to finally understand LISP, and then to use this understanding to make much nicer and smaller and more powerful and more late-bound understructures. Dave Fisher's thesis was done in "McCarthy" style and his ideas about extensible control structures were very helpful. ……
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.
The PDP-6 project started in early 1963, as a 24-bit machine. It grew to 36 bits for LISP, a design goal.
Contrary to popular belief, LISP was not originally derived from Church's λ-calculus [Church] [LISP History]. The earliest LISP did not have a well-defined notion of free variables or procedural objects. Early LISP programs were similar to recursion equations, defining functions on symbolic expressions ("S-expressions"). They differed from the equations of pure recursive function theory [Kleene] by introducing the conditional expression construction (often called the "McCarthy conditional"), to avoid "pattern-directed invocation". That is, in recursive function theory one would define the factorial function by the following two equations:
factorial(0) = 1
factorial(successor(x)) = successor(x) * factorial(x)
In early LISP, however, one would have written:
factorial[x] = [x=0 → 1; T → x*factorial[x-1]]
where "[a → b; T → c]" essentially means "if a then b else c". The recursive function theory version depends on selecting which of two equations to use by matching the argument to the left-hand sides (such a discipline is actually used in the PROLOG language [Warren]); the early LISP version represents this decision as a conditional expression.
The theory of recursion equations deals with functions over the natural numbers. In LISP, however, one is interested in being able to manipulate algebraic expressions, programs, and other symbolic expressions as data structures. While such expressions can be encoded as numbers (using the technique of "arithmetization" developed by Kurt Gödel), such an encoding is not very convenient. Instead, a new kind of data called "S-expressions" (for "symbolic expressions") is introduced specifically to provide convenient encodings. S-expressions can be defined by a set of formal inductive axioms analogous to the Peano postulates used to define natural numbers.
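The early-LISP factorial above can be transcribed directly into Python, whose conditional expression plays the role of the McCarthy conditional [x=0 → 1; T → x*factorial[x-1]]: a single equation whose body chooses a branch, rather than two pattern-matched equations.

```python
# The conditional-expression style of early LISP: one definition whose
# body is a conditional, not two equations selected by pattern matching.
def factorial(x):
    return 1 if x == 0 else x * factorial(x - 1)

print(factorial(5))  # -> 120
```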
Julia builds upon the lineage of mathematical programming languages, but also borrows much from popular dynamic languages, including Lisp, Perl, Python, Lua, and Ruby. …… some advantages of Julia over comparable systems include: …… Lisp-like macros and other metaprogramming facilities.
To use functions as arguments, one needs a notation for functions, and it seemed natural to use the λ-notation of Church (1941). I didn’t understand the rest of his book, so I wasn’t tempted to try to implement his more general mechanism for defining functions. Church used higher order functionals instead of using conditional expressions. Conditional expressions are much more readily implemented on computers. ……
One mathematical consideration that influenced LISP was to express programs as applicative expressions built up from variables and constants using functions. I considered it important to make these expressions obey the usual mathematical laws allowing replacement of expressions by expressions giving the same value. The motive was to allow proofs of properties of programs using ordinary mathematical methods. This is only possible to the extent that side-effects can be avoided. Unfortunately, side-effects are often a great convenience when computational efficiency is important, and “functions” with side-effects are present in LISP. However, the so-called pure LISP is free of side-effects, and (Cartwright 1976) and (Cartwright and McCarthy 1978) show how to represent pure LISP programs by sentences and schemata in first order logic and prove their properties. ……
Free variables. In all innocence, James R. Slagle programmed the following LISP function definition and complained when it didn’t work right ……. …… In modern terminology, lexical scoping was wanted, and dynamic scoping was obtained.
I must confess that I regarded this difficulty as just a bug and expressed confidence that Steve Russell would soon fix it. He did fix it but by inventing the so-called FUNARG device that took the lexical environment along with the functional argument. Similar difficulties later showed up in Algol 60, and Russell’s turned out to be one of the more comprehensive solutions to the problem. While it worked well in the interpreter, comprehensiveness and speed seem to be opposed in compiled code, and this led to a succession of compromises. …… the interested reader is referred to (Moses 1970) as a place to start.
The data on which LISP works is the S-language. ……
The M-language defines computations on S-expressions. ……
This is computationally complete. McCarthy (1960, Sect. 6) showed that an arbitrary flowchart can be coded as mutually recursive functions.
The M-language of McCarthy (1960) is first order, as there is no provision to pass a function as argument or return a function as result.
However, McCarthy shows that M-language expressions and functions can be easily encoded as S-expressions, and then defines in the M-language functions, eval and apply, that correctly interpret these S-expressions.
Thus LISP allows meta-programming, that is treating program as data and vice versa, by appropriate uses of eval and quote. The 1960 paper gives the impression, quite strongly, that McCarthy saw this as removing any limitation stemming from the M-language itself being first order.
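The eval/apply encoding described above can be sketched as a toy interpreter. This is an illustration of the principle only, not McCarthy's evalquote: S-expressions are modeled as nested Python lists, and the primitive set, the closure representation, and the names s_eval/s_apply are all invented for the sketch.

```python
# A toy sketch of eval/apply over S-expressions represented as nested
# Python lists. Strings are symbols; QUOTE, COND, and LAMBDA are the
# only special forms handled.
def s_eval(form, env):
    if isinstance(form, str):                      # a variable reference
        return env[form]
    if not isinstance(form, list):                 # a constant, e.g. a number
        return form
    op, *args = form
    if op == "QUOTE":
        return args[0]                             # data, not code
    if op == "COND":
        for test, result in args:
            if s_eval(test, env):
                return s_eval(result, env)
        raise ValueError("no true COND clause")
    if op == "LAMBDA":                             # (LAMBDA (vars) body)
        return ("closure", args[0], args[1], env)
    fn = s_eval(op, env)                           # evaluate operator position
    return s_apply(fn, [s_eval(a, env) for a in args])

def s_apply(fn, args):
    if callable(fn):                               # a primitive operation
        return fn(*args)
    _, params, body, defn_env = fn                 # a user-defined closure
    return s_eval(body, {**defn_env, **dict(zip(params, args))})

env = {"CAR": lambda x: x[0], "CONS": lambda a, d: [a] + d}
print(s_eval(["CAR", ["QUOTE", ["A", "B"]]], env))   # -> A
```

The sketch makes the "program as data" point concrete: the same nested list can be passed to s_eval as code or wrapped in QUOTE and handled as data.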
It was originally intended that people would write programs in the M-language, in an Algol-like notation. In practice LISP programmers wrote their code directly in the S-language form, and the M-language became a kind of ghost that hovered in the background — theoretically important but nobody used it.
In LISP 1.5 (McCarthy et al. 1962) atoms acquired property lists, which serve several purposes, and numbers appeared, as another kind of atom, along with basic arithmetic functions. ……
LISP was not based on the lambda calculus, despite using the word “LAMBDA” to denote functions. At the time he invented LISP, McCarthy was aware of (Church 1941) but had not studied it. The theoretical model behind LISP was Kleene’s theory of first order recursive functions. (McCarthy made these statements, or very similar ones, in a contribution from the floor at the 1982 ACM symposium on LISP and functional programming in Pittsburgh. No written version of this exists, as far as I know.)
The M-language was first order, as already noted, but you could pass a function as a parameter by quotation, i.e. as the S-expression which encodes it. Unfortunately, this gives the wrong binding rules for free variables (dynamic instead of lexicographic). ……
McCarthy (1978) reports that this problem (wrong binding for free variables) showed up very early in a program of James Slagle. At first McCarthy assumed it was a bug and expected it to be fixed, but it actually springs from something fundamental — that meta-programming is not the same as higher order programming. Various devices were invented to get round this FUNARG problem, as it became known.
Not until SCHEME (Sussman 1975) did versions of LISP with default static binding appear. Today all versions of LISP are lambda calculus based.
Mathematically, it is possible to have functions as arguments of other functions. For example, in arithmetic one could define a function operate[op;a;b], where op is a functional argument that specifies which arithmetic operation is to be performed on a and b. ……
In LISP, functional arguments are extremely useful. A very important function with a functional argument is maplist. Its M-expression definition is
maplist[x;fn]=[null[x]→NIL;
T→cons[fn[x];maplist[cdr[x];fn]]]
……
The functional argument is, of course, a function translated into an S-expression. It is bound to the variable fn and is then used whenever fn is mentioned as a function. The S-expression for maplist itself is as follows:
(MAPLIST (LAMBDA (X FN) (COND ((NULL X) NIL)
(T (CONS (FN X) (MAPLIST (CDR X) FN))) )))
……
Using maplist, we define change by
change[a]=maplist[a;λ[[j];cons[car[j];X]]]
This is not a valid M-expression as defined syntactically in section 1.5 because a function appears where a form is expected. This can be corrected by modifying the rule defining an argument so as to include functional arguments:
<argument> ::= <form> | <function>
We also need a special rule to translate functional arguments into S-expressions. If fn is a function used as an argument, then it is translated into (FUNCTION fn*).
Example
(CHANGE (LAMBDA (A) (MAPLIST A (FUNCTION
(LAMBDA (J) (CONS (CAR J) (QUOTE X))) ))))
An examination of evalquote shows that QUOTE will work instead of FUNCTION, provided that there are no free variables present.
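The maplist and change definitions above can be transcribed into Python as a sketch, modeling conses as (car, cdr) tuples and NIL as None. Note the detail the manual's definition shows: fn receives the whole remaining sublist x, not just its first element. The Python lambda is a true closure, which is what FUNCTION guarantees in the presence of free variables.

```python
# maplist[x;fn] = [null[x] -> NIL; T -> cons[fn[x]; maplist[cdr[x]; fn]]]
def cons(a, d):          # model a cons cell as a (car, cdr) tuple; NIL is None
    return (a, d)

def maplist(x, fn):
    return None if x is None else cons(fn(x), maplist(x[1], fn))

# change[a] = maplist[a; lambda[[j]; cons[car[j]; X]]]
def change(a):
    return maplist(a, lambda j: cons(j[0], "X"))

lst = cons("A", cons("B", None))   # the list (A B)
print(change(lst))                 # -> (('A', 'X'), (('B', 'X'), None)), i.e. ((A . X) (B . X))
```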
Syntax:
function name ⇒ function
Arguments and Values:
name - a function name or lambda expression.
function - a function object.
Description:
The value of function is the functional value of name in the current lexical environment.
……
Notes:
The notation #'name may be used as an abbreviation for (function name).
Syntax:
funcall function &rest args ⇒ result*
Arguments and Values:
function - a function designator.
args - arguments to the function.
results - the values returned by the function.
Description:
funcall applies function to args. If function is a symbol, it is coerced to a function as if by finding its functional value in the global environment.
Symbols are used as names of variables in Common Lisp programs. When a symbol is evaluated as a form, the value of the variable it names is produced. For example, after doing (setq items 3), which assigns the value 3 to the variable named items, then items ⇒ 3. Variables can be assigned to, as by setq, or bound, as by let. Any program construct that binds a variable effectively saves the old value of the variable and causes it to have a new value, and on exit from the construct the old value is reinstated.
There are actually two kinds of variables in Common Lisp, called lexical (or static) variables and special (or dynamic) variables. At any given time either or both kinds of variable with the same name may have a current value. Which of the two kinds of variable is referred to when a symbol is evaluated depends on the context of the evaluation. The general rule is that if the symbol occurs textually within a program construct that creates a binding for a variable of the same name, then the reference is to the variable specified by the binding; if no such program construct textually contains the reference, then it is taken to refer to the special variable of that name.
The distinction between the two kinds of variable is one of scope and extent. A lexically bound variable can be referred to only by forms occurring at any place textually within the program construct that binds the variable. A dynamically bound (special) variable can be referred to at any time from the time the binding is made until the time evaluation of the construct that binds the variable terminates. Therefore lexical binding of variables imposes a spatial limitation on occurrences of references (but no temporal limitation, for the binding continues to exist as long as the possibility of reference remains). Conversely, dynamic binding of variables imposes a temporal limitation on occurrences of references (but no spatial limitation). For more information on scope and extent, see chapter 3.
The value a special variable has when there are currently no bindings of that variable is called the global value of the (special) variable. A global value can be given to a variable only by assignment, because a value given by binding is by definition not global.
It is possible for a special variable to have no value at all, in which case it is said to be unbound. By default, every global variable is unbound unless and until explicitly assigned a value, except for those global variables defined in this book or by the implementation already to have values when the Lisp system is first started. It is also possible to establish a binding of a special variable and then cause that binding to be valueless by using the function makunbound. In this situation the variable is also said to be “unbound,” although this is a misnomer; precisely speaking, it is bound but valueless. It is an error to refer to a variable that is unbound.
It is customary to name dynamic variables with an asterisk at the beginning and end of the name; e.g., *foo* is a good name for a dynamic variable, but not for a lexical variable; foo is a good name for a lexical variable, but not for a dynamic variable. This naming convention is observed for all defined names in Common Lisp; however, neither conforming programs nor conforming implementations are obliged to adhere to this convention.
A list is a chain of conses in which the car of each cons is an element of the list, and the cdr of each cons is either the next link in the chain or a terminating atom.
A proper list is a list terminated by the empty list. The empty list is a proper list, but is not a cons.
An improper list is a list that is not a proper list; that is, it is a circular list or a dotted list.
A dotted list is a list that has a terminating atom that is not the empty list. A non-nil atom by itself is not considered to be a list of any kind---not even a dotted list.
A circular list is a chain of conses that has no termination because some cons in the chain is the cdr of a later cons.
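The proper/dotted/circular taxonomy above can be sketched with a small classifier. This is a Python illustration of the definitions, not Common Lisp itself; the Cons class and list_kind function are invented for the sketch, with None standing in for the empty list.

```python
# Classify a cons chain per the definitions above: a proper list ends in
# the empty list, a dotted list ends in a non-nil atom, and a circular
# list never terminates because some cons's cdr points back into the chain.
class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def list_kind(obj):
    if obj is None:
        return "proper"              # the empty list is a proper list
    if not isinstance(obj, Cons):
        return "not a list"          # a non-nil atom alone is not a list
    seen = set()
    while isinstance(obj, Cons):
        if id(obj) in seen:
            return "circular"        # revisited a cons: no termination
        seen.add(id(obj))
        obj = obj.cdr                # follow the cdr chain
    return "proper" if obj is None else "dotted"

print(list_kind(Cons(1, Cons(2, None))))   # -> proper
print(list_kind(Cons(1, Cons(2, 3))))      # -> dotted
loop = Cons(1, None); loop.cdr = loop
print(list_kind(loop))                     # -> circular
```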
DEFUN
Special Form: (DEFUN namespec . definitionspec)
DEFUN offers a way of associating a functional definition with a symbol. …… DEFUN can be used to define both normal functions and special forms (fexprs and macros). ……
;; Normal expr or lexpr function definitions
(DEFUN name bvl . body)
……
If name is a symbol and bvl is a list of symbols which is free of & keywords (to be described below), the definition is an expr or lexpr definition. In Maclisp, this would be essentially equivalent to writing
(DEFPROP name (LAMBDA bvl . body) EXPR).
…… Note also that the keyword EXPR is used for both expr and lexpr definitions. The only distinction between expr and lexpr definitions is whether the bvl is a symbol or a list, but they are specified in the same way and looked up from the same property. ……
A fexpr is a function for which the formal parameters are not evaluated. The form of a fexpr definition is:
(DEFUN name FEXPR (sym) . body).
The lambda expression which describes a fexpr should expect to receive exactly one argument which will be the list of (unevaluated) arguments given in the call to the fexpr, so usually there is only one bound variable (sym) in the bound variable list. …… DEFUN can also be used to instantiate a MACRO definition. …… The syntax for a macro definition is
(DEFUN name MACRO (sym) . body),
where sym will become bound to the whole macro form to be expanded (including the name). Note that this argument convention is different from that of fexprs, which only receive the cdr of the call form. …… DEFUN was introduced into Maclisp in March, 1969. Although it is now recognized as the standard function defining form because it shields the user from the implementational details of how the function is defined ……. ……
DEFPROP
Function: (DEFPROP sym val indicator)
Gives sym a property called indicator with value val. The arguments are not evaluated. DEFPROP should not be used imbedded in other expressions. It is intended to occur at toplevel to assign properties that are set up once and never changed. In other places, use PUTPROP with three quoted arguments.
SCHEME is a dialect of LISP. It is an expression-oriented, applicative order, interpreter-based language which allows one to manipulate programs as data.
It differs from most current dialects of LISP in that it closes all lambda-expressions in the environment of their definition or declaration, rather than in the execution environment. This has the consequence that variables are normally lexically scoped, as in ALGOL. However, in contrast with ALGOL, SCHEME treats procedures as a first-class data type. They can be the values of variables, the returned values of procedures, and components of data structures.
Another difference from LISP is that SCHEME is implemented in such a way that tail-recursions execute without net growth of the interpreter stack. The effect of this is that a procedure call behaves like a GOTO and thus procedure calls can be used to implement iterations, as in PLASMA.
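The claim above is that in SCHEME a procedure call in tail position behaves like a GOTO, so iteration needs no special construct. Python does not eliminate tail calls, but a small trampoline can simulate the property; the names trampoline and count_up are invented for this sketch.

```python
# Simulating tail-call elimination: each tail call is returned as a
# thunk, and the driver loop "jumps" to it instead of growing the stack.
def trampoline(thunk):
    while callable(thunk):
        thunk = thunk()              # each "call" is really a jump
    return thunk

def count_up(i, n, acc):
    if i > n:
        return acc
    return lambda: count_up(i + 1, n, acc + i)   # tail call as a thunk

# 100000 recursive "calls" with no stack growth, as in SCHEME's iteration
print(trampoline(lambda: count_up(1, 100000, 0)))   # -> 5000050000
```

Direct recursion to this depth would overflow Python's stack; the trampoline shows why SCHEME can use plain procedure calls where other languages need loop syntax.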
Micro-Planner is an implementation of a subset of Carl Hewitt's language, PLANNER by Gerald Jay Sussman, Terry Winograd, and Eugene Charniak on the AI group computer in LISP.
DEFINE - This is analogous to the MacLISP DEFUN primitive (but note that the LAMBDA must appear explicitly!). It is used for defining a function in the "global environment" permanently, as opposed to LABELS (see below), which is used for temporary definitions in a local environment. DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom).
Atoms which are not atomic symbols (identifiers) evaluate to themselves. Typical examples of such atoms are numbers, arrays, and strings (character arrays). Symbols are treated as identifiers or variables. They may be lexically bound by lambda-expressions. There is a global environment containing identifiers which initially have as their values primitive operations such as, for example, CAR, CONS, and PLUS. SCHEME differs from most LISP systems in that the atom CAR is not itself an operation (in the sense of being an invocable object, e.g. a valid first argument to APPLY), but only has one as a value when considered as an identifier.
Inspired by ACTORS [Greif and Hewitt][Smith and Hewitt], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [Church], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to:
⑴ alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc. by clarifying the embedding of non-recursive control structures in a recursive host language like LISP.
⑵ explain how to use these control structures, independent of such issues as pattern matching and data base manipulation.
⑶ have a simple concrete experimental domain for certain issues of programming semantics and style.
The variables in a lambda expression are dummy or bound variables because systematically changing them does not alter the meaning of the expression. Thus λ[[u;v];v²+u] means the same thing as λ[[x;y];y²+x].
We shall sometimes use expressions in which a variable is not bound by a lambda. For example, in the function of two variables λ[[x;y];xⁿ+yⁿ] the variable n is not bound. This is called a free variable. It may be regarded as a parameter. Unless n has been given a value before trying to compute with this function, the value of the function must be undefined. ……
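The xⁿ+yⁿ example above translates naturally into a Python sketch: n is free in the inner function and must get its value from the surrounding environment before the function can be computed. The name power_sum_maker is invented for this illustration.

```python
# n is a parameter of the outer function; the inner lambda refers to it
# freely and closes over whatever value it was given.
def power_sum_maker(n):
    return lambda x, y: x**n + y**n   # n is free here, bound by the enclosing scope

f = power_sum_maker(2)   # fix n = 2 before computing
print(f(3, 4))           # -> 25
```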
When the compiler is called upon to compile a function, it looks for an EXPR or FEXPR on the property list of the function name. The compiler then translates this S-expression into an S-expression that represents a subroutine in the LISP Assembly Language (LAP). LAP then proceeds to assemble this program into binary program space. Thus an EXPR, or an FEXPR, has been changed to a SUBR or an FSUBR, respectively. ……
Free variables in compiled functions must be declared before the function is compiled. ……
A variable is bound in a particular function when it occurs in a list of bound variables following the word LAMBDA or PROG. Any variable that is not bound is free. ……
When a variable is used free, it must have been bound by a higher level function. If a program is being run interpretively, and a free variable is used without having been bound on a higher level, error diagnostic *A 8* will occur.
If the program is being run compiled, the diagnostic may not occur, and the variable may have the value NIL.
There are three types of variables in compiled functions: ordinary variables, SPECIAL variables, and COMMON variables. SPECIAL and COMMON variables must be declared before compiling. Any variable that is not declared will be considered an ordinary variable.
When functions are translated into subroutines, the concept of a variable is translated into a location where an argument is stored. If the variable is an ordinary one, then a storage location for it is set up on the push-down list. Other functions cannot find this private cell, making it impossible to use it as a free variable. SPECIAL variables have the indicator SPECIAL on their property lists. Following the indicator there is a pointer to a fixed cell. When this variable is bound, the old value is saved on the push-down list, and the current value is stored in the SPECIAL cell. When it is no longer bound, the old value must be restored. When a function uses this variable free, then the quantity in the SPECIAL cell is picked up. SPECIAL variables are declared by using the pseudo-function special[a], where a is a list of variable names. …… SPECIAL variables are inexpensive and will allow free communication among compiled functions. They do not increase run time significantly. SPECIAL variables cannot be communicated between the interpreter and compiled functions. COMMON variables have the flag COMMON on their property lists; however, this is only used to inform the compiler that they are COMMON, and is not needed at run time. COMMON variables are bound on an a-list by the compiled functions. When they are to be evaluated, eval is given this a-list. This happens at run time.
The use of COMMON variables will slow down any compiled function using them. However, they do provide complete communication between interpreted and compiled functions. COMMON variables are declared by common[a], where a is a list of variable names. ……
LISP is designed in such a way that all functions for which the possibility of recursion can exist are in fact recursive. This requires that all temporary stored results related to the computation that is in progress be set aside when a piece of coding is to be used recursively, and that they be later restored. This is done automatically and need not be programmed explicitly.
All saving of temporary results in LISP is performed on a linear block of storage called the push-down list. Each set of stored data that is moved onto the push-down list is in a block labeled with its size and the name of the subroutine from which it came. Since it is in the nature of recursion that the first block to be saved is always the last block to be restored, it is possible to keep the push-down list compact.
There are several ways in which to maintain the values of free variables (and, thus, the computational environment) in order to handle cases like the one above. All of the techniques used involve some increase in time. One approach makes it as easy to access the value of a free variable as if it were local. This approach is favored in recent implementations of LISP. We shall call it the "shallow access" approach.
Another approach is to make it relatively easy to enter and leave a block in which a free variable is bound, but relatively expensive to access a free variable. This approach is used in the classical "alist" implementation of LISP. Several Algol systems have opted for a similar approach. We shall call this the "deep access" approach. Both of these approaches allow one to modify the value of the variable as well as access it. The access time is approximately the same as modification time in both approaches.
Let us consider the "shallow access" approach first. By "shallow access" we mean that the value of a free variable can be obtained in a single fetch from memory. Since the current value may be stored in a difficult-to-determine location up in the stack, a special cell for its current value is used. In many recent LISP implementations this special value cell is stored as a property of the atom structure, and is unique to the free variable. In order to maintain the proper value in the special cell, extra work must be done every time a function or a block is entered in which the variable is an argument or is local. On entering such a function or block one usually saves the old value which is in the special cell on the stack along with sufficient information to allow one to know from where the stack value came. On leaving such a function or block one must remember to store the value saved in the stack in the special cell.
The "deep access" approach forces one to search upward in the stack for the most recent value of the free variable. Such a search may be quite deep and slow if the free variable was bound very far up in the stack. However, this approach has the advantage of avoiding the need for an extra special value cell. In addition, we shall see later that this approach sometimes allows one to use free variables with greater ease and flexibility than the "shallow access" approach. ……
Certain LISP systems use "deep access" in interpreted functions and "shallow access" in compiled functions. If free variables in compiled functions are declared to be COMMON, their values will be stored on the "alist". In this manner one can solve the environment problem. The cost is a very slow interpreter and "deep access" to free variables.
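The two strategies can be sketched as follows (a Python model for illustration; `deep_lookup`, `bind`, and `unbind` are invented names, not part of any LISP implementation). Deep access searches an association list for the most recent binding; shallow access keeps one value cell per symbol and saves/restores it on block entry and exit.

```python
# Deep access: bindings live on an association list; referencing a free
# variable searches for its most recent binding (slow access, cheap entry).
def deep_lookup(alist, name):
    for n, v in alist:        # most recent binding first
        if n == name:
            return v
    raise NameError(name)

alist = [("X", 3), ("X", 1), ("Y", 2)]  # inner binding of X shadows outer
assert deep_lookup(alist, "X") == 3

# Shallow access: each symbol has a value cell holding its current value
# (one fetch); entering a binding block saves the old cell contents on a
# stack, and leaving restores it (cheap access, extra work on entry/exit).
value_cell = {"X": 1}
save_stack = []

def bind(name, value):
    save_stack.append((name, value_cell.get(name)))  # save old value
    value_cell[name] = value

def unbind():
    name, old = save_stack.pop()
    value_cell[name] = old                           # restore on exit

bind("X", 3)
assert value_cell["X"] == 3   # a single fetch
unbind()
assert value_cell["X"] == 1   # the old value is back
```

The trade-off is exactly as the text describes: deep access pays per reference, shallow access pays per block entry and exit.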
Magic forms are recognized by the presence of a "magic (reserved) word" in the car position of the form. All non-atomic forms which are not magic forms are considered to be combinations. The system has a small initial set of magic words; there is also a mechanism for creating new ones {Note FUNCALL is a Pain}.
A combination is considered to be a list of subforms. These subforms are all evaluated. The first value must be a procedure; it is applied to the other values to get the value of the combination. There are four important points here:
(1) the procedure position is always evaluated just like any other position. (This is why the primitive operators are the values of global identifiers.)
(2) the procedure is never "re-evaluated"; if the first subform fails to evaluate to an applicable procedure, it is an error. Thus, unlike most LISP systems, SCHEME always evaluates the first subform of a combination exactly once.
(3) the arguments are all completely evaluated before the procedure is applied; that is, SCHEME, like most LISP systems, is an applicative-order language. Many SCHEME programs exploit this fact.
(4) the argument forms (and procedure form) may in principle be evaluated in any order. This is unlike the usual LISP left-to-right order. (All SCHEME interpreters implemented so far have in fact performed left-to-right evaluation, but we do not wish programs to depend on this fact. Indeed, there are some reasons why a clever interpreter might want to evaluate them right-to-left, e.g. to get things on a stack in the correct order.)
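Points (1)–(4) can be sketched with a toy evaluator (Python; the `ev` function and the environment are invented for illustration; note this sketch happens to evaluate subforms left-to-right, which a real SCHEME need not guarantee):

```python
import operator

# primitive operators are simply the values of global identifiers
global_env = {"PLUS": operator.add, "TIMES": operator.mul}

def ev(form, env):
    if isinstance(form, str):          # identifier: look up its value
        return env[form]
    if not isinstance(form, list):     # self-evaluating constant
        return form
    # a combination: evaluate ALL subforms, operator position included,
    # by the same rule as any other position
    values = [ev(sub, env) for sub in form]
    proc, args = values[0], values[1:]
    if not callable(proc):             # never "re-evaluated": an error
        raise TypeError("first subform is not an applicable procedure")
    return proc(*args)

# (PLUS 1 (TIMES 2 3))  =>  7
print(ev(["PLUS", 1, ["TIMES", 2, 3]], global_env))
```

The operator position goes through the same `ev` call as every argument, which is the essence of the single evaluation rule.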
Lisp is a survivor, having been in use for about a quarter of a century. Among the active programming languages only Fortran has had a longer life.
Dynamic scoping inside isolated namespaces - newLISP is sometimes criticized for using dynamic scoping and fexprs. …… In newLISP, all variables are dynamically scoped by default. However, by defining a function in its own context, static/lexical scoping can be achieved. In newLISP, several functions and data can share a namespace. By enclosing functions in their own namespace, a lexical closure-like mechanism is achieved. Common Lisp and Scheme are lexically scoped by default and use lambda expressions as the closure mechanism. Common Lisp also offers special variables for dynamic scoping.
The problems of free variables in dynamic scoping can be avoided. In the rare cases when free variables must be used, you can partition code into namespace modules for easier control over free variables. You can then exploit the advantages of dynamic scoping. With dynamic scoping inside lexically-closed namespaces, newLISP combines the best of both scoping worlds.
newLISP has no funarg problem because it follows a simple rule: variables always show the binding of their current environment. When expressions with local variables are entered, newLISP saves the current variable state on a stack and restores it on exit of that expression. In newLISP, not only are function parameters and variables declared with let expressions local; loop variables in all looping expressions are local too.
In 1981 the emerging Common Lisp community turned to Scheme for some of its motivation and inspiration [Steele 1984]. Adopting lexical scoping proved one of the most important decisions the Common Lisp group ever made.
One aspect of Scheme that was not adopted, however, was a single namespace for functions and values, along with uniform evaluation rules for expressions in function and argument positions within the language. ……
In this paper, we shall refer to two abstract dialects of Lisp called Lisp1 and Lisp2.
Lisp1 has a single namespace that serves a dual role as the function namespace and value namespace; that is, its function namespace and value namespace are not distinct. In Lisp1, the functional position of a form and the argument positions of forms are evaluated according to the same rules. Scheme [Rees 1986] and the language being designed by the EuLisp group [Padget 1986] are Lisp1 dialects.
Lisp2 has distinct function and value namespaces. In Lisp2, the rules for evaluation in the functional position of a form are distinct from those for evaluation in the argument positions of the form. Common Lisp is a Lisp2 dialect. ……
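The namespace distinction can be sketched as follows (a Python model; `lisp1_eval_call` and `lisp2_eval_call` are invented names for illustration): a Lisp1 evaluates the operator position with the same lookup rule and environment as any argument, while a Lisp2 consults a separate function namespace for the operator position.

```python
def lisp1_eval_call(form, env):
    # one namespace: the operator is looked up exactly like an argument
    fn = env[form[0]]
    return fn(*[env[a] for a in form[1:]])

def lisp2_eval_call(form, fenv, venv):
    # two namespaces: operator position uses the function namespace,
    # argument positions use the value namespace
    fn = fenv[form[0]]
    return fn(*[venv[a] for a in form[1:]])

# Lisp1: DOUBLE is just a variable whose value is a procedure
env1 = {"DOUBLE": lambda x: 2 * x, "X": 2}
print(lisp1_eval_call(["DOUBLE", "X"], env1))         # 4

# Lisp2: the name LIST can denote data in the value namespace and a
# function in the function namespace at the same time
venv = {"X": 2, "LIST": (1, 2)}
fenv = {"LIST": lambda *a: list(a)}
print(lisp2_eval_call(["LIST", "X", "X"], fenv, venv))  # [2, 2]
```

The second example shows why a Lisp2 programmer may freely use `LIST` as a variable name without shadowing the function of the same name.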
Most Lisp dialects adopted a two-namespace approach to the naming problem. To some extent this is because most dialects followed Lisp 1.5 [McCarthy 1965] or dialects derived from Lisp 1.5.
Lisp 1.5 broke symbols into values and functions; values were stored on an association list, and functions on the property lists of symbols. Compiled and interpreted code worked in different ways. In the interpreter, the association list was where all bindings were kept. When an identifier was encountered (an 'atomic symbol' in Lisp 1.5 terminology), it was taken as a variable to be evaluated for its value. First the APVAL part of the symbol was interrogated - an APVAL was "A Permanent, system-defined VALue" stored in a specific place in the symbol. Second, the association list was searched. Finally, if no binding was found, an error was signaled.
When a combination was encountered, the function position was evaluated differently from other positions. First, the symbol was interrogated to see whether there was a function definition associated with it; then the association list was searched.
Here we can see two namespaces at work, though nonsymbol variables were treated somewhat uniformly (Lisp 1.5 did not have lexical variables in interpreted code): when there were no function definitions associated with a symbol, there was one namespace, which was explicitly represented by the association list.
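The two lookup orders described above can be sketched as follows (a Python model of the Lisp 1.5 interpreter's behavior; the dict-based `plists` is an illustrative stand-in for real property lists):

```python
# Each symbol carries a property list (modeled as a dict of indicators).
plists = {
    "NIL": {"APVAL": []},               # a permanent, system-defined value
    "CAR": {"SUBR": lambda x: x[0]},    # a machine-coded function
}

def eval_variable(name, alist):
    """Variable position: APVAL first, then the association list."""
    if "APVAL" in plists.get(name, {}):
        return plists[name]["APVAL"]
    for n, v in alist:                  # most recent binding first
        if n == name:
            return v
    raise NameError(name)               # no binding found: an error

def lookup_function(name, alist):
    """Function position: function properties first, then the a-list."""
    props = plists.get(name, {})
    for indicator in ("EXPR", "SUBR"):
        if indicator in props:
            return props[indicator]
    for n, v in alist:
        if n == name:
            return v
    raise NameError(name)

alist = [("X", ["A", "B"])]
assert eval_variable("NIL", alist) == []                   # APVAL wins
assert eval_variable("X", alist) == ["A", "B"]             # a-list search
assert lookup_function("CAR", alist)(["A", "B"]) == "A"    # property wins
```

The two distinct search orders are precisely the "two namespaces at work" that the text identifies.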
Compiled code worked a slightly different way, and from its internals we can see where the two namespaces came about in descendants of Lisp 1.5.
The Lisp 1.5 compiler supported 'common' and 'special' variables. A common variable enabled compiled and interpreted code to communicate with each other. A common variable was bound on an explicit association list; to evaluate such a variable, a call to EVAL was emitted to determine the value. A special variable was the compiler's modeling of free variables and closely matched what is now called 'shallow binding.' Ordinary variables compiled into what we have termed lexical variables.
Thus, we see all of the components of the two-namespace world in Lisp 1.5, along with some of the components of the one-namespace world.
It seems that the designers of Lisp 1.5 thought of function names as being different from variable names — the Lisp 1.5 interpreter looked at the property list of the named atomic symbol first, and so it can be argued that a function was considered a property of a symbol. The designers used the terminology that a symbol "stands for" a function while a variable "refers" to a value.
Lisp for the PDP-6 at MIT adopted the style of the Lisp 1.5 special variables for dynamic binding in both compiled and interpreted code, thereby eliminating common variables. Compilers were still written that tried to treat special variables as lexical variables in as many places as possible. The value of a symbol was stored in the value cell of the symbol, and the function remained on the property list as it did in Lisp 1.5.
MacLisp [Pitman 1983] is a direct descendant of the PDP-6 Lisp and is a Lisp2 dialect. MacLisp uses a sophisticated form of link table, which is made possible by the separation of namespaces. In particular, function-defining functions have controlled access into the places where functions are stored so that the link tables can be correctly maintained. ……
Common Lisp was the result of a compromise between a number of dialects of Lisp, most of them descendants of MacLisp, all of them Lisp2s. ……
A free variable in Common Lisp is currently taken to be a dynamic rather than a lexical reference because there is no global lexical environment in Common Lisp. In this code

(DEFUN PLUS (X Y) (+ X Y))
(DEFUN SOMEWHAT-MORE-THAN (X) (PLUS X *SOMEWHAT*))

the reference to *SOMEWHAT* is special (dynamic). On the surface, in Lisp1 the reference to PLUS is free also, and thus it is special (dynamic).
When a variable is declared special, a free reference to it is to the most recent dynamically bound value for it; all bindings of it become special (dynamic) bindings. When a variable is declared constant, free references to it are to its permanent constant value, and compilers are free to fold that constant into the code they emit.
We introduce the concept of global variables; when a variable is global, free references to it are to the value cell of the symbol named by the variable, and all bindings of it are still lexical.
To avoid a compiler warning for free use of a variable like *SOMEWHAT*, the variable is declared SPECIAL. In order to have compilers for Lisp1 not warn about the free use of PLUS, it would be unfortunate to have to declare it SPECIAL.
As noted, in Common Lisp the default for free variable references is SPECIAL; that is, a free variable reference is taken to be a special variable reference. If the default in Lisp1 were that free variable references were global (lexical), then it would make sense for a compiler not to warn about such free variable references.
Sussman had just been studying Algol. He suggested starting with a lexically scoped dialect of Lisp, because that seemed necessary to model the way names could refer to acquaintances in PLASMA. Lexical scoping would allow actors and functions to be created by almost identical mechanisms. Evaluating a form beginning with the word lambda would capture the current variable-lookup environment and create a closure; evaluating a form beginning with the word alpha would also capture the current environment but create an actor. Message passing could be expressed syntactically in the same way as function invocation. The difference between an actor and a function would be detected in the part of the interpreter traditionally known as apply. A function would return a value, but an actor would never return; instead, it would typically invoke a continuation, another actor that it knew about. Our interpreter also provided the necessary primitives for implementing the internal behavior of primitive actors, such as an addition operator that could accept two numbers and a continuation actor. ……
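The function/actor distinction made in apply can be sketched as follows (a Python model; `plus_function`, `plus_actor`, and the explicit continuation argument are illustrative, not PLASMA or SCHEME syntax): a function returns a value to its caller, while an actor never returns and instead sends its result to a continuation it knows about.

```python
def plus_function(a, b):
    return a + b                 # ordinary function: returns a value

def plus_actor(a, b, continuation):
    continuation(a + b)          # actor: invokes the continuation instead
                                 # of returning; control never comes back

results = []
# (1 + 2), then send that result on to another actor that adds 10
plus_actor(1, 2, lambda v: plus_actor(v, 10, results.append))
print(results)  # [13]

assert plus_function(1, 2) == 3
```

In an apply-style interpreter, this is the single branch point: apply a procedure and return its value, or deliver a message and let the continuation carry on.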
…… This led us to three important ideas:
• First, we realized that all the patterns of control structure that Hewitt had described in terms of actors could equally well be described by the λ-calculus. ……
• Second, we realized that the λ-calculus — a small, simple formalism — could serve as the core of a powerful and expressive programming language. (Lisp had adopted the λ-notation for functions but had failed to support the appropriate behavior for free variables. The original theoretical core of Lisp was recursion equations, not the λ-calculus.) ……
• Third, we realized that in our quest for the “ultimate AI language” we had come full circle. As the MIT school had struggled to find more and more powerful ways to express and manipulate control structure to support heuristic search, we had progressed from Lisp to CONVERT to Planner to Conniver to PLASMA to a simple variation of Lisp!
Why do you use dynamic variable binding? Dynamic binding is very powerful, because there is only one single, dynamically changing environment active all the time. This makes it possible (e.g. for program snippets, interspersed with application data and/or passed over the network) to access the whole application context, freely, yet in a dynamically controlled manner. And (shallow) dynamic binding is the fastest method for a Lisp interpreter.
Lexical binding is more limited by definition, because each environment is deliberately restricted to the visible (textual) static scope within its establishing form. Therefore, most Lisps with lexical binding introduce "special variables" to support dynamic binding as well, and constructs like labels to extend the scope of variables beyond a single function.
In PicoLisp, function definitions are normal symbol values. They can be dynamically rebound like other variables. ……
Are there no problems caused by dynamic binding? You mean the funarg problem, or problems that arise when a variable might be bound to itself? For that reason we have a convention in PicoLisp to use transient symbols (instead of internal symbols) or private internal symbols ……
But with dynamic binding I cannot implement closures! This is not true. Closures are a matter of scope, not of binding.
For a closure it is necessary to build and maintain a separate environment. In a system with lexical bindings, this has to be done at each function call, and for compiled code it is the most efficient strategy anyway, because it is done once by the compiler, and can then be accessed as stack frames at runtime.
For an interpreter, however, this is quite an overhead. So it should not be done automatically at each and every function invocation, but only if needed.
A conditional expression has the form (p1 → e1, …, pn → en) where the p's are propositional expressions and the e's are expressions of any kind. It may be read, “If p1 then e1, otherwise if p2 then e2, …, otherwise if pn then en,” or “p1 yields e1, …, pn yields en.”
I sent a proposal for conditional expressions to a CACM forum on what should be included in Algol 60. Because the item was short, the editor demoted it to a letter to the editor, for which CACM subsequently apologized. The notation given here was rejected for Algol 60, because it had been decided that no new mathematical notation should be allowed in Algol 60, and everything new had to be English. The if...then...else that Algol 60 adopted was suggested by John Backus.
The conditional expression [p1 → e1; …; pn → en] is translated into (COND (p1* e1*) … (pn* en*)).
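The meaning of the conditional expression can be sketched in Python (the `cond` helper and the thunks are illustrative; wrapping each p and e in a thunk keeps the untaken branches unevaluated, as in COND):

```python
def cond(*clauses):
    """Try (p, e) clauses in order; the value is e of the first true p."""
    for p, e in clauses:
        if p():
            return e()
    raise ValueError("no propositional expression was true")

x = -5
# [x < 0 -> -x; T -> x], i.e. the absolute value of x
print(cond((lambda: x < 0, lambda: -x),
           (lambda: True,  lambda: x)))   # 5
```

Only the selected e is ever evaluated, which is what distinguishes a conditional expression from an ordinary function of 2n arguments.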
Mathematically, it is possible to have functions as arguments of other functions. For example, in arithmetic one could define a function operate[op;a;b], where op is a functional argument that specifies which arithmetic operation is to be performed on a and b. ……
In LISP, functional arguments are extremely useful. A very important function with a functional argument is maplist. Its M-expression definition is
maplist[x;fn]=[null[x]→NIL;
T→cons[fn[x];maplist[cdr[x];fn]]]
……
The functional argument is, of course, a function translated into an S-expression. It is bound to the variable fn and is then used whenever fn is mentioned as a function. The S-expression for maplist itself is as follows:
(MAPLIST (LAMBDA (X FN) (COND ((NULL X) NIL)
(T (CONS (FN X) (MAPLIST (CDR X) FN))) )))
……
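The behavior of maplist as defined above can be sketched in Python (an illustrative model; note that, unlike a modern element-wise map, fn is applied to the whole list x and then to each successive tail (cdr), not to the elements themselves):

```python
def maplist(x, fn):
    if not x:                              # null[x] -> NIL
        return []
    # cons[fn[x]; maplist[cdr[x]; fn]]
    return [fn(x)] + maplist(x[1:], fn)

# Applying car to each successive tail recovers the elements:
print(maplist(["A", "B", "C"], lambda j: j[0]))    # ['A', 'B', 'C']

# change[a]: cons the car of each tail with the constant X
print(maplist(["A", "B"], lambda j: (j[0], "X")))  # [('A', 'X'), ('B', 'X')]
```

The second call mirrors the change function defined in the text, with dotted pairs modeled as Python tuples.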
Using maplist, we define change by

change[a]=maplist[a;λ[[j];cons[car[j];X]]]
This is not a valid M-expression as defined syntactically in section 1.5 because a function appears where a form is expected. This can be corrected by modifying the rule defining an argument so as to include functional arguments:
<argument> ::= <form> | <function>
We also need a special rule to translate functional arguments into S-expressions. If fn is a function used as an argument, then it is translated into (FUNCTION fn*).
Example
(CHANGE (LAMBDA (A) (MAPLIST A (FUNCTION
(LAMBDA (J) (CONS (CAR J) (QUOTE X))) ))))
An examination of evalquote shows that QUOTE will work instead of FUNCTION, provided that there are no free variables present.
Syntax:
function name ⇒ function
Arguments and Values:
name - a function name or lambda expression.
function - a function object.
Description:
The value of function is the functional value of name in the current lexical environment.
……
Notes:
The notation #'name may be used as an abbreviation for (function name).
Syntax:
funcall function &rest args ⇒ result*
Arguments and Values:
function - a function designator.
args - arguments to the function.
results - the values returned by the function.
Description:
funcall applies function to args. If function is a symbol, it is coerced to a function as if by finding its functional value in the global environment.
A function can be simply a name. In this case its meaning must be previously understood. A function may be defined by using the lambda notation and establishing a correspondence between the arguments and the variables used in a form. If the function is recursive, it must be given a name by using a label. ……
When a symbol stands for a function, the situation is similar to that in which a symbol stands for an argument. When a function is recursive, it must be given a name. This is done by means of the form LABEL, which pairs the name with the function definition on the a-list. The name is then bound to the function definition, just as a variable is bound to its value.
In actual practice, LABEL is seldom used. It is usually more convenient to attach the name to the definition in a uniform manner. This is done by putting on the property list of the name the symbol EXPR followed by the function definition. The pseudo-function define used at the beginning of this section accomplishes this. When apply interprets a function represented by an atomic symbol, it searches the p-list of the atomic symbol before searching the current a-list. Thus a define will override a LABEL.
The fact that most functions are constants defined by the programmer, and not variables that are modified by the program, is not due to any weakness of the system. On the contrary, it indicates a richness of the system which we do not know how to exploit very well.
evalquote[fn;args]. When evalquote is given a function and a list of arguments for that function, it computes the value of the function applied to the arguments. The argument fn of the function evalquote must be an S-expression. Since we have been writing functions as M-expressions, it is necessary to translate them into S-expressions. …… evalquote is defined by using two main functions, called eval and apply. apply handles a function and its arguments, while eval handles forms. Each of these functions also has another argument that is used as an association list for storing the values of bound variables and function names. …… When apply encounters a function beginning with LAMBDA, the list of variables is paired with the list of arguments and added to the front of the a-list. During the evaluation of the function, variables may be encountered. They are evaluated by looking them up on the a-list. If a variable has been bound several times, the last or most recent value is used. The part of the interpreter that does this is called eval. The following example will illustrate this discussion. Suppose the interpreter is given the following doublet:
fn: (LAMBDA (X Y) (CONS X Y))
args: (A B)
evalquote will give these arguments to apply. (Look at the universal function of Section I.)
apply[(LAMBDA (X Y) (CONS X Y)); (A B); NIL]
apply will bind the variables and give the function and a-list to eval.
eval[(CONS X Y); ((X . A) (Y . B))]
eval will evaluate the variables and give it to cons.
cons[A;B] = (A . B)
……
apply[CONS; (A B); ((X . A) (Y . B))]. …… the LAMBDA that binds it. When a variable always has a certain value regardless of the current a-list, it will be called a constant. This is accomplished by means of the property list (p-list) of the variable symbol. Every atomic symbol has a p-list. When the p-list contains the indicator APVAL, then the symbol is a constant and the next item on the list is the value. eval searches p-lists before a-lists when evaluating variables, thus making it possible to set constants. …… NIL is an example of this. It can be evaluated repeatedly and will still be NIL. T, F, NIL, and other constants cannot be used as variables. …… (PROG, list of program variables, sequence of statements and atomic symbols …) …… LAMBDA. The value of each program variable is NIL until it has been set to something else. To set a program variable, use the form SET. To set variable PI to 3.14 write (SET (QUOTE PI) 3.14). SETQ is like SET except that it quotes its first argument. Thus (SETQ PI 3.14). SETQ is usually more convenient. SET and SETQ can change variables that are on the a-list from higher level functions. The value of SET or SETQ is the value of its second argument. …… 77777₈ (i.e., minus 1) as the first element of the list. The rest of the list contains various properties of the atomic symbol. Each property is preceded by an atomic symbol which is called its indicator. Some of the indicators are:
PNAME - the BCD print name of the atomic symbol for input-output use.
EXPR - S-expression defining a function whose name is the atomic symbol on whose property list the EXPR appears.
SUBR - Function defined by a machine language subroutine.
APVAL - Permanent value for the atomic symbol considered as a variable.
NIL has two things on its property list - its PNAME, and an APVAL that gives it a value of NIL. Its property list looks like this: [diagram: a first word holding -1, then the indicator APVAL pointing to a cell whose value is NIL, then the indicator PNAME pointing to the print name NIL] ……
EXPR points to an S-expression defining a function. The function define puts EXPR's on property lists. After defining ff, its property list would look like this: [diagram: a first word holding -1, then the indicator EXPR pointing to the S-expression (LAMBDA (X) (COND …)), then the indicator PNAME pointing to the print name FF] ……
…… -1 in the address, certain bits in the tag which specify that it is a number and what type it is, and a pointer to the number itself in the decrement of this word. 15 is represented as follows: [diagram: a word with -1 in the address and a number tag, whose decrement points to the machine word 000000000017] ……
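The evalquote doublet traced above (apply pairing LAMBDA variables with arguments onto the front of the a-list, eval looking them up, most recent binding first) can be sketched as follows (a Python model; `ev` and `apply_lambda` are invented names for illustration):

```python
def ev(form, alist):
    if isinstance(form, str):                 # a variable: search the a-list
        for n, v in alist:                    # most recent binding first
            if n == form:
                return v
        raise NameError(form)
    # a form: evaluate the arguments, then apply the named primitive
    op, args = form[0], [ev(a, alist) for a in form[1:]]
    if op == "CONS":
        return (args[0], args[1])             # a dotted pair as a tuple
    raise NameError(op)

def apply_lambda(fn, args, alist):
    _, variables, body = fn                   # ("LAMBDA", vars, body)
    # pair variables with arguments and add to the FRONT of the a-list
    alist = list(zip(variables, args)) + alist
    return ev(body, alist)

# fn: (LAMBDA (X Y) (CONS X Y))   args: (A B)
fn = ("LAMBDA", ["X", "Y"], ["CONS", "X", "Y"])
print(apply_lambda(fn, ["A", "B"], []))   # ('A', 'B'), i.e. (A . B)
```

Because new pairs go onto the front, an inner rebinding of X automatically shadows an outer one, exactly as in the "most recent value" rule quoted above.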
special[x] : SUBR pseudo-function
The list x contains the names of variables that are to be declared SPECIAL. ……
common[x] : SUBR pseudo-function
The list x contains the names of variables that are to be declared COMMON. ……
The PROG feature is an FSUBR coded into the system. ……
When a set or a setq is encountered, the name of the variable is located on the a-list. The value of the variable (or cdr of the pair) is actually replaced with the new value. ……
The following points should be noted concerning declared variables.
1. Program variables follow the same rules as λ variables do.
a. If a variable is purely local, it need not be declared.
b. Special variables can be used as free variables in compiled functions. They may be set at a lower level than that at which they are bound.
c. Common program variables maintain complete communication between compiled programs and the interpreter.
2. set, as distinct from setq, can only be used to set common variables.”
The purpose of the LISP Compiler is to replace S-expression definitions of functions with efficient machine language subroutines. A subroutine can be expected to run about 40 times as fast as the interpreter can execute the same function from its S-expression definition. Subroutines typically take 70-80 per cent of the storage required by their corresponding S-expressions.
define. …… A pseudo-function is a function that is executed for its effect on the system in core memory, as well as for its value. define causes these functions to be defined and available within the system. Its value is a list of the functions defined ……. ……
define[x] : EXPR pseudo-function
The argument of define, x, is a list of pairs ((u1 v1) (u2 v2) … (un vn)) where each u is a name and each v is a λ-expression for a function. For each pair, define puts an EXPR on the property list for u pointing to v. The function define puts things on at the front of the property list. The value of define is the list of u's.
define[x] = deflist[x;EXPR]
deflist[x;ind] : EXPR pseudo-function
deflist is a more general defining function. Its first argument is a list of pairs as for define. Its second argument is the indicator that is to be used. After deflist has been executed with (ui vi) among its first argument, the property list of ui will begin: [diagram: the property list of ui starts with a word holding -1, followed by the indicator ind pointing to vi, followed by the rest of the list]
If deflist or define is used twice on the same object with the same indicator, the old value will be replaced by the new one.”
The over-all design of the LISP Programming System is the work of John McCarthy and is based on his paper "Recursive Functions of Symbolic Expressions and Their Computation by Machine" which was published in Communications of the ACM, April 1960.
This manual was written by Michael I. Levin.
The interpreter was programmed by Stephen B. Russell and Daniel J. Edwards.
The print and read programs were written by John McCarthy, Klim Maling, Daniel J. Edwards, and Paul W. Abrahams.
The garbage collector and arithmetic features were written by Daniel J. Edwards.
The compiler and assembler were written by Timothy P. Hart and Michael I. Levin.
An earlier compiler was written by Robert Brayton.
The "LISP 1 Programmer's Manual," March 1, 1960, was written by Phyllis A. Fox. Additional programs and suggestions were contributed by the following members of the Artificial Intelligence Group of the Research Laboratory of Electronics: Marvin L. Minsky, Bertram Raphael, Louis Hodes, David M. R. Park, David C. Luckham, Daniel G. Bobrow, James R. Slagle, and Nathaniel Rochester.
The STANFORD A.I. LISP 1.6 System was originally an adaptation of one developed by the Artificial Intelligence Project at M.I.T. Since 1966, that system has been largely rewritten by John Allen and the author.
At Stanford in the 1960s, an early version of MacLisp was adapted for their PDP-6; this Lisp was called Lisp 1.6 [Quam 1972]. The early adaptation was rewritten by John Allen and Lynn Quam; later compiler improvements were made by Whit Diffie. Lisp 1.6 disappeared during the mid-1970s, one of the last remnants of the Lisp 1.5 era.
The Artificial Intelligence Project at Stanford University has produced a version of LISP 1.5 to be distributed by SHARE. In the middle of February 1965 the system is complete and is available from Stanford. The system should be available from SHARE by the end of March 1965. …… Evalquote is available to the programmer as a LISP function - thus, one may now write "(EVALQUOTE APPEND ((A)(B C D)))", rather than "(EVAL (QUOTE (APPEND (A)(B C D))) NIL)", should one desire to do so.
A key difference between MacLisp and Interlisp was the approach to syntax. MacLisp favored the pure list style, using EVAL as the top level. Interlisp, along with Lisp 1.5, used EVALQUOTE.
To concatenate the lists (BOAT AIRPLANE SKATEBOARD) and (CAR TRUCK) in MacLisp, one would type this expression to EVAL:
(APPEND (QUOTE (BOAT AIRPLANE SKATEBOARD)) (QUOTE (CAR TRUCK)))
or, using the syntactic abbreviation 'x for (quote x),
(APPEND '(BOAT AIRPLANE SKATEBOARD) '(CAR TRUCK))
The result would of course be (BOAT AIRPLANE SKATEBOARD CAR TRUCK).
In Lisp 1.5, one would type an expression (actually two expressions) like this to EVALQUOTE:
APPEND((BOAT AIRPLANE SKATEBOARD) (CAR TRUCK))
The first expression denotes a function, and the second is a list of arguments. The "quote" in the name EVALQUOTE signifies the "implicit quoting of the arguments" to the function applied. MacLisp forked off and used EVAL exclusively as a top-level interface. BBN-Lisp (and thus Interlisp) accommodated both: if the first input line contained a complete form and at least one character of a second form, then BBN-Lisp finished reading the second form and used the EVALQUOTE interface for that interaction; otherwise it read exactly one form and used the EVAL interface for that interaction.
The phrase "quoting arguments" actually is misleading and imprecise. It refers to the actions of a hypothetical preprocessor that transforms the input from a form like
APPEND((BOAT AIRPLANE SKATEBOARD) (CAR TRUCK))
to one like
(APPEND (QUOTE (BOAT AIRPLANE SKATEBOARD)) (QUOTE (CAR TRUCK)))
before evaluation is performed. A similar confusion carried over into the description of the so-called FEXPR or "special form." In some texts on Lisp one will find descriptions of special forms that speak of a special form "quoting its arguments," when in fact a special form has a special rule for determining its meaning and that rule involves not evaluating some forms [Pitman 1980].
McCarthy [McCarthy 1981] noted that the original Lisp interpreter was regarded as a universal Turing machine: It could perform any computation given a set of instructions (a function) and the initial input on its tape (arguments). Thus it was intended that
APPEND((BOAT AIRPLANE SKATEBOARD) (CAR TRUCK))
be regarded not as a syntactically mutated version of
(APPEND (QUOTE (BOAT AIRPLANE SKATEBOARD)) (QUOTE (CAR TRUCK)))
but as a function and (separately) a literal list of arguments. In hindsight we see that the EVALQUOTE top level might better have been called the APPLY top level, making it pleasantly symmetrical to the EVAL top level; the BBN-Lisp documentation brought out this symmetry explicitly. Indeed, EVALQUOTE would have been identical to the function APPLY in Lisp 1.5 if not for these two differences: (a) in Lisp 1.5, APPLY took a third argument, an environment (regarded nowadays as something of a mistake that resulted in dynamic binding rather than the lexical scoping needed for a faithful reflection of the lambda calculus); and (b) "EVALQUOTE is capable of handling special forms as a sort of exception" [McCarthy 1962]. Nowadays such an exception is referred to as a kluge [Raymond 1991]. (Note, however, that MacLisp's APPLY function supported this same kluge.)
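The two interfaces can be illustrated with a toy model (a Python sketch with invented names, not the actual Lisp 1.5 or MacLisp code) that supports only APPEND and QUOTE: the EVALQUOTE/APPLY-style entry point takes a function name and a literal argument list, while the EVAL-style entry point evaluates every argument form first, so literal data must be protected with QUOTE.

```python
def s_apply(fn, args):
    """EVALQUOTE/APPLY-style top level: fn is a function name and args
    is a literal list of arguments, implicitly 'quoted'."""
    if fn == "APPEND":
        return args[0] + args[1]
    raise ValueError(fn)

def s_eval(form):
    """EVAL-style top level: each argument form is evaluated before the
    function is applied, so (QUOTE x) is needed to pass x literally."""
    if form[0] == "QUOTE":
        return form[1]
    return s_apply(form[0], [s_eval(arg) for arg in form[1:]])
```

Both `s_apply("APPEND", [["BOAT"], ["CAR"]])` and the corresponding QUOTE-wrapped form given to `s_eval` produce the same concatenation, mirroring the two top-level conventions above.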
In 1963, L. Peter Deutsch (at that time a high school student) implemented a Lisp similar to Lisp 1.5 on the PDP-1 at Bolt Beranek and Newman (BBN) [Deutsch 1964]. This Lisp was called Basic PDP-1 Lisp. ……
At BBN, a successor to Basic PDP-1 Lisp was implemented on the PDP-1 and an upward-compatible version, patterned after Lisp 1.5 on the MIT CTSS system, was implemented on the Scientific Data Systems 940 (SDS 940) by Daniel Bobrow and D. L. Murphy. A further upward-compatible version was written for the PDP-10 by Alice Hartley and Murphy, and this Lisp was called BBN-Lisp [Teitelman 1971]. In 1973, not long after the time that SDS was acquired by Xerox and renamed Xerox Data Systems, the maintenance of BBN-Lisp was shared by BBN and Xerox Palo Alto Research Center and the name of the Lisp was changed to Interlisp [Teitelman 1974]. ……
Interlisp (and BBN-Lisp before it) introduced many radical ideas into Lisp programming style and methodology. The most visible of these ideas are embodied in programming tools, such as the spelling corrector, the file package, DWIM, CLISP, the structure editor, and MASTERSCOPE.
The origin of these ideas can be found in Warren Teitelman's Ph.D. dissertation on man-computer symbiosis [Teitelman 1966]. In particular, it contains the roots of structure editing (as opposed to "text" or "tape" editing [Rudloe 1962]), breakpointing, advice, and CLISP. ……
CLISP (Conversational LISP) was a mixed ALGOL-like and English-like syntax embedded within normal Interlisp syntax. Here is a valid definition of FACTORIAL written in Interlisp CLISP syntax:
DEFINEQ((FACTORIAL
(LAMBDA (N) (IF N=0 THEN 1 ELSE N*(FACTORIAL N-1)))))
Special Forms - Normally, eval evaluates the arguments of a function before applying the function itself. Thus if eval is given (CONS X Y), it will evaluate X and Y, and then cons them. But if eval is given (QUOTE X), X should not be evaluated. QUOTE is a special form that prevents its argument from being evaluated.
A special form differs from a function in two ways. Its arguments are not evaluated before the special form sees them. COND, for example, has a very special way of evaluating its arguments by using evcon. The second way in which special forms differ from functions is that they may have an indefinite number of arguments. Special forms have indicators on their property lists called FEXPR and FSUBR for LISP-defined forms and machine-language coded forms, respectively.
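The distinction can be made concrete with a miniature evaluator (an illustrative Python sketch with invented names, not LISP system code): CONS is an ordinary function whose argument forms are evaluated first, while QUOTE and COND are special forms that receive their argument forms untouched, COND dispatching through an evcon-like helper.

```python
def evcon(clauses, env):
    """COND's 'very special way': evaluate each predicate in turn, and
    evaluate only the expression of the first true clause."""
    for p, e in clauses:
        if ev(p, env):
            return ev(e, env)
    raise ValueError("no true clause")

def ev(form, env):
    if isinstance(form, str):          # a variable reference
        return env[form]
    head = form[0]
    if head == "QUOTE":                # special form: argument left unevaluated
        return form[1]
    if head == "COND":                 # special form: clauses passed to evcon
        return evcon(form[1:], env)
    if head == "CONS":                 # ordinary function: arguments evaluated
        return (ev(form[1], env), ev(form[2], env))
    raise ValueError(head)
```

Given `{"X": 1, "Y": 2}`, `ev(["CONS", "X", "Y"], ...)` evaluates X and Y before consing, whereas `ev(["QUOTE", "X"], ...)` returns the symbol X itself.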
The variables in a lambda expression are dummy or bound variables because systematically changing them does not alter the meaning of the expression. Thus λ[[u;v];v²+u] means the same thing as λ[[x;y];y²+x].
We shall sometimes use expressions in which a variable is not bound by a lambda. For example, in the function of two variables λ[[x;y];xⁿ+yⁿ] the variable n is not bound. This is called a free variable. It may be regarded as a parameter. Unless n has been given a value before trying to compute with this function, the value of the function must be undefined. ……
When the compiler is called upon to compile a function, it looks for an EXPR or FEXPR on the property list of the function name. The compiler then translates this S-expression into an S-expression that represents a subroutine in the LISP Assembly Language (LAP). LAP then proceeds to assemble this program into binary program space. Thus an EXPR, or an FEXPR, has been changed to a SUBR or an FSUBR, respectively. ……
Free variables in compiled functions must be declared before the function is compiled. ……
A variable is bound in a particular function when it occurs in a list of bound variables following the word LAMBDA or PROG. Any variable that is not bound is free. ……
When a variable is used free, it must have been bound by a higher-level function. If a program is being run interpretively, and a free variable is used without having been bound on a higher level, error diagnostic *A 8* will occur.
If the program is being run compiled, the diagnostic may not occur, and the variable may have the value NIL.
There are three types of variables in compiled functions: ordinary variables, SPECIAL variables, and COMMON variables. SPECIAL and COMMON variables must be declared before compiling. Any variable that is not declared will be considered an ordinary variable.
When functions are translated into subroutines, the concept of a variable is translated into a location where an argument is stored. If the variable is an ordinary one, then a storage location for it is set up on the push-down list. Other functions cannot find this private cell, making it impossible to use it as a free variable. SPECIAL variables have the indicator SPECIAL on their property lists. Following the indicator there is a pointer to a fixed cell. When this variable is bound, the old value is saved on the push-down list, and the current value is stored in the SPECIAL cell. When it is no longer bound, the old value must be restored. When a function uses this variable free, the quantity in the SPECIAL cell is picked up. SPECIAL variables are declared by using the pseudo-function special[a], where a is a list of variable names. …… SPECIAL variables are inexpensive and will allow free communication among compiled functions. They do not increase run time significantly. SPECIAL variables cannot be communicated between the interpreter and compiled functions. COMMON variables have the flag COMMON on their property lists; however, this is only used to inform the compiler that they are COMMON, and is not needed at run time. COMMON variables are bound on an a-list by the compiled functions. When they are to be evaluated, eval is given this a-list. This happens at run time.
The use of COMMON variables will slow down any compiled function using them. However, they do provide complete communication between interpreted and compiled functions. COMMON variables are declared by common[a], where a is a list of variable names. ……
LISP is designed in such a way that all functions for which the possibility of recursion can exist are in fact recursive. This requires that all temporary stored results related to the computation that is in progress be set aside when a piece of coding is to be used recursively, and that they be later restored. This is done automatically and need not be programmed explicitly.
All saving of temporary results in LISP is performed on a linear block of storage called the push-down list. Each set of stored data that is moved onto the push-down list is in a block labeled with its size and the name of the subroutine from which it came. Since it is in the nature of recursion that the first block to be saved is always the last block to be restored, it is possible to keep the push-down list compact.
There are several ways in which to maintain the values of free variables (and, thus, the computational environment) in order to handle cases like the one above. All of the techniques used involve some increase in time. One approach makes it easy to access the value of a free variable, but relatively expensive to enter and leave a block in which a free variable is bound or is local. This approach is favored in recent implementations of LISP. We shall call it the "shallow access" approach.
Another approach is to make it relatively easy to enter and leave a block in which a free variable is bound, but relatively expensive to access a free variable. This approach is used in the classical "alist" implementation of LISP. Several Algol systems have opted for a similar approach. We shall call this the "deep access" approach. Both of these approaches allow one to modify the value of the variable as well as access it. The access time is approximately the same as modification time in both approaches.
Let us consider the "shallow access" approach first. By "shallow access" we mean that the value of a free variable can be obtained in a single fetch from memory. Since the current value may be stored in a difficult-to-determine location up in the stack, a special cell for its current value is used. In many recent LISP implementations this special value cell is stored as a property of the atom structure, and is unique to the free variable. In order to maintain the proper value in the special cell, extra work must be done every time a function or a block is entered in which the variable is an argument or is local. On entering such a function or block one usually saves the old value which is in the special cell on the stack, along with sufficient information to allow one to know from where the stack value came. On leaving such a function or block one must remember to store the value saved in the stack in the special cell.
The "deep access" approach forces one to search upward in the stack for the most recent value of the free variable. Such a search may be quite deep and slow if the free variable was bound very far up in the stack. However, this approach has the advantage of avoiding the need for an extra special value cell. In addition, we shall see later that this approach sometimes allows one to use free variables with greater ease and flexibility than the "shallow access" approach. ……
Certain LISP systems use "deep access" in interpreted functions and "shallow access" in compiled functions. If free variables in compiled functions are declared to be COMMON, their values will be stored on the "alist". In this manner one can solve the environment problem. The cost is a very slow interpreter and "deep access" to free variables.
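The two disciplines can be mocked up in a few lines (an assumed model, not drawn from any particular LISP implementation): deep access keeps bindings on an a-list and pays at lookup time, while shallow access keeps one value cell per variable and pays at block entry and exit.

```python
# Deep access: bindings live on an a-list; lookup searches most-recent-first.
def deep_bind(alist, var, val):
    return [(var, val)] + alist            # entering a block is cheap

def deep_lookup(alist, var):
    for v, val in alist:                   # access may search far "up the stack"
        if v == var:
            return val
    raise NameError(var)

# Shallow access: one value cell per variable; binding saves the old value.
cells, stack = {}, []

def shallow_bind(var, val):
    stack.append((var, cells.get(var)))    # save old cell contents on the stack
    cells[var] = val

def shallow_unbind():
    var, old = stack.pop()                 # leaving the block restores the cell
    cells[var] = old

def shallow_lookup(var):
    return cells[var]                      # a single fetch
```

Note the symmetry with the text: deep access makes binding cheap and lookup slow; shallow access makes lookup a single fetch but adds work on every entry and exit.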
There were two motivations for developing a language for the IBM 704. First, IBM was generously establishing a New England Computation Center at M.I.T. …… Second, IBM was undertaking to develop a program for proving theorems in plane geometry (based on an idea of Marvin Minsky’s), ……. ……
In connection with IBM’s plane geometry project, Nathaniel Rochester and Herbert Gelernter (on the advice of McCarthy) decided to implement a list processing language within FORTRAN, ……. This work was undertaken by Herbert Gelernter and Carl Gerberich at IBM and led to FLPL, standing for FORTRAN List Processing Language. ……
I spent the summer of 1958 at the IBM Information Research Department at the invitation of Nathaniel Rochester and chose differentiating algebraic expressions as a sample problem. It led to the following innovations beyond FLPL:
a. Writing recursive function definitions using conditional expressions. ……
b. The maplist function that forms a list of applications of a functional argument to the elements of a list. ……
c. To use functions as arguments, one needs a notation for functions, and it seemed natural to use the λ-notation of Church (1941). ……
d. The recursive definition of differentiation made no provision for erasure of abandoned list structure. ……
The implementation of LISP began in Fall 1958. ……
LISP is now the second oldest programming language in present widespread use (after FORTRAN and not counting APT, which isn’t used for programming per se).
To use functions as arguments, one needs a notation for functions, and it seemed natural to use the λ-notation of Church (1941). I didn’t understand the rest of his book, so I wasn’t tempted to try to implement his more general mechanism for defining functions. Church used higher order functionals instead of using conditional expressions. Conditional expressions are much more readily implemented on computers. ……
One mathematical consideration that influenced LISP was to express programs as applicative expressions built up from variables and constants using functions. I considered it important to make these expressions obey the usual mathematical laws allowing replacement of expressions by expressions giving the same value. The motive was to allow proofs of properties of programs using ordinary mathematical methods. This is only possible to the extent that side-effects can be avoided. Unfortunately, side-effects are often a great convenience when computational efficiency is important, and “functions” with side-effects are present in LISP. However, the so-called pure LISP is free of side-effects, and (Cartwright 1976) and (Cartwright and McCarthy 1978) show how to represent pure LISP programs by sentences and schemata in first order logic and prove their properties. ……
Free variables. In all innocence, James R. Slagle programmed the following LISP function definition and complained when it didn’t work right ……. …… In modern terminology, lexical scoping was wanted, and dynamic scoping was obtained.
I must confess that I regarded this difficulty as just a bug and expressed confidence that Steve Russell would soon fix it. He did fix it but by inventing the so-called FUNARG device that took the lexical environment along with the functional argument. Similar difficulties later showed up in Algol 60, and Russell’s turned out to be one of the more comprehensive solutions to the problem. While it worked well in the interpreter, comprehensiveness and speed seem to be opposed in compiled code, and this led to a succession of compromises. …… the interested reader is referred to (Moses 1970) as a place to start.
The data on which LISP works is the S-language. ……
The M-language defines computations on S-expressions. ……
This is computationally complete. McCarthy (1960, Sect. 6) showed that an arbitrary flowchart can be coded as mutually recursive functions.
The M-language of McCarthy (1960) is first order, as there is no provision to pass a function as argument or return a function as result.
However, McCarthy shows that M-language expressions and functions can be easily encoded as S-expressions and then defines, in the M-language, functions eval and apply that correctly interpret these S-expressions.
Thus LISP allows meta-programming, that is, treating program as data and vice versa, by appropriate uses of eval and quote. The 1960 paper gives the impression, quite strongly, that McCarthy saw this as removing any limitation stemming from the M-language itself being first order.
It was originally intended that people would write programs in the M-language, in an Algol-like notation. In practice LISP programmers wrote their code directly in the S-language form, and the M-language became a kind of ghost that hovered in the background — theoretically important but nobody used it.
In LISP 1.5 (McCarthy et al. 1962) atoms acquired property lists, which serve several purposes, and numbers appeared as another kind of atom, along with basic arithmetic functions. ……
LISP was not based on the lambda calculus, despite using the word “LAMBDA” to denote functions. At the time he invented LISP, McCarthy was aware of (Church 1941) but had not studied it. The theoretical model behind LISP was Kleene’s theory of first order recursive functions. (McCarthy made these statements, or very similar ones, in a contribution from the floor at the 1982 ACM symposium on LISP and functional programming in Pittsburgh. No written version of this exists, as far as I know.)
The M-language was first order, as already noted, but you could pass a function as a parameter by quotation, i.e. as the S-expression which encodes it. Unfortunately, this gives the wrong binding rules for free variables (dynamic instead of lexicographic). ……
McCarthy (1978) reports that this problem (wrong binding for free variables) showed up very early in a program of James Slagle. At first McCarthy assumed it was a bug and expected it to be fixed, but it actually springs from something fundamental — that meta-programming is not the same as higher order programming. Various devices were invented to get round this FUNARG problem, as it became known.
Not until SCHEME (Sussman 1975) did versions of LISP with default static binding appear. Today all versions of LISP are lambda calculus based.
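The difference that bit Slagle can be shown schematically (a deliberately simplified Python sketch, not McCarthy's or Russell's code): under dynamic binding a free variable in a functional argument is resolved in the caller's environment, while the FUNARG device packages the definition environment with the function, so the variable resolves where the function was written.

```python
def dynamic_call(fn_body, call_env):
    """Dynamic binding: free variables resolve in the CALLER's environment."""
    return fn_body(call_env)

def funarg(fn_body, def_env):
    """Russell's FUNARG idea: carry the definition environment along with
    the functional argument, so free variables resolve where it was written."""
    return lambda call_env: fn_body(def_env)  # call_env is deliberately ignored

# a "function" whose body reads the free variable X from some environment
body = lambda env: env["X"]

closed = funarg(body, {"X": "lexical"})
```

Calling `body` through `dynamic_call` with a caller environment where X is "dynamic" yields "dynamic"; calling `closed` with that same environment still yields "lexical", which is the behavior Slagle wanted.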
A conditional expression has the form (p1 → e1, …, pn → en) where the p's are propositional expressions and the e's are expressions of any kind. It may be read, “If p1 then e1, otherwise if p2 then e2, …, otherwise if pn then en,” or “p1 yields e1, …, pn yields en.”
I sent a proposal for conditional expressions to a CACM forum on what should be included in Algol 60. Because the item was short, the editor demoted it to a letter to the editor, for which CACM subsequently apologized. The notation given here was rejected for Algol 60, because it had been decided that no new mathematical notation should be allowed in Algol 60, and everything new had to be English. The if...then...else that Algol 60 adopted was suggested by John Backus.
The conditional expression [p1 → e1; …; pn → en] is translated into (COND (p1* e1*) … (pn* en*)).
In this article, we first describe a formalism for defining functions recursively. …… Next, we describe S-expressions and S-functions, give some examples, and then describe the universal S-function apply which plays the theoretical role of a universal Turing machine and the practical role of an interpreter. Then we describe the representation of S-expressions in the memory of the IBM 704 by list structures similar to those used by Newell, Shaw and Simon, and the representation of S-functions by program. Then we mention the main features of the LISP programming system for the IBM 704. Next comes another way of describing computations with symbolic expressions, and finally we give a recursive function interpretation of flow charts. ……
We shall need a number of mathematical ideas and notations concerning functions in general. Most of the ideas are well known, but the notion of conditional expression is believed to be new, and the use of conditional expressions permits functions to be defined recursively in a new and convenient way. ……
We shall first define a class of symbolic expressions in terms of ordered pairs and lists. Then we shall define five elementary functions and predicates, and build from them by composition, conditional expressions, and recursive definitions an extensive class of functions of which we shall give a number of examples. We shall then show how these functions themselves can be expressed as symbolic expressions, and we shall define a universal function apply that allows us to compute from the expression for a given function its value for given arguments. Finally, we shall define some functions with functions as arguments and give some useful examples. ……
The LISP programming system is a system for using the IBM 704 computer to compute with symbolic information in the form of S-expressions. …… The basis of the system is a way of writing computer programs to evaluate S-functions. …… In addition to the facilities for describing S-functions, there are facilities for using S-functions in programs written as sequences of statements along the lines of FORTRAN or ALGOL. ……
There are a number of ways of defining functions of symbolic expressions which are quite similar to the system we have adopted. Each of them involves three basic functions, conditional expressions, and recursive function definitions, but the class of expressions corresponding to S-expressions is different, and so are the precise definitions of the functions. We shall describe one of these variants called linear LISP. ……
Since both the usual form of computer program and recursive function definitions are universal computationally, it is interesting to display the relation between them. The translation of recursive symbolic functions into computer programs was the subject of the rest of this report. In this section we show how to go the other way, at least in principle.
The programs to be hand-compiled were written in an informal notation called M-expressions intended to resemble FORTRAN as much as possible. Besides FORTRAN-like assignment statements and go tos, the language allowed conditional expressions and the basic functions of LISP. Allowing recursive function definitions required no new notation from the function definitions allowed in FORTRAN I - only the removal of the restriction - as I recall, unstated in the FORTRAN manual - forbidding recursive definitions. The M-notation also used brackets instead of parentheses to enclose the arguments of functions in order to reserve parentheses for list-structure constants. It was intended to compile from some approximation to the M-notation, but the M-notation was never fully defined, because representing LISP functions by LISP lists became the dominant programming language when the interpreter later became available. A machine readable M-notation would have required redefinition, because the pencil-and-paper M-notation used characters unavailable on the IBM 026 key punch.
The READ and PRINT programs induced a de facto standard external notation for symbolic information, e.g. representing x + 3y + z by (PLUS X (TIMES 3 Y) Z) and (∀x)(P(x) ∨ Q(x, y)) by (ALL (X) (OR (P X) (Q X Y))).
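This notation is easy to generate mechanically. As an illustration (the tuple tree representation and the function name are assumptions of this sketch, not part of the LISP system), a small printer can turn an operator tree into the prefix list form:

```python
def to_sexp(e):
    """Print an operator tree, e.g. ("PLUS", "X", ...), in the prefix
    list notation induced by READ and PRINT."""
    if isinstance(e, tuple):
        op, *args = e
        return "(" + " ".join([op] + [to_sexp(a) for a in args]) + ")"
    return str(e)
```

For example, `to_sexp(("PLUS", "X", ("TIMES", 3, "Y"), "Z"))` yields the string `(PLUS X (TIMES 3 Y) Z)`.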
……
Many people participated in the initial development of LISP, and I haven’t been able to remember all their contributions and must settle, at this writing, for a list of names. I can remember Paul Abrahams, Robert Brayton, Daniel Edwards, Patrick Fischer, Phyllis Fox, Saul Goldberg, Timothy Hart, Louis Hodes, Michael Levin, David Luckham, Klim Maling, Marvin Minsky, David Park, Nathaniel Rochester of IBM, and Steve Russell.
It was McCarthy's fortune that he found a financial basis for his work: The MIT Artificial Intelligence Project was founded on the first of September, 1958. McCarthy became an assistant professor in the department of Electrical Engineering (his branch was communication sciences), Minsky was already assistant professor in the department of mathematics. They received support from the Research Laboratory for Electronics in the form of the two programmers, a secretary, a typewriter and six graduate students.
When Maling started his job on the first of September, he met S. Russell, who already seemed to have something to do. Maling remembers: "We were housed in an office in the basement of one of MIT's buildings, near the computer center. Initially there was John, Steve Russell, Carol - a buxom Boston Irish secretary - and I. Steve was a brilliant programmer who had worked for John at Dartmouth (I believe). He was what we then called a 'Programming bum' ...".
Another way to show that LISP was neater than Turing machines was to write a universal LISP function and show that it is briefer and more comprehensible than the description of a universal Turing machine. This was the LISP function eval[e,a], which computes the value of a LISP expression e, the second argument a being a list of assignments of values to variables. (a is needed to make the recursion work.) Writing eval required inventing a notation representing LISP functions as LISP data, and such a notation was devised for the purposes of the paper with no thought that it would be used to express LISP programs in practice. Logical completeness required that the notation used to express functions used as functional arguments be extended to provide for recursive functions, and the LABEL notation was invented by Nathaniel Rochester for that purpose. ……
S.R. Russell noticed that eval could serve as an interpreter for LISP, promptly hand-coded it, and we now had a programming language with an interpreter. ……
Once the eval interpreter was programmed, it became available to the programmer, and it was especially easy to use because it interprets LISP programs expressed as LISP data. In particular, eval made possible FEXPRs and FSUBRs, which are “functions” that are not given their actual arguments but are given the expressions that evaluate to the arguments and must call eval themselves when they want the expressions evaluated. The main application of this facility is to functions that don't always evaluate all of their arguments; they evaluate some of them first, and then decide which others to evaluate. This facility resembles Algol's call-by-name but is more flexible, because eval is explicitly available. A first order logic treatment of “extensional” FEXPRs and FSUBRs now seems possible.
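A drastically compressed version of the idea behind eval[e,a] can be written in Python (an assumption-laden approximation, not McCarthy's definition: it supports only QUOTE, CAR, CDR, CONS, and LAMBDA application, with lists standing in for S-expressions). The association list a carries the variable bindings that make the recursion work.

```python
def assoc(x, a):
    """Look x up on the association list a, a list of (name, value) pairs."""
    return next(v for k, v in a if k == x)

def eval_(e, a):
    if isinstance(e, str):                     # a variable: find its value on a
        return assoc(e, a)
    op = e[0]
    if op == "QUOTE":                          # (QUOTE x) -> x, unevaluated
        return e[1]
    if op == "CAR":
        return eval_(e[1], a)[0]
    if op == "CDR":
        return eval_(e[1], a)[1:]
    if op == "CONS":
        return [eval_(e[1], a)] + eval_(e[2], a)
    if isinstance(op, list) and op[0] == "LAMBDA":
        # ((LAMBDA (v1 ...) body) e1 ...): evaluate the arguments, bind
        # them on the front of a, and evaluate the body in that extension
        params, body = op[1], op[2]
        args = [eval_(arg, a) for arg in e[1:]]
        return eval_(body, list(zip(params, args)) + a)
    raise ValueError(op)
```

Even this fragment shows why a is needed: every LAMBDA application extends the association list, and variable references are resolved against it recursively.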
I invented conditional expressions in connection with a set of chess legal move routines I wrote in FORTRAN for the IBM 704 at M.I.T. during 1957-58. ……
A paper defining conditional expressions and proposing their use in Algol was sent to the Communications of the ACM but was arbitrarily demoted to a letter to the editor, because it was very short.
Julia builds upon the lineage of mathematical programming languages, but also borrows much from popular dynamic languages, including Lisp, Perl, Python, Lua, and Ruby. …… some advantages of Julia over comparable systems include: …… Lisp-like macros and other metaprogramming facilities.
LISP and APL were two early influences, as was Stephen Wolfram's 1981 SMP symbolic computation language.
There were two motivations for developing a language for the IBM 704. First, IBM was generously establishing a New England Computation Center at M.I.T. …… Second, IBM was undertaking to develop a program for proving theorems in plane geometry (based on an idea of Marvin Minsky’s), ……. ……
In connection with IBM’s plane geometry project, Nathaniel Rochester and Herbert Gelernter (on the advice of McCarthy) decided to implement a list processing language within FORTRAN, ……. This work was undertaken by Herbert Gelernter and Carl Gerberich at IBM and led to FLPL, standing for FORTRAN List Processing Language. ……
I spent the summer of 1958 at the IBM Information Research Department at the invitation of Nathaniel Rochester and chose differentiating algebraic expressions as a sample problem. It led to the following innovations beyond FLPL:
a. Writing recursive function definitions using conditional expressions. ……
b. The maplist function that forms a list of applications of a functional argument to the elements of a list. ……
c. To use functions as arguments, one needs a notation for functions, and it seemed natural to use the λ-notation of Church (1941). ……
d. The recursive definition of differentiation made no provision for erasure of abandoned list structure. ……
The implementation of LISP began in Fall 1958. ……
LISP is now the second oldest programming language in present widespread use (after FORTRAN and not counting APT, which isn’t used for programming per se).
Lisp is a survivor, having been in use for about a quarter of a century. Among the active programming languages only Fortran has had a longer life.
Contrary to popular belief, LISP was not originally derived from Church's λ-calculus [Church] [LISP History]. The earliest LISP did not have a well-defined notion of free variables or procedural objects. Early LISP programs were similar to recursion equations, defining functions on symbolic expressions ("S-expressions"). They differed from the equations of pure recursive function theory [Kleene] by introducing the conditional expression construction (often called the "McCarthy conditional"), to avoid "pattern-directed invocation". That is, in recursive function theory one would define the factorial function by the following two equations:
factorial(0) = 1
factorial(successor(x)) = successor(x) * factorial(x)
In early LISP, however, one would have written:
factorial[x] = [x=0 → 1; T → x*factorial[x-1]]
where "[a → b; T → c]" essentially means "if a then b else c". The recursive function theory version depends on selecting which of two equations to use by matching the argument to the left-hand sides (such a discipline is actually used in the PROLOG language [Warren]); the early LISP version represents this decision as a conditional expression.
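The contrast is small but real; in Python terms (an illustrative paraphrase, not early LISP itself), the recursion-equation version dispatches on the shape of the argument, while the McCarthy-conditional version makes the case selection an expression inside a single definition.

```python
def factorial_equations(n):
    # recursion-equation style: select the matching equation by the
    # form of the argument (0, or successor(x) for some x)
    if n == 0:                           # factorial(0) = 1
        return 1
    x = n - 1                            # n = successor(x)
    return n * factorial_equations(x)    # factorial(successor(x)) = successor(x)*factorial(x)

def factorial_cond(x):
    # early-LISP style: factorial[x] = [x=0 -> 1; T -> x*factorial[x-1]]
    return 1 if x == 0 else x * factorial_cond(x - 1)
```

Both compute the same function; the difference is where the case analysis lives.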
The theory of recursion equations deals with functions over the natural numbers. In LISP, however, one is interested in being able to manipulate algebraic expressions, programs, and other symbolic expressions as data structures. While such expressions can be encoded as numbers (using the technique of "arithmetization" developed by Kurt Gödel), such an encoding is not very convenient. Instead, a new kind of data called "S-expressions" (for "symbolic expressions") is introduced specifically to provide convenient encodings. S-expressions can be defined by a set of formal inductive axioms analogous to the Peano postulates used to define natural numbers.
To use functions as arguments, one needs a notation for functions, and it seemed natural to use the λ-notation of Church (1941). I didn’t understand the rest of his book, so I wasn’t tempted to try to implement his more general mechanism for defining functions. Church used higher order functionals instead of using conditional expressions. Conditional expressions are much more readily implemented on computers. ……
One mathematical consideration that influenced LISP was to express programs as applicative expressions built up from variables and constants using functions. I considered it important to make these expressions obey the usual mathematical laws allowing replacement of expressions by expressions giving the same value. The motive was to allow proofs of properties of programs using ordinary mathematical methods. This is only possible to the extent that side-effects can be avoided. Unfortunately, side-effects are often a great convenience when computational efficiency is important, and “functions” with side-effects are present in LISP. However, the so-called pure LISP is free of side-effects, and (Cartwright 1976) and (Cartwright and McCarthy 1978) show how to represent pure LISP programs by sentences and schemata in first order logic and prove their properties. ……
Free variables. In all innocence, James R. Slagle programmed the following LISP function definition and complained when it didn’t work right ……. …… In modern terminology, lexical scoping was wanted, and dynamic scoping was obtained.
I must confess that I regarded this difficulty as just a bug and expressed confidence that Steve Russell would soon fix it. He did fix it but by inventing the so-called FUNARG device that took the lexical environment along with the functional argument. Similar difficulties later showed up in Algol 60, and Russell’s turned out to be one of the more comprehensive solutions to the problem. While it worked well in the interpreter, comprehensiveness and speed seem to be opposed in compiled code, and this led to a succession of compromises. …… the interested reader is referred to (Moses 1970) as a place to start.
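The difference Slagle ran into can be illustrated with a small Python sketch (hypothetical names throughout; Python itself is lexically scoped, so dynamic lookup is simulated with an explicit stack of environment frames):

```python
# Lexical scoping: a closure carries its definition environment along,
# which is essentially what Russell's FUNARG device arranged.
def make_adder(n):
    return lambda x: x + n

# Dynamic scoping, simulated: a free variable is looked up in the most
# recently pushed frame that binds it.
env_stack = [{"n": 100}]

def lookup(name):
    for frame in reversed(env_stack):
        if name in frame:
            return frame[name]
    raise NameError(name)

def adder_dynamic(x):
    return x + lookup("n")   # n belongs to whoever bound it last

def call_with_local_n(f):
    env_stack.append({"n": 1})   # the caller rebinds n
    try:
        return f(10)
    finally:
        env_stack.pop()

lexical_result = call_with_local_n(make_adder(100))  # uses the n captured at definition: 110
dynamic_result = call_with_local_n(adder_dynamic)    # picks up the caller's n: 11
```

Lexical scoping gives 110 because the functional argument remembers the n of its creator; dynamic scoping gives 11 because the caller's binding of n shadows it.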
The data on which LISP works is the S-language. ……
The M-language defines computations on S-expressions. ……
This is computationally complete. McCarthy (1960, Sect. 6) showed that an arbitrary flowchart can be coded as mutually recursive functions.
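As an illustrative sketch (not McCarthy's own construction): each program point of a flowchart becomes one function, and each arc becomes a call. Here a simple summing loop in Python:

```python
# Flowchart: A: if n == 0 goto EXIT else goto B;  B: acc += n; n -= 1; goto A

def point_a(n, acc):
    if n == 0:
        return acc           # the EXIT box
    return point_b(n, acc)   # arc A -> B

def point_b(n, acc):
    return point_a(n - 1, acc + n)   # do the work, then arc B -> A
```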
The M-language of McCarthy (1960) is first order, as there is no provision to pass a function as argument or return a function as result.
However, McCarthy shows that M-language expressions and functions can be easily encoded as S-expressions, and then defines in the M-language the functions eval and apply that correctly interpret these S-expressions.
Thus LISP allows meta-programming, that is, treating program as data and vice versa, by appropriate uses of eval and quote. The 1960 paper gives the impression, quite strongly, that McCarthy saw this as removing any limitation stemming from the M-language itself being first order.
It was originally intended that people would write programs in the M-language, in an Algol-like notation. In practice LISP programmers wrote their code directly in the S-language form, and the M-language became a kind of ghost that hovered in the background — theoretically important but nobody used it.
In LISP 1.5 (McCarthy et al. 1962) atoms acquired property lists, which serve several purposes, and numbers appeared as another kind of atom, along with basic arithmetic functions. ……
LISP was not based on the lambda calculus, despite using the word “LAMBDA” to denote functions. At the time he invented LISP, McCarthy was aware of (Church 1941) but had not studied it. The theoretical model behind LISP was Kleene’s theory of first order recursive functions. (McCarthy made these statements, or very similar ones, in a contribution from the floor at the 1982 ACM symposium on LISP and functional programming in Pittsburgh. No written version of this exists, as far as I know.)
The M-language was first order, as already noted, but you could pass a function as a parameter by quotation, i.e. as the S-expression which encodes it. Unfortunately, this gives the wrong binding rules for free variables (dynamic instead of lexicographic). ……
McCarthy (1978) reports that this problem (wrong binding for free variables) showed up very early in a program of James Slagle. At first McCarthy assumed it was a bug and expected it to be fixed, but it actually springs from something fundamental — that meta-programming is not the same as higher order programming. Various devices were invented to get round this FUNARG problem, as it became known.
Not until SCHEME (Sussman 1975) did versions of LISP with default static binding appear. Today all versions of LISP are lambda calculus based.
A conditional expression has the form

(p1 → e1, …, pn → en)

where the p's are propositional expressions and the e's are expressions of any kind. It may be read, “If p1 then e1, otherwise if p2 then e2, …, otherwise if pn then en,” or “p1 yields e1, …, pn yields en.”
I sent a proposal for conditional expressions to a CACM forum on what should be included in Algol 60. Because the item was short, the editor demoted it to a letter to the editor, for which CACM subsequently apologized. The notation given here was rejected for Algol 60, because it had been decided that no new mathematical notation should be allowed in Algol 60, and everything new had to be English. The if...then...else that Algol 60 adopted was suggested by John Backus.
The conditional expression [p1 → e1; …; pn → en] is translated into (COND (p1* e1*) … (pn* en*)).
Mathematically, it is possible to have functions as arguments of other functions. For example, in arithmetic one could define a function operate[op;a;b], where op is a functional argument that specifies which arithmetic operation is to be performed on a and b. ……
In LISP, functional arguments are extremely useful. A very important function with a functional argument is maplist. Its M-expression definition is
maplist[x;fn]=[null[x]→NIL;
T→cons[fn[x];maplist[cdr[x];fn]]]
……
The functional argument is, of course, a function translated into an S-expression. It is bound to the variable fn and is then used whenever fn is mentioned as a function. The S-expression for maplist itself is as follows:
(MAPLIST (LAMBDA (X FN) (COND ((NULL X) NIL)
(T (CONS (FN X) (MAPLIST (CDR X) FN))) )))
……
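A direct transcription into Python (using Python lists purely for illustration) makes visible a detail a modern reader may miss: maplist applies fn to the successive tails of the list, not to its elements.

```python
def maplist(x, fn):
    # maplist[x;fn] = [null[x] -> NIL; T -> cons[fn[x]; maplist[cdr[x]; fn]]]
    if not x:        # null[x]
        return []    # NIL
    # fn is applied to the whole remaining list, then we recur on cdr[x]
    return [fn(x)] + maplist(x[1:], fn)

# fn sees (A B C), then (B C), then (C):
tail_lengths = maplist(["A", "B", "C"], len)   # [3, 2, 1]
```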
Using maplist, we define change by
change[a]=maplist[a;λ[[j];cons[car[j];X]]]
This is not a valid M-expression as defined syntactically in section 1.5 because a function appears where a form is expected. This can be corrected by modifying the rule defining an argument so as to include functional arguments:
<argument> ::= <form> | <function>
We also need a special rule to translate functional arguments into S-expressions. If fn is a function used as an argument, then it is translated into (FUNCTION fn*).
Example
(CHANGE (LAMBDA (A) (MAPLIST A (FUNCTION
(LAMBDA (J) (CONS (CAR J) (QUOTE X))) )))
An examination of evalquote shows that QUOTE will work instead of FUNCTION, provided that there are no free variables present.
Syntax:
function
name ⇒ function
Arguments and Values:
name - a function name or lambda expression.
function - a function object.
Description:
The value of function is the functional value of name in the current lexical environment.
……
Notes:
The notation #'name may be used as an abbreviation for (function name).
Syntax:
funcall
function &rest args ⇒ result*
Arguments and Values:
function - a function designator.
args - arguments to the function.
results - the values returned by the function.
Description:
funcall applies function to args. If function is a symbol, it is coerced to a function as if by finding its functional value in the global environment.
A function can be simply a name. In this case its meaning must be previously understood. A function may be defined by using the lambda notation and establishing a correspondence between the arguments and the variables used in a form. If the function is recursive, it must be given a name by using a label. ……
When a symbol stands for a function, the situation is similar to that in which a symbol stands for an argument. When a function is recursive, it must be given a name. This is done by means of the form LABEL, which pairs the name with the function definition on the a-list. The name is then bound to the function definition, just as a variable is bound to its value.
In actual practice, LABEL is seldom used. It is usually more convenient to attach the name to the definition in a uniform manner. This is done by putting on the property list of the name the symbol EXPR followed by the function definition. The pseudo-function define used at the beginning of this section accomplishes this. When apply interprets a function represented by an atomic symbol, it searches the p-list of the atomic symbol before searching the current a-list. Thus a define will override a LABEL.
The fact that most functions are constants defined by the programmer, and not variables that are modified by the program, is not due to any weakness of the system. On the contrary, it indicates a richness of the system which we do not know how to exploit very well.
In this article, we first describe a formalism for defining functions recursively. …… Next, we describe S-expressions and S-functions, give some examples, and then describe the universal S-function apply which plays the theoretical role of a universal Turing machine and the practical role of an interpreter. Then we describe the representation of S-expressions in the memory of the IBM 704 by list structures similar to those used by Newell, Shaw and Simon, and the representation of S-functions by program. Then we mention the main features of the LISP programming system for the IBM 704. Next comes another way of describing computations with symbolic expressions, and finally we give a recursive function interpretation of flow charts. ……
We shall need a number of mathematical ideas and notations concerning functions in general. Most of the ideas are well known, but the notion of conditional expression is believed to be new, and the use of conditional expressions permits functions to be defined recursively in a new and convenient way. ……
We shall first define a class of symbolic expressions in terms of ordered pairs and lists. Then we shall define five elementary functions and predicates, and build from them by composition, conditional expressions, and recursive definitions an extensive class of functions of which we shall give a number of examples. We shall then show how these functions themselves can be expressed as symbolic expressions, and we shall define a universal function apply that allows us to compute from the expression for a given function its value for given arguments. Finally, we shall define some functions with functions as arguments and give some useful examples. ……
The LISP programming system is a system for using the IBM 704 computer to compute with symbolic information in the form of S-expressions. …… The basis of the system is a way of writing computer programs to evaluate S-functions. …… In addition to the facilities for describing S-functions, there are facilities for using S-functions in programs written as sequences of statements along the lines of FORTRAN or ALGOL. ……
There are a number of ways of defining functions of symbolic expressions which are quite similar to the system we have adopted. Each of them involves three basic functions, conditional expressions, and recursive function definitions, but the class of expressions corresponding to S-expressions is different, and so are the precise definitions of the functions. We shall describe one of these variants called linear LISP. ……
Since both the usual form of computer program and recursive function definitions are universal computationally, it is interesting to display the relation between them. The translation of recursive symbolic functions into computer programs was the subject of the rest of this report. In this section we show how to go the other way, at least in principle.
Two general approaches were taken in order to have programming languages with fully specified semantics: (i) Better specification methods were developed that were adequate to fully describe existing large programming languages such as PL/1. (ii) New languages were developed with clean mathematical structure that were more amenable to formal description. McCarthy pioneered the latter approach in basing the LISP 1.5 programming language on a simpler functional language, sometimes called “pure LISP” or M-expressions, that was defined in a mathematical style, independent of a particular machine or implementation.
Pure LISP allows the definition and evaluation of functions over S-expressions. The lambda notation for functional abstraction is borrowed from Church’s lambda calculus, but otherwise there is little similarity between the two systems. Pure LISP has no higher-order functions, and call-by-value evaluation order is implicitly assumed. Two special constructs, conditional expressions and the label operator, allow recursive functions to be defined. Limited as it is, pure LISP is nevertheless powerful enough to express all partial recursive functions and hence provides an adequate basis for a theory of computation.
The LISP 1.5 programming language extends pure LISP in many ways that make it more useful in practice but at the same time tend to destroy its clean mathematical properties. Its semantics is defined by an interpreter written in a mixture of pure LISP and English. The distinction between programs and data is blurred. Higher-order functions, assignment, and a global symbol table are added.
The programs to be hand-compiled were written in an informal notation called M-expressions intended to resemble FORTRAN as much as possible. Besides FORTRAN-like assignment statements and go tos, the language allowed conditional expressions and the basic functions of LISP. Allowing recursive function definitions required no new notation from the function definitions allowed in FORTRAN I - only the removal of the restriction - as I recall, unstated in the FORTRAN manual - forbidding recursive definitions. The M-notation also used brackets instead of parentheses to enclose the arguments of functions in order to reserve parentheses for list-structure constants. It was intended to compile from some approximation to the M-notation, but the M-notation was never fully defined, because representing LISP functions by LISP lists became the dominant programming language when the interpreter later became available. A machine readable M-notation would have required redefinition, because the pencil-and-paper M-notation used characters unavailable on the IBM 026 key punch.
The READ and PRINT programs induced a de facto standard external notation for symbolic information, e.g. representing x + 3y + z by (PLUS X (TIMES 3 Y) Z) and (∀x)(P(x) ∨ Q(x, y)) by (ALL (X) (OR (P X) (Q X Y))).
……
Many people participated in the initial development of LISP, and I haven’t been able to remember all their contributions and must settle, at this writing, for a list of names. I can remember Paul Abrahams, Robert Brayton, Daniel Edwards, Patrick Fischer, Phyllis Fox, Saul Goldberg, Timothy Hart, Louis Hodes, Michael Levin, David Luckham, Klim Maling, Marvin Minsky, David Park, Nathaniel Rochester of IBM, and Steve Russell.
It was McCarthy's fortune that he found a financial basis for his work: The MIT Artificial Intelligence Project was founded on the first of September, 1958. McCarthy became an assistant professor in the department of Electrical Engineering (his branch was communication sciences), Minsky was already assistant professor in the department of mathematics. They received support from the Research Laboratory for Electronics in the form of the two programmers, a secretary, a typewriter and six graduate students.
When Maling started his job on the first of September he met S. Russell, who seemed already to have something to do. Maling remembers: "We were housed in an office in the basement of one of MIT's buildings, near the computer center. Initially there was John, Steve Russell, Carol - a buxom Boston Irish secretary - and I. Steve was a brilliant programmer who had worked for John at Dartmouth (I believe). He was what we then called a 'Programming bum' ...".
As far as we know the first universal function was indeed apply - eval was not present at all, or not seen to be important. When McCarthy was working on this function S. Russell saw it and suggested translating it by hand - as he had done so often - and adding it to the program system. McCarthy recalls: "… this EVAL was written and published in the paper and Steve Russell said, look, why don't I program this EVAL and you remember the interpreter, and I said to him, ho, ho, you're confusing theory with practice, this EVAL is intended for reading not for computing. But he went ahead and did it. That is, he compiled the EVAL in my paper into 704 machine code, fixing bugs, and then advertised this as a LISP interpreter, which it certainly was, so at that point LISP had essentially the form that it has today, the S-expression form ...".
Another way to show that LISP was neater than Turing machines was to write a universal LISP function and show that it is briefer and more comprehensible than the description of a universal Turing machine. This was the LISP function eval[e,a], which computes the value of a LISP expression e - the second argument a being a list of assignments of values to variables. (a is needed to make the recursion work.) Writing eval required inventing a notation representing LISP functions as LISP data, and such a notation was devised for the purposes of the paper with no thought that it would be used to express LISP programs in practice. Logical completeness required that the notation used to express functions used as functional arguments be extended to provide for recursive functions, and the LABEL notation was invented by Nathaniel Rochester for that purpose. ……
S.R. Russell noticed that eval could serve as an interpreter for LISP, promptly hand coded it, and we now had a programming language with an interpreter. ……
Once the eval interpreter was programmed, it became available to the programmer, and it was especially easy to use because it interprets LISP programs expressed as LISP data. In particular, eval made possible FEXPRs and FSUBRs, which are “functions” that are not given their actual arguments but are given the expressions that evaluate to the arguments and must call eval themselves when they want the expressions evaluated. The main application of this facility is to functions that don't always evaluate all of their arguments; they evaluate some of them first, and then decide which others to evaluate. This facility resembles Algol's call-by-name but is more flexible, because eval is explicitly available. A first order logic treatment of “extensional” FEXPRs and FSUBRs now seems possible.
evalquote[fn;args]. When evalquote is given a function and a list of arguments for that function, it computes the value of the function applied to the arguments. …… "fn" of the function evalquote must be an S-expression. Since we have been writing functions as M-expressions, it is necessary to translate them into S-expressions. …… evalquote is defined by using two main functions, called eval and apply. apply handles a function and its arguments, while eval handles forms. Each of these functions also has another argument that is used as an association list for storing the values of bound variables and function names. …… When apply encounters a function beginning with LAMBDA, the list of variables is paired with the list of arguments and added to the front of the a-list. During the evaluation of the function, variables may be encountered. They are evaluated by looking them up on the a-list. If a variable has been bound several times, the last or most recent value is used. The part of the interpreter that does this is called eval. The following example will illustrate this discussion. Suppose the interpreter is given the following doublet:

fn: (LAMBDA (X Y) (CONS X Y))
args: (A B)

evalquote will give these arguments to apply. (Look at the universal function of Section I.)

apply[(LAMBDA (X Y) (CONS X Y)); (A B); NIL]

apply will bind the variables and give the function and a-list to eval.

eval[(CONS X Y); ((X . A) (Y . B))]

eval will evaluate the variables and give it to cons.

cons[A;B] = (A . B)

apply[CONS;(A B);((X . A) (Y . B))]
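The doublet walkthrough above can be miniaturized in Python (an illustrative sketch with invented representations: tuples for S-expressions, a list of pairs for the a-list; only QUOTE, COND, CONS, and LAMBDA are handled):

```python
def lisp_eval(form, alist):
    if isinstance(form, str):          # a variable: look it up on the a-list;
        for name, value in alist:      # the most recent binding wins
            if name == form:
                return value
        raise NameError(form)
    op = form[0]
    if op == "QUOTE":
        return form[1]
    if op == "COND":
        for test, result in form[1:]:
            if lisp_eval(test, alist):
                return lisp_eval(result, alist)
        return None
    args = [lisp_eval(a, alist) for a in form[1:]]
    return lisp_apply(op, args, alist)

def lisp_apply(fn, args, alist):
    if fn == "CONS":
        return (args[0], args[1])      # a dotted pair (A . B)
    if isinstance(fn, tuple) and fn[0] == "LAMBDA":
        params, body = fn[1], fn[2]
        # pair the variables with the arguments on the front of the a-list
        return lisp_eval(body, list(zip(params, args)) + list(alist))
    raise ValueError(fn)

# apply[(LAMBDA (X Y) (CONS X Y)); (A B); NIL]
pair = lisp_apply(("LAMBDA", ("X", "Y"), ("CONS", "X", "Y")), ["A", "B"], [])
# pair is the dotted pair (A . B)
```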
…… LAMBDA that binds it. When a variable always has a certain value regardless of the current a-list, it will be called a constant. This is accomplished by means of the property list (p-list) of the variable symbol. Every atomic symbol has a p-list. When the p-list contains the indicator APVAL, then the symbol is a constant and the next item on the list is the value. eval searches p-lists before a-lists when evaluating variables, thus making it possible to set constants. …… NIL is an example of this. It can be evaluated repeatedly and will still be NIL. T, F, NIL, and other constants cannot be used as variables. ……

(PROG, list of program variables, sequence of statements and atomic symbols …) …… LAMBDA. The value of each program variable is NIL until it has been set to something else. To set a program variable, use the form SET. To set variable PI to 3.14 write (SET (QUOTE PI) 3.14). SETQ is like SET except that it quotes its first argument. Thus (SETQ PI 3.14). SETQ is usually more convenient. SET and SETQ can change variables that are on the a-list from higher level functions. The value of SET or SETQ is the value of its second argument. ……

…… 77777₈ (i.e., minus 1) as the first element of the list. The rest of the list contains various properties of the atomic symbol. Each property is preceded by an atomic symbol which is called its indicator. Some of the indicators are:

PNAME - the BCD print name of the atomic symbol for input-output use.
EXPR - S-expression defining a function whose name is the atomic symbol on whose property list the EXPR appears.
SUBR - Function defined by a machine language subroutine.
APVAL - Permanent value for the atomic symbol considered as a variable.

NIL has two things on its property list - its PNAME, and an APVAL that gives it a value of NIL. Its property list looks like this:

[Box diagram: NIL's property list is a chain of cells beginning with -1, then the indicator APVAL pointing down to a value cell whose car points back to the atom NIL itself, then the indicator PNAME pointing down to a cell referencing the BCD word NIL???.] ……
EXPR points to an S-expression defining a function. The function define puts EXPRs on property lists. After defining ff, its property list would look like this:

[Box diagram: ff's property list is a chain of cells beginning with -1, then the indicator EXPR pointing down to the S-expression (LAMBDA (X) (COND …)), then the indicator PNAME pointing down to a cell referencing the BCD word FF????.] ……

…… -1 in the address, certain bits in the tag which specify that it is a number and what type it is, and a pointer to the number itself in the decrement of this word. 15 is represented as follows:

.----.-.---.  .--------------.
| -1 |1|   |->| 000000000017 |
`----'-'---'  `--------------'

……
special[x] : SUBR pseudo-function. The list x contains the names of variables that are to be declared SPECIAL. ……
common[x] : SUBR pseudo-function. The list x contains the names of variables that are to be declared COMMON. ……
The PROG feature is an FSUBR coded into the system. ……
When a set or a setq is encountered, the name of the variable is located on the a-list. The value of the variable (or cdr of the pair) is actually replaced with the new value. ……
The following points should be noted concerning declared variables.
1. Program variables follow the same rules as λ variables do.
a. If a variable is purely local, it need not be declared.
b. Special variables can be used as free variables in compiled functions. They may be set at a lower level than that at which they are bound.
c. Common program variables maintain complete communication between compiled programs and the interpreter.
2. set, as distinct from setq, can only be used to set common variables.
The purpose of the LISP Compiler is to replace S-expression definitions of functions with efficient machine language subroutines. A subroutine can be expected to run about 40 times as fast as the interpreter can execute the same function from its S-expression definition. Subroutines typically take 70-80 per cent of the storage required by their corresponding S-expressions.
SCHEME is a dialect of LISP. It is an expression-oriented, applicative order, interpreter-based language which allows one to manipulate programs as data.
It differs from most current dialects of LISP in that it closes all lambda-expressions in the environment of their definition or declaration, rather than in the execution environment. This has the consequence that variables are normally lexically scoped, as in ALGOL. However, in contrast with ALGOL, SCHEME treats procedures as a first-class data type. They can be the values of variables, the returned values of procedures, and components of data structures.
Another difference from LISP is that SCHEME is implemented in such a way that tail-recursions execute without net growth of the interpreter stack. The effect of this is that a procedure call behaves like a GOTO and thus procedure calls can be used to implement iterations, as in PLASMA.
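The "procedure call behaves like a GOTO" property can be imitated in Python, which does not eliminate tail calls, by a small trampoline (an illustration of the property, not of SCHEME's implementation):

```python
def countdown(n):
    # Each tail call is returned as a thunk instead of being made directly.
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)

def trampoline(step):
    # The driver loop plays the role of GOTO: the stack stays flat
    # no matter how many "calls" the iteration makes.
    while callable(step):
        step = step()
    return step

result = trampoline(countdown(100_000))   # far beyond Python's recursion limit
```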
The PDP-6 project started in early 1963, as a 24-bit machine. It grew to 36 bits for LISP, a design goal.
Micro-Planner is an implementation of a subset of Carl Hewitt's language, PLANNER, by Gerald Jay Sussman, Terry Winograd, and Eugene Charniak on the AI group computer in LISP.
In this history of the evolution of Lisp, we have seen that Lisp seems to have a more elaborate and complex history than languages with wider usage. It would seem that almost every little research group has its own version of Lisp and there would appear to be as many Lisps as variations on language concepts. It is natural to ask what is so special or different about Lisp that explains it.
There are six basic reasons: ……
Its theoretical foundations. Lisp was founded on the footing of recursive function theory and the theory of computability. The work on Scheme aligned it with Church's lambda calculus and denotational semantics. Its purest form is useful for mathematical reasoning and proof. ……
Its expressiveness. Lisp has proved itself concerned more with expressiveness than anything else. ……
Its malleability. It is easy with Lisp to experiment with new language features, because it is possible to extend Lisp in such a way that the extensions are indistinguishable to users from the base language. Primarily this is accomplished through the use of macros, which have been part of Lisp since 1963 [Hart 1963]. ……
Its interactive and incremental nature. It is easy to explore the solutions to programming problems in Lisp, because it is easy to implement part of a solution, test it, modify it, change design, and debug the changes. ……
Its operating system facilities. Many Lisp implementations provide facilities reminiscent of operating systems: a command processor, an automatic storage management facility, file management, display (windows, graphics, mouse) facilities, multitasking, a compiler, an incremental (re)linker/loader, a symbolic debugger, performance monitoring, and sometimes multiprocessing. ……
Its people. Of course, languages do not diversify themselves; people diversify languages. ……Lisp is the language of artificial intelligence, among other things. …… Another attraction is that Lisp is a language of experts, which for our purposes means that Lisp is not a language designed for inexpert programmers to code robust reliable software.
define. …… A pseudo-function is a function that is executed for its effect on the system in core memory, as well as for its value. define causes these functions to be defined and available within the system. Its value is a list of the functions defined ……. ……

define[x] : EXPR pseudo-function

The argument of define, x, is a list of pairs

((u1 v1) (u2 v2) … (un vn))

where each u is a name and each v is a λ-expression for a function. For each pair, define puts an EXPR on the property list for u pointing to v. The function define puts things on at the front of the property list. The value of define is the list of u's.

define[x] = deflist[x;EXPR]

deflist[x;ind] : EXPR pseudo-function

deflist is a more general defining function. Its first argument is a list of pairs as for define. Its second argument is the indicator that is to be used. After deflist has been executed with (ui vi) among its first argument, the property list of ui will begin:

      .----.---.   .---.---.   .---.---.
uᵢ -->| -1 |   |-->|IND|   |-->|   |   |--> - - -
      `----'---'   `---'---'   `---'---'
                                 |
                                 V
                                 vᵢ

If deflist or define is used twice on the same object with the same indicator, the old value will be replaced by the new one.
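The define/deflist mechanics can be mimicked in Python with a dictionary of property lists (names and layout are my own, for illustration). New properties go on the front, and lookup takes the first matching indicator, so a redefinition observably replaces the old value, as the manual describes.

```python
property_lists = {}   # atom name -> list of (indicator, value) pairs

def deflist(pairs, indicator):
    for name, value in pairs:
        # put the new property on the front of the property list
        property_lists.setdefault(name, []).insert(0, (indicator, value))
    return [name for name, _ in pairs]   # the value is the list of names

def define(pairs):
    return deflist(pairs, "EXPR")        # define[x] = deflist[x;EXPR]

def get_prop(name, indicator):
    # first match wins, so newer definitions shadow older ones
    for ind, value in property_lists.get(name, []):
        if ind == indicator:
            return value
    return None

define([("DOUBLE", lambda x: 2 * x)])
```

For example, get_prop("DOUBLE", "EXPR")(21) yields 42, and a second define of DOUBLE makes the new definition the one found on lookup.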
DEFUN
Special Form(DEFUN namespec . definitionspec)
DEFUN
offers a way of associating a functional definition with a symbol. ……DEFUN
can be used to define both normal functions and special forms (fexprs and macros). ……
;; Normal expr or lexpr function definitions
(DEFUN name bvl . body)
……
Ifname
is a symbol andbvl
is a list of symbols which is free of&
keywords (to be described below), the definition is an expr or lexpr definition. In Maclisp, this would be essentially equivalent to writing
(DEFPROP name (LAMBDA bvl . body) EXPR)
.
…… Note also that the keywordEXPR
is used for both expr and lexpr definitions. The only distinction between expr and lexpr definitions is whether thebvl
is a symbol or a list, but they are specified in the same way and looked up from the same property. ……
A fexpr is a function for which the formal parameters are not evaluated. The form of a fexpr definition is:
(DEFUN name FEXPR (sym) . body)
.
The lambda expression which describes a fexpr should expect to receive exactly one argument which will be the list of (unevaluated) arguments given in the call to the fexpr, so usually there is only one bound variable (sym
) in the bound variable list. ……DEFUN
can also be used to instantiate aMACRO
definition. …… The syntax for a macro definition is
(DEFUN name MACRO (sym) . body)
,
wheresym
will become bound to the whole macro form to be expanded (including the name). Note that this argument convention is different than fexprs, which only receive thecdr
of the call form. ……DEFUN
was introduced into Maclisp in March, 1969. Although now it is recognized as the standard function defining form because it shields the user from the implementational details of how the function is defined ……. ……DEFPROP
Function (DEFPROP sym val indicator)
Gives sym a property called indicator with value val. The arguments are not evaluated. DEFPROP should not be used imbedded in other expressions. It is intended to occur at toplevel to assign properties that are set up once and never changed. In other places, use PUTPROP with three quoted arguments.
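The property-list machinery that DEFPROP and PUTPROP manipulate can be sketched in Python. This is a toy model for illustration only; the table `symbols` and the helpers `putprop` and `get` mirror the Maclisp operators but are not Maclisp itself. A DEFUN-style definition is then simply a value stored under the EXPR indicator.

```python
# Illustrative Python model of Maclisp-style property lists.
# The symbol table and helper names are hypothetical, not any real Lisp.

symbols = {}  # symbol name -> property table (indicator -> value)

def putprop(sym, val, indicator):
    """Store val under indicator on sym's property list, as PUTPROP does."""
    symbols.setdefault(sym, {})[indicator] = val
    return val

def get(sym, indicator):
    """Fetch a property, or None if the symbol lacks it."""
    return symbols.get(sym, {}).get(indicator)

# (DEFPROP F (LAMBDA (X) (PLUS X 1)) EXPR) then amounts to:
putprop('F', ('LAMBDA', ('X',), ('PLUS', 'X', 1)), 'EXPR')

print(get('F', 'EXPR'))  # the stored lambda expression
print(get('F', 'SUBR'))  # None: F has no compiled definition
```

Because the definition is just a property, the same lookup that finds an interpreted EXPR can find a compiled SUBR under a different indicator, which is the point made above about exprs and lexprs being "looked up from the same property."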
DEFINE - This is analogous to the MacLISP DEFUN primitive (but note that the LAMBDA must appear explicitly!). It is used for defining a function in the "global environment" permanently, as opposed to LABELS (see below), which is used for temporary definitions in a local environment. DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom).
Atoms which are not atomic symbols (identifiers) evaluate to themselves. Typical examples of such atoms are numbers, arrays, and strings (character arrays). Symbols are treated as identifiers or variables. They may be lexically bound by lambda-expressions. There is a global environment in which some symbols initially have as their values primitive operations such as, for example, CAR, CONS, and PLUS. SCHEME differs from most LISP systems in that the atom CAR is not itself an operation (in the sense of being an invocable object, e.g. a valid first argument to APPLY), but only has one as a value when considered as an identifier.
In 1981 the emerging Common Lisp community turned to Scheme for some of its motivation and inspiration [Steele 1984]. Adopting lexical scoping proved one of the most important decisions the Common Lisp group ever made.
One aspect of Scheme that was not adopted, however, was a single namespace for functions and values, along with uniform evaluation rules for expressions in function and argument positions within the language. ……
In this paper, we shall refer to two abstract dialects of Lisp called Lisp1 and Lisp2.
Lisp1 has a single namespace that serves a dual role as the function namespace and value namespace; that is, its function namespace and value namespace are not distinct. In Lisp1, the functional position of a form and the argument positions of forms are evaluated according to the same rules. Scheme [Rees 1986] and the language being designed by the EuLisp group [Padget 1986] are Lisp1 dialects.
Lisp2 has distinct function and value namespaces. In Lisp2, the rules for evaluation in the functional position of a form are distinct from those for evaluation in the argument positions of the form. Common Lisp is a Lisp2 dialect. ……
Most Lisp dialects adopted a two-namespace approach to the naming problem. To some extent this is because most dialects followed Lisp 1.5 [McCarthy 1965] or dialects derived from Lisp 1.5.
Lisp 1.5 broke symbols into values and functions; values were stored on an association list, and functions on the property lists of symbols. Compiled and interpreted code worked in different ways. In the interpreter, the association list was where all bindings were kept. When an identifier was encountered (an 'atomic symbol' in Lisp 1.5 terminology), it was taken as a variable to be evaluated for its value. First the APVAL part of the symbol was interrogated — an APVAL was "A Permanent, system-defined VALue" stored in a specific place in the symbol. Second, the association list was searched. Finally, if no binding was found, an error was signaled.
When a combination was encountered, the function position was evaluated differently from other positions. First, the symbol was interrogated to see whether there was a function definition associated with it; then the association list was searched.
Here we can see two namespaces at work, though nonsymbol variables were treated somewhat uniformly (Lisp 1.5 did not have lexical variables in interpreted code): when there were no function definitions associated with a symbol, there was one namespace, which was explicitly represented by the association list.
Compiled code worked a slightly different way, and from its internals we can see where the two namespaces came about in descendants of Lisp 1.5.
The Lisp 1.5 compiler supported 'common' and 'special' variables. A common variable enabled compiled and interpreted code to communicate with each other. A common variable was bound on an explicit association list; to evaluate such a variable, a call to EVAL was emitted to determine the value. A special variable was the compiler's modeling of free variables and closely matched what is now called 'shallow binding.' Ordinary variables compiled into what we have termed lexical variables.
Thus, we see all of the components of the two-namespace world in Lisp 1.5, along with some of the components of the one-namespace world.
It seems that the designers of Lisp 1.5 thought of function names as being different from variable names — the Lisp 1.5 interpreter looked at the property list of the named atomic symbol first, and so it can be argued that a function was considered a property of a symbol. The designers used the terminology that a symbol "stands for" a function while a variable "refers" to a value.
Lisp for the PDP-6 at MIT adopted the style of the Lisp 1.5 special variables for dynamic binding in both compiled and interpreted code, thereby eliminating common variables. Compilers were still written that tried to treat special variables as lexical variables in as many places as possible. The value of a symbol was stored in the value cell of the symbol, and the function remained on the property list as it did in Lisp 1.5.
MacLisp [Pitman 1983] is a direct descendant of the PDP-6 Lisp and is a Lisp2 dialect. MacLisp uses a sophisticated form of link table, which is made possible by the separation of namespaces. In particular, function-defining functions have controlled access into the places where functions are stored so that the link tables can be correctly maintained. ……
Common Lisp was the result of a compromise between a number of dialects of Lisp, most of them descendants of MacLisp, all of them Lisp2s. ……
A free variable in Common Lisp is currently taken to be a dynamic rather than a lexical reference because there is no global lexical environment in Common Lisp. In this code
(DEFUN PLUS (X Y) (+ X Y))
(DEFUN SOMEWHAT-MORE-THAN (X) (PLUS X *SOMEWHAT*))
The reference to *SOMEWHAT* is special (dynamic). On the surface, in Lisp1 the reference to PLUS is free also, and thus it is special (dynamic).
When a variable is declared special, a free reference to it is to the most recent dynamically bound value for it; all bindings of it become special (dynamic) bindings. When a variable is declared constant, free references to it are to its permanent constant value, and compilers are free to fold that constant into the code they emit.
We introduce the concept of global variables; when a variable is global, free references to it are to the value cell of the symbol named by the variable, and all bindings of it are still lexical.
To avoid a compiler warning for free use of a variable like *SOMEWHAT*, the variable is declared SPECIAL. It would be unfortunate to have to declare PLUS SPECIAL merely to keep compilers for Lisp1 from warning about its free use.
As noted, in Common Lisp the default for free variable references is SPECIAL; that is, a free variable reference is taken to be a special variable reference. If the default in Lisp1 were that free variable references were global (lexical), then it would make sense for a compiler not to warn about such free variable references.
The over-all design of the LISP Programming System is the work of John McCarthy and is based on his paper "Recursive Functions of Symbolic Expressions and Their Computation by Machine" which was published in Communications of the ACM, April 1960.
This manual was written by Michael I. Levin.
The interpreter was programmed by Stephen B. Russell and Daniel J. Edwards.
The print and read programs were written by John McCarthy, Klim Maling, Daniel J. Edwards, and Paul W. Abrahams.
The garbage collector and arithmetic features were written by Daniel J. Edwards.
The compiler and assembler were written by Timothy P. Hart and Michael I. Levin.
An earlier compiler was written by Robert Brayton.
The "LISP 1 Programmer's Manual," March 1, 1960, was written by Phyllis A. Fox. Additional programs and suggestions were contributed by the following members of the Artificial Intelligence Group of the Research Laboratory of Electronics: Marvin L. Minsky, Bertram Raphael, Louis Hodes, David M. R. Park, David C. Luckham, Daniel G. Bobrow, James R. Slagle, and Nathaniel Rochester.
The STANFORD A.I. LISP 1.6 System was originally an adaptation of one developed by the Artificial Intelligence Project at M.I.T. Since 1966, that system has been largely rewritten by John Allen and the author.
At Stanford in the 1960s, an early version of MacLisp was adapted for their PDP-6; this Lisp was called Lisp 1.6 [Quam 1972]. The early adaptation was rewritten by John Allen and Lynn Quam; later compiler improvements were made by Whit Diffie. Lisp 1.6 disappeared during the mid-1970s, one of the last remnants of the Lisp 1.5 era.
By 1964 a version of Lisp 1.5 was running in the Electrical Engineering Department at MIT on an IBM 7094 computer, running the Compatible Time Sharing System (CTSS). This Lisp and Basic PDP-1 Lisp were the main influences on the PDP-6 Lisp [PDP-6 Lisp 1967] implemented by DEC and some members of MIT's Tech Model Railroad Club in the spring of 1964. This Lisp was the first program written on the PDP-6. Also, this Lisp was the ancestor of MacLisp, the Lisp written to run under the Incompatible Timesharing System (ITS) [Eastlake 1968, 1972] at MIT on the PDP-6 and later on the PDP-10. ……
In 1965, virtually all of the Lisps in existence were identical or differed only in trivial ways. After 1965 - or more precisely, after MacLisp and BBN-Lisp diverged from each other - there came a plethora of Lisp dialects. ……
MacLisp was the primary Lisp dialect at the MIT AI Lab from the late 1960s until the early 1980s. Other important Lisp work at the Lab during this period included Lisp-Machine Lisp (later named Zetalisp) and Scheme. MacLisp is usually identified with the PDP-10 computer, but MacLisp also ran on another machine, the Honeywell 6180, under the Multics operating system [Organick 1972].
The Artificial Intelligence Project at Stanford University has produced a version of LISP 1.5 to be distributed by SHARE. In the middle of February 1965 the system is complete and is available from Stanford. The system should be available from SHARE by the end of March 1965. …… Evalquote is available to the programmer as a LISP function - thus, one may now write "(EVALQUOTE APPEND ((A)(B C D)))", rather than "(EVAL (QUOTE (APPEND (A)(B C D))) NIL)", should one desire to do so.
A key difference between MacLisp and Interlisp was the approach to syntax. MacLisp favored the pure list style, using EVAL as the top level. Interlisp, along with Lisp 1.5, used EVALQUOTE.
To concatenate the lists (BOAT AIRPLANE SKATEBOARD) and (CAR TRUCK) in MacLisp, one would type this expression to EVAL:
(APPEND (QUOTE (BOAT AIRPLANE SKATEBOARD)) (QUOTE (CAR TRUCK)))
or, using the syntactic abbreviation 'x for (quote x),
(APPEND '(BOAT AIRPLANE SKATEBOARD) '(CAR TRUCK))
The result would of course be (BOAT AIRPLANE SKATEBOARD CAR TRUCK).
In Lisp 1.5, one would type an expression (actually two expressions) like this to EVALQUOTE:
APPEND((BOAT AIRPLANE SKATEBOARD) (CAR TRUCK))
The first expression denotes a function, and the second is a list of arguments. The "quote" in the name EVALQUOTE signifies the "implicit quoting of the arguments" to the function applied. MacLisp forked off and used EVAL exclusively as a top level interface. BBN-Lisp (and thus Interlisp) accommodated both: if the first input line contained a complete form and at least one character of a second form, then BBN-Lisp finished reading the second form and used the EVALQUOTE interface for that interaction; otherwise it read exactly one form and used the EVAL interface for that interaction.
The phrase "quoting arguments" actually is misleading and imprecise. It refers to the actions of a hypothetical preprocessor that transforms the input from a form like
APPEND((BOAT AIRPLANE SKATEBOARD) (CAR TRUCK))
to one like
(APPEND (QUOTE (BOAT AIRPLANE SKATEBOARD)) (QUOTE (CAR TRUCK)))
before evaluation is performed. A similar confusion carried over into the description of the so-called FEXPR or "special form." In some texts on Lisp one will find descriptions of special forms that speak of a special form "quoting its arguments," when in fact a special form has a special rule for determining its meaning and that rule involves not evaluating some forms [Pitman 1980].
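The hypothetical preprocessor described above can be written down directly. In this sketch, S-expressions are modeled as nested Python lists, and `evalquote_to_eval` is an invented name for the transformation; neither is part of any historical Lisp system.

```python
# Sketch of the hypothetical "quoting" preprocessor: it turns an
# EVALQUOTE-style pair (function name, literal argument list) into a
# single EVAL-style form with each argument wrapped in QUOTE.
# S-expressions are modeled as nested Python lists.

def evalquote_to_eval(fn, args):
    """APPEND((A) (B C D)) -> (APPEND (QUOTE (A)) (QUOTE (B C D)))."""
    return [fn] + [['QUOTE', a] for a in args]

form = evalquote_to_eval('APPEND',
                         [['BOAT', 'AIRPLANE', 'SKATEBOARD'],
                          ['CAR', 'TRUCK']])
print(form)
# ['APPEND', ['QUOTE', ['BOAT', 'AIRPLANE', 'SKATEBOARD']],
#            ['QUOTE', ['CAR', 'TRUCK']]]
```

The alternative reading discussed next, in which the pair is a function plus a literal argument list handed to APPLY, needs no such transformation at all.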
McCarthy [McCarthy 1981] noted that the original Lisp interpreter was regarded as a universal Turing machine: It could perform any computation given a set of instructions (a function) and the initial input on its tape (arguments). Thus it was intended that
APPEND((BOAT AIRPLANE SKATEBOARD) (CAR TRUCK))
be regarded not as a syntactically mutated version of
(APPEND (QUOTE (BOAT AIRPLANE SKATEBOARD)) (QUOTE (CAR TRUCK)))
but as a function and (separately) a literal list of arguments. In hindsight we see that the EVALQUOTE top level might better have been called the APPLY top level, making it pleasantly symmetrical to the EVAL top level; the BBN-Lisp documentation brought out this symmetry explicitly. Indeed, EVALQUOTE would have been identical to the function APPLY in Lisp 1.5 if not for these two differences: (a) in Lisp 1.5, APPLY took a third argument, an environment (regarded nowadays as something of a mistake that resulted in dynamic binding rather than the lexical scoping needed for a faithful reflection of the lambda calculus); and (b) "EVALQUOTE is capable of handling special forms as a sort of exception" [McCarthy 1962]. Nowadays such an exception is referred to as a kluge [Raymond 1991]. (Note, however, that MacLisp's APPLY function supported this same kluge.)
In 1963, L. Peter Deutsch (at that time a high school student) implemented a Lisp similar to Lisp 1.5 on the PDP-1 at Bolt Beranek and Newman (BBN) [Deutsch 1964]. This Lisp was called Basic PDP-1 Lisp. ……
At BBN, a successor to Basic PDP-1 Lisp was implemented on the PDP-1 and an upward-compatible version, patterned after Lisp 1.5 on the MIT CTSS system, was implemented on the Scientific Data Systems 940 (SDS 940) by Daniel Bobrow and D. L. Murphy. A further upward-compatible version was written for the PDP-10 by Alice Hartley and Murphy, and this Lisp was called BBN-Lisp [Teitelman 1971]. In 1973, not long after the time that SDS was acquired by Xerox and renamed Xerox Data Systems, the maintenance of BBN-Lisp was shared by BBN and Xerox Palo Alto Research Center and the name of the Lisp was changed to Interlisp [Teitelman 1974]. ……
Interlisp (and BBN-Lisp before it) introduced many radical ideas into Lisp programming style and methodology. The most visible of these ideas are embodied in programming tools, such as the spelling corrector, the file package, DWIM, CLISP, the structure editor, and MASTERSCOPE.
The origin of these ideas can be found in Warren Teitelman's Ph.D. dissertation on man-computer symbiosis [Teitelman 1966]. In particular, it contains the roots of structure editing (as opposed to "text" or "tape" editing [Rudloe 1962]), breakpointing, advice, and CLISP. ……
CLISP (Conversational LISP) was a mixed ALGOL-like and English-like syntax embedded within normal Interlisp syntax. Here is a valid definition of FACTORIAL written in Interlisp CLISP syntax:
DEFINEQ((FACTORIAL
(LAMBDA (N) (IF N=0 THEN 1 ELSE N*(FACTORIAL N-1)))))
Inspired by ACTORS [Greif and Hewitt][Smith and Hewitt], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [Church], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to:
(1) alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc. by clarifying the embedding of non-recursive control structures in a recursive host language like LISP.
(2) explain how to use these control structures, independent of such issues as pattern matching and data base manipulation.
(3) have a simple concrete experimental domain for certain issues of programming semantics and style.
Sussman had just been studying Algol. He suggested starting with a lexically scoped dialect of Lisp, because that seemed necessary to model the way names could refer to acquaintances in PLASMA. Lexical scoping would allow actors and functions to be created by almost identical mechanisms. Evaluating a form beginning with the word lambda would capture the current variable-lookup environment and create a closure; evaluating a form beginning with the word alpha would also capture the current environment but create an actor. Message passing could be expressed syntactically in the same way as function invocation. The difference between an actor and a function would be detected in the part of the interpreter traditionally known as apply. A function would return a value, but an actor would never return; instead, it would typically invoke a continuation, another actor that it knew about. Our interpreter also provided the necessary primitives for implementing the internal behavior of primitive actors, such as an addition operator that could accept two numbers and a continuation actor. ……
…… This led us to three important ideas:
• First, we realized that all the patterns of control structure that Hewitt had described in terms of actors could equally well be described by the λ-calculus. ……
• Second, we realized that the λ-calculus — a small, simple formalism — could serve as the core of a powerful and expressive programming language. (Lisp had adopted the λ-notation for functions but had failed to support the appropriate behavior for free variables. The original theoretical core of Lisp was recursion equations, not the λ-calculus.) ……
• Third, we realized that in our quest for the “ultimate AI language” we had come full circle. As the MIT school had struggled to find more and more powerful ways to express and manipulate control structure to support heuristic search, we had progressed from Lisp to CONVERT to Planner to Conniver to PLASMA to a simple variation of Lisp!
Clojure is a Lisp not constrained by backwards compatibility
Special Forms - Normally, eval evaluates the arguments of a function before applying the function itself. Thus if eval is given (CONS X Y), it will evaluate X and Y, and then cons them. But if eval is given (QUOTE X), X should not be evaluated. QUOTE is a special form that prevents its argument from being evaluated.
A special form differs from a function in two ways. Its arguments are not evaluated before the special form sees them. COND, for example, has a very special way of evaluating its arguments by using evcon. The second way in which special forms differ from functions is that they may have an indefinite number of arguments. Special forms have indicators on their property lists called FEXPR and FSUBR for LISP-defined forms and machine language coded forms, respectively.
The use of SETF throughout Common Lisp - a later dialect of Lisp and the most popular - can be traced through Symbolics Zetalisp and MacLisp to the influence of MIT Lisp-Machine Lisp and then back through Greenblatt's proposal to Peter Deutsch and thence to Alan Kay.
The uniform treatment of access - reading and writing of state - has made Common Lisp more uniform than it might otherwise be. It is no longer necessary to remember both a reader function (such as CAR) and also a separate writer or update function (such as RPLACA), nor to remember the order of arguments. (For RPLACA, which comes first, the dotted pair or the new value for its car?) If the general form of a read operation is (f …), then the form of the write is (setf (f …) newvalue), and that is all the programmer needs to know about reading and writing data.
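The idea of pairing every reader with its updater in one registry can be sketched in Python. This is a loose illustrative model of the SETF idea, not Common Lisp's actual SETF machinery; `writers`, `defsetf`, and `setf` here are invented names.

```python
# Sketch of the SETF idea: register each writer under the name of its
# reader, so the programmer only ever needs to know the read operation.

writers = {}  # reader name -> writer function

def defsetf(reader_name, writer):
    """Pair a reader name with the function that updates that place."""
    writers[reader_name] = writer

def setf(place, new_value):
    """place is (reader_name, object); write through the paired updater."""
    reader_name, obj = place
    writers[reader_name](obj, new_value)
    return new_value

# A cons cell modeled as a two-element list: car at [0], cdr at [1].
def car(cell):
    return cell[0]

def rplaca(cell, v):       # argument order is now hidden behind setf
    cell[0] = v

defsetf('car', rplaca)

cell = ['A', 'B']
setf(('car', cell), 'Z')   # analogous to (SETF (CAR cell) 'Z)
print(car(cell))           # Z
```

The programmer who knows how to read a place with `car` can write it without ever remembering `rplaca` or its argument order, which is the uniformity claimed above.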
The variables in a lambda expression are dummy or bound variables because systematically changing them does not alter the meaning of the expression. Thus λ[[u;v];v²+u] means the same thing as λ[[x;y];y²+x].
We shall sometimes use expressions in which a variable is not bound by a lambda. For example, in the function of two variables λ[[x;y];xⁿ+yⁿ] the variable n is not bound. This is called a free variable. It may be regarded as a parameter. Unless n has been given a value before trying to compute with this function, the value of the function must be undefined. ……
When the compiler is called upon to compile a function, it looks for an EXPR or FEXPR on the property list of the function name. The compiler then translates this S-expression into an S-expression that represents a subroutine in the LISP Assembly Language (LAP). LAP then proceeds to assemble this program into binary program space. Thus an EXPR, or an FEXPR, has been changed to a SUBR or an FSUBR, respectively. ……
Free variables in compiled functions must be declared before the function is compiled. ……
A variable is bound in a particular function when it occurs in a list of bound variables following the word LAMBDA or PROG. Any variable that is not bound is free. ……
When a variable is used free, it must have been bound by a higher level function. If a program is being run interpretively, and a free variable is used without having been bound on a higher level, error diagnostic *A 8* will occur.
If the program is being run compiled, the diagnostic may not occur, and the variable may have the value NIL.
There are three types of variables in compiled functions: ordinary variables, SPECIAL variables, and COMMON variables. SPECIAL and COMMON variables must be declared before compiling. Any variable that is not declared will be considered an ordinary variable.
When functions are translated into subroutines, the concept of a variable is translated into a location where an argument is stored. If the variable is an ordinary one, then a storage location for it is set up on the push-down list. Other functions cannot find this private cell, making it impossible to use it as a free variable. SPECIAL variables have the indicator SPECIAL on their property lists. Following the indicator there is a pointer to a fixed cell. When this variable is bound, the old value is saved on the push-down list, and the current value is stored in the SPECIAL cell. When it is no longer bound, the old value must be restored. When a function uses this variable free, then the quantity in the SPECIAL cell is picked up. SPECIAL variables are declared by using the pseudo-function special[a], where a is a list of variable names. …… SPECIAL variables are inexpensive and will allow free communication among compiled functions. They do not increase run time significantly. SPECIAL variables cannot be communicated between the interpreter and compiled functions. COMMON variables have the flag COMMON on their property lists; however, this is only used to inform the compiler that they are COMMON, and is not needed at run time. COMMON variables are bound on an a-list by the compiled functions. When they are to be evaluated, eval is given this a-list. This happens at run time.
The use of COMMON variables will slow down any compiled function using them. However, they do provide complete communication between interpreted and compiled functions. COMMON variables are declared by common[a], where a is a list of variable names. ……
LISP is designed in such a way that all functions for which the possibility of recursion can exist are in fact recursive. This requires that all temporary stored results related to the computation that is in progress be set aside when a piece of coding is to be used recursively, and that they be later restored. This is done automatically and need not be programmed explicitly.
All saving of temporary results in LISP is performed on a linear block of storage called the push-down list. Each set of stored data that is moved onto the push-down list is in a block labeled with its size and the name of the subroutine from which it came. Since it is in the nature of recursion that the first block to be saved is always the last block to be restored, it is possible to keep the push-down list compact.
There are several ways in which to maintain the values of free variables (and, thus, the computational environment) in order to handle cases like the one above. All of the techniques used involve some increase in time. One approach makes it about as easy to access the value of a free variable as if it were local. This approach is favored in recent implementations of LISP. We shall call it the "shallow access" approach.
Another approach is to make it relatively easy to enter and leave a block in which a free variable is bound, but relatively expensive to access a free variable. This approach is used in the classical "alist" implementation of LISP. Several Algol systems have opted for a similar approach. We shall call this the "deep access" approach. Both of these approaches allow one to modify the value of the variable as well as access it. The access time is approximately the same as modification time in both approaches.
Let us consider the "shallow access" approach first. By "shallow access" we mean that the value of a free variable can be obtained in a single fetch from memory. Since the current value may be stored in a difficult-to-determine location up in the stack, a special cell for its current value is used. In many recent LISP implementations this special value cell is stored as a property of the atom structure, and is unique to the free variable. In order to maintain the proper value in the special cell, extra work must be done every time a function or a block is entered in which the variable is an argument or is local. On entering such a function or block one usually saves the old value which is in the special cell on the stack along with sufficient information to allow one to know from where the stack value came. On leaving such a function or block one must remember to store the value saved in the stack in the special cell.
The "deep access" approach forces one to search upward in the stack for the most recent value of the free variable. Such a search may be quite deep and slow if the free variable was bound very far up in the stack. However, this approach has the advantage of avoiding the need for an extra special value cell. In addition, we shall see later that this approach sometimes allows one to use free variables with greater ease and flexibility than the "shallow access" approach. ……
Certain LISP systems use "deep access" in interpreted functions and "shallow access" in compiled functions. If free variables in compiled functions are declared to be COMMON, their values will be stored on the "alist". In this manner one can solve the environment problem. The cost is a very slow interpreter and "deep access" to free variables.
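The two strategies can be put side by side in a small Python model. This is an illustrative sketch of the mechanisms described above, not any real Lisp runtime; `value_cell`, `save_stack`, `alist`, and the bind/lookup helpers are invented names.

```python
# Side-by-side sketch of shallow vs deep access to free variables.

# Shallow access: one value cell per variable; binding saves the old
# cell contents on a stack and restores it on exit.
value_cell = {}
save_stack = []

def shallow_bind(name, val):
    save_stack.append((name, value_cell.get(name)))  # save old value
    value_cell[name] = val

def shallow_unbind():
    name, old = save_stack.pop()                     # restore on exit
    value_cell[name] = old

def shallow_lookup(name):
    return value_cell[name]        # a single fetch from memory

# Deep access: bindings pushed on an a-list; lookup searches from the
# most recent binding backward.
alist = []

def deep_bind(name, val):
    alist.append((name, val))

def deep_unbind():
    alist.pop()

def deep_lookup(name):
    for n, v in reversed(alist):   # possibly a long search
        if n == name:
            return v
    raise NameError(name)

shallow_bind('N', 1); shallow_bind('N', 2)
deep_bind('N', 1); deep_bind('N', 2)
print(shallow_lookup('N'), deep_lookup('N'))   # 2 2
shallow_unbind(); deep_unbind()
print(shallow_lookup('N'), deep_lookup('N'))   # 1 1
```

The trade-off in the text is visible directly: shallow lookup is one dictionary fetch but every bind/unbind does save-and-restore work, while deep lookup may scan arbitrarily far but binding is a cheap push.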
Symbols are used as names of variables in Common Lisp programs. When a symbol is evaluated as a form, the value of the variable it names is produced. For example, after doing (setq items 3), which assigns the value 3 to the variable named items, then items ⇒ 3. Variables can be assigned to, as by setq, or bound, as by let. Any program construct that binds a variable effectively saves the old value of the variable and causes it to have a new value, and on exit from the construct the old value is reinstated.
There are actually two kinds of variables in Common Lisp, called lexical (or static) variables and special (or dynamic) variables. At any given time either or both kinds of variable with the same name may have a current value. Which of the two kinds of variable is referred to when a symbol is evaluated depends on the context of the evaluation. The general rule is that if the symbol occurs textually within a program construct that creates a binding for a variable of the same name, then the reference is to the variable specified by the binding; if no such program construct textually contains the reference, then it is taken to refer to the special variable of that name.
The distinction between the two kinds of variable is one of scope and extent. A lexically bound variable can be referred to only by forms occurring at any place textually within the program construct that binds the variable. A dynamically bound (special) variable can be referred to at any time from the time the binding is made until the time evaluation of the construct that binds the variable terminates. Therefore lexical binding of variables imposes a spatial limitation on occurrences of references (but no temporal limitation, for the binding continues to exist as long as the possibility of reference remains). Conversely, dynamic binding of variables imposes a temporal limitation on occurrences of references (but no spatial limitation). For more information on scope and extent, see chapter 3.
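The spatial-versus-temporal contrast described above can be modeled in Python. This is an illustrative sketch only: `dlet` and the table `dynamic` are invented stand-ins for special binding, and an ordinary Python closure stands in for lexical binding.

```python
# Sketch of the scope/extent distinction: dynamic binding is temporal,
# lexical binding is textual.
import contextlib

dynamic = {'x': 'global'}    # current dynamic values, one table

@contextlib.contextmanager
def dlet(name, val):
    """Dynamically bind name for the extent of the with-block."""
    old = dynamic[name]
    dynamic[name] = val      # visible to anything called meanwhile
    try:
        yield
    finally:
        dynamic[name] = old  # old value reinstated on exit

def callee():
    return dynamic['x']      # free reference: sees whichever dynamic
                             # binding is in effect when it is CALLED

def make_lexical():
    x = 'captured'
    return lambda: x         # free reference: resolved where the
                             # lambda appears textually

with dlet('x', 'dynamic'):
    print(callee())          # dynamic: inside the temporal extent
print(callee())              # global: the extent has ended

f = make_lexical()
print(f())                   # captured: the lexical binding survives
                             # as long as the closure can refer to it
```

`callee` has no textual connection to `dlet`'s binding yet sees it while the extent lasts; the closure returned by `make_lexical` keeps its binding alive indefinitely but only code written inside `make_lexical` could ever refer to it.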
The value a special variable has when there are currently no bindings of that variable is called the global value of the (special) variable. A global value can be given to a variable only by assignment, because a value given by binding is by definition not global.
It is possible for a special variable to have no value at all, in which case it is said to be unbound. By default, every global variable is unbound unless and until explicitly assigned a value, except for those global variables defined in this book or by the implementation already to have values when the Lisp system is first started. It is also possible to establish a binding of a special variable and then cause that binding to be valueless by using the function makunbound. In this situation the variable is also said to be "unbound," although this is a misnomer; precisely speaking, it is bound but valueless. It is an error to refer to a variable that is unbound.
It is customary to name dynamic variables with an asterisk at the beginning and end of the name; e.g., *foo* is a good name for a dynamic variable, but not for a lexical variable; foo is a good name for a lexical variable, but not for a dynamic variable. This naming convention is observed for all defined names in Common Lisp; however, neither conforming programs nor conforming implementations are obliged to adhere to this convention.
Why do you use dynamic variable binding? Dynamic binding is very powerful, because there is only one single, dynamically changing environment active all the time. This makes it possible (e.g. for program snippets, interspersed with application data and/or passed over the network) to access the whole application context, freely, yet in a dynamically controlled manner. And (shallow) dynamic binding is the fastest method for a Lisp interpreter.
Lexical binding is more limited by definition, because each environment is deliberately restricted to the visible (textual) static scope within its establishing form. Therefore, most Lisps with lexical binding introduce "special variables" to support dynamic binding as well, and constructs like labels to extend the scope of variables beyond a single function.
In PicoLisp, function definitions are normal symbol values. They can be dynamically rebound like other variables. ……
Are there no problems caused by dynamic binding? You mean the funarg problem, or problems that arise when a variable might be bound to itself? For that reason we have a convention in PicoLisp to use transient symbols (instead of internal symbols) or private internal symbols ……
But with dynamic binding I cannot implement closures! This is not true. Closures are a matter of scope, not of binding.
For a closure it is necessary to build and maintain a separate environment. In a system with lexical bindings, this has to be done at each function call, and for compiled code it is the most efficient strategy anyway, because it is done once by the compiler, and can then be accessed as stack frames at runtime.
For an interpreter, however, this is quite an overhead. So it should not be done automatically at each and every function invocation, but only if needed.
Dynamic scoping inside isolated namespaces - newLISP is sometimes criticized for using dynamic scoping and fexprs. …… In newLISP, all variables are dynamically scoped by default. However, by defining a function in its own context, static/lexical scoping can be achieved. In newLISP, several functions and data can share a namespace. By enclosing functions in their own namespace, a lexical closure-like mechanism is achieved. Common Lisp and Scheme are lexically scoped by default and use lambda expressions as the closure mechanism. Common Lisp also offers special variables for dynamic scoping.
The problems of free variables in dynamic scoping can be avoided. In the rare cases when free variables must be used, you can partition code into namespace modules for easier control over free variables. You can then exploit the advantages of dynamic scoping. With dynamic scoping inside lexically-closed namespaces, newLISP combines the best of both scoping worlds.
newLISP has no funarg problem because it follows a simple rule: variables always show the binding of their current environment. When expressions with local variables are entered, newLISP saves the current variable state on a stack and restores it on exit of that expression. In newLISP, not only are function parameters and variables declared with let expressions local, loop variables in all looping expressions are local too.
dynamic - All variable names and their values live in one global table. lexical - Each binding scope (function, let syntax, …) creates a new table of variable names and values, organised in a hierarchy called “the environment”. ……
EmacsLisp as of 24.1 has both dynamic binding and lexical binding. Lexical binding must be enabled explicitly for a file or buffer. Individual variables can be 'defvar'ed to make them “special”, like in CommonLisp.
A list is a chain of conses in which the car of each cons is an element of the list, and the cdr of each cons is either the next link in the chain or a terminating atom.
A proper list is a list terminated by the empty list. The empty list is a proper list, but is not a cons.
An improper list is a list that is not a proper list; that is, it is a circular list or a dotted list.
A dotted list is a list that has a terminating atom that is not the empty list. A non-nil atom by itself is not considered to be a list of any kind, not even a dotted list.
A circular list is a chain of conses that has no termination because some cons in the chain is the cdr of a later cons.
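The taxonomy above can be made concrete with a toy cons cell. This is an illustrative Python sketch, not Common Lisp API: `Cons`, `classify`, and the use of `None` for the empty list are all choices made for the example.

```python
# Hypothetical cons cells used to illustrate the definitions above:
# a proper list terminates in the empty list (None here), a dotted list
# terminates in some other atom, and a circular list never terminates
# because some cons in the chain is the cdr of a later cons.

class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def classify(x):
    seen = set()
    while isinstance(x, Cons):
        if id(x) in seen:               # we came back to a cons we saw
            return 'circular'
        seen.add(id(x))
        x = x.cdr                       # follow the chain of cdrs
    return 'proper' if x is None else 'dotted'

proper = Cons(1, Cons(2, None))         # (1 2)
dotted = Cons(1, Cons(2, 3))            # (1 2 . 3)
circ = Cons(1, None)
circ.cdr = circ                         # (1 1 1 ...)

print(classify(proper), classify(dotted), classify(circ))
# proper dotted circular
```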
I invented conditional expressions in connection with a set of chess legal move routines I wrote in FORTRAN for the IBM 704 at M.I.T. during 1957-58. ……
A paper defining conditional expressions and proposing their use in Algol was sent to the Communications of the ACM but was arbitrarily demoted to a letter to the editor, because it was very short.
The biggest hit for me while at SAIL in late '69 was to really understand LISP. Of course, every student knew about car, cdr, and cons, but Utah was impoverished in that no one there used LISP and hence, no one had penetrated the mysteries of eval and apply. I could hardly believe how beautiful and wonderful the idea of LISP was [McCarthy 1960]. I say it this way because LISP had not only been around enough to get some honest barnacles, but worse, there were deep flaws in its logical foundations. By this, I mean that the pure language was supposed to be based on functions, but its most important components — such as lambda expressions, quotes, and conds — were not functions at all, and instead were called special forms. Landin and others had been able to get quotes and conds in terms of lambda by tricks that were variously clever and useful, but the flaw remained in the jewel. In the practical language things were better. There were not just EXPRs (which evaluated their arguments), but FEXPRs (which did not). My next question was, why on earth call it a functional language? Why not just base everything on FEXPRs and force evaluation on the receiving side when needed? I could never get a good answer, but the question was very helpful when it came time to invent Smalltalk, because this started a line of thought that said "take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it". That was the promise of LISP and the lure of lambda — needed was a better "hardest and most profound" thing. Objects should be it. ……
This Smalltalk language (today labeled -71) was very influenced by FLEX, PLANNER, LOGO, META II, and my own derivatives from them. ……
As I mentioned previously, it was annoying that the surface beauty of LISP was marred by some of its key parts having to be introduced as “special forms” rather than as its supposed universal building block of functions. ……
An elegant approach was suggested in a CMU thesis of Dave Fisher [Fisher 70] on the synthesis of control structures. ALGOL60 required a separate link for dynamic subroutine linking and for access to static global state. Fisher showed how a generalization of these links could be used to simulate a wide variety of control environments. One of the ways to solve the "funarg problem" of LISP is to associate the proper global state link with expressions and functions that are to be evaluated later so that the free variables referenced are the ones that were actually implied by the static form of the language. The notion of "lazy evaluation" is anticipated here as well.
My original experiments with this architecture were done using a model I adapted from van Wijngaarden's and Wirth's "Generalization of Algol" and Wirth's Euler. Both of these were rather LISP-like but with a more conventional readable syntax. I didn't understand the monster LISP idea of tangible metalanguage then, but got kind of close with ideas about extensible languages drawn from various sources, including Irons' IMP.
The second phase of this was to finally understand LISP and then using this understanding to make much nicer and smaller and more powerful and more late bound understructures. Dave Fisher's thesis was done in "McCarthy" style and his ideas about extensible control structures were very helpful. ……
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.
There were two motivations for developing a language for the IBM 704. First, IBM was generously establishing a New England Computation Center at M.I.T. …… Second, IBM was undertaking to develop a program for proving theorems in plane geometry (based on an idea of Marvin Minsky’s), ……. ……
In connection with IBM’s plane geometry project, Nathaniel Rochester and Herbert Gelernter (on the advice of McCarthy) decided to implement a list processing language within FORTRAN, ……. This work was undertaken by Herbert Gelernter and Carl Gerberich at IBM and led to FLPL, standing for FORTRAN List Processing Language. ……
I spent the summer of 1958 at the IBM Information Research Department at the invitation of Nathaniel Rochester and chose differentiating algebraic expressions as a sample problem. It led to the following innovations beyond FLPL:
a. Writing recursive function definitions using conditional expressions. ……
b. The maplist function that forms a list of applications of a functional argument to the elements of a list. ……
c. To use functions as arguments, one needs a notation for functions, and it seemed natural to use the λ-notation of Church (1941). ……
d. The recursive definition of differentiation made no provision for erasure of abandoned list structure. ……
The implementation of LISP began in Fall 1958. ……
LISP is now the second oldest programming language in present widespread use (after FORTRAN and not counting APT, which isn’t used for programming per se).
To use functions as arguments, one needs a notation for functions, and it seemed natural to use the λ-notation of Church (1941). I didn’t understand the rest of his book, so I wasn’t tempted to try to implement his more general mechanism for defining functions. Church used higher order functionals instead of using conditional expressions. Conditional expressions are much more readily implemented on computers. ……
One mathematical consideration that influenced LISP was to express programs as applicative expressions built up from variables and constants using functions. I considered it important to make these expressions obey the usual mathematical laws allowing replacement of expressions by expressions giving the same value. The motive was to allow proofs of properties of programs using ordinary mathematical methods. This is only possible to the extent that side-effects can be avoided. Unfortunately, side-effects are often a great convenience when computational efficiency is important, and “functions” with side-effects are present in LISP. However, the so-called pure LISP is free of side-effects, and (Cartwright 1976) and (Cartwright and McCarthy 1978) show how to represent pure LISP programs by sentences and schemata in first order logic and prove their properties. ……
Free variables. In all innocence, James R. Slagle programmed the following LISP function definition and complained when it didn’t work right ……. …… In modern terminology, lexical scoping was wanted, and dynamic scoping was obtained.
I must confess that I regarded this difficulty as just a bug and expressed confidence that Steve Russell would soon fix it. He did fix it but by inventing the so-called FUNARG device that took the lexical environment along with the functional argument. Similar difficulties later showed up in Algol 60, and Russell’s turned out to be one of the more comprehensive solutions to the problem. While it worked well in the interpreter, comprehensiveness and speed seem to be opposed in compiled code, and this led to a succession of compromises. …… the interested reader is referred to (Moses 1970) as a place to start.
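The difference between what Slagle wanted and what he got can be sketched with plain Python dictionaries standing in for environments. The function names here (`make_adder_dynamic`, `make_adder_funarg`) are invented for illustration; this is not Russell's actual code, only the shape of his fix.

```python
# A sketch of the free-variable problem: a functional argument refers to
# a free variable x. Under dynamic lookup the reference sees whatever
# binding is current at the point of call; the FUNARG device instead
# packages the environment from the point of creation with the function.

def make_adder_dynamic():
    # free reference to x, resolved at call time against the caller's env
    return lambda n, current_env: n + current_env['x']

def make_adder_funarg(defn_env):
    captured = dict(defn_env)     # the FUNARG device: carry the defining env
    return lambda n, current_env: n + captured['x']

defn_env = {'x': 10}              # environment where the function is written
call_env = {'x': 99}              # environment at the point of call

dyn = make_adder_dynamic()
fun = make_adder_funarg(defn_env)
print(dyn(1, call_env))           # 100  (dynamic scoping: caller's x wins)
print(fun(1, call_env))           # 11   (lexical intent, via FUNARG)
```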
The data on which LISP works is the S-language. ……
The M-language defines computations on S-expressions. ……
This is computationally complete. McCarthy (1960, Sect. 6) showed an arbitrary flowchart can be coded as mutually recursive functions.
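The flowchart-to-functions idea can be illustrated with a two-box flowchart. This is a sketch of the general technique, not McCarthy's construction verbatim: each label becomes a function, the program state becomes the arguments, and each go to becomes a call.

```python
# Flowchart being encoded (counts n steps into acc):
#   start: if n == 0 go to done; else n := n - 1, acc := acc + 1, go to start
#   done:  the result is acc

def start(n, acc):
    # each "go to" becomes a (mutually) recursive call carrying the state
    return done(acc) if n == 0 else start(n - 1, acc + 1)

def done(acc):
    return acc

print(start(5, 0))   # 5
```

Python does not eliminate tail calls, so this only works for small inputs; in the M-language the recursion is the loop.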
The M-language of McCarthy (1960) is first order, as there is no provision to pass a function as argument or return a function as result.
However, McCarthy shows that M-language expressions and functions can be easily encoded as S-expressions and then defines, in the M-language, functions eval and apply that correctly interpret these S-expressions.
Thus LISP allows meta-programming, that is treating program as data and vice versa, by appropriate uses of eval and quote. The 1960 paper gives the impression, quite strongly, that McCarthy saw this as removing any limitation stemming from the M-language itself being first order.
It was originally intended that people would write programs in the M-language, in an Algol-like notation. In practice LISP programmers wrote their code directly in the S-language form, and the M-language became a kind of ghost that hovered in the background — theoretically important but nobody used it.
In LISP 1.5 (McCarthy et al. 1962) atoms acquired property lists, which serve several purposes, and numbers appeared, as another kind of atom, along with basic arithmetic functions. ……
LISP was not based on the lambda calculus, despite using the word "LAMBDA" to denote functions. At the time he invented LISP, McCarthy was aware of (Church 1941) but had not studied it. The theoretical model behind LISP was Kleene's theory of first order recursive functions. (McCarthy made these statements, or very similar ones, in a contribution from the floor at the 1982 ACM symposium on LISP and functional programming in Pittsburgh. No written version of this exists, as far as I know.)
The M-language was first order, as already noted, but you could pass a function as a parameter by quotation, i.e. as the S-expression which encodes it. Unfortunately, this gives the wrong binding rules for free variables (dynamic instead of lexical). ……
McCarthy (1978) reports that this problem (wrong binding for free variables) showed up very early in a program of James Slagle. At first McCarthy assumed it was a bug and expected it to be fixed, but it actually springs from something fundamental — that meta-programming is not the same as higher order programming. Various devices were invented to get round this FUNARG problem, as it became known.
Not until SCHEME (Sussman 1975) did versions of LISP with default static binding appear. Today all versions of LISP are lambda calculus based.
In this article, we first describe a formalism for defining functions recursively. …… Next, we describe S-expressions and S-functions, give some examples, and then describe the universal S-function apply which plays the theoretical role of a universal Turing machine and the practical role of an interpreter. Then we describe the representation of S-expressions in the memory of the IBM 704 by list structures similar to those used by Newell, Shaw and Simon, and the representation of S-functions by program. Then we mention the main features of the LISP programming system for the IBM 704. Next comes another way of describing computations with symbolic expressions, and finally we give a recursive function interpretation of flow charts. ……
We shall need a number of mathematical ideas and notations concerning functions in general. Most of the ideas are well known, but the notion of conditional expression is believed to be new, and the use of conditional expressions permits functions to be defined recursively in a new and convenient way. ……
We shall first define a class of symbolic expressions in terms of ordered pairs and lists. Then we shall define five elementary functions and predicates, and build from them by composition, conditional expressions, and recursive definitions an extensive class of functions of which we shall give a number of examples. We shall then show how these functions themselves can be expressed as symbolic expressions, and we shall define a universal function apply that allows us to compute from the expression for a given function its value for given arguments. Finally, we shall define some functions with functions as arguments and give some useful examples. ……
The LISP programming system is a system for using the IBM 704 computer to compute with symbolic information in the form of S-expressions. …… The basis of the system is a way of writing computer programs to evaluate S-functions. …… In addition to the facilities for describing S-functions, there are facilities for using S-functions in programs written as sequences of statements along the lines of FORTRAN or ALGOL. ……
There are a number of ways of defining functions of symbolic expressions which are quite similar to the system we have adopted. Each of them involves three basic functions, conditional expressions, and recursive function definitions, but the class of expressions corresponding to S-expressions is different, and so are the precise definitions of the functions. We shall describe one of these variants called linear LISP. ……
Since both the usual form of computer program and recursive function definitions are universal computationally, it is interesting to display the relation between them. The translation of recursive symbolic functions into computer programs was the subject of the rest of this report. In this section we show how to go the other way, at least in principle.
Two general approaches were taken in order to have programming languages with fully specified semantics: (i) Better specification methods were developed that were adequate to fully describe existing large programming languages such as PL/1. (ii) New languages were developed with clean mathematical structure that were more amenable to formal description. McCarthy pioneered the latter approach in basing the LISP 1.5 programming language on a simpler functional language, sometimes called “pure LISP” or M-expressions, that was defined in a mathematical style, independent of a particular machine or implementation.
Pure LISP allows the definition and evaluation of functions over S-expressions. The lambda notation for functional abstraction is borrowed from Church’s lambda calculus, but otherwise there is little similarity between the two systems. Pure LISP has no higher-order functions, and call-by-value evaluation order is implicitly assumed. Two special constructs, conditional expressions and the label operator, allow recursive functions to be defined. Limited as it is, pure LISP is nevertheless powerful enough to express all partial recursive functions and hence provides an adequate basis for a theory of computation.
The LISP 1.5 programming language extends pure LISP in many ways that make it more useful in practice but at the same time tend to destroy its clean mathematical properties. Its semantics is defined by an interpreter written in a mixture of pure LISP and English. The distinction between programs and data is blurred. Higher-order functions, assignment, and a global symbol table are added.
The programs to be hand-compiled were written in an informal notation called M-expressions intended to resemble FORTRAN as much as possible. Besides FORTRAN-like assignment statements and go tos, the language allowed conditional expressions and the basic functions of LISP. Allowing recursive function definitions required no new notation from the function definitions allowed in FORTRAN I - only the removal of the restriction - as I recall, unstated in the FORTRAN manual - forbidding recursive definitions. The M-notation also used brackets instead of parentheses to enclose the arguments of functions in order to reserve parentheses for list-structure constants. It was intended to compile from some approximation to the M-notation, but the M-notation was never fully defined, because representing LISP functions by LISP lists became the dominant programming language when the interpreter later became available. A machine readable M-notation would have required redefinition, because the pencil-and-paper M-notation used characters unavailable on the IBM 026 key punch.
The READ and PRINT programs induced a de facto standard external notation for symbolic information, e.g. representing x + 3y + z by (PLUS X (TIMES 3 Y) Z) and (∀x)(P(x) ∨ Q(x, y)) by (ALL (X) (OR (P X) (Q X Y))).
……
Many people participated in the initial development of LISP, and I haven’t been able to remember all their contributions and must settle, at this writing, for a list of names. I can remember Paul Abrahams, Robert Brayton, Daniel Edwards, Patrick Fischer, Phyllis Fox, Saul Goldberg, Timothy Hart, Louis Hodes, Michael Levin, David Luckham, Klim Maling, Marvin Minsky, David Park, Nathaniel Rochester of IBM, and Steve Russell.
It was McCarthy's fortune that he found a financial basis for his work: The MIT Artificial Intelligence Project was founded on the first of September, 1958. McCarthy became an assistant professor in the department of Electrical Engineering (his branch was communication sciences), Minsky was already assistant professor in the department of mathematics. They received support from the Research Laboratory for Electronics in the form of the two programmers, a secretary, a typewriter and six graduate students.
When Maling started his job on the first of September he met S. Russell, who seemed to already have something to do. Maling remembers: "We were housed in an office in the basement of one of MIT's buildings, near the computer center. Initially there was John, Steve Russell, Carol - a buxom Boston Irish secretary - and I. Steve was a brilliant programmer who had worked for John at Dartmouth (I believe). He was what we then called a 'Programming bum' ...".
evalquote[fn;args]. When evalquote is given a function and a list of arguments for that function, it computes the value of the function applied to the arguments. The argument "fn" of the function evalquote must be an S-expression. Since we have been writing functions as M-expressions, it is necessary to translate them into S-expressions. …… evalquote is defined by using two main functions, called eval and apply. apply handles a function and its arguments, while eval handles forms. Each of these functions also has another argument that is used as an association list for storing the values of bound variables and function names. …… apply. When apply encounters a function beginning with LAMBDA, the list of variables is paired with the list of arguments and added to the front of the a-list. During the evaluation of the function, variables may be encountered. They are evaluated by looking them up on the a-list. If a variable has been bound several times, the last or most recent value is used. The part of the interpreter that does this is called eval. The following example will illustrate this discussion. Suppose the interpreter is given the following doublet:
fn: (LAMBDA (X Y) (CONS X Y))
args: (A B)
evalquote will give these arguments to apply. (Look at the universal function of Section I.)
apply[(LAMBDA (X Y) (CONS X Y)); (A B); NIL]
apply will bind the variables and give the function and a-list to eval.
eval[(CONS X Y); ((X . A) (Y . B))]
eval will evaluate the variables and give it to cons.
cons[A;B] = (A . B)
apply[CONS; (A B); ((X . A) (Y . B))]
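The doublet above can be traced with a toy interpreter. This is a Python sketch of the mechanism just described (pairing LAMBDA variables onto the front of the a-list, most recent binding first), not the LISP 1.5 code itself; `lisp_apply` and `lisp_eval` are illustrative names.

```python
# A toy rendering of the apply/eval pair described above: apply pairs the
# LAMBDA variables with the arguments and adds them to the front of the
# a-list; eval looks variables up on the a-list, most recent first.

def lisp_apply(fn, args, alist):
    if fn == 'CONS':
        return (args[0], args[1])                    # build a dotted pair
    if isinstance(fn, list) and fn[0] == 'LAMBDA':
        _, variables, body = fn
        alist = list(zip(variables, args)) + alist   # bind on the a-list
        return lisp_eval(body, alist)
    raise ValueError(fn)

def lisp_eval(form, alist):
    if isinstance(form, str):                        # a variable
        for name, value in alist:                    # most recent binding wins
            if name == form:
                return value
        raise NameError(form)
    fn, *argforms = form
    args = [lisp_eval(a, alist) for a in argforms]   # evaluate the arguments
    return lisp_apply(fn, args, alist)

# The doublet from the text: fn = (LAMBDA (X Y) (CONS X Y)), args = (A B)
print(lisp_apply(['LAMBDA', ['X', 'Y'], ['CONS', 'X', 'Y']], ['A', 'B'], []))
# ('A', 'B')  -- i.e. the dotted pair (A . B)
```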
. …… LAMBDA that binds it. When a variable always has a certain value regardless of the current a-list, it will be called a constant. This is accomplished by means of the property list (p-list) of the variable symbol. Every atomic symbol has a p-list. When the p-list contains the indicator APVAL, then the symbol is a constant and the next item on the list is the value. eval searches p-lists before a-lists when evaluating variables, thus making it possible to set constants. …… NIL is an example of this. It can be evaluated repeatedly and will still be NIL. T, F, NIL, and other constants cannot be used as variables. …… PROG (list of program variables, sequence of statements and atomic symbols …) LAMBDA. The value of each program variable is NIL until it has been set to something else. To set a program variable, use the form SET. To set variable PI to 3.14, write (SET (QUOTE PI) 3.14). SETQ is like SET except that it quotes its first argument. Thus (SETQ PI 3.14). SETQ is usually more convenient. SET and SETQ can change variables that are on the a-list from higher level functions. The value of SET or SETQ is the value of its second argument. …… 77777₈ (i.e., minus 1) as the first element of the list. The rest of the list contains various properties of the atomic symbol. Each property is preceded by an atomic symbol which is called its indicator. Some of the indicators are:
PNAME - the BCD print name of the atomic symbol for input-output use.
EXPR - S-expression defining a function whose name is the atomic symbol on whose property list the EXPR appears.
SUBR - Function defined by a machine language subroutine.
APVAL - Permanent value for the atomic symbol considered as a variable.
NIL has two things on its property list - its PNAME, and an APVAL that gives it a value of NIL. Its property list looks like this:
[box diagram of NIL's property list: the word −1, then the indicator APVAL pointing to a cell holding the value NIL, then the indicator PNAME pointing to the print name "NIL"] ……
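The lookup order described above (p-list before a-list) can be sketched with Python dictionaries standing in for p-lists. Everything here is illustrative naming, not LISP 1.5 code.

```python
# Sketch of constants via property lists: every symbol has a p-list; if
# it carries the APVAL indicator, evaluation takes that permanent value
# before ever searching the a-list.

PLISTS = {
    'NIL': {'PNAME': 'NIL', 'APVAL': 'NIL'},   # a constant: evaluates to NIL
    'T':   {'PNAME': 'T',   'APVAL': 'T'},
    'X':   {'PNAME': 'X'},                     # an ordinary variable
}

def eval_variable(sym, alist):
    plist = PLISTS.get(sym, {})
    if 'APVAL' in plist:                # p-list searched first: a constant
        return plist['APVAL']
    for name, value in alist:           # then the a-list, most recent first
        if name == sym:
            return value
    raise NameError(sym)

alist = [('X', 42)]
print(eval_variable('NIL', alist))      # NIL  (constant, a-list ignored)
print(eval_variable('X', alist))        # 42
```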
EXPR points to an S-expression defining a function. The function define puts EXPR's on property lists. After defining ff, its property list would look like this:
[box diagram of ff's property list: the word −1, then the indicator EXPR pointing to the list (LAMBDA (X) (COND …)), then the indicator PNAME pointing to the print name "FF"] ……
−1 in the address, certain bits in the tag which specify that it is a number and what type it is, and a pointer to the number itself in the decrement of this word. 15 is represented as follows:
[box diagram: a word with −1 in the address and the number tag set, pointing to the full machine word 000000000017, i.e. 15 decimal] ……
special[x] (SUBR pseudo-function): The list x contains the names of variables that are to be declared SPECIAL. ……
common[x] (SUBR pseudo-function): The list x contains the names of variables that are to be declared COMMON. ……
…… PROG feature is an FSUBR coded into the system. ……
When a set or a setq is encountered, the name of the variable is located on the a-list. The value of the variable (or cdr of the pair) is actually replaced with the new value. ……
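The destructive-replacement detail can be sketched directly, with mutable Python lists standing in for the cons pairs of the a-list (illustrative names only):

```python
# Sketch of the SET/SETQ mechanism described above: the variable's pair
# is located on the a-list and its value slot (the "cdr" of the pair) is
# replaced in place, so the enclosing function that made the binding
# sees the new value too.

def lisp_set(name, value, alist):
    for pair in alist:
        if pair[0] == name:
            pair[1] = value            # smash the cdr of the pair in place
            return value               # the value of SET is its second arg
    raise NameError(name)

alist = [['PI', None], ['X', 1]]       # bindings made at a higher level
lisp_set('PI', 3.14, alist)            # the effect of (SETQ PI 3.14)
print(alist[0])                        # ['PI', 3.14]
```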
The following points should be noted concerning declared variables.
1. Program variables follow the same rules as λ variables do.
a. If a variable is purely local, it need not be declared.
b. Special variables can be used as free variables in compiled functions. They may be set at a lower level than that at which they are bound.
c. Common program variables maintain complete communication between compiled programs and the interpreter.
2. set as distinct from setq can only be used to set common variables.”
DEFUN (Special Form): (DEFUN namespec . definitionspec)
DEFUN offers a way of associating a functional definition with a symbol. …… DEFUN can be used to define both normal functions and special forms (fexprs and macros). ……
;; Normal expr or lexpr function definitions
(DEFUN name bvl . body)
……
If name is a symbol and bvl is a list of symbols which is free of & keywords (to be described below), the definition is an expr or lexpr definition. In Maclisp, this would be essentially equivalent to writing (DEFPROP name (LAMBDA bvl . body) EXPR). …… Note also that the keyword EXPR is used for both expr and lexpr definitions. The only distinction between expr and lexpr definitions is whether the bvl is a symbol or a list, but they are specified in the same way and looked up from the same property. ……
A fexpr is a function for which the formal parameters are not evaluated. The form of a fexpr definition is: (DEFUN name FEXPR (sym) . body). The lambda expression which describes a fexpr should expect to receive exactly one argument, which will be the list of (unevaluated) arguments given in the call to the fexpr, so usually there is only one bound variable (sym) in the bound variable list. …… DEFUN can also be used to instantiate a MACRO definition. …… The syntax for a macro definition is (DEFUN name MACRO (sym) . body), where sym will become bound to the whole macro form to be expanded (including the name). Note that this argument convention is different from that for fexprs, which only receive the cdr of the call form. …… DEFUN was introduced into Maclisp in March, 1969. Although now it is recognized as the standard function defining form because it shields the user from the implementational details of how the function is defined ……. ……
DEFPROP (Function): (DEFPROP sym val indicator)
Gives sym a property called indicator with value val. The arguments are not evaluated. DEFPROP should not be used imbedded in other expressions. It is intended to occur at toplevel to assign properties that are set up once and never changed. In other places, use PUTPROP with three quoted arguments.
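The expr/fexpr argument conventions described above can be sketched with a toy evaluator. `defun_expr`, `defun_fexpr`, and `evaluate` are invented names for illustration; this is the shape of the mechanism, not Maclisp.

```python
# Sketch of the expr / fexpr distinction: an expr receives its arguments
# already evaluated; a fexpr receives a single argument, the list of
# unevaluated argument forms from the call.

DEFINITIONS = {}

def defun_expr(name, fn):
    DEFINITIONS[name] = ('EXPR', fn)

def defun_fexpr(name, fn):
    DEFINITIONS[name] = ('FEXPR', fn)

def evaluate(form):
    if isinstance(form, int):
        return form                                  # numbers are self-evaluating
    kind, fn = DEFINITIONS[form[0]]
    if kind == 'EXPR':
        return fn(*[evaluate(a) for a in form[1:]])  # evaluate the arguments
    else:                                            # FEXPR: raw argument list
        return fn(form[1:])

defun_expr('PLUS', lambda a, b: a + b)
defun_fexpr('QUOTE-ARGS', lambda args: args)         # sees the raw forms

print(evaluate(['PLUS', 1, 2]))                      # 3
print(evaluate(['QUOTE-ARGS', ['PLUS', 1, 2]]))      # [['PLUS', 1, 2]]
```

Kay's question in the excerpt above amounts to: why not make the FEXPR convention the primitive one and evaluate on demand?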
In 1981 the emerging Common Lisp community turned to Scheme for some of its motivation and inspiration [Steele 1984]. Adopting lexical scoping proved one of the most important decisions the Common Lisp group ever made.
One aspect of Scheme that was not adopted, however, was a single namespace for functions and values, along with uniform evaluation rules for expressions in function and argument positions within the language. ……
In this paper, we shall refer to two abstract dialects of Lisp called Lisp1 and Lisp2.
Lisp1 has a single namespace that serves a dual role as the function namespace and value namespace; that is, its function namespace and value namespace are not distinct. In Lisp1, the functional position of a form and the argument positions of forms are evaluated according to the same rules. Scheme [Rees 1986] and the language being designed by the EuLisp group [Padget 1986] are Lisp1 dialects.
Lisp2 has distinct function and value namespaces. In Lisp2, the rules for evaluation in the functional position of a form are distinct from those for evaluation in the argument positions of the form. Common Lisp is a Lisp2 dialect. ……
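A sketch of the distinction, with Python dictionaries standing in for the two namespaces (everything here is illustrative, not Common Lisp or Scheme internals):

```python
# Lisp2: separate function and value cells per name, so the same symbol
# can simultaneously name a function and a piece of data.

lisp2_functions = {'LIST': lambda *a: list(a)}   # the function cell
lisp2_values    = {'LIST': (1, 2, 3)}            # the value cell, same name

def lisp2_call(name, *args):
    return lisp2_functions[name](*args)   # function position: function cell

def lisp2_value(name):
    return lisp2_values[name]             # argument position: value cell

print(lisp2_call('LIST', lisp2_value('LIST')))   # [(1, 2, 3)]

# Lisp1: one table with uniform evaluation rules, so binding LIST to data
# would shadow the function of the same name.
lisp1 = {'LIST': (1, 2, 3)}
```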
Most Lisp dialects adopted a two-namespace approach to the naming problem. To some extent this is because most dialects followed Lisp 1.5 [McCarthy 1965] or dialects derived from Lisp 1.5.
Lisp 1.5 broke symbols into values and functions; values were stored on an association list, and functions on the property lists of symbols. Compiled and interpreted code worked in different ways. In the interpreter, the association list was where all bindings were kept. When an identifier was encountered (an 'atomic symbol' in Lisp 1.5 terminology), it was taken as a variable to be evaluated for its value. First the APVAL part of the symbol was interrogated (an APVAL was "A Permanent, system-defined VALue" stored in a specific place in the symbol). Second, the association list was searched. Finally, if no binding was found, an error was signaled.
When a combination was encountered, the function position was evaluated differently from other positions. First, the symbol was interrogated to see whether there was a function definition associated with it; then the association list was searched.
Here we can see two namespaces at work, though nonsymbol variables were treated somewhat uniformly (Lisp 1.5 did not have lexical variables in interpreted code): when there were no function definitions associated with a symbol, there was one namespace, which was explicitly represented by the association list.
Compiled code worked a slightly different way, and from its internals we can see where the two namespaces came about in descendants of Lisp 1.5.
The Lisp 1.5 compiler supported 'common' and 'special' variables. A common variable enabled compiled and interpreted code to communicate with each other. A common variable was bound on an explicit association list; to evaluate such a variable, a call to EVAL was emitted to determine the value. A special variable was the compiler's modeling of free variables and closely matched what is now called 'shallow binding.' Ordinary variables compiled into what we have termed lexical variables.
Thus, we see all of the components of the two-namespace world in Lisp 1.5, along with some of the components of the one-namespace world.
It seems that the designers of Lisp 1.5 thought of function names as being different from variable names — the Lisp 1.5 interpreter looked at the property list of the named atomic symbol first, and so it can be argued that a function was considered a property of a symbol. The designers used the terminology that a symbol "stands for" a function while a variable "refers" to a value.
Lisp for the PDP-6 at MIT adopted the style of the Lisp 1.5 special variables for dynamic binding in both compiled and interpreted code, thereby eliminating common variables. Compilers were still written that tried to interpret special variables as lexical variables in as many places as possible. The value of a symbol was stored in the value cell of the symbol, and the function remained on the property list as it did in Lisp 1.5.
MacLisp [Pitman 1983] is a direct descendant of the PDP-6 Lisp and is a Lisp2 dialect. MacLisp uses a sophisticated form of link table, which is made possible by the separation of namespaces. In particular, function-defining functions have controlled access into the places where functions are stored so that the link tables can be correctly maintained. ……
Common Lisp was the result of a compromise between a number of dialects of Lisp, most of them descendants of MacLisp, all of them Lisp2s. ……
A free variable in Common Lisp is currently taken to be a dynamic rather than a lexical reference because there is no global lexical environment in Common Lisp. In this code
(DEFUN PLUS (X Y) (+ X Y))
(DEFUN SOMEWHAT-MORE-THAN (X) (PLUS X *SOMEWHAT*))
The reference to *SOMEWHAT* is special (dynamic). On the surface, in Lisp1 the reference to PLUS is free also, and thus it is special (dynamic).
When a variable is declared special, a free reference to it is to the most recent dynamically bound value for it; all bindings of it become special (dynamic) bindings. When a variable is declared constant, free references to it are to its permanent constant value, and compilers are free to fold that constant into the code they emit.
We introduce the concept of global variables; when a variable is global, free references to it are to the value cell of the symbol named by the variable, and all bindings of it are still lexical.
To avoid a compiler warning for free use of a variable like *SOMEWHAT*, the variable is declared SPECIAL. In order to have compilers for Lisp1 not warn about the free use of PLUS, it would be unfortunate to have to declare it SPECIAL.
As noted, in Common Lisp the default for free variable references is SPECIAL; that is, a free variable reference is taken to be a special variable reference. If the default in Lisp1 were that free variable references were global (lexical), then it would make sense for a compiler not to warn about such free variable references.
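The two reference regimes in the passage above can be sketched in Python (an illustrative model only; the names dynamic_stack, global_cells, and somewhat_more_than are hypothetical and not part of any Lisp implementation):

```python
# Illustrative model of dynamic (special) vs. global variable reference.

dynamic_stack = {"*SOMEWHAT*": [5]}       # most recent binding is last
global_cells = {"PLUS": lambda x, y: x + y}

def dynamic_ref(name):
    """Free reference to a special variable: most recent dynamic binding."""
    return dynamic_stack[name][-1]

def dynamic_bind(name, value):
    dynamic_stack.setdefault(name, []).append(value)

def dynamic_unbind(name):
    dynamic_stack[name].pop()

def somewhat_more_than(x):
    # (PLUS X *SOMEWHAT*): PLUS is a global reference, *SOMEWHAT* a dynamic one
    return global_cells["PLUS"](x, dynamic_ref("*SOMEWHAT*"))

print(somewhat_more_than(1))              # → 6, the outer binding of *SOMEWHAT*
dynamic_bind("*SOMEWHAT*", 100)           # a new dynamic binding shadows it
print(somewhat_more_than(1))              # → 101
dynamic_unbind("*SOMEWHAT*")              # the old binding is visible again
print(somewhat_more_than(1))              # → 6
```

The global reference to PLUS goes straight to a fixed cell, while the special reference to *SOMEWHAT* follows whatever dynamic binding is most recently active.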
Inspired by ACTORS [Greif and Hewitt][Smith and Hewitt], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [Church], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to:
⑴ alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc. by clarifying the embedding of non-recursive control structures in a recursive host language like LISP.
⑵ explain how to use these control structures, independent of such issues as pattern matching and data base manipulation.
⑶ have a simple concrete experimental domain for certain issues of programming semantics and style.
The variables in a lambda expression are dummy or bound variables because systematically changing them does not alter the meaning of the expression. Thus λ[[u;v];v²+u] means the same thing as λ[[x;y];y²+x].
We shall sometimes use expressions in which a variable is not bound by a lambda. For example, in the function of two variables λ[[x;y];xⁿ+yⁿ] the variable n is not bound. This is called a free variable. It may be regarded as a parameter. Unless n has been given a value before trying to compute with this function, the value of the function must be undefined. ……
When the compiler is called upon to compile a function, it looks for an EXPR or FEXPR on the property list of the function name. The compiler then translates this S-expression into an S-expression that represents a subroutine in the LISP Assembly Language (LAP). LAP then proceeds to assemble this program into binary program space. Thus an EXPR, or an FEXPR, has been changed to a SUBR or an FSUBR, respectively. ……
Free variables in compiled functions must be declared before the function is compiled. ……
A variable is bound in a particular function when it occurs in a list of bound variables following the word LAMBDA or PROG. Any variable that is not bound is free. ……
When a variable is used free, it must have been bound by a higher level function. If a program is being run interpretively, and a free variable is used without having been bound on a higher level, error diagnostic *A 8* will occur.
If the program is being run compiled, the diagnostic may not occur, and the variable may have the value NIL.
There are three types of variables in compiled functions: ordinary variables, SPECIAL variables, and COMMON variables. SPECIAL and COMMON variables must be declared before compiling. Any variable that is not declared will be considered an ordinary variable.
When functions are translated into subroutines, the concept of a variable is translated into a location where an argument is stored. If the variable is an ordinary one, then a storage location for it is set up on the push-down list. Other functions cannot find this private cell, making it impossible to use it as a free variable. SPECIAL variables have the indicator SPECIAL on their property lists. Following the indicator there is a pointer to a fixed cell. When this variable is bound, the old value is saved on the push-down list, and the current value is stored in the SPECIAL cell. When it is no longer bound, the old value must be restored. When a function uses this variable free, then the quantity in the SPECIAL cell is picked up. SPECIAL variables are declared by using the pseudo-function special[a], where a is a list of variable names. …… SPECIAL variables are inexpensive and will allow free communication among compiled functions. They do not increase run time significantly. SPECIAL variables cannot be communicated between the interpreter and compiled functions. COMMON variables have the flag COMMON on their property lists; however, this is only used to inform the compiler that they are COMMON, and is not needed at run time. COMMON variables are bound on an a-list by the compiled functions. When they are to be evaluated, eval is given this a-list. This happens at run time.
The use of COMMON variables will slow down any compiled function using them. However, they do provide complete communication between interpreted and compiled functions. COMMON variables are declared by common[a], where a is a list of variable names. ……
LISP is designed in such a way that all functions for which the possibility of recursion can exist are in fact recursive. This requires that all temporary stored results related to the computation that is in progress be set aside when a piece of coding is to be used recursively, and that they be later restored. This is done automatically and need not be programmed explicitly.
All saving of temporary results in LISP is performed on a linear block of storage called the push-down list. Each set of stored data that is moved onto the push-down list is in a block labeled with its size and the name of the subroutine from which it came. Since it is in the nature of recursion that the first block to be saved is always the last block to be restored, it is possible to keep the push-down list compact.
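The LIFO discipline of the push-down list can be sketched in Python (a hypothetical model, not the actual IBM 7090 implementation; pushdown and factorial are illustrative names):

```python
# Each recursive entry saves a labeled block of temporaries on the
# push-down list; blocks are restored in strictly reverse order.

pushdown = []

def factorial(n):
    pushdown.append(("FACTORIAL", n))     # save this call's temporaries
    result = 1 if n == 0 else n * factorial(n - 1)
    name, saved_n = pushdown.pop()        # last block saved, first restored
    assert (name, saved_n) == ("FACTORIAL", n)
    return result

print(factorial(5))                       # → 120
print(pushdown)                           # → [], the list is compact again
```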
There are several ways in which to maintain the values of free variables (and thus the computational environment) in order to handle cases like the one above. All of the techniques used involve some increase in time. One approach makes it easy to access the value of a free variable, but relatively expensive to enter and leave a block in which the variable is bound or local. This approach is favored in recent implementations of LISP. We shall call it the "shallow access" approach.
Another approach is to make it relatively easy to enter and leave a block in which a free variable is bound, but relatively expensive to access a free variable. This approach is used in the classical "alist" implementation of LISP. Several Algol systems have opted for a similar approach. We shall call this the "deep access" approach. Both of these approaches allow one to modify the value of the variable as well as access it. The access time is approximately the same as modification time in both approaches.
Let us consider the "shallow access" approach first. By "shallow access" we mean that the value of a free variable can be obtained in a single fetch from memory. Since the current value may be stored in a difficult-to-determine location up in the stack, a special cell for its current value is used. In many recent LISP implementations this special value cell is stored as a property of the atom structure, and is unique to the free variable. In order to maintain the proper value in the special cell, extra work must be done every time a function or a block is entered in which the variable is an argument or is local. On entering such a function or block one usually saves the old value which is in the special cell on the stack along with sufficient information to allow one to know from where the stack value came. On leaving such a function or block one must remember to store the value saved in the stack in the special cell.
The "deep access" approach forces one to search upward in the stack for the most recent value of the free variable. Such a search may be quite deep and slow if the free variable was bound very far up in the stack. However, this approach has the advantage of avoiding the need for an extra special value cell. In addition, we shall see later that this approach sometimes allows one to use free variables with greater ease and flexibility than the "shallow access" approach. ……
Certain LISP systems use "deep access" in interpreted functions and "shallow access" in compiled functions. If free variables in compiled functions are declared to be COMMON, their values will be stored on the "alist". In this manner one can solve the environment problem. The cost is a very slow interpreter and "deep access" to free variables.
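Assuming a toy model in Python, the trade-off between the two disciplines can be sketched as follows (all names here are illustrative, not drawn from any actual implementation):

```python
# Shallow access: one fixed value cell per variable, a single fetch to read,
# but extra save/restore work on every binding and unbinding.
value_cell = {"X": 1}
save_stack = []

def shallow_bind(name, value):
    save_stack.append((name, value_cell[name]))  # save old value on the stack
    value_cell[name] = value

def shallow_unbind():
    name, old = save_stack.pop()                 # restore on block exit
    value_cell[name] = old

def shallow_ref(name):
    return value_cell[name]                      # a single fetch

# Deep access: bindings are pushed on an a-list; binding is cheap, but a
# reference must search upward for the most recent binding.
alist = []

def deep_bind(name, value):
    alist.append((name, value))                  # cheap: just push

def deep_ref(name):
    for n, v in reversed(alist):                 # search, possibly far up
        if n == name:
            return v
    raise NameError(name)

shallow_bind("X", 2)
assert shallow_ref("X") == 2
shallow_unbind()
assert shallow_ref("X") == 1

deep_bind("X", 1)
deep_bind("X", 2)
assert deep_ref("X") == 2                        # most recent binding wins
```

Access and modification cost a single fetch in the shallow scheme, while the deep scheme pays a search on every reference but nearly nothing on entry and exit.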
There are several important consequences of closing every lambda expression in the environment from which it is passed (i.e., in its "lexical" or "static" environment).
First, the axioms of lambda calculus are automatically preserved. Thus, referential transparency is enforced. This in turn implies that there are no "fluid" variable bindings (as there are in standard stack implementations of LISP such as MacLISP).
Second, the upward funarg problem [Moses] requires that the environment structure be potentially tree-like.
Finally, the environment at any point in a computation can never be deeper than the lexical depth of the expression being evaluated at that time; i.e., the environment contains bindings only for variables bound in lambdas lexically surrounding the expression being evaluated. This is true even if recursive functions are involved. ……Furthermore, it is not even necessary to scan the environment for the variable, since its value must be in a known position relative to the top of the environment structure; this position can be computed by a compiler at compile time on the basis of lexical scope.
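The compile-time computation of that position can be sketched in Python (a hypothetical model; lexical_address and lookup are illustrative names, not part of any Scheme compiler):

```python
# A variable's value sits at a (depth, index) position that depends only
# on the lexical nesting of lambdas, so it can be computed before running.

def lexical_address(name, static_env):
    """static_env: list of frames, innermost first; each frame is a name list."""
    for depth, frame in enumerate(static_env):
        if name in frame:
            return depth, frame.index(name)
    raise NameError(name)

def lookup(addr, runtime_env):
    depth, index = addr
    return runtime_env[depth][index]    # no scan for the name at run time

# Inside the inner lambda of ((lambda (x y) (lambda (z) x)) ...):
static_env = [["z"], ["x", "y"]]        # innermost frame first
addr = lexical_address("x", static_env)
print(addr)                             # → (1, 0), computed at "compile time"

runtime_env = [[10], [20, 30]]          # values in the same shape
print(lookup(addr, runtime_env))        # → 20
```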
The biggest hit for me while at SAIL in late '69 was to really understand LISP. Of course, every student knew about car, cdr, and cons, but Utah was impoverished in that no one there used LISP and hence, no one had penetrated the mysteries of eval and apply. I could hardly believe how beautiful and wonderful the idea of LISP was [McCarthy 1960]. I say it this way because LISP had not only been around enough to get some honest barnacles, but worse, there were deep flaws in its logical foundations. By this, I mean that the pure language was supposed to be based on functions, but its most important components — such as lambda expressions, quotes, and conds — were not functions at all, and instead were called special forms. Landin and others had been able to get quotes and conds in terms of lambda by tricks that were variously clever and useful, but the flaw remained in the jewel. In the practical language things were better. There were not just EXPRs (which evaluated their arguments), but FEXPRs (which did not). My next question was, why on earth call it a functional language? Why not just base everything on FEXPRs and force evaluation on the receiving side when needed? I could never get a good answer, but the question was very helpful when it came time to invent Smalltalk, because this started a line of thought that said “take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it”. That was the promise of LISP and the lure of lambda — needed was a better “hardest and most profound” thing. Objects should be it. ……
This Smalltalk language (today labeled -71) was very influenced by FLEX, PLANNER, LOGO, META II, and my own derivatives from them. ……
As I mentioned previously, it was annoying that the surface beauty of LISP was marred by some of its key parts having to be introduced as “special forms” rather than as its supposed universal building block of functions. ……
An elegant approach was suggested in a CMU thesis of Dave Fisher [Fisher 70] on the synthesis of control structures. ALGOL60 required a separate link for dynamic subroutine linking and for access to static global state. Fisher showed how a generalization of these links could be used to simulate a wide variety of control environments. One of the ways to solve the “funarg problem” of LISP is to associate the proper global state link with expressions and functions that are to be evaluated later so that the free variables referenced are the ones that were actually implied by the static form of the language. The notion of “lazy evaluation” is anticipated here as well.
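Python closures, which carry their static environment with them, give a compact way to see the point (an illustrative sketch; make_adder is a hypothetical example, not from Fisher's thesis):

```python
# A function that escapes upward carries its defining environment, so the
# free variables referenced are the ones implied by the static form of
# the program — one way to state the upward funarg problem's solution.

def make_adder(k):
    def add(x):
        return x + k       # k is free in add; resolved in the defining scope
    return add             # the function escapes upward with its environment

add5 = make_adder(5)
k = 1000                   # a purely dynamic-binding LISP would see this k
print(add5(3))             # → 8, the closure sees the k it was created with
```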
DEFINE - This is analogous to the MacLISP DEFUN primitive (but note that the LAMBDA must appear explicitly!). It is used for defining a function in the "global environment" permanently, as opposed to LABELS (see below), which is used for temporary definitions in a local environment. DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom).
Atoms which are not atomic symbols (identifiers) evaluate to themselves. Typical examples of such atoms are numbers, arrays, and strings (character arrays). Symbols are treated as identifiers or variables. They may be lexically bound by lambda-expressions. There is a global environment in which certain symbols initially have as their values primitive operations such as, for example, CAR, CONS, and PLUS. SCHEME differs from most LISP systems in that the atom CAR is not itself an operation (in the sense of being an invocable object, e.g. a valid first argument to APPLY), but only has one as a value when considered as an identifier.
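A toy Python model of this arrangement (evaluate and global_env are hypothetical names; this is a sketch of the idea, not SCHEME's actual interpreter):

```python
# The symbol CAR is not itself invocable; it is an identifier whose value
# in the global environment happens to be a primitive operation.

global_env = {
    "CAR": lambda pair: pair[0],
    "CONS": lambda a, d: (a, d),
    "PLUS": lambda x, y: x + y,
}

def evaluate(expr, env):
    if isinstance(expr, str):          # a symbol: look up its value
        return env[expr]
    if not isinstance(expr, list):     # numbers etc. evaluate to themselves
        return expr
    op, *args = expr                   # application: evaluate operator too
    return evaluate(op, env)(*[evaluate(a, env) for a in args])

print(evaluate(["CAR", ["CONS", 1, 2]], global_env))   # → 1
print(evaluate(["PLUS", 3, 4], global_env))            # → 7
```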
LISP and APL were two early influences, as was Stephen Wolfram's 1981 SMP symbolic computation language.
Two general approaches were taken in order to have programming languages with fully specified semantics: (i) Better specification methods were developed that were adequate to fully describe existing large programming languages such as PL/1. (ii) New languages were developed with clean mathematical structure that were more amenable to formal description. McCarthy pioneered the latter approach in basing the LISP 1.5 programming language on a simpler functional language, sometimes called “pure LISP” or M-expressions, that was defined in a mathematical style, independent of a particular machine or implementation.
Pure LISP allows the definition and evaluation of functions over S-expressions. The lambda notation for functional abstraction is borrowed from Church’s lambda calculus, but otherwise there is little similarity between the two systems. Pure LISP has no higher-order functions, and call-by-value evaluation order is implicitly assumed. Two special constructs, conditional expressions and the label operator, allow recursive functions to be defined. Limited as it is, pure LISP is nevertheless powerful enough to express all partial recursive functions and hence provides an adequate basis for a theory of computation.
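The role of the label operator in tying the recursive knot can be sketched in Python under call-by-value (label here is a hypothetical analogue for illustration, not McCarthy's notation):

```python
# Conditional expressions plus a LABEL-like operator suffice to define
# recursive functions without any global definition mechanism.

def label(name, body):
    """Tie the recursive knot: body receives a way to call itself."""
    def self_call(*args):
        return body(self_call)(*args)
    return self_call

# label[LENGTH; λ[[lst]; [null[lst] → 0; T → add1[length[cdr[lst]]]]]]
length = label("LENGTH",
               lambda rec: lambda lst: 0 if not lst else 1 + rec(lst[1:]))
print(length(["A", "B", "C"]))   # → 3
```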
The LISP 1.5 programming language extends pure LISP in many ways that make it more useful in practice but at the same time tend to destroy its clean mathematical properties. Its semantics is defined by an interpreter written in a mixture of pure LISP and English. The distinction between programs and data is blurred. Higher-order functions, assignment, and a global symbol table are added.