POP-2

Summary

POP-2 (also called POP2) is a programming language developed around 1970 by Robin Popplestone and Rod Burstall at the University of Edinburgh, from Popplestone's earlier language POP-1 (originally named COWSEL, developed in 1968). It drew on many sources: the languages Lisp and ALGOL 60, and theoretical ideas from Peter J. Landin. It used an incremental compiler, which gave it some of the flexibility of an interpreted language, including allowing new function definitions at run time and modification of function definitions while a program runs (both features of dynamic compilation), without the overhead of an interpreted language.[1]

POP-2
Paradigm: Multi-paradigm: structured, reflective, procedural
Family: Lisp: POP
Designed by: Robin Popplestone; Rod Burstall, Steve Hardy; Robert Rae, Allan Ramsay
Developers: University of Edinburgh, University of Sussex
First appeared: 1970
Stable release: 1975
Typing discipline: dynamic
Implementation language: assembly
Platform: Elliott 4130, ICT 1909, BESM-6, PDP-10, PDP-11
OS: George, TOPS-10, Unix
License: Proprietary
Major implementations: WPOP
Dialects: POP-10
Influenced by: Lisp, ALGOL 60, COWSEL (renamed POP-1)
Influenced: POP-11

Description

Stack

POP-2's syntax is ALGOL-like, except that assignments are in reverse order: instead of writing

a := 3;

one writes

3 -> a;

The reason for this is that the language has an explicit notion of an operand stack. Thus, the prior assignment can be written as two separate statements:

3;

which evaluates the value 3 and leaves it on the stack, and

-> a;

which pops the top value off the stack and assigns it to the variable 'a'. Similarly, the function call

f(x, y, z);

can be written as

x, y, z; f();

(commas and semicolons being largely interchangeable) or even

x, y, z.f;

or

(x, y, z).f;

Because of the stack-based paradigm, there is no need to distinguish between statements and expressions; thus, the two constructs

if a > b then
       c -> e
   else
       d -> e
   close;

and

if a > b then
       c
   else
       d
   close -> e;

are equivalent (close is used here because endif had not yet become a common end-of-if notation).
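
The stack discipline can be illustrated outside POP-2 itself. The following sketch is ordinary Python, not POP-2; the list, dictionary, and helper names (stack, env, push, pop_into, f) are invented for illustration. It shows how an assignment decomposes into a push followed by a pop, and how arguments pushed before a call are consumed by the call:

# Illustrative Python sketch of the POP-2 operand stack (not POP-2 code).
stack = []   # the operand stack
env = {}     # variable bindings

def push(value):            # "3;" leaves a value on the stack
    stack.append(value)

def pop_into(name):         # "-> a;" pops the top value into a variable
    env[name] = stack.pop()

# "3 -> a;" behaves like the two separate statements "3;" and "-> a;":
push(3)
pop_into("a")
assert env["a"] == 3

# "x, y, z; f();" pushes the arguments, then the call consumes them:
def f():                    # hypothetical three-argument function
    z, y, x = stack.pop(), stack.pop(), stack.pop()
    push(x + y + z)

push(1); push(2); push(3)
f()
assert stack.pop() == 6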

Arrays and doublet functions

There are no special language constructs to create arrays or record structures as they are commonly understood: instead, these are created with the aid of special built-in functions, e.g., newarray[2] (for arrays that can contain any type of item) and newanyarray[3] (for arrays restricted to particular types of item).

Thus, array element and record field accessors are simply special cases of a doublet function: a function that has another function attached as its updater,[4] which is called on the receiving side of an assignment. For example, if the variable a contains an array, then

3 -> a(4);

is equivalent to

updater(a)(3, 4);

where the built-in function updater returns the updater of the doublet. updater is itself a doublet, so it can be used to change the updater component of a doublet.
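
The doublet mechanism can be mimicked in other languages. The following sketch is Python, not POP-2, and the class and variable names (Doublet, cells, a) are invented for illustration; it pairs an access function with an updater so that the analogue of 3 -> a(4) becomes a call to the updater with the value followed by the index:

# Illustrative Python sketch of a POP-2 doublet (not POP-2 code).
class Doublet:
    def __init__(self, getter, updater):
        self._getter = getter
        self.updater = updater   # called on the receiving side of an assignment

    def __call__(self, *args):
        return self._getter(*args)

# A small "array" built as a doublet over a Python list.
cells = [0] * 10
a = Doublet(lambda i: cells[i],
            lambda value, i: cells.__setitem__(i, value))

# "3 -> a(4);" corresponds to updater(a)(3, 4):
a.updater(3, 4)
assert a(4) == 3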

Functions

Variables can hold values of any type, including functions, which are first-class objects. Thus, the following constructs

function max x y; if x > y then x else y close end;

and

vars max;
   lambda x y; if x > y then x else y close end -> max;

are equivalent.

An interesting operation on functions is partial application (sometimes termed currying). In partial application, some number of the rightmost arguments of the function (the last ones placed on the stack before the function is invoked) are frozen to given values, producing a new function of fewer arguments, which is a closure of the original function. For instance, consider a function for computing general second-degree polynomials:

function poly2 x a b c; a * x * x + b * x + c end;

This can be bound, for instance as

vars less1squared;
   poly2(% 1, -2, 1%) -> less1squared;

such that the expression

less1squared(3)

applies the closure of poly2 with three arguments frozen, to the argument 3, returning the square of (3 - 1), which is 4. The application of the partially applied function causes the frozen values (in this case 1, -2, 1) to be added to whatever is already on the stack (in this case 3), after which the original function poly2 is invoked. It then uses the top four items on the stack, producing the same result as

poly2(3, 1, -2, 1)

i.e.

1*3*3 + (-2)*3 + 1
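
The freezing of rightmost arguments can also be sketched outside POP-2. The following fragment is Python, not POP-2, and the helper name freeze_right is invented for illustration; the frozen values are appended after whatever the caller supplies, mirroring the way they are added to the stack after the supplied argument (the opposite end from Python's functools.partial, which fixes the leftmost arguments):

# Illustrative Python sketch of POP-2-style partial application (not POP-2 code).
def freeze_right(f, *frozen):
    # Return a closure of f with its rightmost arguments fixed.
    def closure(*supplied):
        return f(*supplied, *frozen)   # supplied arguments first, frozen values last
    return closure

def poly2(x, a, b, c):                 # general second-degree polynomial
    return a * x * x + b * x + c

less1squared = freeze_right(poly2, 1, -2, 1)   # freeze a, b and c

assert less1squared(3) == poly2(3, 1, -2, 1) == 4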

Operator definition

In POP-2, it was possible to define new operations (operators in modern terms).[5]

vars operation 3 +*;
    lambda x y; x * x + y * y end -> nonop +*

The first line declares a new operation +* with precedence (priority) 3. The second line creates a function f(x,y)=x*x+y*y, and assigns it to the newly declared operation +*.
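
With this definition in place, an infix expression such as 3 +* 4 would evaluate to 3*3 + 4*4, i.e. 25.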

History

The original version of POP-2 was implemented on an Elliott 4130 computer at the University of Edinburgh (with only 64 KB of RAM, doubled to 128 KB in 1972).[6]

POP-2 was ported to the ICT 1900 series on a 1909 at Lancaster University by John Scott in 1968.

In the mid-1970s, POP-2 was ported to BESM-6 (POPLAN System).

Later versions were implemented for the Computer Technology Limited (CTL) Modular One, the PDP-10, and the ICL 1900 series (running the operating system George). Julian Davies, in Edinburgh, implemented an extended version of POP-2, which he named POP-10, on the PDP-10 computer running TOPS-10. This was the first dialect of POP-2 that treated case as significant in identifier names, used lower case for most system identifiers, and supported long identifiers with more than 8 characters.

Shortly after that, a new implementation known as WPOP (for WonderPop) was implemented by Robert Rae and Allan Ramsay in Edinburgh, on a research-council funded project. That version introduced caged address spaces, some compile-time syntactic typing (e.g., for integers and reals), and some pattern matching constructs for use with a variety of data structures.

In parallel with that, Steve Hardy at the University of Sussex implemented a subset of POP-2, which he named POP-11, on a Digital Equipment Corporation (DEC) PDP-11/40 computer. It was originally designed to run on the DEC operating system RSX-11D, in time-shared mode for teaching, but that caused so many problems that an early version of Unix was installed and used instead. That version of Pop-11 was written in assembly language under Unix, and code was incrementally compiled to an intermediate bytecode which was interpreted. The port was completed around 1976, and as a result Pop-11 was used in several places for teaching. To support its teaching role, many of the syntactic features of POP-2 were modified, e.g., replacing function ... end with define ... enddefine and adding a wider variety of looping constructs with closing brackets that match their opening brackets, instead of POP-2's use of close for all loops. Pop-11 also introduced a pattern matcher for list structures, making it far easier to teach artificial intelligence (AI) programming.

Around 1980, Pop-11 was ported to a VAX-11/780 computer by Steve Hardy and John Gibson, and soon afterwards the interpreted intermediate code was replaced by a full incremental compiler producing machine code. The availability of the compiler and all its subroutines at run time made it possible to support far richer language extensions than are possible with macros. As a result, Pop-11 was used (by Steve Hardy, Chris Mellish and John Gibson) to produce an implementation of Prolog, using the standard syntax of Prolog; the combined system became known as Poplog, to which Common Lisp and Standard ML were added later. This version was later ported to a variety of machines and operating systems, and Pop-11 became the dominant dialect of POP-2, still available in the Poplog system.

Around 1986, a new AI company, Cognitive Applications Ltd., collaborated with members of the University of Sussex to produce a variant of Pop-11 named AlphaPop, running on Apple Mac computers with integrated graphics. This was used for many commercial projects, and to teach AI programming in several universities. Because it was implemented in an early dialect of C, using an idiosyncratic compiler, it was very hard to maintain and upgrade to new versions of the Mac operating system. AlphaPop was also not "32-bit clean", because high address bits were used as tag bits to signify the types of objects, which was incompatible with the use of memory above 8 MB on later Macintoshes.


References

General
  • Burstall, R.; Collins, J.; Popplestone, R. (1968). Programming in Pop-2. Edinburgh: Edinburgh University Press.
  • Davies, D.J.M. (1976). "POP-10 Users' Manual". Computer Science Report (25).
  • Smith, R.; Sloman, A.; Gibson, J. (1992). "POPLOG's two-level virtual machine support for interactive languages". In D. Sleeman and N. Bernsen (ed.). Research Directions in Cognitive Science. Vol. 5: Artificial Intelligence. Lawrence Erlbaum Associates. pp. 203–231.
  • POP references
Inline
  1. ^ Burstall, R.M.; Collins, J.S.; Popplestone, R.J. (1968). POP-2 Papers (PDF). London: The Round Table.
  2. ^ Rubinstein, Mark; Sloman, A. (October 1985 – April 1989). "Help Newarray". University of Birmingham. Retrieved 22 March 2024.
  3. ^ Hardy, Steven; Williams, John; Sloman, A. (January 1978 – April 1986). "Help Newanyarray". University of Birmingham. Retrieved 22 March 2024.
  4. ^ Sloman, A. (April 1985). "Help Updater". University of Birmingham. Retrieved 21 March 2024.
  5. ^ POP-2 Reference Manual, page 217, and An Introduction to the Study of Programming Languages, by David William Barron, page 75
  6. ^ Dunn, Raymond D. (February 1970). "POP-2/4100 Users' Manual" (PDF). School of Artificial Intelligence. University of Edinburgh. Retrieved 3 June 2022.

External links

  • The Early Development of POP
  • Computers and Thought: A practical Introduction to Artificial Intelligence
  • An Introduction to the POP-2 Programming Language, by R. M. Burstall and J. S. Collins.
  • POP-2 Reference Manual, by R. M. Burstall and J. S. Collins.