I’ve got a real treat for you pathological programming fans!
Today, we’re going to take a quick look at the world’s most *useful* pathological programming language: TECO.
TECO is one of the most influential pieces of software ever written. If, by chance, you’ve ever heard of a little editor called “emacs”; well, that was originally a set of editor macros for TECO (EMACS = Editor MACroS).
As a language, it’s both wonderful and awful. On the good side, the central concept of the language is wonderful: it’s a powerful language for processing text, which works by repeatedly finding text that matches some kind of pattern, taking some kind of action when it finds it, and then selecting the next pattern to look for. That’s a very natural, easy-to-understand way of writing programs to do text processing. On the bad side, it’s got the most god-awful hideous syntax ever imagined.
History
———
TECO deserves a discussion of its history – its history is basically the history of how programmers’ editors developed. This is a *very* short version of it, but it’s good enough for this post.
In the early days, PDP computers used a paper tape for entering programs. (Mainframes mostly used punched cards; minis like the PDPs used paper tape). The big problem with paper tape is that if there’s an error, you need to either create a *whole new tape* containing the correction, or carefully cut and splice the tape together with new segments to create a new tape (and splicing was *very* error prone).
This was bad. And so, TECO was born. TECO was the “Tape Editor and COrrector”. It was a Turing-complete programming language in which you could write programs to make your corrections. So you’d feed the TECO program in to the computer first, and then feed the original tape (with errors) into the machine; the TECO program would do the edits you specified, and then you’d feed the corrected program to the compiler. It needed to be Turing complete, because you were *writing a program* to find the stuff that needed to be changed.
A language designed to live in the paper-tape world had to have some major constraints. First, paper tape is *slow*. *Really* slow. And punching tape is a miserable process. So you *really* wanted to keep things as short as possible. So the syntax of TECO is, to put it mildly, absolutely mind-boggling. *Every* character is a command. And I don’t mean “every punctuation character”, or “every letter”. *Every* character is a command. Letters, numbers, punctuation, line feeds, control characters… Everything.
But despite the utterly cryptic nature of it, it was good. It was *very* good. So when people started to use interactive teletypes (at 110 baud), they *still* wanted to use TECO. And so it evolved. But that basic tape-based syntax remained.
When screen-addressable terminals came along – vt52s and such – suddenly, you could write programs that used cursor control! The idea of a full-screen editor came along. Of course, TECO lovers wanted their full screen editor to be TECO. For Vaxes, one of the very first full screen editors was a version of TECO that displayed a screen full of text, and did commands as you typed them; and for commands that actually needed extra input (like search), it used a mode-line on the bottom of the screen (exactly the way that emacs does now).
Not too long after that, Richard Stallman and James Gosling wrote emacs – the editor macros for TECO. Originally, it was nothing but editor macros for TECO to make the full screen editor easier to use. But eventually, they rewrote it from scratch, to be Lisp based. And not long after that, TECO faded away, only to be remembered by a bunch of aging geeks. The syntax of TECO killed it; the simple fact is, if you have an alternative to the mind-boggling hideousness that is TECO syntax, you’re willing to put up with a less powerful language that you can actually *read*. So almost everyone would rather write their programs in Emacs lisp than in TECO, even if TECO *was* the better language.
The shame of TECO’s death is that it was actually a really nice programming language. To this day, I still come across things that I need to do that are better suited to TECO than to any modern day programming language that I know. The problem though, and the reason that it’s disappeared so thoroughly, is that the *syntax* of TECO is so mind-bogglingly awful that no one, not even someone as insane as I am, would try to write code in it when there are other options available.
A Taste of TECO Programming
——————————
Before jumping in and explaining the basics of TECO in some detail, let’s take a quick look at a really simple TECO program. This program is absolutely *remarkably* clear and readable for TECO source. It even uses a trick to allow it to do comments. The things that look like comments are actually “goto” targets.
    0uz                        ! clear repeat flag !
    <j 0aua l                  ! load 1st char into register A !
    <0aub                      ! load 1st char of next line into B !
    qa-qb"g xa k -l ga -1uz '  ! if A>B, switch lines and set flag !
    qbua                       ! load B into A !
    l .-z;>                    ! loop back if another line in buffer !
    qz;>                       ! repeat if a switch was made last pass !
The basic idea of TECO programming is pretty simple: search for something that matches some kind of pattern; perform some kind of edit operation on the location you found; and then choose new search to find the next thing to do. The example program above finds the beginnings of lines, and does a swap-sort. So it finds each sequential pair of lines; if they’re not in the right order, it swaps them, and sets a flag indicating that another pass is needed.
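For comparison, here is the same algorithm in a readable language: one pass of adjacent-line swaps by first character, repeated while the “swap made” flag is set. This is a Python sketch of what the TECO program above does (the function name is mine, not anything from TECO):

```python
def swap_sort_lines(text):
    """Bubble-sort the lines of a buffer by their first character,
    the way the TECO example does: repeat full passes over the
    buffer until a pass makes no swap."""
    lines = text.split("\n")
    swapped = True
    while swapped:                # the qz flag: repeat while a swap was made
        swapped = False
        for i in range(len(lines) - 1):
            # compare the first character of each adjacent pair
            # (what the TECO code loads into registers A and B)
            if lines[i][:1] > lines[i + 1][:1]:
                lines[i], lines[i + 1] = lines[i + 1], lines[i]
                swapped = True    # -1uz: set the repeat flag
    return "\n".join(lines)
```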
TECO programs work by editing text in a buffer. Every buffer has a *pointer* which represents the location where any edit operations will be performed. The cursor always sits *between* two characters.
The first thing that most TECO programs do is specify what it is that they want to edit – that is, what they want to read into the buffer. The command to do that is “ER”. So to edit a file foo.txt, you’d type “ERfoo.txt”, and then hit the escape key twice to tell it to execute the command; then the file would be loaded into the buffer.
### TECO Commands
TECO commands are generally single characters. But there is some additional structure to allow arguments. There are two types of arguments: numeric arguments, and text arguments. Numeric arguments come *before* the command; text arguments come *after* the command. Numeric values used as arguments can be either literal numbers, commands that return numeric values, “.” (for the index of the buffer pointer), or numeric values joined by arithmetic operators like “+”, “-“, etc.
So, for example, the “C” command moves the pointer forward one character. If it’s preceded by a numeric argument *N*, it will move forward *N* characters. The “J” command jumps the pointer to a specific location in the buffer: the numeric argument is the offset from the beginning of the buffer to the location where the pointer should be placed.
String arguments come *after* the command. Each string argument can be delimited in one of two ways. By default, a string argument continues until it sees an Escape character, which marks the end of the string. Alternatively (and easier to read), if the command is prefixed by an “@” character, then the *first character* after the command is the delimiter, and the string will continue until the next instance of that character.
So, for example, we said “ER” reads a file into the buffer. So normally, you’d use “ERfoo.txt<ESC>”. Alternatively, you could use “@ER'foo.txt'”, or “@ER$foo.txt$”, or “@ERqfoo.txtq”, or even “@ER foo.txt ” (with a space as the delimiter).
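The two delimiter conventions are easy to model. Here’s a rough Python sketch of how a TECO-style reader might pull a string argument off the input stream (ESC shown as `"\x1b"`; the function name is mine):

```python
ESC = "\x1b"

def read_string_arg(input_text, at_modified=False):
    """Return (string_argument, rest_of_input).
    Default: the argument runs up to the next ESC character.
    With the '@' modifier: the first character is the delimiter,
    and the argument runs to the next occurrence of it."""
    if at_modified:
        delim, body = input_text[0], input_text[1:]
    else:
        delim, body = ESC, input_text
    arg, _, rest = body.partition(delim)
    return arg, rest
```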
Commands can also be modified by placing a “:” in front of them. For most commands, “:” makes them return either a 0 (to indicate that the command failed), or a -1 (to indicate that the command succeeded). For others, the colon does *something else*. The only way to know is to know the command.
TECO has variables; in its own inimitable fashion, they’re not called variables; they’re called Q-registers. There are 36 global Q-registers, named “A” through “Z” and “0”-“9”. There are also 36 *local* Q-registers (local to a particular *macro*, aka subroutine), which have a “.” character in front of their name.
Q-registers are used for two things. First, you can use them as variables: each Q-register stores a string *and* an integer. Second, any string stored in a Q-register can be used as a subroutine; in fact, that’s the *only* way to create a subroutine. The commands to work with Q-registers include:
* “nUq”: “n” is a numeric argument; “q” is a register name. This stores the value “n” as the numeric value of the register “q”.
* “m,nUq”: both “m” and “n” are numeric arguments, and “q” is a register name. This stores “n” as the numeric value of register “q”, and then returns “m” as a parameter for the next command.
* “n%q”: add the number “n” to the numeric value stored in register “q”.
* “^Uqstring”: Store the string as the string value of register “q”.
* “:^Uqstring”: Append the string parameter to the string value of register “q”.
* “nXq”: clear the text value of register “q”, and copy the next “n” lines into its string value.
* “m,nXq”: copy the character range from position “m” to position “n” into register “q”.
* “.,.+nXq”: copy “n” characters following the current buffer pointer into register “q”.
* “Qq”: use the integer value of register “q” as the parameter to the next command.
* “nQq”: use the ascii value of the Nth character of register “q” as the parameter to the next command.
* “:Qq”: use the length of the text stored in register “q” as the parameter to the next command.
* “Gq”: copy the text contents of register “q” to the current location of the buffer pointer.
* “Mq”: invoke the contents of register “q” as a subroutine.
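Since every Q-register holds both an integer and a string, and the string can be invoked as a subroutine, the register model itself is tiny. A Python sketch (the class and method names are mine, not TECO’s):

```python
class QRegister:
    """A TECO Q-register: one integer slot and one string slot."""
    def __init__(self):
        self.number = 0
        self.text = ""

class QStore:
    def __init__(self):
        self.regs = {}           # 'A'..'Z', '0'..'9', plus '.'-prefixed locals

    def reg(self, name):
        return self.regs.setdefault(name, QRegister())

    def u(self, name, n):        # nUq: store n as the numeric value
        self.reg(name).number = n

    def pct(self, name, n):      # n%q: add n to the numeric value
        self.reg(name).number += n

    def ctrl_u(self, name, s, append=False):   # ^Uq / :^Uq
        r = self.reg(name)
        r.text = (r.text + s) if append else s

    def m(self, name, run):      # Mq: invoke the string as a subroutine;
        return run(self.reg(name).text)        # 'run' is the macro executor
```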
There are also a bunch of commands for printing out some part of the buffer. For example, “T” prints the current line. The print command to print a string is control-A; so the TECO hello world program is: “^AHello world^A<ESC><ESC>”. Is that pathological enough?
Commands to remove text include things like “D” to delete the character *after* the pointer; “FD”, which takes a string argument, finds the next instance of that argument, and deletes it; “K” to delete the rest of the *line* after the pointer, and “HK” to delete the entire buffer.
To insert text, you can either use “I” with a string argument, or <TAB> with a string argument. If you use the tab version, then the tab character is part of the text to insert.
There are, of course, a ton of commands for moving the point around the buffer. The basic ones are:
* “C” moves the pointer forward one character if no argument is supplied; if it gets a numeric argument *N*, it moves forwards *N* characters. C can be preceded by a “:” to return a success value.
* “J” jumps the pointer to a location specified by its numeric argument. If there is no location specified, it jumps to location 0. J can be preceded by a “:” to see if it succeeded.
* “ZJ” jumps to the position *after* the last character in the file.
* “L” is pretty much like “C”, except that it moves by lines instead of characters.
* “R” moves backwards one character – it’s basically the same as “C” with a negative argument.
* “S” searches for its argument string, and positions the cursor *after* the last character of the search string it found, or at position 0 if the string isn’t found.
* “number,numberFB” searches for its argument string between the buffer positions specified by the numeric arguments.
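The pointer arithmetic behind those commands is straightforward. Here’s a Python sketch of the C/R/J/ZJ/S semantics over a plain string buffer (all names are mine; this is a model, not real TECO):

```python
class Buffer:
    def __init__(self, text):
        self.text = text
        self.point = 0           # the pointer sits *between* characters

    def c(self, n=1):            # nC: move forward n characters
        self.point = max(0, min(len(self.text), self.point + n))

    def r(self, n=1):            # nR: move backward n characters
        self.c(-n)

    def j(self, n=0):            # nJ: jump to an absolute position (J alone: 0)
        self.point = max(0, min(len(self.text), n))

    def zj(self):                # ZJ: jump past the last character
        self.point = len(self.text)

    def s(self, pattern):        # S: search; leave the point after the match,
        i = self.text.find(pattern, self.point)    # or at 0 on failure
        if i < 0:
            self.point = 0
            return False
        self.point = i + len(pattern)
        return True
```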
Search strings can include something almost like regular expressions, but with a much worse syntax. I don’t want to hurt your brain *too* much, so I won’t go into detail.
And last, but definitely not least, there’s control flow.
First, there are loops. A loop is “n<commands>”, which executes the commands between the angle brackets “n” times. Within the loop, “;” branches out of the loop if the last search command failed; “n;” exits the loop if the value of “n” is greater than or equal to zero; “:;” exits the loop if the last search succeeded. “F>” jumps to the closing bracket of the loop (think C’s continue), and “F<” jumps back to the beginning of the loop.
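In pseudo-Python, the loop and its exit commands behave roughly like this (the token names are mine, standing in for TECO’s in-loop commands):

```python
def run_loop(n, body):
    """n<...>: execute the loop body n times.  The body is a list of
    callables; each may return a control token mirroring one of
    TECO's in-loop commands."""
    for _ in range(n):
        for step in body:
            token = step()
            if token == "exit":       # ';' after a failed search, etc.
                return "exited early"
            if token == "continue":   # 'F>': jump to the closing bracket,
                break                 # i.e. start the next pass
    return "ran all passes"
```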
Conditionals are generally written "n"Xthen-command-string|else-command-string'". (Watch out for the quotes in there; there's no particularly good way to quote it, since it uses both of the normal quote characters. The double-quote character introduces the conditional, and the single-quote marks the end.) In this command, the "X" is one of a list of conditional tests, which define how the numeric argument "n" is to be tested. Some possible values of "X" include:
* "A" means "if n is the character code for an alphabetic character".
* "D" means "if n is the character code of a digit"
* "E" means "if n is zero or false"
* "G" means "if n is greater than zero"
* "N" means "if n is not equal to zero"
* "L" means "if n is a numeric value meaning that the last command succeeded"
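The condition codes map onto simple predicates on “n”. A Python sketch (the dictionary is mine; “L” here uses the convention from the “:”-modified commands above, where a negative value means success):

```python
COND = {
    "A": lambda n: chr(n).isalpha(),   # n is the code of an alphabetic char
    "D": lambda n: chr(n).isdigit(),   # n is the code of a digit
    "E": lambda n: n == 0,             # n is zero or false
    "G": lambda n: n > 0,              # n is greater than zero
    "N": lambda n: n != 0,             # n is not equal to zero
    "L": lambda n: n < 0,              # n < 0, i.e. a ':' command succeeded
}

def run_conditional(n, code, then_part, else_part=None):
    """n"X then | else ': pick a branch by testing n with condition X."""
    return then_part if COND[code](n) else else_part
```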
Example TECO Code
——————–
This little ditty reads a file, and converts tabs to spaces assuming that tab stops are every 8 spaces:
FEB :XF27: F H M Y<:N ;'.U 0L.UAQB-QAUC<QC-9"L1;'-8%C>9-QCUD S DQD<I >>EX
That’s perfectly clear now, isn’t it?
Ok, since that was so easy, how about something *challenging*? This little baby takes a buffer, and executes its contents as a BrainFuck program. Yes, it’s a BrainFuck interpreter in TECO!
@^UB#@S/{^EQQ,/#@^UC#@S/,^EQQ}/@-1S/{/#@^UR#.U1ZJQZ^SC.,.+-^SXQ-^SDQ1J# @^U9/[]-+<>.,/<@:-FD/^N^EG9/;>J30000<0@I//>ZJZUL30000J0U10U20U30U60U7 @^U4/[]/@^U5#<@:S/^EG4/U7Q7; -AU3(Q3-91)"=%1|Q1"=.U6ZJ@i/{/Q2@i/,/Q6@i/} /Q6J0;'-1%1'>#<@:S/[/UT.U210^T13^TQT;QT"NM5Q2J'>0UP30000J.US.UI <(0A-43)"=QPJ0AUTDQT+1@I//QIJ@O/end/'(0A-45)"=QPJ0AUTDQT-1@I/ /QIJ@O/end/'(0A-60)"=QP-1UP@O/end/'(0A-62)"=QP+1UP@O/end/'(0A-46)"=-.+QPA ^T(-.+QPA-10)"=13^T'@O/end/'(0A-44)"=^TUT8^TQPJDQT@I//QIJ@O/end/'(0A-91) "=-.+QPA"=QI+1UZQLJMRMB -1J.UI'@O /end/'(0A-93)"=-.+QPA"NQI+1UZQLJMRMC-1J.UI'@O/end/' !end!QI+1UI(.-Z)"=.=@^a/END/^c^c'C>
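For reference, here’s roughly what that wall of line noise implements: a complete Brainfuck interpreter is only a couple of dozen lines in a readable language. This is my own Python sketch, not a translation of the TECO code, though it keeps the same 30000-cell tape you can spot in the TECO source:

```python
def brainfuck(program, stdin=""):
    """Interpret a Brainfuck program; return its output as a string."""
    tape, ptr, out = [0] * 30000, 0, []
    inp = iter(stdin)
    # Pre-match the brackets, as the TECO version does with its [ ] scan.
    jumps, stack = {}, []
    for i, ch in enumerate(program):
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    pc = 0
    while pc < len(program):
        ch = program[pc]
        if ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".":
            out.append(chr(tape[ptr]))
        elif ch == ",":
            tape[ptr] = ord(next(inp, "\0"))
        elif ch == "[" and tape[ptr] == 0:
            pc = jumps[pc]        # skip forward past the matching ']'
        elif ch == "]" and tape[ptr] != 0:
            pc = jumps[pc]        # loop back to the matching '['
        pc += 1
    return "".join(out)
```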
If you’re actually insane enough to want to try this masochistic monstrosity, you can get a TECO interpreter, with documentation and example programs, from [here][teco-site].
[teco-site]: http://almy.us/teco.html
Wow.
My brain just OD-ed on syntax insanity.
And people complain about python’s love of whitespace… which isn’t that big of a deal (and is fairly useful) after your first non-trivial program.
I have fond memories of hacking teco both for emacs libraries and standalone programs. But I’m glad I’m not doing that any more. I still have some of my own code — the emacs library stuff can be a little more readable due to long English names in subroutine calls (m.m) and variable names (m.v), but in general it’s pretty much line noise.
Ah, this brings back fond memories of youth.
Whoa! It’s been a long time since I’ve seen the words “TECO” and “VT52” in the same paragraph. I believe that happened in one of DEC’s mill buildings.
I enjoy your blog.
Peter
Sweet! Now I can run the BrainFuck implementation of DeCSS through a TECO implementation of BrainFuck!
FORTRAN looks good by comparison.
This gives me a wicked idea: I should write a description of BASIC in the style of a “pathological programming” post. Umpteen features of each dialect I encountered — from 1983-vintage BASIC-XL to Visual Basic 6.0 — could qualify them for pathological status.
My favorite languages are TECO and APL, although J is almost like a mongrel between APL and TECO …
The series of posts could easily swell out to be about general pathological issues. This one reminds me of meeting the -vi editor first time on a Sun workstation in Dallas. IIRC it has three main states. I was there to work on microelectronic processes, not computers, so I hated the threshold factor at the time.
(But actually the real pathology IIRC was that the default account was set up to use the three button mouse (before gaming) and the manual was written for two buttons usage. Or maybe I thought I could adapt the computer to my habits, instead of vice versa. Anyhoo, the setting to change was hidden away under the moved extra button functions. Sick, I tell you!)
Blake:
I thought *I* was insane… But actually *using* TECO to run a brainfuck interpreter to run DeCSS… That’s just twisted.
I just missed the TECO era myself, but I remember older hackers playing a game of “what does your name do in TECO?” In any case, that totally looks like line noise. 😉
The real successor to TECO isn’t emacs, it’s vim. See towers of hanoi and mandelbrot in vim.
beza1e1:
Maybe in *spirit*, vim is the successor to teco. But historically, emacs is the successor. (And personally, I think that linguistically, I’d have to call Perl the child of teco. Perl has much of the same general sensibility of teco as a language; and it’s almost as hard to read. 🙂 )
Also, I’m not sure, but I don’t think that vim is actually turing complete. Certainly expressive, but I don’t think that it quite makes it all the way.
Wow, what a blast from the past. I might still have a program for creating a table of contents written in TECO back in the late 70’s. Of course, it might be archived on punch cards. Debugging, now that was a fun job in TECO.
it may have been implicit in the background, but the architecture of the ITS OS (not TOPS, rather the “Incompatible Timesharing System”, a kind of an *alter* ego to CTSS, with *unusual* security features) which ran on the MIT-AI machines contributed to TECO’s power. in particular, this OS received characters from each and every terminal when a key was depressed, not merely when, as was common in display terminals at the time, a CRLF or Return was entered. this meant the program could respond after each and every key. that meant that when using TECO, a skillful operator could massage code or text as they went along, and minimize the number of keystrokes required to achieve a goal. the effect was especially mesmerizing on the AI Lab’s Knight displays, in the dark.
as bad as TECO seems, two points.
first, writing TECO macros was and is as deep an intellectual activity as writing APL, J, or perhaps as studying wei-qi, and was fun. indeed, people often got teased when they showed off their collection of macros because they were essentially arguing “My thesis will be so much easier to do and faster if I have just the right set of TECO macros to do it with.”
second, Data General ripped TECO off from DEC, although they called it something else entirely. (i believe DG was founded by DEC refugees.) i had to use a DG machine for a project at IBM Federal Systems and colleagues were astonished when i sat down at this brand new DG machine and began editing without much trouble using their editor: it was TECO under a new name.
David Harmon is right about the game of using your name as TECO input. By a strange coincidence, “Robert G. Munck” is an Ada compiler.
Unfortunately, it’s Ada 82.
I’ve heard rumors that somebody I knew briefly in school had written a BASIC interpreter in vi (without the m) macros. I didn’t attempt to verify this, and probably wouldn’t’ve been able to follow what was going on if I had tried.
Apparently the macro language does have two stacks, which along with some sort of decision-making gives you turing-completeness.
Q: and what happens if you make a mistake with teco?
A: you Type Every Character Over
I liked TECO, both as a computer scientist and as a professional writer. Before that, when I started programming in 1966 (yes, 40 years ago), my word processing consisted of redlining printouts and then replacing incorrectly keyed punchcards, thus correcting one line of text at a time.
When I got to Caltech in 1968, we had one computer system for the whole campus, including NAS/JPL. It was an IBM 7090/IBM 7094 pair of computers, with a cool smart interface (which crashed if one computer ordered the other to rewind). Besides FORTRAN, I mostly ran our home-grown language CITRAN (based on JOSS). CITRAN supposedly prevented self-modification, but I found a hack that bypassed that limitation.
In any case, I began using TECO as word processor for writing fiction (including what I contend was the world’s first Cyberpunk story published) and snailmail letters. I coded paragraphs, and then combined them in different permutations for different subsets of readers. Fortunately, my grandmother never got my erotic poems to girlfriends.
I never liked vi as much as TECO. This had only minor influence on me when I co-implemented JOT (Juggler Of Text) with Ted Nelson, and demo’d it at the world’s first personal computer conference, Philadelphia, 1976. That was before Apple, IBM, and Tandy made PCs, by the way. Being the 2nd in command at Xanadu is another story, however.
So, if you think TECO is still the best tool for some problems, but its syntax is horrid… what would you suggest for a modern successor to TECO?
Oh, I used to love TECO. It was such a step up from UPDATE (which painfully allowed line-by-line editing of a disk file via punch card input). I remember having to convert Fortran source to PL1, and a macro of no more than a few lines long did 90% of the changes automatically.
Of course the real joke was that any random collection of say 16 bytes would powerfully change the edit file -in some sort of profound way, but figuring out what it really did was quite a project.
Jason:
I really don’t know of anything that has the features of TECO that I thought were so useful. The closest that I can come in modern languages would probably be either Perl or Icon, but neither of them really handles the unstructured text-buffer as well as TECO.
One of the things that was really great about TECO was that it was fundamentally oriented towards a text buffer. Not a buffer of lines – but a totally free-form buffer. You could choose to view it as a list of lines *if you wanted*, by using line-oriented commands. But you didn’t *have* to. You could treat it as lines up to some point, then treat line-breaks as just another character for a while, then back to lines. You could make an “array” in a buffer by putting one value on each line; you could make a “map” in a file by putting name,value pairs on each line; and then you could just look at those as text.
I don’t know of anything comparable today.
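A crude illustration of the point, in Python (names mine): the same buffer can be sliced as lines one moment and as raw characters the next, and nothing about the buffer itself forces either view on you.

```python
def as_map(buffer):
    """Read a buffer of 'name,value' lines as a map -- a line-oriented
    view, chosen only because it's convenient at this moment."""
    return dict(line.split(",", 1) for line in buffer.splitlines() if line)

def count_commas(buffer):
    """The character-oriented view of the very same buffer: newlines
    are just another character here."""
    return buffer.count(",")
```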
Nice article. A big nit, the use of the term “mini” as in minicomputer (aka PDP8, PDP11 and Vax) is incorrect. Teco was used on DEC timesharing machines, such as the PDP-10, which were huge, sucked power, and cost a million dollars or more.
Later on, the minicomputers got blessed with their own version of TECO, but it predates the concept of a mini-computer.
This article brought tears to my eyes. I learned and used TECO on a pdp-10 while working for a defense subcontractor in New England in the mid 1970s. I thought then and I think now that TECO kicks ass, although I use emacs nowadays
Do I get extra points for locating “Introduction to Time Sharing”, a May 1979 reprint of the DEC manuals, inside of 15 minutes? This includes oldies-but-goodies such as:
* Introduction to Time Sharing
* Introduction to TECO (Text Editor and COrrector)
* DEC System 10 TECO (reflecting the software as of version 23)
* Maklib User’s guide
* PIP Peripheral Interchange Program
My real name starts with a D, so typing my name into TECO tended to wreak havoc and destruction 🙂
And boy, the DEC-10 was NOT a mini, we sat at glass teletypes and watched the altar boys behind glass move between the sacred shrines, putting up new hosties (=tapes) every now and then.
An exercise in an early computing class was the three-tape sorting routine using MergeSort….
The thing that always fascinated me about TECO was the idea of a programming language in which literally any string was a valid (if nonsensical) program. Many times, looking at the bizarre things which occur in Perl as a result of its philosophy of not requiring you to follow any sane sort of syntax, instead just trying to do something close to what you probably wanted (the perl manuals make frequent references to what they call the “perl guesser”, the part of the parser that, presented with a string, tries to guess what it is you were trying to say), I have daydreamed about a language which had literally no syntax errors and in which all possible typos generate different incorrect behavior instead of an error. But TECO actually was that, more or less.
I would like to nominate x86 assembly for similar treatment.
TECO’s “free-form buffer” thingie: Are you familiar with the xTalk family of languages (i.e., HyperTalk (HyperCard), SuperTalk, Revolution (nee Transcript), and a few others)? If I understand what you mean by “free-form buffer”, in xTalk you can treat any text as a free-form buffer. You can slice up text by “characters” or “chars” (i.e., a string’s individual characters), by “words” (delimited by pretty much any whitespace character, AFAIK), by “items” (which are ordinarily delimited by commas, but you can use different characters as item delimiters, too), or by “lines” (delimited by returns).
Here are a few examples of valid xTalk code:
put “fred” into word 4 of ThisVar
put char 3 of word 6 of line 13 of ThisVar into ThisChar
if (word (X1 + Y1) of ThisVar) = (char A to B of ThatVar) then
put “-complete” after word WordKount of ThisVar
set the itemDelimiter to “;”
put item SemiColNum of ThisText into Fred
set the itemDelimiter to “,”
put Fred into item CommaNum of ThisText
And here’s something cute which comes to you “factory equipped” in at least one xTalk, and can be written as a function in any other:
replace “tuba” with MyFavInstrument in BigTextVar
“the worlds most useful pathological programming language: TECO”
I think that the most in widespread use pathological programming language would be the Sendmail configuration file language.
According to you guys, is it possible to program in natural language? Here is an article on this field from New Scientist (sorry, I copy in the whole article, because the link is dead):
Writing software is a painstaking business in which you can’t afford to slip up: get a single character wrong and the instructions either do nothing or go horribly wrong.
In one infamous software error, a misplaced minus sign resulted in a fighter jet’s control system flipping the aircraft on its back whenever it crossed the equator.
Now a new system that takes the drudgery – and some of the potential for slip-ups – out of programming is about to be launched. Its inventor hopes it will one day turn us all into programmers.
Bob Brennan, a software engineer at Cambridge-based start-up Synapse Solutions, has developed a piece of software that allows you to write a program by keying in what you want it to do in everyday language.
Dubbed MI-Tech – short for machine intelligence technology – the software translates a typed wish list into machine code, the basic mathematical language understood by the microprocessors inside computers.
Double meaning
But this is no easy task, because everyday language is riddled with ambiguities and double meanings. “MI-Tech can resolve these ambiguities,” claims Brennan, because it has been taught about the significance of context in the English language.
At the heart of MI-Tech is a store of logical rules. These allow it to extract instructions from statements in ordinary language, which it then translates into machine code. In its present form, MI-Tech has a limited lexicon of only a few hundred words, but Brennan claims this is sufficient for most of the tasks you might ask it to carry out.
Brennan says his program can write code in a fraction of the time that it takes trained programmers. He spent months writing a program manually, producing hundreds of pages of code. But given “just three pages of monologue”, MI-Tech generated a program that performed exactly the same tasks.
Strict syntax
Vikram Adve, a programming-language researcher at the University of Illinois at Urbana-Champaign remains sceptical. “Every programming language that I have heard of has a well-defined syntax and well-defined semantics,” he says.
And for a very good reason: all programming languages operate on instruction compilers and hardware that are essentially dumb. “Neither can really interpret the intention of the programmer,” says Adve. So programming languages are deliberately designed to be unambiguous to avoid confusion.
Brennan agrees that previously this required strict syntax. “The problem before was that computers couldn’t cope with ambiguities, but now they can,” he says. MI-Tech’s small lexicon means there is less room for confusion. And if it’s unsure of your meaning, MI-Tech will just say it doesn’t understand.
Brennan is not going into any detail about how the system works until his patents are granted. But he hopes to be licensing his program to software companies within 18 months so that they can build it into their own packages. If that happens, you might well be able to add programs of your own design to your PC – without knowing how to code.
1900 GMT, 4 April 2001
by Duncan Graham-Rowe
New Scientist Online News
My first computing experiences were on a DECsystem 10 using a TTY33, and I vaguely remember using TECO to edit Algol code. Editing code on a paper roll printout at 110 baud (chunka-chunka-chunka) was a very slow and painful process. Backing up source code on the paper tape punch/reader that was part of the terminal was a good idea. Lunar lander was the first “game” I remember; I think it was written in Dartmouth Basic. Now I find it faintly amusing that I’ve been coding since before my manager was born 🙂
I did a co-op term at DEC in the mid-90s, and TECO was still there in VMS for anyone who wanted it. In the internal mathematics newsgroup, someone posted (or had posted) a TECO program that calculated the digits of π using the spigot algorithm.
ekzept: yes, Data General was founded by DEC refugees. The first chapter of Tracy Kidder’s “Soul of a New Machine” gives some of the history.
One day, a friend at school had an array of data that needed to be transposed before his program could read it. I whipped up a TECO macro in about 5 minutes, and it ran in about ten. Yes, it would have been quicker to just reverse the indexes in his program’s data read loops. And, as i hadn’t saved the macro, it’s lost. Even the next day, i’d no idea how i might have done it.
In ’87, i picked up a free TECO for DOS, written in C. It was rather incomplete. I gave up.
TECO is an RPN stack language, like forth. It has block structure. I’ve written TECO with indention. It doesn’t make it that much more readable.
Still, IMO, it beats lisp as an extension language. In the original EMACS, i could drop into TECO mode and perform wonders. ITS TECO had some nice extensions, like ‘eat whitespace around point’. These days, with EMACS, i make do with keyboard macros. Pretty sad. At least i can pipe regions through arbitrary filters. I write C programs if i have to.
“… is it possible to program in natural language?”
Henh. Back in 1974 or so, a company dared to come out with a programming language that they *trademarked* as “English”!
Anyway: My introduction to TECO was in 1970, on OS/12, which was OS/8 with extra features to use on a PDP-12, which, in turn, was a curious hybrid of a PDP-8 and a LINC (or some such). The default operating system was DIAL, on which the editor’s sole “commands” consisted of two *knobs*, one to scroll up and down through the file, and the other to scroll back and forth on the current line (and parts of the adjacent lines).
Enter TECO for the PDP-12:
It used the PDP-8 command set entered via the Skokie (IL) ASR-33 Teletype. However, instead of having to type ht$$ to see the results of a (set of) command(s), the text buffer was constantly displayed on the PDP-12’s CRT. (Bright green on black — whooee!) The line containing “point” was always the middle line, with a screen cursor under it.
Once I left college, I didn’t encounter TECO again until 1978, when I took a job working on PDP-10’s, using TEC124, that is, TECO v1.24, together with some local extensions.
After that job, I went to work for another company, using VAXen, and again I used TECO to varying degrees up until 1985 or so.
I am here to tell you that learning “vi” was no mean feat — it was cracked up to be trivial to learn and use, but I found it insanely difficult. At this point, I can use “vim” about as completely as I once used TECO — but without the looping, conditionals, character classes (e.g. a single letter to represent all alphas, or all numerals, or all punctuation, or all word separators, etc), and the ability to embed a newline/CRLF/EOL into a string, and so on, “vi” and its variants are no match for TECO.
Ah, nostalgia for TECO. I was exposed to TECO but never had a chance to learn it. I’ve never learned Emacs either, although I’m ashamed to admit it, being a Lisp freak (as well as a Forth freak).
I did learn vi though, and that’s what I use to this day. When we got our first Unix machines, we were told not to use Emacs because it was a memory hog, so I learned vi.
It appears that my Brainfuck interpreter has gained some notoriety. I’m glad nobody has asked me why I actually did it 😉
TECO: Pathological, but nifty.
Learned and used it on dinky 11/03s for many useful utilities, long before discovering C. Reformat an assembler listing/cross-reference so all the octal values here and here, but not there or there, are converted to hex? Sure, our cross-compiler team was going to add that capability RSN – but the customer wanted their hexy listings this week. Worked fine, if a bit slow.

Want a tool to reassemble/link all the changed source files in a directory? Here’s a macro which builds a list of files, writes a command file, chains to the latter… which runs the assembler for each file, then invokes TECO again to run another macro for the link phase, then chains etc etc. Years later I learned of UNIX and make(1) and wept for ten minutes.
hki- Chris
$ewtty:$ex$$
Believe it or not, I STILL use TECO on rare occasions when it seems to be the quickest tool to get the job done. And I’ve been on PDP11/23, VAX, PC, and now UNIX/Linux. Right now I’m basically forced to use GVIM, but for rearranging large text files, sometimes TECO just has that edge…
I still use TECO. In the early 90’s, I downloaded the source so I could port it to an HP-UX machine I was working on (it’s my favorite editor). Since I was working on a lot of portability issues for programs, I would move TECO from machine to machine to find the idiosyncrasies of each system. I’m still using it on my Solaris 8 boxes at work and my Mandriva boxes at home.
Reply to ekzept – The DG version was called SPEED. I used that too
Richard Stallman and James Gosling wrote emacs – the editor macros for TECO.
You mean Guy Steele, not James Gosling. Gosling wrote the first Emacs-like editor for Unix, and this was many years after Stallman and Steele wrote Emacs on ITS. (Gosling’s Emacs was later superseded by the more advanced GNU Emacs, which people nowadays use.)
I still use TECO. I’ve been programming for a living since 1975, and aside from a short gap of a few months in the mid 80’s when I couldn’t find TECO for DOS, I have used it virtually every day. There are things TECO can do that can’t be done with any other editor. It was a great day when I found a version that ran on Intel platforms that was not restricted to the 32k buffer limit, but the old one still comes in handy to break up really, really large log files into more manageable chunks. I use other editors as well, but there’s nothing like TECO for real work.
kk
Great post about TECO. I have recently become interested in it as I have been investigating machines that have no full-screen terminal. In particular I am very interested in its use on the pdp-8.
I’m a relative newcomer to this fantastic and yet pathological language. I stubbornly refuse to call it an editor, because it is not one. It is a programming language with a built-in editor, not vice-versa. Having discovered it in early 2007, I’m still learning as I go, but I’ve already set it up as my primary editor—and I’m not even a full-blown Geek of Computer Science, but only a Geek in Training 😛 I think there are only three packages one needs to get some good tech writing-slash-d.t.p. work done on DOS, Windows, UNIX, you name it: teco, tex, and troff. Well, more is handy too, but that’s about it.
I do have a question about Mr Ekloef’s Brainf. interpreter—there are several control sequences written as ^ followed by a letter. Sometimes, in TECO, ^A means press ^ then press A, and sometimes ^A means Ctl/A. This is ambiguous—can’t there be a simpler convention to follow?
I think TECO is not really that horrible