#!/usr/bin/env retro
# Overview
This is an application for looking up and updating the
documentation for the words provided by RETRO.
# Prepare for Command Line Processing
This application can take a variable number of arguments.
I first check to make sure at least one was passed. If
not, just exit.
~~~
sys:argc n:zero? [ #0 unix:exit ] if
~~~
I also define a marker word here. Later, `restrict-scope` (in the
Missing Words section) uses this to tell which dictionary entries
belong to this application.
~~~
:GLOSSARY-TOOL ;
~~~
If execution reaches this point there's at least one
argument. I use a loop to store arguments into an array
named `Args`.
~~~
'Args d:create #32 allot
#0 sys:argc
[ dup sys:argv s:keep over &Args + store n:inc ] times
drop
~~~
And then populate constants for each one I care about.
~~~
#0 &Args + fetch 'QUERY s:const
#1 &Args + fetch 'TARGET s:const
#2 &Args + fetch 'TARGET2 s:const
~~~
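For illustration (argument values are hypothetical): if the
application is started with the arguments `edit descr s:foo`, then
QUERY will be `edit`, TARGET will be `descr`, and TARGET2 will be
`s:foo`.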
# Data Set
I like plain text formats, so the data is stored as a
text file, with one line per word. Each line has a number
of fields. These are tab separated. The fields are:
| name                        | 0  |
| data stack                  | 1  |
| address stack               | 2  |
| float stack                 | 3  |
| general description         | 4  |
| interpret time description  | 5  |
| compile time description    | 6  |
| class handler               | 7  |
| example 1                   | 8  |
| example 2                   | 9  |
| namespace                   | 10 |
| interface                   | 11 |
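As a purely illustrative example (not an actual entry from the data
file; tabs shown as \t), a line might look roughly like:

    dup\tn-nn\t-\t-\tDuplicate the top item on the stack.\t\t\tclass:word\t{n/a}\t{n/a}\tglobal\tall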
I use a variable named `SourceLine` to point to the
current line contents.
~~~
'SourceLine var
~~~
And a helper word to skip a specified number of fields.
~~~
:skip (n-) [ ASCII:HT s:split drop n:inc ] times ;
~~~
Then it's easy to add words to return each individual
field. I use `skip` to implement `select`, which selects
a specific field.
~~~
:select (n-s)
@SourceLine swap skip ASCII:HT s:split nip ;
~~~
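So, once `SourceLine` points at an entry, `#0 select` should return
the name field and `#4 select` the general description.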
And then named words to access each field. I'm using a set to
generate these; it makes it easier to add fields later.
The other way would be to define them manually:
:field:name #0 select ;
:field:dstack #1 select ;
...
~~~
#0 { 'name 'dstack 'astack 'fstack
'descr 'itime 'ctime 'class
'ex1 'ex2 'namespace 'interface }
[ 'field: s:prepend d:create
dup compile:lit &select compile:call compile:ret
&class:word reclass n:inc ] set:for-each drop
~~~
# Display an Entry
I implement a word to display an entry. This will use a
format like:
name
Data: -
Addr: -
Float: -
A description of the word.
Class Handler: class:word | Namespace: global | Interface Layer: all
If there are specific notes on interpret or compile time
actions, or any examples, they will be displayed after
the description.
Note to self: This is horribly messy and should be rewritten.
~~~
{{
:s:putfmt (s-) s:format s:put ;
:name field:name '%s\n\n s:putfmt ;
:data field:dstack '__Data:__%s\n s:putfmt ;
:address field:astack '__Addr:__%s\n s:putfmt ;
:float field:fstack '__Float:_%s\n\n s:putfmt ;
:description field:descr '%s\n\n s:putfmt ;
:interpret-time field:itime s:length 0; drop
field:itime 'Interpret_Time:\n__%s\n\n s:putfmt ;
:compile-time field:ctime s:length 0; drop
field:ctime 'Compile_Time:\n__%s\n\n s:putfmt ;
:| '_|_ s:put ;
:class field:class 'Class:_%s s:putfmt ;
:namespace field:namespace 'Namespace:_%s s:putfmt ;
:interface field:interface 'Interface_Layer:_%s s:putfmt ;
:example1 field:ex1 '{n/a} s:eq? not 0; drop
field:ex1 s:format '\nExample_#1:\n\n%s\n\n s:putfmt ;
:example2 field:ex2 '{n/a} s:eq? not 0; drop
field:ex2 s:format '\nExample_#2:\n\n%s\n\n s:putfmt ;
---reveal---
:display-result
name
data (stack)
address (stack)
float (stack)
description
interpret-time
compile-time
class | namespace | interface nl
example1
example2 ;
}}
~~~
# Interactions
With the command line data extracted, I can now move on
to the words for handling specific interactions.
There are six primary roles:
* describe word
* add word
* delete word
* edit word
* export data
* list missing words
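Using `glossary` as a hypothetical name for however the script ends
up being invoked (e.g. `retro glossary.forth ...`), these map to
commands like:

    glossary describe s:put
    glossary add s:foo
    glossary delete s:foo
    glossary edit descr s:foo
    glossary export glossary
    glossary missing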
## Describe a Word
~~~
:matched? (-f) field:name TARGET s:eq? ;
:find-and-display-entry
'words.tsv [ s:keep !SourceLine matched? [ display-result ] if ] file:for-each-line ;
~~~
## Missing Words
This compares the words in the dictionary (restricted to those
defined before this application was loaded) against the entries in
`words.tsv` and lists any that lack documentation.
~~~
{{
'GlossaryNames d:create #4097 allot
:restrict-scope 'GLOSSARY-TOOL d:lookup fetch !Dictionary ;
:record-name !SourceLine field:name s:keep over &GlossaryNames + store ;
:populate-names #1 'words.tsv [ record-name n:inc ] file:for-each-line
n:dec &GlossaryNames store ;
:in-set? dup &GlossaryNames set:contains-string? ;
---reveal---
:display-missing
restrict-scope populate-names
[ d:name in-set? [ drop ] [ s:put nl ] choose ] d:for-each ;
}}
~~~
## Add a Word
This just adds a stub to the end of the words.tsv file.
You'll need to run the edit commands for each field to
fully populate it.
~~~
{{
'ADD var
:template '%s\t-\t-\t-\t{n/a}\t\t\tclass:word\t{n/a}\t{n/a}\t{n/a}\t{n/a}\t{n/a}\n ;
:prepare 'words.tsv file:A file:open !ADD ;
:cleanup @ADD file:close ;
---reveal---
:add-word
prepare
TARGET template s:format [ @ADD file:write ] s:for-each
cleanup ;
}}
~~~
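So a hypothetical `add s:foo` would append a line beginning with
`s:foo`, followed by the `-` and `{n/a}` placeholders from the
template above, ready to be filled in with the edit commands.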
## Delete a Word
This works by reading each line and writing it to a new file.
Entries that match the word to delete are discarded. The new
file then replaces the original `words.tsv`.
~~~
{{
'NEW var
:matched? (-f) field:name TARGET s:eq? ;
:prepare '/tmp/words.new file:W file:open !NEW ;
:keep-entry @SourceLine [ @NEW file:write ] s:for-each ASCII:LF @NEW file:write ;
:cleanup @NEW file:close 'mv_/tmp/words.new_words.tsv unix:system ;
---reveal---
:delete-entry
prepare
'words.tsv [ s:keep !SourceLine matched? [ keep-entry ] -if ] file:for-each-line
cleanup ;
}}
~~~
## Edit a Word
Editing is a bit tricky. To keep things as simple as possible, I export
each field to a separate file under `/tmp/`.
~~~
:export-fields
field:name '/tmp/glossary.name file:spew
field:dstack '/tmp/glossary.dstack file:spew
field:astack '/tmp/glossary.astack file:spew
field:fstack '/tmp/glossary.fstack file:spew
field:descr '/tmp/glossary.descr file:spew
field:itime '/tmp/glossary.itime file:spew
field:ctime '/tmp/glossary.ctime file:spew
field:class '/tmp/glossary.class file:spew
field:ex1 '/tmp/glossary.ex1 file:spew
field:ex2 '/tmp/glossary.ex2 file:spew
field:namespace '/tmp/glossary.namespace file:spew
field:interface '/tmp/glossary.interface file:spew ;
~~~
Since I'm dumping a bunch of files into `/tmp/`, I also clean up
when done.
~~~
:delete-temporary
{ '/tmp/glossary.name
'/tmp/glossary.dstack
'/tmp/glossary.astack
'/tmp/glossary.fstack
'/tmp/glossary.descr
'/tmp/glossary.itime
'/tmp/glossary.ctime
'/tmp/glossary.class
'/tmp/glossary.ex1
'/tmp/glossary.ex2
'/tmp/glossary.namespace
'/tmp/glossary.interface }
[ file:delete ] set:for-each ;
~~~
Cleaning the edited data is necessary. This entails:
- removing a trailing newline (or carriage return)
- converting internal newlines and tabs to escape sequences
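As a small illustration with hypothetical content: if the edited
/tmp/glossary.descr contains two lines,

    Duplicate the top item.
    Uses one cell.

plus a trailing newline, `clean` should return the single line

    Duplicate the top item.\nUses one cell.

with the internal newline converted to a literal \n and the trailing
newline chopped off.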
~~~
:clean (s-s)
dup s:length over + n:dec
fetch [ ASCII:LF eq? ] [ ASCII:CR eq? ] bi or [ s:chop ] if
here [ [ ASCII:LF [ $\ , $n , ] case
ASCII:CR [ $\ , $n , ] case
ASCII:HT [ $\ , $t , ] case
,
] s:for-each #0 ,
] dip ;
~~~
After an edit, I need to reassemble the pieces and write them out to
the file. Note that the fields are slurped in reverse order: `s:format`
takes its values from the top of the stack, so the last file read
fills the first `%s`. I'll use `FOUT` as a variable for the file ID.
~~~
'FOUT var
~~~
And provide words like `s:put` and `nl` that write to this file:
~~~
:write-line (s-) [ @FOUT file:write ] s:for-each ;
:write-nl (-) ASCII:LF @FOUT file:write ;
~~~
~~~
:generate-entry
s:empty [ '/tmp/glossary.fstack file:slurp ] sip clean s:keep
s:empty [ '/tmp/glossary.astack file:slurp ] sip clean s:keep
s:empty [ '/tmp/glossary.dstack file:slurp ] sip clean s:keep
s:empty [ '/tmp/glossary.name file:slurp ] sip clean s:keep
'%s\t%s\t%s\t%s\t s:format write-line
s:empty [ '/tmp/glossary.class file:slurp ] sip clean s:keep
s:empty [ '/tmp/glossary.ctime file:slurp ] sip clean s:keep
s:empty [ '/tmp/glossary.itime file:slurp ] sip clean s:keep
s:empty [ '/tmp/glossary.descr file:slurp ] sip clean s:keep
'%s\t%s\t%s\t%s\t s:format write-line
s:empty [ '/tmp/glossary.interface file:slurp ] sip clean s:keep
s:empty [ '/tmp/glossary.namespace file:slurp ] sip clean s:keep
s:empty [ '/tmp/glossary.ex2 file:slurp ] sip clean s:keep
s:empty [ '/tmp/glossary.ex1 file:slurp ] sip clean s:keep
'%s\t%s\t%s\t%s\t s:format write-line ;
~~~
Next, get the editor from the $EDITOR environment variable.
~~~
'EDITOR s:empty [ unix:getenv ] sip 'EDITOR s:const
~~~
~~~
:edit:field (s-)
EDITOR '%s_/tmp/glossary.%s s:format unix:system ;
~~~
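So, assuming $EDITOR is set to `vi`, a hypothetical `'descr edit:field`
should end up running `vi /tmp/glossary.descr`.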
~~~
:select-field
export-fields
TARGET
'name [ 'name edit:field ] s:case
'dstack [ 'dstack edit:field ] s:case
'astack [ 'astack edit:field ] s:case
'fstack [ 'fstack edit:field ] s:case
'descr [ 'descr edit:field ] s:case
'itime [ 'itime edit:field ] s:case
'ctime [ 'ctime edit:field ] s:case
'class [ 'class edit:field ] s:case
'ex1 [ 'ex1 edit:field ] s:case
'ex2 [ 'ex2 edit:field ] s:case
'namespace [ 'namespace edit:field ] s:case
'interface [ 'interface edit:field ] s:case
drop ;
~~~
~~~
:handle-edit
'words.new file:W file:open !FOUT
'words.tsv
[ s:keep !SourceLine field:name TARGET2 s:eq?
[ select-field generate-entry ]
[ @SourceLine write-line ] choose write-nl
] file:for-each-line
@FOUT file:close delete-temporary
'mv_words.new_words.tsv unix:system ;
~~~
## Export Data
In addition to providing a readable piece of documentation for each word,
I provide the ability to export the data into a few formats.
### Glossary
The glossary file consists of the documentation for each word, with a
separator bar between each entry.
~~~
:horizontal-line
#72 [ $- c:put ] times nl nl ;
:export-glossary
'words.tsv
[ s:keep !SourceLine display-result horizontal-line ] file:for-each-line ;
~~~
### TSV
I also provide a way to export the tab-separated data itself. This
will strip out any fields beyond the standard set, which can save
some space if you edit/save the TSV data with a spreadsheet
application.
~~~
:display-fields
{ &field:name
&field:dstack
&field:astack
&field:fstack
&field:descr
&field:itime
&field:ctime
&field:class
&field:ex1
&field:ex2
&field:namespace
&field:interface }
[ call s:put tab ] set:for-each nl ;
:export-tsv
'words.tsv [ s:keep !SourceLine display-fields ] file:for-each-line ;
~~~
### Handle Exports
This is a second level command processor for deciding which export format
to use.
~~~
:export-data
TARGET
'glossary [ export-glossary ] s:case
'tsv [ export-tsv ] s:case
drop ;
~~~
## Help
~~~
:show-help
'RETRO_Glossary_Tool s:put nl
#32 [ $- c:put ] times nl
'describe_<wordname> s:put nl
'delete_<wordname> s:put nl
'add_<wordname> s:put nl
'edit_<field>_<wordname> s:put nl
'export_<format> s:put nl
'missing s:put nl
#32 [ $- c:put ] times nl
'Editor_Fields: s:put nl
'__name\n__dstack\n__astack\n__fstack\n s:format s:put
'__descr\n__itime\n__ctime\n__class\n s:format s:put
'__ex1\n__ex2\n__namespace\n__interface\n s:format s:put
#32 [ $- c:put ] times nl
'Export_Formats: s:put nl
'__glossary s:put nl
'__tsv s:put nl
#32 [ $- c:put ] times nl
;
~~~
# Gopher and HTTP Server
This tool embeds a tiny Gopher and HTTP server designed to run
under inetd.
First, set the port and server to use. I default to 9999 and forthworks.com.
~~~
#9999 'PORT const
'forthworks.com 'DOMAIN s:const
~~~
Next, words to display the main index (when requesting / or an
empty selector).
The Gopher protocol dictates the following format for directory
entries:
<type><description>\t<selector>\t<server>\t<port>\r\n
So `display-entry` constructs these. The selectors take the form
`/desc wordname`; the server and port come from the `DOMAIN` and
`PORT` constants above.
~~~
:display-entry (-)
PORT DOMAIN field:name dup '0%s\t/desc_%s\t%s\t%n\r\n s:format s:put ;
~~~
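For a hypothetical entry named `s:put`, the generated directory line
would look roughly like (tabs shown as \t):

    0s:put\t/desc s:put\tforthworks.com\t9999\r\n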
Next, `gopher:list-words` which iterates over each entry,
generating the directory line for each.
~~~
:gopher:list-words (-)
'words.tsv [ s:keep !SourceLine display-entry ] file:for-each-line ;
~~~
With the Gopher side of the index taken care of, I turn my
attention towards HTTP. In this case, the index is an HTML
page with a hyperlink for each word. Since many word names
contain characters that aren't safe to embed directly in a URL,
this uses the line number in **words.tsv** instead.
As with the Gopher, there's a `display-entry` which makes
the HTML for each line, and an `http:list-words` which uses
this to build an index.
~~~
:sanitize (s-s)
here buffer:set
[ $< [ '&lt; [ buffer:add ] s:for-each ] case
$> [ '&gt; [ buffer:add ] s:for-each ] case
$& [ '&amp; [ buffer:add ] s:for-each ] case
buffer:add ] s:for-each buffer:start s:temp ;
:display-entry (n-n)
field:name sanitize over '<a_href="/%n">%s</a><br>\n s:format s:put ;
:http:list-words (-)
#0 'words.tsv [ s:keep !SourceLine display-entry n:inc ] file:for-each-line drop ;
~~~
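The generated link for the first entry (line 0 of **words.tsv**)
would look something like:

    <a href="/0">wordname</a><br>

where `wordname` stands in for whatever word happens to be on that
line.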
Next, words to display a specific word.
~~~
'Target var
:matched? (-f) field:name @Target s:eq? ;
:gopher:display
'words.tsv [ s:keep !SourceLine matched? [ display-result ] if ] file:for-each-line ;
~~~
And then the actual top level server.
~~~
:eol? (c-f)
[ ASCII:CR eq? ] [ ASCII:LF eq? ] [ ASCII:HT eq? ] tri or or ;
:s:get (a-)
buffer:set [ c:get dup buffer:add eol? not ] while ;
'Selector d:create
#1024 allot
:css (-)
{ '<style>
'tt,_a,_pre,_xmp_{_white-space:_pre;_}
'*_{_font-family:_monospace;_color:_#aaa;_background:_#121212;_font-size:_large;_}
'a_{_color:_#EE7600;_}
'</style>
} [ s:put sp ] set:for-each ;
:entry '<xmp> s:put display-result '</xmp> s:put nl ;
:http:display (-)
#0 'words.tsv [ s:keep !SourceLine dup-pair eq? [ entry ] if n:inc ] file:for-each-line drop-pair ;
:handle-http
css
PORT DOMAIN
'<h2><a_href="http://%s:%n">RETRO_Glossary</a></h2><hr> s:format s:put nl
&Selector ASCII:SPACE s:tokenize #1 set:nth fetch
dup s:length #1 eq?
[ drop http:list-words ]
[ n:inc s:to-number http:display ] choose ;
:gopher:serve
&Selector s:get
&Selector #0 #5 s:substr
'/desc [ &Selector ASCII:SPACE s:tokenize #1 set:nth fetch s:chop s:keep !Target gopher:display ] s:case
'GET_/ [ 'HTTP/1.0_200_OK\nContent-Type:_text/html\n\n s:format s:put handle-http ] s:case
drop gopher:list-words ;
~~~
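For reference, the two kinds of requests handled here look roughly
like this; anything else falls through to the Gopher word list:

    /desc s:put
    GET / HTTP/1.1

The first (a Gopher selector) describes a single word; the second
(an HTTP request line) serves the HTML index or, with a path like
/12, a single entry.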
# Finish
This checks the command line arguments and calls the proper words to
handle each case.
~~~
:process-arguments
QUERY
'describe [ find-and-display-entry ] s:case
'export [ export-data ] s:case
'edit [ handle-edit ] s:case
'add [ add-word ] s:case
'delete [ delete-entry ] s:case
'serve [ gopher:serve ] s:case
'missing [ display-missing ] s:case
drop show-help ;
~~~
~~~
process-arguments
~~~