Prior to 2021 there was no dedicated source for Alph development notes. Any notes that did get published went into my public adversaria over on alph.laemeur.com. Here is a compilation of the Alph-related entries from that resource for the year 2020.
The @-time is when they were POSTed to the server (I didn't keep the adversaria up-to-date in 2020, so they were mostly dumped to the network at the end of the year); the <<XXXXXXXX>> number is when the note was created in my Docuplextron (a UNIX-time integer in base-36, of course), and the (XXXXXXXX) number is the time/date of the note's last modification.
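For reference, the ID scheme seems to work out as follows -- this is a sketch, and the assumption that the integer is in milliseconds (rather than seconds) is mine, based on the fact that an eight-character ID like "KCV2CL4Y" decodes to a plausible mid-2020 date:

```javascript
// Decode a <<XXXXXXXX>> note ID back to a date, and re-encode one.
// Assumes the ID is Date.now() (UNIX time in milliseconds) rendered
// in base-36 and uppercased.
function idToDate(id) {
  return new Date(parseInt(id, 36));
}
function dateToId(date) {
  return date.getTime().toString(36).toUpperCase();
}

console.log(idToDate("KCV2CL4Y").toISOString()); // a date in July 2020
```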
These notes, as you might quickly ascertain, are more for my own use than for the benefit of the public, but I'm publishing them as artifacts of work.
<<KCV2CL4Y>> Alph link note (KHVUV24G)
The Alph link graph is a document.
The document has an ID (URI).
The document may have a type -- a template.
The URI for that template should be a URL to a human readable document describing the template and what it represents, what its member nodes are, etc.
The link graph document will have a "graph" property, which will be a collection of links.
In the Docuplextron, each item in the link store will be a link graph document.
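Putting the properties above together, a link graph document might look something like this. The "id", "type", and "graph" properties are from the notes; the shape of an individual link (subject/relation/object) and the example URIs are my assumptions:

```javascript
// A minimal sketch of a link-graph document as described above.
const linkGraphDoc = {
  // The document's own ID (a URI):
  id: "http://alph.example/graphs/notes-2020",
  // Optional template: a URL to a human-readable document describing
  // what this graph represents and what its member nodes are:
  type: "http://alph.example/templates/citation-graph",
  // The graph itself: a collection of links (shape assumed here):
  graph: [
    { subject: "doc:A", relation: "cites", object: "doc:B" },
    { subject: "doc:A", relation: "tag",   object: "xanalogy" }
  ]
};
```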
<<KHQNZQV2>> Alph Linking TO-DO (KHVUUCT2)
Linking TO-DO (KBK)
[ ]- We need a link-editor
[ ]- We need to get the link-store in shape to handle linking documents of the new structure (Document -> Graph -> EDL/Spans/Docs)
[ ]- We need loadExternal() to populate the link-store from:
- HTTP Headers
- <link> elements in HTML
- <a>nchors in HTML documents
- LD documents
[ ]- Pre-populate the link store with:
- A persistent link graph for tags/categories?
- A session link graph for HTTP header links and other ephemera?
[ ]- Finally, rewrite the link pointer stuff to work with the new link structures.
SHOULD BE A BLAST.
It's extremely annoying that inspiration is only the first part of anything. Then you actually have to do the work.
Here's a neologism: xines. Xanalogical 'zines. I'm sure "xines" is used somewhere already – it's too good not to have been – but I like it. What would a xanalogical 'zine be? I'd imagine something like Web 1.0 pages: short-ish; heavily-linked to other documents, both on- and off-site; exploratory/experimental with layout and graphics; but, these being xanalogical, the potential is there for re-use, cut/paste, mash-ups, deeply linked and respectfully collaborative.
Even if no one implemented the Linked-Data part of it; even if no-one implemented the server-side stuff, like partial transmission, automagic link-store, etc... -- it would be so enormously beneficial for a xanalogical web/system if people just published their writings in plain text, even as an alternative, quotable version. Don't even bother with the <x-text> tags in the HTML version! Just stick a link at the top of the article that says "Quotable plain-text version: <URL>".
Because even without any xanalogical/alphic server features, textfiles.com is one of the best sources for quotable text right now. The big down-side to that site is that many (most!) of the files are not UTF-8, and so addressing errors are inevitable.
Archive.org WOULD be a great resource, except their plain-texts are pretty much all badly OCRed transcripts of scanned books, and they are so error-ridden that they're often unusable as quotable sources.
Gutenberg.org also would be great, except I've had problems with their weird caching/anti-leeching server features.
JRNL:KC6.0549 Link/Node editor
Perhaps a node editor instead of (or in addition to) a link editor in the Docuplextron. You'd select a node (document, media fragment, EDL), open the node editor, and it would compile all instances ... ah, I just had the idea.
Okay, we do both. Link editor looks like this:
[_NODE_] → [_relation_] → [_NODE_]
[store] [delete] [cancel]
Which is for editing a single link in a single graph document. Then a node editor is a compilation of all instances of a particular node across all of the graph documents in the workspace. You can manipulate each link:
→ [_relation_] → [_NODE_] - [✓] [X]
→ [_relation_] → [_NODE_] - [✓] [X]
← [_relation_] ← [_NODE_] - [✓] [X]
→ [_relation_] → [_NODE_] - [✓] [X]
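The compilation step behind that node-editor mockup might look like this -- a hypothetical sketch, assuming the subject/relation/object link shape, that gathers every link in every graph document where a given node appears, whether as source or target:

```javascript
// Compile all instances of a node across all graph documents in the
// workspace. "direction" records whether the node is the link's
// subject ("out") or its object ("in").
function compileNodeInstances(graphDocs, nodeId) {
  const instances = [];
  for (const doc of graphDocs) {
    for (const link of doc.graph) {
      if (link.subject === nodeId) {
        instances.push({ graph: doc.id, direction: "out",
                         relation: link.relation, other: link.object });
      } else if (link.object === nodeId) {
        instances.push({ graph: doc.id, direction: "in",
                         relation: link.relation, other: link.subject });
      }
    }
  }
  return instances;
}
```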
Okay, so what the heck is up with composition floaters, anyway? I haven't really worked on these things since Hyperama (2017), and now that I've been playing with the PROTODOCUPLEXTRON, they've become a point of interest. First issue: pressing [Enter] doesn't do anything. :( ... The orphan text in a blank composition pane is stored as a noodle ... I remember that much. In fact, any time you insert text into a composition pane, it's stored as a noodle. But, as [Enter] does not behave as one would expect (neither splitting a <p>, nor inserting a newline), I suspect that if I create a new paragraph using the DOMCursor, the contents of that paragraph will be a new noodle. Before I do that, what about Ctrl+Enter?
Gah. That didn't work. Haha. Ctrl+Enter commits a span to the network, remember? So I had to cancel the commit, and that fucked-up the span that I was editing because it was re-sourced to the network resource (even though the commit was cancelled) and I couldn't edit it anymore. So I'm editing this in a noodleBox now. Trying again with a new composition...
Alright, bottom line seems to be this: at the moment, pressing Enter, or Alt+Enter, when editing a noodle inside of a composition pane, does nothing. I must be blocking/catching it somewhere. For what reason, I cannot recall. Actually, it may just be that the browser discards newlines in contentEditable elements (which is what both composition panes and noodleBoxes use) if those elements do not have "white-space: pre-wrap" in their CSS. Maybe that?
Maybe orphan text in a composition floater should NOT be tied to the noodleStore. The whole point of Lamian/noodleStore was that we wanted plain-text fragments to be persistent in the browser -- but the contents of composition floaters ARE persistent. Noodles cannot be transcluded-from properly because their contents are not fixed ... you can't have the browser's built-in rich text editor mangling them ... it's a mess.
The editing experience is so good in the protoDPTron, I want that back in the Docuplextron ... I think it was more like that in the original demo. So we may have to go pre-Lamian in composition floaters and just use regular ol' orphan text in those contexts, then come up with a smart export-to-plain-text routine for when it's time to commit that text to the network.
And, of course, we'll need a graph editor, which will be several link editors embedded in a single document. The current link editor is more-or-less built on this model. Now that I'm thinking about it, it shouldn't be too much of a chore to extend and modify the current link editor code into these node, link, and graph editors that I have in mind. ...I think.
What would be interesting at some time down the road is a way of easily adding xanalink editors – that is, template-based link editors for specific types of graphs.
When a noodle is committed to a network resource, shouldn't it be gotten rid of locally? We don't want to lose track of them, though – so the following sequence should happen when we publish noodles:
- CONFIRM that the upload was successful and that the text fragment can be retrieved with byte-for-byte parity with the noodle as stored in the noodleStore
- Go through all workspaces in localStorage and REPLACE instances of the noodle with composition floaters containing the corresponding text transclusion
- PIN the network source globally – this feature still needs to be added, but the idea is that you can have network resources that are loaded and cached each time the Docuplextron is used – the rationale here is that when we commit text to the network and excise it from the noodleStore, we want that text to still be globally searchable in the Sources pane until explicitly un-pinned
- Finally, DELETE the noodle from the noodleStore
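The CONFIRM step above is the one nothing else can proceed without. The heart of it is a byte-for-byte comparison of the local noodle against the text fragment retrieved back from the network -- here's a sketch of that check (the function name is illustrative, not anything in the codebase):

```javascript
// Compare two strings byte-for-byte as UTF-8, which is what actually
// matters for span addressing. Note that canonically-equivalent but
// differently-composed Unicode (e.g. a precomposed "é" vs. "e" plus a
// combining accent) correctly compares as NOT equal here.
function bytesEqual(localText, retrievedText) {
  const enc = new TextEncoder();
  const a = enc.encode(localText);
  const b = enc.encode(retrievedText);
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false;
  }
  return true;
}
```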
VISIBLE LINK STUFF...
- ...is actually pretty simple, and ...shouldn't be too hard to adapt to the new link structure?
- Doesn't need any shadowDOM modifications (woohoo!)
- As noted in the source, it's huge and needs to be broken-up.
- There's some XPath stuff in there, still. Are we keeping that?
- ...needs to be extended so that it gets rectangles for fragments inside of shadowDOMs. So...
- get all of Array.from(document.body.getElementsByTagName('X-DOM'))
- getParentItem() will work on these
- Then, iterate over all of those X-DOMs and do the querySelectorAll() calls on their shadowRoot instead of document.body
- I would say restructure the whole thing so that it queries floaters instead of document.body, but... then it would be broken for when we're not in Nelson-document mode. For that matter, though – all kinds of stuff is broken for non-Nelson-document mode! Maybe we don't care about that anymore?
[ Turns out it wasn't actually that involved to change. I had only to write a few lines to include shadowRoots in queries to get it working. And it still works in regular Web page viewing mode.]
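The gist of those few lines is presumably something like this -- run the same selector query over document.body and over each X-DOM's shadowRoot, then concatenate. Sketched here against a generic "root" interface (anything with a querySelectorAll) so only the aggregation logic is shown:

```javascript
// Query a selector across the main document root plus any number of
// shadow roots, returning one flat array of matches in order.
function queryAllRoots(mainRoot, shadowRoots, selector) {
  const results = Array.from(mainRoot.querySelectorAll(selector));
  for (const root of shadowRoots) {
    results.push(...root.querySelectorAll(selector));
  }
  return results;
}
// In the Docuplextron this would be called with document.body and the
// shadowRoots of Array.from(document.body.getElementsByTagName('X-DOM')).
```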
Trying to do some Web page editing in the Docuplextron – nothing xanalogical, just conventional Web page editing – and ... oy. I really left this thing in a poor state, didn't I?
- I need a cloneAfter function/command? When I am editing sectioned content and I want to create a copy of the current section after the present one... the problem now is that if I do a DOM copy/paste, the copied fragment always pastes INSIDE the node that the cursor is placed in.
- The HTML surround (Alt+H) function sorta works. But only sorta.
- When fired with Ctrl pressed, it just does an 'insertHTML' document.execCommand() which is A) not xanalogical, and B) an obsolete function and could be removed from Mozilla or Chromium at any time. Which is a shame, because it's so handy.
- The unmodified version of it has a few issues. It uses Alphjs.surround(), so it's xanalogical, but it also does a postMessage() call which... doesn't do anything? It's broken either way, when editing orphan text. It wraps the right-side orphan text in an X-TEXT that refers to nothing, and ... other stuff.
- The command needs to first check and see if the selected text is orphan or not, then act accordingly.
I had figured out the solution to this problem some time last year, but I didn't make note of it anywhere – oops!
So, when we import document fragments into shadow DOMs in the workspace, all relative URLs are borked, right? Of course they are. I just re-wrote this awful little bit of code the other day that replaces all of the relative URLs in a document fragment with their fully-qualified equivalents. And that was, of course, completely unnecessary.
All we have to do is use a <base> element in the shadow DOM's <head>. The browser then resolves all of the relative URLs in the shadow DOM properly. So simple.
Wait. What? Does that actually work? I just realized that I was viewing the Web page after it had gone through my relative-URL-resolver function. Nevermind, maybe? AHAHAHAHA!!! Trying to think with two small kids banging on the drums in the next room! Great times!
Okay, nevermind. Was able to confirm last night that the <base> element is, in fact, ignored by the browser in shadow DOMs. *sigh* Oh well.
The good news, if there is any, is that my re-basing code seems to work pretty well, tho' it doesn't resolve URLs in stylesheets yet.
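What the re-basing code fundamentally has to do, per URL, is resolve the relative reference against the source document's base -- the URL constructor implements the standard resolution rules, so the per-URL step can be this small:

```javascript
// Resolve one relative URL against the base URL of the document the
// fragment was imported from.
function rebase(relativeUrl, baseUrl) {
  return new URL(relativeUrl, baseUrl).href;
}

console.log(rebase("../img/logo.png", "http://alph.example/docs/page.html"));
// → "http://alph.example/img/logo.png"
```

The remaining (harder) work is walking the fragment to find every attribute that holds a URL -- and, as noted, the URLs inside stylesheets.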
<<KIXEVUCV>> Font Consideration (KIYUNCXO)
¿Use a monospace font in noodles by default?
This should then be easily toggled in the bar-menu. My rationale for defaulting to variable-width fonts in noodles was that a person shouldn't have to feel like they're editing on a terminal or a typewriter when writing plain-text. I worry that this is something that people don't like about plain-text – that it's always got to LOOK like plain-text. I've written extensively elsewhere about how plain-text should be treated as a text stream rather than a teletype control language. HOWEVER, unless I've got some other kind of obvious visual indicator, it would be beneficial to use a monospaced font to show users when they are editing plain-text and when they are editing rich text.
This has come to light as I'm exploring/improving the utility of the HTML editing features in the Docuplextron.
<<KIYSJUMB>> Web page authoring notes (KIYU7LDZ)
Successfully re-made my SDF home page in the Docuplextron, from scratch, more or less. Some notes on the experience:
- New HTML compositions MUST start with an orphan text paragraph or maybe even just a bare text node. X-TEXT transcluding from noodles is a nightmare.
- We need an insertAfter() function in DOMCursor.js and some easy key bindings for it; two reasons:
- We should be able to do a xanalogical paste that puts a copied element AFTER the current element instead of trying to split whatever X-TEXT the cursor is currently inside-of. Yes, sometimes we ARE splitting an X-TEXT and inserting a transclusion; sometimes we are splitting and inserting DOM fragments; and sometimes (the currently unhandled case) we just want to insert/paste a DOM fragment after the selected element.
- We need to be able to "break out" of the current block-level element and start a new one. ContentEditable does this when you press Enter at the end of a paragraph or list-item, for example -- you don't just get a <br> inserted, you actually get warped into a newly-created <p> or <li> or what-have-you.
In general, DOMCursor needs to be smarter about when elements should be split or when new things should be inserted. Is the system caret at the terminal index of the current node? Then we probably don't want to split it, right? Etc. It'll be tricky to find the right balance between smart, automatic behaviours for Alt+V, and bespoke key bindings for specific operations (split-and-insert-editable, insert-editable-after, split-and-insert-from-DOMCursor-killring, insert-from-DOMCursor-killring-after, replace-selection, swap-selection-with-killring, and on and on...)
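The insertAfter() itself, at least, is a one-liner over the existing DOM API -- a sketch of what would go into DOMCursor.js (the smarts about *when* to call it are the hard part):

```javascript
// Insert newNode immediately after refNode. insertBefore() with a
// null reference node appends at the end of the parent, so this
// handles the "refNode is the last child" case too.
function insertAfter(newNode, refNode) {
  refNode.parentNode.insertBefore(newNode, refNode.nextSibling);
  return newNode;
}
```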
- We NEED a way to edit the <head> of a shadow DOM. This doesn't even have to be fancy. It can just be opening a modal with the <head> element's source in a <textarea>. THAT would be loads better than the current method (developer console/document inspector).
At some point, we'll need a community server to test, because I shouldn't count on most people wanting to run Apache/alph.py on their local machine.
My hosting fees are about $150/yr
Domain registration fees are about $35/yr (for .io -- cheaper for others).
So, ten years domain/hosting prepaid would be about $1850.00.
We would want at LEAST ten years secured on a domain.
How many users can a cheap shared-hosting setup support? 50? 500?
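Sanity-checking the figures above (the per-year amounts are my rough estimates from the notes, not quotes):

```javascript
// Ballpark ten-year cost of a prepaid domain + shared hosting setup.
const hostingPerYear = 150; // USD/yr, shared hosting
const domainPerYear = 35;   // USD/yr, .io registration
const years = 10;
const total = years * (hostingPerYear + domainPerYear);
console.log(total); // → 1850
```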
Honestly, if I was admin on a test server, I can't imagine wanting to support more than 50 users as an initial test group.
How it is:
[Alt+I] inserts an editable X-TEXT at the caret, splitting one if necessary. [Buggy! X-TEXT does not get updated in alph.sources -- damaged in shadow DOM conversion?]
[Alt+SHIFT+I] inserts an editable X-TEXT, splitting one if necessary, but also SPLITS THE CONTAINING ELEMENT ...?
HOW SHOULD THIS WORK?
[Alt+I] checks to see if the document is contentEditable and makes it so if necessary, then inserts an orphan text node at the caret. While the document is contentEditable, the Docuplextron needs to monitor key presses to make sure that X-TEXTs with network sources are NOT actually editable – or it needs to just lock down each X-TEXT in the document by explicitly setting contentEditable="false" on them – which isn't as egregious as it might seem – while noodle sources can be editable but MUST ACTUALLY UPDATE in Lamian.
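That lock-down policy could be as simple as the following sketch. The source-classification test is an assumption (I'm guessing network sources are http(s) URLs and noodle sources use some other scheme), as are the property names:

```javascript
// Freeze network-sourced X-TEXTs when the document goes editable;
// leave noodle-sourced ones editable (those must write their edits
// back to Lamian). The src-scheme heuristic here is assumed.
function isNetworkSource(src) {
  return /^https?:/.test(src);
}
function lockdownEditable(xtexts) {
  for (const x of xtexts) {
    x.contentEditable = isNetworkSource(x.src) ? "false" : "true";
  }
  return xtexts;
}
```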
Reading through DREAM MACHINES tonight for inspiration. Never ceases to excite me, this thing.