Alph Developer Notes, 2021

This document is live, and entries will be appended over the course of the year. 

The permascroll is here: http://alph.io/xpub/devnotes/2021

—L.


What defines success for this project?

2021-01-17

On one hand, this is a personal endeavour: I'm creating a system that I can use for my own day-to-day writing and thinking tasks. It may appear highly idiosyncratic to outsiders, but it must be understood that I'm writing tools for myself, to fit my needs and my own habits and intuitions/instincts about how things should work.

On the other hand, this is a demonstration system: I want to show not only that xanalogical hypertext is possible and practicable in the current Web ecosystem, but also that such a system is genuinely useful, and to provide a working proof-of-concept that others can build from.

I do not want to build a software empire. I do not want to replace the World Wide Web. If I create software that meets my own design goals, that I can use every day, Alph will be a success. If the software effectively demonstrates and generates interest in the xanalogical system model, then it will be a success. If someday I have a small community of like-minded operators of this software, I would consider it a tremendous success. And if Alph spawns or grows into something larger still, then I will consider it a success beyond any reasonable expectations.

2021-01-23T21:23:26.349Z
<<KK4FM8FM>> DOM Editing (KKA7Z71B)

HTML editing has got to be good. REALLY good. 

DOM Editing operations I want:

Insert DOM fragment at point 
  * Must split X-TEXT or orphan Text nodes correctly!

Insert DOM fragment AFTER DOMCursor target

Insert new element
  * 'at-point', or
  * 'after-current-element'

Replace DOM fragment
  * 'swap-with-killring' version
  * 'paste-over' version

Change element tag 
  * I've written DOMCursor.reTag(), but it needs to be bound to a menu or something

Toggle contenteditable
  * Yeah, we can do it in the attribute editor, but make it easier

I recently changed the behaviour of the DOMCursor so that it shows the selection halo only when the Alt key is pressed. I like this much better than toggling the halo as always-on/always-off. 

I'm now considering that a new command palette is necessary for selection operations, and that this palette should appear adjacent to the halo rather than at the top of the screen. 

As things are now, the HTML command palette only actually does one thing: it wraps the current DOMCursor selection in a new element. That's certainly useful, but hardly adequate!
* @2021-01-28T22:14:36.452Z
<<KKHETYEU>>  (KKHEZBBV)

One of the main reasons I'm doing Alph dev in Firefox is because Firefox has caret browsing. That is, when caret browsing is enabled, a caret is visible in the Web page at all times and can be moved around with the arrow keys. In Chrome(ium), caret browsing is no longer available and the only time a caret is visible/active in the document is when the current selection is inside of a contentEditable element. 

To get around this, I once had an SVG cursor/caret on the overlay layer, but it didn't work very well, so I got rid of it. But it might be time to bring it back, because the caret scales with the element that it is contained in, and when we're zoomed out from the currently active document/floater, the caret quickly disappears entirely. 

* @2021-02-02T11:12:20.320Z
<<KKNRFMYH>>  (KKNWJPWK)

Docuplextron features to-do:

- Add a section to the SETTINGS pane for custom timestamp formatting, and a toggle to disable timestamping/prepended newlines when committing text to a permascroll

- Add a button to the Alph pop-up bar to copy the EDL/fragment to the clipboard

- Keyboard navigation to move between floaters. 
  - Bind to Ctrl+Alt+arrows? I don't want to have to mode switch in/out of DOMCursor
  - We need this because when the ideas are flying fast, we need to be able to zip from noodle to noodle, sketching with text, hands on the keys - fast!
  - Just get it in there - work-out/tune the algorithm later

- Add a section to the SETTINGS pane to set the actual colors for color groups. Colors are hard-coded right now, which is ridiculous. Set up some default classes in the main CSS file for "x-floater.noodleBox.color_01", "[...].color_02", etc.
* @2021-02-06T11:18:53.657Z
<<KKTLQ1R4>> Deprecate Proportional Selectors (KKTMJ1S8)

The proportional Alph Simple Selectors should probably be done away with. They were largely conceived of as a way to deal with the possibility that a publisher of an image file might replace the network resource with a higher- or lower-resolution version, and I wanted a way to specify a portion of an image that was independent of its resolution. 

However, in consideration of the fact that audiovisual resources may (should?) be afforded the same flexibility in amendment as text permascrolls, I now feel that the convenience/utility of proportional addressing is far outweighed by its liability to produce incorrect selections.

Instead, we should standardize on CSS pixels as the unit for image dimensions, and:

1) have the alph.py server detect image resolution from the image file's metadata and supply that information in its LD representation, and

2) have the Docuplextron assume images are 96 PPI unless there's some metadata that tells us otherwise. 
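For (1), a minimal sketch of reading dimensions and PPI straight from a PNG's chunk headers, assuming no imaging library on the server. The chunk names (IHDR, pHYs) come from the PNG spec; the function itself is illustrative, not alph.py's actual code:

```python
import struct

def png_dimensions_and_ppi(data, default_ppi=96):
    """Read pixel dimensions (and PPI, when a pHYs chunk is present)
    directly from PNG chunk headers. Illustrative only; the real
    server could just as well use an imaging library."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    width = height = None
    ppi = default_ppi
    pos = 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"IHDR":
            width, height = struct.unpack(">II", body[:8])
        elif ctype == b"pHYs":
            ppx, _ppy, unit = struct.unpack(">IIB", body[:9])
            if unit == 1:                  # pixels per metre
                ppi = round(ppx * 0.0254)  # -> pixels per inch
        pos += 8 + length + 4              # header + data + CRC
    return width, height, ppi
```

Images with no pHYs chunk fall through to the 96 PPI default, which is exactly the Docuplextron-side assumption in (2).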
* @2021-02-06T17:50:17.191Z
<<KJVCDEZK>>  (KKU0I85F)

TODO L1D

- Make a form/template/thingie to open text documents AS permascrolls. So, you've got a top pane which shows the text file on the network, and a bottom pane which is a noodle editor. When the user Alt+Enters on the noodle editor, the top pane is updated and the bottom pane is blanked. 

- Make a metadata editor for Alphic resources. The ?editmeta Web form sucks and should be updated as well, but we can probably do something easily enough in the Docuplextron with the tooling we're developing for link editing.

- I should be able to select HTML in a noodlebox, or the scratchpad, or anywhere, and have it parsed and inserted into a floater.
* @2021-02-07T22:25:26.779Z
<<KKVOKK2E>>  (KKVPSXQO)

Tuning-up a few things in preparation for a new demo – hopefully some time in the next week. 

Oh, let's shunt-in a note from January 13th that I forgot to post:
-----
Links! They're working! Again! For the third time! Some issues:

The link editor is really clunky and overly verbose. Ideally, I'd be able to select some text, hit Alt+L, and then be prompted to select the target of the link -- this I could either do by making a selection, clicking on a floater, image, etc., or simply typing a URL into a box. Then, I'd be prompted to select/input the relation between the nodes, and the graph to save the link to, with the option then given to open the full link editor. 

EDL nodes are not displayed at the moment. Gotta work that out in paintLinkTerminals(). Moreover, I need to decide what the best default behaviour for clicking an EDL link is – just show the contents of the EDL in a floater? Probably, if it's not visible in the workspace. 

Xpointers are not handled at the moment. Luckily, I wrote a lot of the back-end for that a long time ago, so it shouldn't be too much work to get 'em working with links.

The workspace save/load mechanics need to be monkeyed-with a bit. Right now, the contents of the LinkStore are persistent across all workspaces. We probably need to have graphs that load/unload with workspaces.

Link nexuses for full resources should be anchored to their tabs. We might also want to consider anchoring nexuses to the tabs of fragments that are in the workspace but off-screen.
-----

Anyway, back onto the subject of the demo: 

Here's an example I thought to give of the power of deep-linking and a hypertext system based on transclusion:

You go to the book store one day. You buy a copy of Moby Dick. As you read through it, you're jotting the occasional marginal note. Some days later, you're visiting a friend. You notice they've got a copy of Moby Dick. Nicer edition -- illustrated, maybe -- and you start thumbing through it. In the margins of your friend's edition, you see your own marginal notes. When you show the book to your friend, they do not see the notes. You give your friend a high-five with intent, and suddenly they see the notes too. 

This illustrates the whole xanalogical system perfectly. Because the book is transcluding from a master source-text of Moby Dick, your notes are linked to the master source, not your personal edition. Your notes/links are independent of any one copy or edition of the text, so when you view other copies/editions, your notes are right there, stored in your mental workspace and overlaid onto any instances of the source text that you encounter. If you want to share your notes with others, you need only to initiate some trivial transaction - an email or instant message - to send them your links which they may then integrate into their own workspace.

* @2021-02-09T21:17:25.738Z
<<KKYGXO97>>  (KKYI7NL1)

Still working on planning/scripting a demo, using the process to steer short-term development. Two features I want to have implemented within the next 24 hours:

- "VERSION" in floater context menu for floaters with X-DOM contents. This will prompt the user for the id/URI of the new or old version; three buttons [ARCHIVE EXISTING] [CREATE NEW] [CANCEL]. 
  - Both affirmative buttons will create a new floater and copy the DOM into it with the new URL set as its content-source; then it will attempt to POST the new floater to the given URL (the usual confirm dialog will fire). 
  - ARCHIVE EXISTING will create a <link> in the original document's <head> with a "predecessor-version" relation pointing to the new URL. It will likewise create a corresponding "successor-version" link in the newly-created document pointing back at the original document's URL. The intent here is that we are going to modify and save the document at the original URL, while the newly-created document will be an archive copy that we won't mess with.
  - CREATE NEW will create a <link> in the newly-created document's <head> with a "predecessor-version" relation pointing to the original document's URL; it will likewise create a corresponding "successor-version" link in the original document's <head> pointing to the new URL. The intent here is that the document we're posting to the new URL will be the new version that we modify, while the original document will be our archive version that we leave alone.

- "HEAD LINKS" in floater context menus for floaters with X-DOM contents. This will spawn a sub-menu with items corresponding to the <link> elements in the document's <head>.

Let's get it done, me!

* @2021-02-10T10:11:27.178Z
<<KKZ9RWJH>>  (KKZ9VOR0)

An idea about the middle-click/X buffer insertion issue on Linux:

I wonder if it would work to just toggle the contentEditable property on noodleBoxen when a floater is taken in hand, then re-enable it when released. When does a contentEditable element gain/lose focus?
* @2021-02-10T17:09:55.464Z
<<KKZLPPJ9>>  (KKZOUNC3)

I was about to start coding the version-branching thing that I wanted to add yesterday, but ...okay, first of all, it was a poor design. It didn't take into account the issue of having a "current version" linked to each document, and tracking/linking revisions with <head> links in HTML is a much larger pain in the butt than it should be: changes to any document's name/URL in the version tree require changes to AT LEAST two affected HTML files, and then there's the ridiculous idea that a relationship should have to be recorded twice with a different relation depending on the document it's stored in:

VERSION-1
---
link_1: VERSION-1 --> successorVersion --> VERSION-2
link_2: VERSION-1 --> currentVersion --> VERSION-X

VERSION-2
---
link_3: VERSION-2 --> predecessorVersion --> VERSION-1
link_4: VERSION-2 --> currentVersion --> VERSION-X

link_1 and link_3 ARE THE SAME RELATIONSHIP, with different names.

Instead, shouldn't I just be concentrating on using LD graphs for this? Every version in the revision tree need only have one <link> in their <head> to an LD graph that has the whole revision tree in it. And if any resources change their name/location, only the graph document needs to be updated. Much simpler.

I still want to have <head> links accessible from the floater menu, though. So I'll do that. But for revision control, LD graphs are the plan now.
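To sketch it out (nothing here is settled vocabulary: "version-history" is the RFC 5829 link relation, the graph URL and relation names are placeholders): each version's <head> carries a single <link rel="version-history" href="http://some.site/doc.versions.jsonld">, and that one graph document holds the whole tree:

```json
{
  "@context": "http://alph.io/terms.jsonld",
  "@id": "http://some.site/doc.versions.jsonld",
  "currentVersion": { "@id": "VERSION-X" },
  "@graph": [
    { "@id": "VERSION-1", "successorVersion": { "@id": "VERSION-2" } },
    { "@id": "VERSION-2", "successorVersion": { "@id": "VERSION-X" } }
  ]
}
```

Only the successor direction needs to be stated, since predecessors can be inferred, and renaming any version means editing this one file with no HTML touched.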

...now I just have to get graph export working correctly.
-----
The server software has really been neglected. One of the most useful planned features – the automagic link registry – is currently in a woeful state. I've never properly spec'd its intended function, I've waffled with output formats and how/what kinds of links it will scan and store, etc..

The idea is this:

When an HTTP GET request comes in for resource X, the server looks for a Referer: header; if it finds one, it sends out a GET request for the referring document and scans it for links or transclusions to resource X. Those that it finds are stored in resource X's metadata (along with a lastChecked property, so that it doesn't just keep GETting the referer document over and over again).
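The scan step itself can be sketched with nothing but the stdlib HTML parser. Caveat: the x-text/src attribute convention is taken from these notes; treating any other matching href/src as a plain link is my assumption, not settled behaviour:

```python
from html.parser import HTMLParser

class RefScanner(HTMLParser):
    """Collect elements in a referring document that point at
    `target`, whether whole-resource or fragment-addressed."""
    def __init__(self, target):
        super().__init__()
        self.target = target
        self.hits = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        url = a.get("href") or a.get("src") or ""
        # Match the bare resource URL or any fragment selector on it
        if url == self.target or url.startswith(self.target + "#"):
            kind = "transclusion" if tag == "x-text" else "link"
            self.hits.append({"type": kind, "tag": tag, "url": url})

def scan_referer(html_text, target):
    s = RefScanner(target)
    s.feed(html_text)
    return s.hits
```

The real linker.py would run something like this over the fetched Referer document and merge the hits into resource X's metadata alongside lastChecked.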

Why does it do this? Because we want to be able to ask a resource where it's being used, and even more specifically, what fragments of the resource are being used. If I've got a text document at:

http://some.site/foo.txt

I want to be able to send it a fragment selector query, like this:

http://some.site/foo.txt?link=123-456

And have it return a list of documents that transclude or link-to that portion of the resource. 

Or, just sending a ?link query by itself will return a list of ALL the documents that transclude/link from/to the resource. 
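Answering a ?link=123-456 query then reduces to an interval-overlap filter over the stored selectors. A sketch against the current TextPositionSelector-style records, assuming half-open spans:

```python
def links_overlapping(records, start, end):
    """Return the ids of stored link/transclusion records whose
    selectors overlap the queried [start, end) span -- the
    ?link=start-end behaviour described above."""
    out = []
    for rec in records:
        for sel in rec["target"]["selector"]:
            if sel["start"] < end and sel["end"] > start:
                out.append(rec["id"])
                break  # one overlapping selector is enough
    return out
```

A bare ?link query is just the degenerate case of this with the span covering the whole resource.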

Does it currently work? Partially. The "automagic" part - scanning 'Referer' documents - seems to have been broken or disabled, though it did work in the early days (I may have disabled it once I started running alph.py on actual Web hosts because it was logging a lot of errors, but I've only just glanced over some of the code this morning). In order to get a page scanned, you have to GET the ?link query with a URL to scan. And it does work with HTML documents, but it stores the information in a weird-ass format, and returns those links in the same weird-ass format. It's JSON ...it's "inspired by" (tainted by?) the Web Annotation framework, but not conformant with it. 

Transclusions with X-TEXT elements are stored like:

{
  "id":"referer_URL",
  "type":"transclusion",
  "target": {
    "selector" [
      {
        "type":"TextPositionSelector",
        "source":"resource_URL",
        "start":origin_index,
        "end":extent_index
      } //, Additional fragments here....
    ]
  }
}

Hrm.

I made it like that in 2017/18, after I'd decided I was going to try to work within the Web Annotation data model. I didn't quite get there, though, did I?  Originally, I'd used a simpler JSON model/format (you can see it in the September 2016 demo video).

Now that I've got linking working in the Docuplextron, and a data model and LD vocabulary that I'm more-or-less happy with, it's time to retool linker.py so that it stores and reports links in a consistent format.

So, let's report links/transclusions in a container like this:

{
  "@context":"http://alph.io/terms.jsonld",
  "@id":resource_URL,
  "transcludedBy":[
    {
      "@id":transcluding_resource_URL,
      "transcludes":{
        "@type":"EDL",
        "@list":[
          {
            "@id":"resource_URL#selector",
            "@type":"AlphSpan",
            "@src":"resource_URL",
            "@origin":origin_index,
            "@extent":extent_index
          }
          // ...other fragments...
        ]
      }
    }
  ]
}

The link scanner also currently records LINK elements in a document's HEAD, as well as A elements in the document body. The anchor elements get an XPath-based 'id', which was sorta cool at the time, but I've since decided to use Xpointers in Alph instead of XPaths, so that needs to be updated as well. 

The trick here though, is ... how to store them. Or should we store them at all? And the Xpointer stuff needs to be added to the server source. Shouldn't be too hard to port over from Alph.js.

Furthermore, the link scanner needs to be updated to scan JSON-LD documents – and, potentially, other RDF serializations as well. And it should look at an HTML document's HEAD for links to JSON-LD linking documents and follow those up as well.
* @2021-02-13T12:02:05.059Z
<<KL3O2VDC>>  (KL3O5YG0)

Regarding the last entry, the "transcludedBy" relation should be "transcludingDocument", because the former simply sounds like the reverse of the "transcludes" relation. So, I'll update the LD vocabulary to that effect.
* @2021-02-16T12:19:15.646Z
<<KL7XDHAY>>  (KL7Z3XYE)

Working on documentation these past couple days, but some thoughts I've been having:

How do we want to support nodes in graphs that have literal values? Taking annotation as an example application:

Ideally, if a person were annotating something, they'd store their annotations in a text resource/permascroll somewhere and have links between the annotations and the subject material; then, when they shared their annotation graph with someone else, the actual content of the annotations would be out in the primedia somewhere so that the annotations themselves could be annotated. 

It would be very easy, though, to have annotations stored in the graph itself, a la Web Annotations. In a previous iteration of linking, I had sort of supported links with data: URIs as link nodes. Another way of doing that is with blank nodes having "@value" attributes, or simply having string values. These are all workable approaches:

{
  "@id":"some_Anode",
  "note":[
    "A literal value as an annotation",
    {
      "@value":"An equivalent blank node.",
      "@language":"en" // Although this one can carry other properties...
    },
    "data:text/plain;charset=utf-8,A data URI as an annotation."
  ]
}

But these approaches, while simple and easy to implement, are not particularly xanalogical. So... do we want any of them?

-----
Have an option to bundle noodles with exported workspaces?

-----
Should we support something like a "linkedContext" relation in graphs, so that users can see linked fragments in situ?

User X is looking at two HTML documents, they're xanalogical, everything is hunky dory. They want to link between two fragments, so they highlight the A node, fire-off the linker, select their B node, store it. They then close the B document. Later, they click on the link terminal of the A node to summon the B node and:
  - The current behaviour is to just bring-up the linked fragment in a blank floater by itself
  - From this, the user can summon the authorialContext, publishingContext, etc., from the fragment's context menu
  - But what if the fragment was originally seen in another context entirely? That information should probably be stored with the link, right?

This also gets me thinking that the "authorialContext" and "publishingContext" relations are a poor design. Innumerable viewing contexts are possible, all with different labels. Instead of:

{
  "@id":"some_Anode",
  "authorialContext":"some_resource",
  "otherContext":"some_other_resource" // <-- "otherContext", as well as any other, would have to be defined in the JSON-LD vocabulary to be valid, right?
}

Should we do something like this?

{ "@id":"some_Anode",
  "context":[
    {
      "@id":"some_resource",
      "@label":"authorial"
    },
    {
      "@id":"some_other_resource",
      "@label":"other"
    }
  ]
}

Probably. Better design.


-----
Last thought for the night:

Because relations/edges are themselves nodes, and can be blank nodes in the graph with any number of properties, we can build links that trigger complex behaviours in the client. This is something Ted has suggested for decades, but I hadn't had a clear sense of how to pull it off until recently. I'll try to detail this at another time ...it's time right now for me to hit the hay.
* @2021-02-22T22:09:52.605Z
<<KL4Y12IL>> x-img (KLH4UQSS)

Time for a new element? Yeah, I reckon so. 

<x-img src="some_URL" originx="..." originy="..." extentx="..." extenty="..."></x-img>

X-IMG is simply a generic block element that we use for clipping image sources. Most of the magic happens in the element's CSS.

If a request is sent for:

  http://some.site/image.png#123,456-623,956

Then we create this element:
<x-img	src="http://some.site/image.png" 
	origin="123,456" 
	extent="623,956" 
	style="background-image: url('http://some.site/image.png');
		background-position: -123px -456px;
		width: 500px;
		height: 500px;"></x-img>

And if we've acquired the image's native PPI and it is not 96 (the CSS pixel PPI), we use "background-size" to scale the image appropriately.
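A worked sketch of that arithmetic (function and parameter names are mine, not any Docuplextron API). Note the negative background-position, which pulls the clip origin to the element's top-left, and background-size, which does the PPI scaling:

```python
def x_img_style(src, origin, extent, native_size, ppi=96):
    """Compose the inline style for an x-img clip. origin/extent and
    native_size are in native image pixels; everything is scaled by
    96/ppi so the element comes out sized in CSS pixels."""
    k = 96.0 / ppi
    (ox, oy), (ex, ey) = origin, extent
    nw, nh = native_size
    return ("background-image: url('%s'); "
            "background-size: %gpx %gpx; "
            "background-position: %gpx %gpx; "
            "width: %gpx; height: %gpx;"
            % (src, nw * k, nh * k, -ox * k, -oy * k,
               (ex - ox) * k, (ey - oy) * k))
```

At 96 PPI this reproduces the example above (a 500x500 CSS-pixel clip); at 192 PPI every coordinate simply halves.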
-----	
Also, it's time to actually start registering x-text as a custom element, because the lifecycle callbacks (attributeChangedCallback in particular) are going to be superduper useful in keeping element contents accurate.
* @2021-02-22T22:52:27.961Z
<<KLH4VTY4>>  (KLH6DH5A)

Fonts are an issue when styling shadow DOMs.

I've noticed issues, but I'd assumed they were relative URL problems – looks more and more now like Firefox (haven't tried Chromium yet) simply does not load Webfonts in shadow DOMs when the fonts/stylesheets are located on a different domain. I need to do some research on this, and some testing. Potential major bummer.
* @2021-03-06T01:06:59.478Z
<<KLWOS5UT>>  (KLX10QRV)

Okay, so... fonts.

It looks right now like @font-face rules in stylesheets that are in documents/fragments loaded into shadow DOMs from cross-origin sources are ignored. So... ugh. Is this only a Firefox thing? Can't tell for sure at the moment because:

About ten releases ago, Chrome decided that content scripts in Web Extensions can't do cross-origin requests. Only background scripts can. So the Docuplextron is, at the moment, pretty useless in Chrome, and I can't test the scope of this font issue until I start loading cross-origin resources via a background script and using the sendMessage() API to get those resources from the background script into the content script. LOTS OF FUN.

I could just ignore Chrome and try to carry-on with getting everything working in Mozilla via some baroque workaround, but this is probably a bad move. When Google sets a policy on something like this, Mozilla usually follows suit, so it may only be a matter of time before content scripts are crippled in the same way in Firefox.

So, I'll try to see the silver lining in this. I've been meaning to move some of the WebExtension's functionality into background scripts for a while. This will resolve a few issues: right now, you can open the same workspace in two different browser tabs and they will clobber each other in localStorage, and the same goes for noodles open in multiple browser windows. A background script would be useful to keep track of what's open where and make sure the user isn't destroying their own work across tabs/windows. Another benefit would be having Alph.sources be consistent across workspaces, and we could use it to cache resources and their metadata more efficiently.