The current system performs poorly when you've got 100,000+ tags.
I discovered that when the server returns 304, the browser hands the
ajax a 200 with the full response, and it's not clear to me whether js
can even know it got a 304. So the tag set is being fully re-parsed
from the response on every page load. I was thinking that I should
store the tag set in IndexedDB to avoid the parsing step, but... since
the JSON.parse is done by my common.get before the data hits this
function, that would be pointless. Not to mention I'd still have to
rebuild the datalist on every page, since of course that state isn't
shared between tabs. Not worth the DB stuff.
We'll see what happens next.
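
For reference, a minimal sketch of that behavior (the endpoint name is
hypothetical):

```javascript
// Even when the server answers the conditional request with 304 Not
// Modified, the browser transparently substitutes the cached body, so
// script-side code sees an ordinary 200 and pays the parse cost again.
fetch("/all_tags.json")
.then(response =>
{
    console.log(response.status); // 200 whether the server sent 200 or 304.
    return response.json();      // Full body either way.
})
.then(tags => console.log(tags.length));
```
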
This experiment of bringing Photos and Albums closer to parity in
search is going well so far. I have found some situations where it's
nice to get only albums back in the search results.

There was always some sense that two blank lines had some kind of
meaning or structure different from a single blank line, but in
reality it was mostly arbitrary and I can't stand to look at it
any more.

Any properties that differ between wide and narrow mode should be
defined in the appropriate media query. I got tired of having wide
mode be the default and then making narrow mode unset/initial all the
properties that aren't relevant to it.
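
Something like this, with hypothetical selectors and breakpoint:

```css
/* Each mode's properties live in its own media query, so neither mode
   is a "default" that the other one has to undo. */
@media (min-width: 800px)
{
    #content_body
    {
        display: grid;
        grid-template-columns: 300px 1fr;
    }
}
@media (max-width: 799px)
{
    #content_body
    {
        display: flex;
        flex-direction: column;
    }
}
```
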
It turns out that :last-of-type only considers a single tag type; it
doesn't select the last element of a class if that element has a
different tag than the other classed elements.
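
For example, given hypothetical markup:

```css
/* With sibling elements <p class="note">, <p class="note">,
   <aside class="note">, this matches BOTH the second <p> (the last <p>
   among its siblings) and the <aside> (the last <aside>), because
   "of-type" compares tag names, not class membership. */
.note:last-of-type
{
    margin-bottom: 0;
}
```
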
Foolishly, I was checking the length of the output easybake format,
which included lines for synonyms and multi-parent tags that shouldn't
be part of the tag count.

I skipped them during the commit where I added return to all onclicks
because I figured I wouldn't be wrapping these kinds of attributes.
But I feel like it's better to be consistent, and you never know when
it might happen.
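
The convention in question, with a hypothetical handler name:

```html
<!-- Wrapping the handler in return means it can cancel the event's
     default action by returning false, and applying it uniformly means
     no attribute is an exception. -->
<button onclick="return delete_photo(event);">Delete</button>
```
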
Now new tags show up in the datalist right away instead of requiring
a page refresh to see them. They just won't be sorted.
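
A sketch of the idea (hypothetical element id and function name):

```javascript
// Append the freshly created tag to the datalist so it is suggested
// immediately; it lands at the end instead of in sorted position.
function add_tag_to_datalist(tag_name)
{
    const datalist = document.getElementById("tag_autocomplete");
    const option = document.createElement("option");
    option.value = tag_name;
    datalist.appendChild(option);
}
```
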
Slight bummer: the datalist dropdown pretty much obscures the whole
thing anyway.