Notes – David Bushell
David is on board. Who else?
I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.
A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.
The most obvious solution is to start up a brand new resource for front-end developers. But there are two problems with that.
I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.
Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.
All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.
I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.
I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.
Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.
Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.
I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?
And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.
There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.
A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”
What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?
I’m just throwing this out there. I’d love it if someone were to run with it.
There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.
Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).
The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned link rel="prerender".
Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a script element with a type value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.
That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.
I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like srcset than source: you give the browser some options, but ultimately you can’t force it to do anything.
I’ve implemented this over on The Session. There’s a JSON file called speculationrules.json with the simplest of suggestions:
{
  "prerender": [{
    "where": {
      "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}
The eagerness value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).
I still need to point to that JSON file from my HTML. Usually this would be done with something like a link element, but for this particular API, I can send a response header instead:
Speculation-Rules: "/speculationrules.json"
I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.
Here’s the PHP I added to send that header:
header('Speculation-Rules: "/speculationrules.json"');
There’s one extra thing I had to do. The JSON file needs to be served with a mime-type of “application/speculationrules+json”. Here’s how I set that up in the .conf file for The Session on Apache:
<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-Type application/speculationrules+json
  </FilesMatch>
</IfModule>
A bit of a faff, that.
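If fiddling with Apache config isn’t an option on your server, one workaround—purely a sketch of mine, not what The Session actually does—would be a tiny PHP endpoint that serves the rules with the required mime-type itself:

<?php
// speculationrules.php — a hypothetical endpoint that serves the rules
// with the required mime-type, avoiding any Apache configuration.
header('Content-Type: application/speculationrules+json');
echo json_encode([
  'prerender' => [[
    'where' => ['href_matches' => '/*'],
    'eagerness' => 'moderate',
  ]],
], JSON_UNESCAPED_SLASHES);

The Speculation-Rules response header would then point at "/speculationrules.php" instead.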
You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.
Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to gauge how much of a practical difference it makes in this case, but it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.
Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!
Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re idempotent. For everything else, we’ve got forms.
Mark’s write-up of the excellent Indie Web Camp Brighton that he co-organised with Paul.
The past weekend’s Indie Web Camp Brighton was wonderful! Many thanks to Mark and Paul for all their work putting it together.
There was a great turn-out. It felt like the perfect time for an Indie Web Camp. There’s a real appetite for getting away from ever more extractive silos and staking claim to our own corners of the web. Most of the attendees were at their first ever Indie Web Camp.
Paul asked me to oversee the schedule planning on day one, which I was happy to do. We made sure that first-timers got first dibs on proposing sessions. In the end, every single session was proposed by new attendees.
Day two was all about putting ideas into practice: coding, designing, and writing on our own websites. I’m always blown away by how much gets done in just one short day. Best of all is when there’s someone who starts the weekend without their own website but finishes with a live site. That happened again this time.
I spent the second day tinkering with something I started at Indie Web Camp Nuremberg in October. Back then, I got related posts working here on my journal; a list of suggested follow-up posts to read based on the tags of the current post.
I wanted to do the same for my links; show links related to the one I’m currently linking to. It didn’t take too long to get that up and running.
But then I thought about it some more and realised it would be good to also show blog posts related to the link. So I did that. Then I realised it would be really good to show related links under blog posts too.
So now, if everything’s working correctly, then at the end of this post you will not only see related blog posts I’ve previously written, but also links related to the content of this post.
It was a very inspiring weekend. There’s something about being in a room with other people working on their websites that makes me super productive.
While we were hacking away on day two, somebody mentioned that they still find it hard to explain the indie web to people.
“It’s having your own website”, I said.
But surely there’s more to it than that, they wondered.
Nope. If someone has their own website, then they’re part of the indie web. It doesn’t matter if that website is made with a complicated home-rolled tech stack or if it’s a Squarespace site.
What you do with your own website is entirely up to you. The technologies are just plumbing, whether it’s webmentions, RSS, or anything else. None of it is a requirement. Heck, even HTML is optional. If you want to put plain text files on your website, go for it. It’s your website.
Subvert the status quo. Own a website. Make and share links.
In the same vein as that last link, Chris says what we’re all thinking:
Most of what we build is links from one page to another, and form submissions that send data from the browser to the server.
A fascinating look at the connections between hypertext and film editing. I’m a sucker for any article that cites both Ted Nelson and Walter Murch.
One of the first ever personal websites—long before the word “blog” was a mischievous gleam in Peter’s eye—was Justin Hall’s links.net. Linking was right there in the domain name.
I really enjoy sharing links on my website. It feels good to point to something and say, “Hey, check this out!”
Other people are doing it too.
Then there are some relatively new additions to the linking gang:
There are more out there for you to discover and add to your feed reader of choice. Good link hunting!
Next time you’re frustrated by a website that doesn’t provide an RSS feed, try using this tool:
Transform any old website with a list of links into an RSS Feed
I was impressed with how Safari now allows you to add websites to the dock:
It feels great to have websites that act just like other apps. I like that some of the icons in my dock are native, some are web apps, and I mostly don’t notice the difference.
For all intents and purposes, this is a desktop application created without a single line of Swift or Objective-C, or any heavy Electron wrappers.
Oh, and the application can work offline! Service workers and browser storage are more than stable enough to handle a variety of offline loading patterns. These are truly exciting times to be building for the web!
There was one aspect that I was particularly pleased with. External links:
Links within a Safari-installed web app respect your default browser choice.
Excellent! Except it’s no longer true. At least not in some cases. The behaviour is inconsistent, but I’m running the latest version of Safari on the latest version of Sonoma, and now external links in a Safari-installed web app are broken. They just stay in the same application.
I thought maybe it was related to whether the website’s manifest file has the display value set to “standalone” rather than, say, “minimal-ui”. Maybe the “standalone” instruction is being taken literally? But even when I change the value I’m still getting the broken behaviour.
This may sound like a small thing, but it completely changes the feel of using the web app. Instead of feeling like “I’m using an app that just happens to be on the web”, it now feels like “I’m using a web browser but with fewer features.”
I’ve been loving having Mastodon as a standalone app in my dock. It used to be that if I clicked on a link in a Mastodon post, it would open in my browser of choice (Firefox) where I could then bookmark it, or do any other tasks that my browser offers me. Now if I click on a link in Mastodon, I’m stuck in the same “app”. It feels horribly stifling.
I can right-click on a link and get options that still keep me in the same app, like “Open link” or “Open Link in New Window.” To actually open the link in my web browser, I have to select “Copy Link”, then go to my web browser, open a new tab, and paste the link in there.
This is broken. I hope it isn’t intentional. Maybe I’m just at the receiving end of some weird glitch. If this stays this way, I’ll probably just remove the Safari-installed web apps from my dock. They feel pointless if they’re just roach motels.
I’d love to file a bug for this, but this isn’t a WebKit bug, it’s a Safari bug (and the WebKit bug tracker is at pains to point out that WebKit and Safari are not the same thing). But have you ever tried to file a bug with Apple? Good luck!
Anyway, I sincerely hope that this change will be walked back. Otherwise websites in the dock are dead in the water.
Remy has turned his linkrot-battling technique into a service that you can use. He has more details on his blog.
John Willshire has been pondering web marginalia, AKA stuff you put in your sidebar.
He has a particular fondness for the good ol’ blogroll. I’ve still got my analogue equivalent on my homepage—the bedroll. It’s a list of links to people who’ve stayed over. Maybe I should also have a regular blogroll, but I suspect it would just be a reproduction of feeds I’m subscribed to.
Then there’s marginalia at the level of a blog post, rather than a whole blog. Kevin Marks points out that this is something that Vannevar Bush described his theoretical memex doing—a device I was just talking about. Kevin created a proof of concept showing outbound and inbound links.
Outbound links are annotated versions of the A elements in a blog post. Inbound links are webmentions (which should now include this post of mine).
Kevin has those links in the margins on either side of the blog post. I’ve also got links that go with my blog posts, but they’re displayed linearly.
Do they still count as marginalia when they’re presented vertically rather than alongside? For mobile devices, I’m not sure there’s any alternative.
After two days at border:none in Nuremberg, it was time for two days at Indie Web Camp, also in Nuremberg.
I hadn’t been to an Indie Web Camp since before The Situation. It felt very good to be back. I had almost forgotten how inspiring and productive they can be.
This one had a good turnout of around twenty people. We had ourselves an excellent first day of thought-provoking sessions. Then on day two it was time to put some of those ideas into action.
A little trick I like to do on the practical day is to have two tasks to attempt: one of them quite simple, and the other more ambitious. That way, as long as I get the simpler task done, I’ll always have at least something to demo at the end of the day.
This time I attempted three bits of home improvement on my website.
The first problem I set myself was ostensibly the simple one. But it involved regular expressions, so then I had two problems.
I wanted to automatically link up Mastodon usernames if I mentioned one in my notes. For example, during border:none I mentioned Brian’s mastodon username in a note: @briansuda@loðfíll.is.
That turned out to be an excellent test case. Those Icelandic characters made sure I wasn’t making unwarranted assumptions about character sets.
Here’s the regular expression I came up with. It’s not foolproof by any means. Basically it looks for @something@something.something.
Good enough. Ship it.
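For illustration, here’s a rough sketch in PHP of that kind of match-and-link. The pattern and function name are my own invention rather than the actual code, and it shares the same not-foolproof caveat:

<?php
// A rough sketch: turn @user@instance mentions into links.
// The pattern is illustrative and certainly not foolproof.
// The u flag keeps it honest about non-ASCII domains like loðfíll.is.
function linkify_mastodon(string $text): string
{
    $pattern = '/@([A-Za-z0-9_\-\.]+)@([^\s@]+\.[^\s@.,;:!?)]+)/u';
    return preg_replace(
        $pattern,
        '<a href="https://$2/@$1">@$1@$2</a>',
        $text
    );
}

echo linkify_mastodon('Chatting with @briansuda@loðfíll.is at border:none.');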
My next task was a bit more ambitious. It involved SQL queries, something I’m slightly better at than regular expressions but that’s a very low bar.
I wanted to show related posts when you get to the end of one of my blog posts.
I’ve been tagging all my blog posts for years so that’s the mechanism I used for finding similar posts. There’s probably a clever SQL statement that could do this, but I ended up brute-forcing it a bit.
I don’t feel too bad about the hacky clunky nature of my solution, because I cache blog post pages. That means only the first person to view the blog post (usually me) will suffer any performance impacts from my clunky database queries. After that everything’s available straight from a cached file.
Let’s say you’re reading a blog post of mine that I’ve tagged with ten different keywords. I make a separate SQL query for each keyword to get all the other posts that use that tag. Then it’s a matter of sorting through all the results.
I loop through the results of each tag and apply a score to the tagged post. If the post shares one tag with the post you’re looking at, it has a score of one. If it shares two tags, it has a score of two, and so on.
I decided that for a post to be considered related, it had to share at least three tags. I also decided to limit the list of related posts to a maximum of five.
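Something like this minimal sketch of that brute-force scoring—with made-up table and column names, not the actual schema:

<?php
// A sketch of tag-based scoring. $db is a PDO connection;
// $tags holds the current post's tags. "tagged", "tag" and
// "postID" are invented names for illustration.
$scores = [];
$statement = $db->prepare(
    'SELECT postID FROM tagged WHERE tag = ? AND postID != ?'
);
foreach ($tags as $tag) {
    // One query per tag on the current post.
    $statement->execute([$tag, $currentPostID]);
    foreach ($statement->fetchAll(PDO::FETCH_COLUMN) as $postID) {
        // Sharing one tag scores one point; two tags, two points…
        $scores[$postID] = ($scores[$postID] ?? 0) + 1;
    }
}
// A post needs at least three shared tags to count as related…
$related = array_filter($scores, fn($score) => $score >= 3);
// …and the list is capped at the five highest-scoring posts.
arsort($related);
$related = array_slice($related, 0, 5, true);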
It worked out pretty well. If you scroll down on my recent post about JavaScript, you’ll see links to related posts about JavaScript. If you read through a post on accessibility testing, you’ll find other posts about accessibility testing. If you make it to the end of this post about Mars colonisation you’ll see links to more posts about exploring our solar system.
Right now I’m just doing this for my blog but I’d like to do it for my links too. A job for a future Indie Web Camp.
I was very inspired by Remy’s recent post on how he’s tackling link rot on his site. I wanted to do the same for mine.
On the first day at Indie Web Camp I led a session on link rot to gather ideas and alternative approaches. We had a really good discussion, though it’s always worth bearing in mind that there’ll never be a perfect solution. There’ll always be some false positives and some false negatives.
The other Jeremy at Indie Web Camp Nuremberg blogged about the session. Sebastian Greger was attending remotely and the session inspired him to spend the second day also tackling link rot.
In the end I decided to stick with Remy’s two-pronged approach: client-side JavaScript that reroutes outbound links through a server-side endpoint, and that endpoint checking whether the link still works, redirecting to the Internet Archive if it doesn’t.
Here’s the JavaScript I wrote for the first part.
It’s very similar to Remy’s but with one little addition. I check to see if the clicked link is inside an h-entry and, if it is, I pass on the date from the post’s dt-published value.
Here’s the PHP I wrote for the server-side redirector. The comments tell the story of what the code is doing. Among other things, it makes a curl request to get the response headers from the URL, with the time limit set to one second.

Not perfect by any means, but it works for the most common cases of link rot.
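That code isn’t reproduced here, but a simplified sketch of such a redirector might look like this—the url and date parameter names are invented for illustration:

<?php
// A simplified sketch of a link-rot redirector, not the actual code.
// Expects ?url=…&date=… — both parameter names are invented here.
$url  = $_GET['url'] ?? '';
$date = $_GET['date'] ?? ''; // e.g. the linking post's dt-published value

// Make a curl request to get the response headers from the URL.
// The time limit is set to 1 second.
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_NOBODY, true);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_TIMEOUT, 1);
curl_exec($curl);
$status = curl_getinfo($curl, CURLINFO_RESPONSE_CODE);
curl_close($curl);

if ($status >= 200 && $status < 400) {
    // The link still works: send the visitor straight there.
    header('Location: ' . $url, true, 301);
} else {
    // Otherwise, redirect to the Wayback Machine snapshot closest
    // to the date of the post that contained the link.
    $timestamp = date('YmdHis', strtotime($date) ?: time());
    header('Location: https://web.archive.org/web/' . $timestamp . '/' . $url, true, 301);
}
exit;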
For the demo at the end of the day I went back into my archive of over 10,000 links and plucked out some old posts, like this one from December 2005. It takes a little while to do the rerouting but eventually you get to see the archived version from the same time period as when I linked to it.
Here’s another link from 2005. Here’s another. Those links are broken now, but with a little patience, you’ll still get to read them on the Internet Archive.
The Internet Archive’s Wayback Machine really is a gift. I can’t imagine how it would be even remotely possible to try to address link rot on my site without archive.org.
I will continue to donate money to the Internet Archive and I encourage you to do the same.
When you think of heraldry what comes to mind is probably knights in shining armor, damsels in distress, jousting, that sort of thing. Medieval stuff. But I prefer to think of it as one of the earliest design systems.
This totally checks out.
I really, really like the progressive enhancement approach that Remy is taking here with outbound links:
When a real user clicks on a link, it’s swapped out to be redirected through my own endpoint that checks if the URL is still OK, and if so permanently redirects the visitor, otherwise my endpoint checks the Web Archive for the URL and permanently redirects to that instead.
I think I’m going to do the same! I’d have to rewrite the server-side code in PHP, but that shouldn’t be too tricky.
This could be a project for the next Indie Web Camp I attend.
I received this email recently:
Subject: multi-page web apps
Hi Jeremy,
lately I’ve been following you through videos and texts and I’m curious as to why you advocate the use of multi-page web apps and not single-page ones.
Perhaps you can refer me to some sources where your position and reasoning is evident?
Here’s the response I sent…
Hi,
You can find a lot of my reasoning laid out in this (short and free) online book I wrote called Resilient Web Design:
https://resilientwebdesign.com/
The short answer to your question is this: user experience.
The slightly longer answer…
For most use cases, a website (or multi-page app if you prefer) is going to provide the most robust experience for the greatest number of users. That’s because a user’s web browser takes care of most of the heavy lifting.
Navigating from one page to another? That’s taken care of with links.
Gathering information from a user to process on a server? That’s taken care of with forms.
This frees me up to concentrate on the content and the design without having to reinvent the wheels of links and form fields.
These (let’s call them) multi-page apps are stateless, and for most use cases that’s absolutely fine.
There are some cases where you’d want a state to persist across pages. Let’s say you’re playing a song, or a podcast episode. Ideally you’d want that player to continue seamlessly playing even as the user navigates around the site. In that situation, a single-page app would be a suitable architecture.
But that architecture comes at a cost. Now you’ve got to stop the browser doing what it would normally do with links and forms. It’s up to you to recreate that functionality. And you can’t do it with HTML, a robust fault-tolerant declarative language. You need to reimplement all that functionality in JavaScript, a less tolerant, more brittle language.
Then you’ve got to ship all that code to the user before they can use your site. It might be JavaScript code you’ve written yourself or it might be a third-party library designed for building single-page apps. Either way, the user pays a download tax (and a parsing tax, and an execution tax). Whereas with links and forms, all of that functionality is pre-bundled into the user’s web browser.
So that’s my reasoning. At least nine times out of ten, a multi-page approach is leaner, more robust, and simpler.
Like I said, there are times when a single-page approach makes sense—it all comes down to whether state needs to be constantly preserved. But these use cases are the exceptions, not the rule.
That’s why I find the framing of your question a little concerning. It should be inverted. The default approach should be to assume a multi-page approach (which is the way the web works by default). Deciding to take a JavaScript-driven single-page approach should be the exception.
It’s kind of like when people ask, “Why don’t you have children?” Surely it’s the decision to have a child that should require deliberation and commitment, rather than the other way around.
When it comes to front-end development, I’m worried that we’ve reached a state where the more complex over-engineered approach is viewed as the default.
I may be committing a fundamental attribution error here, but I think that we’ve reached this point not because of any consideration for users, but rather because of how it makes us developers feel. Perhaps building an old-fashioned website that uses HTML for navigations feels too easy, like it’s beneath us. But building an “app” that requires JavaScript just to render text on a screen feels like real programming.
I hope I’m wrong. I hope that other developers will start to consider user experience first and foremost when making architectural decisions.
Anyway. That’s my answer. User experience.
Cheers,
Jeremy
It’s often said that it’s easier to make a fast website than it is to keep a website fast. Things slip through. If you’re not vigilant, performance can erode without you noticing.
It’s a similar story for other invisible but important facets of your website: privacy, security, accessibility. Because they’re hidden from view, you won’t be able to see if there’s a regression.
That’s why it’s a good idea to have regular audits for performance, privacy, security, and accessibility.
I wrote about accessibility testing a while back, and how there’s quite a bit that you can do for yourself before calling in an expert to look at the really gnarly stuff:
When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.
I recently did an internal audit of the Clearleft website. After writing up the report, I also did a lunch’n’learn to share my methodology. I wanted to show that there’s some low-hanging fruit that pretty much anyone can catch.
To start with, there’s keyboard navigation. Put your mouse and trackpad to one side and use the tab key to navigate around.
Caveat: depending on what browser you’re using, you might need to update some preferences for keyboard navigation to work on links. If you’re using Safari, go to “Preferences”, then “Advanced”, and tick “Press Tab to highlight each item on a web page.”
Tab around and find out. You should see some nice chunky :focus-visible styles on links and form fields.
Here’s something else that anyone can do: zoom in. Increase the magnification to 200%. Everything should scale proportionally. How about 500%? You’ll probably see a mobile-friendly layout. That’s fine. As long as nothing is broken or overlapping, you’re good.
At this point, I reach for some tools. I’ve got some bookmarklets that do similar things: tota11y and ANDI. They both examine the source HTML and CSS to generate reports on structure, headings, images, forms, and so on.
These tools are really useful, but you need to be able to interpret the results. For example, a tool can tell you if an image has no alt text. But it can’t tell you if an image has good or bad alt text.
Likewise, these tools are great for catching colour-contrast issues. But there’s a big difference between a colour-contrast issue on the body copy compared to a colour-contrast issue on one unimportant page element.
I think that demonstrates the most important aspect of any audit: prioritisation.
Finding out that you have accessibility issues isn’t that useful if they’re all presented as an undifferentiated list. What you really need to know are which issues are the most important to fix.
By the way, I really like the way that the Gov.uk team prioritises accessibility concerns:
The team puts accessibility concerns in 2 categories:
- Theoretical: A question or statement regarding the accessibility of an implementation within the Design System without evidence of real-world impact.
- Evidenced: Sharing new research, data or evidence showing that an implementation within the Design System could cause barriers for disabled people.
The team will usually prioritise evidenced issues and queries over theoretical ones.
When I wrote up my audit for the Clearleft website, I structured it in order of priority. The most important things to fix are at the start of the audit. I also used a simple scale for classifying the severity of issues: low, medium, and high priority.
Thankfully there were no high-priority issues. There were a couple of medium-priority issues. There were plenty of low-priority issues. That’s okay. That’s a pretty good distribution.
If you’re interested, here’s the report I delivered…
There are a few issues with the pink colour. When it’s used on a grey background, or when it’s used as a background colour for white text, the colour contrast isn’t high enough.
The SVG arrow icon could be improved too.
- --red is currently rgb(234, 33, 90). Change it to rgb(210, 20, 73) (thanks, James!)
- The SVG arrow icon is currently using currentColor. Consider hardcoding solid black (or a very, very dark grey) instead.

Alt text is improving on the site. There’s reasonable alt text at the top level pages and the first screen’s worth of case studies and blog posts. I made a sweep through these pages a while back to improve the alt text but I haven’t done older blog posts and case studies.
The site is using headings sensibly. Sometimes the nesting of headings isn’t perfect, but this is a low priority issue. For example, on the contact page there’s an h1 followed by two h3s. In theory this isn’t correct. In practice (for screen reader users) it’s not an issue.
- In one place, an h3 is used instead of an h1.
- h3 headings are used for the industry sector (“Charities”, “Education” etc.) but these should probably not be headings at all. On the blog index page we use a class “Tags” for a similar purpose. Consider reusing that pattern on the case studies index page.
- In another place, the first heading is an h3 and the subsequent three headings are h2s. Ideally this would be reversed: a single h2 followed by three h3s.

Sometimes the same text is used for different links.
The only form on the site is the newsletter sign-up form. It’s marked up pretty well: the input has an associated label, although a visible (clickable) label would be better.
The site doesn’t use JavaScript to mess with tabbing order for keyboard users. The source order of elements in the markup generally makes sense so all is good.
The focus styles are nice and clear too!
The site is using HTML landmark elements sensibly (header, nav, main, footer, etc.).
Stéphanie has gathered a goldmine of goodies:
Articles, resources, checklists, tools, plugins and books to design accessible products
How do we write, design, and code a link that works for everyone on every device? Let’s dive into the world of creating the perfect link, without making a pig’s breakfast of it.