Rather than a working group (WG), a Sustainable Web Interest Group (IG) with open participation would better enable the Web Sustainability Guidelines (WSG) to have a larger impact sooner, with broader support.
Proposal: rewrite the current proposed SustyWeb charter as a charter for a SustyWeb IG, and provide a plan for publishing the WSG as a Note, rapidly iterating on it similar to how the Community Group is already iterating on the WSG Report, with the eventual goal of publishing an AC-approved W3C Statement to give it more formal standing.
An IG would involve less process than a WG. For example, it would avoid patent-related procedures, disclosures, etc., which should be unnecessary for the WSG.
The IG charter could also define custom success criteria for a WSG Statement that better reflect the varied needs of providing a broad spectrum of sustainability guidelines. A broad set of sustainability guidelines would better achieve sustainability goals than subsetting or restricting the guidelines to only those that pass the more precisely objective testability bar expected of purely technical specifications intended for interoperable independent implementations.
At a high level the WSG is also more similar to the Ethical Web principles, which itself is destined (eventually) for a W3C Statement.
Similar to its rel=me checking support (validator), IndieWebify should support
parsing, checking, and advising how to improve any rel=author (https://microformats.org/wiki/rel-author)
links found on a page.
IndieWebify should look for and check both:
<a rel="author" href="https://…">
and
<link rel="author" href="https://…">
tags, and display a list of all of them found on a particular page,
along with any additional information about each one (e.g.
type=,
title=,
and
hreflang=
attributes).
Ideally it should also check for a valid representative h-card at the destination of a rel=author link.
This is worth implementing both as its own IndieWebify feature
(especially so software & services like Mastodon could test an implementation),
and as a building block towards implementing a complete authorship validator
(issue #6).
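The two tag shapes above can be discovered with a small amount of HTML parsing. Here is a minimal sketch in Python using only the stdlib HTMLParser; the class and field names are illustrative, not IndieWebify internals:

```python
from html.parser import HTMLParser

class RelAuthorParser(HTMLParser):
    """Collect <a rel=author> and <link rel=author> links with extra attributes."""
    def __init__(self):
        super().__init__()
        self.authors = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        d = dict(attrs)
        # rel is a space-separated list of tokens, e.g. rel="author me"
        rels = (d.get("rel") or "").split()
        if "author" in rels and d.get("href"):
            self.authors.append({
                "tag": tag,
                "href": d["href"],
                # optional attributes worth reporting alongside each link
                "type": d.get("type"),
                "title": d.get("title"),
                "hreflang": d.get("hreflang"),
            })

html = '''<head><link rel="author" href="https://example.com/alice" hreflang="en"></head>
<body><a rel="author" href="https://example.com/bob" title="Bob">Bob</a></body>'''
parser = RelAuthorParser()
parser.feed(html)
for author in parser.authors:
    print(author["tag"], author["href"])
```

A real checker would then fetch each href and look for a representative h-card, as noted above.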
Similar to its h-entry checking support (validator), IndieWebify should support
parsing, checking, and advising how to improve your h-feed (https://microformats.org/wiki/h-feed),
ideally re-using the existing h-entry checking code to also validate all h-entry items found inside the h-feed.
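A rough structural version of that check, counting h-entry items nested inside an h-feed, can be sketched with the stdlib HTMLParser; a real validator would use a full microformats2 parser such as mf2py, and the names here are illustrative:

```python
from html.parser import HTMLParser

class HFeedChecker(HTMLParser):
    """Rough structural check: count h-entry items nested inside an h-feed."""
    def __init__(self):
        super().__init__()
        self.feed_depth = 0   # >0 while inside an h-feed element
        self.open_tags = []   # stack of (tag, was_feed_root)
        self.entries_in_feed = 0

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        is_feed = "h-feed" in classes
        if is_feed:
            self.feed_depth += 1
        if "h-entry" in classes and self.feed_depth > 0:
            self.entries_in_feed += 1
        self.open_tags.append((tag, is_feed))

    def handle_endtag(self, tag):
        # pop until we match the tag (tolerates unclosed inline tags)
        while self.open_tags:
            t, was_feed = self.open_tags.pop()
            if was_feed:
                self.feed_depth -= 1
            if t == tag:
                break

html = '''<div class="h-feed">
  <article class="h-entry"><a class="u-url" href="/1">One</a></article>
  <article class="h-entry"><a class="u-url" href="/2">Two</a></article>
</div>'''
checker = HFeedChecker()
checker.feed(html)
print(checker.entries_in_feed)  # number of h-entry items found inside the h-feed
```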
I finally understand why
Rambaldi
may have hidden so many inventions.
Forecast
When you invent something, you should forecast the impact of your invention in the current cultural (social, political, economic, belief systems) context, and if it
poses non-trivial existential risk
or is likely to cause more harm than good
Shoulds
Then you should stop, and:
encrypt your work for a potentially better future context
or destroy your notes, ideally in a way that minimizes risk of detection of their deliberate destruction
and avoid any use, or any detectable use, of your invention, because even the mere use of it may provide enough information for someone else, who may not be as responsible, to reinvent it.
In Addition
Insights and new knowledge are included in this meaning of “invention” and the guidance above.
Forecasting should consider both whether your invention could directly cause risk or more harm, and whether it could be incorporated as a building block with other (perhaps yet-to-be-invented) technologies to create risk or more harm.
Instead
Instead of continuing work on such inventions, shift your focus to:
work on other inventions
and document & understand how & why that current cultural context would contribute to existential risk or more harm than good
and work to improve and evolve that cultural context to reduce or eliminate its contribution to existential risk, and/or its aspects that would (or already do) cause more harm than good
Da Vinci
The
Should (1)
provides a plausible explanation for why
Da Vinci
“encrypted” his writings in
mirror script,
deliberately making it difficult for others to read (and thus remember or reproduce).
Per
Should (2)
he also wrote in paper mediums of the time that were all destroyable,
and he may have been successful in destroying without detection,
since no one has found any evidence thereof, although such a lack of evidence is purely circumstantial and he may just as likely have never destroyed any invention notes.
Methods & Precautions
Learning from Da Vinci’s example within the context of the
Shoulds, we can infer additional methods and precautions to take when developing inventions:
do not write initial invention notes where others (people or bots) may read them (e.g. most online services) because their ability to transcribe or make copies prevents
Should (2).
Instead use something like paper notes which can presumably be shredded or burned if necessary, or keep your notes in your head.
do not use bound notebooks for initial invention notes because tearing out a page to destroy may be detectable by the bound remains left behind.
instead use individual sheets of paper organized into folders; perhaps eventually bind your papers into a notebook, which apparently
Da Vinci did!
“These notebooks – originally loose papers of different types and sizes…”
consider developing a simple unique cipher you can actively use when writing, which will at least inconvenience, reduce, or slow the readability of your notes. Even better if you can develop a
steganographic
cipher, where an obvious reading of your invention writings provides a plausible but alternative meaning, thus hiding your actual invention writings in plain sight.
Dream
Many of these insights came to me in a dream this morning, so clearly that I immediately wrote them down upon waking up, and continued writing extrapolations from the initial insights.
Additional Reading
After writing down the above while it (and subsequent thoughts & deductions) were fresh in mind, and typing it up, I did a web search for “responsible inventing” for prior works that are similar, related, or possibly of interest, and found:
While this post encourages forecasting and other methods for avoiding unintended harmful impacts of inventions, I want to close by placing those precautions within an active positive context.
I believe it is the ultimate responsibility of an inventor to contribute, encourage, and actively create a positive vision of the future through their inventions. As
Alan Kay said:
“The best way to predict the future is to invent it.”
Comments
Comments curated from
replies on personal sites
and
federated replies
that include thoughts, questions, and related reading
that contribute to the primary topic of the article.
If some invention can pose a risk, should it be treated as a vulnerability?
Destroying/delaying an invention, in this case, could lead to it being re-invented and exploited in a different, less responsible, place.
Obviously, it doesn't mean that invention should be unleashed. But if it poses a risk, wouldn't it be more responsible to work on finding a way to minimize it, and, ideally, not alone?
There is probably no one good answer, and each case will be different.
I am unsure if it is always practical or possible for an inventor to understand all the characteristics of their inventions and their impact beyond a very slim set of hops.
If things go well, I believe inventors can "believe their own hype", because they are human.
Questions:
Is it a free pass if you make something awful and can't take it back?
Would that make Ignorance a virtue?
This opens up many more problems, for both creators, and broader society.
Finished my second Broken Arrow #Skyrace 23k¹ yesterday in 6:52:44! #RingDasBell
This year’s #BrokenArrowSkyrace² 23k was actually that distance! I ran 23.3km with 4557' vertical climb! In contrast, last year’s "23k" race³ was rerouted (due to weather conditions) last minute to two laps of the 11k course, where my actual distance was 18.87km with 4905' vert.
I have been looking forward to this all year, to climbing the infamous "Stairway to Heaven" ladder to the top of Washeshu Peak (8885'/2692m elevation) for the first time (since last year’s race had to skip it).
This year’s Broken Arrow is the start of the Mountain Running World Cup⁴. It’s a rare sports event opportunity to compete with the best in the sport, to literally run the same trails they do, on the same day, with the same start (there are no waves), and finish line.
Lots to write up; for now, I’m grateful for the experience and accomplishment.
Super grateful for everyone who came out to cheer and especially my coach whose training and guidance got me here.
A few notes:
Great lining up with so many friends.
Hot day. Filled my ice bandana at the first aid station (Snow King) which made the rest possible.
Steady hydration & fueling.
Fueling timeline notes (times are my H:MM race clock times from the start)
0:00 start
1:45 ate Picky Bar
2:00 finished Tailwind in 500ml bottle
2:08 Snow King aid station: refilled bottles, one with water and the other with mandarin Tailwind; filled ice bandana with ice; picked up a few Spring Energy gels
3:15 ate Awesome Sauce gel
3:45 ate Awesome Sauce gel
~4:30 left Siberia aid station with refilled ice bandana, bottles, a few Spring Snacks; ate potato chips and a watermelon slice; salt+nuun added to one water bottle, mandarin Tailwind in the other
5:05 ate Awesome Sauce gel
5:35 (-13:39) left Julia aid station with another Spring Energy gel
6:03 ate Awesome Sauce gel
6:52:44 finish
Lots of incredible views along the way. The air was clean and quite breathable even nearing 9000'. Felt a bit slower but kept going within my capacity.
Kept an eye on the time remaining before cut-off compared to my distance and vert climbing remaining and pushed steadily when I could.
Finished with just over 7 minutes to spare before the official cut-off, to friends cheering on all sides. Saw and hugged my coach after ringing the bell at the finish.
POSSE (Publish on your Own Site, Syndicate Elsewhere) has grown steadily as a common practice in the #IndieWeb community, personal sites, CMSs (like Withknown, which itself reached 10 years in May!), and services (like https://micro.blog) for over a decade.
In its 12th year, POSSE broke through to broader technology press and adoption beyond the community. For example:
In its 19th year, the microformats formal #microformats2 syntax and popular vocabularies h-card, h-entry, and h-feed, kept growing across IndieWeb (micro)blogging services and software like CMSs & SSGs both for publishing, and richer peer-to-peer social web interactions via #Webmention.
Beyond the IndieWeb, the rel=me microformat, AKA #relMe, continues to be adopted by services to support #distributed #verification, such as these in the past year:
* Meta Platforms #Threads user profile "Link" field¹
* #Letterboxd user profile website field²
For both POSSE and microformats, there is always more we can do to improve their techniques, technologies, and tools to help people own their content and identities online, while staying connected to friends across the web.
Yesterday I proposed the idea of a “minimum interesting service worker” that could provide a link (or links) to archives or mirrors when your site is unavailable. It is one possible way to make personal #indieweb sites more reliable, by providing at least a user path to “soft repair” links to your site that may otherwise seem broken.
Minimum because it only requires two files and one line of script in your site footer template, and interesting because it provides both a novel user benefit and personal site publisher benefits.
The idea occurred to me during an informal coffee chat over Zoom with a couple of other IndieWeb community folks yesterday, and afterwards I braindumped a bit into the IndieWeb Developers Chat channel¹. Figured it was worth writing up rather than waiting to implement it.
Basic idea:
You have a service worker (and an “offline” HTML page) on your personal site, installed from any page on your site. All it does is cache the offline page. On future requests to your site, it checks whether the requested page is available, and if so serves it; otherwise it displays your offline page with the “site appears to be unreachable” message that a lot of service workers provide, AND an algorithmically constructed link to the page on an archive (e.g. the Internet Archive) or a static mirror of your site (typically at another domain).
This is minimal because it requires only two files: your service worker (a JS file) and your offline page (a minimal self-contained static HTML file with inline CSS). Doable in <1k bytes of code, with no additional local caching or storage requirements, thus a negligible impact on site visitors (likely less than the cookies that major sites store).
User benefit:
If someone has ever visited your personal site, then in the future whenever they click a link to your pages or posts, if your site/domain is unavailable for any reason, then the reader would see a notice (from your offline page) and a link to view an archive/mirror copy instead, thus providing a one-click ability for the reader to “soft-repair” any otherwise apparently broken links to your site.
Personal site publisher benefits:
Having such a service worker that automatically provides your readers links to where they can view your content on an archive or mirror means you can go on vacation or otherwise step away from your personal site, knowing that if it does go down, (at least prior) site visitors will still have a way to click-through and view your published content.
Additional enhancements:
Ideally any archive or mirror copies would use rel=canonical to link back to the page on your domain, so any crawlers or search engines could automatically prefer your original page, or browsers could offer the user a choice to “View original”. You can do that by including a rel=canonical link in all your original pages, so when they are archived or mirrored, those copies automatically include a rel=canonical link back to your original page or post.
The simplest implementation would be to ping the Internet Archive to save² your page or post upon publishing it. You could also add code to your site to explicitly generate a static mirror of your pages, perhaps with an SSG or crawler like Spiderpig, to a GitHub repo, which is then auto-served as GitHub static pages, perhaps on its own domain yet at the same paths as your original pages (to make it trivial to generate such mirror links automatically).
If you’re using links to the Internet Archive, you can generate them automatically by prefixing your page URL with https://web.archive.org/web/*/ e.g. this post:
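The link construction is simple enough to sketch in a few lines of Python; the mirror variant assumes a hypothetical mirror host (mirror.example.com is made up for illustration):

```python
from urllib.parse import urlsplit, urlunsplit

ARCHIVE_PREFIX = "https://web.archive.org/web/*/"

def archive_link(url: str) -> str:
    """Internet Archive lookup link: just prefix the original page URL."""
    return ARCHIVE_PREFIX + url

def mirror_link(url: str, mirror_host: str) -> str:
    """Static-mirror link: same path as the original, on the mirror's domain."""
    parts = urlsplit(url)
    return urlunsplit(("https", mirror_host, parts.path, parts.query, ""))

print(archive_link("https://tantek.com/2024/173/t1/"))
print(mirror_link("https://tantek.com/2024/173/t1/", "mirror.example.com"))
```

Keeping the mirror at the same paths as the original (as suggested above) is what makes the second function a pure host swap.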
It may be possible to write this minimum interesting service worker (e.g. misv.js) as a generic (rather than site-specific) service worker that literally anyone with a personal site could “install” as is (a JS file, an HTML file, and a one-line script tag in their site-wide footer) and it would figure everything out from the context it is running in, unchanged (zero configuration necessary).
Ran my 12th #BayToBreakers race in 1:59:54 on Sunday 2024-05-19.
After a comedy of transit struggles to get to the start line, I jumped in with Corral C runners (my bib was for Corral B) and started with them.
Great seeing the Midnight Runners crab rave cheer gang in Hayes Valley before Hayes Hill.
Made it into Golden Gate Park, and eventually saw Vivek and David Lam making their way back from the finish.
Just before the bison paddock, I saw Paddy & Eleanor walking back as well, and stopped to briefly chat with them.
Soon after I saw Adrienne and a few other #NPSF pals running and as they stopped to say hi to Paddy, I took off to go finish.
Adrienne and friends caught up to me on the last segment before Ocean Beach, and decided to run together. After turning the corner onto Great Highway, I could see the finish line. Glancing down at my watch there seemed to be enough time to finish under 2 hours if we picked it up. I asked Adrienne if we could try for a sub-2 hour time and she said to go for it. We picked up the pace and after crossing the finish line I stopped my Garmin — it read 1:59:54.
Oddly the official Bay to Breakers results (which are not at a linkable URL) showed 2:00:07. The only explanation I have is after the first timing strip after the finish line where I stopped my watch, there was a big crowd of loitering people that made it hard to keep moving, and cross a second timing strip. It is possible the first timing strip did not register my bib chip, and only the second timing strip picked it up. I have emailed Bay to Breakers to see if they can correct it, and included a link to my Strava activity that shows I recorded the entire race on my watch.
It was a harder race than usual, despite the good weather.
There were a few things that contributed. First, I had run each of the prior two days: 5km+ at Friday night’s Midnight Runners 5th anniversary run and run/walk celebration afterwards totaling ~5 miles, and then 6.5 miles at SFRC on the trails on Saturday.
I slept reasonably well the night before the race, and having checked the news announcements about the availability of transit options in the morning, planned accordingly. When I checked the actual train arrival times, none of the MUNI trains that were supposed to be running were running. I ran down to take the MUNI bus, which was supposed to go downtown, except it stopped at Van Ness Avenue, inexplicably, and the driver told everyone it was the last stop.
Admittedly I was already annoyed that SF MUNI for some reason decided to stop the MUNI trains the morning of Bay to Breakers that could easily have taken thousands of runners to near the race start at Embarcadero via the Market Street subway. Having the bus stop sooner than expected was a second disappointment and discouragement.
I (and many other runners) decided to run towards the start, which was still ~2 miles away at that point.
Upon reaching the Civic Center station on Market street, we realized from the street level displays that BART trains appeared to be running normally like any other Sunday, so we went downstairs and paid for a second transit ticket to take the BART a few stops.
The BART train was full of costumed Bay to Breakers runners. Disembarking at the Embarcadero station, I jogged/ran the rest of the way around the entrance corral maze to the right spot for Corral B entrants, and joined the group waiting at the start line.
Lessons learned: I am not trusting MUNI rail or bus into downtown on Bay to Breakers race day again, despite any announcements from SFMTA. Too many years of bad experiences.
However, BART seems reliable so I plan to find my way to taking BART in the future. Perhaps by taking a bus to the 16th street BART station, avoiding all street closures.
Having missed my start corral due to the transit mishaps, I didn’t see anyone else I knew. The combination of being annoyed at MUNI’s unreliability (both in what was announced vs what was running and premature bus termination) and starting in a crowd not knowing anyone took my motivation down several notches.
Still, the weather was pleasant yet cool, ideal for a race so I ran a pace that felt good for me, and kept an eye out for friends along the course. I stopped after mile 1 for a portapotty pitstop. Back in the chaos of Howard street and then Ninth to Hayes, I saw a few folks I knew from a distance.
Seeing and high-fiving the Midnight Runners crab rave cheer crew at Hayes Hill turned my mood around though, and I enjoyed the rest of the race, from Hayes Hill through Golden Gate Park.
It was my slowest Bay to Breakers yet, however first in a while that I finished with friends!
After we grabbed our medals and snacks in the finish area, I hiked/jogged back to the Panhandle, found the Midnight Runners crab rave crew keeping the party going and joined in.
Great seeing old friends and meeting new amazing people as well. So many thoughtful inspiring conversations germinating new ideas for creative projects.
Took lots of photos and notes.
We recorded all the IndieWebCamp day 1 #BarCamp style breakout sessions, and I believe all the Beyond Tellerrand talks were recorded as well. I’m looking forward to rewatching the sessions and talks and reconnecting with all the ideas and open tabs in my browser.
Aside: this past Tuesday, the second day of the 2024 Beyond Tellerrand talks, was also the five-year anniversary of my closing talk at btconf DUS 2019: _Take Back Your Web_ (https://www.youtube.com/watch?v=qBLob0ObHMw )
↳ In reply to hachyderm.io user thisismissem’s post @thisismissem@hachyderm.io re: “issue might be with what you're federating out maybe”: possibly, except that the point in my reply to @flaki@flaki.social is that #Mastodon is still getting it half-right, which is a bug in Mastodon regardless of what I’m federating out.
Either Mastodon should be treating my hashtags precisely as hashtags, (re)linking them to the local tagSpace *and* ignoring them for link previews, or it should be treating them “purely” as links, and not changing their default/published hyperlink and considering them for a link-preview.
Re: “help to have the activities json representation” — my understanding is that it should be automatically discoverable from my post permalink, so all that should be needed for a bug report is my post permalink. Perhaps @snarfed.org can clarify, since I’m using https://fed.brid.gy/ to provide that representation.
Either way, is there a validator for the “activities json representation” that we can use to test a particular post permalink, have it auto-discover an activities json representation, and report back what it finds and the validity thereof?
For example, since my posts use the h-entry standard, I am able to validate my post permalinks using the IndieWebifyMe h-entry validator:
Which finds and validates that I have marked up my hashtags/categories correctly.
Re: “@flaki@flaki.social's Mention there got federated as a Link instead of as a Mention (since replying to this post didn't automatically include flaki's handle)” — this too sounds like a (different) Mastodon bug, since I believe @flaki@flaki.social was notified of my reply and mention of their handle. Perhaps Mastodon is getting it half-right: notifying but not canoeing¹?
Did you receive a notification in your Mastodon instance/client of this reply and its mention of your handle @thisismissem@hachyderm.io? Or only one but not the other?
↳ In reply to flaki.social user flaki’s post @flaki@flaki.social: no cross-posting at all; that post and this reply are federated directly from my personal domain. If you look at the top of my post in your Mastodon client / reader you can see that it’s from @tantek.com — no need for a username when you use your own domain.
Regarding “why the expanded link preview is to one of the (first) hashtags and not to one of the links in the post”, that’s likely a #Mastodon link preview bug with how it treats hashtags.
If you view your reply in your Mastodon (client), you can see that the first hashtag in my post #webDevelopers is correctly (re)linked to your Mastodon’s tagspace: https://flaki.social/tags/webDevelopers, so Mastodon is at least getting that part right, recognizing it as a hashtag, and linking it correctly for your view.
However, Mastodon is still for some reason using the default link for that hashtag on my site (where I am using https://indieweb.social as the tagspace¹) as the link for the link preview.
Since you use Mastodon, perhaps you could file an issue on Mastodon to fix that bug? Something like:
If Mastodon recognizes a hashtag and converts it to link to a local tagspace, it MUST NOT use that hashtag’s prior/default hyperlink as the link for the link preview shown on a post.
For #webDevelopers who like to try out pre-release features in #browsers, in addition to the numerous #Firefox experimental features which everyone has access to in Nightly Builds (as documented by MDN¹) did you know that #Mozilla also has Origin Trials?
Instructions and how to participate on the Mozilla Wiki:
In addition, we’ve linked to the #originTrial documentation pages of #GoogleChrome and #MicrosoftEdge if you want to check those out. Linkbacks welcome of course.
The Library of Infinite Loan is a physical world practice I conceived of many many years ago¹, implemented in minimal prototype form 5+ years ago², shared a summary with the #IndieWeb community at least four years ago at #IndieWebCamp Austin in 2020³ and last year in IndieWeb chat⁴, so it’s about time⁵ I wrote it down.
Summary: lend a #book from your personal library⁶ to a friend, on the conditions that they do not donate sell or dispose of it, and instead when they are done with it they return it or lend it to someone else who agrees to these conditions.
My goal was to create a book lending system that:
* preserves books — effectively in a giant #distributed communal #library
* makes lending easier fiscally, psychologically, emotionally for both parties
* encourages direct person-to-person lending without intermediaries
* grows a culture of non-zero-sum sharing, preservation, and longterm thinking
The basic steps to create a Library of Infinite Loan:
1. Create a separate space (like a particular bookshelf) for #books to infinite lend. A small shelf in a guest room or common space like a hallway works well.
2. Move books there that you are ok lending out and never seeing again
3. Label that space your “Library of Infinite Loan”, or invite guests to borrow from your “Library of Infinite Loan”
4. When visitors ask what that means, explain the Rules
Rules for borrowing from a Library of Infinite Loan (“the Rules”):
1. Keep it as long as you like
2. Do not sell, donate, or otherwise dispose of it
3. You may give it
   a. back to the person you borrowed from
   b. or back to its original purchaser if they wrote their name and web address inside
   c. or (lend it) to someone else who agrees to the Rules
There are several ways to extend / expand the Library of Infinite Loan:
* custom book plate: design a custom book plate for yourself with room for your name (and web address) on it, e.g. “From Tantek’s (@tantek.com) Library” (with space), print it on longterm adhesive paper, and place it inside new books you purchase. When you move a book to your Library of Infinite Loan, amend the book plate to say ”… Library of Infinite Loan” and attach a copy of the Rules.
* add a “borrowers log” with blank lines for anyone you lend it to or they lend it to, transitively, to optionally add their name, web address, and a date of borrowing. Then amend the rules to allow returning a book to who you borrowed from or anyone in the borrower log or original purchaser.
* more media: CDs, vinyl records, DVDs, LaserDiscs, VHS, cassette tapes, video game cartridges, etc.
* other things
  * large tools — which usually come in a box with instruction manual, so there’s a logical place to put an “owners plate”, “borrowers log”, and copy of the rules.
  * artwork — a great way to rotate art among a community
This is what I remember off the top of my head and with a little web searching. I know I have a bunch more notes in various places of my thoughts (and conversations) over the years about a Library of Infinite Loan. As I find those notes, I’ll post them as well.
I’m the current editor of the Vision for W3C and helped get it across the line this year to reach #w3cAB (W3C Advisory Board @ab@w3c.social) consensus to publish as an official Group Note, the first official Note that the AB (Advisory Board) has ever published.
I’m very proud of this milestone, as I and a few others including many on the AB¹, have been working on it for a few years in various forms, and with the broader W3C Vision TF² (Task Force) for the past year.
W3C also recently announced the Vision for W3C in their news feed:
One of the key goals of this document was to capture the spirit of why we are at #W3C and our shared values & principles we use to guide our work & decisions at W3C.
If you work with any groups at W3C, anything from a Community Group (CG) to a Working Group (WG), I highly recommend you read this document from start to finish.
See what resonates with you, if there is anything that doesn’t sound right to you, or if you see anything missing that you feel exemplifies the best of what W3C is, please file an issue or a suggestion:
Check that list to see if your concerns or suggestions are already captured, and if so, add an upvote or comment accordingly.
Our goal is to eventually publish this document as an official W3C Statement, with the consensus of the entire #w3cAC (W3C Advisory Committee).
One key aspect which the Vision touches on but perhaps too briefly is what I see as the fundamental purpose of why we do the work we do at W3C, which in my opinion is:
To create & facilitate user-first interoperable standards that improve the web for humanity
“Interoperability: We verify the fitness of our specifications through open test suites and actual implementation experience, because we believe the purpose of standards is to enable independent interoperable implementations.”
These are both excellent, and yet I think we can do better, by adding some sort of explicit statement between those two that “We will” create & facilitate user-first interoperable standards that improve the web for humanity.
In the coming weeks I’ll be reflecting on how we (the Vision TF) can incorporate that sort of imperative “We will” statement about interoperable standards into the Vision for W3C, as well as working with the AB and W3C Team on defining a succinct updated mission & purpose for W3C based on that sort of input and more.
In a related effort, I have also been leading the AB’s “3Is Priority Project³” (Interoperability and the Role of Independent Implementations), which is a pretty big project to define and clarify what each of those three Is mean, with respect to each other and Incubation, which is its own Priority Project⁴.
As part of the 3Is project, the first “I” I’ve been focusing on has unsurprisingly been “Interoperable”. As with other #OpenAB projects, our work on understanding interoperability, its aspects, and defining what we mean by interoperable is published and iterated on the W3C’s public wiki:
This is still a work in progress, however it’s sufficiently structured to be worth a look if interoperability is something you care about or have opinions about.
In particular, if you know of definitions of interoperable or interoperability that resonate and make sense to you, or articles or blog posts about interoperability that explore various aspects, I am gathering such references so we can make sure the W3C’s definition of interoperable is both well-stated, and clearly reflects a broader industry understanding of interoperability.
Last week I participated in @W3.org (@w3c@w3c.social) #w3cAC (W3C Advisory Committee¹), #w3cAB (W3C Advisory Board² @ab@w3c.social), and #w3cBoard (Board of the W3C Corporation³) meetings in Hiroshima, Japan.
The AC (Advisory Committee) meeting was two days, followed by two days of AB and Board meetings which started with a half-day joint session (including the #w3cTAG), then separate meetings to focus on their own tasks & discussions.
The W3C Process⁴ describes the twice a year AC (Advisory Committee) Meetings⁵. In addition to members of the AC (one primary and one alternate per W3C Member Organization), the meetings are open to the AB (Advisory Board), the W3C Board, the W3C TAG (W3C Technical Architecture Group⁶ @tag@w3c.social), Working Group⁷ chairs, Chapter⁸ staff, and this time also a W3C Invited Expert designated observer⁹.
The AC currently meets in the Spring on its own and at a shorter meeting in the Fall as part of the annual #w3cTPAC (W3C Technical Plenary and Advisory Committee¹⁰ meetings). The existence, dates, and location of the event are public¹¹, however the agenda, minutes, and registrants are generally Member-confidential. Since those individual links have their own access controls, I collected them on a publicly-viewable wiki page for easier discovery & navigation (if you work for a W3C Member Organization¹²):
Most of the W3C meeting materials and discussions were also W3C Member-confidential, however many of the presentations are publicly viewable, and a few more may be shared publicly after the fact.
Those of us at #W3C who believe in pushing for more openness and transparency in standards work, even (or especially) in the governance of said work, will be doing our best to work with others at W3C to continue shifting our work accordingly.
Aside: I started the #OpenAB project when I was first elected to the AB (Advisory Board) in 2013, documenting it on the publicly viewable W3C Wiki, and updated it with the help of others since: https://www.w3.org/wiki/AB#Open_AB
Like most conferences, I got as much out of side conversations at breaks (AKA hallway track¹³) and meals as I did from scheduled talks and panels.
Because there are 88 keys on a standard piano, the 88th day of the year was established as a day to “celebrate the piano and everything around it: performers, composers, piano builders, tuners, movers and most important, the listener”.
I appreciate that Piano Day is on an ordinal day of the year (88th) rather than a Gregorian date (e.g. 8/8 or August 8th) which is subject to leap year variances. The 88th day of the year is the 88th day regardless of whether it is a leap year or not.
From a standards perspective, we can express today’s Piano Day as 2024-088, an ISO ordinal date², however there is no standard date format for just "the 88th day of a year" without specifying a year (yearless).
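As a quick sketch using only Python’s standard library: the strftime directive %j yields the three-digit day of the year, so %Y-%j produces exactly this ISO 8601 ordinal date format.

```python
from datetime import date

# 2024 is a leap year, so its 88th day falls on March 28
piano_day_2024 = date(2024, 3, 28)
assert piano_day_2024.timetuple().tm_yday == 88

# %Y-%j formats an ISO 8601 ordinal date (YYYY-DDD)
print(piano_day_2024.strftime("%Y-%j"))  # 2024-088

# In a common (non-leap) year the 88th day shifts to March 29,
# yet it is still unambiguously the 88th day
print(date(2023, 3, 29).timetuple().tm_yday)  # 88
```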
There is (or was) a way to specify a yearless month and day, like a birthday displayed on a social media site without disclosing the year, or an annual holiday such as May Day³ (May 1st) without a specific year:
--05-01
This yearless date format (--MM-DD or shorthand --MMDD) was supported in the ISO 8601:2000 standard, but then dropped in the 2004 revision. This omission or deliberate removal was an error, because there are both obvious human visible use-cases (communicating holidays, and yearless birthdays as noted above), and other standards already depended on this yearless date format syntax (e.g. vCard⁴ and specs that refer to it like hCard and h-card).
Every version of ISO 8601 since 2000 has this flaw. Fixing (or patching) #ISO8601 is worth a separate post.
Returning to yearless ordinal dates, since they lack an interchange syntax, we can define one resembling the yearless month day format, yet unambiguously parseable as a yearless ordinal date:
---DDD
e.g. Piano Day would be represented as:
---088
We have to use three explicit digits because there are also pre-existing "day of the month" and "month of the year" syntaxes which are very similar, but with two digits:
--MM ---DD
This yearless #ordinalDate syntax (---DDD) is worth proposing as a delta "repair" spec to ISO 8601 (use-cases: Piano Day and others like Programmer’s Day⁵), alongside at least a restoration of the --MM-DD yearless month day syntax (use-cases: publishing holidays and yearless birthdays), perhaps also the ---DD day of the month and --MM month of the year syntaxes (use-case: language-independent numerical publishing of Gregorian months and days of months), and the addition of a NewCal bim of the year syntax --B (a numerically superior replacement for Gregorian months and quarters).
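Since no library implements this proposed ---DDD syntax, here is a minimal sketch of what a formatter and parser for it might look like. The function names and the resolve-against-a-year behavior are my own illustrative choices, not part of any standard; note that requiring exactly three digits is what keeps ---DDD unambiguously distinct from the two-digit ---DD day of the month syntax.

```python
import re
from datetime import date, timedelta

# Proposed yearless ordinal date syntax: ---DDD (exactly three digits)
YEARLESS_ORDINAL = re.compile(r"^---(\d{3})$")

def format_yearless_ordinal(day_of_year: int) -> str:
    """Format a day of the year (1-366) as a proposed ---DDD yearless ordinal date."""
    if not 1 <= day_of_year <= 366:
        raise ValueError("day of year must be between 1 and 366")
    return f"---{day_of_year:03d}"

def resolve_yearless_ordinal(text: str, year: int) -> date:
    """Resolve a ---DDD yearless ordinal date against a specific year."""
    m = YEARLESS_ORDINAL.match(text)
    if not m:
        raise ValueError(f"not a yearless ordinal date: {text!r}")
    return date(year, 1, 1) + timedelta(days=int(m.group(1)) - 1)

print(format_yearless_ordinal(88))               # ---088 (Piano Day)
print(resolve_yearless_ordinal("---088", 2024))  # 2024-03-28
print(resolve_yearless_ordinal("---088", 2023))  # 2023-03-29
```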
Join the open social web or be relegated to the same fate as AOL, which couldn't even sustain their dominant instant messaging silo. #Twitter, #Pinterest, #Snapchat, #Quora, you're not special enough to survive on your own. And tick-tock #TikTok.
https://www.threads.net/@0xjessel/post/C4zGhshpn9t: “as everyone is trying out fediverse, today is a good reminder that threads support rel=me link verification -- another open web standard we adopted last year. this is useful right now because you can't see fediverse replies to your posts on threads yet. so if you use a mastodon alt account to reply to your threads posts, setting this up proves you are the owner of the mastodon and threads account. see post below on how to set this up: https://www.threads.net/@0xjessel/post/Cvu7-42PVpC: “to set your own up: 1. add your mastodon profile to your threads link in bio. 2. add your threads profile to your mastodon profile 3. save your profile and it should show as verified now” ”
↳ In reply to a comment on issue 71 of GitHub project “AB-public”: Agreed @github.com/cwilso. Given the feedback in the comments, I accept that the marginal benefit of explicitly adding "malvertising" is less than the marginal costs of doing so (document length, jargon/uncommon term).
I’m open to other purely editorial changes that help simplify the Vision and improve its readability, but those should be proposed as separate issues / pull requests.
While an HTML style element for inline CSS needs nothing but simple start and end tags (as of HTML5 and later)
<style> p { color: red } </style>
a more robust style element requires a precise series of overlapping code comments.
Here is the answer, if you just want a code snippet to copy & paste:
<style><!--/*--><![CDATA[*/ p { color: red } /* you may delete this sample style rule */ /*]]><!--*/--></style>
Here is why:
1. Not all HTML processors are CSS processors. While all modern browsers know how to parse CSS in style elements inside HTML, it is still quite reasonable for people to build HTML processors that do not, and many exist. There are plenty of ways to errantly or deliberately misplace markup inside a style element, like in a CSS comment, that such processors will not see, that can break them and cause unexpected and different results in different processors. Strictly speaking any use of > child combinator selector syntax should also be HTML escaped (as &gt;) inside a style element.
Thus it makes your HTML more parseable, by more processors, if you can hide the entirety of the style sheet inside the style element from such processing, including any child combinators. A CDATA section does exactly that:
<style><![CDATA[ p { color: orange } /* CDATA allows a </style> here to not close the element */ body > p { margin: 1em } /* CDATA also allows an unescaped > child combinator */ ]]></style>
2. However CSS syntax does not recognize a CDATA directive (even as of the latest published CSS Syntax Module Level 3¹ or editor's draft² as of this writing). CSS parsers may very well treat a CDATA directive as a syntax error that invalidates the subsequent style rule.
Thus we must hide the CDATA directive, its opening and closing markup, from CSS parsers. CSS code comments /* ... */ can do exactly that:
<style>/*<![CDATA[*/ p { color: orange } /* CDATA allows a </style> here to not close the element */ body > p { margin: 1em } /* CDATA also allows an unescaped > child combinator */ /*]]>*/</style>
3. This is close but still exposes HTML processors that do not process CSS to a minimal bit of content, the CSS comment opener and closer that are outside the CDATA section:
/* */
This recently showed up in a draft of the This Week in The #IndieWeb newsletter³, because portions of it are automatically constructed by parsing the HTML of MediaWiki pages for content, and one of those used a MediaWiki template that included a minimal style element to style the marked up content inserted by the template. A draft of the newsletter was showing raw CSS, extracted as text from the style element by the CSS-unaware parser extracting content. I was able to hide nearly all of it using CSS comments around the CDATA section opener and closer. Except for that little bit of CSS comment noise outside the CDATA section: /* */
Fortunately there is one more tool in our toolbox that we can use. Simple HTML/SGML comments <!-- --> are ignored at the start and end of style sheets⁴ (noted there as CDO-token⁵ and CDC-token⁶), and thus we can use those to hide the last two remaining CSS comment pieces that were leaking out, like this: <!-- /* --> and <!-- */ -->. Note that the portion of the HTML comment directives that are inside CSS comments are ignored by CSS processors, which is why this works for both processors that parse CSS and those that do not.
This last addition produces our answer, with no fewer than three different comment mechanisms (CDATA, CSS, HTML/SGML), overlapping to hide each other from different processors:
<style><!--/*--><![CDATA[*/ p { color: orange } /* CDATA allows a </style> here to not close the element */ body > p { margin: 1em } /* CDATA also allows an unescaped > child combinator */ /*]]><!--*/--></style>
By replacing those informative style rules with a style rule to be deleted, we have recreated the code snippet to copy & paste from the top of the post:
<style><!--/*--><![CDATA[*/ p { color: red } /* you may delete this sample style rule */ /*]]><!--*/--></style>
Q.E.D.
Afterword:
If you’re reading this in a traditional feed reader and see any red or orange text, then your feed reader has a bug (or a few) in its HTML parsing code.
If you View Source on this post’s original permalink or my home page you can see the more robust style element in a real world example, following the IndieWeb Use What You Make⁷ principle.
What I created while remotely participating at #IndieWebCamp Brighton 2024: wiki-gardened day 1’s BarCamp sessions notes pages, and documented my @-mention and @-@ mention autolinking coding improvements I built the Sunday before.
Day 2 of IndieWebCamps is Create Day, where everyone is encouraged to create, make, or build something for their personal website, or the IndieWeb community, or both.
At the start of day 2, everyone is encouraged to pick things to make¹. What to make at an IndieWebCamp² can be anything from setting up your personal website, to writing a blog post, redesigning your styling, building new features, helping other participants, or contributing to shared IndieWeb community resources, whether code or content.
Everyone is encouraged to at least pick something they consider easy, that they can do in less than an hour, then a more bold goal, and then perhaps a stretch goal, something challenging that may require collaboration, asking for help, or breaking into smaller steps.
For my "easy" task, I built on what another remote participant, @gregorlove.com completed the night before. gRegor had archived all the IndieWebCamp Brighton Sessions Etherpads onto the wiki, linked from the Schedule page³. gRegor had noted that he didn’t have time to clean-up the pages, e.g. convert and fix Markdown links.
I went through the 13 Session Notes archives and did the following:
* converted Markdown links to MediaWiki links
* converted indieweb.org (and some services) links to local wiki page links
* fixed (some) typos
I point this out to provide an example of an IndieWeb Create Day project that is:
* incremental on top of someone else’s work
* a community contribution rather than a personal-focused project
* editing and wiki-gardening as valid contributions, not just creating new content
I point this out to illustrate some of the IndieWeb community's recognitions & values in contrast to typical corporate cultures and incentive systems which often only reward:
* new innovations (not incremental improvements)
* solo (or maybe jointly in a small team) inventions, designs, specs, or implementations
* something large, a new service or a big feature, not numerous small edits & fixes
In this regard, the IndieWeb community shares more in common with Wikipedia and similar collaborative communities (despite the #Indie in #IndieWeb), than any corporation.
For my "more bold" goal, I wrote a medium-sized post on my personal website about the auto-linking improvements I made the Sunday before the IndieWebCamp, with examples and brief descriptions of the coding changes & improvements:
* https://tantek.com/2024/070/t1/updated-auto-linking-mention-use-cases
My stretch goal was to write up a more complete auto-linking specification, based on the research I have done into @-mention @-@-mention user practices (on #Mastodon, other #ActivityPub or #fediverse implementations, and even across #socialMedia silos), as well as how many implementations autolink plain text URLs, domains, and paths.
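This is not the cassis.js implementation, but as a rough illustration of the kind of recognition such a specification would describe, here is a minimal sketch that matches @-@ mentions in plain text and links them using the common Mastodon profile URL convention (https://domain/@user), which not all ActivityPub implementations follow:

```python
import re

# Minimal sketch: recognize @user@domain (@-@) mentions in plain text.
# Real-world autolinking (e.g. cassis.js auto_link_re/auto_link) handles
# many more cases: plain @-mentions, URLs, domains, paths, footnotes.
AT_AT_MENTION = re.compile(
    r"@([a-zA-Z0-9_]+)@((?:[a-zA-Z0-9-]+\.)+[a-zA-Z]{2,})"
)

def link_at_at_mentions(text: str) -> str:
    """Replace @user@domain mentions with HTML links, assuming the
    common Mastodon convention of https://domain/@user profile URLs."""
    def to_link(m: re.Match) -> str:
        user, domain = m.group(1), m.group(2)
        return f'<a href="https://{domain}/@{user}">@{user}@{domain}</a>'
    return AT_AT_MENTION.sub(to_link, text)

print(link_at_at_mentions("ping @tantek@w3c.social about this"))
```

A fuller implementation would also decide which profile URL shape each destination actually uses, which is exactly the kind of per-implementation variance a use-case-driven specification needs to enumerate.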
I was one of a few remote participants who, along with ~18 in-person participants (together the overwhelming majority of overall attendees), demonstrated something at the end of IndieWebCamp Brighton 2024 day 2. See what everyone else made & demonstrated on Create Day:
* https://indieweb.org/2024/Brighton/Demos
I also dropped auto-linking of URLs with user:password "userinfo", since they’ve been long abandoned and effectively deprecated because there’s fairly wide agreement that such basic HTTP authentication² was poorly designed and should not be used (and thus should not be linked).
If you’re curious you can take a look at https://tantek.com/cassis.js, which has updated functions:
* auto_link_re() — regular expression to recognize URLs, @-mentions, @-@s, and footnotes to link
* auto_link() — specifically the code to recognize different kinds of @-@s and @-mentions and link them properly to profiles, domains, and paths
This code is only live on my website (testing in production³ as it were) for now, and you’re welcome to copy/paste to experiment with it. I plan to test it more over the coming weeks (or so) and when I feel it is sufficiently well tested, will update it on GitHub⁴ as well.
With this additional auto-linking functionality, I feel I have a fairly complete implementation of how to auto-link various URLs and @-mentions, and plan to write that up at least as a minimal “list of use-cases and how they should work” auto-linking specification.
This (blog post) is my contribution to today’s #IndieWebCamp Brighton⁵ #hackday!
This was originally a project I wanted to complete during IndieWebCamp Nuremberg last October, however I was pre-occupied at the time with fixing other things.⁶
This past Saturday: finished the #InsideTrail Redtail Ridge 30k #trailRace in 6:00:59.
A few notes:
This was my first trail race of 2024, and first in over 6 months, since last year’s Marin Ultra Challenge 50k and Broken Arrow 23k races in June¹. Saw pal Henri after changing into my trail shoes in the Lake Chabot Regional Park parking lot. The storms had scared many away; fewer than 100 showed up to the combined 30k & 50k start.
The muddy rainy adventure began when we veered off the initial paved trail around the lake and onto a rocky uphill stretch. It was mostly an out-and-back course, with a bit of a loop in the middle. On the second half of that loop there was one fork in the trail without race markings. After spending a few minutes peeking down both options, I guessed right. About a half mile later a wooden trail post validated my choice.
I kept a sustainable run/hike pace, with some sliding in the mud, stepping around many ruts and puddles of unknown depths. Slower finish than 5 years ago², yet this time with a negative split, and earned my first DLF award!