draft0 - a shared blog by just some people


Entries tagged 'cat:World Wide Web'

Web Pages About The Web Worth Reading

My thoughts about the state of the web and how to shape it are not the most interesting ones. They're fueled by articles and other pages I've read. I decided to link to a few here. Many more I've read and closed without saving the URL. But maybe I'll extend this list in the future so it becomes a curated reading list on the topic.

Web Browsers and Engines

  • grazer - grazer is like a web browser but trees are shorter
  • Dillo - fast and small graphical web browser
  • Servo - The embeddable, independent, memory-safe, modular, parallel web rendering engine
  • Verso - web browser built on top of Servo
  • The Argonaut Constellation - range of software projects aiming to illustrate the potential for a more private JavaScript-free web
  • NetSurf - multi-platform web browser - small as a mouse, fast as a cheetah

Comment via email
A Document-centric Web

I've been thinking about what exactly it is that is wrong with the WWW and what to do about it. It feels like it has become worse over the years. It's easy to make out individual reasons: walled gardens, ad-centric web sites, bad mass-generated and LLM-generated content at the top of all web searches, the lack of search results from small, personal web sites without an SEO budget or the malicious will to implement all the SEO tricks for the sake of it, egoistic algorithms (that push harmful and hateful content because it makes them feel good). Thinking about how these things became the way they are is sometimes interesting and may help by teaching a lesson. But it doesn't really bring us closer to a solution. I've had a few thoughts about what does, though.

It's not as complicated as I make it sound here. It's just that I needed to think about it to be sure what I even want. And that is for the web to be made of documents with hyperlinks. If it is a text document, a searchable database, a set of images, a list of things, an article with pictures and audio samples, a video file that I can download and play, a tree of links to other documents or something similar to those, then it is something for the world wide web. If it is an application, it's not a document for the web. It's impossible to draw an exact line here. But that's the general rule for me now. It's not that web apps shouldn't exist. People make them and people like using them. So that's fine. But that's not what the web was made for and not what I personally want when browsing the web. It's one thing that photo editors, t-shirt designers, CAD software, action games and all those things exist. You can use them or not. But it has become so normal for web sites to be huge and require JavaScript to even load that sites started being huge and requiring JS even when that isn't necessary to serve their purpose. And that has become normal, too, by now.

What can you do?

I've come to believe that there is no route to turning the web into something that resembles my vision of a good web. It's not just that I don't think I (together with similar-minded people) can obtain the power to force the usage of certain software or to form habits in others. I also don't think that would be a sustainable way with a lasting positive effect on the web. That doesn't mean there aren't ways to bring others to use better alternatives to walled gardens, closed networks, sites with lots of ads and only 10 % content, sites that use dark patterns and search engines of the oligopolistic type. You can promote small projects, share links to useful and interesting sites, talk about how you use the web, make it normal to use a search engine that doesn't only find bloated, commercial, SEO-optimised sites. You can start your own projects, enrich the market with libre and other non-commercial software, submit pages to search engines with a curated index, create good content for alternative search engines to find. In this entry, though, I'd like to talk about what I do to change my own experience of the web for the better, not about making the web better for everybody.

What I do for now

I needed to think about web things a lot before I realised how much of my own web experience I can change by making certain choices and how viable it can be to simply not use certain services. It's not a new idea to me. I don't do Windows, I use alternative front-ends to YouTube, I've used various unconventional operating systems on PCs and phones. But in regard to the web I thought it wasn't that simple. Web sites are how they are, and even nice web sites link to bloated pages with megabytes of unhelpful CSS and megabytes of malicious JavaScript. If I use a browser that is fast and doesn't do JavaScript, my web experience is worse than it is with Firefox (I prefer LibreWolf, btw). Whenever I tried Dillo, NetSurf or something similar, or when I disabled JavaScript in Firefox, I didn't get along with at least some pages. It doesn't appear to be easy to simply accept that some pages don't work and just go on to the next one. There surely are use cases where this is not acceptable. But for the usual uses - everyday browsing and casual research - that shouldn't be a problem, considering I already accept that some pages aren't accessible because I don't want to register with them. In a sense it's my fault that I don't like how the web is today. It's me who keeps visiting web sites that are like that. And I can stop that by making some simple changes: disable JavaScript and use a search engine that prefers non-commercial web sites in its search results.

Really?

Don't get me wrong: it doesn't feel like an improvement to disable JavaScript entirely. There are browser extensions that let you control which page is allowed to serve scripts and what you want to execute. I've tried that, but it's complicated to get it right and frustrating because you always have to configure stuff while browsing, and that never stops. But maybe it is an improvement despite not feeling like one at first. I mean, quitting drugs to which your brain has developed a strong chemical dependency is often also a worse experience than continuing to take them. But it's often worth getting used to not taking them anymore. It also doesn't seem like an improvement to only use search engines with tiny indexes that rarely return the ideal search result you hoped for. Maybe this isn't a viable choice. I think there is no search engine of that type with a large enough index to recommend it for daily use. Those projects just aren't there yet. But that may just be one more reason to use and support them more. And since they do get rid of all the sites that do things I don't like, it could be an improvement to get used to using them.

What does that mean in practice?

Some web sites don't have any images anymore. Some web sites only load ads and recommendations, but not the actual article. Modern closed-platform chat apps don't work, just like most other sites that could be called web apps. Keyboard focus doesn't start at the main input field. Some sites aren't readable because all their styles are missing, burying the content between or under thousands of things that should have started out hidden. On some sites certain links aren't working anymore. Many audio and video players don't work because of attempts to prevent permanent file downloads. And there are pretty much no ads. If you use a browser with a less than very popular rendering engine, add misaligned elements on many to almost all web sites, unreadable elements on some sites with unconventional styling and missing elements if they use unusual positioning options. If you only use a search engine that doesn't do commercial sites or whitelists desirable sites, add to that the feeling of trying out the web in 1995, unless you navigate to specific sites that you know contain what you are looking for. The web feels relatively small with a search engine like that. But even then it's huge. Maybe it's a question of what you expect. If you really don't know what site you are looking for, use a universal search engine. If you have an idea where to find the information you are looking for, start at that site. It might be Wikipedia, Slashdot, Tom's Hardware, an Invidious instance. The web is totally usable if you don't enter everything into the same search engine as a reflex. I thought it was great at a time when we didn't do that. And as for the rest (missing content, non-working sites): those tend to be the sites that I wanted to filter out in the first place. So the endeavour seems to work as intended.

There are also sites that I would like to read that just happen to use a CSS trick or JavaScript that isn't supported by all browsers. Those are sites that don't pay a lot of attention to accessibility design guides but don't have any bad intentions. I've made sites like this before. This article is being posted to sites that fall into this category. Feel free to contact the makers of those sites to let them know that you would appreciate being able to read the pages. I know I should test sites in text browsers before publishing them. I never do. Nowadays I don't even test in any browser other than the main one that I'm using (except when using engine-specific style rules). We came to accept that it just looks the same in all browsers. That is something web designers always wished for. And when Microsoft's browsers improved in this regard it felt like we were there. But it is also true that most users of the web use a browser with one of two or three engines (depending on where you make the cut and call a fork a new engine). I think it does still make sense to test a web site in different browser engines. It doesn't have to look great in a text browser (although that would be best), but maybe in Dillo and NetSurf. If you cover those two, you cover pretty much everybody, and you don't even have to test the site in Firefox or Chromium.

So, what did I change? For work: nothing. Corporate dictates what software I use for what. On my private laptop, I currently use NetSurf as my main web browser. I use LibreWolf for two purposes: to copy individual bookmarks or URLs from open tabs, and to go to sites that don't work in NetSurf when I don't have the time to find an alternative solution. For chat apps I use their "native apps", although I suspect that they are all just the web app shipped with their own browser. For social media I'm trying out different Mastodon/fediverse clients for Linux, which I wanted to do for a while anyway. For search I'm currently using various Searx/SearXNG instances. (I know, not that alternative. I don't want to ruin everything at once for myself.) More often than before I deliberately navigate to a specific site instead of using a search engine and ending up on a site that I already knew.

When looking for something on eBay, I don't find as many interesting things as before because the pictures are missing, and I don't needlessly buy things as much now. When searching for some random information or doing some curiosity research, I close many search results directly or very soon after opening them because they aren't displayed properly. So far that doesn't bother me much. I'm already used to having to close tabs again right away because of cookie banners and other popups that make it impossible to get to the content without finishing a maze and reading a bunch of things for at least a minute. Now I open and close more search results, but get my answers anyway. On video platforms, I open the video in an external player. It's nicer to have the player of my choice with my preferred UI and my custom configuration anyway. Some sites simultaneously look worse and better. I may have to scroll a bit to get to the content and it is obvious that the page wasn't designed to look exactly like that. But at least I don't get any graphical animations, lots of side-loaded unrelated content or ads.

For shopping my options are very narrow. I already stopped using Amazon for other reasons a while ago. It's really not as much of a hassle as people seem to think. But many more shops than I expected rely on JavaScript for purchasing or logging in nowadays. (Probably at least for a CAPTCHA.) It's pretty much all of them, actually. According to my rule from earlier, those are apps though, and there would be better ways to implement them. So I don't have a solution other than switching back to LibreWolf or an app on my phone when I need to buy something online. So far I didn't actually need anything, though. For online banking, sending a message to my insurance, using the Wayback Machine and, I predict, much more, it is the same. For some things I will try to find alternatives. For others I will realise I don't have to. For some sites that I want to consume for entertainment it's disappointing when they don't work. There are so many alternatives for entertainment in all categories. I have so many ebooks, web books, audio books, lecture recordings, podcasts, … that I would like to consume when I get the time and energy to, that I really don't need whatever interesting thing I've just found or somebody has just recommended. But now that I know it's there, I don't want to miss out. So far, this has been the largest part of my negative experience after switching. But I haven't been at it for long. I'm curious to see how this will go for me.

(tba:links to previous entries, external links)

Comment via email
The web is too slow for most of what it's providing.

I have had a certain fondness for the Gemini space ever since its initiation. I like the main idea and am continually glad to see it growing in size and usefulness. I imagine that this is similar to what the web looked like at some point. Although I'm not about to make a Gemini site, I sometimes end up browsing others' sites. And it's a better experience every time. New search engines and other services, new blogs, geeky, topical sites and collections. It's such a great space for consuming interesting information when you're not exclusively looking for one specific piece of information.

Recently I was reading some personal Gemini sites and regretting not having more time before a long duration of sleep becomes inevitable, when I suddenly noticed that I wasn't connected to the WiFi I thought I was getting my internet through. I was getting my internet through an EDGE connection with one bar on my Unihertz. That is generally not enough to do anything on the web nowadays. I wouldn't even deem it enough for today's emails. Mastodon works with that kind of connection, but you have to wait a very long time for thumbnails. Twitter didn't work at that speed when it still existed. But when browsing Gemini capsules it hardly makes a difference. No different from what I expected. But having had this experience I'm even more confident now in saying that most web sites are shit. Even a page from this site, with relatively little data to load besides the actual content, often consists of more than 50 % things that are not the blog article (CSS, menu, markup).
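If you want to put a rough number on that for a page of your own, a minimal Python sketch like the one below can help. The URL is a placeholder, and "visible text" here simply means everything outside <script> and <style>, which is only a crude stand-in for "the article".

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class TextExtractor(HTMLParser):
        """Collects text outside of <script> and <style> elements."""
        def __init__(self):
            super().__init__()
            self.skip = 0       # depth inside <script>/<style>
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self.skip += 1

        def handle_endtag(self, tag):
            if tag in ("script", "style") and self.skip:
                self.skip -= 1

        def handle_data(self, data):
            if not self.skip:
                self.chunks.append(data)

    url = "https://example.org/some-article"   # placeholder URL
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    visible = " ".join("".join(parser.chunks).split())

    total = len(html.encode("utf-8"))
    content = len(visible.encode("utf-8"))
    print(f"{total} bytes of HTML, {content} bytes of visible text "
          f"({100 * content / total:.0f} %)")

It ignores external CSS, scripts and images completely, so the real ratio of payload to article is usually even worse than what it prints.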

I've often thought about different ways the Web could be used that provide a better experience by making sure that bloated, malicious and faux sites aren't even linked to, without impairing the usability too much. People have had so many ideas, some of them more viable than others. But neither those I've read or heard about nor those that I could come up with fundamentally solve the problem (if you want to see it as one). The web technologies are so flexible and powerful that they might as well be seen as infinitely capable of any function (and resource usage) imaginable, restricted only by the client and server hardware. And reality has sufficiently proven that people will create web sites that make use of those possibilities, simplified by the accumulation of abstraction layers built on the core web technologies. Bad web sites will be built and linked to. Avoiding them will always be a hassle and can probably only be reliable if huge cutbacks are made by excluding the majority of web sites completely. An improved web experience in the form I imagine would likely be easiest to achieve by starting from the ground up and creating a new web, possibly by restricting it to a set of older technologies. Often suggested are CSS2, (X)HTML4 and a very limited set of JS instructions, although every kind of set has been suggested by now: various versions of the existing technologies, no JavaScript, a new set of HTML tags, etc. Trying to find a way to technologically realise such a new, independent web might be a moot task.

A web built exclusively on older versions of the technologies that make up the World Wide Web seems desirable. It's what I and many others are used to already. It's very easy to make sure existing web sites work as expected on that new web: if they worked in Firefox 2, they're good. But taking this idea of a new web further, one might want to modify the feature set to prevent the same development from happening to parts of the new web that happened to parts of the current web. And that's exactly the path of thought that Solderpunk seems to have followed and that might have led them to the feature set of Gemini. (I'm speculating here.) I don't agree with every single design decision. But almost all of them have a very good reason that addresses something that went wrong with the Web.

I've come to believe that this is the way to go if you want to escape the bloated web.

The Bad Web

A lot has been written about the declining usability of the World Wide Web due to web sites not respecting what visitors really want or need. So I'll just summarise here before I try to get to the point.

Megabytes of CSS and JS to display 15 kilobytes of content and another few megabytes of ads and other bloat. It has become completely normal to have a hundred and more tracking cookies installed for visiting a single web page. Many popular sites can't be read by at least some people because the distracting ads have become too much. These are the first major problems that come to my mind. Depending on who you ask, the problematic development started in the last couple of years, about a decade ago, in the mid-2000s or even in the 1990s. But most people old enough to have experienced the difference seem to agree that the web experience was a better one in 2005 than it is today. Back then RSS was integrated by many popular sites. You could use it to read Twitter and subscribe to YouTube channels, for example, and sites that still offer it in the background used to place links to feeds visibly, not hidden in the source code for browser extensions to discover. Web browsers themselves could not only display RSS feeds but placed an icon next to the address bar when a feed for the currently viewed page was available. RSS is often used to show how the web was more open in general. Even commercial web sites were created with a more open approach. A site where you had to register before you could view its content was an exception, for which privacy was the reason, not monetary expectation or greed. This is the time to which most people seem to want to return. When I say most people, I mean most people whose thoughts on the open web I read, which is those who post to the open web and are interested in such things to a degree that they want to write about them. So what I probably mean is "most people who are dissatisfied with the current state of the web". It's possible that most people, or most internet users, love the way things are going now and hate the ideas that advocates of the open web have, whether that is for or against their own good.
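That address bar icon, by the way, relied on feed autodiscovery: a <link rel="alternate"> tag in the page's head pointing at the RSS or Atom feed. A small Python sketch of the same check (the URL is a placeholder):

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    class FeedFinder(HTMLParser):
        """Collects feed autodiscovery links from a page."""
        def __init__(self, base):
            super().__init__()
            self.base = base
            self.feeds = []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if (tag == "link"
                    and (a.get("rel") or "").lower() == "alternate"
                    and (a.get("type") or "").lower() in FEED_TYPES
                    and a.get("href")):
                self.feeds.append(urljoin(self.base, a["href"]))

    url = "https://example.org/"               # placeholder URL
    finder = FeedFinder(url)
    finder.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    print("\n".join(finder.feeds) or "no feeds advertised")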

Sometimes I boost well-thought-out or new takes on the subject, well-formulated demands or promotions of software solutions on the fediverse. And I often think about this myself, because the web has brought me so many nice things and I want it to be a positive thing in society as well, which, overall, it doesn't seem to be anymore. What does the web need to make it better again?

First of all, the open web isn't gone, nor has it shrunk in size. There are way more personal blogs, open networks and non-commercial projects out there than 20 years ago. Even new web forums open all the time. But it's less visible below the very, very loud commercial web. Maybe the greedy web is a good name for what I mean. Not every commercial web site is an example of how the web is developing in a bad direction, in my view. I want to be able to get information about a business from the business owner themselves when I'm interested in their services, for example.

A search engine that returns links to non-commercial sites first, unless you really need information that can only be found on a page of a greedy site. Let's just not talk about the many problems (not even just challenges) that such a search engine would introduce if it is to be useful in practice.

A browser that only links to non-bloated/non-tracking/non-greedy/open web sites or warns when a link leads to a less nice site. Again, I don't have the time right now to list all the problems there would be if somebody attempted to implement this. Maybe I'll write another entry about my deeper thoughts on the technological solutions that I mention here. But these thoughts don't contain any real solutions. So I don't know.

Create a literal small web that only uses resources from, and only links to, web sites that follow the same standard (e.g. only (X)HTML4, maybe only CSS2, possibly restrictions on JS usage). That is in principle similar to building a whole new network, as Gemini and Gopher are doing. (I know Gopher isn't new, but I reckon the majority of its sites are.) I forget what other protocols with similar aims are out there; as far as I know, none that are widely used. There are initiatives to restrict the WWW to a smaller or older set of standards. Those probably influence site builders (mainly of personal web sites), but won't change the web as a whole. And so, while browsing, you'll eventually come across a site that doesn't restrict itself in what it's linking to, or you'll catch yourself linking to a bloated site because it's important to link to the original source of something.

JavaScript needs to be optional again. I've recently come to think that this is actually the one major goal among the technological changes that the current web would need to undergo in order to become user-friendly and more usable again. In a time when you couldn't 100 % expect that visitors were using a client that understood JavaScript and had it enabled, web developers didn't have much of a choice and built in fallbacks so that a site was still usable without JavaScript. But the number of visiting clients without very good JavaScript support became so small that fallbacks started to look optional, and in reality it became not only optional but even rare for them to be included. JavaScript really took over the web. I could say so much more about this but don't have much time left this morning. Not only are there sites that are empty unless JS loads the entire HTML; such a thing isn't even special anymore.
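A crude way to check whether a particular site falls into that last category is to fetch the raw HTML, which never runs any scripts, and see whether the content is in there at all. A minimal Python sketch, with a placeholder URL and phrase:

    from urllib.request import urlopen

    # Both values are placeholders: pick a page and a phrase you know
    # belongs to its article text.
    url = "https://example.org/some-article"
    phrase = "a sentence you expect in the article"

    html = urlopen(url).read().decode("utf-8", errors="replace")
    if phrase.lower() in html.lower():
        print("content is present without JavaScript")
    else:
        print("content only appears once JavaScript has run")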

If you are creating a new web browser, please include a switch in the GUI that allows enabling/disabling JavaScript permanently (until deliberately switched on again), either entirely or for the currently viewed site. Or maybe make it off by default.

tbd:this entry needs some links;write follow-up entries

Comment via email
Alternative Web Browser Engines

I don't think I have to spell out the problem with the current shape of the web browser landscape in detail. Almost every HTTP client uses one of the now three big engines: WebKit, Blink and Gecko. Blink, as the big bad Google one, is definitely one that nobody should consciously choose to use. Not everybody trusts WebKit much more, because it is developed by another powerful global player that inadvertently collects more data in one hand than can be considered healthy, Apple. And Gecko, the one developed by Mozilla, has been criticised for being pushed in a direction that is less free and user-friendly than is expected from Mozilla. Since Mozilla has received large sums in funding from Google for a while, it can be argued that any recent fork of the engine has a history of being influenced by Google. All the other large engines are no longer developed and thus not seen as a possible choice for the future. But there are some options to be discussed for users who want to avoid using a browser that relies on one of the big engines. And there are signs that the lack of competition among browser engines will lessen in the coming decade.

I like that the recent financial development in the SerenityOS/Ladybird project has prompted discussions about alternative browser engines and has shone some light on upcoming new browser engines written from the ground up. The interest in a new kid on the block has been growing over the last couple of years. So it should not be surprising that several projects are underway that aim to create alternatives to the current big three browser engines. But in my eyes most of them are still not well known enough. Not all are equally useful. So I've decided to mention some options that could replace a mainstream browser at least in some use cases.

Using Old Browsers

Yes, I'm covering the less practical options, too. Using an old browser with security issues that won't be fixed not only might add security holes to the system it is used on, it also doesn't do anything to tackle the problem. The development of browser engines that are used in newer releases will not be affected by some people using them less. Choosing an older browser merely sends a signal and possibly changes statistics so that it looks like you, too, have just forgotten to update for a long time. But there is a range of different lightweight browsers with engines that have no trouble with HTML4 and CSS2 and also offer good JavaScript support. KHTML (Konqueror) becomes an option again, Internet Explorer with Trident or EdgeHTML could get another chance, and an old Opera with Presto could continue to shine. But you might need to maintain an environment of outdated libraries to keep your choice working over a long time.

Flow

The Flow browser, with its own HTML engine, is developed by ekioh, a company with experience in developing browsers for various embedded devices. A preview of a Raspberry Pi version is available as a free product, but browsers for all major operating systems are planned. Because it uses an existing JavaScript engine, the JS support is better than with other newly written browser engines. But it also doesn't add as much competition to the market in this regard. I have not tested Flow. As a product from a commercial business it hasn't looked interesting enough to me yet, even though it is reportedly relatively mature.

Haphaestus

The Haphaestus TV Browser is a project by Adrian Cochrane: a web browser specifically designed to be easily navigated with few buttons (like on a TV remote control). It is a very interesting project because it is made up of several individual parts (CSS engine, font rendering, box layout engine, …) that are all written from the ground up in Haskell. A JavaScript engine is not part of the project. The web is nowadays more pleasant to browse without JS anyway. Adrian also recently started a free year-long course that encourages others to write their own HTML rendering engine (surely with the intention of collecting experience for future paid programming courses).

Goanna

Of the browsers that use the Goanna engine, I see Pale Moon recommended most often. Goanna is a fork of Gecko that, by now, differs in features from the current Gecko engine. It looks like Goanna may be the way to keep using a relatively old engine that supports all major web standards very well, with an actively developed browser that runs reliably on the OS of your choice. It may not perform as well as current Gecko browsers like LibreWolf, but there's supposedly a smaller chance that its developers have been influenced by payments from Google.

Dillo

Small, really light-weight, very simple and currently with no intention of paying any attention to scripts. Older versions of Dillo have been ported to many other systems. The current version 3 is only developed for the now big OSs (Linux, BSD, OS X). But it is very light-weight and snappy compared to mainstream browsers. CSS support is still lacking in the eyes of users who are used to every positioning property working. (Float support is also still missing.)

LibWeb, LibJS

The Ladybird web browser is becoming better known now that a company has been formed around it that has been promised major funding for the next few years. The project started as "the web browser of SerenityOS" but has since gained independent support and in turn raised hope that a novel web browser with a new engine will be established on the web browser market in the near future. The LibWeb and LibJS libraries were started to build an independent engine behind Ladybird. An application that will be seen as a usable alternative to Firefox in daily use, with no need to fall back on another browser, is still far away though.

NetSurf

Another small web browser with good HTML, CSS and JS support is NetSurf. LibDOM, LibCSS and Duktape (JavaScript) are combined to create a simple and portable browser. NetSurf can be found on Atari, Haiku, Linux, RISC OS and other systems. I've also come across it a few times on my journey through alternative operating systems (about which I still have to write in this blog). It's the portable, small browser.

Servo

This is one that I like a lot personally. A novel web engine written in Rust. There is currently no full-fledged browser application that uses it, but there is a GUI demo that allows testing the engine with any URL. In my experience, web sites with elaborate designs that make use of different layout rules and a lot of JavaScript tend to render better with Servo than with a current Ladybird (LibWeb, LibJS). Support for CSS3 rules is also better than with the small browsers NetSurf and Dillo. The potential is more readily visible with this one compared to other new projects. Maybe the fact that browsers based on Servo only exist for specific devices is responsible for it being less known than Ladybird. No wonder Mozilla sucked up parts of Servo during their project to replace parts of Gecko with re-implementations in Rust. I guess eventually there will be no big difference between the two engines.

Text-based web browsers are probably not an option for most users. Most web sites are designed for a graphical layout only, and even when a page is structured well and can be read with a screen reader, sighted people usually prefer a GUI to a text-based interface. But if you want to or a use case requires it, text browsers like links2 or lynx are also worth checking out, of course. Another thing that I'd like to mention here, because they could be considered alternative web browsers, are auditory browsers. But this entry is meant to be about browser engines, not browsers. Otherwise there would be many more projects that I should mention (browsers with a small user base, forks of Firefox, browsers for quick keyboard use).

Edit: Here's an interesting chart about the lives of web browser engines since the first one in 1990.

Comment via email

I remember when Firefox got the new feature to re-open the last closed tab. That was a real useful invention. Do you know what we did before that was possible? … Yes, it was exactly how you imagine it.

I also remember when web browsers didn't have tabs. I'm still not sure whether that was such a good invention after all. Why is there no popular alternative? Opera folded a long time ago with its thumbnail buttons. There are extensions to order, manage and group tabs differently. I think I'd like to try vertical tabs in a bar again for a while. Or just separate windows. Let the window manager manage them. There must be a good reason why every attempt to do that is quickly abandoned. But I'd still like to try it. There are browsers that don't have tabs. But those don't have an engine that I'd like to use for daily web browsing.

It doesn't look like there's any interesting setting to change the tab display in Firefox.

Comment via email
The Old Fediverse

Yesterday I thought to myself: Why were there so few strong efforts earlier (before 2016? 2015?) to develop a Fediverse in the form of social networks?

The frequently used comparisons to the telephone and e-mail networks aren't the only examples showing that it makes sense and works. In the history of the internet, almost all protocols, and practically all widely deployed ones, have been decentralised almost from the beginning.

I think the reason is that the web already existed and people who built services tried to reinvent as little as possible. As the sayings go, reinventing the wheel wastes resources and produces new wheel variants with new drawbacks that tend to be over-engineered. And it's not only Unix fans who think that technologies should be built simply and modularly. The web is already decentralised. And for a long time there have been enough technologies with which the functions of the big, popular social networks can be reproduced: weblogs with public or mail-based comments, pingbacks, commented re-shares, … And to present the content the way people usually view it in an app or on a web page, nothing more than a web server, HTML and CSS is needed, even today.

That a dedicated protocol is needed to build platforms that are accepted by the masses and adopted quickly, even though Facebook, Twitter and co. didn't need one back in their day, is an insight that the developer community first had to broadly accept.

With the World Wide Web, the internet and its use spread very widely. But that is only partly due to the possibilities within and the capabilities of the web. Unconventional, text-based forms of communication, including those with a large imbalance between the numbers of senders and receivers, were the internet's greatest strength even before that. That's why I think it's fair to call mass communication platforms from pre-web times that are in parts quite comparable a Fediverse as well. (At first that rubbed me the wrong way a little.) It was a good thought starter.

Comment via email
Community Based Discovery of Interesting Content on the Small Web
Warning: This entry uses a LOT of words to come to a banal conclusion. Skip to the last paragraph for a tl;dr.

I was thinking about possibilities for how interesting web sites could be discovered without relying on general-purpose search engines, web directories or unstructured recommendations in blog posts or threads in bulletin boards. Search engines will likely always be prone to SEO and therefore to commercial content popping up among search results when that is not what the searcher is looking for. Web directories can be very nice and helpful if they are maintained well. But the criteria by which links are selected, and the categories, don't always fit the needs of the visitor who is in search of new interesting content. Personal recommendations are worth a lot and I like it when people care enough about a web site or blog post to share a link in chats or web forums. But they don't satisfy the use case that I have in mind. What I mean is the use case of wanting to create an aggregated feed of content (blog posts, other text posts, videos, audio podcasts, etc.) without learning about every single source of interesting content individually first. If you've been a member of a large social media platform you probably know how helpful it can be, especially to somebody who is new, to be able to quickly follow sources that produce similar content, making it worthwhile to stay, even though you'll want to make a finer selection of what goes into your feed over time. On Twitter I used the retweets of a few accounts with similar interests to build a very interesting feed quickly, and then followed and removed single accounts over time to build perfectly individualised lists for myself. On Reddit, you can join a few really big subreddits and have some interesting stuff instantly, then over time find smaller and even more interesting subreddits that weren't among the search results of your favourite search terms.

With weblogs and the small web though, you have to know or find some web sites first to get just a little bit of interesting stuff, then click through a lot of blog rolls and link lists to find some more. It can be a very interesting journey and pastime. Maybe it fits the mentality of bloggers who don't publish on large platforms. But not everybody sees this as a good thing. And looking at it practically, somebody who wants to switch from consuming a single large social network to reading many small independent content producers does not have it as easy as somebody switching from one large social network to another.

Lists on Twitter and Shared Circles on Google+ are the perfect intermediate between picking out yourself what you want in your feed and following what everybody else follows. You do pick yourself, based on a list of interesting sources a friend or like-minded person has shared with you, but you don't have to pick every source individually. There will likely be content among the possibly hundreds of authors you've started following with one click that you don't like to read. Then it's up to you to put them into a different circle/on a different list or to unfollow them completely. But to start out with a good set of interesting bloggers, you didn't have to search through thousands of web sites yourself first.

I don't see a reason why this isn't done more often with weblogs and other interesting web sites. I've shared and received OPML files for this purpose before. But for some reason people don't usually post their collection of great RSS and Atom feeds on a topic publicly. I'd like to encourage you to do so. If my feed collection weren't embarrassingly outdated, I'd make a start. But the reason why I started thinking about this topic is precisely that I don't have a well-maintained list of feeds on any topic. I just haven't cared about them enough for years. I will get it in order and post it here at some point, though.
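OPML is a good fit for this precisely because it is so simple: one small XML file, and a few lines of code are enough to turn a shared collection into a list of feeds to subscribe to. A minimal Python sketch (the filename is a placeholder):

    import xml.etree.ElementTree as ET

    tree = ET.parse("shared-feeds.opml")       # placeholder filename
    for outline in tree.iter("outline"):
        feed = outline.get("xmlUrl")
        if feed:
            title = outline.get("title") or outline.get("text") or feed
            print(f"{title} -> {feed}")

Most feed readers can import such a file directly, of course; the point is just how little structure is needed to share a whole reading list.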

Comment via email
Mastodon