Blog: Online Communities

Most of these posts were originally posted somewhere else and link to the originals. While this blog is not set up for comments, the original locations generally are, and I welcome comments there. Sorry for the inconvenience.

Shabbat Shuva (d'var torah)

The Shabbat between Rosh Hashana and Yom Kippur is called Shabbat Shuva, the Shabbat of returning, and it's customary for the d'var torah or sermon to focus on the themes of the season. This is the d'var torah I gave in our minyan yesterday.

--

Early in the pandemic, when grocery-store shelves were sometimes empty, I started growing a few things to see if I could produce at least a little of my own food. I've always had kind of a brown thumb, but I'd managed to not kill a basil plant that had come in a farm-share box the previous year, so I was game to try.

I didn't grow a lot -- more herbs than vegetables -- but the cherry tomatoes I planted were extremely bountiful. Encouraged by that success, I planted more. Last year I found myself fighting unknown critters -- I got a few of the tomatoes but I found more that were half-eaten on the ground. Netting didn't help. Tabasco sauce didn't help. So this year I tried a different variety and a different location.

I got to keep three tomatoes. On the day I was going to harvest six more -- they'd been almost ready the previous day -- I found that something had eaten all the tomatoes and most of the leaves besides. The plant looked dead. I left the dejected remains in the pot for the end-of-season cleanup and stopped watering it.

A couple weeks ago I was pruning some other plants and cut away all the dead stems on that plant while I was at it. Then an amazing thing happened: it put out new shoots, then new leaves, and this week, three small tomatoes. That plant stood up to attack followed by neglect and came back strong despite it all.

--

During the high holy days we focus a lot on our own actions and the things we have done wrong. We focus on making amends for our mistakes, on doing teshuva and turning in a better direction for the coming year. We try to make things right with the people we've hurt. These are all critical things to focus on, and I don't have much to add that hasn't been said hundreds of times before.

Instead, today I want to talk about being on the other side -- about being the one who has been hurt. We know what to do when those who hurt us do teshuva, but what about when they don't? Teshuva is hard, and we know it won't always come.

Read more…

Section 230

The Supreme Court will soon hear a case that -- according to most articles I've read -- could upend "Section 230", the law that protects Internet platforms from consequences of user-contributed content. For example, if you post something on Facebook and there's some legal problem with it, that falls on you, as the author, and not on Facebook, which merely hosted it. This law was written in the days of CompuServe and AOL, when message boards and the like were the dominant forms of Internet discourse. While there's a significant difference between these platforms and the phone company -- that is, platforms can alter or delete content -- this still feels like basically the "common carrier" argument. This makes sense to me: you're responsible for your words; the place where you happened to post them in public isn't.

Osewalrus has written a lot about Section 230 over the years -- he explains this stuff better and way more authoritatively than I do. (Errors are mine, credit is his, opinions are mine.)

When platforms moderate content things get more complicated, and I'm seeing a lot of framing of the current case that's rooted in this difference. From what I understand, that aspect is irrelevant, and unless the Supreme Court is going to be an activist court that legislates, hosting user-contributed content shouldn't be in danger. But we live in the highly-polarized US of 2023 with politically-motivated judges, so this isn't at all a safe bet.

The reason none of that should matter is that the case the court is hearing, Gonzalez v. Google, isn't about content per se. It's about the recommendation algorithm, Google's choice to promote objectionable content. This is not passive hosting. That should matter.

The key part of Section 230 says:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. (47 U.S.C. § 230(c)(1)).

The court can rule against Google without affecting this clause at all. The decision shouldn't be about whether Google is the "publisher" or "speaker". Rather, in this case Google is the advertiser, and Section 230 doesn't appear to cover promotion at all.

I'm not a lawyer, and I'm not especially knowledgeable about Section 230. I'm a regular person on the Internet with concerns about the proper placement of accountability. Google, Twitter, Facebook, and others choose to promote user-contributed content, while platforms like Dreamwidth, Mastodon, and many forums merely present content in the order in which it arrives. That should matter. Will it? No idea.

Moderation is orthogonal. Platform owners should be able to remove content they do not want to host, just like the owner of a physical bulletin board can. In a just world, they would share culpability only if objectionable content was brought to their attention and they did not act. At that point they've said it's ok, as opposed to saying nothing at all because nobody can read everything on a platform of even moderate size. This is how I understand the "safe harbor" provision of the Digital Millennium Copyright Act to work, and the same principle should apply. In a just world, as I said, which isn't the world we live in. (I, or rather my job title, am a registered agent for DMCA claims, and I have to respond to claims I receive.)

I really hope that the court, even a US court in 2023, focuses on the key points and doesn't use this case to muck with things not related to the case at hand.

Re: deja vu, all over again

New_public published a post, Déjà Vu, All Over Again, about the evolution of the web and the early days when people made stuff for fun instead of companies making stuff for brand impact and algorithms, and it struck a chord. The author invited comments, so here's what I posted:

--

I've been feeling that deja vu too. I was on Usenet before the great renaming, and much later when I joined LiveJournal and this "blogging" thing (pulled in by friends), I remember thinking that a blog or LJ was basically alt.fan.me and would people really care about what I, a nobody, wrote? I expected to read and be read by about a dozen people who were already friends, but things have a way of spreading. And I knew that from Usenet, where I built friendships with people I've never met and sometimes didn't know "real" names for, and it was all very cool and friendly and broadening.

The net, when freed from algorithms and branding and bubbles so that ordinary people can interact with other ordinary people without barriers, is a remarkable way to learn about people and places and subcultures very different from my own. I've formed friendships with people halfway around the world walking very different paths in life from mine. There's a whole big world out there, and the last thing I want is to be trapped in a bubble of people just like me, or as close as Twitter et al think they can come to that.

The revival -- I hope it's a revival and not just a blip on the way to the next corporate thing -- of decentralized, direct, person-to-person online interaction excites me. Coincidentally, I've been working my way through my older posts on LiveJournal and then Dreamwidth, pulling together stuff on my own domain now that I have one, and I'm realizing how much more I used to write and share. I don't know how much of the change in my behavior has been due to people moving from blogs to social media and the vibe changing, how much has been due to modern social censors who retcon what's acceptable and what's offensive, and how much is me being lazier or more distracted or busier or whatever. But, facing the stark contrast between "online me 15 years ago" and "today", I'm motivated to try to get more of the old, personal, human writing back, somehow.

Mastodon: thoughts after a few weeks

A few weeks ago I created an account on Mastodon and have been trying it out as an alternative to Twitter (and I suppose Facebook, which I don't use). I'm not leaving Dreamwidth, my friends here, and DW's support for longer-form posts; DW and "social platforms" are good at different things.

As I mentioned in a previous post, the part of the Mastodon community (-ies) that I've encountered so far feels to me like the earlier days of the Internet. It feels more friendly, helpful, and supportive than even pre-Musk Twitter (driven by algorithms and ad sales). It kind of reminds me of some of the more social Usenet newsgroups of yore, like the Rialto and alt.callahans.

It's different, and different takes time to get used to, and different is sometimes better and sometimes worse. And getting set up isn't going to be as easy as going to Twitter or Facebook and clicking "sign up".

barriers to entry

I actually looked at Mastodon back in the spring, when the Twitter thing was starting to happen, but I bounced. You see, Mastodon isn't a service, like Twitter or Facebook is; it's a federated platform. The best analogy I've seen to setting yourself up on Mastodon is getting an email address. You can get email services from lots of places and they all inter-operate. Choose Gmail or outlook.com or your ISP's bundled account or your own server or anything else; no matter what you choose, you'll be able to send and receive email. Email providers aren't all the same and you might find your choices have consequences -- Gmail silently nukes certain messages and you'll never know, and aol.com is oft seen as a bad neighborhood. You choose an email provider, follow its rules, and deal with its issues -- and if you decide to move later, with some disruption you can. Your choice matters some, but it's not permanent.

Mastodon servers are like that. There are hundreds, maybe thousands, of Mastodon servers out there, and there are lists of recommended servers that you can find with a search for something like "find mastodon server", and from the outside it can be overwhelming. Back in the spring I saw that I had to Make Decisions first, and I didn't know enough to make decisions, and I hadn't seen the email analogy, and I was only casually looking and wasn't invested...and I walked away.

All of that is true today, too, except that more of my friends were moving there so I had a reason to dig a little deeper.

I found one of those pages of "50 servers you might consider" or some such, many of which are aligned to particular interests like Linux or open-source software or furries or art, and started browsing things I wouldn't mind being affiliated with. (Your Mastodon server, like your email provider, shows up in your "address", so there's an appearance aspect to it.) Servers can have their own moderation rules and terms of service and those are things I care about, so I read those pages on short-list candidates, eliminating some by what I found there. I identified a server that aligned well with my interests, my views on moderation, and the expected local conversation (more about that in a bit), and applied for an account.

Yeah, "applied" in this case. Some servers are totally open -- anyone can create an account. Some were but then Twitter started to implode and servers that had had 5000 people were seeing tens of thousands of new accounts and buckling under the load, so they went to a wait-list model. The server I joined asked for a short "why do you want to join this server?" message.

There are some huge, general-purpose, open servers. I recommend against trying to join them now. Across the network of all public Mastodon servers, there were something like a million new accounts in the first week of the Musk era. These servers aren't usually run by well-funded megacorps but mostly by volunteers trying to keep up with demand.

the fediverse

Mastodon isn't a single site or a single thing. It's decentralized and distributed. "Mastodon" is the name of the software. Strictly speaking, when you join a Mastodon server you are joining a server that is part of "the fediverse" -- "fed" like in "federated". People talk about being "on Mastodon", and what they mean is "on one of these servers", and sometimes a well-meaning person tries to correct your terminology, and I want to give y'all a heads-up.

The fediverse has other "things" besides Mastodon. There's a whole big set of open-source projects for sharing different kinds of things across a network, with an interface called ActivityPub at the center of it. I don't know very much about that stuff yet.

So, technically: there is the fediverse, and Mastodon servers are part of it, and so are other things. But there's no mastodon.com that runs it all, like twitter.com or facebook.com. Remember: like email, not like corporate social media.

(There is a mastodon.com. Of course there is; every URL you can imagine that consists of a single English word is claimed by someone. This one is a forestry site.)

sounds like a lot of work; how's this better than Twitter?

Still with me?

On the surface Mastodon looks kind of like Twitter, federation aside. You can see short posts from other people in a feed, and you can interact with them (liking them, replying to them, etc). There's a big difference, though, and I think it's an important difference that helps with constructive discourse instead of amplifying the loudest people.

Twitter creates (and Google+, after its early days, created) a "feed" for you, curated by an algorithm. I don't know how G+'s worked; on Twitter, a post (tweet) is more likely to show up in your feed if it's posted by someone with a lot of reach (the reach get reacher), or if it has a lot of likes (encouraging socks, bots, and echo chambers), or if it's somehow connected to someone you follow. That last seems to be the least important, anecdotally. I almost never use my Twitter feed because it's full of stuff I don't care about. In Musk's Twitter, rumor has it that paid members also get substantial priority.

Mastodon gives you multiple feeds (I'll get back to that), and the "algorithm" is "reverse chronological", like it is here on DW and probably on every blogging site you've ever used. You see stuff as it was posted, not something yanked out of its context from three days ago and pushed at you now, and not yanked out of its context of all the other conversation happening around it. Nothing has priority; you get what you asked for, in order. I've found the things I read and interact with here on DW to be much more thoughtful, nuanced, and civil than what I see on Twitter (granted post length is a factor too), and so far that's what I'm seeing on Mastodon too. (BTW, posts on Mastodon are by default 500 characters, larger than Twitter, and it's a server setting. I've seen one server that lets you use 5000 characters so long as you put most of it behind a cut tag.)

Mastodon also gives you multiple feed options, so you can choose the size of your fire hose. You can see just posts from (or boosted by) the people you follow, or just posts from your local server (regardless of who you follow), or a "federated" view that reaches out to other servers and does, um, something based on the people you follow and their connections. I haven't explored that one much yet. It's big. But it's still reverse chronological, no prioritization, no buying or shouting your way into top position.
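A reverse-chronological feed really is about the simplest "algorithm" there is. Here's a toy sketch in Python -- not Mastodon's actual code; the Post model and function names are my own invention -- just to make the contrast with engagement-ranked feeds concrete:

```python
from dataclasses import dataclass
from datetime import datetime

# Toy model of reverse-chronological timelines. No ranking by reach,
# likes, or payment -- the only ordering signal is the timestamp.

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime

def home_feed(posts, following):
    """Posts from accounts you follow, newest first."""
    visible = [p for p in posts if p.author in following]
    return sorted(visible, key=lambda p: p.posted_at, reverse=True)

def local_feed(posts):
    """The local-server view: every post on the server, newest first."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```

The point isn't the code, which is trivial; the point is that there's nothing hidden in it. You get what you asked for, in order.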

I think that local feed will end up being pretty important. If you choose a server that aligns with some of your interests, then that "local" view can connect you with people who share those interests. Because people are usually multi-faceted and the instance is a home, not a topic restriction, you'll see a variety of content from the people there. It's not like Usenet newsgroups or Codidact communities where you can only talk about this thing here and not that thing, but there's a rough sort based on some shared interest, if you want to use that. (Of course, if you want to create multiple accounts on multiple servers, for example to separate personal and professional content, you can do that too.)

I'm being an armchair sociologist here with too few observations and no data, but I think this "local community of multi-faceted people" aspect will act somewhat like physical neighborhoods (back when we socialized with our neighbors, but maybe your barony or congregation is a model too) or like the more social Usenet groups. Because these online neighborhoods aren't bounded by geography or (probably) by culture, the people I see on that local feed are more heterogeneous, more diverse, more "like me in some ways, very unlike me in others". I hope easy interaction with that community will help build connections and resist polarization. I'm game to try the experiment, at least. On Twitter, only the loudest (and probably most extreme) "people not like me" would make it to the feed, the feed that was overrun with topics I don't care about from people I don't know so I never looked at it anyway -- but if I did look, I wouldn't find the "regular people", only the people with big fan followings.

(Aside: a week or so ago I came across a server for my city. So physical neighborhoods might be represented too.)

boosts and retweets

On Twitter, you can "retweet" something, which means "show this to my followers". On Twitter you can also retweet and add your own message. If you've seen tweets that embed other tweets, that's what's happening. So you might see Musk's latest policy flip-flop and retweet to your followers, adding a snarky comment of your own, and your retweet will be its own tweet, not part of the thread of replies to the original tweet.

On Mastodon you can "boost" something, which is like that first kind of retweet. I saw something that I wanted to add my own message to (further support in my case, not snark), and I couldn't figure out how to do it -- the "boost" button doesn't have an option for adding a comment. On investigation, I learned that this was an intentional design choice.

My initial reaction was "huh, weird". Then I thought "ok, maybe if you can't easily snipe at people you'll be less likely to snipe, so maybe that improves the climate?" and that sounded like a good idea. But since then I've seen more cases where it would have been helpful to either add something (as the booster) or comment to the booster not the original poster (as a reader). So I'm not sure how I feel about this now.

You can always do this manually, of course -- you can link to anything, after all. You won't get the fancy rendering, that thing that looks like an embedded tweet on Twitter. But if you decide to just boost something, instead of creating your own post, then people who want to respond to you can't. If you didn't know that the thing you boosted has been debunked, or is missing context, or something like that, there's no easy way for a reader to tell you.

mindset

Mastodon, and the fediverse in general, exudes a scrappy "do more for yourself" mindset. There's no single entity making decisions for you -- what you see, how it's moderated, how the software works, etc. Servers are run by ordinary people who make those decisions for their servers only. Norms can vary. I expect that the most successful servers operate by some form of consensus, either up front or emergent (as people opt in or out). Servers can block other servers, so there's some level of shared baseline to operate in polite society. You can set up your own neo-Nazi server if you want to, but you might find that a lot of people don't want to talk with you.

I've seen the fediverse compared to anarchy (you and those with shared goals can do whatever you want), and I've also seen it compared to fiefdoms (somebody controls your server and it's probably not you). I don't think it's a fiefdom in the way that Twitter is: first, you can move to a different server, and second, being able to set up your own server for you and your friends mitigates the problem if you don't like any of the options. A serf can't just say "well I'll take that land over there and do my own thing", because all land is ultimately owned by someone. On the Internet, you can buy a domain and set up shop -- the space isn't wholly owned. But whether you're a serf or an Internet denizen unhappy with the existing servers, you have to do work -- setting up your own place isn't free. And that effort can be a substantial barrier, too. So it's not a complete mitigation for networks with problematic owners, but I think we'll be better off on the fediverse than on Twitter or Facebook, which feels like an even bigger fiefdom to me. Time will tell.

Some Twitter-related links

If you are using your Twitter account to sign in to other sites (the "sign in with Google/Facebook/Twitter/etc" system), you should stop doing that now. Also, if you are using SMS for two-factor authentication with Twitter, that same article has advice for you. Some parts of their 2FA setup have stopped working, and apparently SMS validation is now unreliable.

There is an outstanding thread -- on Twitter, natch -- about the kinds of things that SREs (site reliability engineers, the people who keep large systems running) worry about. Parts of large systems fail all the time; in a healthy setup you'll barely notice. Twitter is, um, not healthy.

Debirdify is a tool for finding your Twitter friends on the Fediverse (Mastodon), for those who've shared that info. It looks for links in pinned tweets and Twitter profile ("about") blurbs.
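I haven't read Debirdify's source, but the kind of scan it describes -- spotting fediverse addresses in free-form profile or tweet text -- can be sketched with a regular expression, since fediverse handles follow the @user@server pattern. The function name here is my own, not Debirdify's:

```python
import re

# Hypothetical sketch: find @user@server fediverse handles in free-form
# text, e.g. a Twitter bio or pinned tweet. The server part must look
# like a domain (at least one dot, a plausible TLD).
HANDLE_RE = re.compile(r'@([A-Za-z0-9_]+)@([A-Za-z0-9.-]+\.[A-Za-z]{2,})')

def find_fediverse_handles(text):
    """Return all @user@server handles found in the text."""
    return ['@{}@{}'.format(user, server)
            for user, server in HANDLE_RE.findall(text)]
```

Real tools also have to handle plain profile links (https://server/@user) and people who write their handle in creative ways to dodge scrapers, but the core idea is just pattern matching.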

I'm at https://indieweb.social/@cellio, for anyone else who's there. I'm relatively new there, like lots of other folks, but so far the vibe takes me back to the earlier days of the Internet -- people are friendly, help each other, presume good intent, and have actual conversations. It is not Twitter; some intentional design choices appear to encourage constructive use and hinder toxicity. I hope to write more about Mastodon later.

The trust thermocline

John Bull wrote a post (in tweet-sized pieces, naturally) that rings true for me, and he gave a name for the phenomenon we're seeing with Twitter, saw with LiveJournal, and partially saw with Stack Overflow. The thread starts here on Twitter and here on Mastodon (the Fediverse). Selected quotes:

One of the things I occasionally get paid to do by companies/execs is to tell them why everything seemed to SUDDENLY go wrong, and subs/readers dropped like a stone. So, with everything going on at Twitter rn, time for a thread about the Trust Thermocline.

So: what's a thermocline?

Well large bodies of water are made of layers of differing temperatures. Like a layer cake. The top bit is where all the waves happen and has a gradually decreasing temperature. Then SUDDENLY there's a point where it gets super-cold.

The Trust Thermocline is something that, over (many) years of digital, I have seen both digital and regular content publishers hit time and time again. Despite warnings (at least when I've worked there). And it has a similar effect. You have lots of users then suddenly... nope. [...]

But with a lot of CONTENT products (inc social media) that's not actually how it works. Because it doesn't account for sunk-cost lock-in.

Users and readers will stick to what they know, and use, well beyond the point where they START to lose trust in it. And you won't see that.

But they'll only MOVE when they hit the Trust Thermocline. The point where their lack of trust in the product to meet their needs, and the emotional investment they'd made in it, have finally been outweighed by the physical and emotional effort required to abandon it. [...]

Virtually the only way to avoid catastrophic drop-off from breaching the Trust Thermocline is NOT TO BREACH IT.

I can count on one hand the times I've witnessed a company come back from it. And even they never reached previous heights.

Social media and moderation

I've participated in a lot of online communities, and a lot of types of online communities, over the decades -- mailing lists, Usenet, blogging platforms like Dreamwidth, web-based forums, Q&A communities... and social media. With the exception of blogging platforms, where readers opt in to specific people/blogs/journals and the platform doesn't push other stuff at us, online communities tend to end up with some level of moderation.

We had (some) content moderation even in the early days of mailing lists and Usenet. Mostly[1] this was gatekeeping -- reviewing content before it was released, because sometimes people post ill-advised things like personal attacks. Mailing lists and Usenet were inherently slow to begin with -- turnaround times were measured in hours if you were lucky and more typically days -- so adding a step where a human reviewed a post before letting it go out into the wild didn't cost much. Communities were small and moderation was mostly to stop the rare egregiously bad stuff, not to curate everything. So far as I recall, nobody then was vetting content that way, like declaring posts to be misinformation.

On the modern Internet with its speed and scale, moderation is usually after the fact. A human moderator sees (or is alerted to) content that doesn't fit the site's rules and handles it. Walking the moderation line can be tough. On Codidact[2] and (previously) Stack Exchange, I and my fellow moderators have sometimes had deep discussions of borderline cases. Is that post offensive to a reasonable person, or is it civilly expressing an unpopular idea? Is that link to the poster's book or blog spam, or is the problem that the affiliation isn't disclosed? How do we handle a case where a very small number of people say something is offensive and most people say it's not -- does it fail the reasonable-person principle, or is it a new trend that a lot of people don't yet know about? We human moderators would examine these issues, sometimes seek outside help, and take the smallest action that corrects an actual problem (often an edit, maybe a word with the user, sometimes a timed suspension).

Three things are really, really important here: (1) human decision-makers, (2) who can explain how they applied the public guidelines, with (3) a way to review and reverse decisions.

Automation isn't always bad. Most of us use automated spam filtering. Some sites have automation that flags content for moderator review. As a user I sometimes want to have automation available to me -- to inform me, but not to make irreversible decisions for me. I want my email system to route spam to a spam folder -- but I don't want it to delete it outright, like Gmail sometimes does. I want my browser to alert me that the certificate for the site I'm trying to visit isn't valid -- but I don't want it to bar me from proceeding anyway. I want a product listing for an electronic product to disclose that it is not UL-certified -- but I don't want a bot to block the sale or quietly remove that product from the seller's catalogue.

These are some of the ways that Twitter has been failing for a while. (Twitter isn't alone, of course, but it's the one everyone's paying attention to right now.) Twitter is pretty bad, Musk's Twitter is likely to be differently bad, and making it good is a hard problem.[3]

Twitter uses bots to moderate content, and those bots sometimes get it badly wrong. If the bots merely flagged content for human review, that would be ok -- but to do that at scale, Twitter would need to make fundamental changes to its model. No, the bots block the tweets and auto-suspend the users. To get unsuspended, a user has to delete the tweets, admit to wrongdoing, and promise not to do it "again" -- even if there's nothing wrong with the tweet. The people I've seen be hit by this were not able to find an appeal path. Combine this with opaque and arbitrary rules, and it's a nightmare.

Musk might shut down some of the sketchier moderation bots (it's always hard to know what's going on in Musk's head), but he's already promised his advertisers that Twitter won't be a free-for-all, so that means he's keeping some bot-based moderation, probably using different rules than last week's. He's also planning to fire most of the employees, meaning there'll be even fewer people to review issues and adjust the algorithms. And it's still a "shoot first, ask questions later" model. It's not assistive automation.

A bot that annotates content with "contrary to CDC guidelines" or "not UL-certified" or "Google sentiment score: mildly negative" or "Consumer Reports rating: 74" or "failed NPR fact-check" or "Fox News says fake"? Sure, go for it -- we've had metadata like the Good Housekeeping seal of approval and FDA nutrition information and kashrut certifications for a long time. Want to hide violent videos or porn behind a "view sensitive content" control? Also ok, at least if it's mostly not wrong. As a practical matter a platform should limit the number of annotations or let users choose which kinds of assistance they want, but in principle, fine.
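To make the contrast concrete, here's a toy sketch -- my own label names and functions, not any platform's actual API -- of what "bots that inform" looks like: automation attaches labels, and each user decides what those labels mean for their own feed.

```python
# Assistive moderation sketch: checkers attach metadata labels; nothing
# is deleted and nobody is suspended. The labels and checker functions
# are made up for illustration.

def annotate(post, checkers):
    """Run each checker over the post's text and attach matching labels."""
    post = dict(post)  # don't mutate the caller's copy
    post['labels'] = [name for name, check in checkers.items()
                      if check(post['text'])]
    return post

def filter_feed(posts, hidden_labels):
    """User-side choice: hide (not delete) posts carrying labels that
    this particular user has opted to filter out."""
    return [p for p in posts
            if not (set(p['labels']) & set(hidden_labels))]
```

The judgment call -- what to hide -- moves from the platform to the reader, which is exactly the shift argued for here.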

But that's not what Twitter does. Its bots don't inform; they judge and punish. Twitter has secret rules about what speech is allowed and what speech is not, uses bots to root out what they don't like today, takes action against the authors, and causes damage when they get it wrong. There are no humans in the loop to check their work, and there's no transparency.

It's not just Twitter, of course. Other platforms, either overwhelmed by scale or just trying to save some money, use bots to prune out content. Even with the best of intentions that can go wrong; when intentions are less pure, it's even worse.

Actual communities, and smaller platforms, can take advantage of human moderators if they want them. For large firehose-style platforms like Twitter, it seems to me, the solution to the moderation problem lies in metadata and user preferences, not heavy-handed centralized automated deletions and suspensions. Give users information and the tools to filter -- and the responsibility to do so, or not. Take the decision away, and we're stuck with whatever the owner likes.

The alternative would be to use the Dreamwidth model: Dreamwidth performs no moderation that I'm aware of, I'm free to read (or stop reading) any author I want, and the platform won't push other content in front of me. This works for Dreamwidth, which doesn't need to push ads in front of millions of people to make money for its non-existent stockholders, but such slow growth is anathema to the big for-profit social networks.


  1. It was possible to delete posts on Usenet, but it was spotty and delayed. ↩︎

  2. The opinions in this post are mine and I'm not speaking for Codidact, where I am the community lead. ↩︎

  3. I'd say it's more socially hard than technically hard. ↩︎

"What's your contribution on the Internet?"

Somebody on Dreamwidth asked (as part of a research project):

How do you make yourself useful to other people on the internet? What's your contribution to the internet?

That's not how I generally think about my activity online, but I said a few things in the moment:


Since the early days of Usenet I've been using the net to learn (self-enrichment), teach or share my knowledge and experience (I hope this helps others), and get to know people who are not like me and who I would never have met otherwise. I like to think that I have similarly contributed to others meeting diverse people from different cultures and contexts. The reasons were originally self-focused, but that's changed over time and with experience.

More actively, after close to a decade contributing to another Q&A network (asking, answering, curating, helping newcomers, moderating), I'm now working on an open-source, transparent, community-driven platform for knowledge-sharing. We're small and trying to grow and only time will tell if we truly helped others, but it's where I invest my community-building and platform-building efforts now.

I guess I served as a canary when that other place turned evil. No one ever signs up to be a canary.

I have used email, and restricted email lists, to both give and get counsel on personal matters. I think I've helped a bunch of people who were considering conversion to Judaism. I consider it a success that some of them did and some of them decided not to; it's not about recruiting but about helping people evaluate the fit.

One of those people was a seeker in Iran, where it was very dangerous to be out about that sort of thing. I think we (one other person and I, in a private chat room with this person) might have saved some lives that day, but I'll never know.

I had a remote intern a few years ago (pre-pandemic); I met her once, about halfway through the internship when I traveled to her location, but otherwise it was all done remotely. I've had in-person interns and junior hires before and I enjoy mentoring them; this was my first time doing it remotely. (I've since done it a couple more times.) Kind of relatedly, I received email last night from an SCA contact who's looking for a mentor for a student for a Girl Scout project. I don't know where this student lives.

I was contacted by a schoolteacher in Myanmar several years ago; her students were building a yurt based on an article I had allowed someone to publish online (it was originally in a paper SCA newsletter), and she had a question. Myanmar. My jaw dropped. Another time, I got email from somebody in Scotland asking me if it would stand up to force-12 winds (which I had to look up). This article was kind of a one-off; it's just a thing I wrote up, after learning from someone else (credited of course) and building one, because I needed something to live in at Pennsic. It wasn't a focus area for me; I've never been part of online yurt communities and stuff; I never promoted it anywhere. I don't even have direct access to edit it. A chance "sure, go ahead and put it on your site if you want" was pretty much my entire contribution to it being online. It makes me wonder how much the stuff that I've intentionally published and maintained has helped people that I'll never know about.

I'll never know most of the impact I have on others. I do the best I can to help it be positive impact.

Decisions as barriers to entry

I've been hearing a lot about Mastodon for a while and thought I'd look around, see if I know anyone there, see what it's like, see if it seems to work better than Twitter... and the first step is to choose a host community/server, from dozens of options. The options are grouped into categories like "Tech" and "Arts" and "Activism" and there's also "General" and "Regional". None of the regional offerings are my region, so I browsed General and Tech.

All of the communities have names and short blurbs. Some sound serious and some sound less so. Mastodon is a Twitter-like social network, so -- unlike topic-focused Q&A sites, subreddits, forums, etc -- one should expect people to bring their "whole selves". That is, a person on a tech server is likely to also post about food and hobbies and world events and cats. From the outside, I can't tell whether the mindset of the Mastodon-verse is "well yeah, duh, the server you choose is really just a loose starting point because you need to start somewhere" or whether there's more of a presumption that you'll stay on-topic (more like Reddit than Twitter, for example).

A selling point of Mastodon is that it's distributed, not centrally managed; anybody is free to set up an instance and set the rules for that instance. Someone considering options might reasonably want to know what those rules are -- how will this instance be moderated? But I see no links to such things. Many instances also require you to request access, which further deters the casually curious.

I guess the model is that you go where your friends are -- you know someone who knows someone who knows someone with a server and you join and you make connections from there. That's a valid and oft-used model, though I wasn't expecting it here.

Seder-inspired questions

An online Jewish community I'm fond of has some unanswered questions that came out of Pesach this year. Can you answer any of them, dear readers?

  • Why do we designate specific matzot for seder rituals? We break the middle matzah; we eat first from the top one and use the bottom one specifically for the Hillel sandwich. Why? What's the symbolism? (I'm aware of the interpretation that the three matzot symbolize the three "groups" of Jews -- kohein, levi, yisrael -- but that doesn't explain these positional associations.)

  • If your house is always kosher for Pesach, do you have to search for chameitz? That is, is the command to search for chameitz, period, or is it to search for any chameitz that might be in your house, and if you know there isn't any you skip it?

  • Why does making matzah require specific intent but building a sukkah doesn't? When making matzah (today I learned), it's not enough to follow the rules for production; you have to have the specific intent of making matzah for Pesach, or apparently it doesn't count. This "intent" rule applies to some other commandments too. But it doesn't apply to building a sukkah; you can even use a "found sukkah", something that happens to fulfill all the requirements that you didn't build yourself, to fulfill the obligation. Why the difference?

I tried searching for answers to these questions but wasn't successful. I have readers who know way more than I do (and who can read Hebrew sources better than I can). Can you help?